Unit testing Python

Part 2



Dmitry Tantsur (Principal Software Engineer, Red Hat)

Slides: owlet.today/talks/berlin-python-unittest-2

Part 1: owlet.today/talks/berlin-python-unittest

Code: github.com/dtantsur/berlin-python-unittest

Agenda

  • Mocking:
    • Mock objects
    • Patching objects
    • Spec and autospec
  • Measuring coverage
  • Additional runners:
    • PyTest
    • stestr
    • tox
  • Handy libraries:
    • requests-mock
    • fixtures

Mocking

Mocking

Problem statement

How to test code that relies on (a lot of) other code?

How to test code that relies on something not available in a regular testing environment?

Mocking

Example

Example architecture

Mocking

Example

Mocks

Mocking

Mock objects

Magic objects that allow any operations on them and record those operations for later verification.

from unittest import mock

m = mock.Mock()
r = m.abc(42, cat='meow')
assert isinstance(m.abc, mock.Mock)
assert isinstance(r, mock.Mock)

m.abc.assert_called_once_with(42, cat='meow')
assert r is m.abc.return_value

Mocking

Mock objects

Mocks can simulate functions that return values or raise exceptions.

from unittest import mock

m = mock.Mock(return_value=42)
assert m() == 42

m = mock.Mock(side_effect=[1, 2])
assert m() == 1
assert m() == 2

m = mock.Mock(side_effect=RuntimeError("boom"))
m()  # raises

Mocking

Case study: quadratic equation

my_utils/roots.py
import sys

def main():
    try:
        a = int(sys.argv[1])
        b = int(sys.argv[2])
        c = int(sys.argv[3])
    except IndexError:
        sys.exit('3 arguments required')
    except ValueError:
        sys.exit('all arguments must be integers')
    print(roots(a, b, c))  # roots() solves the quadratic equation; not shown here

if __name__ == '__main__':
    main()

Mocking

Case study: quadratic equation

Problems:

  • provide values for sys.argv
  • test how sys.exit is called
  • test how print is called

Mocking

Case study: quadratic equation

class MainTest(unittest.TestCase):

    @mock.patch('sys.argv', [None, '1', '-3', '2'])
    @mock.patch('builtins.print')
    def test_correct(self, mock_print):
        roots.main()
        mock_print.assert_called_once_with((1.0, 2.0))

Forms of mock.patch

Pure replacement

import sys

class MainTest(unittest.TestCase):

    @mock.patch('sys.argv', [None, '1', '-3', '2'])
    def test_patch(self):
        # sys.argv is equal to the replacement value here

    @mock.patch.object(sys, 'argv', [None, '1', '-3', '2'])
    def test_patch_object(self):
        # ...

Forms of mock.patch

Creating a Mock object

import builtins

class MainTest(unittest.TestCase):

    @mock.patch('builtins.print')
    def test_patch(self, mock_print):
        # calling print() here calls mock_print instead

    @mock.patch.object(builtins, 'print')
    def test_patch_object(self, mock_print):
        # ...

Forms of mock.patch

The same at the class level

import builtins

@mock.patch('builtins.print')
class MainTest(unittest.TestCase):

    def test_patch(self, mock_print):
        # calling print() here calls mock_print instead

    def test_2(self, mock_print):
        # ...

This is equivalent to adding the decorator to each test method.

Forms of mock.patch

Inline as a context manager

import time

class MainTest(unittest.TestCase):

    def test_patch(self):
        with mock.patch('time.time') as mock_time:
            # calling time.time() here calls mock_time
        # but not here

    def test_patch_object(self):
        with mock.patch.object(time, 'time') as mock_time:
            # ...

Spec and autospec

Spec and autospec

Problem statement

Mock objects can simulate anything.

How to make them simulate a specific object or function?

Spot a problem:

mock_print.asset_called_once_with("Hello")
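
A minimal sketch of the issue (illustrative, not from the talk's repo): on a plain Mock the misspelled method is just another attribute access, so the "assertion" silently passes; with a spec the typo is caught.

from unittest import mock

m = mock.Mock()
m.asset_called_once_with("Hello")  # typo: passes silently, verifies nothing

m = mock.Mock(spec=print)
m.asset_called_once_with("Hello")  # raises AttributeError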

Spec and autospec

Mock specs

A Mock object accepts a spec - the object or class to simulate.

mock_spec_demo.py
class A:
    """The class we are simulating."""
    def x(self, n):
        return n ** 2

m = mock.Mock(spec=A)
print(m.x)
m.y = 42
print(m.y)
print(m.z)

What will this program output?

Spec and autospec

Mock specs

$ python3 mock_spec_demo.py
<Mock name='mock.x' id='140537902817232'>
42
Traceback (most recent call last):
  File "mock_spec_demo.py", line 13, in <module>
    print(m.z)
  File "/usr/lib64/python3.6/unittest/mock.py", line 582, in __getattr__
    raise AttributeError("Mock object has no attribute %r" % name)
AttributeError: Mock object has no attribute 'z'

Accessing z causes an error.

Spec and autospec

Mock specs

spec_set additionally restricts which attributes can be set.

mock_spec_set_demo.py
class A:
    """The class we are simulating."""
    def x(self, n):
        return n ** 2

m = mock.Mock(spec_set=A)
print(m.x)
m.y = 42
print(m.y)
print(m.z)

Spec and autospec

Mock specs

$ python3 mock_spec_set_demo.py
<Mock name='mock.x' id='140537902817232'>
Traceback (most recent call last):
  File "mock_spec_set_demo.py", line 11, in <module>
    m.y = 42
  File "/usr/lib64/python3.6/unittest/mock.py", line 688, in __setattr__
    raise AttributeError("Mock object has no attribute '%s'" % name)
AttributeError: Mock object has no attribute 'y'

Setting y causes an error.

Spec and autospec

Mock specs

spec and spec_set can accept a list of attributes.

m = mock.Mock(spec=['x', 'y'])
m2 = mock.Mock(spec_set=['x', 'y'])

Or even wrap a real object:

m = mock.Mock(wraps=A())
assert m.x(2) == 4
m.x.assert_called_once_with(2)

Spec and autospec

patch autospec

The autospec argument of the patch function can even check function signatures:

mock_autospec_demo.py
class A:
    def x(self, y):
        return y ** 2

@mock.patch.object(A, 'x', autospec=True)
def test(mock_x):
    a = A()
    print(a.x(42))
    print(a.x(z=42))

test()

Spec and autospec

patch autospec

$ python3 mock_autospec_demo.py
<MagicMock name='x()' id='140700038039648'>
Traceback (most recent call last):
  File "mock_autospec_demo.py", line 16, in <module>
    test()
  File "/usr/lib64/python3.6/unittest/mock.py", line 1179, in patched
    return func(*args, **keywargs)
  File "mock_autospec_demo.py", line 13, in test
    print(a.x(z=42))
  File "<string>", line 2, in x
  File "/usr/lib64/python3.6/unittest/mock.py", line 171, in checksig
    sig.bind(*args, **kwargs)
  File "/usr/lib64/python3.6/inspect.py", line 2969, in bind
    return args[0]._bind(args[1:], kwargs)
  File "/usr/lib64/python3.6/inspect.py", line 2884, in _bind
    raise TypeError(msg) from None
TypeError: missing a required argument: 'y'

Spec and autospec

Case study: quadratic equation

@mock.patch('builtins.print', autospec=True)
class MainTest(unittest.TestCase):

    @mock.patch('sys.argv', [None, '1', '-3', '2'])
    def test_correct(self, mock_print):
        roots.main()
        mock_print.assert_called_once_with((1.0, 2.0))

    @mock.patch('sys.exit', autospec=True)
    @mock.patch('sys.argv', [None, '1', '-3'])
    def test_missing_argument(self, mock_exit, mock_print):
        # Mocks are passed bottom-up: mock_exit from the method-level patch
        # comes first (the sys.argv patch injects nothing), mock_print from
        # the class-level patch comes last.
        mock_exit.side_effect = RuntimeError
        self.assertRaises(RuntimeError, roots.main)
        mock_exit.assert_called_once_with(
            '3 arguments required')
        mock_print.assert_not_called()

Coverage

Coverage

Problem statement

I want to know how much of my code is covered by unit tests.

The answer is the coverage utility: coverage.readthedocs.io.

Coverage

Collect coverage

$ coverage3 run -m unittest discover my_utils
.....
----------------------------------------------------------------------
Ran 5 tests in 0.012s

OK

Coverage

Report coverage

$ coverage3 report
Name                           Stmts   Miss  Cover
--------------------------------------------------
my_utils/__init__.py               0      0   100%
my_utils/roots.py                 21      3    86%
my_utils/tests/__init__.py         0      0   100%
my_utils/tests/test_roots.py      21      0   100%
--------------------------------------------------
TOTAL                             42      3    93%

Coverage

Rich report

Also collect branch information:

$ coverage3 run --branch -m unittest discover my_utils
.....
----------------------------------------------------------------------
Ran 5 tests in 0.012s

OK

Coverage

Rich report

Show what is not covered:

$ coverage3 report -m
Name                           Stmts   Miss Branch BrPart  Cover   Missing
--------------------------------------------------------------------------
my_utils/__init__.py               0      0      0      0   100%
my_utils/roots.py                 21      3      8      2    83%   24-25, 30, 22->24, 29->30
my_utils/tests/__init__.py         0      0      0      0   100%
my_utils/tests/test_roots.py      21      0      2      0   100%
--------------------------------------------------------------------------
TOTAL                             42      3     10      2    90%

Additional runners

Additional runners

Running with python3 -m unittest is quite convenient.

But there are more feature-rich runners for Python unit tests.

Additional runners

PyTest

pytest.readthedocs.io

  • A completely different approach to writing tests (see the sketch below)
  • Hacks into the Python assert statement
  • Automagical fixtures based on test method arguments
  • Rich plugin architecture
  • (Mostly?) compatible with regular unittest-style tests
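
For illustration, a minimal pytest-style test of the roots() function from Part 1 (a sketch; the fixture and the exact import path are assumptions, not part of the talk's repo):

import pytest

from my_utils.roots import roots

@pytest.fixture
def coefficients():
    # A pytest fixture: injected into any test that names it as an argument.
    return 1, -3, 2

def test_roots_with_fixture(coefficients):
    a, b, c = coefficients
    # A plain assert: pytest rewrites it to report both sides on failure.
    assert roots(a, b, c) == (1.0, 2.0)

pytest collects such test functions automatically, so pytest-3 runs them alongside the unittest-style tests.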

Additional runners

PyTest

$ pytest-3
============================= test session starts =============================
platform linux -- Python 3.6.6, pytest-3.4.2, py-1.5.4, pluggy-0.6.0
rootdir: /home/dtantsur/Projects/berlin-python-unittest, inifile:
collected 5 items

my_utils/tests/test_roots.py .....                                      [100%]

========================== 5 passed in 0.02 seconds ===========================

Additional runners

stestr

stestr.readthedocs.io

  • Select specific tests to run via regular expressions
  • Machine-parseable output in subunit format
  • Emphasis on parallel execution and streaming results
  • Execution time reporting
  • Testing framework agnostic

Additional runners

stestr

$ stestr-3 --test-path my_utils/tests/ run
{2} my_utils.tests.test_roots.RootsTest.test_correct [0.000341s] ... ok
{3} my_utils.tests.test_roots.RootsTest.test_negative_a [0.000647s] ... ok
{0} my_utils.tests.test_roots.MainTest.test_correct [0.011246s] ... ok
{0} my_utils.tests.test_roots.MainTest.test_missing_argument [0.004206s] ... ok
{1} my_utils.tests.test_roots.RootsTest.test_negative_discriminant [0.000660s] ... ok

======
Totals
======
Ran: 5 tests in 0.4467 sec.
 - Passed: 5
 - Skipped: 0
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 0.0171 sec.

==============
Worker Balance
==============
 - Worker 0 (2 tests) => 0:00:00.015929
 - Worker 1 (1 tests) => 0:00:00.000660
 - Worker 2 (1 tests) => 0:00:00.000341
 - Worker 3 (1 tests) => 0:00:00.000647

Additional runners

stestr - real project

======
Totals
======
Ran: 5125 tests in 139.0000 sec.
 - Passed: 5113
 - Skipped: 12
 - Expected Fail: 0
 - Unexpected Success: 0
 - Failed: 0
Sum of execute time for each test: 519.9569 sec.

==============
Worker Balance
==============
 - Worker 0 (1280 tests) => 0:02:12.778625
 - Worker 1 (1280 tests) => 0:02:11.374847
 - Worker 2 (1281 tests) => 0:02:10.885204
 - Worker 3 (1284 tests) => 0:02:07.915481

Additional runners

Tox - runner for runners

tox simplifies the routine tasks of managing virtual environments and running commands in them.

Run unit tests on Python 2.7:

$ tox -epy27

Run unit tests on the default Python 3:

$ tox -epy3

Build a generic environment and run some commands there:

$ tox -evenv -- python -m some.package

Alternative: pyenv.

Tools: tox

Case study: quadratic equation

[tox]
envlist = pep8,py3

[testenv]
usedevelop = True
deps =
    # e.g. -r requirements.txt
commands =
    python -m unittest discover my_utils
setenv =
    PYTHONDONTWRITEBYTECODE=1

[testenv:pep8]
basepython = python3
deps =
    flake8
commands =
    flake8 my_utils

[testenv:venv]
commands = {posargs}

Tools: tox

Case study: quadratic equation

$ tox
pep8 develop-inst-noop: /home/dtantsur/Projects/berlin-python-unittest
pep8 installed: flake8==3.5.0,mccabe==0.6.1,-e git+git@github.com:dtantsur/berlin-python-unittest.git@1b9ac65cee9ba031eb5d3bea979ab2be60ffb844#egg=my_utils,pycodestyle==2.3.1,pyflakes==1.6.0
pep8 runtests: PYTHONHASHSEED='2318298329'
pep8 runtests: commands[0] | flake8 my_utils
py3 create: /home/dtantsur/Projects/berlin-python-unittest/.tox/py3
py3 develop-inst: /home/dtantsur/Projects/berlin-python-unittest
py3 installed: -e git+git@github.com:dtantsur/berlin-python-unittest.git@1b9ac65cee9ba031eb5d3bea979ab2be60ffb844#egg=my_utils
py3 runtests: PYTHONHASHSEED='2318298329'
py3 runtests: commands[0] | python -m unittest discover my_utils
.....
----------------------------------------------------------------------
Ran 5 tests in 0.007s

OK
_____________________________________________________________________________________________________ summary _____________________________________________________________________________________________________
  pep8: commands succeeded
  py3: commands succeeded
  congratulations :)

Handy libraries

requests-mock

Problem statement

How to test code that performs complex network interactions?

requests-mock: requests-mock.readthedocs.io

It lets you register pre-defined responses for specific requests.

requests-mock

Example

>>> import requests
>>> import requests_mock
>>> @requests_mock.Mocker()
... def test_function(m):
...     m.get('http://test.com', text='resp')
...     return requests.get('http://test.com').text
...
>>> test_function()
'resp'

fixtures

The fixtures library provides a format for defining and using test fixtures.

Fixtures are self-contained helpers that establish some state when a test starts and restore the initial state after it finishes.

fixtures

Writing fixtures

import fixtures
import os

class SecretFileFixture(fixtures.Fixture):
    def __init__(self, fname, content='Hello'):
        self.fname = fname
        self.content = content

    def _setUp(self):
        with open(self.fname, 'w') as f:
            f.write(self.content)
        self.addCleanup(lambda: os.unlink(self.fname))

fixtures

Using fixtures

class MyTest(fixtures.TestWithFixtures):
    def setUp(self):
        self.useFixture(SecretFileFixture('/tmp/test'))

    def test_with_file(self):
        assert os.path.exists('/tmp/test')
        # /tmp/test will be present here and deleted after the test ends

The testtools library provides support for useFixture out of the box.

fixtures

Existing fixtures

  • MockPatch and MockPatchObject
  • EnvironmentVariable
  • LogHandler and FakeLogger
  • FakePopen
  • TempDir and TempHomeDir
  • PythonPackage and PythonPathEntry
  • ... and many more (see the usage sketch below).
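
A short usage sketch of a few of the built-in fixtures (the test class and values are illustrative only):

import fixtures
import os

class BuiltinFixturesTest(fixtures.TestWithFixtures):

    def test_builtin_fixtures(self):
        # Override an environment variable for the duration of the test.
        self.useFixture(fixtures.EnvironmentVariable('LANG', 'C'))
        self.assertEqual('C', os.environ['LANG'])

        # A temporary directory, removed automatically on cleanup.
        temp_dir = self.useFixture(fixtures.TempDir())
        self.assertTrue(os.path.isdir(temp_dir.path))

        # MockPatch wraps mock.patch; the created mock is exposed as .mock.
        patched = self.useFixture(fixtures.MockPatch('builtins.print'))
        print('hello')
        patched.mock.assert_called_once_with('hello')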

Questions?