Adding responses in a pytest fixture leads to unexpected test failures.

See original GitHub issue

How do you use Sentry: We don’t use Sentry, but we do use responses.

Problem Statement

Using a pytest fixture to build our test infrastructure (and reduce code duplication) has led to weird errors in our test suite. We believe there is an issue with the response registration process when it is done inside a pytest fixture.

Minimal Code Example

import pytest
import requests
import responses


@pytest.fixture()
def register_responses():
    with responses.RequestsMock(assert_all_requests_are_fired=False) as response:
        response.add(
            method=responses.GET,
            url="http://example.com",
            content_type='application/json',
            json={"key": "value"},
            match_querystring=True,
        )
        response.add(
            method=responses.GET,
            url="http://different_example.com?hello=world",
            content_type='application/json',
            json={"key2": "value2"},
            match_querystring=True,
        )
    yield response

def test_request_params_multiple_urls_pytest_fixture(register_responses):
    resp = requests.get('http://example.com')
    assert resp.json() == {"key": "value"}
    resp = requests.get('http://different_example.com', params={"hello": "world"})
    assert resp.json() == {"key2": "value2"}
    assert responses.calls[1].request.params == {"hello": "world"}

Helper Function Example That Works

def helper_function():
    responses.add(
        method=responses.GET,
        url="http://example.com",
        content_type='application/json',
        json={"key": "value"},
        match_querystring=False,
    )
    responses.add(
        method=responses.GET,
        url="http://different_example.com?hello=world",
        content_type='application/json',
        json={"key2": "value2"},
        match_querystring=False,
    )

@responses.activate
def test_request_params_multiple_urls_helper_function():
    helper_function()

    resp = requests.get('http://example.com')
    assert resp.json() == {"key": "value"}
    resp = requests.get('http://different_example.com', params={"hello": "world"})
    assert resp.json() == {"key2": "value2"}
    assert responses.calls[1].request.params == {"hello": "world"}

Expected Result

  • The requests.get calls don’t lead to “mock not used” errors.
  • With assert_all_requests_are_fired turned off, there’s no JSON decoding stack trace from resp.json().

Actual Result with assert_all_requests_are_fired=True

    def stop(self, allow_assert=True):
        self._patcher.stop()
        if not self.assert_all_requests_are_fired:
            return

        if not allow_assert:
            return

        not_called = [m for m in self._matches if m.call_count == 0]
        if not_called:
>           raise AssertionError(
                "Not all requests have been executed {0!r}".format(
                    [(match.method, match.url) for match in not_called]
                )
            )
E           AssertionError: Not all requests have been executed [('GET', 'http://example.com/'), ('GET', 'http://different_example.com/?hello=world')]

Actual Result with assert_all_requests_are_fired=False

register_responses = <responses.RequestsMock object at 0x7f036d8ff280>

    def test_request_params_multiple_urls_pytest_fixture(register_responses):
        resp = requests.get('http://example.com')
        breakpoint()
>       assert resp.json() == {"key": "value"}

responses_test.py:121:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.8/site-packages/requests/models.py:897: in json
    return complexjson.loads(self.text, **kwargs)
/usr/lib64/python3.8/json/__init__.py:357: in loads
    return _default_decoder.decode(s)
/usr/lib64/python3.8/json/decoder.py:337: in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <json.decoder.JSONDecoder object at 0x7f036fcd1340>
s = '<!doctype html>\n<html>\n<head>\n    <title>Example Domain</title>\n\n    <meta charset="utf-8" />\n    <meta http-eq...on.</p>\n    <p><a href="https://www.iana.org/domains/example">More information...</a></p>\n</div>\n</body>\n</html>\n', idx = 0

    def raw_decode(self, s, idx=0):
        """Decode a JSON document from ``s`` (a ``str`` beginning with
        a JSON document) and return a 2-tuple of the Python
        representation and the index in ``s`` where the document ended.

        This can be used to decode a JSON document from a string that may
        have extraneous data at the end.

        """
        try:
            obj, end = self.scan_once(s, idx)
        except StopIteration as err:
>           raise JSONDecodeError("Expecting value", s, err.value) from None
E           json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)

/usr/lib64/python3.8/json/decoder.py:355: JSONDecodeError
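
The HTML body shown in s above is the real example.com page, which suggests the requests were going out to the network rather than being intercepted. A minimal sketch (not from the issue) of the suspected cause, echoing the diagnosis in the comments below:

import responses

with responses.RequestsMock(assert_all_requests_are_fired=False) as mock:
    mock.add(responses.GET, "http://example.com", json={"key": "value"})
# The with block has already exited here, so RequestsMock.__exit__ has run
# and requests is unpatched. A fixture that yields at this indentation
# hands the test an already-deactivated mock, and real HTTP calls go out.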

Issue Analytics

  • State: closed
  • Created: 3 years ago
  • Comments: 7 (2 by maintainers)

Top GitHub Comments

5 reactions
ajhynes7 commented on Feb 24, 2021

Hi @rawrgulmuffins, I tried out your code.

The sneaky problem was that you were calling add on responses (the library), not response (the context object).

You were also checking responses.calls, when it should be register_responses.calls.

The following code works for me:

import pytest
import requests
import responses


@pytest.fixture()
def register_responses():

    with responses.RequestsMock() as mock:

        mock.add(
            method=responses.GET,
            url="http://example.com",
            content_type="application/json",
            json={"key": "value"},
            match_querystring=True,
        )

        mock.add(
            method=responses.GET,
            url="http://different_example.com?hello=world",
            content_type="application/json",
            json={"key2": "value2"},
            match_querystring=True,
        )

        yield mock


def test_request_params_multiple_urls_pytest_fixture(register_responses):

    resp = requests.get("http://example.com")
    assert resp.json() == {"key": "value"}

    resp = requests.get("http://different_example.com", params={"hello": "world"})
    assert resp.json() == {"key2": "value2"}

    assert register_responses.calls[1].request.params == {"hello": "world"}
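
As a side note: more recent responses releases deprecate the match_querystring argument in favor of the matchers module. A minimal sketch of an equivalent registration, assuming a responses version that ships responses.matchers (the exact version cutoff is not covered by this issue):

import responses
from responses import matchers

with responses.RequestsMock() as mock:
    # Roughly equivalent to url="...?hello=world" with match_querystring=True
    mock.add(
        method=responses.GET,
        url="http://different_example.com",
        content_type="application/json",
        json={"key2": "value2"},
        match=[matchers.query_param_matcher({"hello": "world"})],
    )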

2 reactions
ajhynes7 commented on Feb 3, 2021

Hi @rawrgulmuffins, I don’t work on this project but I happened to come across this issue.

I’m wondering if your problem is that you have yield response outside of the with block. You could try indenting the yield line so it’s inside the block.

import pytest
import responses


@pytest.fixture()
def register_responses():

    with responses.RequestsMock(assert_all_requests_are_fired=False) as response:
        # Other code...
        yield response

There’s no need to use pytest.yield_fixture, because it’s deprecated.
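
For reference, a yield inside a plain @pytest.fixture already gives setup/teardown semantics: the test body runs while the fixture is suspended at the yield, and everything after it runs at teardown. That is exactly why the yield must sit inside the with block, so that RequestsMock.__exit__ only runs after the test finishes. A minimal sketch with illustrative names:

import pytest

@pytest.fixture()
def resource():
    print("setup")        # runs before the test body
    yield "the resource"  # the test runs while the fixture is suspended here
    print("teardown")     # runs after the test, like a context manager's __exit__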

