As part of creating a Markdown linter for my personal website, I believe it is imperative to have solid testing on that linter and on the tools used to test it. In my previous article on Scenario Testing Python Scripts, I described the in-process framework that I use for testing Python scripts from within PyTest. That framework ensures that I can properly test Python scripts from the very start of their execution, increasing my confidence that they are tested properly.

To understand how my tests are doing and what their impact is, I turned on a number of features that are available with PyTest. These features either make testing easier or measure the impact of those tests and report that information. This article describes my PyTest configuration and how that configuration benefits my development process.

Adding Needed Packages to PyTest

There are four main Python packages that I use in conjunction with PyTest. The pytest-console-scripts package is the main one, allowing console scripts to be invoked and tested from within PyTest. Since I am in favor of automating processes where possible, this is a necessity. From a test execution point of view, the pytest-timeout package sets a timeout on each test, ensuring that a single runaway test does not prevent the whole set of tests from completing. For reporting, the pytest-html package creates an HTML summary of the test results. The pytest-cov package adds coverage of the source code, with reporting of that coverage built in. All of these packages have helped me in my development of Python scripts, so I highly recommend them.

Depending on the Python package manager and environment in use, there will be slightly different methods to install these packages. For plain Python this is usually:

pip install pytest-console-scripts==0.20 pytest-cov==2.8.1 pytest-timeout==1.3.3 pytest-html==2.0.1

As I have used pipenv a lot in my professional Python development, all of my personal projects use it for setting up the environment and its dependencies. Similar to the line above, installing these packages under pipenv requires executing the following line in the project’s directory:

pipenv install pytest-console-scripts==0.20 pytest-cov==2.8.1 pytest-timeout==1.3.3 pytest-html==2.0.1

Configuring PyTest For Those Packages

Unless information is provided on the command line, PyTest will search for a configuration file to use; setup.cfg is one of the files it looks for by default. The following fragment of my setup.cfg file takes care of the configuration for those PyTest packages.

[tool:pytest]
testpaths=./test
addopts=--timeout=10 --cov --cov-branch --cov-fail-under=90 --strict-markers -ra --cov-report xml:report/coverage.xml --cov-report html:report/coverage --junitxml=report/tests.xml --html=report/report.html

While all of the configuration is important, the following settings matter most when setting up PyTest to measure the effects of testing:

  • testpaths=./test - relative path where PyTest will scan for tests
  • addopts/--junitxml - creates a junit-xml style report file at the given path
  • addopts/--cov - records coverage information for everything
  • addopts/--cov-branch - enables branch coverage
  • addopts/--cov-report - types of reports to generate and their destination paths
  • default/--cov-config - configuration file for coverage, defaulting to .coveragerc

In order, the first two configuration items tell PyTest where to look for tests to execute and where to place the JUnit-styled XML report with the results of each test. The next three configuration items turn on coverage collection, enable branch coverage, and specify what types of coverage reports to produce and where to place them. Finally, because --cov-config is not set, the coverage configuration file defaults to .coveragerc.

For all of my projects, the default .coveragerc that I use, with only a small per-project change to the source= line, is:

[run]
source = pyscan

[report]
exclude_lines =
    # Have to re-enable the standard pragma
    pragma: no cover

    # Don't complain about missing debug-only code:
    def __repr__
    if self\.debug

    # Don't complain if tests don't hit defensive assertion code:
    raise AssertionError
    raise NotImplementedError

    # Don't complain if non-runnable code isn't run:
    if 0:
    if __name__ == .__main__.:

To be honest, this .coveragerc template is something I picked up somewhere, but it works, and works well for my needs. The exclude lines have worked in every case I have come across, so I haven’t touched them in the 2+ years that I have been writing code in Python.
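To illustrate what those exclude patterns actually skip, here is a hypothetical module; the Scanner class and its methods are made up for this sketch, with comments marking which lines the patterns above would omit from coverage reporting.

```python
class Scanner:
    def __init__(self, debug=False):
        self.debug = debug

    def scan(self, items):
        if self.debug:  # excluded: matches the "if self\.debug" pattern
            print("scanning", items)
        return [item for item in items if item]

    def __repr__(self):  # excluded: matches the "def __repr__" pattern
        return f"Scanner(debug={self.debug})"


def not_ready():  # pragma: no cover
    # Excluded explicitly with the standard pragma.
    raise NotImplementedError()


if __name__ == "__main__":  # excluded: matches 'if __name__ == .__main__.:'
    print(Scanner().scan(["a", "", "b"]))
```

With these exclusions in place, untested __repr__ methods, debug-only branches, and main guards do not drag the coverage percentage below the --cov-fail-under threshold.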

Benefits Of This Configuration

Given the setup from the last section, there are two main benefits that I get from this configuration. The first benefit is the machine-readable XML information generated for the test results and the test coverage. While this is not immediately consumable in its current form, that data can be harvested later to provide concise information about what has been tested.
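As a minimal sketch of what that future harvesting could look like, the snippet below pulls summary counts out of a junit-style report with the standard library; the inline SAMPLE string stands in for a real report/tests.xml file, and the exact element layout can vary between PyTest versions.

```python
import xml.etree.ElementTree as ET

# Stand-in for the contents of report/tests.xml produced by --junitxml.
SAMPLE = """<testsuite name="pytest" tests="3" failures="1" errors="0" time="1.52">
  <testcase classname="test_scan" name="test_ok" time="0.50"/>
  <testcase classname="test_scan" name="test_also_ok" time="0.40"/>
  <testcase classname="test_scan" name="test_bad" time="0.62">
    <failure message="assert 1 == 2"/>
  </testcase>
</testsuite>"""


def summarize(xml_text):
    """Return (total, failed) test counts from a junit-style report."""
    suite = ET.fromstring(xml_text)
    total = int(suite.get("tests"))
    failed = int(suite.get("failures")) + int(suite.get("errors"))
    return total, failed


print(summarize(SAMPLE))  # -> (3, 1)
```

A few lines like these are enough to turn the raw XML into a concise pass/fail summary for a build dashboard or a commit check.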

The second benefit is to provide human readable information about the tests that have been executed. The HTML file located at report/report.html relays the results of the last series of tests while the HTML file located at report/coverage/index.html relays the coverage information for the last series of tests. Both of these pieces of information are useful for different reasons.

In the case of the test results HTML, the information presented on the test results page is mostly the same information as is displayed by PyTest when executed on the command line. Some useful changes are present, such as seeing all of the test information at once, instead of just a . for a successful test, an F for a failed test, and so on. I have found that having this information available on one page allows me to debug an issue affecting multiple tests more quickly, instead of scrolling through the command line output one test at a time.

In the case of the test coverage HTML, the information presented on this page is invaluable. For each source file in the Python project being tested, there is a page that clearly shows which lines of each Python script are exercised by the tests. By using these pages as a guide, I can determine which tests I need to add to ensure that the scripts are properly covered.
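A hypothetical example of that workflow: suppose the coverage page highlights the else branch of a classify() function as never exercised (the function and test names below are made up for illustration). The fix is a new test aimed squarely at the uncovered branch.

```python
def classify(count):
    if count > 0:
        return "found"
    else:
        return "empty"  # shown as uncovered on the coverage page


def test_classify_found():
    # The only test that existed before consulting the coverage report.
    assert classify(3) == "found"


def test_classify_empty():
    # The new test, added to exercise the highlighted else branch.
    assert classify(0) == "empty"
```

With branch coverage enabled via --cov-branch, the report flags not just unexecuted lines but also conditions that were only ever taken one way.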

By using these two tools together, I can quickly determine what tests to add, and when tests fail, I can determine why they failed and look for patterns in the failures. This enables me to quickly figure out where the blind spots are in my testing, and to address them quickly. This in turn can help me to figure out the best way to improve the quality of the project I am working on.

If this finds an issue with an existing requirement, that requirement can be adjusted or a new requirement added to fill the deficiency. If the requirements were all right and the code they were testing was incorrect, that code can be addressed. If the coverage page shows that code was written but not tested, a new test function can be introduced to cover that scenario. Each observation and its appropriate action work to improve the quality of the software project.

What Was Accomplished

This article showed how to set up PyTest using a configuration file. With that configuration file, PyTest was set up to provide timeouts for tests, to report the test results, and to provide a coverage report of how well the tests cover the scripts under test. This was all done to better understand the impact of tests on a project and to provide better information on how they succeed (test coverage) or fail (test results). By understanding this information, the quality of the software can be measured and improved where needed.

What Is Next?

In the next article, I will briefly describe the PyScan tool I have written, and how it takes the XML information generated by the --junitxml=report/tests.xml option and the --cov-report xml:report/coverage.xml option and produces concise summaries of that information. I will also give a number of examples of how I use this information during my development of Python projects.
