Summary

In my last article, I talked about my desire to focus on Nested Container blocks. After a long hard push to get to this point in the PyMarkdown project, I decided to take a bit of a break and focus on refactoring for a couple of weeks.

Introduction

After deciding to take a bit of time off to focus on not fixing issues, I decided that refactoring the codebase was an effective way to stay connected to the PyMarkdown project while taking a more relaxed approach to work. I have been aware for some time that refactoring needed to be done; it just was never a priority to work on that part of the project. As I try to give my best to every part of every project that I work on, I decided that I needed to make the time, and that now was that time.

Why Take Time Now To Refactor?

The truth? I was tired of fixing issues only to have more issues waiting for me. I am under no illusion about the work ahead: I know I need to get back to fixing the remaining issues, but I decided that I needed a break from the issue resolution process. I know it might seem weird, but choosing to focus on improving the quality of the PyMarkdown project is relaxing to me. And when it came down to it, I knew that the right thing for me to do at this point was to take time to focus on something else for a couple of weeks. I need to make sure that I have the drive and energy to reach the finish line on the remaining issues.

And it is not as though the project is desperately in need of refactoring, so I am confident that the task I am undertaking is a relatively relaxing one. While developing the PyMarkdown project, I took care of most of the small refactorings as I was writing the code itself. If I have held to that practice, hopefully only the larger, more complicated refactorings remain.

What Do I Mean By Improving Quality?

There are numerous ways to refactor code. For each of those ways, there are recipes that can be followed to improve the quality of the code. But before the quality can be improved, two things greatly enhance the odds of success: solid tests and good metrics.

The more complete the testing of a given codebase is, the more confidence I have that a given refactoring will not negatively affect that codebase. The reason for that confidence is a simple one: codebases are complex entities where all the side effects of changing a given line of code may not be easily known. Each test increases the level of monitoring of that codebase for unwanted side effects. With the right monitoring in place, changes can be made with more certainty that each change is a positive one, not a negative one.

Good metrics are needed for a simple reason: not every piece of code requires refactoring. Code only needs to be refactored if there are clear warning signs, found using tools specifically designed to look for a sampling of those signs. For the PyMarkdown project, I have used the Flake8 and PyLint tools to look for obvious issues as I write the code. With very few exceptions, if Flake8 reports an issue, I fix it before checking in code. For PyLint, I fix the issue right away if it is one of the simpler ones. If the issue is one of too-many-locals, too-many-branches, or too-many-statements, I often delay fixing it until later. That delay helps me stay creative and focused on the issue that I am working on.
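In practice, those delayed fixes end up in the code as suppression comments, looking something like this (a hypothetical function, shown only to illustrate the mechanism):

    def rehydrate_container_blocks(tokens):  # pylint: disable=too-many-branches
        """Hypothetical function that has grown too many branches.

        The suppression keeps PyLint quiet for now; the comment itself is
        the marker that this function is a candidate for later refactoring.
        """

Each suppression is therefore a small, searchable IOU that the metrics work described below can tally up.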

But as soon as the bulk of the creative work is over, that is when my focus on quality takes over. Creativity gets me near the finish line, but my focus on quality and solid testing is what gets me over it. Relating it to the woodworking that I do: creativity is what gets the item built, but quality is what makes the item usable and attractive to others. You cannot have just one of them; you need both working together to cross that finish line.

What Tools Should I Use?

Deciding that I wanted a better picture of which PyLint suppressions I have added to the PyMarkdown project, I created a small Python script to help me with that analysis. It extracts that information from a project on a module-by-module basis, along with a convenient cross-project total, and saves the results to a JSON file of my choosing. With that information in hand, I have the start of some good metrics on how I can improve the quality. But I often wondered whether other tools would supply additional benefits with useful metrics.
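For anyone curious, a minimal sketch of that kind of script is shown below. To be clear, this is an approximation of the approach rather than the actual script from the project, and the suppressions.json file name and output layout are placeholders of my own:

    import json
    import re
    from pathlib import Path

    # Matches suppressions such as: # pylint: disable=too-many-locals
    _DISABLE_PATTERN = re.compile(r"#\s*pylint:\s*disable=([\w\-,\s]+)")

    def scan_for_suppressions(project_root, output_path):
        """Tally PyLint suppressions per module and write the totals to JSON."""
        totals = {}
        by_module = {}
        for module_path in sorted(Path(project_root).rglob("*.py")):
            module_counts = {}
            for line in module_path.read_text(encoding="utf-8").splitlines():
                match = _DISABLE_PATTERN.search(line)
                if not match:
                    continue
                for name in match.group(1).split(","):
                    name = name.strip()
                    module_counts[name] = module_counts.get(name, 0) + 1
                    totals[name] = totals.get(name, 0) + 1
            if module_counts:
                by_module[str(module_path)] = module_counts
        Path(output_path).write_text(
            json.dumps({"totals": totals, "by-module": by_module}, indent=2),
            encoding="utf-8",
        )

    scan_for_suppressions(".", "suppressions.json")

Running a scan like that regularly makes it easy to spot which modules are accumulating suppressions and whether the totals are trending in the right direction.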

To that end, I started looking for potential candidates. The first qualification was that the tool must be free for open source projects. The second was that it must provide one or more added metrics that help me find issues with the PyMarkdown project. Finally, the third was that the tool must be decently usable. While that third qualification is more intangible than the others, it is an important one.

Following those guidelines, I found three potential candidates: CodeBeat, Code Inspector, and Sourcery.

All three of these tools provide for simple installation into the GitHub workflow of the PyMarkdown project, so they are on equal footing there. Both Sourcery and Code Inspector have VSCode plugins, so they both get extra marks there. Beyond that, things start to differ.

CodeBeat

CodeBeat is a tool that assigns each repository a GPA score, outlining how each part of the repository contributes to that score. The result looks like a high school report card: each module can be clicked on to discover how that module's score was calculated. One of the benefits of CodeBeat is that it is a cross-platform tool, supporting 20+ different languages, of which Python is just one.

As there is no information on their web page about which metrics are calculated or with which tools, I can only investigate the metrics that are reported for the PyMarkdown project. Based on the metrics that I see, there are only two that may be useful: Block Nesting and ABC.

Block Nesting Level

Block nesting is meant to calculate the maximum number of distinct indentation levels within a given function. Given this example:

def next_token(self, context, token):
    if (
        token.is_blank_line
    ):
        self.__have_incremented_for_this_line = False

the block nesting level should be calculated as 1, due to the single indented block that forms the body of the if statement. Similarly, the following example:

    if (
        token.is_end_token
    ):
        # do something here
        if some_condition:
            self.__have_incremented_for_this_line = False

has a block nesting level of 2. Anything beyond three nested levels is considered to follow the Arrowhead Anti-pattern, nicknamed for the arrow-like shape the indentation forms, decreasing comprehension and making the code difficult to support.
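To make the anti-pattern concrete, here is a hypothetical arrowhead-shaped function alongside the flattened form that guard clauses produce; the names are invented purely for illustration:

    def handle_token_nested(context, token):
        # Arrowhead shape: each check pushes the real work further right.
        if token is not None:
            if token.is_end_token:
                if context.is_active:
                    if context.line_count > 0:
                        context.have_incremented_for_this_line = False

    def handle_token_flat(context, token):
        # Guard clauses keep the block nesting level at 1.
        if token is None or not token.is_end_token:
            return
        if not context.is_active or context.line_count <= 0:
            return
        context.have_incremented_for_this_line = False

Both functions do the same work, but the flattened version reads top to bottom with every early exit spelled out.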

While interesting, there are two problems with this metric. The first is that a rule for this metric is already present in PyLint as too-many-nested-blocks, though with a default value of 5. The second is that this tool reports the total number of distinct indents, not the maximum indent. For both reasons, this metric is not an option for me to use.

Assignment Branch Condition

The other possible metric is the “Assignment Branch Condition” or ABC metric. A more complete description of the metric is available here; the basic idea is to calculate the magnitude of the elements of those three classes within a function. Using the last example, there is one assignment on line 6, two branches on lines 1 and 5, and zero conditions. Therefore, the ABC value for that example is the square root of 1*1 + 2*2 + 0*0, or the square root of five.
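If nothing else, the arithmetic is simple enough to sketch. The following rough approximation follows this article's categorization by counting each if statement as a branch; real ABC implementations use more detailed counting rules, so treat this only as an illustration:

    import ast
    import math
    import textwrap

    def abc_magnitude(source):
        """Approximate the ABC magnitude of a snippet of Python source.

        Assignments come from assignment statements, branches from `if`
        statements, and conditions from comparison and boolean operators.
        """
        tree = ast.parse(textwrap.dedent(source))
        assignments = branches = conditions = 0
        for node in ast.walk(tree):
            if isinstance(node, (ast.Assign, ast.AugAssign, ast.AnnAssign)):
                assignments += 1
            elif isinstance(node, ast.If):
                branches += 1
            elif isinstance(node, (ast.Compare, ast.BoolOp)):
                conditions += 1
        return math.sqrt(assignments**2 + branches**2 + conditions**2)

    SNIPPET = """
    if token.is_end_token:
        # do something here
        if some_condition:
            have_incremented_for_this_line = False
    """

    print(abc_magnitude(SNIPPET))  # 2.2360..., the square root of five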

My issue with this metric is that its accuracy is very dependent on how the calculation is applied to the function being evaluated. This rule only reports errors when the (unconfigurable) limit of 10 is exceeded, without any indication of how the triggering value was arrived at. Given that lack of information, I do not find this metric actionable, as it is difficult to figure out what to optimize for in the triggered code.

Where Does That Leave Me?

Things are not looking good for this tool with respect to the PyMarkdown project and its needs. It may be that this tool benefits projects that are not already using PyLint and Flake8, but that is not the case for this project. That means that, for me, this tool is at best a curiosity.

Code Inspector

Code Inspector is a tool that supplies a high-level categorization of the issues that it finds, which is useful. Another cross-platform solution, this tool benefits from a decently responsive VSCode extension.

There are multiple things that this tool brings to the table. The first is that it is decent at discovering duplicated code. There seems to be a five- or six-line threshold for detecting those duplicates, but that seems reasonable. Other than that, this tool relies on PyLint and Bandit for Python analysis. Bandit is a decent tool that adds security checking to the suite of tools being applied against the project. As the PyMarkdown project is an application with very few, if any, security concerns, adding Bandit to the mix does not help.

However, one thing that should not be overlooked is that the Code Inspector VSCode extension allows PyLint to be executed against the codebase with every save. This has proven invaluable so far for assessing whether a change has any negative consequences. Because I am currently using their Basic package, which is free, there is a noticeable 10 to 30 second delay between when I save a Python file and when the scan information is updated. But since I am not paying anything for their service, I am okay with that.

Where Does That Leave Me?

While the tool itself does not add any useful metrics that I can use, it does execute PyLint with each save, and that is useful to me. Having that information in front of me as I am making changes is unbelievably valuable.

Sourcery

Sourcery is a Python-only tool that integrates directly into the GitHub Actions process and into VSCode as an extension. Because of the focus on Python and the integration with VSCode, Sourcery can not only find issues but also suggest fixes for the issues that it finds. For a full list of suggested refactorings, look at this web page. The list is long and substantial.
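To give a flavor of what those suggestions look like, here is an invented before-and-after of the kind a list-comprehension conversion produces; it is not an actual suggestion taken from the PyMarkdown project:

    # Before: an accumulator loop that refactoring tools commonly flag.
    def collect_block_tokens(tokens):
        block_tokens = []
        for token in tokens:
            if token.is_block:
                block_tokens.append(token)
        return block_tokens

    # After: the suggested list-comprehension form of the same function.
    def collect_block_tokens(tokens):
        return [token for token in tokens if token.is_block]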

Two other parts of this tool really appeal to me and the way I develop in Python. The first is that the duplicate analysis in Sourcery is decent. While not as powerful as in the paid versions, the free version has decent pattern matching for detecting duplicates. The second part that appeals to me is the code quality percentage metric. It is very clearly explained and broken down into its component parts when displayed. That breakdown has helped me figure out how to best address each issue.

Next up is the email it sends out with every Pull Request, summarizing where things were before the Pull Request and where they will be after it. I know that it might appear to be duplication, but I find it nice to be able to go over what I did the day before and see how any commits affected the quality of the project.

The final part of this tool that I like is the Pull Request that it creates for me with any suggested refactorings based on my last commit. Just to be clear, I did not add anything or turn anything on to enable that feature; it is a stock feature of this tool. For me, that is an extremely useful feature to have. While I may decide not to merge that Pull Request for assorted reasons, if I do decide to approve it, it has already started to pass any tests or metrics that I use for a normal Pull Request. And if nothing else, if my own Pull Request gets a clean bill of health from Sourcery, I know that I have solid code in that Pull Request.

Where Does That Leave Me?

While I have been developing software for decades now, I am still learning when it comes to Python. And even if I were not still learning the ropes, I know that I do not always follow best practices. Sourcery is a great tool for keeping me honest and for helping me refine my understanding of Python.

What is Next?

Well, it is not my usual ending, but it is an ending… kind of. I am going to continue using these tools for another week to see if any other features show up during my refactoring. I know I am leaning heavily on Sourcery to augment my knowledge of Python and on Code Inspector to execute PyLint as I develop, but things may change! Stay tuned!


Comments

So what do you think? Did I miss something? Is any part unclear? Leave your comments below.

