Summary

In my last article, I talked about my progress in refactoring and my experience with the three added tools. In this article, I talk about the work I did this week, including the bug that almost knocked me down for the count!

Introduction

What a long two weeks of work I just had on the PyMarkdown project! I took those two weeks and invested them wisely in doing some major refactoring on the project. The project itself was in decent shape, but I had let some of the functions grow too large and take on too many responsibilities. For my own sanity and for the project’s own good, I needed to take some time and get it closer to what I would consider a good, healthy code base.

But that did not mean the work was over. Not by far. There were still a couple of cleanup items that I needed to attend to, plus a list of issues that I found while doing the refactoring. And while it was nice to take a bit of a break, I needed to get back to focusing on making progress toward a non-beta release soon.

What Is the Audience for This Article?

While detailed more eloquently in this article, my goal for this technical article is to focus on the reasoning behind my solutions, rather than the solutions themselves. For a full record of the solutions presented in this article, please consult the commits that occurred between 22 Nov 2021 and 28 Nov 2021.

Issue 151 - Doing Some Cleanup

It will probably not surprise any long-time readers that I decided to take a small amount of additional time to tidy up some loose ends left after the work I did in the last two weeks. Having touched a substantial part of the files in the project’s core, I wanted to at least go over the code once and look for anything that I could easily simplify to make it more maintainable.

There were no tremendous changes, just a lot of little ones. Things like simplifying:

coalesced_list = []
coalesced_list.extend(first_pass_results[0:1])

into:

coalesced_list = [first_pass_results[0]]

and moving small classes such as the ContainerIndices class into their own modules. I also double-checked the use of return codes from functions to make sure that they were still being used, removing them if they were not.

Other than that, there were cases where I took multiple variable initializations, such as:

start_index = 0
processed_parts = []

and condensed them down into:

start_index, processed_parts = 0, []

Even with Black reformatting such assignments across multiple lines, I have gradually been moving towards grouping all my variable initializations together in that manner. I am not sure if every Python developer feels that it is a good style, but I do. For me, it clearly states that these variables are being set together and eases my ability to read the functions. Basically, it works for me.

But after an evening of going through the source code and cleaning it up, I knew I had to start making progress on the other issues.

Issue 155 - Refactoring Common Structures

Looking at the code base over the last few weeks, I really wanted to be able to harmonize a few of the variables that get passed repeatedly into functions. While there are more examples of that in the code base than I would like, the ones that I picked to work on for this ticket were the this_bq_count and stack_bq_count variables. These are just one example of variable tuples where two or more variables are usually seen together.

While not used outside of the Container Block Processor, these two variables are heavily used inside of the Container Block Processor to capture the current count of block quotes. As their names suggest, the this_bq_count variable tracks the “observed” count of block quote characters and the stack_bq_count variable tracks the current number of block quote tokens on the stack. As a Block Quote element can start and stop on any line, these variables are passed together to ensure the current state is well known.

At least they were passed together until this work was completed. When I looked at the source code for the Container Block Processor, the Block Quote Processor, and the List Block Processor, there were only five places where these values were not passed together. As such, it made sense to add a new class, BlockQuoteData, and encapsulate those values inside an instance of that class. It was a bit of work to move the code to use the new variables, but it did make the code cleaner. And that means lower maintenance, so that was good!
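As a sketch of what that encapsulation looks like, the new class simply bundles the two counts together. Note that the field names below are my own guesses at reasonable replacements for the old variable names, not necessarily the ones used in the PyMarkdown source:

```python
from dataclasses import dataclass


@dataclass
class BlockQuoteData:
    # The “observed” count of block quote characters (was this_bq_count).
    current_count: int
    # The number of block quote tokens on the stack (was stack_bq_count).
    stack_count: int


# Functions that previously took two parameters can now take one.
bq_data = BlockQuoteData(current_count=1, stack_count=1)
```

Passing one object instead of two loose integers also means the five places that previously handled the values separately now stand out clearly in the code.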

Issue 157 - Upgrading Test Scripts

From experience, software developers typically either love typing things out manually on the command line or love using scripts. With my propensity for mistyping things, otherwise known as “fat-fingering”, I fall into the “love scripts” category. The PyMarkdown project is a fitting example of how I turn simple development tasks into scripts.

My ability to run Python tests using a script is an example of this practice. As I do my PyMarkdown work on Windows, I have a ptest.cmd script that has evolved to have a fair amount of usability, some of which is re-used by my clean.cmd script. But even with the functionality that it has, there were times when I could literally go to my kitchen, make a simple sandwich, eat it, and be back in front of my computer before the script completed. Granted, that was running the clean.cmd script with the slowdown of measuring code coverage, but it was still a long time to wait for a script to complete.

I had looked around and found a couple of PyTest addons that could possibly let me run the tests on multiple cores, something that would hopefully reduce those long testing times. As the upside to finding a workable addon was huge, I took an evening and experimented with two candidates. While the first candidate failed right away, the second candidate looked promising. After a bit of research on the issues that I was seeing, I was able to resolve those issues, and I added support for the pytest-xdist package into the project.

I think the reason that this package works is that it keeps things simple, and it has been around for a while. By adding the -n !CORES_TO_USE! argument to where I invoke PyTest in the script, I was able to control how many cores were used by PyTest on my machine. Deciding to take a cautious approach, I added the following code to the script to only use one-half of the available cores for testing:

if defined PTEST_MULTI_CORE_ARGS (
    set /a CORES_TO_USE=%NUMBER_OF_PROCESSORS%/2
    if !CORES_TO_USE! LSS 1 (
        set CORES_TO_USE=1
    )
    set PTEST_MULTI_CORE_ARGS=-n !CORES_TO_USE!
)
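For readers who do not speak Windows batch, the same half-the-cores calculation can be sketched in Python. The helper name here is mine, not part of the script:

```python
import os


def cores_to_use():
    # Use one-half of the available cores, but always at least one,
    # mirroring the batch logic above.
    available = os.cpu_count() or 1
    return max(1, available // 2)


# The value feeds the -n argument that pytest-xdist understands.
pytest_args = ["-n", str(cores_to_use())]
```

On a typical eight-core development machine, this hands four workers to pytest-xdist while leaving headroom for everything else running on the system.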

Digging into the occasional failures I was getting, I added the --dist loadscope flag, as per a suggestion in a reply to a similar question on Stack Exchange. With the occasional failure not having shown up since then, I knew I had a viable solution.

To be open about the entire process, the bulk of the work for these changes was done while I was working on the previous two issues, so I had plenty of time to experiment with which settings work best with the way that I work. That one-half of the available cores was arrived at because it seemed to be a good balancing point between making my system crawl and getting a good speed boost for the tests. Similarly, I added an optional --maxfail=5 argument to the PyTest command line to better manage failure cases. As there is a greater chance that failures will happen in large batches, such as when I am refactoring, the limit is enabled by default and can be disabled by using the -a flag.

Based on the refactoring work I did during this last week, this work has been extremely useful in cutting down test run times. The time for a full test run with coverage on my machine went from 7 minutes 32 seconds to 1 minute 35 seconds. For a normal full run, the duration went from 1 minute 44 seconds to 36 seconds. Between that and similar tweaks to the other linters, I was happy with this progress.

Issue 93 - This Is That Bug

Yes, this is the bug that almost knocked me down. In all honesty, it probably knocked me down more than a couple of times; I just got back up right away and got back to work. I do know that at least once I needed to walk away and collect my cool before getting back to solving the problem. Basically, resolving this issue was a test of my will to fix it properly instead of doing a partial job. And it was not always pretty.

Due to commitments, I spent the American Thanksgiving holiday at home with my son. While we had loose plans to chill out during the holiday weekend, we both had a lot on our plates and just decided to watch a Champions League game together and recharge our batteries. This ended up being useful because of some research that I had started earlier in the week on Issue 93.

In doing my research and playing around with finding a quick solution for this issue, one thing became clear at once. Depending on how you looked at the tokens, I either did not have the correct information or I had conflicting information in the tokens. Specifically, I was noticing that the whitespace in and around List elements and Block Quote elements was not being stored in the tokens properly.

The issue was around how this document was parsed:

> 1. list
>    this
>    *****
> 1. that

When parsed, the Block Quote token and the List token were both recording the whitespace associated with the first line. In particular, the Block Quote token was storing >{space} and the List token was storing {space}{space}.1 My original thought was that this decision would be convenient in that the inner List token would simply “know” that its two space characters were the greater-than character and the space character from the Block Quote token. When I was working through that part of the parser, it made sense.
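To illustrate the overlap, consider the first line of that document. Using hypothetical variable names rather than the parser’s actual fields, the two tokens effectively recorded the same two columns twice:

```python
line = "> 1. list"

# What the Block Quote token stored: the ">" plus the space after it.
block_quote_prefix = line[0:2]

# What the List token stored: the same two columns, but as spaces,
# on the assumption that the list would “know” those spaces were
# really the block quote’s characters.
list_prefix = " " * len(block_quote_prefix)
```

The conflict described above is visible right here: both tokens claim responsibility for columns one and two of the line, just in different forms.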

Given time to work on other features, that decision was looking less and less correct. There were competing parts of the algorithms that needed their data in one of two formats, and it was not always clear which format was correct. On top of that, the extra work to convert between the two formats was not done consistently and needed more maintenance than I hoped for.

So, before I did anything else, I needed to make a choice on how to approach this: pay now or pay later.

The Decision

Looking at a couple of relatively open days ahead of me, I needed to decide how to most effectively apply that time to the PyMarkdown project. One option was to clear out multiple issues, but only touch the code that I needed to. This would incur extra costs in the future, but I could budget those costs over a couple of weeks. The other option was to buckle down with the time I had and make major progress toward getting that work done. That would incur an immediate expense but would involve less rewriting of code later.

It was not an easy decision for me to make. I knew that, at the very least, I was talking about eight hours of work. On the other end of the range, the duration for that same work could be in the three-to-four-day range. At that point, it was the middle of the day on Thursday, and the prospect of my entire holiday being swallowed up was not a good one.

But as I thought about it over the next couple of hours, my mind kept coming back to two points. The first point was that this cost was something I was going to have to pay in the future. That was not up for debate. To make the whitespace in the container tokens more usable, I would need to make changes. The second point was about whether I wanted to pay that cost in one large chunk and get it over with, or incur that cost in smaller chunks, but with interest added on. The logic there was that if I chose smaller chunks, not only would I have to pay the main cost for the changes, but I would have to pay interest on those changes until the main changes were completed.

With those two points crystallizing as the main differentiators in my decision, the decision almost made itself. If I had the time to spend, it would be easier to do the work in one chunk and avoid the extra cost incurred by spreading that work out.

And with that… the work started.

Just Get On With It

For my own sanity, I broke the work out into three chunks: getting the scenario tests passing, getting the Markdown checks passing, and getting the other consistency checks passing. It was 11 am Pacific time on Friday morning when I started to make the changes, and it was after 2 pm Pacific time on Sunday when I finished making them. I will not go into all the changes that I had to make, as I would prefer not to relive all that effort. Do not get me wrong, it was the right thing to do, but it was just excessively precise work.

The good points were as follows.

The updated tooling effort from earlier in the week paid off handsomely during this effort. Most of the changes looked good when I sketched them out in my head, but when I implemented about half of them, things changed. Either the design did not work as well as I had hoped, or there were nasty side effects to consider. By having a quickened “change-test-observed” loop, I could get to the “observed” part of the loop that much quicker.

Having a good set of scenario tests helped me make those changes with confidence. While I cannot account for tests that I have not added yet, the “test” part of the “change-test-observed” loop meant that these changes moved things forward without introducing more problems. And as I debug best by looking at the overall picture, having good logging throughout the code base helped me diagnose the issues and verify the results of the changes. A good set of tests and 100 percent code coverage give me confidence. But being able to follow the flow of data through the newly changed portions of the code raises that confidence even higher.

I was also happy to see that one of the design points of the project, keeping the container processing separate from the leaf processing, was still in place. While fixing this issue properly required substantial work in the Container Block Processor, only a couple of lines on the boundary between the Container Block Processor and the Leaf Block Processor needed to be changed. And I believe most of those changes were just with logging input values.

And now for the bad points.

Changing the method by which I save whitespace in container tokens would have been better made when I was designing the parser. Doing it at this point was a nightmare, like trying to talk on the phone on the tarmac at a busy airport. I had to try to keep my focus on what I was specifically trying to target. And that was not always easy. But while I think I might have been able to design that part of the system better, I am more convinced that I needed to see the old design and its flaws to arrive at this improved design. Sometimes, we need to take one step backward to take two steps forward.

The other bad point that I saw is that I think I still have a way to go in simplifying the container handling. These changes were obviously a huge step in the right direction, but I am sure I can simplify them even more. It could be the nature of preserving the whitespace, but I found myself writing code that amounted to “if in a list block that is in a block quote block”. In one or two cases, I believe I had to write something even more devious than that.
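A hedged sketch of the kind of check I mean, using a simple list of type names to stand in for the parser’s token stack. The function and the type-name strings are illustrative only, not taken from the PyMarkdown source:

```python
def is_list_inside_block_quote(container_stack):
    """Return True if the innermost open container is a list that is
    nested, at any depth, inside a block quote."""
    if not container_stack or container_stack[-1] != "list":
        return False
    # Any enclosing container being a block quote triggers the case.
    return "block-quote" in container_stack[:-1]


# "> 1. list" opens a block quote, then a list inside it.
print(is_list_inside_block_quote(["block-quote", "list"]))   # True
print(is_list_inside_block_quote(["list", "block-quote"]))   # False
```

Conditions like this one multiply quickly as nesting depth grows, which is exactly why the container handling still has room for simplification.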

While there may have been other points that I could include here, most were too minor to mention or are on a list of things to look for when I get free time. Things like more minor refactorings and patterns to look out for when writing any new Python code.

What Was My Experience So Far?

Given the arduous work that I put in, I had to strike a balance between the work I was doing and not letting the weekend slip away without taking advantage of it. Similarly, I had to balance those previously mentioned good points and bad points against what I have accomplished with the project.

Yes, there were a couple of times that I yelled “why did I want to write a linter?” to my computer screen. If I had chosen to write a Markdown-to-HTML parser, like CommonMark, then I would be done. As HTML throws almost all the whitespace away, the parser just needed to understand the whitespace, not properly store it in a token. If not for the fact that I have written a linter and that I wanted to be able to verify that I had parsed the Markdown properly, I would not need to get the extracted whitespace exactly correct.

And to me… knowing that I made a hard call, but it was for what I consider to be the right reasons… that is good enough for me.

What is Next?

Having made good inroads into fixing the spacing issues in the parser, I know I want to find other related issues and see if I can resolve them, or at least make them easier to resolve. But I also would not mind getting some more small refactorings done. Not sure which is going to win out yet. Stay tuned!


  1. As Markdown does not like space characters inside of code spans, I have replaced any such spaces with the text {space}
