In my last article, I talked about elevating the extension object support in the PyMarkdown project to the same level as plugin rules. In this article, I talk about starting to work on getting rid of some long-standing issues: nested container blocks.
There are times in my professional career when I have written chunks of code that I knew were going to be as near to eternal as anything is in our profession. There are also times when I wrote code understanding that it would be replaced in a couple of months. Whether it was replaced in that time frame was above my paygrade. Because of situations like those, I try to “slant” my code a bit towards those goals, but I inevitably treat both situations with the same amount of care and respect. What changes for me is whether the code is just a prototype or production code.
Yes, both of those phrases were in italics. Why was that? Due to external pressures from people in those higher paygrades, developers often feel the need to take some code that was meant as a prototype and productionize it as quickly as possible. The request is usually to take code that we know “just worked… barely” and code that was used for experiments and to show possibilities, and get that code ready for public consumption as quickly as possible. And, as with anything that is rushed, things get lost in the process, with quality usually being the first thing to go.
And as much as the PyMarkdown project is on my own time and pace, I still occasionally find myself looking at some code that was only meant as a placeholder. It is those times that I must remind myself that sometimes, in order to take good steps forward, I need to take a couple of steps back.
What Is the Audience for This Article?
While detailed more eloquently in this article, my goal for this technical article is to focus on the reasoning behind my solutions, rather than the solutions themselves. For a full record of the solutions presented in this article, please consult the commits that occurred between 26 Jun 2021 and 27 Jun 2021.
Giving It Some Context
I am going to start this article with a set of statements that I believe sums up my work on this project. The first is that writing a Markdown parser is hard. Not impossibly hard, but hard enough. If there is any doubt about how hard it is, click on this link and count the number of different interpretations of that simple Markdown document. Each set of results is another way in which Markdown was interpreted by someone writing a parser. Each one is another road taken by another set of developers.
Then bring into that mix the GitHub Flavored Markdown specification and the reference implementations of the CommonMark family of parsers. These efforts try to bring those different roads together by providing a single specification, a single answer for questions that previously caused parser developers to take divergent roads. Even better, it provides a set of examples that parser developers can test against. The downside of this is that there is now a standard that implementors must measure up to, otherwise they cannot claim compliance with that standard. That is, of course, if they decide that they want their parser to be GFM compliant.
And then there are people like me that want a grammar checking or
linting ability for their Markdown documents. I decided to work on this project because
I want to bring the same kind of “sanity” to my Markdown documents that I bring
to my source code. For Python there is
pylint, and for Java there is
checkstyle. So, why can I not have a similar linter for Markdown?
From my point of view, it was a niche that
needed to be filled, and I had an interest in learning more about Python and
addressing that niche.
Writing a GFM compliant Markdown linter adds yet another level of complexity to that already difficult process. Not only does the project have to pass the qualifications for writing a compliant parser, but I also need to be extremely confident that the token stream the parser produces is correct. As the PyMarkdown project allows for rules to be written that analyze that token stream, everything needs to be reported properly and cleanly, including whitespace. If the token stream is off, the rules that execute on that token stream are off. So I need to be very confident about the accuracy of the token stream.
To put it bluntly, Markdown parser developers only worry about whether the HTML they output looks right. As a Markdown linter developer, I worry about whether the output looks right, whether the rules are coded right, and whether the tokens used to power those rules are right. And that takes a lot of effort to get right!
Getting To Work
And so, this week I decided to start tackling one of the issues I have been avoiding for at least six months, if not longer: nested container blocks.
At the start of this week’s work, I knew that this work was going to be split into multiple blocks of work. I was not sure how many at the outset, but I knew it was going to be multiple blocks.
Do The Research First
To be clear, the parser was not broken when I went to do this work, nor was
it cleanly working. It was kind of working and kind of not working. Yeah, those statements
are a bit fuzzy, so let me bring it into context. The parsing of some of the
rudimentary nesting all hinged
on one specific line of code in the
```python
forced_close_until_index = possible_list_start_index
```
It was a kludge, I admit, but it was a decent kludge that had worked well. The usual metrics that I use for things like that are:
- Are the scenario tests passing?
- Is the code maintainable?
The answer to the first question was yes, every scenario test was passing. I was initially hesitant to answer the second question, as I knew that very few examples dealt with these kinds of scenarios. Because of that, I thought that I had something in there that was decent; otherwise, the scenario tests that I already had would not have been passing. But then I started to look at the nesting blocks code more closely. After a bit of examination, I realized that I had used code that was more of a sledgehammer than an artfully crafted tool.
Let me explain that comparison a bit more. I am not sure when I added that exact line and for what reason, but its intent is clear to me. The next time that the close function is called, that variable will be passed into the close function. That will tell the close function to force a close, and to only stop when that specific index is reached. There is no finesse, no fine tuning, just a simple removal until that index is hit. And just by looking at the code, I could tell there would be issues. I was concerned about the maintainability of that code going into this refactoring task.
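To make that sledgehammer behavior concrete, here is a minimal sketch of what a “forced close until index” amounts to, using a plain list as a stand-in for the parser’s stack of open blocks. The names here are illustrative, not PyMarkdown’s actual implementation:

```python
def force_close_until_index(open_blocks, forced_close_until_index):
    """Pop every open block above the given index, no questions asked."""
    closed = []
    while len(open_blocks) > forced_close_until_index:
        closed.append(open_blocks.pop())
    return closed

# With a document, block quote, list, and paragraph open, forcing a
# close down to index 2 removes the paragraph and the list in one go,
# with no finesse about whether the list should have stayed open.
stack = ["document", "block-quote", "list", "paragraph"]
closed = force_close_until_index(stack, 2)
# stack is now ["document", "block-quote"]
# closed is ["paragraph", "list"]
```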
I had solved that issue with a sledgehammer, and I was not happy about that. I knew I could do better; I just needed some time to figure out how. This was a good time, so I “gave” myself the time I needed to think it through properly.
Taking some time to get to know the code better, one thing became very evident. As
soon as I started experimenting with different values, the
series of scenario tests started failing. Nothing else, just those scenario tests.
That was good news! That meant that the impact of this fix was relatively
self-contained. When I thought about it, it was also bad news as the series
of tests that failed were all extra tests. That probably meant that I was
going to have to get creative near the end of this series of tasks to ensure
there was good coverage for all these scenarios, but that was a task for later.
As I looked at that set of tests, I noticed another thing that the tests had in common: they were all scenarios in which a double Block Quote element was followed by a start List Block element. Even as I played around with changing other pieces of the nesting code, the only things that seemed to be impacted were those tests. From where I was, that was a good observation to make!
Isolating The Changes
Given the above information and a set of scenario tests that I needed to
get passing again, I started at the beginning: with the trailing List Block
element. And for the initial scenario tests, I chose
which had a Markdown document of:
```
> start
> - quote
>
> end
```
Looking at that Markdown and the source code,
I was confident that I was going to need to keep track of that
List Block element, so I added the following line to the start of the
```python
parser_state.nested_list_start = None
```
This ensured that every time I started to look for a new container block, I would start with a clean slate.
Then came the hard part: defining what set that variable and when. Using the information I had, I noticed that the token stream was correct until the point where the last line was encountered. It was then that things went haywire. So, taking a stab at it, I added the following three lines, with tons of debugging around it to test my theories:
```python
if this_bq_count + 1 < len(parser_state.token_stack):
    ...
    if (
        parser_state.token_stack[this_bq_count + 1].is_list
        and adjusted_text_to_parse.strip()
    ):
        ...
        parser_state.nested_list_start = parser_state.token_stack[
            this_bq_count + 1
        ]
```
I put lots of debug in there, and kept it in there, as I was not sure what I was going to need for the other nested cases that I knew were going to follow.
Following through with the information that I had, I knew that I needed
to look for cases where a series of Block Quote elements were followed
by another container element, the List Block element. Once I had that,
I wanted to confirm that the following element was indeed a list, and that
the line being processed was not a blank line (after the initial Block
Quote characters were removed). At that point, I was comfortable setting
the nested_list_start member variable to indicate that we had one of
those messy nested container situations to deal with.
And to be totally clear, this code was added as a step forward, not a final solution. I could already think up some weird combinations in my head that would make this fail, but they would come later. At that point, I was simply happy that I identified the list start that was causing all the commotion.
Responding To The Trigger
Now that I had the trigger, I needed the code to respond to that trigger.
Having looked through the code multiple times in the last week, there
was one function that I knew was the nexus of all things nested:
__handle_nested_container_blocks function. This is the function
that helps handle the recursion that can come with handling nested container blocks.
But when I specifically investigated the code handling a List element within a Block Quote element, all I saw was this one statement:
```python
adj_line_to_parse = adj_line_to_parse[
    end_container_indices.block_index :
]
```
I knew that would not handle the scenarios properly. So, after a lot of thinking and experimenting, I changed that code to:
```python
if parser_state.nested_list_start:
    x_tokens, _ = parser_state.close_open_blocks_fn(
        parser_state,
        include_lists=True,
    )
    parser_state.token_document.extend(x_tokens)
```
Essentially, if the parser was in a scenario where it had a Block Quote element that contains a List Block element, it needed to close that List Block element. This was not the end goal, but a first step. I knew that this had started me moving in the right direction, and that was what was important.
Looking at the third line and the GFM specification, I realized that the Block Quote element was terminating itself as soon as it encountered the third line. Block Quote elements get terminated right away with Blank Lines, but List Block elements only get terminated when a non-compliant text line is encountered. And according to the parser, it was still within the List Block element created on line two. I needed to fix that next.
To address that, I changed up the code a bit to make sure that the List Block element stayed open when faced with a Blank Line element in this scenario. That is when that above code changed into:
```python
if parser_state.nested_list_start and adj_line_to_parse.strip():
    if (
        parser_state.token_document[-1].is_blank_line
    ):
        x_tokens, _ = parser_state.close_open_blocks_fn(
            parser_state,
            include_lists=True,
        )
        parser_state.token_document.extend(x_tokens)
```
This moved the closing of the List Block element down to the last line in the document, where it belonged. Now, when that document was processed, the List Block element was only closed on the last line of the document. That was great.
The only problem left after that change was that the Blank Line tokens were appearing inside of the List Block element instead of outside of it. As the concept of a token stream is not present in the GFM specification, I had to read between the lines to answer this question: do the Blank Line tokens belong before the List Block element or after it?
I thought about this long and hard, but in the end, I did not feel like it made any sense for the List Block element to contain those Blank Line tokens. Looking at the specification, I read how a list’s looseness determined how it was presented in the final HTML output. And when I looked at the HTML output for those test scenarios, I saw output that did not indicate that the looseness was being impacted by those Blank Line tokens.
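For readers unfamiliar with looseness: per the specification, a loose list wraps each item’s content in paragraph tags in the final HTML, while a tight list does not. A minimal sketch of that rendering difference (illustrative only, not PyMarkdown’s actual HTML generator):

```python
def render_list(items, is_loose):
    """Render list items; a loose list wraps each item's text in <p> tags."""
    body = "".join(
        f"<li><p>{text}</p></li>" if is_loose else f"<li>{text}</li>"
        for text in items
    )
    return f"<ul>{body}</ul>"

# A tight list keeps the item text bare; a loose one adds paragraphs.
tight_html = render_list(["quote"], False)   # <ul><li>quote</li></ul>
loose_html = render_list(["quote"], True)    # <ul><li><p>quote</p></li></ul>
```

The point of the observation in the text: since the generated HTML for those scenarios showed no change in looseness, the Blank Line tokens were not influencing the list, so keeping them inside the List Block element bought nothing.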
Given that observation, I changed the code once again, arriving at the following code:
```python
if parser_state.nested_list_start and adj_line_to_parse.strip():
    if (
        parser_state.token_document[-1].is_blank_line
    ):
        y_tokens = []
        while parser_state.token_document[-1].is_blank_line:
            y_tokens.append(parser_state.token_document[-1])
            del parser_state.token_document[-1]
        x_tokens, _ = parser_state.close_open_blocks_fn(
            parser_state,
            include_lists=True,
        )
        parser_state.token_document.extend(x_tokens)
        parser_state.token_document.extend(y_tokens)
```
With that in place, all the original tests were passing, but something was still bugging me. It was that last line. I knew I needed to do something about it.
That Last Line
With all the existing tests passing, I took a look at that last line in
the Markdown documents and had one question for myself: what if that line was
a valid List Block element continuation? With that, the
function was created. This was simply a variant of the main Markdown test scenario,
but with a final line that continued the list, rather than end it:
```
> start
> - quote
>
>   end
```
It took me about twenty minutes to get it right, but in the end, the code that had to change to accommodate that new scenario was small:
```python
if parser_state.nested_list_start and adj_line_to_parse.strip():
    start_index, _ = ParserHelper.extract_whitespace(
        adj_line_to_parse, 0
    )
    if (
        parser_state.token_document[-1].is_blank_line
        and (end_container_indices.block_index + start_index)
        < parser_state.nested_list_start.indent_level
    ):
```
It was now that I could look back at these scenarios and feel that the code was now maintainable. I was not really scared about changing the code before, but I was concerned. Now I was confident that the code was in a good place to allow any changes to be made.
Echoes of Stories Long Forgotten
I remember a story that someone told me a long time ago that went something like this:
A machine in a factory is not working. As much as the workers try, they cannot get it to work. After a lot of frustrating effort, one of the workers suggests to management that they give a shout to the old worker who, after working at the factory for 40 years, just retired earlier that year. With every other idea not working, the management of the factory eventually decides to give him a call.
Early the next day, the old worker shows up and is shown to the machine by the management. He looks and prods at the machine for a good hour before he turns to them saying “I am confident I can fix the machine, but I want 1 million dollars to do so. I know you need to talk about it, so let me know when you have made up your mind.”
Time goes by, and the management is frustrated. They ask the workers that are there why they cannot fix the machine. Those workers just shrug. “We did not work here for our entire lives, like he did! He knows that machine better than he knows his own kids!” is their response. Management is in a tight situation. They are losing one hundred thousand dollars for each day that the machine is not working. They need something in this situation to give, or they will soon be out of business.
A week later, the management looks at their numbers, calculating that they can minimize their losses if they can just get the old worker to fix the machine. They give the old worker a call, and he comes in the next morning. After they give him a cashier’s check for the money he asked for, he goes to the machine, takes out a small screwdriver, and makes three small adjustments to the machine in less than 30 seconds before standing up. Walking over to the power switch for the machine, he turns it on, and it begins to work. The machine works so well, the other workers could have sworn it had just been delivered from the factory.
Management is outraged. “We want that check back!” they say. “You conned us. We would have never paid one million dollars for less than one minute’s worth of work! We will sue you.”
The old man smiles back at them. “Please, go ahead. I never said that I was charging one million dollars for the work that I did. I was charging one million dollars for the knowledge required to do the work that I did! I am sure any sane judge would see it that way too!”
The moral of the story? Knowledge is power, and timing is everything.
Putting That To Work
The work that was done in the previous section was not done to directly fix any of the issues that I wanted to fix. It was done to allow me the latitude to make those fixes that I knew I needed to do later. Just like that old worker, I knew that if I wanted to clean up the nested container scenarios, I first needed to ensure that the parser was running cleanly. With what was there before I cleaned it up, I was confident that it would have caused me more effort to work around it than to fix it properly.
Basically, I believe that knowing that I needed to do that work and scheduling it right before I handled the issues I wanted to fix was a smart move. Did it take time? Yes. But did it save time in the long run? Undoubtedly.
Fixing Scenarios 270 and 271
Spending the time fixing the already passing scenario tests for nested containers also had a nice side effect of letting me be more familiar with that code and the resultant token streams. That helped with what I did next.
To set the stage for the rest of this section, what these tests were parsing were all variations on this piece of Markdown:
```
> 1. > Blockquote
continued here.
```
Nothing complicated, just nested containers, going back and forth between Block Quote elements, List Block elements, and back to Block Quote elements.
Finding The First Issue
When I started looking at the token stream for
Scenario 270, I quickly noticed that something was missing: the inner Block
Quote token was not there. Enabling debug mode, I quickly traced through the
__handle_block_quote_section function until I hit this part of that function
for the inner Block Quote element:
```python
(
    container_level_tokens,
    stack_bq_count,
    requeue_line_info,
) = BlockQuoteProcessor.__ensure_stack_at_level(
    parser_state,
    this_bq_count,
    stack_bq_count,
    extracted_whitespace,
    position_marker,
    original_start_index,
)
```
It was here that I noticed that the
this_bq_count variable was set to
Doing some digging, what I determined was that in making the parsing of
the Block Quote element easier in other cases, I was only keeping track
of consecutive cases of Block Quote elements. The first Block Quote element
and the Ordered List element were both recognized, but when it hit this
function, it thought the right number of Block Quote tokens were already in place.
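The difference between the two ways of counting can be sketched with plain lists standing in for the container stack; the names here are illustrative, not the parser’s real structures:

```python
def count_leading_block_quotes(stack):
    """Count only the consecutive block quotes at the start of the stack,
    stopping at the first non-block-quote container."""
    count = 0
    for element in stack:
        if element != "block-quote":
            break
        count += 1
    return count

def count_all_block_quotes(stack):
    """Count every open block quote, regardless of what sits between them."""
    return stack.count("block-quote")

# For "> 1. >", the open containers are a block quote, a list, and
# another block quote. Counting only the leading run sees one block
# quote; counting them all sees two, which is what the stack holds.
stack = ["block-quote", "list", "block-quote"]
# count_leading_block_quotes(stack) == 1
# count_all_block_quotes(stack) == 2
```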
Making It Right
It took a bit of effort to make these tests work right, but it
was completed within around ninety minutes. Because of the existing
logic, I knew that I needed to create a variable like container_start_bq_count,
set it at the start of processing,
and pass it into the Block Quote element handling. I felt that rewriting
a lot more code was not the best thing to do, so I just needed to work with
the information that I had. Given that scenario, I also felt that I could
adjust for it by adjusting the
this_bq_count variable to include the number
of any previous Block Quote elements:
```python
if container_start_bq_count:
    this_bq_count += container_start_bq_count
```
From there, things were looking better, but the extracted whitespace was off, adding more whitespace than was needed when creating the new Block Quote token. That also was a quick fix, adding this code:
```python
if container_start_bq_count:
    extracted_whitespace = extracted_whitespace[original_start_index:]
```
Finally, everything was looking good except for the other whitespace part of the token: the leading whitespace. But as with the previous fix, this one was easy to spot and easy to do, changing:
```python
found_bq_stack_token = parser_state.token_stack[stack_index]
found_bq_stack_token.matching_markdown_token.add_leading_spaces(
    removed_text
)
```
to:

```python
adjusted_removed_text = (
    removed_text[original_start_index:]
    if container_start_bq_count and original_start_index
    else removed_text
)
found_bq_stack_token = parser_state.token_stack[stack_index]
found_bq_stack_token.matching_markdown_token.add_leading_spaces(
    adjusted_removed_text
)
```
HTML Output and Markdown Output Changes?
While there were changes to both the HTML generator and the Markdown generator, both changes were small. It was getting the token stream to be correct that allowed those changes to remain small.
After double and triple checking the changes that I made for scenario test 270, I was not surprised to see that scenario test 271 was working as well. There was only a small difference between the two, and that difference was on the second line. Once the inner Block Quote element was properly inserted, the rest of both scenario tests just fell into place.
What Was My Experience So Far?
I am not sure where to place how I feel about these changes on a scale. I did not feel like I was making amends for leaving that kludge in there. But I also know that I wish I had dealt with that kludge a long time ago. If anything, I guess I was a bit sad that I did not take the time to fix that issue when I needed to, leaving it until now.
But that is how things often work with software development. You approach things with the best of intentions, and sometimes you must make hard choices. In this case, I did not make a bad choice, just not a pretty choice. It worked for what it needed to do, but if left in there, it would have incurred more cost in the long run.
From my point of view, my feelings aside, I believe I made the right choice at the time. These changes affect fewer than 8 tests out of 2700+ tests, which is a very small percentage. Should I have delayed the work I was doing just to fix those tests? While my feelings and ego may say no, in hindsight it looks like it was within the range of right things to do. It is made even more so by the fact that I understood this at the time I made that decision and added an item to the issues list to return to it.
This really was a case of taking a couple of steps back, to ensure I can take these forward steps without worrying about tripping over bad code. And that is a good thing!
What is Next?
Since I have started down the road of addressing these nesting issues, next week is going to be about continuing, and hopefully completing, that work. Stay tuned!
So what do you think? Did I miss something? Is any part unclear? Leave your comments below.