In my last article, I continued in my quest to reduce the size of the issues list. In this article, I split my time between adding to the scenario cases tables and dealing with items from the issues list.
I jokingly referred to this week as the week from hell. It was hellish in nature because of a bad cold that got into my sinuses and would not leave, no matter what I tried. As such, I felt that I got approximately half the work done that I had wanted to, so it seemed appropriate to talk about the work done during the two-week period that I had the sinus cold instead of my usual one-week period. Even though my brain was foggy with the sinus cold for a good solid two weeks, I was able to get some good work done, even if it was not at the pace that I am used to.
The big focus at this point in the project was on reducing the number of items on the issues list that dealt with List elements. Having taken a significant amount of time working on the leaf block elements and getting those items resolved, I was hoping to get a good chunk of the list issues dealt with. But I also knew that the impending American Thanksgiving holiday and a nasty sinus cold were going to slow me down. It was just a matter of being honest with myself about what I could accomplish during this period.
What Is the Audience for This Article?
While detailed more eloquently in this article, my goal for this technical article is to focus on the reasoning behind my solutions, rather than the solutions themselves. For a full record of the solutions presented in this article, please go to this project’s GitHub repository and consult the commits between 17 Nov 2020 and 29 Nov 2020.
Indented Code Blocks and List Blocks
Up to that point, I had a good variety of cases for each leaf block type in Markdown save for one: Indented Code Blocks. Hence, I had logged this issue:
- 256i tests, not computing indent properly if empty list and indented
I gave a good solid first try at getting them working, but in my mind, Indented Code Blocks were different enough that they posed additional difficulty. As such, I left them for last. The example that gave me the trouble was an additional test that I added as function test_list_blocks_256i, or function test_list_blocks_256ix as I renamed it. The Markdown for the example was simple: with the list start indented to the maximum of three leading spaces, the indentation of the text foo should have been enough to make it eligible for an Indented Code Block with its four leading spaces. Instead, it was just getting captured as part of the text for the List Item element.
Working Through It
Granted, my cold was raging in my head, and I was not thinking clearly, but eventually I looked at the tokens long enough that something stuck out at me. The start List Block token for line 1 was [olist(1,3):.:1:3: : ], which did not look weird to me at first. Looking at the end of the token, where I usually look, everything was fine. There were three leading spaces before the start List element, and there were three spaces registered for that token. Exactly as it should be! And it was a start Ordered List token with a single digit start number, so the indent_level for that list should be 3: 1 for the single digit, 1 for the list character ., and 1 for the whitespace after it. Check!
Then it dawned on me. While the three leading spaces were appearing in the token itself, they were not being accounted for in the indent_level. As such, when the parser got to the second line, the indent_level was set to 3, and it looked like that line was only indented by one character, not enough to start an Indented Code Block. After making some changes to pass the whitespace through from the first line, the indent_level was adjusted by the length of the extracted_whitespace variable, resulting in an indent_level of 6. As the four leading spaces on the second line were less than that value, the line could no longer be part of the List Item, and it was properly interpreted as an Indented Code Block element.
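As a rough sketch of that fix (the function name and signature here are hypothetical, not PyMarkdown's actual code), the idea is to fold the list's leading whitespace into the indent level used to judge later lines:

```python
# Hypothetical sketch of the fix, not the project's actual function: fold the
# list's leading whitespace into the indent level used for later lines.
def compute_list_indent_level(extracted_whitespace, list_start_text):
    # For "   1. ", list_start_text is "1. ": one digit, the "." marker,
    # and one following space, giving a base indent of 3.
    base_indent_level = len(list_start_text)
    # The fix: also account for the leading whitespace before the marker.
    return len(extracted_whitespace) + base_indent_level
```

With three leading spaces, compute_list_indent_level("   ", "1. ") yields 6, matching the corrected indent_level described above.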
After adding some additional variations to test and make sure that the change was the right change, I was happy to resolve this issue, and get some rest.
The Birth of Series M
Having documented this process before for the other series, I will not take the time to go through all the steps performed to move these scenario tests over. I will point out that for me, with a sinus cold that was not letting up, it was the perfect thing to work on. It was a lot of moving tests over two days, but it was slow, and it was methodical. More importantly, it had built-in error checking. A good thing to have when you are not 100% sure of how clearly you are thinking.
As this series was genuinely moving scenario tests over from their origin module, test_markdown_list_blocks.py, I did not expect any issues, and there were none. Thanks to some clarity in thinking when setting up this work, any errors that I did make during the process were caught and recovered from right away. Other than that, the entire process was a blur.
“Weird” List Contents
Mostly due to the sinus cold, which was finally starting to ease up, it took me another couple of days to get the next issue resolved. Mentally, I realized that I could either push myself hard and perhaps prolong the cold, or I could take more breaks and have that energy go towards resolving the cold. Whether it was the positive thinking or the natural course of the cold, I will never be sure. But by noon on Saturday, I was starting to feel better, and I started to tackle these issues:
- code span inside of a list
- multi-line link inside of a list
The first issue was easy. I started with something simple, adding the function test_list_blocks_extra_2a to test split paragraphs with the following Markdown:

```markdown
1. abc
   def
1. ghi
   jkl
1. three
```
From there, I made a small modification to test for Code Spans by using the following Markdown:

```markdown
1. `one`
1. ``one-A``
1. `two`
1. ``two-A``
```
With the Code Spans dealt with, I moved on to links, using Inline Link elements and splitting them between two lines at various points in the link itself. While not that interesting, it was a good solid scenario that I wanted to make sure was working:

```markdown
1. [test](/me "out")
1. [really test](/me "out")
1. three
```
Tracking Down the Issues
After coding those new tests, I started executing the tests and everything within the changing parts of the lists looked fine. However, on the third line of each example, when the next item of the base list was defined, some of the tests emitted their text surrounded by a Paragraph tag. As this relates to whether a List is considered loose, I took some time to poke around and debug through it.
Looking at the debug output, I realized that I had some issues with the function __reset_list_looseness in the transform_to_gfm.py module. In trying to be smart about locating the end of the relevant tokens belonging to a given list, I was going forward from the start List token looking for the matching end List token. The problem was that I was not being selective about which end List token I found, just that I found one.

A short while later, I had some changes coded up that kept track of a stack_count associated with the start List tokens and end List tokens that were seen. The start List tokens bumped the count up by one, and the end List tokens reduced the count by one. If the stack_count variable was ever zero, it meant that the algorithm had found the matching end List token, and it broke out of the loop.
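That matching logic can be sketched as follows; the Token class and flag names here are stand-ins for illustration, not the project's real token types:

```python
from dataclasses import dataclass

# Minimal stand-in for a parser token; not PyMarkdown's actual token class.
@dataclass
class Token:
    is_list_start: bool = False
    is_list_end: bool = False

def find_matching_list_end(tokens, start_index):
    """Walk forward from a start List token, tracking nesting depth."""
    stack_count = 0
    for index in range(start_index, len(tokens)):
        if tokens[index].is_list_start:
            stack_count += 1
        elif tokens[index].is_list_end:
            stack_count -= 1
            if stack_count == 0:
                return index  # the matching end List token
    return -1  # unbalanced token stream

# Nested lists: outer start, inner start, inner end, outer end.
tokens = [Token(is_list_start=True), Token(is_list_start=True),
          Token(is_list_end=True), Token(is_list_end=True)]
```

Without the count, a forward scan from the outer start token would stop at the first end token it saw, which belongs to the inner list.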
After I finished executing the tests and verifying the results, it was clear to me that I had found and remedied the issue. While it was not a big issue to fix, it was a sneaky one to find, and I was happy to resolve it.
Sometimes It Is Not Obvious
Feeling good from my success in solving the last issue, and with the sinus cold allowing, I started to work on another issue:
- 242 with variations on where the blank lines are
While I remembered adding this item to the issues list, I could not remember anything about the reason that made me add it. As I was not aware of the reasoning behind the inclusion of this item in the list, and I could not figure it out from the item itself, I decided to make copies of function test_list_blocks_242 and experiment with the positioning and number of blank lines within the document. What I found surprised me.

This was a time when I was very happy that I had taken the time to add consistency checks, as they caught the problem right away, where the output HTML comparison tests did not. The problem? In cases where the blank line handling in the tokenized_markdown.py module was removing blank lines to be re-added to the document, it was doing so in reverse order. That reverse order meant that in cases with multiple blank lines, the latest blank line would be added first, messing up the ordering in the document.
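As a generic illustration of that kind of ordering bug (a sketch, not the project's actual requeue code), re-adding removed lines by popping from the end of a list reverses them:

```python
# Generic sketch of the ordering bug described above, not PyMarkdown's code.
removed_blank_lines = ["blank-1", "blank-2", "blank-3"]

# Buggy requeue: pop() takes from the end, so "blank-3" is re-added first.
buggy = []
work = list(removed_blank_lines)
while work:
    buggy.append(work.pop())

# Fixed requeue: consume from the front to preserve document order.
fixed = []
work = list(removed_blank_lines)
while work:
    fixed.append(work.pop(0))
```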
Once again, a quick fix, and with a couple of iterations of testing to make sure other functions were not impacted by that side effect (and mitigating those), things were taken care of. Another issue solved and resolved.
Variation on Example 297
I had some energy left from fighting my cold, and some time left before the American Thanksgiving holiday started, so I figured I could work on something light. Hopefully picking something easy, I picked this task off the list:
- 296 and 297 - added in case for LRD, but need to test:
  - other types of blocks
  - block, blank, then multiple blocks
After a quick look at the Markdown for example 297:

```markdown
- a
- b

  [ref]: /url
- d
```
I had a good feeling that I would be able to deal with this issue in a couple of hours or less. To deal with it properly, I quickly created variations on the scenario test for example 297 to cover those different scenarios. Instead of a Link Reference Definition in each variation, I used an Atx Heading element, a SetExt Heading element, an HTML Block element, an Indented Code Block element, and a Fenced Code Block element. Just for good measure, I added an extra scenario test that had a Fenced Code Block element followed by an HTML Block element.

After adding those scenario tests and executing them, I was greeted by the good news that the tokens and the output HTML matched what was expected of each test. The only issue was in the Markdown generator, where the original Markdown was being reconstructed from the tokens. After a quick bit of debugging around the processing of the HTML Block token, a small change was needed in the function __merge_with_container_data to allow the remove_trailing_newline variable to be set if the block ends with a newline character. With those small changes in place, the newly added scenarios worked fine, generating the correct Markdown to match the original documents.
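The described change can be sketched roughly as follows; the real __merge_with_container_data takes more parameters inside PyMarkdown's Markdown generator, so this is only an assumed, simplified shape:

```python
# Simplified, hypothetical shape of the described change; the real function
# takes more arguments and does more work inside the Markdown generator.
def merge_with_container_data(block_text):
    # Set the flag whenever the reconstructed block ends with a newline,
    # so the generator does not emit a doubled line ending.
    remove_trailing_newline = block_text.endswith("\n")
    if remove_trailing_newline:
        block_text = block_text[:-1]
    return block_text, remove_trailing_newline
```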
Fun with List Elements
I do not have any notes on why I picked this task:
- test_link_reference_definitions_185f & test_link_reference_definitions_183f
but it was a fairly interesting task to pick. I had previously disabled both tests because I was not able to get them working. And while it was not much to go on, I vaguely remembered working on both of these items for at least a couple of hours each without making much progress. As I said, this was going to be interesting.
The good news was that, after a small amount of debugging, I was convinced that I was looking at two separate issues. While I did not have any concrete information, I had a strong feeling that the test_link_reference_definitions_183f failures were due to the Block Quote element in the Markdown, while the test_link_reference_definitions_185f function was simply an issue of getting the Markdown generator adjusted properly.
Debugging the Issues
Picking what I thought was the easier issue to solve, I decided to start working on the problem with the handling of the Block Quote element. This happened to be a good choice, as some simple debugging showed me that the issue was a simple one of not closing off an active List before starting the Block Quote element. I quickly fixed that by adding a simple loop in the __ensure_stack_at_level function of the BlockQuoteProcessor class to ensure that any active List is closed off before the Block Quote itself is started.
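The shape of that loop can be sketched as follows; the stack entries and helper names here are stand-ins, not PyMarkdown's real container-stack types:

```python
# Hedged sketch of the fix described above; stack entries and callbacks are
# stand-ins for the project's real container stack and close logic.
def ensure_stack_at_level(container_stack, close_open_list):
    # Close off any still-open List elements before the Block Quote starts.
    while container_stack and container_stack[-1] == "list":
        close_open_list(container_stack.pop())
    container_stack.append("block-quote")
```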
With that part of the issue fixed, my focus shifted to ensuring that the Markdown was being properly generated. After a couple of hours of debugging, I finally figured out that the failures were being caused when the already transformed data ended with a newline character and the next token to transform was either a normal text token or one of the SpecialTextToken related tokens: Link, Image, and Emphasis tokens. In each case, these tokens somehow interrupted the accumulated text, leaving it ending with a newline character. To properly add any more text to that accumulated text, the new data to be added needs to be processed differently to accommodate that break.
Like one of the previous sections, the first issue was relatively quick to fix, while the second issue took hours. Working through the debugging with a sinus cold was a bit of a slog, but it was a good issue to take off the list.
Bulking Up the Series M Tests
It was Saturday afternoon, and I had finished doing some work around my house. While I was a bit fatigued, I felt that the sinus cold was letting up enough that I could spend some weekend time making progress on adding more depth to the Series M scenarios. To do that, I started by placing each group of the Series M scenario tests into its own table. Having over 45 tests at that point, that separation was equal parts necessity (for my sanity) and readability (for anyone looking at the tables).
I added 60 scenario tests to the series, 10 in each of its six groups. While there were small variations in each group of tests, the underlying tests were essentially the same 10 tests added each time. And just as I have mentioned before, the process was a long one: adding the rough form of each specific test to the table, adding a scenario test to match that rough form, and then dialing in the example, the token list, and the final form of the entry in the table. As usual, it was a long, grueling process.
Powering Through the Scenarios
The bad news was that I did not get everything done. After working hard to get all the tests passing, there were 35 tests that, for one reason or another, were not passing. Between the scope of the changes and the last vestiges of my sinus cold, I did not think twice about marking those failed tests with @pytest.mark.skip, to be handled in the following week. I had a feeling that this task was more than I could handle in the time allotted with the energy I had, and I was right. Regardless, I had 25 new scenario tests passing where I did not have them before.
The good news was that in those 25 new scenario tests, I only found two issues that I needed to fix and was able to fix. The most obvious one was in the case of two empty start List elements, nested together on the same line. Following through the code and the log files for that scenario test, it was immediately obvious to me that assigning the first element of the relevant function's result to _ (in essence, throwing it away) was the wrong thing to do. Assigning that first element to a properly named variable, and adding a line to make use of it, fixed that problem. One down, one to go.
Digging Deep into The Issue
The other issue that I found was in dealing with empty list starts in the __pre_list function. In one of the first iterations of this function, I added code in the True evaluation of if after_marker_ws_index == len(line_to_parse): to handle those empty list items. After a lot of work to come up with the correct formula, I had settled on the code in that function, part of it for empty list items and the other part for non-empty list items. And that worked well.
That is, until I started looking at it in light of the new examples added during these tasks. Looking at why scenario tests with empty list items were failing, I kept coming back to this __pre_list function. And with each debugging session that brought me back to that function, the surer I was that I had missed something pivotal.
Given that feeling, I spent a couple of hours taking that if statement apart and putting it back together. Ultimately, I left the True case of the if statement as it was, but I changed its condition to after_marker_ws_index == len(line_to_parse) and ws_after_marker. As for the cases where ws_after_marker was 0, I added the following code to the False case to handle them:

```python
if after_marker_ws_index == len(line_to_parse) and ws_after_marker == 0:
    ws_after_marker += 1
```
After my experimentation, it just seemed like the right thing to do. I did find other solutions, but they were far more convoluted than this one. This one was simple. Instead of doing a complicated calculation and having lots of if statements, this just added a slight adjustment to the variable ws_after_marker, after which the rest of the False part of the if statement was executed without change.
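Putting the two branches together, the reworked logic can be sketched roughly like this; the variable meanings are inferred from the description above, not taken from the project's source:

```python
def adjust_ws_after_marker(line_to_parse, after_marker_ws_index, ws_after_marker):
    """Sketch of the reworked branch logic, inferred from the description.

    after_marker_ws_index: index just past the list marker and its whitespace
    ws_after_marker: count of whitespace characters found after the marker
    """
    if after_marker_ws_index == len(line_to_parse) and ws_after_marker:
        # True case: empty list item with whitespace after the marker,
        # left unchanged from the original implementation.
        pass
    else:
        # False case: for an empty list item with no whitespace after the
        # marker, nudge the count so the non-empty calculation lines up.
        if after_marker_ws_index == len(line_to_parse) and ws_after_marker == 0:
            ws_after_marker += 1
        # ...rest of the non-empty handling proceeds unchanged...
    return ws_after_marker
```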
While the first solution with the tokens took less than half an hour to code and test, when all was said and done, more than five hours had been spent on the task. But even though it took a while, I was pleased with the result, and I am confident that the time was well spent in upgrading those solutions.
What Was My Experience So Far?
In the beginning of this project, having those 35 scenario tests marked as skipped would have hung heavily over me. But at this stage of the project, I recognized that skipping was a necessary tool at my disposal. Instead of waiting until all 60 new scenario tests were working 100%, it was better to chip away at those tests, committing changes to the repository as I went. Having worked on this project for almost a year at this point, I knew there were going to be things that ended up running away from me. I also knew that while I try to break bigger issues into smaller ones, there are times when that is not possible, for one reason or another. In this case, I was concerned that if I did not add all 60 scenarios at once, I would miss one, and it would be hard to detect. It just meant I would have to adjust.
And that, for me, both in my professional life and with this project, is the big takeaway that I have learned in the last couple of years. It is extremely important to set expectations at a healthy level that can be sustained. Too little, and you can be viewed as taking it easy. Too much, and you may be expected to sustain that level of output for months or years. I have found great success in clearly stating my goals and how I plan to achieve them, and in resetting expectations on a weekly or daily basis. It just makes sense to me. Well, it does now. That was not always the case.
From my point of view, I could not see a clear way to break up that big issue without sacrificing the quality in the Series M group changes. So, I reset my own expectations for resolving that issue, promising myself that I would address each of those skipped tests in the next week. And I was at peace with my decision to do that.
What is Next?
Leaving 35 scenario tests marked as skipped because I could not figure them out did not sit well with me, so I made them the priority for the following week.
So what do you think? Did I miss something? Is any part unclear? Leave your comments below.