Summary

In my last article, I took a break to look back over the last year’s work and what I learned on my path to this point. In this article, I look at some refactoring that I have wanted to do for a while, and why I was able to accomplish those changes successfully.

Introduction

Refactoring can often be a tricky subject to bring up with software developers. Depending on the specific goals and experiences of a given developer, the phrase “Can we talk about refactoring your code?” can be met with enthusiasm, with judgement, or with something in between. In most situations, developers have been trained to write code that is just good enough to meet the set of requirements before them, and no more. While this satisfies the immediate needs of the project, it can often cause long-term problems with maintenance and support. As for the more personal reactions to the phrase, I find that they occur more rarely these days, but they still occur. If someone hears that phrase, it is possible that they think you are judging their code to be incorrect or not “good enough”. In those cases, you want to try to remove that judgement aspect from the conversation and move on.

Luckily for me, I am the only one on this project, and I have a very enthusiastic response to that phrase. For me, if someone asks that question, they are seeing the same problem that I solved, but in a different light. I see it as a chance to view the given project from a different viewpoint, engaging in a good discussion along the way. From there, if it is agreed that I need to work on the refactoring, it is just a question of making sure that I have solid requirements for any refactoring and a solid block of time to work on it.

As luck would have it, I control both for the PyMarkdown project, so let’s go!

What Is the Audience for This Article?

While detailed more eloquently in this article, my goal for this technical article is to focus on the reasoning behind my solutions, rather than the solutions themselves. For a full record of the solutions presented in this article, please go to this project’s GitHub repository and consult the commits between 27 Jan 2021 and 02 Feb 2021.

Refactoring

The most concise and accurate definition of what I believe refactoring to be comes from the website Refactoring.Com, which defines refactoring as:

verb: to restructure software by applying a series of refactorings without changing its observable behavior.

For me, that definition hits all the salient points that I believe need to be in the definition. The first point is “to restructure software”. Refactoring is not about adding any new features to the software project; it is about improving some aspect of the software in question. The second point is that it is “a series of refactorings”. Very rarely in my years of experience have I found that a refactoring is as simple as one small change. Usually, it involves a number of small changes that bring about the stated requirements of the refactoring, each one doing its part to meet the goal.

The third point is “without changing its observable behavior”, and I mostly agree with that statement. In my eyes, to be more correct, I would add the phrase “unless directed to” to the end of that statement. It is normally correct that any refactoring should not change the observable behavior. However, I would argue that there are times where the purpose of the refactoring is to change that observable behavior along a very narrow and specific axis. A good example of this is performance refactoring. In that case, I want every other observable part of the system to remain the same, but I want the observed time taken to process the data to be reduced.

But with my little caveat at the end, I think it is a solid definition of refactoring.

How I Work

It is my usual practice to start writing a new feature or resolving an issue by having a group of scenario tests that clearly define the problem. From there, I write the code as it comes to me, iterating over it in a natural way until it passes the scenario tests that I am coding against. While this may not be the way others work, I find that it helps me keep the big concepts broken up into smaller concepts that I can more deftly handle. Once I am finished with the code and all the scenario tests are passing, I usually look locally and see if there are any simple refactorings that I can do. If I have something specific in mind, I may look globally to see if I can apply whatever I just coded on a more global scale, but in many cases, I leave that for later.

This process is very useful to me because I am very insistent on keeping my code coverage numbers above the 95% mark, and in many cases above the 99% mark. While this can often lead to some tests that are frustrating to write, the reward for that pain is in the confidence with which I can make changes. Due to my high code coverage percentage, I can make changes in any given piece of code, knowing that if there are any unintended side effects, the scenario tests providing that high code coverage will detect them.

For me, that is a plus that allows me to perform certain tasks, such as refactoring, with a very high degree of confidence.

Cleaning Up Requeuing

One of the first things that I wanted to clean up was my use of the parameters lines_to_requeue and force_ignore_first_as_lrd. A long time ago (in project years), I needed to add code throughout the code base to handle the requeuing needed by Link Reference Definition parsing. That concept of requeuing was introduced in April 2020, along with an explanation of why Link Reference Definition parsing needs the ability to rewind the stack. At the time, I wasn’t sure if those two parameters were the only ones that I would need for any needed requeues, or if there were other elements that would require requeues. After more than nine months with no changes, I decided that those two parameters were the first target for refactoring.

When I looked at that code, two possible refactorings came to mind: reducing the number of parameters to one and raising an exception instead of using return codes. Since the second refactoring would make use of the work done for the first, it made sense to start with the first. But replacing those two parameters with an instance of a new class would require some work.

Phase 1: Search and Replace

The first phase of the refactoring was dead simple: locate any instances of the variables lines_to_requeue and force_ignore_first_as_lrd and place them within a new instance of the newly created RequeueLineInfo class. This took a while to complete, but it was relatively easy to do. In each case, I started with a call to the process_link_reference_definition function and worked backwards through the code. As I worked backwards from that function, I changed the calling function’s two return values to a new single instance of the RequeueLineInfo class. To ensure that there were no external changes other than that, I immediately dereferenced both member variables as local variables. Because that practice kept any changes localized, I was able to execute the scenario tests after each step to verify that the current set of changes was correct.
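
Though the class itself is not excerpted in this article, it is tiny. Here is a minimal sketch of its shape, assuming only that it bundles the two values together; the details in the repository may differ:

    class RequeueLineInfo:
        """Bundle the information needed to requeue lines after Link
        Reference Definition parsing decides to rewind the stack."""

        def __init__(self, lines_to_requeue=None, force_ignore_first_as_lrd=False):
            self.lines_to_requeue = lines_to_requeue if lines_to_requeue else []
            self.force_ignore_first_as_lrd = force_ignore_first_as_lrd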

The big issues that I ran into with these changes were the blank line function handle_blank_line_fn and the close blocks function close_open_blocks_fn. As both functions can terminate a Link Reference Definition, both had to be changed to return a single value instead of two. While the handle_blank_line_fn function only required one change, the close_open_blocks_fn function had to be changed at 22 separate call sites, each change following the pattern mentioned above. It was a handful, but it was completed without incident. Well, without incident after I corrected some typos.

A good example of these changes is the set of changes performed to the __close_required_lists_after_start function. Before this change, the function’s code around the call to the close_open_blocks_fn function looked like this:

        (
            container_level_tokens,
            lines_to_requeue,
            force_ignore_first_as_lrd,
        ) = parser_state.close_open_blocks_fn(
            parser_state,
            until_this_index=last_list_index + 1,
            caller_can_handle_requeue=True,
        )
        if lines_to_requeue:

After the change, this code was changed to:

        (
            container_level_tokens,
            requeue_line_info,
        ) = parser_state.close_open_blocks_fn(
            parser_state,
            until_this_index=last_list_index + 1,
            caller_can_handle_requeue=True,
        )
        if requeue_line_info and requeue_line_info.lines_to_requeue:
            lines_to_requeue = requeue_line_info.lines_to_requeue
            force_ignore_first_as_lrd = requeue_line_info.force_ignore_first_as_lrd

The pivotal thing to notice here is that while the function was changed to handle the single requeue_line_info return value instead of the two return values lines_to_requeue and force_ignore_first_as_lrd, those changes are completely localized. That isolation of the change is what helped make this a great refactoring.

Phase 2: Reduce the Instances

With the scenario tests still verifying that everything was working properly, I started to work on reducing the number of new instances of the RequeueLineInfo class. While isolating the changes was a good priority for Phase 1, it resulted in a lot of extra overhead whenever those new instances were returned from a function. For this phase, the focus was on identifying paths where that dereferencing was not necessary, replacing them with the returned instance of the RequeueLineInfo class.

Following along with the example of the __close_required_lists_after_start function, that function was again modified. Inside of that function, any references to the variable lines_to_requeue were replaced with a reference to requeue_line_info.lines_to_requeue, with a similar change being made for the force_ignore_first_as_lrd variable. At the end of the function, instead of returning:

    return None, None, lines_to_requeue, force_ignore_first_as_lrd

the following was returned:

    return None, None, requeue_line_info

At that point, any function that called the modified function was located and switched to use the new single value instead of the old pair of values. This was repeated until the function to be modified was the main parse_line_for_container_blocks function.

As I executed the scenario tests along the way, only two issues popped up. The first issue had to do with requeue_line_info being None. In those cases, I had to add some if requeue_line_info: statements around sensitive code, ensuring that it would not fail. Most of those if statements were later removed as I refactored my way back up towards the main function. The second issue occurred in functions that had multiple paths requiring the changes. In those cases, I simply noted down the names of the functions as I went. When I finished all the “easy” paths, I went back to that list and started checking items off one-by-one.
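
As a hypothetical sketch of that guard pattern (the helper function name here is invented, but the shape matches the changes I was making):

        new_tokens, requeue_line_info = __hypothetical_helper_fn(parser_state)
        if requeue_line_info and requeue_line_info.lines_to_requeue:
            # a requeue was requested further down; hand the instance back up
            # the call chain untouched instead of dereferencing it here
            return new_tokens, requeue_line_info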

By the end of this phase, there were only a few handfuls of new instances being created. Cleaning those up was my next priority.

Phase 3: And Then There Was One

With the above work accomplished, there was only a little bit of cleanup left to do. My goal was to get from the few new instances that still existed down to one new instance. Ideally, the process_link_reference_definition function would be the only function that needed to create a new instance, and I wanted to make that so.

The new instances that remained were usually in places that had already been transformed before this refactoring began. These were usually functions like the parse_line_for_container_blocks function, which had been one of my test functions for this refactoring. In those cases, I simply repeated the process that I used in Phase 2 to remove those extra instances. In cleaning up those instances, there were a few cases where a “non-requeue” was being returned in a requeue_line_info instance, accomplished by setting the lines_to_requeue member variable to None. While these were not difficult to deal with, they did increase the challenge factor for the refactoring.

The Result

At each step of each phase, I was able to execute the scenario tests, ensuring that the changes were not altering the output of the parser. This was pivotal. With 238 additions and 271 deletions over 11 changed files, I would not have noticed any bad effects from some of these changes without those scenario tests. This refactoring was also important in building my confidence to make other refactorings. Because this was not a simple refactoring, I knew that completing it meant many others were not only possible, but possible with confidence that they could be accomplished.

What about raising exceptions instead of returning the instances? After having done this work, I had a high degree of confidence that there would be issues with that approach. The main issue that I saw was that a few functions recursively call the parse_line_for_container_blocks function. While I think that it may be possible, I wanted to focus on refactorings that I believed would positively impact the code with a high probability of success. At the time that I wrapped the refactoring up, I decided that raising an exception with the information fell below my definition of “a high probability of success”.
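
For completeness, here is a rough sketch of what that exception-based approach might have looked like; nothing like this exists in the code base, and the class name is invented:

    class RequeueLineInfoError(Exception):
        """Hypothetical: carry the requeue information up the call stack."""

        def __init__(self, requeue_line_info):
            super().__init__()
            self.requeue_line_info = requeue_line_info

The recursion is what makes this risky: an exception raised several recursive frames down would need to be caught and re-examined at each level, trading one form of bookkeeping for another.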

Bolstering The Consistency Checks

As I scanned through items in the issues list, there were a couple of items that I thought would be easy to either verify or complete. The first one was about confirming that the rehydrate_index was being verified in the consistency checks. Looking into that one, I was pleased to find that there was already a check in the Markdown transformer. Looking at the verify_line_and_column_numbers.py module, I did not find the same check, but adding it was very easy. Similarly, confirming the presence of the same types of checks for the leading_space_index was just as easy. However, in this case, all the needed checks were already in place, so no code changes were required.
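
To give a feel for what these checks look like, here is an illustrative assertion; the variable names are invented, but the consistency checks boil down to statements of this shape:

        # illustrative only: confirm that rehydration consumed exactly the
        # expected number of whitespace lines for this token
        assert (
            actual_rehydrate_index == expected_rehydrate_index
        ), "rehydrate_index out of sync with the rehydrated text"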

Validating Inline Tokens

The next item was not as easy, but it was close to it. For some reason that is now lost to me, I had a suspicion that I had a blind spot with the inline tokens at the end of a Text block. If I had to guess, I was worried that I did not have at least one example of every Inline token and end Inline token at the end of a Text block, meaning there was a possibility that one of those calculations was off.

While I was pretty sure that I had since covered that item, I wanted to make sure that I had concrete proof of it. Looking around the scenario test groups, I was happy to find that the Paragraph Series D tests were constructed to have only a single inline element in each paragraph. By that very definition, each of the inline elements used in those tests was both the first and the last token in its paragraph. After verifying that each inline token was present at least once, I resolved that item.

A Quick Refactor

Before moving on to some more substantial refactorings, I wanted to get a bit more practice in to boost my confidence. To that end, I started by taking the count_of_block_quotes_on_stack function from the BlockQuoteProcessor class and moving it to the ParserHelper class. It just did not feel right in the BlockQuoteProcessor class, and this took care of that problem.

Once that task was accomplished, I started work on a new related function: find_last_block_quote_on_stack. The purpose of the function itself was simple: start at the end of the token_stack list and work backwards until a Block Quote token is encountered. I was doing this manually in four different places, and this change simplified the code in those four locations.

    def find_last_block_quote_on_stack(self):
        # walk backwards from the end of the stack until either a Block
        # Quote token or the Document token at the bottom is encountered
        last_stack_index = len(self.token_stack) - 1
        while not self.token_stack[last_stack_index].is_document:
            if self.token_stack[last_stack_index].is_block_quote:
                break
            last_stack_index -= 1
        return last_stack_index

Once that was completed, I also created variants of that function: the find_last_container_on_stack function for finding the last container block on the stack (which was useful) and the find_last_list_block_on_stack function (which was not). While these refactorings did not have the impact that I had hoped for, each one still left the code more readable by describing what it was doing in its function name.
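
As a sketch of what the useful variant looks like, assuming it mirrors the block quote version with a broader test (the exact predicate in the repository may differ):

    def find_last_container_on_stack(self):
        # identical walk to find_last_block_quote_on_stack, but stopping
        # at either kind of container block instead of only block quotes
        last_stack_index = len(self.token_stack) - 1
        while not self.token_stack[last_stack_index].is_document:
            if (
                self.token_stack[last_stack_index].is_block_quote
                or self.token_stack[last_stack_index].is_list
            ):
                break
            last_stack_index -= 1
        return last_stack_index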

Doing More Of A Heavy Lift

After completing the above tasks, I felt sufficiently confident to attempt some more interesting refactorings. These refactorings were only interesting in that the code they replaced was all over the place, usually in calculations that would have to be individually examined before each refactoring. But, as this group of refactorings is one that I have noted in the issues list, I needed to push through them. On top of that, with my recent successes, it just felt like the right time to deal with those issues.

On the surface, none of the new functions created for these refactorings are special. Each one does a single, very basic thing, but it does it in exactly one place. From visiting different parts of the project’s code base, I had a loosely organized picture of which patterns were common, and I started looking for them to confirm that they were indeed common. While I talk about the five patterns that I noticed here, there were also two other patterns that I chose not to address, as I did not see a common shape in them. While I do believe that refactoring those patterns to clean up the code would be useful, I do not believe that a refactoring of either one of those two patterns would substantially benefit the code base. At least not yet.

count_newlines_in_text

This function was already in the ParserHelper class, but it was not widely used outside of the parser code itself. In particular, every instance of this pattern was found in the consistency checks. In each case, a pattern like:

    inline_height = len(resolved_text.split(ParserHelper.newline_character)) - 1

was replaced with:

    inline_height = ParserHelper.count_newlines_in_text(resolved_text)

It just made sense to replace the old code with the function call. The function was already in the ParserHelper class and it was already battle tested. In addition, by using the function, the function itself could be tweaked to be more performant without having to change the calling code.
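
The function itself is a one-liner. Here is a minimal sketch of a plausible implementation, equivalent to the split-based pattern it replaces:

    @staticmethod
    def count_newlines_in_text(text_to_examine):
        # counting the newline characters directly avoids building the
        # intermediate list that the split-based pattern creates
        return text_to_examine.count(ParserHelper.newline_character)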

calculate_deltas

Always near the top of my “refactor list” was a new function to calculate the deltas of the line number and column number for a given text string. I already knew that I had copied this logic in various forms into at least eight locations, and it just seemed bad form not to refactor it. While this code was easy to use once tested, there were a number of small tunings that could make it more performant once its various implementations were gathered together into a function with one single implementation.

The essence of this new function is the logic to ensure that the line/column number pair is updated to properly account for any element that was just parsed. While the case for updating a single line element is trivial, there are a few steps that must be followed to ensure that a multiple line element’s position is updated properly. The change in the line number is the simple part, being a count of the number of newline characters in the element’s source text. The column is trickier. Once any special characters have been removed, the new column number is the number of characters that occur after the last newline character. With its little nuances, it just seemed right to do this properly once.

    @staticmethod
    def calculate_deltas(text_to_analyze):

        delta_line_number = 0
        delta_column_number = 0
        if ParserHelper.newline_character in text_to_analyze:
            # the line number advances by the number of newline characters
            split_raw_tag = text_to_analyze.split(ParserHelper.newline_character)
            delta_line_number += len(split_raw_tag) - 1

            # the new column number is based on the text that follows the
            # last newline, once any special characters have been removed
            last_element = split_raw_tag[-1]
            last_element = ParserHelper.resolve_replacement_markers_from_text(
                last_element
            )
            last_element = ParserHelper.remove_escapes_from_text(last_element)
            length_of_last_elements = len(last_element)

            # the negative value signals a new column number, not an offset
            delta_column_number = -(length_of_last_elements + 1)
        else:
            delta_column_number = len(text_to_analyze)
        return delta_line_number, delta_column_number

I picked up the code from one of the three places where I found it and was able to easily create the calculate_deltas function. After replacing the code in the original function that I copied it from, I found it just as easy to replace the other two instances of that code. To make its use simpler, I did some extra work to massage the input to include surrounding characters in the calculation, avoiding extra math. For example, in the handle_angle_brackets function, instead of a more complex calculation of the column number, I simply adjusted the between_brackets variable passed to the calculate_deltas function to include the actual angle brackets that were used.
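
As a rough illustration of that adjustment (the surrounding code here is invented, but between_brackets comes from the description above):

        # include the angle brackets in the measured text so calculate_deltas
        # accounts for them, avoiding a separate column adjustment afterwards
        delta_line, delta_column = ParserHelper.calculate_deltas(
            "<" + between_brackets + ">"
        )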

calculate_last_line

This is one that I thought I had in more places than I actually found when I started looking. I thought that I was splitting a line in more than two places, using the last line to determine what was left on the line. My best guess is that most of those cases were removed when I condensed most of that logic into the calculate_deltas function.

    @staticmethod
    def calculate_last_line(text_string):

        # the last line is whatever follows the final newline character
        split_label = text_string.split(ParserHelper.newline_character)
        return split_label[-1]

But even so, clearly spelling out that the code was working with the last line of text in a multiple line string was worth it to me!

recombine_string_with_whitespace

The last refactoring that I wanted to get out of the way was essentially a Swiss Army Knife function that I could use to recombine a string with its leading whitespace. In each case, the pattern started the same way: split the text, split the whitespace. It was the next part, piecing the whitespace and the text back together once split, where the various call sites varied in their approach. But, taking my time, I was able to assemble a single function with five default parameters that worked in each of the twelve instances where that logic appeared.

    @staticmethod
    def recombine_string_with_whitespace(
        text_string,
        whitespace_string,
        start_index=0,
        add_replace_marker_if_empty=False,
        post_increment_index=False,
        start_text_index=1,
        add_whitespace_after=False,
    ):
        split_text_string = text_string.split(ParserHelper.newline_character)
        split_whitespace_string = whitespace_string.split(
            ParserHelper.newline_character
        )
        for i in range(start_text_index, len(split_text_string)):
            # the whitespace index can advance before or after it is used
            if not post_increment_index:
                start_index += 1
            ew_part = split_whitespace_string[start_index]
            if ew_part and add_replace_marker_if_empty:
                # mark the whitespace as being replaced with nothing, so it
                # survives for rehydration without appearing in the output
                ew_part = ParserHelper.create_replace_with_nothing_marker(ew_part)
            # reattach the whitespace before or after the text, as requested
            if add_whitespace_after:
                split_text_string[i] = split_text_string[i] + ew_part
            else:
                split_text_string[i] = ew_part + split_text_string[i]
            if post_increment_index:
                start_index += 1
        return ParserHelper.newline_character.join(split_text_string), start_index

It looks a bit ugly because of the optional parameters, but it is basically simple: given a processed string with its leading spaces removed and a separate string containing those removed leading spaces, recombine them into the original string. All the “ugliness” is in handling the various combinations and twists needed to accommodate each use of this function.
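
To make the intended usage concrete, here is a hypothetical call site; the variable names are mine:

        # reattach the whitespace that was stripped from each line after the
        # first, advancing the index into the split whitespace as we go
        token_text, rehydrate_index = ParserHelper.recombine_string_with_whitespace(
            token_text, extracted_whitespace, start_index=rehydrate_index
        )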

calculate_next_leading_space_part

Unlike the other refactorings in this section, this refactoring was added to the ContainerMarkdownToken class. One pattern that I found was that I kept copying and pasting code to deal with Block Quote tokens, specifically to determine which part of the leading_spaces property of a Block Quote token was current. Furthermore, in multiple cases, if I used that leading space, I also needed to ensure that the leading_text_index member variable was incremented to reflect that usage.

Hence, the calculate_next_leading_space_part function was created:

    def calculate_next_leading_space_part(self, increment_index=True):

        # leading_spaces holds one entry per line; return the current one
        # and optionally advance the index to reflect that it has been used
        split_leading_spaces = self.leading_spaces.split(ParserHelper.newline_character)
        leading_text = split_leading_spaces[self.leading_text_index]
        if increment_index:
            self.leading_text_index += 1
        return leading_text
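
Usage then becomes a one-liner at each of the former copy-and-paste sites, along these hypothetical lines:

        # consume the current leading space for this block quote, bumping
        # leading_text_index so the next request returns the following one
        leading_space = block_quote_token.calculate_next_leading_space_part()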

And It Logically Follows

After completing that last set of refactorings, I took a bit of time to look around the code and see if anything jumped out at me. The first thing that I noticed was that the recombine_string_with_whitespace function was already set up to handle both single line and multiple line strings. As such, checking whether the string being passed into the recombine_string_with_whitespace function contained a newline character had no benefit. Hence, I simplified a few if statements that occurred before calls to recombine_string_with_whitespace.

Once that was done, I also noticed that the __handle_extracted_paragraph_whitespace function was almost a duplicate of the recombine_string_with_whitespace function. This meant that I could rewrite the __rehydrate_inline_code_span function to use the recombine_string_with_whitespace function instead. The only other reference to the __handle_extracted_paragraph_whitespace function was in the __rehydrate_inline_raw_html function, which was mainly using it to increase the rehydrate_index if it was within a paragraph. Once that code was written inline, the entire __handle_extracted_paragraph_whitespace function was deleted.

While neither of these refactorings was needed, they both left the source code in a better condition, so I went ahead and completed them. In my evaluation, as I was refactoring the code in that area anyway, it did not seem right to skip those cleanup tasks.

Ending On A High Note

In some cases, a refactor is simple but looks really messy. That is the case with the commit for this refactoring. If you look at the commit, it looks like I changed the entire function to do something, but what that something is seems obscured by the changes. Looking at that commit closely, there are only two small changes.

The first change was an extremely simple one. I changed the code in the except block of the __transform function from:

    raise BadTokenizationError() from this_exception

to:

raise BadTokenizationError("An unhandled error occurred processing the document.") from this_exception

to provide more information to the user if such an exception is ever caught. That change is very clear in the commit referenced above.

The second change in the commit was equally simple. I enclosed the bulk of the code in the __parse_blocks_pass function in a try/except block, to be able to more accurately report on any assertions that fired within the code. To be clear, I added a try: near the top of the function and the following code at the end of the function:

        except AssertionError as this_exception:
            error_message = (
                "A project assertion failed on line "
                + str(line_number)
                + " of the current document."
            )
            raise BadTokenizationError(error_message) from this_exception

The rest of the changes that are reported? Indentation. Yes, indenting the enclosed code by four extra spaces to keep it syntactically valid created the mess that shows up as differences in the commit log.

What Was My Experience So Far?

While I am not a perfectionist in my coding, I do believe that I weigh the various aspects of a project differently than most. For me, having 10 mostly working and tested scenarios is not as important as having 3 or 4 well tested and maintainable scenarios. While that is my default weighting, I have also learned to adjust that weighting depending on the situation. In my professional life, I think what helps me be successful with those changing weightings is that I usually state each of the relevant options, their weightings, and how I calculated each weighting.

For this project, I am the developer, the tester, and the manager, so it is a bit more nuanced. For most of the duration of this project, my focus was on driving the project towards completion, so I used the set of weightings that allowed me to develop and test more efficiently. But I always knew that those weightings would be temporary, and I am now moving towards the proper set of weightings for this project.

So why the long description? It is to back up my statement, which is: meh. I am still confident about this project, but refactorings of this type were always expected, and I knew that. As such, it is just something to get done. No hype, no fanfare, just grunt work.

Don’t get me wrong, I am happy that it moves me one more step towards the release, and I am happy that it went off without that many issues, but it is just a step. To me, refactorings are just a fact of life when you are a software developer. If you are doing them right, they will be simple, methodical, and uneventful. As they should be.

I was able to keep the refactorings uneventful by having some clear goals in mind. My first goal is to maintain a set of scenario tests that provide a high percentage of test coverage for the project. If something changes, I want to know about it as quickly as possible. My second goal is to always take small enough steps that I can use my scenario tests to continually ensure that any changes are going in the right direction. Finally, my third goal is to have a solid plan going into each refactoring, ensuring that I don’t get distracted along the way.

I believe it was by following these goals that I was able to keep these refactorings simple, methodical, and uneventful!

What is Next?

While I do continue with some refactorings next week, my big focus will be applying some of my recent learnings about performance to the project. Stay tuned!
