Summary¶
In my last article, I talked about how I believe that the benefit of adding consistency checks to the project outweighed the costs of developing those tests and maintaining them. In this article, I talk about how I decided to double down on the consistency checks by adding a token transformer that transforms the tokenized document back into its original Markdown.
Introduction¶
Since implementing the consistency checks for the line numbers and column numbers in the tokens produced by PyMarkdown’s parser, I have found enough errors to remove any questions in my mind regarding their usefulness. From my point of view, adding those consistency checks is not a “pull the fire alarm” concern, but more of a “let’s put some extra water on the campfire and wait to be sure” concern. These checks are an important tool in a collection of tools that I use with each build to help me ensure that my desired level of quality for the project is maintained.
But while I have made great progress on the automated validation of those line numbers and column numbers, validating the content of those tokens was a different story. Each test already includes a comparison of the output text to the reference implementation’s output, but I felt that it was only testing the scenario’s output, not the input. After all, there were times when I introduced a small change to the structure of a token and the token itself changed, but the HTML did not change one bit. While I knew I had 100% coverage for the tokens’ output, I did not feel that I had the right amount of coverage for the tokens themselves.
The only way to really test this out? Use the tokens themselves to generate the Markdown that created them. If the tokens contained all the needed information, the regenerated input text should match the actual input text.
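Put another way, the check boils down to a simple round trip. Here is a minimal sketch of that idea, with the parser and the new transformer passed in as stand-in callables purely for illustration; these are not the project’s actual function names:

def verify_round_trip(source_markdown, tokenize_markdown, rehydrate_markdown):
    # tokenize_markdown stands in for the existing parser, and
    # rehydrate_markdown stands in for the new token transformer
    tokens = tokenize_markdown(source_markdown)
    regenerated_markdown = rehydrate_markdown(tokens)
    assert regenerated_markdown == source_markdown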
What Is the Audience for This Article?¶
While detailed more eloquently in this article, my goal for this technical article is to focus on the reasoning behind my solutions, rather than the solutions themselves. For a full record of the solutions presented in this article, please go to this project’s GitHub repository and consult the commit of 16 Jul 2020.
A Small Note on the Commit¶
We all have weeks that are busier than others, and the week of 12 Jul 2020 was one of those weeks for me. All the substantial work on the commit was completed and tested on 12 Jul 2020 before I started writing the last article. However, it was not until that Thursday that I was able to complete the usual cleanup and refactoring that I require before I submit a normal commit.
While this does not affect the work that was done, the timing of the actual work was important to me, for reasons described in the next section.
Beware of Rabbit Holes¶
While I felt that I both wanted and needed to add these new checks, I also knew that I needed to be cautious. It was less than a month ago that I lost my way trying to add tab support to the consistency checks, and I was still smarting from that experience. Instead of hitting a difficult problem and taking a step back to reevaluate the situation, I just kept on going, not realizing how much time had passed on that one problem. When I did stop and look up at where I was, it was almost immediately evident that I had gotten lost in the problem… again.
As this was another major task for consistency checks, I was keenly aware that I was going to need to take better precautions this time around. If I did not, I was likely to fall into the same pattern and get lost again. As such, I was determined to come up with a firm set of rules that I would follow for this task. After some thought, the rules that I came up with are as follows:
- no tab support
  - no need to go down that path again so soon!
- no container block support
  - get the basics down, then complicate things!
- primary support for the text token, the paragraph token, and the blank line token
  - these are the primary building blocks for the document
- no other leaf block support except for the thematic break and the indented code block tokens
  - the indented code block token modifies the output from the text tokens, making sure that changes to that output are handled
  - the thematic break token provides an external touchpoint showing that non-text related tokens are possible
- no inline support except for the hard break token
  - one inline token would prove that the others would be possible
- proper support for character reference sequences and the backslash escape character
In addition, I decided to help mitigate the risk of going down a rabbit hole for this new feature by timeboxing the work on the task to approximately 36 hours of clock time. While I did do a bit of research before that Friday night, the time I allocated for this task was from Friday after work until I started writing my article on Sunday morning. As I have never been late with an article, despite coming close a couple of times, I knew that it would be a good stopping point that I would not easily ignore.
Starting Down the Path¶
While I have plans to simplify it later, as I did with my work on validating the base scenarios, the first iteration of this code was going to be somewhat messy while I figured things out. But the bare bones of the transformer started out very cleanly with the following code:
def transform(self, actual_tokens):
    """
    Transform the incoming token stream back into Markdown.
    """
    transformed_data = ""
    for next_token in actual_tokens:
        # do stuff
    return transformed_data
Not very glamorous, but a good starting point. As with anything that transforms something list-related, I needed to perform some action on each token, that action being represented by the comment do stuff. Just a good and solid place to start.
Baby Steps - Setting up for Token Discovery¶
The next step was a simple one: discover all the tokens I would need to eventually transform. As I took this same approach with the TransformToGfm class, and that approach was successful, I decided to adopt the same process with this new class. I started by adding this code in place of the # do stuff comment:
if False:
    pass
else:
    assert False, "next_token>>" + str(next_token)
Once that was done, I then modified it to take care of the end tokens:
if False:
    pass
elif next_token.token_name.startswith(EndMarkdownToken.type_name_prefix):
    adjusted_token_name = next_token.token_name[
        len(EndMarkdownToken.type_name_prefix) :
    ]
    if False:
        pass
    else:
        assert False, "end_next_token>>" + str(adjusted_token_name)
else:
    assert False, "next_token>>" + str(next_token)
Once again, this code is not glamorous, but it is setting up a good solid framework for later. The purpose of this code is to make sure that when I start dealing with actual tokens, I get a clear indication of whether an if statement and a handler function exist for that token. If not, the appropriate assert fails and lets me know which token is not being handled properly. In this way, any encountered token must have a matching if statement and handler, or the transformation fails quickly.
Setting Up the Test¶
With that in place, I started with a very simple test function, verify_markdown_roundtrip. This function started with the code:
def verify_markdown_roundtrip(source_markdown, actual_tokens):
    if "\t" in source_markdown:
        return
    transformer = TransformToMarkdown()
    original_markdown = transformer.transform(actual_tokens)
    print("\n-=-=-\nExpected\n-=-=-\n" + source_markdown
          + "\n-=-=-\nActual\n-=-=-\n" + original_markdown + "\n-=-=-\n")
    assert source_markdown == original_markdown, "Strings are not equal."
While I added better error reporting over the course of this work, it started with a simple test and simple error reporting. The first two lines of this function check for a tab character and, if present, exit quickly before any real processing is done, as tab handling is out of scope. With that check accomplished, the next 2 lines create an instance of the transformer and invoke the transform function on the list of tokens. Finally, after printing some debug information, the source_markdown variable is compared to the original_markdown variable containing the regenerated Markdown. If the two strings match, the validation passes, and control is passed back to the caller for more validation. If not, the assert fails, and the test is halted.

The call to this function was easily added at the top of the assert_token_consistency function, which conveniently was already in place and being called by each of the scenario tests. As such, the extra check was added to the consistency checks with only a single line change: the one invoking verify_markdown_roundtrip.
def assert_token_consistency(source_markdown, actual_tokens):
    """
    Compare the markdown document against the tokens that are expected.
    """
    verify_markdown_roundtrip(source_markdown, actual_tokens)
    split_lines = source_markdown.split("\n")
    ...
After running the tests a couple of times, it was obvious that some work needed to be done to add if statements and handlers. And as it is the most central part of most Markdown documents, it made sense to start with the paragraph token.
Starting to Discover Tokens¶
Once that foundational work was done, I started running the tests and dealing with the asserts that fired. Each time I encountered an assert failure, I added an if statement to the normal token or end token block as shown here with the paragraph token:
if next_token.token_name == MarkdownToken.token_paragraph:
    pass
elif next_token.token_name.startswith(EndMarkdownToken.type_name_prefix):
    adjusted_token_name = next_token.token_name[
        len(EndMarkdownToken.type_name_prefix) :
    ]
    if adjusted_token_name == MarkdownToken.token_paragraph:
        pass
    else:
        assert False, "end_next_token>>" + str(adjusted_token_name)
else:
    assert False, "next_token>>" + str(next_token)
Once I was no longer getting failures from one of the two asserts, I was faced with another issue. There were tokens that I recognized with an if statement, but any handler for that token was out of scope for the time being. To deal with this, I made a small modification to the transform function to allow me to skip those tokens that were not yet supported by setting the avoid_processing variable to True.
def transform(self, actual_tokens):
    """
    Transform the incoming token stream back into Markdown.
    """
    transformed_data = ""
    avoid_processing = False
    for next_token in actual_tokens:
        if next_token.token_name == MarkdownToken.token_paragraph:
            pass
        elif ( next_token.token_name == MarkdownToken.token_thematic_break or
            ...
        ):
            avoid_processing = True
            break
        elif next_token.token_name.startswith(EndMarkdownToken.type_name_prefix):
            adjusted_token_name = next_token.token_name[
                len(EndMarkdownToken.type_name_prefix) :
            ]
            if adjusted_token_name == MarkdownToken.token_paragraph:
                ...
            else:
                assert False, "end_next_token>>" + str(adjusted_token_name)
        else:
            assert False, "next_token>>" + str(next_token)
    return transformed_data, avoid_processing
Basically, the avoid_processing flag was set to True for any token that was recognized by the function but did not have a handler implemented. Then, with a small change to the verify_markdown_roundtrip function, it could be instructed to avoid comparing the two Markdown variables.
def verify_markdown_roundtrip(source_markdown, actual_tokens):
    if "\t" in source_markdown:
        return
    transformer = TransformToMarkdown()
    original_markdown, avoid_processing = transformer.transform(actual_tokens)
    if avoid_processing:
        print("Processing of token avoided.")
    else:
        print("\n-=-=-\nExpected\n-=-=-\n" + source_markdown
              + "\n-=-=-\nActual\n-=-=-\n" + original_markdown + "\n-=-=-\n")
        assert source_markdown == original_markdown, "Strings are not equal."
While this sometimes felt like cheating to me, it was a solid plan. If any token in the token list was not supported, the output clearly stated that processing was being avoided. If that statement was not present and the debug output was, I was sure that the comparison had been made cleanly.
Why Was This a Clear Stop-gap Solution?¶
I believe it is a very clean solution. As I knew from the start that I was going to be implementing this check in stages, returning a boolean value from the transform function allows the transformer to specify whether it has any trust in the results. But unlike my emotion-based trust in the code base for the project, this trust was binary: the flag was True if I encountered any tokens that I had not yet accounted for, otherwise it was False. Basically, if there was no code to handle a token, the function returned True to indicate that it was confident that the transformed_data value was incorrect.
Given the situation, and wanting to handle the tokens in stages, I believe this is the cleanest solution that I could come up with. No hidden parts, a very small area of code to determine if the check would be skipped, and almost no code to rip out when handling for all the tokens was completed.
Leaving the Foundational Work Behind¶
This foundational work put me in a good position to start transforming the tokens back into their original Markdown. While I was sure that this was not going to be easy, I was sure that I had taken the right foundational steps to make this effort as easy as it could be. And if I was slightly wrong and needed a couple more things added to the foundational code, I was sure that I could easily add them.
Terminology¶
As I started on the actual work to reconstruct the original Markdown text from the parsed tokens, I found that I needed a simple name to describe the process to myself. I prefer to have functions named descriptively after the action they are coded for, preferably with at least one verb describing the action. Repeating the last half of the first sentence in each function name did not seem to be a sustainable solution, especially not for the naming of Python variables. I needed something more compact.
After a certain amount of thinking, the process that I feel comes closest to what this transformation is accomplishing is rehydration. Possibly taking influence from my Java experience, the word serialize means, according to Wikipedia:

translating data structures or object state into a format that can be stored […] and reconstructed later in the same or another computer environment

Since the word serialize is overused a lot, I looked up its synonyms, and hydrate was one of the words in the list. In my mind, I was “just adding water” to the data to get the original Markdown text back, so the new word hydrate fit pretty well. Therefore, I will use the word hydrate in the rest of the article and in the transformer code to signal that the transformer is reconstituting the Markdown.
Starting with the Paragraph Scenarios¶
In terms of picking a good place to start, I felt that the paragraph tokens were the best candidates. As paragraphs are usually the foundation of any Markdown document, I was confident that cleaning up all the scenario tests in the test_markdown_paragraph_blocks.py module would be a good initial case. Being simple in their nature, that set of tests would cover the following types of tokens:
- paragraph token (start and end) - all tests
- text token - all tests
- blank line tokens - 3 of 7 tests
- hard line break token - 2 of 7 tests
- indented code block token (start and end) - 1 of 7 tests
It was a small set of tokens: easily constrained, and easy to build on afterwards.
Paragraph Tokens, Text Tokens, and Blank Line Tokens¶
In this group of tests, the simple tests were the easiest to verify, but the most important to get right. Of the grand total of 7 tests, 5 were simply about the handling of these 3 basic tokens. But it was early in the coding of their handlers when I recognized that I needed to implement a simple stack to process these tokens properly.
Simple Token Stack¶
The reason for the token stack was simple. While I was just dealing with paragraph tokens around the text token for the first 6 tests, the last test would require that I handle a different leaf token around the text token: the indented code block token. Instead of doing the work twice, once to just save the paragraph token somewhere and a second time to implement a proper token stack, I decided to skip right to the stack implementation.
This stack was created to be simple in its nature. The current block would remain at the top of the stack, to be removed when it went out of scope with the end block token. The initial test was to make sure that the text tokens for the examples could extract information from the encompassing paragraph token as needed. This is important because any whitespace at the start or end of each paragraph-text line is removed for the HTML presentation but stored for other uses in the paragraph token.
Therefore, the following functions were added to handle the task of keeping the stack synchronized with the paragraph tokens:
def rehydrate_paragraph(self, next_token):
    self.block_stack.append(next_token)
    return ""

def rehydrate_paragraph_end(self, next_token):
    top_stack_token = self.block_stack[-1]
    del self.block_stack[-1]
    return ""
Back to the Text Token¶
Getting back to the actual hydration cases, the rehydration of the basic text block is simple to explain but takes a lot of code to accomplish. The general algorithm at this stage was as follows:
def rehydrate_text(self, next_token):
    leading_whitespace = next_token.extracted_whitespace
    if self.block_stack:
        # Get whitespace from last token on the stack and split it on new lines
        # Get the text from the current token and split it on new lines
        # Properly recombine the whitespace with the text
    return leading_whitespace + combined_text
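To make that outline a bit more concrete, here is a hedged sketch of the recombination step. It assumes, and this is my assumption rather than the project’s documented layout, that the owning paragraph token keeps one leading-whitespace entry per line, separated by newlines, and that the text token exposes its raw text as token_text:

def recombine_text_with_whitespace(text_token, owning_paragraph_token):
    # assumed layout: one whitespace entry per paragraph line, newline separated
    split_whitespace = owning_paragraph_token.extracted_whitespace.split("\n")
    split_text = text_token.token_text.split("\n")
    rebuilt_lines = []
    for line_index, text_line in enumerate(split_text):
        if line_index == 0:
            # the first line's leading whitespace is emitted by rehydrate_paragraph
            rebuilt_lines.append(text_line)
        else:
            rebuilt_lines.append(split_whitespace[line_index] + text_line)
    return "\n".join(rebuilt_lines)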
For basic paragraphs, because of the GFM specification, any leading or trailing whitespace on a line is removed from the text before that text is transformed into HTML. However, as I thought there was a rule about excess space at the start and end of a line in a paragraph, I made sure to append that whitespace to the owning paragraph token. In addition, when the paragraph itself starts but before the text token takes over, there is a potential for leading whitespace that must also be considered.
So, in addition to the above code to rehydrate the text token, the following changes were needed to handle the start and end paragraph tokens properly.
def rehydrate_paragraph(self, next_token):
    self.block_stack.append(next_token)
    extracted_whitespace = next_token.extracted_whitespace
    if "\n" in extracted_whitespace:
        line_end_index = extracted_whitespace.index("\n")
        extracted_whitespace = extracted_whitespace[0:line_end_index]
    return extracted_whitespace

def rehydrate_paragraph_end(self, next_token):
    top_stack_token = self.block_stack[-1]
    del self.block_stack[-1]
    return top_stack_token.final_whitespace + "\n"
With that done, the text within the paragraph and around the paragraph was being rehydrated properly. At that point, I raised my glass of water and toasted the project, as the first 2 scenarios were now checking their content and passing. Yes! From there, it was a short journey to add 3 more tests to that roster by adding the handling of the blank line token, as follows:
def rehydrate_blank_line(self, next_token):
    return next_token.extracted_whitespace + "\n"
While it was not a running start, this was the first time the entire content of those 5 scenario tests was validated! It was enough to make me go for number 6!
Hard Line Breaks¶
From there, the scenario test for example 196 was the next test to be enabled, adding support for hard line breaks. Interestingly, when I wrote the algorithm for coalescing the text tokens where possible, the newline character for the hard break was already set up to be added to the following text token. This leaves the hard line break token primarily as a “marker” token, with some additional information on the extra whitespace from the end of the line. As such, rehydrating the hard break properly was accomplished by adding the following code.
def rehydrate_hard_break(self, next_token):
    return next_token.line_end
And that made 6 tests that were now fully enabled! But knowing that the last test in that group dealt with indented code blocks, I decided to take a bit of a break before proceeding with that token. I needed some extra confidence.
Handling Backslash Characters¶
The interesting part about the parsing of this Markdown character is that once it is dealt with, the original backslash character disappears, having done its job. While that was fine for generating HTML, rehydrating the original text from a tokenized version of a string that originally contained a backslash was a problem. If it disappears, how does the code know it was there in the first place?
To solve this issue, I had to resort to a bit of trickery. I needed to determine a way to make the backslash character visible in the token without it being visible in the HTML output. But anything obvious would show up in the HTML output, so I had to add a preprocessing step on the data to remove whatever it was that I would add to denote the backslash.
Thinking Inside of the Outside Box¶
Trying a couple of solutions out, the one that held the most promise for me was to use (or misuse) the backspace character. In Python, I can easily add the sequence \b to a string to denote the backspace character. Using this format to write out the text for a token containing a backslash, I would now add \\\b in place of the backslash to allow it to be preserved in the token.
To show an example of this, consider the Markdown text a\\*b\\*, used to create HTML output of a*b* without the asterisk character getting misinterpreted as emphasis. Before this change, the text in the token would have been a*b*, without the inline processor splitting the emphasis sequences into their own tokens for interpretation. After this change, the text in the token is a\\\b*b\\\b*, keeping the backslashes in the data, but with the backspace character following them.
But now that I had a special character in there, I would need to preprocess those strings.
Dealing with the Complications¶
How does the preprocessing work? In the case of the HTML transformer, the preprocessing uses the new resolve_backspaces_from_text function to scan the incoming string for any backspace characters. When a backspace character is encountered, it is removed along with the character preceding it. In this manner, the HTML output is identical to how it was before this change. Using the above example of a\\\b*b\\\b*, this preprocessing would render that string as a*b*, removing each of the backspace characters and the backslash characters before them.
In the case of the new Markdown transformer, a similar algorithm is used that simply replaces any backspace characters with the empty string. Because the end effect is to restore the data to the way it was before, removing the backspace character by itself leaves the data in its original form. Once again using the above example of a\\\b*b\\\b*, when the backspace characters are removed from the string, the string is changed into a\\*b\\*.
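To make the two passes concrete, here is a minimal sketch of the idea — my own illustration, not the project’s resolve_backspaces_from_text code — using Python’s \b escape for the backspace character:

def resolve_backspaces_for_html(text):
    # drop each backspace character and the character immediately before it
    result = []
    for character in text:
        if character == "\b":
            if result:
                result.pop()
        else:
            result.append(character)
    return "".join(result)

def resolve_backspaces_for_markdown(text):
    # drop only the backspace characters, leaving the original backslashes intact
    return text.replace("\b", "")

token_text = "a\\\b*b\\\b*"
assert resolve_backspaces_for_html(token_text) == "a*b*"
assert resolve_backspaces_for_markdown(token_text) == "a\\*b\\*"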
While it took me a while to arrive at this preprocessing solution, it worked flawlessly without any modifications. It was just a simple way to handle the situation. And because it is a simple way, it is also simple to read and understand when dealing with the data for the scenarios. After all, when I type in code or a document, I use the backspace key to erase the last character I typed. This just extends that same paradigm a small amount, but to good use.
The Fallout¶
As this change affects everywhere that a backspace character can be used, there were some sweeping changes needed in multiple locations to deal with the now escaped backslash characters. The InlineHelper module’s handle_inline_backslash function was changed to take an optional parameter, add_text_signature, to determine whether the new \\\b sequence was added when a backslash was seen in the Markdown.
That was the easy part.
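To illustrate the effect of that flag, here is a hypothetical stand-in — deliberately not the project’s handle_inline_backslash, whose real signature differs — showing the two behaviors for a single escaped character:

def escape_backslash(escaped_character, add_text_signature):
    # hypothetical illustration of the add_text_signature flag's effect
    if add_text_signature:
        # keep the backslash, tagged with a backspace, so the Markdown
        # transformer can restore it later
        return "\\\b" + escaped_character
    # previous behavior: the backslash simply disappears
    return escaped_character

assert escape_backslash("*", True) == "\\\b*"
assert escape_backslash("*", False) == "*"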
In any of the places where that function was called, I traced back and figured out whether there was a valid case for adding the new text signature. In a handful of cases, the original string was already present, so no new transformation was required, and a False was passed in for add_text_signature. But the more prevalent case was for the calls that passed in True. And it did not end there. This process needed to be repeated with each function that called each of those functions, and so on.
In the end, it was worth it. It was a clean way to deal with having the backslash in the token if needed and removing the backslash when it was not needed.
Indented Code Block Tokens¶
For the most part, the indented code blocks were simple. As with the text tokens handled for paragraphs, the trick was to make sure that the right whitespace was added to the text tokens.
def reconstitute_indented_text(self, main_text, prefix_text, leading_whitespace):
    split_main_text = main_text.split("\n")
    recombined_text = ""
    for next_split in split_main_text:
        if next_split:
            recombined_text += prefix_text + leading_whitespace + next_split + "\n"
            leading_whitespace = ""
        else:
            recombined_text += next_split + "\n"
    return recombined_text
The nice thing about the new reconstitute_indented_text function was that it was simple to start with, as shown above. Take the text from the text token, and put it back together, keeping in mind the extra whitespace at the start of each line. In short order, the single scenario test in the test_markdown_paragraph_blocks.py module dealing with indented code block tokens was passing, and most of the indented code block scenario tests were also passing. It was then down to 2 scenario tests to get enabled.
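To make its behavior concrete, here is a hypothetical call, assuming the method lives on the new transformer class and that the text token’s lines arrive without their 4-space prefix:

transformer = TransformToMarkdown()
rebuilt_text = transformer.reconstitute_indented_text(
    "chunk1\nchunk2", prefix_text="    ", leading_whitespace=""
)
assert rebuilt_text == "    chunk1\n    chunk2\n"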
Handling Character References¶
Character references on their own are a vital part of Markdown. When you want to be specific about a character to use, there is no substitute for using the ampersand character and the semi-colon character, specifying the exact character you want between the two. But as with backslashes, these character references represented an issue.
Like the backslash disappearing after it is used, the character references also disappear once used. But in this case, the mechanics were slightly different. If the resultant token and HTML contain the copyright character ©, there are three paths to getting that Unicode character into the document. The first is simply to use a Unicode-aware editor that allows the typing of the © character itself. If that fails, the next best approach is to use a named character entity, adding &copy; to the document. Finally, the numeric character references &#169; or &#xA9; can also be used to insert that character into the token. The problem is, if the token contains the character ©, which of the 4 forms was used to place it there?
Similar to the way I used the backspace character with the backslash character, in this case I used the alert character (\a) as a way to specify that a series of characters has been replaced with another series of characters. Using the previous example of the copyright character, if the character was specified by using the actual Unicode character itself, no alert characters were needed, as nothing changed. But in the cases where the character entities were used, the alert characters were added to indicate “I saw this entity, and I replaced it with this text”. For example, if the Markdown contained the text &copy; 2020, the text in the token would be \a&copy;\a©\a 2020. While it takes a bit of getting used to, this notation quickly became easy to read in the samples. For the HTML output, all 3 occurrences of the alert character were searched for, the text between the second and third alert was output, and the rest was ignored. In the case of rehydrating the Markdown text, the text between the first and the second alert was output, and the rest of that text was ignored.
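Here is a minimal sketch of how the three-alert notation can be consumed — my own illustration, assuming the \a characters always come in groups of three; the project’s actual code differs:

def resolve_references_for_html(text):
    # keep the replacement text between the second and third alerts
    while "\a" in text:
        first = text.index("\a")
        second = text.index("\a", first + 1)
        third = text.index("\a", second + 1)
        text = text[:first] + text[second + 1 : third] + text[third + 1 :]
    return text

def resolve_references_for_markdown(text):
    # keep the original entity text between the first and second alerts
    while "\a" in text:
        first = text.index("\a")
        second = text.index("\a", first + 1)
        third = text.index("\a", second + 1)
        text = text[:first] + text[first + 1 : second] + text[third + 1 :]
    return text

token_text = "\a&copy;\a©\a 2020"
assert resolve_references_for_html(token_text) == "© 2020"
assert resolve_references_for_markdown(token_text) == "&copy; 2020"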
The fallout of this change was of a similar scope to that of the fallout for the backspace character changes. There were a few places where this change had to be turned off, but due to sheer luck, most of those places were the same for the backspace character and for the alert character. While it took a small amount of time to get things right, once again it was a clean solution.
Indented Code Blocks and Blank Lines¶
All the other scenarios down, and one to go! And… I hit a wall. But unlike some of the other walls that I have hit, this one was a good one. When I looked at this remaining case, the scenario test for example 81, I knew that there was going to be a cost to getting this test to pass its consistency check. And while I could have gone ahead and worked on it, I made the decision that the work to get this one case passing was outside the present scope of work that I agreed to do.
The scenario?
    chunk1
      chunk2
{space}{space}
{space}
{space}
    chunk3
(To make the spaces visible on the blank lines, I replaced them in the above Markdown sample with the text {space}.)
Double-checking the code for the indented code blocks, if the blank lines contained at least 4 space characters, the tokenization proceeded properly, and the rehydration of the text was fine. But in the cases where the blank lines did not contain enough spaces, that was another issue.
While it is not specifically spelled out in the GFM specification, example 81 makes it clear that blank lines do not end indented code blocks, regardless of whether they start with 4 space characters. But looking at the tokens, the only way that I could think of to address this issue was to put any extracted spaces in the indented code block token itself. This would allow them to be used later, if needed, by transformers such as the Markdown transformer.
But thinking about it clearly, I felt that this work was beyond the scope of the current rules for this task. I figured that I had a choice between finishing up the task with thematic break token support completed or getting mired down with this one scenario and not properly completing the task. While I was not initially happy about the idea, I noted the item down in the issues list, disabled the consistency checks for the test, and continued.
Thematic Breaks¶
To wrap up this block of work, I just needed to complete the handling of the thematic break token. As this is a simple token, I expected a simple implementation to rehydrate it, and I was not disappointed. The code that it took to complete the rehydration of the thematic breaks was as follows:
def rehydrate_thematic_break(self, next_token):
    return next_token.extracted_whitespace + next_token.rest_of_line + "\n"
Simple, short, and sweet. No surprises. A nice way to end.
Along the way…¶
In total, I added 6 items to the issue list because of things I noticed during this work. While I was sure that 4-5 were actual issues, I was okay with the remaining issues being good questions for me to answer later. It just felt good to be able to write a new item down and clear it from my mind. It helped me stay on track and keep my focus. And that is a good thing!
What Was My Experience So Far?¶
To be honest, I believe I was so worried about going down another rabbit hole that it got in the way of my creativity a bit. Not so much that I could not get the work done, but it was there. And thinking back to that time, I am not sure if that was a good thing or a bad thing.
On the bad side, it caused me to question myself a lot more. With each decision I made, I reviewed it against the goals that I specified at the start of this article. Did it pass? If no, then try again. If yes, what was the scope? If the depth of the scope was too unexpected or too large, then try again. If yes, start working on it. At various points within the task, stop to reevaluate those same questions and make sure I was staying within the scope. It definitely was annoying at first.
On the good side, it was probably what I needed. And I hate to say it, but it probably was a good kind of annoying. I do not need to continue to have this internal conversation for smaller tasks. But for this big task, that frequent dialogue helped me focus on keeping the task on track. If I noticed something that was not in scope, I just added it to the issues list and moved on. If I had a question about whether something was written properly, I just added it to the issues list and moved on. It is not that I do not care about these issues, it is that I care more about completing the task and not getting lost on something that is off task. There will be time later to deal with those.
All in all, I believe this chunk of work went well. Sure, I pushed my 36-hour time limit to the max, resulting in my article getting written later than I am usually comfortable with. I also pushed my definition of “complete” to the max, as I noted in the section A Small Note on the Commit. All the work was completed before I started that week’s article, even if it took me another 3-4 hours to clean it up enough before committing it to the repository. After all, an agreed-upon rule is a rule, and I kept to each of them. Even if I strained them a bit.
I was happy with how I handled myself during this task. I did not get so bogged down that I got nothing done, and I did not go down any rabbit holes. It was a good week!
What is Next?¶
Having encountered a number of bugs and possible issues that were logged in the issues list, it just made sense to tackle those right away. Especially for something that is meant to track the consistency of the tokens, it does not look good to have bugs against it.
Comments
So what do you think? Did I miss something? Is any part unclear? Leave your comments below.