Summary

In my last article, I talked about working on some of my projects to get their code coverage up to 100% and why I felt that the effort was important. In this article, I focus more on the work I have been doing in the background on the PyMarkdown project.

Introduction

Yes, I have been doing work on the PyMarkdown project in the background. It may not be visible, but it is going on. It was not until this week, though, that I had something concrete to show for that work. And since not having something to talk about on my flagship project caused me concern, I thought I would devote an article to the reason for that delay: debugging.

Shift Left

During my recent round of interviews, one of the concepts that I talked about was shifting the debugging process left as much as possible using automated testing. As an SDET (Software Development Engineer in Test), a large part of my job is to provide solid automated tests that can be executed within a continuous integration pipeline. Shortened to “shift left”, this form of thinking strives to get any kind of test failure as close to the development of the code being tested as possible. And yes, that does include trying to find things at the architecture and design phases if possible.

What Do You Mean “Left”?

Let me start with the “left” part of that phrase. In most development processes, whether it is explicit or not, there is a workflow that happens from ideation to release. In more formal environments, these steps are usually something like ideation/requirements, architecture, design, vetting the design prior to implementation, implementation, unit and functional testing, peer review, integration and end-to-end testing, and release. In less formal environments, these steps are still there, just compressed into fewer steps with some steps “missing” or “implied”. While each of those individual steps is deserving of an article of its own, the important thing that I want to communicate is the flow from “an idea” to “a released thing”.

Technically speaking, once the bug fix or feature has been released, there may be additional iterations of that workflow. These iterations can be done to tighten up misunderstood requirements but are most often performed to address bugs in the design or implementation. While it might be tempting to think of that new workflow as part of the original workflow, I believe there are clear reasons that the workflow is separate. As that is probably enough content for a separate article, please take my word on that belief for now.

Given those foundations, it should be easy to see that the further right in the workflow that a team gets, the more cost accumulates from the previous steps. Before the implementation step, most likely everything has been done with some form of project lifecycle management system, be it something like the popular Jira, using a whiteboard, or writing things down on paper. The implementation and first testing steps introduce code provided by a developer, increasing the cost by a sizable amount. Peer review adds to that the cost of having multiple developers spend time looking at the changed code, as well as the cost of implementing any changes that they request. The second level of testing, integration and end-to-end testing, adds another cost multiplier, as those types of automated tests are more fragile because of their distance from the implemented code. Finally, the release step adds yet another multiplier, as any issues reported once a change is released must go through another team of people to report those issues, prioritize them, and create new workflows to address the ones that get prioritized.

As someone who has done over thirty years of development in his career, please believe me: the further an issue makes it into that workflow, the more costly it is. When I say “costly”, those multipliers are usually between three times and ten times. And that is if the team is lucky.
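To make those multipliers concrete, here is a small back-of-the-envelope sketch. The per-stage multiplier of three is an assumption drawn from the range above, not a measured value:

```python
# Illustrative only: the relative cost of fixing the same issue at
# each stage of the workflow, assuming a compounding three-times
# multiplier per stage (the unlucky case can reach ten times).
stages = [
    "requirements",
    "design",
    "implementation",
    "peer review",
    "integration testing",
    "release",
]

multiplier = 3.0  # assumed per-stage multiplier
for position, stage in enumerate(stages):
    relative_cost = multiplier ** position
    print(f"{stage:>20}: {relative_cost:7.1f}x")
```

With these assumed numbers, an issue that escapes all the way to release costs roughly 243 times what it would have cost to catch at the requirements stage, which is why even a rough version of this arithmetic is persuasive.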

So Where Does the Left Come In?

Given that information about the traditional development workflow, there is a simple directionality to it. Writing that information on a whiteboard, I would draw a line with dots representing the various points in the workflow, annotating each dot with one of the names associated with the workflow. Given that perspective, the workflow has a simplified flow from the left of the line to the right of the line.

Therefore, when I am talking about “Shift Left”, I am talking about trying to detect issues in a project as early on in the project as possible. While this may seem like a “duh” moment to some people, having issues creep to the right is a quite common occurrence on development teams. As a developer, I want my code to get out there and be used. As a developer, I want to do something interesting, not the same old thing repeatedly. And as a developer, I have felt pressure from above to get one thing done and to move on to the next thing on my plate.

However, about 12 years ago when I was a developer, I realized I was more concerned about the quality of what my team was producing than the velocity with which my team was producing it. I was more concerned with taking extra time to ensure that the requirements were correct before moving forward with implementing them. And automated testing? Back then I faced a lot of pushback for adding too many tests to a project, as those tests were believed to be unwieldy and hard to support.

But my justifications for wanting to do those tests were solid. Based on my experience and my reading of peer articles, others in the industry had started to see things in a similar light. More importantly, they were starting to talk about it in clearer terms than I could manage at the time. For those of us who “saw the light”, it came down to a simple bit of calculus. Either a team can impose a small overhead to take care of those issues before they escape, or that team can pay a cost for those issues later. A team can call those issues “tech debt” or anything else they want to, but they are misses for the team just the same.

And those misses are costly and can be demoralizing. The cost part of any miss is an easy one to calculate. Instead of incurring a small cost to find and solve the issue before it escapes the team’s view, one or more distinct workflows must be spun up to address that issue. In terms of human cost, one workflow requires people to report the issue and people to confirm that it is an issue. Another workflow is then needed to triage the issue and figure out if it has a high enough priority to fix. And those two workflows happen before the team needs to create a new workflow to fix the issue. In financial costs, each person in those workflows has a salary. Paying a team to fix issues means that the team cannot be working on improvements to the project. There is the cost of their contributing to one or more workflows, and there is the cost of not having those people working on new work. Simple math.

As to the demoralizing aspect, that is one I have seen quite often. I have been in meetings where teams have been told of the issues related to their project. Most teams try not to assign blame, but it does happen. There are often questions raised as to how the team missed finding that issue. If I had a dollar for each time in my career that I have heard “How did we miss that?” in a meeting, I would be able to buy my wife a fancy seafood dinner with an expensive bottle of wine. And when the team gets such a backlog of issues that they must dedicate an entire block of work to dealing with those issues? Let me just say that I can usually sense a drop in the energy level in the room without much effort when the manager says, “We are going to need a bug fixing sprint.”

Shift Left Is About Paying The Right Cost At The Earliest Time

“Shifting Left” is about dealing with these issues as efficiently as possible. To get a project to be better, the proper investments need to be made as far to the left in the workflow as possible. Problems with architecture and design? Make sure the requirements are solid and the architects and designers understand those requirements, with a solid understanding of the tools and choices at their disposal. Problems with poor implementations? Make sure the developers understand the requirements and designs and supply guidelines for them to follow to prove that they have met those goals. Problems with changes to implementations creating new issues? Make sure that there are solid integration tests that are independent from the developer-created tests.

Will these catch everything? Not even a chance. However, when I have seen practices like this implemented, it has always made a sizable impact on the quality of the project. And truthfully, any decent reduction in the cost of a change is usually worth it. It is just about paying the right cost at the earliest time possible. Noticing that a requirement seems off before coding starts? It can be a five-minute conversation, or it may evolve into a meeting with a small group. But avoiding the act of properly reading that requirement until one or more integration tests expose that issue? That cost will definitely exceed the cost of that small group meeting.

And the other part of that is simple. As a developer, I always wanted to write decent quality code, because the person maintaining that code was most likely going to be me.

Shift Left on PyMarkdown

How does this all apply to the PyMarkdown project?

While I try as hard as possible to catch everything up front during the implementation phase, things do slip through. As of this Sunday morning, I have 4530 scenario tests that are executed for each change. Of those tests, 36 are skipped, with 12 of those skips being for placeholder implementations of extensions. That leaves 24 scenario tests for issues that slipped through the cracks. According to my math, that means that over 99.4% of the scenario tests are passing. Not a bad number, but I would still like it to be better.
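That arithmetic is quick to check, using the counts from this section:

```python
# Checking the passing percentage quoted above, using the counts
# from this section of the article.
total_tests = 4530
skipped_tests = 36
placeholder_skips = 12  # skips for placeholder extension implementations
known_issue_tests = skipped_tests - placeholder_skips  # the 24 open issues

passing_fraction = (total_tests - known_issue_tests) / total_tests
print(f"{passing_fraction:.2%}")  # 99.47%
```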

And to be clear, that count of 4530 scenario tests is not just the test scenarios provided by the GitHub Flavored Markdown specification, but every scenario test I have been able to create. That includes the 673 scenarios presented in the GFM specification, but also tests that stress the complications that arise from container elements. If I had to guess, I would say that at least half of the current scenario tests are specifically for the handling of container elements.

That percentage of passing tests is the result of a deliberate focus of mine to “Shift Left” on this and other projects. While it can often leave me frustrated with a new feature or change not working properly in all cases, I sincerely believe that this is the right approach. As I find a single issue at any point in the process, I look and see if it is an isolated issue or part of a bigger issue. If it is part of a bigger issue, I try to identify related scenarios in the “area” of that issue and add more scenario tests.
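As a rough sketch of what exploring that “area” might look like, here is a hypothetical set of related container-element scenarios. The `parse_markdown` helper and the scenarios themselves are illustrative stand-ins, not PyMarkdown's actual test harness:

```python
# Hypothetical sketch: one container-element issue suggests related
# scenarios worth testing, such as deeper nesting or combinations
# with other container elements.  parse_markdown is a stand-in for
# the real parser under test.
def parse_markdown(text: str) -> str:
    # Placeholder: a real harness would produce a token stream or HTML.
    return text

related_scenarios = [
    ("> - item", "block quote containing a list"),
    (">   - item", "block quote with extra list indentation"),
    ("- > quoted", "list item containing a block quote"),
    ("- > - item", "list, block quote, then list again"),
]

for source, description in related_scenarios:
    # A real scenario test would compare against the expected output;
    # here we only assert that parsing completes for each neighbor.
    assert parse_markdown(source) is not None, description
print(f"{len(related_scenarios)} related scenarios exercised")
```

The idea is simply that one failing scenario rarely lives alone: its neighbors in nesting depth or element combination deserve tests of their own.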

If possible, I do this when I am adding something new, but I am not always that lucky. But from where I sit, I am still trying to push it to the left. I am not waiting around for a user to complain that something is not working properly; I am actively investing my time to prove to myself that the project is working properly. Sure, some of the scenario tests are probably never going to get hit by users, but those tests are still important. Each one of those tests is a path that may not have been covered before.

What it comes down to for me is the answer to a simple question: am I confident that the project is working properly? My simple answer is: yes! I have thrown everything I can think of against it and can prove that. And if I miss something, I am gracious about it: I know I cannot think of everything, and I use that new information to build better tests.

For me, shifting left just helps me get that confidence as efficiently as possible.

What is Next?

Having recently been able to squash an issue that impacted three scenario tests, I compiled a brief list of other tests to revisit. I am not sure if I will be able to find their solutions, but at least I feel momentum in that direction. Stay tuned!


