Test-Driven Development (Part 2)


Test-driven development (TDD) is not just writing tests first. Of course, writing tests before code is a primary part of the practice, but it’s not the first thing. In my opinion, you can’t jump straight into writing tests and expect to be successful with TDD. I say this for the same reason that solving a business problem shouldn’t start with jumping into the code and banging out logic in your favorite programming language. The first step is always creating a plan.

Step 1a: Identify the business problem and the solution before writing code.

Okay, maybe not the entire solution, but at least a high-level logical solution first. If you’re part of a development team, I highly encourage doing this as a full team exercise. It gives the entire team perspective on the problem and the solution and lets everyone agree on how the problem will be solved. It also prevents tunnel vision and keeps you from being the only engineer who knows how the business problem was solved when the solution is maintained later.

Start with a whiteboard and write out what the problem is. In the case of a new feature, this is easy. Solve the problem in a few small abstract pieces, ignoring any implementation details. Keep SOLID, especially the single responsibility principle, in the forefront of your mind when determining what abstract pieces need to be created. Design out the dependency tree for this solution and keep concerns isolated from each other. These dependencies have to be loosely coupled.

Here’s an example: For a client-facing application, I need a REST endpoint that’ll take in a signed-in user’s auth token and spit out all the widgets that they’ve previously purchased so that I can display those widgets in my app.

The team may look at this story and decide that they need to:

  • Create a web-facing controller that takes in a GET request, makes a call to an inner service, and handles any exceptions, returning appropriate responses and HTTP status codes
  • Create a service that takes in a user’s auth token and spits back out a user’s ID
  • Create a service that gets data from a database
  • Create a service that builds a SQL statement with that user’s ID
  • Create a service that coordinates the retrieval of the user ID from the token and passes it to the service that gets data from a database (a rough code sketch of these pieces follows this list)
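
To make the breakdown concrete, here’s one way those pieces might be sketched as contracts. This is only an illustration; the language (TypeScript), the interface names, and the shapes are all assumptions, not part of the original story.

```typescript
// Illustrative contracts for the whiteboarded pieces; all names are assumed.

interface Widget {
  id: string;
  name: string;
}

// Turns a signed-in user's auth token into that user's ID.
interface AuthTokenService {
  getUserId(authToken: string): Promise<string>;
}

// Builds and runs the query that fetches the widgets a user has purchased.
interface WidgetRepository {
  findPurchasedByUserId(userId: string): Promise<Widget[]>;
}

// Coordinates the two services above; this is what the controller calls.
interface PurchasedWidgetService {
  getPurchasedWidgets(authToken: string): Promise<Widget[]>;
}
```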

Step 1b: Temporarily forget about all of the abstract pieces of the solution that your team has whiteboarded and write a set of integration or acceptance tests that test the whole solution as an entire unit.

For the example of the RESTful endpoint, the unit under test is the endpoint itself. Find a way to seed data into a database, then make a GET request to the endpoint that hits that database and ensure that the response matches exactly what was seeded. Here’s a fun exercise: Do this as a team! Use your team members to determine what they would expect from this endpoint, what should happen if the data doesn’t exist or an invalid auth token is sent in, and what the user should see when a catastrophic failure occurs.

Pro tip: This data seeding should be done in a shared place, like in a setup method that gets run before each test so that the test itself doesn’t get unwieldy and hard to follow.
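
As a rough sketch of what such an acceptance test could look like, here’s an example using Jest and supertest against a hypothetical Express app, with made-up seeding helpers (seedWidgets, clearWidgets) standing in for whatever data-seeding mechanism your project uses:

```typescript
import request from 'supertest';
import { app } from './app';                               // hypothetical Express app
import { seedWidgets, clearWidgets } from './testSeeding'; // hypothetical seeding helpers

describe('GET /users/me/widgets', () => {
  beforeEach(async () => {
    // Shared seeding keeps the individual tests short and readable.
    await clearWidgets();
    await seedWidgets('user-123', [{ id: 'w-1', name: 'Widget One' }]);
  });

  it('returns the widgets the signed-in user has purchased', async () => {
    const response = await request(app)
      .get('/users/me/widgets')
      .set('Authorization', 'Bearer valid-token-for-user-123');

    expect(response.status).toBe(200);
    expect(response.body).toEqual([{ id: 'w-1', name: 'Widget One' }]);
  });

  it('returns a 401 when the auth token is invalid', async () => {
    const response = await request(app)
      .get('/users/me/widgets')
      .set('Authorization', 'Bearer not-a-real-token');

    expect(response.status).toBe(401);
  });
});
```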

Step 2: Whiteboard out the behaviors of the individual abstract pieces and construct test cases for each individual piece in isolation.

At this point, the team should know the dependency tree, which classes depend on which public interface contracts, and that the classes can be built out in isolation. Before they can be built, we need to determine what constitutes correct behavior for each class. We as developers already do this in our brains when we start writing code. The difference here is that we establish those behaviors intentionally, up front, and capture them as a form of documentation with the entire team. When these classes are coded and these behaviors are met, our code works (but isn’t yet complete).

Step 3: Write the established expected behaviors as tests (don’t write production code yet).

Keep in mind: This is where the team can break up and start tackling behaviors individually. This is the task that always seems the most daunting for developers—and if your intended code path isn’t designed well in the steps above, it is the most daunting piece. Test setup is by far the biggest detractor from test driving. To combat this natural form of pain, start by writing your test blocks and only write the expectation. Yes, there will be build errors. There’s no code written yet!
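
Using the illustrative PurchasedWidgetService from earlier, the expectation-only stage might look like this. Nothing it references exists yet, so it won’t build, and that’s the point of this step:

```typescript
// Only the expectations are written; widgets, result, and InvalidTokenError
// don't exist yet, so this intentionally doesn't build.
describe('PurchasedWidgetService', () => {
  it('returns the widgets purchased by the user behind the token', async () => {
    expect(widgets).toEqual([{ id: 'w-1', name: 'Widget One' }]);
  });

  it('rejects when the auth token cannot be resolved to a user', async () => {
    await expect(result).rejects.toThrow(InvalidTokenError);
  });
});
```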

Step 4: Finish building one test (don’t write production code yet).

After your expectations (or Asserts) are written out, start finishing the construction of the test, with test setup and everything. Mock your dependencies if you have them and get the test setup and execution to match your expectation exactly.
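
Continuing the illustrative example, the first test fully built out might look something like this, with the service’s two dependencies mocked via Jest:

```typescript
import { PurchasedWidgetService } from './PurchasedWidgetService'; // hypothetical module

it('returns the widgets purchased by the user behind the token', async () => {
  // Mock the dependencies so the test exercises only the coordinating service.
  const authTokenService = { getUserId: jest.fn().mockResolvedValue('user-123') };
  const widgetRepository = {
    findPurchasedByUserId: jest.fn().mockResolvedValue([{ id: 'w-1', name: 'Widget One' }]),
  };
  const service = new PurchasedWidgetService(authTokenService, widgetRepository);

  const widgets = await service.getPurchasedWidgets('valid-token');

  expect(widgets).toEqual([{ id: 'w-1', name: 'Widget One' }]);
  expect(widgetRepository.findPurchasedByUserId).toHaveBeenCalledWith('user-123');
});
```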

Step 5: Get rid of build errors (ah, coding!).

Build out the method you’re testing. Build errors should go away. When the build errors are gone, you’re finally ready to start executing your test! If you’re working in an interpreted language, such as JavaScript, having a linter running in your editor can emulate build errors. This is a good time to clean those up as well.
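
For the illustrative service, that might mean nothing more than this, enough for the test file to build and the test to run (and fail):

```typescript
// Just enough code to make the build errors go away; no real behavior yet.
// AuthTokenService, WidgetRepository, and Widget are the contracts sketched earlier.
export class PurchasedWidgetService {
  constructor(
    private readonly authTokenService: AuthTokenService,
    private readonly widgetRepository: WidgetRepository,
  ) {}

  async getPurchasedWidgets(authToken: string): Promise<Widget[]> {
    throw new Error('Not implemented yet');
  }
}
```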

Step 6: Code until your first test passes.

Write as little code as you can to make that test pass, even if that means making the method return exactly the value you expect with no logic at all. Only write code that the test’s behavior is asking for.
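
In the illustrative example, the first passing version could be nothing more than a hard-coded return, replacing the method body in the skeleton above:

```typescript
// The least code that makes the first test pass: a hard-coded value, no logic.
async getPurchasedWidgets(authToken: string): Promise<Widget[]> {
  return [{ id: 'w-1', name: 'Widget One' }];
}
```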

Step 7: Code your other tests—one at a time.

Now that you have one test successfully set up, and the test says that the code satisfies its expected behavior, start finishing the other established tests. Write a test, then immediately get that test to pass. When it passes, move on to the next one. Run all of your tests every time to make sure you didn’t break anything. Warning: This might seem like a ridiculous exercise when you first start doing it. The first two tests you write will seem incredibly dumb and a big waste of time. But by the third test, you’ll have to start writing code that makes all three tests pass, and that’s the point where the power of test driving starts to make sense for many developers.
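
In the illustrative example, once a couple more tests pin down different users and different widget lists, the hard-coded return can no longer satisfy them all and the real coordination logic has to appear:

```typescript
// The behavior the later tests force: resolve the token to a user ID, then
// fetch that user's purchased widgets.
async getPurchasedWidgets(authToken: string): Promise<Widget[]> {
  const userId = await this.authTokenService.getUserId(authToken);
  return this.widgetRepository.findPurchasedByUserId(userId);
}
```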

When you have several tests in place and passing, begin considering your test setup for each test. What can be shared? What can be put into a setup method that’s called before every test executes? Make mental notes and comments in your code as you prepare for the next step.
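
One possible shape for that shared setup, continuing the illustrative unit tests from above: the mocks and the service under test get rebuilt before every test, so each test body only contains what’s specific to its behavior.

```typescript
// Shared setup for the PurchasedWidgetService unit tests (illustrative names).
let authTokenService: { getUserId: jest.Mock };
let widgetRepository: { findPurchasedByUserId: jest.Mock };
let service: PurchasedWidgetService;

beforeEach(() => {
  authTokenService = { getUserId: jest.fn().mockResolvedValue('user-123') };
  widgetRepository = { findPurchasedByUserId: jest.fn().mockResolvedValue([]) };
  service = new PurchasedWidgetService(authTokenService, widgetRepository);
});
```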

Step 8: Refactor your code.

This can’t be overlooked! What’s the point of working so hard to get highly maintainable and readable code if it looks like garbage? Sure, it passes the tests, which indicates that it works, but now you have confidence while you refactor. If you break something, you’ll know right away. Take this opportunity to make the code as clean and concise as humanly possible. With that being said, think about refactoring your tests as well, if you need to. Your code and your tests should be so clean, concise, and easy to read that your grandmother can look at your code and your test and have some clue as to what’s going on. Future you will thank you for it when you have to go back and change it later.

Step 9: When you’ve written all of your units and all of your unit tests pass, your integration tests should also pass.

In the development world, there aren’t many better feelings than when all of your acceptance tests pass for the first execution after all of your unit tests start passing. Well, I guess there’s one:

Step 10: Run your code in the debugger.

This is the gravy step—when you run your code in the debugger for the first time and it just works because all of your tests pass. That’s a pretty satisfying feeling!

At this point, your testing team’s test cases and concerns should have all been addressed up front in the development cycle. You can send things “over the wall” for them to beat up, confident that these bare-minimum behaviors are covered. If they find anything unexpected, now you have a new test case to code. When building out the fix, you’ll also have the confidence that you haven’t unknowingly broken anything that already exists. This is the beauty of test coverage. It isn’t the “code coverage” metric that everyone pursues; it’s a measure of behavior coverage, and that gives you the confidence that no matter what code you add to the application, this set of behaviors will always hold true until it’s explicitly changed. And those changes should be driven by more behavioral tests.

How does this speed up development?

Now, I can’t speak for everyone in the history of software development. I can only tell you what happened to me. I used to be on the fence, too. How could writing more code actually make me faster?

In my professional career, before I spent my time test driving my features, I did all of the above while I was coding. I was architecting, figuring out where I needed to separate out classes, after coding had already happened. I was thinking of cases that I needed to handle after I’d already been coding. And any time I had to make a change, I would pivot instead of plan. I know that sounds good from an “agile” standpoint, but from a coding perspective, a pivot while I was already in the weeds almost always meant a shift away from stability. There were many times that a test engineer and I would go back and forth on what the feature was actually doing, what the acceptance criteria meant, or what weird behavior they saw that I couldn’t recreate. When it took me a day and a half to code a feature, it would take at least that long to get it through testing because of the bug fixing.

So, did the simple process of designing my code up front with the test engineer take care of the churn by itself, or was it a combination of this with test driving? That I can’t answer. But I can say with full confidence that any code I successfully test drove made it to quality assurance (QA) faster, made it through QA faster, and made any functional additions or changes faster. I attribute this to several factors:

  • Fewer pivots at the time of coding
  • Fewer defects reported by QA and by our users
  • Less time in code review because of the documentation on behavioral tests
  • Less time writing unit tests
    • Building the tests out front makes your code testable.
    • Adding tests after the code creates bloated test setup and means writing tests for code that isn’t easily testable, which takes more time.
  • Less time writing the code
    • Adding tests after the code almost always resulted in my rearchitecting the code after it was already written so that it was testable.
  • QA and other team members already signing off on the expected behavior of the code and everyone having an understanding of how the code was expected to work
    • Leads to pair programming and concurrent coding tasks for the same feature.
  • Tests can run in parallel because they don’t depend on each other

Moral of the story? I highly recommend test driving your code. Even if the only payoff is that future you will happily maintain the codebase—that’s still a win! Further development on that feature will almost always be faster. After a team is proficient at creating tests first, this practice will keep getting faster. Couple this practice with proper use of feature flags and CI/CD practices, and your code could literally be deployed to production at any given moment, many times a day. And you’ll always have the confidence that test driving your code will keep you from releasing code that doesn’t behave in the way that has been specified by the tests.
