August 9, 2017

Fullstack Academy, W3-D2: Writing Tests For Your Code

We spent today learning about the principles of test-driven development (TDD) and behavior-driven development (BDD, basically the same thing), and how to write our own unit tests using the Mocha testing framework, the Chai assertion library, and the Supertest library for testing the handling of HTTP requests. Here's what I learned.

It was stressed to us that TDD, when implemented in ideal conditions, means "test-first development," where tests are written prior to (or at least concurrent with) the writing of your program. However, we spent the day writing unit tests for the Wikistack app we had already made. I found this odd at first, because it seemed like we were simply writing tests to pass for our pre-made code, as opposed to writing code to pass our pre-made tests. It seemed so superfluous. But our instructor made the point that it's not uncommon in the software industry for junior developers to be tasked with doing Quality Assurance and writing test suites for companies' existing code bases. The reasons he gave were that senior developers often do not want to spend time writing tests for their code when they could be applying their veteran skills to more "productive" activities, but they know it's important to have quality checks in place for every bit of code they write, so that bugs from any refactoring, modification, or integration of new features can be more easily detected and corrected. He also mentioned that letting juniors write tests for the code base often proves to be a valuable way for new members of the team to thoroughly learn it. There seems to be some economic sense to that pattern. However, for my own projects, I think I would like to practice writing at least minimal unit tests along the way to help me manage what I know will eventually become an unwieldy mass of code.

Here is a summary of conceptual or practical points that stuck out to me from today's lecture and exercises:

Why Testing Is Important:

  1. You need to confirm that your code not only works, but works as you intended and expected.
  2. You need to safeguard your code from potential regressions that may be introduced by code added later.
  3. You need documentation that details how every part of your code works and is expected to work, so that you, your team, and future teammates will be able to understand the design of every unit of the program. Tests themselves serve as very elucidating documentation, and they serve as solid scaffolding for any public-facing documentation that may need to be produced.
  4. You need to enforce and maintain standards for code throughout the life of the codebase and dev team to ensure consistency in the quality of your code. Requiring testing is one good means of ensuring such standards are enforced.

Best practices for TDD

  • Write tests as you go, so you don't tempt yourself to skip writing them at the end. The end product may seem to work properly on the surface, but without tests, you'll never know about bugs until it's too late.
  • Write a unit test first, then write the code; that way you know the only reason it passed the test is the new code you just wrote.
  • Isolate components to confirm that individual units work reliably.
  • Use "integration tests" to confirm that any new units or modules of an application work safely with the existing code before you actually merge them together.
  • Use coverage reporting to show what percentage of your code has been tested (although it can't tell you whether or not it was tested correctly).
  • Keep your test specs stateless and self-contained — able to run in random order, with no spec relying on other specs to pass.
  • Specs should be deterministic, never able to pass or fail randomly.

The Mocha Test Framework

  • Organizes specs hierarchically with describe blocks that contain it assertion blocks and nested describe sub-blocks, each of which creates a function scope for testing.
  • Enables setup and teardown of dummy test variables with before, beforeEach, after, and afterEach hooks that execute at the time indicated by their name in relation to subsequent specs.

Here's a nonsense demonstration of the Mocha syntax:

describe('Indicates what part of our code is being tested', () => {
    let var1, var2, var3; // need some non-constant variables in shared scope for our tests
    before('runs its callback only once prior to all the subsequent it-specs', () => {
        var1 = []; // ...use your imagination
    });
    beforeEach('runs once prior to each subsequent it-spec', () => {
        var2 = 0;
        var3 = {}; // ...use your imagination
    });
    it('states precisely what you expect the aforedescribed piece of code to do', () => {
        // use the code you're testing to return a value you can check
    });
});

Installation & Use

To use Mocha, run npm install --save-dev mocha to add it to your devDependencies. Then you can run Mocha in your terminal with the mocha command, passing to it as arguments the filepath(s) of your test .js files, e.g. mocha myApp.specs.js. So you'll need to create .js files in your project directory for writing the tests themselves. Putting them in their own tests directory and giving them uniform file extensions like .spec.js or .test.js will make referencing them easier and more intuitive. Once in place, you can make running your tests even more convenient by editing your project's package.json file and adding a "test" command with the value "mocha ./tests" in the "scripts" property, like so:

{
  "name": "myRadApp",
  "version": "1.0.0",
  "description": "churns mad bytes",
  "main": "index.js",
  "scripts": {
    "test": "mocha ./tests"
  }
  // more stuff
}

This will allow you to run your test files from the command line with the command npm test.

Testing Synchronous Code:

Synchronous tests in Mocha are straightforward, as synchronous code tends to be. The it callback either runs to completion, passing the spec, or throws an error, failing it.

  • Any errors that are thrown in our it blocks will be caught by Mocha and reported to us.
  • We could write logic to throw errors if our code isn't behaving correctly, but assertion libraries like Chai provide a more concise and easy-to-read syntax that will do that under the hood for us.
  • Mocha does not have a built-in assertion library, so we use Chai with it.
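To illustrate the "under the hood" point above: an assertion is just code that throws a descriptive error when a check fails. Here's a toy version of Chai's expect (not the real library, just the idea):

```javascript
// Toy assertion helper illustrating what an assertion library does under the
// hood: compare values and throw a descriptive error on mismatch.
function expect(actual) {
  return {
    to: {
      equal(expected) {
        if (actual !== expected) {
          throw new Error(`expected ${actual} to equal ${expected}`);
        }
        return true;
      },
    },
  };
}

// Inside a Mocha `it` block, a thrown error fails the spec;
// no error means the spec passes.
expect(2 + 2).to.equal(4); // passes silently
```

The real Chai is far richer, but every chained assertion ultimately boils down to this throw-on-mismatch behavior, which is exactly what Mocha catches and reports.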

Testing Asynchronous Code:

Some extra thought has to be given to how to write tests for asynchronous code. If not properly implemented, an it block that depends on the result of an async function could finish executing without ever getting that value and pass/fail the spec without throwing an error to tell you what went wrong. Mocha supports two main ways of testing asynchronous code:

  • using the optional done parameter of the it() callback, which Mocha uses to detect async tests
  • returning Promises

Using done to Handle Async:

  • Declaring a formal parameter in the it callback signals to Mocha that the spec is async, and that it should therefore wait for done to be called before evaluating pass/fail.
  • We can name this function whatever we want, but done is used out of convention.
  • An async spec using done will have any of these possible outcomes:
    • If an error is synchronously thrown in the it function, the test fails.
    • If done is not called within a preset or chosen timeout (e.g. 2 secs), the test fails.
    • If done is called with a truthy value (e.g. an error object), the test fails.
    • If done is called with a falsey value (e.g. no argument), the test passes.

Here's a trivial example of how done works:

describe('The part of code you are testing', () => {
    it('does something async', done => { // `done` is provided by Mocha for signaling the end of async operations
        setTimeout(() => {
            console.log('I am in timeout!');
            done(); // this tells Mocha, "I'm done with my async stuff now!"
        }, 1000);
    });
});

Using Promises to Handle Async:

  • If we return a promise from an it callback, Mocha will detect that the spec is async. It's really, really important not to forget to return any promise produced by your async code; otherwise Mocha won't wait for it to settle before evaluating the test.
    • If an error is synchronously thrown in the it function, the test fails.
    • If the promise does not settle within a preset or chosen timeout, the test fails.
    • If the promise rejects, the test fails.
    • If the promise fulfills, only then may the test pass or fail based upon the result of the code being tested.

Working with an Object-Relational Mapper (ORM) for CRUD database operations is a case where you may be testing a lot of async code with promises. Here's an example of what that might look like:

describe('Creating/Updating Page Model Instances', function () {
  let validPage, invalidPage;

  beforeEach(() => {
    validPage = Page.build({ // `Page` is our Sequelize model, defined elsewhere; this instance should pass specs
      title: 'How To Sequelize',
      content: 'Lots of cool content!',
      status: 'closed',
      tags: 'code, software, databases, ORM'
    });
  });

  describe('Attribute Validations with page.validate()', () => {
    describe('page.title validations', () => {
      beforeEach(() => {
        invalidPage = Page.build({ // this page instance is engineered to fail
          title: null, // our models are elsewhere defined to reject null titles
          content: 'Lots of cool content!',
          status: 'closed',
          tags: 'code, software, databases, ORM'
        });
      });

      it('throws an error with message "title cannot be null" if title is null', () => {
        return invalidPage.validate() // must `return` this Promise for Mocha to wait for it to settle
          .then(result => {
            throw Error('Error: Incorrectly validated bad title');
          }, err => {
            expect(err.errors[0].message).to.equal('title cannot be null');
          });
      });

      it('returns the validated page instance if given a valid page.title', () => {
        return validPage.validate() // must `return` this Promise for Mocha to wait for it to settle
          .then(result => {
            // validation succeeded; assert on `result` here
          }, err => {
            throw Error(err);
          });
      });
    });
  });
});

The Chai Assertion Library & Related Plugins

So, Mocha is a testing framework, but it doesn't actually come with any built-in methods for checking the output of a unit of your code against boolean assertions you make about what the output ought to be. This is where Chai comes in. Chai provides a library of assertion functions and methods that let you meaningfully and semantically write tests that will pass or fail your code depending on whether it returns the expected output. In the Using Promises to Handle Async section above, you'll notice a function called expect. That's actually from Chai, not Mocha. The expect function and chaining properties like .to.equal or .to.deep.equal are assertion utilities that check exactly what it sounds like they check.

  • Assertions are an easy way of writing code that throws an error with a descriptive message when a check fails.
  • Chai has two main flavors: BDD-style (should and expect) and TDD-style (assert).
  • Most assertions take the form of a chained method call, e.g. expect(value).to.equal(expected).
  • Some assertions use an implicit getter with no method call at all, e.g. expect(someValue).to.be.null.
  • Testing function behavior — calls, arguments, return values — is made easier using "spies"
    • chai-spies is a lightweight plugin enabling spy creation and tracking.
    • sinon-chai is a more thorough and complex spy plugin, built on the Sinon library.
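The spy idea itself is simple: wrap a function and record how it was called. This hand-rolled version (the concept only, not the chai-spies or sinon-chai API) shows the kind of bookkeeping those plugins do for you:

```javascript
// Concept sketch of a test spy: wraps a function and records its calls,
// arguments, and return values so specs can assert on them.
function makeSpy(fn) {
  const spy = (...args) => {
    const result = fn(...args);
    spy.calls.push({ args, result }); // one entry per invocation
    return result;
  };
  spy.calls = [];
  return spy;
}

const add = (a, b) => a + b;
const spiedAdd = makeSpy(add);
spiedAdd(2, 3);
spiedAdd(10, 4);

console.log(spiedAdd.calls.length);    // 2
console.log(spiedAdd.calls[0].args);   // [ 2, 3 ]
console.log(spiedAdd.calls[1].result); // 14
```

In a real spec you'd pass the spy anywhere a callback is expected, then assert it was called the right number of times with the right arguments.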

The Supertest Library

  • Enables easy async testing of any Node-style HTTP server app function, such as an Express app
  • Returns promises
  • Has its own expect method, different from Chai's
  • Lets you make requests, send request bodies, test response status & headers, etc.