Creating a Software Ecosphere: MyBooks Tutorial: Step 4 – Node RESTful Webservices – Restify CRUD

Welcome Back!

Overview

If you’re new to the series, you can see all the current steps here.  As always, the code is available on GitHub; the main branch contains all the steps to date, and I’ve branched each step starting at step003. I’m not going to document every step in detail, so I’d strongly suggest you either use the GitHub branch code directly, or have it open as you follow along.

Post Step 3, I did a bunch of cleanup:

  • Added eslint-plugin-chai-expect to clean up my lint errors/warnings
  • Changed the setup of orm.js to fix WebStorm inspection and linting errors/warnings
  • Changed some eslintrc rules
  • Added in validation rules for string length to the sequelize models
  • Added in the rest of the integration tests for the persistence (sequelize) layer
  • Added in file headers and license.md
  • Added bluebird for promises
  • Added istanbul for test-coverage
  • Added sinon for test stubs, spies, and mocks
  • Forked sequelize so validate validates data types and to make it play better with WebStorm

The biggest change was forking sequelize. I’m using WebStorm, and its inspection process uses the js/esdoc over the function signatures, which resulted in a lot of false warnings. I filed a bug with the sequelize folks, but they aren’t interested in changing the documentation since it works for them (and it produces nice documentation, but it plays hell with anything that uses the doc declarations for expansion/inspection).  Also, the validate functionality doesn’t do type checking, and that’s important to me (validate early and often!). I’d rather avoid a round trip to the db to find out there are mismatched type values. My fork can be found on GitHub at rpcarver/sequelize. Yes, there is overhead in trying to keep a fork current, but that doesn’t much matter to me as this is just a tutorial. For my own production apps, I’ll probably end up using mysql2 directly to avoid the overhead of sequelize, but I’ll burn that bridge after doing some benchmarking. To install my fork, uninstall sequelize and install my branch (NOTE: portions of the model integration tests will fail if you use the upstream sequelize/sequelize):

npm uninstall --save sequelize
npm install --save rpcarver/sequelize

If you’re all caught up, then let’s be about it…

Install Restify and Loggers

While we’re installing restify and restify-router, we’re also going to install logging.   I’ll be using pino and pino-http because they’re hella fast.  There is also a restify-pino-logger, but it’s just a package wrapper around pino-http, so rather than pollute our module space, we’ll use pino-http. We’ll also declare restify-errors explicitly, since our controller will use it to build error responses (restify pulls it in as a dependency, but it’s better to declare what we require directly). Easy peasy…

npm install --save restify restify-router restify-errors pino pino-http

Sequelize Logging and App Logging with pino

We’ll need to set up our logging options.   This will continue logging to stdout, but by using external processes (so we don’t block the event loop) we can easily redirect it to a file, use pino-tee to split it into multiple files, or even ship it off to a service like Elasticsearch.
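For example, the app itself never changes; it’s all shell plumbing. Something along these lines (the pino-tee invocation is a sketch based on its docs, adjust paths to taste):

node app.js > app.log                                # redirect everything to a file
node app.js | pino-tee warn ./warn.log > ./all.log   # split warnings into their own file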

Testing the logger, capturing stdout

So we’re going to write the tests (doing some TDD instead of BDD 😉 ).  Before we do that, in order to test the logger, we need to capture stdout.  We do that by replacing the process.stdout.write function with our own function.   Here’s a class that lets us do that (mybooks-api/utils/capture-stdout.js).   BTW, if you’re not stealing your ideas, at least some of the time, you’re working too hard.   I stole this one from Preston Guillory, except that instead of also copying the output through to stdout, I swallow it all so our test output (which also uses stdout) doesn’t look all weird:

class CaptureStdout {
  constructor() {
    this._capturedText = [];
  }

  /**
   * Starts capturing the writes to process.stdout
   */
  startCapture() {
    this._orig_stdout_write = process.stdout.write;
    process.stdout.write = this._writeCapture.bind(this);
  }

  /**
   * Stops capturing the writes to process.stdout.
   */
  stopCapture() {
    process.stdout.write = this._orig_stdout_write;
  }

  /**
   * Private method that is used as the replacement write function for process.stdout
   * @param {string} string - the text being written to stdout
   * @private
   */
  _writeCapture(string) {
    this._capturedText.push(string.replace(/\n/g, ''));
  }

  /**
   * Retrieve the text that has been captured since creation or since the last clear call
   * @returns {Array} of Strings
   */
  getCapturedText() {
    return this._capturedText;
  }

  /**
   * Clears all of the captured text
   */
  clearCaptureText() {
    this._capturedText = [];
  }
}

module.exports = CaptureStdout;
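Before we lean on it in the tests, here’s the basic pattern (a quick sketch):

const CaptureStdout = require('./utils/capture-stdout');

const capture = new CaptureStdout();
capture.startCapture();
process.stdout.write('{"msg":"hello"}\n');   // captured, never printed
capture.stopCapture();
console.log(capture.getCapturedText());      // [ '{"msg":"hello"}' ]
capture.clearCaptureText();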

Logger TDD, the tests

In TDD we define the functionality by writing the tests first.  If you’re of the school of thought that a week’s worth of coding saves a day’s worth of planning, laying out your requirements first via TDD or some other method probably isn’t for you.   But then, I probably wouldn’t enjoy supporting your code in production, either ;).   We’re mirroring our implementation directory structure in the test directory, and I’m using unit and integration suffixes to differentiate the types of tests, so this test lives in mybooks-api/tests/config/logger-unit.js. I’m striving for unit tests here, not integration tests; hence the stubs, spies, and mocks you’ll see shortly. Here are the tests for our logger wrapper:

...
describe('util - logger', () => {
  const capture = new CaptureStdout();
  const oldLevel = logger.level;

  beforeEach(() => {
    logger.level = 'debug';
  });

  after(() => {
    logger.level = oldLevel;
  });

  it('should return a logger with the expected functionality', () => {
    expect(logger).is.an('Object');
    expect(logger).has.property('level');
    expect(logger).respondsTo('trace');
    expect(logger).respondsTo('sql');
    expect(logger).respondsTo('debug');
    expect(logger).respondsTo('info');
    expect(logger).respondsTo('warn');
    expect(logger).respondsTo('error');
    expect(logger).respondsTo('fatal');
  });

  it('should log a sql message at a level of 15', () => {
    logger.level = 'sql';

    const msg = 'this is a message';
    capture.startCapture();
    logger.sql(msg);
    capture.stopCapture();
    const json = capture.getCapturedText().map(JSON.parse);
    capture.clearCaptureText();

    expect(json[0]).has.property('msg').which.equals(msg);
    expect(json[0]).has.property('level').which.equals(15);
  });

  it('should log messages greater than the log level', () => {
    logger.level = 'debug';

    const msg = 'this is a message';
    capture.startCapture();
    logger.fatal(msg);
    logger.error(msg);
    logger.warn(msg);
    logger.info(msg);
    logger.debug(msg);
    capture.stopCapture();
    const json = capture.getCapturedText().map(JSON.parse);
    capture.clearCaptureText();

    expect(json).has.lengthOf(5);
    expect(json[0]).has.property('msg').which.equals(msg);
    expect(json[0]).has.property('level').which.equals(60);
    expect(json[1]).has.property('msg').which.equals(msg);
    expect(json[1]).has.property('level').which.equals(50);
    expect(json[2]).has.property('msg').which.equals(msg);
    expect(json[2]).has.property('level').which.equals(40);
    expect(json[3]).has.property('msg').which.equals(msg);
    expect(json[3]).has.property('level').which.equals(30);
    expect(json[4]).has.property('msg').which.equals(msg);
    expect(json[4]).has.property('level').which.equals(20);
  });

  it('should log nothing for log messages less than the log level', () => {
    logger.level = 'debug';

    const msg = 'this is a message';
    capture.startCapture();
    logger.debug(msg);
    logger.trace(msg);
    capture.stopCapture();
    const json = capture.getCapturedText().map(JSON.parse);
    capture.clearCaptureText();

    expect(json).has.lengthOf(1);
  });

  it('should log nothing when the log level is silent', () => {
    logger.level = 'silent';

    const msg = 'this is a message';
    capture.startCapture();
    logger.fatal(msg);
    logger.error(msg);
    logger.warn(msg);
    logger.info(msg);
    logger.debug(msg);
    logger.sql(msg);
    logger.trace(msg);
    capture.stopCapture();
    const json = capture.getCapturedText().map(JSON.parse);
    capture.clearCaptureText();

    expect(json).has.lengthOf(0);
  });
});

As you can see, our requirements call for a pretty much off-the-shelf logger with a custom sql level.   What you’re looking at is the final test.   It might not be pure TDD, since coupling to pino’s implementation (next) makes the tests a bit brittle, but I’m willing to live with that.

Logger wrapper/extension and tying it into Sequelize

Now we implement it.  In the config directory let’s add logger.js with the following content:

const pino = require('pino');
const env = require('./env');
const pjson = require('../package.json');

const logger = pino({
  name: pjson.name,
});

/*
 * 10 - trace
 * 20 - debug
 * 30 - info
 * 40 - warn
 * 50 - error
 * 60 - fatal
 */
logger.addLevel('sql', 15);
logger.level = env.LOGGING_LEVEL;

module.exports = logger;

The custom logging level gives me some more granularity and lets me pull out the sql statements easily if I need to.   The level sits between trace (10) and debug (20).   This also lets me log everything down to the debug level without the sql statements (which is typically what I do).
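A quick sanity check of how the wrapper behaves; the sql level acts just like the built-in ones:

const logger = require('./config/logger');

logger.info('an info message');         // level 30
logger.sql('SELECT 1;');                // level 15, our custom level
logger.level = 'debug';                 // now only 20 and up...
logger.sql('this will not be logged');  // ...so this is skipped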

Now that we have a logger, we can use it anywhere (to log exceptions especially!) using standard logger calls.   So let’s tie it into sequelize so we have a single logger.   First off, I removed the DATABASE_QUERY_LOGGING entry in env.js; we can use the sql log level to decide whether to log the sql query messages or not.   Then in orm.js we set the logging property to the logger.sql function, so that sequelize calls logger.sql(msg) for all of its log output:

...
const logger = require('../config/logger');
...
// connect to the database.
const sequelize = new Sequelize(env.DATABASE_NAME, env.DATABASE_USERNAME, env.DATABASE_PASSWORD, {
  host: env.DATABASE_HOST,
  port: env.DATABASE_PORT,
  dialect: env.DATABASE_DIALECT,
  pool: {
    max: env.DATABASE_POOL_MAX,
    min: env.DATABASE_POOL_MIN,
    idle: env.DATABASE_POOL_IDLE,
  },
  define: {
    timestamps: false,
  },
  logging: logger.sql,
});
...
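With that wired in, the LOGGING_LEVEL value controls query visibility with no code changes. For example, in your env file (or however env.js sources it; the exact file is up to your setup):

LOGGING_LEVEL=sql    # 15 and up: sql queries are logged
LOGGING_LEVEL=debug  # 20 and up: sql queries are suppressed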

The first controller – writing the tests

Again, the tests were written, then run to make sure they fail (nothing worse than a false positive!), then the code was written, and then I reworked both until I was happy and all the tests passed.   I didn’t change the functionality that I defined in my first go at the tests; I just refined how they work with the controller implementation.   To make the tests easier, I’ve added sinon, which lets us write quick stubs, spies, and mocks.   We want to start simple, so we’re going to start with the /books implementation, which returns all books and doesn’t require anything on the request.   For brevity, I’m not going over mock-response in detail; it’s in the repository under utils, but a minimal sketch follows.
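Something like this captures the idea (a minimal sketch; the version in the repository may differ):

// utils/mock-response.js - minimal sketch; the repository version may differ
class MockResponse {
  constructor() {
    this.statusCode = null;
    this.payload = null;
  }

  status(code) {
    this.statusCode = code;
  }

  json(payload) {
    this.payload = payload;
  }
}

module.exports = MockResponse;

And here’s the test: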

require('../../config/env');
const CaptureStdout = require('../../utils/capture-stdout');
const sinon = require('sinon');
const chai = require('chai');
const orm = require('../../models/orm');
const controller = require('../../controllers/books-controller');
const MockResponse = require('../../utils/mock-response');

const expect = chai.expect;
const allBooks = [
  { bookID: 1, title: 'booger1' },
  { bookID: 2, title: 'booger2' },
  { bookID: 3, title: 'booger3' },
];


describe('controllers - books - getAll', () => {
  let sandbox;

  beforeEach(() => {
    sandbox = sinon.sandbox.create();
  });

  afterEach(() => {
    sandbox.restore();
  });

  it('should have a getAll function', () => {
    expect(controller).has.property('getAll');
  });

  it('should return all books via the response and set the status code', async () => {
    // stub out the persistence call
    const findAll = sandbox.stub(orm.books, 'findAll');
    const next = sandbox.stub();
    // request is empty
    const req = {};
    const res = new MockResponse();
    // use spy to confirm the response was called as expected
    sandbox.spy(res, 'json');
    sandbox.spy(res, 'status');
    // stubbed persistence call is a promise, make sure we pretend we're one.
    findAll.resolves(allBooks);

    // fire the controller; the stubbed persistence call resolves asynchronously
    controller.getAll(req, res, next);
    // yield to the event loop so the promise chain can settle
    await res;
    await next;

    expect(res.json.calledOnce).is.true;
    expect(res.json.getCall(0).args[0]).deep.equals(allBooks);
    expect(res.status.calledOnce).is.true;
    expect(res.status.getCall(0).args[0]).to.equal(200);
  });

  it('should error and log an error message if the persistence call throws an error', async () => {
    const captureStdout = new CaptureStdout();
    const msg = 'some error text here';
    // stub out the persistence call
    const findAll = sandbox.stub(orm.books, 'findAll');
    const next = sandbox.stub();
    // request is empty
    const req = {};
    const res = new MockResponse();
    // use spy to confirm the response was called as expected
    sandbox.spy(res, 'json');
    sandbox.spy(res, 'status');
    // stubbed persistence call is a promise, make sure we pretend we're one.
    findAll.rejects(Error(msg));

    captureStdout.startCapture();
    controller.getAll(req, res, next);
    // yield to the event loop so the promise chain can settle
    await res;
    await next;

    captureStdout.stopCapture();
    const json = captureStdout.getCapturedText().map(JSON.parse);
    captureStdout.clearCaptureText();

    expect(res.json.notCalled).is.true;
    expect(res.status.notCalled).is.true;
    expect(next.called).is.true;
    expect(json).has.lengthOf(1);
    expect(json[0]).has.property('msg').contains(msg);
    expect(json[0]).has.property('level').which.equals(50);
  });
});

The Controller

The controller setup tries to make full use of restify and sequelize.  It’s fairly straightforward, and we went over the requirements in the tests.   Restify gives us the ability to pass an error to next() from the promise chain, and it will automatically set the status and header information appropriately.   We’ll test that functionality later during integration tests.

const orm = require('../models/orm');
const errors = require('restify-errors');
const logger = require('../config/logger');

module.exports = {
  getAll: (req, res, next) => {
    orm.books.findAll({ include: [orm.publishers, orm.formats, orm.authors] })
      .then((result) => {
        res.status(200);
        res.json(result);
        return next();
      })
      .catch((error) => {
        logger.error(`books.findAll error: ${error}`);
        return next(new errors.InternalError('Error retrieving all books'));
      });
  },
};
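When that catch branch fires, restify serializes the InternalError into a 500 response for us. The body should look something like this (the exact shape depends on your restify/restify-errors version):

{
  "code": "InternalError",
  "message": "Error retrieving all books"
}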

Finally!  The route and the event loop!

Open up app.js. First we bring in our dependencies, then we create the server and add the routing for books, calling the controller function we just set up.   Then we start the listener, which verifies the database connection on startup:

const restify = require('restify');
const env = require('./config/env');
const orm = require('./models/orm');
const booksController = require('./controllers/books-controller');

const app = restify.createServer();
app.get('/books', booksController.getAll);

app.listen(env.API_PORT, () => {
  orm.sequelize.authenticate()
    .then(() => {
      console.log('DB Connection Established');
    })
    .catch((err) => {
      console.error('DB Connection Failed: ', err);
      process.exit(1); // no point in serving requests without a db
    });
  console.log(`app listening at port ${env.API_PORT} for ${env.NODE_ENV} environment.`);
});

I suggest you sync up with step004 in the GitHub repository (I left out some boilerplate here in the interest of brevity).   Once you’ve done that, from the command line (or your IDE of choice):

npm start

you should see something like:

> mybook-api@1.0.0 start /somepath/mybooks/mybook-api
> NODE_ENV=development node app.js

app listening at port 8080 for development environment.
DB Connection Established
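You can also poke the endpoint from another terminal if you prefer the command line to a browser:

curl http://localhost:8080/books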

Open a browser to localhost:8080/books and…. wait… what?   🙂   Just an empty JSON array.   Well, we don’t have any data in the db, what did you expect?    Let’s quickly squirt some data into the dev db (you are pointing at the dev db in your .env.dev, aren’t you?).    Don’t forget to stop the server.

Seed the development db

In the scripts directory, I created an init-dev-db.js and used bulkCreate to load a bunch of data. Note that if I were working with a team and didn’t have my own db instance, this is probably NOT how I’d handle it; nothing worse than having some of your development data go away when you least expect it.   And yes, there are no author/book relationships yet.  Later on I’ll loop back and add some authors to the books:

const orm = require('../models/orm');

async function createAllData() {
  // drop all tables and recreate
  await orm.sequelize.sync({ force: true });
  
  await orm.locations.bulkCreate([
    { locationName: 'location1', description: 'dev location 1' },
    { locationName: 'location2', description: 'dev location 2' },
  ]); 
  await orm.formats.bulkCreate([
    { formatName: 'format1', description: 'dev format 1' },
    { formatName: 'format2', description: 'dev format 2' },
  ]); 
  await orm.authors.bulkCreate([
    { firstName: 'John', lastName: 'Smith' },
    { firstName: 'Ursala', lastName: 'LeGuin' },
    { firstName: 'Anne', lastName: 'McCaffrey' },
    { firstName: 'Spider', lastName: 'Robinson' },
  ]); 
  await orm.publishers.bulkCreate([
    { publisherName: 'Baen Books' },
    { publisherName: 'Ballantine Books' },
    { publisherName: 'Ace Books' },
    { publisherName: 'Gnome Press' },
  ]); 
  await orm.books.bulkCreate([
    {
      title: 'Book One',
      publisherID: 1,
      ISBN: 1234567890,
      printingYear: 1976,
      printingNum: null,
      formatID: 1, 
      rating: 10,
      locationName: 'location1',
      notes: 'This is a short note...',
    }, {
      title: 'Book Two',
      publisherID: 1,
      ISBN: 1234567890,
      printingYear: 1999,
      printingNum: null,
      formatID: 1, 
      rating: 10,
      locationName: 'location1',
      notes: 'It was a dark and stormy night...',
    }, {
      title: 'Book Three',
      publisherID: 2,
      ISBN: 1234567890,
      printingYear: 2017,
      printingNum: null,
      formatID: 2, 
      rating: 10,
      locationName: 'location2',
      notes: 'Meanwhile, our intrepid hero...',
    },
  ]);
}

// wait for the async work to finish before exiting
createAllData()
  .then(() => {
    console.log('done loading data!');
    process.exit(0);
  })
  .catch((err) => {
    console.error('failed loading data: ', err);
    process.exit(1);
  });

Add a script entry to package.json:

"init-dev-db": "NODE_ENV=development node ./scripts/init-dev-db.js",

Again from the command line:

npm run init-dev-db

Restart the server, pull up the browser (if you work with JSON a lot, install a good beautifier; I use JSONView) and….

[
  {
    "bookID": 1,
    "title": "Book One",
    "publisherID": 1,
    "ISBN": 1234567890,
    "printingYear": 1976,
    "printingNum": null,
    "formatID": 1,
    "rating": 10,
    "notes": "This is a short note...",
    "locationName": "location1",
    "publisher": {
      "publisherID": 1,
      "publisherName": "Baen Books"
    },
    "format": {
      "formatID": 1,
      "formatName": "format1",
      "description": "dev format 1"
    },
    "authors": []
  },
  {
    "bookID": 2,
    "title": "Book Two",
    "publisherID": 1,
    "ISBN": 1234567890,
    "printingYear": 1999,
    "printingNum": null,
    "formatID": 1,
    "rating": 10,
    "notes": "It was a dark and stormy night...",
    "locationName": "location1",
    "publisher": {
      "publisherID": 1,
      "publisherName": "Baen Books"
    },
    "format": {
      "formatID": 1,
      "formatName": "format1",
      "description": "dev format 1"
    },
    "authors": []
  },
  {
    "bookID": 3,
    "title": "Book Three",
    "publisherID": 2,
    "ISBN": 1234567890,
    "printingYear": 2017,
    "printingNum": null,
    "formatID": 2,
    "rating": 10,
    "notes": "Meanwhile, our intrepid hero...",
    "locationName": "location2",
    "publisher": {
      "publisherID": 2,
      "publisherName": "Ballantine Books"
    },
    "format": {
      "formatID": 2,
      "formatName": "format2",
      "description": "dev format 2"
    },
    "authors": []
  }
]

Wrap up

We’ve got a single method and the start of our RESTful API.   The next step will be to clean up our routing, add in the other GET endpoints, and test and publish the API to the Google Cloud Platform.  Before we publish destructive services, we’ll add in some authentication and authorization as well.   Hope you’re enjoying this foray into creating a software ecosystem and, as always, if you have questions, don’t hesitate to ask!
