
NOTE: Any manual changes to the Github Wiki will be overwritten by the next push (merged PR) to the master branch.

Welcome to the Manual/Wiki for the Hapi REST Server Template. These files should be updated whenever we merge to master, and kept in sync between the README files in the repo and the Github Wiki.

We use a Github Action to accomplish this. The action is executed on merge to the master branch and is defined inside the ./.github/workflows/ directory(github) (along with all other Github Actions); it calls a script(github) in our scripts/actions directory.

The script retrieves the following markdown files (defined in ./.esdoc.json) and copies them to a temporary directory, then uses the wiki-page-creator-action to update the wiki.
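
As a rough sketch only (the real script lives in scripts/actions, and the exact shape of ./.esdoc.json may differ), the copy step could look something like this:

// Sketch: copy the manual's markdown pages to a temp directory for the wiki action.
// Assumption: the file list sits under a manual.files key in ./.esdoc.json.
const fs = require('fs');
const os = require('os');
const path = require('path');

const esdocConfig = JSON.parse(fs.readFileSync('./.esdoc.json', 'utf8'));
const files = esdocConfig.manual.files;

const tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'wiki-'));
files.forEach((file) => {
  fs.copyFileSync(file, path.join(tmpDir, path.basename(file)));
});
console.log(`Copied ${files.length} wiki pages to ${tmpDir}`);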

Table of Contents

  • Setup - dependencies and how to install and start development (eventually production?)
  • Entry Point - How we start and set up the server process to handle requests
  • Controllers - mappings from url endpoints to methods for handling requests
  • Dataservices - classes/methods that connect to external services or datastores
  • Helpers - classes that are used frequently for interacting with other systems/tools
  • Logging - Explains how logging works and how to add/view logs as a developer
  • Git Hooks - Notes about workflow enforcements
  • Test - files used to run the unit tests (eventually automated tests)


Setup

Requires Node 10+. Please install this before attempting to run. The easiest way to install and manage Node versions is with the Node Version Manager. We define a .nvmrc file in the project to pin this minimum Node version.

Use nvm use 10 to switch Node versions after installing a new one. Then install all project dependencies(github) with npm install (or npm install -D if development work is planned). This should install the Hapi server framework, the MariaDB client library, and the other libraries used in this project.

If you have cloned the master branch, the build should compile and all tests should pass once the dependencies have been downloaded and installed. To compile the server only, use npm run build; to run just the tests, use npm run test. To watch the files for changes and rebuild/retest on change, use npm run start-watch. See the package.json(github) for all possible npm commands.

The Note Controller/Dataservice endpoints require MariaDB to be installed to store the notes passed in. After you have installed MariaDB, you will need to set up the initial database using mariadb and a SQL program (or scripts, once we get those working). Once the database is set up, the database name, user, password and port are configured in conf/config.yaml(github).
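
As a rough illustration only (the key names here are hypothetical; check conf/config.yaml(github) for the real structure), the database settings might look like:

# Hypothetical sketch of the database block in conf/config.yaml
database:
  name: notes
  user: notes_user
  password: changeme
  port: 3306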

Each endpoint that is created should have an OpenAPI representation in openapi.yaml. This file can be used to run Swagger UI to help with running/testing the endpoints developed. There is a Swagger application at the API link in the docs, or you can view the raw file(github). (Eventually we will autogenerate the OpenAPI file from code annotations and create Postman test suites based on these files.)

Webpack

The Webpack config file(github) manages our webpack build, which compiles the application.

In this file we (a trimmed config sketch follows this list):

  • designate the files that will be created with the build
  • define the source mapping that will be generated with the output (this should be changed in prod)
  • set the target environment and available globals
  • define where we should look to import files
  • and set up modules/plugins for the build
    • Babel and ESLint Loaders
    • Flow Integration with Webpack
    • Hot Module Replacement for rebuilding on file changes
    • Webpack Shell Plugin to re-run Mocha tests on file changes/rebuild
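
A trimmed sketch of what those pieces look like in a webpack config (entry names, paths, and loader choices here are illustrative; see the actual file(github) for the real setup):

// Trimmed sketch of the webpack config -- entry names, paths, and options are illustrative.
const path = require('path');

module.exports = {
  entry: { server: './src/index.js' },            // files created by the build
  output: { path: path.resolve(__dirname, 'dist'), filename: '[name].js' },
  devtool: 'eval-source-map',                     // source mapping (change for prod)
  target: 'node',                                 // target environment and globals
  resolve: { modules: ['src', 'node_modules'] },  // where to look to import files
  module: {
    rules: [
      // eslint-loader runs first (right to left), then babel-loader transpiles
      { test: /\.js$/, exclude: /node_modules/, use: ['babel-loader', 'eslint-loader'] },
    ],
  },
};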

Documentation

The docs/ directory is created with npm run doc; this generates an esdoc webpage from the README files in the repo, based on the modified template stored in docs/template. These docs are also synced with the Github wiki page via a Github Action (see the Github Actions/Scripts page for more details).

The Dependency Graph is created with Madge and graphviz; you will need to install both in order to update the dependency graph. I didn't include these in the package.json dependencies because I felt they are more than is needed to develop a working app or even write basic documentation. I also split this out into a separate npm script, npm run doc-image, so it can be run once someone actually installs those dependencies.

Install madge with npm install -g madge and install graphviz with brew install graphviz or port install graphviz (on macOS).


Application Entry Point

The Entry file(github) contains a main method, which is where we instantiate the server wrapper class and set the endpoints to be handled by our application. The routes and controllers (handler functions) are imported from each file in the controllers directory. We also set up the admin and docs endpoints (configurable in the conf directory).

We catch any errors that occur during this server setup so that we can print a helpful error message. We also want to ensure the application is closed down as gracefully as possible when the user asks, so we watch for OS signals from the user, then nicely ask our server to shut itself down, and finally ask mariadb to close any remaining open connections.

We also add handlers for unhandledRejection exceptions at the end of the file, so that the error is printed to output before the program exits. (We may also want to log this to a notification system?)
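
Putting those pieces together, the entry file has roughly this shape (the Server wrapper class and its method names are illustrative, not the actual API):

// Sketch of the entry file's shape -- the Server wrapper and its methods are illustrative.
import Server from './helpers/server';
import controllers from './controllers';

async function main() {
  const server = new Server();
  server.setRoutes(controllers); // routes/controllers imported from the controllers directory

  try {
    await server.start();
  } catch (err) {
    // print a helpful message if server setup fails
    console.error('Server failed to start:', err.message);
    process.exit(1);
  }

  // watch for OS signals and shut down gracefully, closing mariadb connections
  ['SIGINT', 'SIGTERM'].forEach((signal) => {
    process.on(signal, async () => {
      await server.stop();
      process.exit(0);
    });
  });

  // print unhandled promise rejections before exiting
  process.on('unhandledRejection', (err) => {
    console.error('Unhandled rejection:', err);
    process.exit(1);
  });
}

main();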

Notes/Ideas

  • IDEA: Generic Handler/Middleware that creates Request Details Object to pass to each controller

Controllers

Controllers are functions that handle requests to specific endpoint paths:

function controllerFunction(request) {
    // do stuff with the request, then return a response message
    return { message: "ok" };
}

The first parameter passed in is the HapiRequest object

We define the mapping from endpoint/method to controller function with a plain JavaScript object:

const map1 = {
  path: "/endpoint",
  method: "GET",
  controller: controllerFunction
}

To make this endpoint available to the server, we need to export an array of these mappings:

export default [ map1, map2, ... ];

Notes/Ideas

  • OPTIONS requests?
  • Do not instantiate classes to handle requests. This would be a huge memory overhead!
  • DO NOT USE SHARED CLASS PROPERTIES/STATEFUL VARIABLES IN CONTROLLER FUNCTIONS
    • These functions need to be stateless.
  • IDEA: Catch any exceptions that don't have response code and log or email indicating unexpected state
  • IDEA: generic CRUD endpoints with a flexible storage system for prototypes
    • /{objectType}/create
    • /{objectType}/read/{id}
    • /{objectType}/update/{id}
    • /{objectType}/delete/{id}
    • /{objectType}/all?filters?fields?
    • /{objectType}/search?q

Dataservices

Dataservices abstract the communication with storage systems or external APIs to fetch/store data related to a topic or for a specific UI component.

Examples

Note Dataservice

Create/Read Note objects out of a mariadb/mysql database
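
A rough sketch of the shape such a dataservice can take (the function names and table are illustrative; the insert/fetchOne calls assume the MariaDB helper described under Helpers):

// Sketch of a Note dataservice -- function names and table are illustrative.
import mariadb from '../helpers/mariadb';

export async function createNote(text: string) {
  // insert() is the MariaDB helper method described under Helpers
  return mariadb.insert('INSERT INTO notes (text) VALUES (?)', [text]);
}

export async function readNote(id: number) {
  return mariadb.fetchOne('SELECT * FROM notes WHERE id = ?', [id]);
}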

Notes/Ideas

  • Quick storage dataservice (abstracted away from specific object?)
    • takes object from post/put request and places in mongodb
    • objects indexed by another param
    • automatically assigned id
    • retrieves with get request
    • delete request
      • takes multiple object ids
    • search/or retrieve by property?

Helpers

Helpers are classes that modularize some functionality that is useful in the server.

Examples

MariaDB

Helper class for connecting with the mariadb server and saving/retrieving rows from tables (a sketch follows the method list):

  • query
  • insert
  • fetch
  • fetchOne
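
A hedged sketch of how such a helper might wrap the mariadb connection pool (method bodies are illustrative, not the actual implementation):

// Sketch of the helper's shape, wrapping a mariadb connection pool.
import mariadb from 'mariadb';

export default class MariaDB {
  constructor(config) {
    // host/user/password/database values come from the Config helper
    this.pool = mariadb.createPool(config);
  }

  query(sql, values) {
    return this.pool.query(sql, values); // raw query passthrough
  }

  async insert(sql, values) {
    const result = await this.query(sql, values);
    return result.insertId; // id of the newly inserted row
  }

  fetch(sql, values) {
    return this.query(sql, values); // all matching rows
  }

  async fetchOne(sql, values) {
    const rows = await this.query(sql, values);
    return rows[0]; // first matching row, or undefined
  }
}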

Config

Helper class for reading properties from the config file (a sketch follows):

  • provides a typed interface of these properties
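
A minimal sketch, assuming the YAML is parsed with js-yaml (class shape and method names are illustrative):

// Minimal sketch: parse conf/config.yaml once and expose typed getters.
import fs from 'fs';
import yaml from 'js-yaml';

export default class Config {
  constructor(file = 'conf/config.yaml') {
    this.props = yaml.load(fs.readFileSync(file, 'utf8'));
  }

  getString(key: string): string {
    return String(this.props[key]);
  }

  getNumber(key: string): number {
    return Number(this.props[key]);
  }
}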

Healthcheck

Helper for building the healthcheck response that is displayed at the <CONFIG.PATHS.healthcheck> endpoint (a sketch follows the list below).

  • Determines version/branch and if the server is running properly
  • Makes simple requests to configured dependency DB/External Services to see if they are available
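
A sketch of what building that response could look like (the field names and version lookup are assumptions, not the actual implementation):

// Sketch of a healthcheck response builder -- field names are illustrative.
export async function buildHealthcheck(db) {
  const health = {
    version: process.env.npm_package_version, // version/branch info
    uptime: process.uptime(),
    dependencies: {},
  };

  // make a simple request to each configured dependency to see if it responds
  try {
    await db.query('SELECT 1');
    health.dependencies.mariadb = 'ok';
  } catch (err) {
    health.dependencies.mariadb = 'unavailable';
  }

  return health;
}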

Notes/Ideas

  • IDEA: Should healthcheck read log file for past minute to see if any errors?
  • Do we want/need to worry about Dependency Injection/Singletons?
  • RDBMS vs Document store
    • CAP theorem stands for Consistency, Availability, and Partition tolerance, and states that a distributed system cannot guarantee all three properties at the same time
    • https://medium.com/statuscode/three-reasons-to-use-a-nosql-document-store-for-your-next-web-application-6b9eabffc8d8
    • Document Store:
      • use for settings data and where schema will be changed often
      • when changes are small CRUD operations based on user interactions?
      • when count and aggregate data is useful to end user
    • RDBMS:
      • less duplicated data, normalized and stored in specific tables
      • useful when data changes often
      • seems more useful for storing fact data in ETL processes?
      • Q: phoenix?

Logging

We are currently using pino for logging. Pino logs objects/messages directly to streams so that other processes can handle formatting or other log actions (Node is single-threaded, so it is smarter to let another process manage the logs). We store all of these logs in <CONFIG.LOG.dir>/pino.log.
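
A minimal setup sketch, assuming a recent pino (v5+) where pino.destination is available; the path and level shown would come from the config:

// Minimal setup sketch -- the path and level would come from CONFIG.LOG.
import pino from 'pino';

const logger = pino(
  { level: 'info' },                  // CONFIG.LOG.level
  pino.destination('./logs/pino.log') // <CONFIG.LOG.dir>/pino.log
);

logger.info('server started');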

Configurations

Some settings can be configured in the config.yaml(github) file:

  • The debug property shows error messages and stack traces in the stdout of the server process (for development)
  • The dir property defines the directory to save log files in
  • The level property defines the log level we show in pino.log

What we log

  • Request Log
    • We record all connections to endpoints with the request details
  • Errors
    • Any Errors during request execution will be recorded in the log
  • Developer Messages (via Logger provided methods in Route Controllers)
    • Messages added by developers inside the controller body
    • different methods for levels: trace/debug/info/warn/error/fatal

Example:

function controller({ params, logger }: HapiRequest) {
  try {
    logger.info({ data: params });

    // ...do stuff...
  } catch (err) {
    logger.error('Personalized Error Message');
  }

  // ...return things?...
}

Other processes

By design, pino just outputs logs to be used by other "transporters" (other node processes spawned in production and development) for reacting to and viewing these logs. Here are some examples of processes we recommend implementing:

Logrotate

Log rotation on production servers should be handled by another service, as described in the pino documentation.

Pino-Pretty

Displays the logs in a prettier format that makes it easier to see the data, though it takes up more space. To use, pipe the log file to the pino-pretty executable: tail -f logs/pino.log | ./node_modules/.bin/pino-pretty -t

See the CLI arguments for more control over the output:

  • remove -t to show epoch timestamps instead of formatted times
  • -s for searching
  • -i for ignoring properties, e.g.
    [2020-03-22 05:58:27.175 +0000] INFO  (72753 on Devlins-MacBook-Air.local): request completed
    req: {
      "id": "1584855457513:Devlins-MacBook-Air.local:72753:k82m08u1:10001",
      "method": "put",
      "url": "http://localhost:3333/api/note/1",
      "headers": {
        "host": "localhost:3333",
        "connection": "keep-alive",
        "content-length": "17",
        "accept": "*/*",
        "sec-fetch-dest": "empty",
        "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36",
        "content-type": "application/json",
        "origin": "http://localhost:3333",
        "sec-fetch-site": "same-origin",
        "sec-fetch-mode": "cors",
        "referer": "http://localhost:3333/docs/swagger/index.html",
        "accept-encoding": "gzip, deflate, br",
        "accept-language": "en-US,en;q=0.9",
        "cookie": "SQLiteManager_currentLangue=2; _pk_id.2.1fff=e04c4083531e55a4.1584819760.6.1584855171.1584855171.; _pk_ses.2.1fff=1"
      },
      "remoteAddress": "127.0.0.1",
      "remotePort": 64175
    }
    res: {
      "statusCode": 200,
      "headers": {
        "content-type": "application/json; charset=utf-8",
        "vary": "origin",
        "access-control-allow-origin": "http://localhost:3333",
        "access-control-expose-headers": "WWW-Authenticate,Server-Authorization",
        "cache-control": "no-cache",
        "content-length": 4
      }
    }
    responseTime: 6
    

Notes/Ideas

  • Trace logging any external requests/responses
  • Logger Helper for use in the dataservices
  • goaccess?
  • websocket? endpoint that shows logs in real time
  • docs page for logs (searching?)
  • Elasticsearch/Logstash/Kibana
  • logrotate on production

Git Hooks/Workflow

We use githooks(github) to help clean up and enforce the workflow for developers. This is done with Husky.

NOTE: We prevent commits to the master branch on local machines (so all changes to master are PRs on Github). If you want to disable this for more rapid local development, just remove the configuration in package.json(github) that adds these hooks.

On commit

Before each commit, we want to verify that the build won't break, so we run the build process one more time with only the committed files before letting the commit go through. A sketch of the Husky wiring follows the list below.

Process:

  1. Stash uncommitted changes
  2. Run linting/flow/compilation and tests with npm/webpack
  3. Pop uncommitted changes
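
The hooks are wired up through a Husky block in package.json; it looks roughly like this (the script name here is a placeholder, not the actual one), and this is also the block to remove if you want to disable the hooks:

"husky": {
  "hooks": {
    "pre-commit": "npm run precommit"
  }
}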

Github Actions

We set up Github Actions on this project so we can enforce actions/checks and workflow processes on Github.

On Master PR

For every PR against master, we spin up a server to build the project via node/webpack. We also lint the project at this point (with more stringent error rules?). This will help us catch any errors in the code and prevent any merges to master that would break the build.

Label Manager

This project defines the Github labels in a YAML file that is managed by the Github Labeler Action. Any labels that are not defined in this file will be removed every time this action is run. This does not affect PRs.

On Merge to Master

Whenever we merge a PR to master, we want to update the documentation based on the changes made in the commits. We run a Github Action to handle this as well:

  • Collect README files and update wiki
  • Fix/remove links in Wiki
  • Build documentation and generate commit

Github Specific Files

Whenever a PR is made on Github, the body/description will be pre-populated with the contents in .github/PULL_REQUEST_TEMPLATE.md

Notes/Ideas

  • IDEA: Version increase/changelog generation

Test

The testing bootstrap file(github) finds all of the test files and imports global objects that can be used in all tests, to simplify each test file. It also creates a sinon sandbox that is reset before each test, for mocking/stubbing services and non-tested functions in the test context.

To run the tests with mocha, use npm run test; the output from all tests in .spec files is written to standard output.

Global Imports (available in all .spec files; an example spec follows this list):

  • sinonSandbox from Sinon
  • expect from Chai
  • describe/it/beforeEach from Mocha
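
A hedged example of what a spec file can look like with these globals (the stubbed dependency is hypothetical, and we assume the bootstrap exposes the names above as true globals):

// example.spec.js -- the stubbed dependency is hypothetical.
describe('example spec', () => {
  let fetchNote;

  beforeEach(() => {
    // the sandbox is reset by the bootstrap before each test
    fetchNote = sinonSandbox.stub().resolves({ id: 1 });
  });

  it('resolves a note', async () => {
    const note = await fetchNote();
    expect(note.id).to.equal(1);
  });
});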

Notes/Ideas

  • [ ] Look into differences between webpack tests vs npm test
  • IDEA: Mocha settings/plugin for displaying filepath in output of tests (when erroring?)
    • seems difficult to do on async/timeout errors