diff --git a/en/advanced/best-practice-performance.md b/en/advanced/best-practice-performance.md index 1f516a37d2..7d4c5beef1 100644 --- a/en/advanced/best-practice-performance.md +++ b/en/advanced/best-practice-performance.md @@ -45,6 +45,7 @@ Gzip compressing can greatly decrease the size of the response body and hence in const compression = require('compression') const express = require('express') const app = express() + app.use(compression()) ``` @@ -56,11 +57,11 @@ Synchronous functions and methods tie up the executing process until they return Although Node and many modules provide synchronous and asynchronous versions of their functions, always use the asynchronous version in production. The only time when a synchronous function can be justified is upon initial startup. -If you are using Node.js 4.0+ or io.js 2.1.0+, you can use the `--trace-sync-io` command-line flag to print a warning and a stack trace whenever your application uses a synchronous API. Of course, you wouldn't want to use this in production, but rather to ensure that your code is ready for production. See the [node command-line options documentation](https://nodejs.org/api/cli.html#cli_trace_sync_io) for more information. +You can use the `--trace-sync-io` command-line flag to print a warning and a stack trace whenever your application uses a synchronous API. Of course, you wouldn't want to use this in production, but rather to ensure that your code is ready for production. See the [node command-line options documentation](https://nodejs.org/api/cli.html#cli_trace_sync_io) for more information. ### Do logging correctly -In general, there are two reasons for logging from your app: For debugging and for logging app activity (essentially, everything else). Using `console.log()` or `console.error()` to print log messages to the terminal is common practice in development. 
But [these functions are synchronous](https://nodejs.org/api/console.html#console_console_1) when the destination is a terminal or a file, so they are not suitable for production, unless you pipe the output to another program. +In general, there are two reasons for logging from your app: for debugging and for logging app activity (essentially, everything else). Using `console.log()` or `console.error()` to print log messages to the terminal is common practice in development. But [these functions are synchronous](https://nodejs.org/api/console.html#console) when the destination is a terminal or a file, so they are not suitable for production, unless you pipe the output to another program. #### For debugging @@ -68,7 +69,7 @@ If you're logging for purposes of debugging, then instead of using `console.log( #### For app activity -If you're logging app activity (for example, tracking traffic or API calls), instead of using `console.log()`, use a logging library like [Winston](https://www.npmjs.com/package/winston) or [Bunyan](https://www.npmjs.com/package/bunyan). For a detailed comparison of these two libraries, see the StrongLoop blog post [Comparing Winston and Bunyan Node.js Logging](https://strongloop.com/strongblog/compare-node-js-logging-winston-bunyan/). +If you're logging app activity (for example, tracking traffic or API calls), instead of using `console.log()`, use a logging library like [Winston](https://www.npmjs.com/package/winston) or [Pino](https://www.npmjs.com/package/pino). Other packages serve the same purpose; see the LogRocket blog post ['Comparing Node.js logging tools'](https://blog.logrocket.com/comparing-node-js-logging-tools/) for a comparison of these and other options. 
### Handle exceptions properly @@ -84,7 +85,6 @@ Before diving into these topics, you should have a basic understanding of Node/E For more on the fundamentals of error handling, see: * [Error Handling in Node.js](https://www.tritondatacenter.com/node-js/production/design/errors) -* [Building Robust Node Applications: Error Handling](https://strongloop.com/strongblog/robust-node-applications-error-handling/) (StrongLoop blog) #### What not to do @@ -142,26 +142,23 @@ Now, all errors asynchronous and synchronous get propagated to the error middlew However, there are two caveats: -1. All your asynchronous code must return promises (except emitters). If a particular library does not return promises, convert the base object by using a helper function like [Bluebird.promisifyAll()](http://bluebirdjs.com/docs/api/promise.promisifyall.html). +1. All your asynchronous code must return promises (except emitters). If a particular library does not return promises, convert the base object by using a helper function like [util.promisify](https://nodejs.org/api/util.html#util_util_promisify_original). 2. Event emitters (like `streams`) can still cause uncaught exceptions. So make sure you are handling the error event properly; for example: ```js -const wrap = fn => (...args) => fn(...args).catch(args[2]) - -app.get('/', wrap(async (req, res, next) => { +app.get('/', async (req, res, next) => { const company = await getCompanyById(req.query.id) const stream = getLogoStreamById(company.id) stream.on('error', next).pipe(res) -})) +}) ``` -The `wrap()` function is a wrapper that catches rejected promises and calls `next()` with the error as the first argument. -For details, see [Asynchronous -Error Handling in Express with Promises, Generators and ES7](https://strongloop.com/strongblog/async-error-handling-expressjs-es7-promises-generators/#cleaner-code-with-generators). 
+If `getCompanyById` throws an error or rejects, `next` will be called with either the thrown error or the rejected value. If no rejected value is provided, `next` will be called with a default Error object provided by the Express router. -For more information about error-handling by using promises, see [Promises in Node.js with Q – An Alternative to Callbacks](https://strongloop.com/strongblog/promises-in-node-js-with-q-an-alternative-to-callbacks/). +For more information about error-handling, see our guide on [error handling in Express](https://expressjs.com/en/guide/error-handling). -## Things to do in your environment / setup {#in-environment} +## Things to do in your environment / setup +{#in-environment} Here are some things you can do in your system environment to improve your app's performance: @@ -182,20 +179,11 @@ Setting NODE_ENV to "production" makes Express: * Cache CSS files generated from CSS extensions. * Generate less verbose error messages. -[Tests indicate](http://apmblog.dynatrace.com/2015/07/22/the-drastic-effects-of-omitting-node_env-in-your-express-js-applications/) that just doing this can improve app performance by a factor of three! +[Tests indicate](https://www.dynatrace.com/news/blog/the-drastic-effects-of-omitting-node-env-in-your-express-js-applications/) that just doing this can improve app performance by a factor of three! If you need to write environment-specific code, you can check the value of NODE_ENV with `process.env.NODE_ENV`. Be aware that checking the value of any environment variable incurs a performance penalty, and so should be done sparingly. -In development, you typically set environment variables in your interactive shell, for example by using `export` or your `.bash_profile` file. But in general, you shouldn't do that on a production server; instead, use your OS's init system (systemd or Upstart). 
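A sketch of the sparing environment check described above: read the variable once at startup rather than on every request (`errorPayload` is a hypothetical helper):

```js
// Read NODE_ENV once at startup, not per request, to avoid the
// per-lookup performance penalty.
const isProduction = process.env.NODE_ENV === 'production'

// Hypothetical helper: hide stack traces from clients in production.
function errorPayload (err) {
  return isProduction
    ? { message: 'Internal Server Error' }
    : { message: err.message, stack: err.stack }
}
```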
The next section provides more details about using your init system in general, but setting `NODE_ENV` is so important for performance (and easy to do), that it's highlighted here. - -With Upstart, use the `env` keyword in your job file. For example: - -```sh -# /etc/init/env.conf - env NODE_ENV=production -``` - -For more information, see the [Upstart Intro, Cookbook and Best Practices](http://upstart.ubuntu.com/cookbook/#environment-variables). +In development, you typically set environment variables in your interactive shell, for example by using `export` or your `.bash_profile` file. But in general, you shouldn't do that on a production server; instead, use your OS's init system (systemd). The next section provides more details about using your init system in general, but setting `NODE_ENV` is so important for performance (and easy to do), that it's highlighted here. With systemd, use the `Environment` directive in your unit file. For example: @@ -204,7 +192,7 @@ With systemd, use the `Environment` directive in your unit file. For example: Environment=NODE_ENV=production ``` -For more information, see [Using Environment Variables In systemd Units](https://coreos.com/os/docs/latest/using-environment-variables-in-systemd-units.html). +For more information, see [Using Environment Variables In systemd Units](https://www.flatcar.org/docs/latest/setup/systemd/environment-variables/). ### Ensure your app automatically restarts @@ -223,11 +211,10 @@ In addition to restarting your app when it crashes, a process manager can enable * Gain insights into runtime performance and resource consumption. * Modify settings dynamically to improve performance. -* Control clustering (StrongLoop PM and pm2). +* Control clustering (pm2). 
The most popular process managers for Node are as follows: -* [StrongLoop Process Manager](http://strong-pm.io/) * [PM2](https://github.com/Unitech/pm2) * [Forever](https://www.npmjs.com/package/forever) @@ -235,20 +222,9 @@ For a feature-by-feature comparison of the three process managers, see [http://s Using any of these process managers will suffice to keep your application up, even if it does crash from time to time. -However, StrongLoop PM has lots of features that specifically target production deployment. You can use it and the related StrongLoop tools to: - -* Build and package your app locally, then deploy it securely to your production system. -* Automatically restart your app if it crashes for any reason. -* Manage your clusters remotely. -* View CPU profiles and heap snapshots to optimize performance and diagnose memory leaks. -* View performance metrics for your application. -* Easily scale to multiple hosts with integrated control for Nginx load balancer. - -As explained below, when you install StrongLoop PM as an operating system service using your init system, it will automatically restart when the system restarts. Thus, it will keep your application processes and clusters alive forever. - #### Use an init system -The next layer of reliability is to ensure that your app restarts when the server restarts. Systems can still go down for a variety of reasons. To ensure that your app restarts if the server crashes, use the init system built into your OS. The two main init systems in use today are [systemd](https://wiki.debian.org/systemd) and [Upstart](http://upstart.ubuntu.com/). +The next layer of reliability is to ensure that your app restarts when the server restarts. Systems can still go down for a variety of reasons. To ensure that your app restarts if the server crashes, use the init system built into your OS. The main init system in use today is [systemd](https://wiki.debian.org/systemd). 
There are two ways to use init systems with your Express app: @@ -290,95 +266,8 @@ Restart=always [Install] WantedBy=multi-user.target ``` -For more information on systemd, see the [systemd reference (man page)](http://www.freedesktop.org/software/systemd/man/systemd.unit.html). - -##### StrongLoop PM as a systemd service - -You can easily install StrongLoop Process Manager as a systemd service. After you do, when the server restarts, it will automatically restart StrongLoop PM, which will then restart all the apps it is managing. - -To install StrongLoop PM as a systemd service: - -```console -$ sudo sl-pm-install --systemd -``` - -Then start the service with: - -```console -$ sudo /usr/bin/systemctl start strong-pm -``` - -For more information, see [Setting up a production host (StrongLoop documentation)](https://docs.strongloop.com/display/SLC/Setting+up+a+production+host#Settingupaproductionhost-RHEL7+,Ubuntu15.04or15.10). - -##### Upstart - -Upstart is a system tool available on many Linux distributions for starting tasks and services during system startup, stopping them during shutdown, and supervising them. You can configure your Express app or process manager as a service and then Upstart will automatically restart it when it crashes. - -An Upstart service is defined in a job configuration file (also called a "job") with filename ending in `.conf`. The following example shows how to create a job called "myapp" for an app named "myapp" with the main file located at `/projects/myapp/index.js`. 
- -Create a file named `myapp.conf` at `/etc/init/` with the following content (replace the bold text with values for your system and app): - -```sh -# When to start the process -start on runlevel [2345] - -# When to stop the process -stop on runlevel [016] - -# Increase file descriptor limit to be able to handle more requests -limit nofile 50000 50000 - -# Use production mode -env NODE_ENV=production - -# Run as www-data -setuid www-data -setgid www-data - -# Run from inside the app dir -chdir /projects/myapp - -# The process to start -exec /usr/local/bin/node /projects/myapp/index.js - -# Restart the process if it is down -respawn - -# Limit restart attempt to 10 times within 10 seconds -respawn limit 10 10 -``` - -{% include admonitions/note.html content="This script requires Upstart 1.4 or newer, supported on Ubuntu 12.04-14.10." %} - -Since the job is configured to run when the system starts, your app will be started along with the operating system, and automatically restarted if the app crashes or the system goes down. - -Apart from automatically restarting the app, Upstart enables you to use these commands: - -* `start myapp` – Start the app -* `restart myapp` – Restart the app -* `stop myapp` – Stop the app. - -For more information on Upstart, see [Upstart Intro, Cookbook and Best Practises](http://upstart.ubuntu.com/cookbook). - -##### StrongLoop PM as an Upstart service - -You can easily install StrongLoop Process Manager as an Upstart service. After you do, when the server restarts, it will automatically restart StrongLoop PM, which will then restart all the apps it is managing. - -To install StrongLoop PM as an Upstart 1.4 service: - -```console -$ sudo sl-pm-install -``` - -Then run the service with: - -```console -$ sudo /sbin/initctl start strong-pm -``` - -{% include admonitions/note.html content="On systems that don't support Upstart 1.4, the commands are slightly different. 
See [Setting up a production host (StrongLoop documentation)](https://docs.strongloop.com/display/SLC/Setting+up+a+production+host#Settingupaproductionhost-RHELLinux5and6,Ubuntu10.04-.10,11.04-.10) for more information." %} - +For more information on systemd, see the [systemd reference (man page)](http://www.freedesktop.org/software/systemd/man/systemd.unit.html). ### Run your app in a cluster @@ -392,25 +281,11 @@ In clustered apps, worker processes can crash individually without affecting the #### Using Node's cluster module -Clustering is made possible with Node's [cluster module](https://nodejs.org/dist/latest-v4.x/docs/api/cluster.html). This enables a master process to spawn worker processes and distribute incoming connections among the workers. However, rather than using this module directly, it's far better to use one of the many tools out there that does it for you automatically; for example [node-pm](https://www.npmjs.com/package/node-pm) or [cluster-service](https://www.npmjs.com/package/cluster-service). - -#### Using StrongLoop PM - -If you deploy your application to StrongLoop Process Manager (PM), then you can take advantage of clustering _without_ modifying your application code. - -When StrongLoop Process Manager (PM) runs an application, it automatically runs it in a cluster with a number of workers equal to the number of CPU cores on the system. You can manually change the number of worker processes in the cluster using the slc command line tool without stopping the app. - -For example, assuming you've deployed your app to prod.foo.com and StrongLoop PM is listening on port 8701 (the default), then to set the cluster size to eight using slc: - -```console -$ slc ctl -C http://prod.foo.com:8701 set-size my-app 8 -``` - -For more information on clustering with StrongLoop PM, see [Clustering](https://docs.strongloop.com/display/SLC/Clustering) in StrongLoop documentation. 
+Clustering is made possible with Node's [cluster module](https://nodejs.org/api/cluster.html). This enables a master process to spawn worker processes and distribute incoming connections among the workers. However, rather than using this module directly, it's far better to use one of the many tools out there that does it for you automatically; for example [node-pm](https://www.npmjs.com/package/node-pm) or [cluster-service](https://www.npmjs.com/package/cluster-service). #### Using PM2 -If you deploy your application with PM2, then you can take advantage of clustering _without_ modifying your application code. You should ensure your [application is stateless](http://pm2.keymetrics.io/docs/usage/specifics/#stateless-apps) first, meaning no local data is stored in the process (such as sessions, websocket connections and the like). +If you deploy your application with PM2, then you can take advantage of clustering _without_ modifying your application code. You should ensure your [application is stateless](https://pm2.keymetrics.io/docs/usage/specifics/#stateless-apps) first, meaning no local data is stored in the process (such as sessions, websocket connections and the like). When running an application with PM2, you can enable **cluster mode** to run it in a cluster with a number of instances of your choosing, such as matching the number of available CPUs on the machine. You can manually change the number of processes in the cluster using the `pm2` command line tool without stopping the app. @@ -446,7 +321,7 @@ Use a caching server like [Varnish](https://www.varnish-cache.org/) or [Nginx](h No matter how optimized an app is, a single instance can handle only a limited amount of load and traffic. One way to scale an app is to run multiple instances of it and distribute the traffic via a load balancer. Setting up a load balancer can improve your app's performance and speed, and enable it to scale more than is possible with a single instance. 
-A load balancer is usually a reverse proxy that orchestrates traffic to and from multiple application instances and servers. You can easily set up a load balancer for your app by using [Nginx](http://nginx.org/en/docs/http/load_balancing.html) or [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts). +A load balancer is usually a reverse proxy that orchestrates traffic to and from multiple application instances and servers. You can easily set up a load balancer for your app by using [Nginx](https://nginx.org/en/docs/http/load_balancing.html) or [HAProxy](https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts). With load balancing, you might have to ensure that requests that are associated with a particular session ID connect to the process that originated them. This is known as _session affinity_, or _sticky sessions_, and may be addressed by the suggestion above to use a data store such as Redis for session data (depending on your application). For a discussion, see [Using multiple nodes](https://socket.io/docs/v4/using-multiple-nodes/). @@ -454,4 +329,4 @@ With load balancing, you might have to ensure that requests that are associated A reverse proxy sits in front of a web app and performs supporting operations on the requests, apart from directing requests to the app. It can handle error pages, compression, caching, serving files, and load balancing among other things. -Handing over tasks that do not require knowledge of application state to a reverse proxy frees up Express to perform specialized application tasks. For this reason, it is recommended to run Express behind a reverse proxy like [Nginx](https://www.nginx.com/) or [HAProxy](http://www.haproxy.org/) in production. +Handing over tasks that do not require knowledge of application state to a reverse proxy frees up Express to perform specialized application tasks. 
For this reason, it is recommended to run Express behind a reverse proxy like [Nginx](https://www.nginx.com/) or [HAProxy](https://www.haproxy.org/) in production.