
The Journey to Webpack


For the past couple of years we built the Server Density UI around a mixture of Grunt and Require.js.

Grunt handled task running, watching files, compiling CoffeeScript, running tests, fingerprinting files for the CDN, and loads of other little jobs. Require.js handled module loading / management and the overall production build via the optimiser.

For my first random week at Server Density—a scheduled time where every team member works on anything they like—I decided to convert us to Webpack.

When you’re past the point of small codebases and it’s no longer easy to just flip the switch on something like the entire build system, it helps to ask: why?

Well, we had a few reasons.

Why Convert to Webpack

1. We wanted to use Jest as our test framework

Whilst this has no dependency on Webpack, Webpack and Jest get along very well. Jest is set up to automatically mock CommonJS-style modules. In the past this capability would have been wasted on us, as we were working in a Require.js (and therefore AMD) environment.
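To illustrate (this isn’t from our codebase, and the sum module is hypothetical), a Jest test of this era opts the module under test back out of the automatic mocking and requires it directly:

// Everything required is auto-mocked by default; opt the real module back in
jest.dontMock('../sum'); // hypothetical module under test

describe('sum', function () {
  it('adds two numbers', function () {
    var sum = require('../sum'); // the real implementation, thanks to dontMock
    expect(sum(1, 2)).toBe(3);
  });
});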

2. Hot Module Replacement

HMR is like LiveReload on steroids. Rather than reload the entire page, HMR will exchange, add or remove modules in place as they’re altered whilst also maintaining state. Dan Abramov has a great presentation on HMR.
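As a minimal sketch of the API (not from our codebase; the view module and its render method are hypothetical), a module can accept updates to a dependency and re-render in place:

var view = require('./view'); // hypothetical view module
var state = { count: 0 };     // state we want to survive updates

view.render(state);

if (module.hot) {
  // Accept updates to ./view: swap the new module in and re-render with
  // the existing state, instead of reloading the whole page.
  module.hot.accept('./view', function () {
    view = require('./view');
    view.render(state);
  });
}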

3. Clever code splitting and the ability to use different bundles for different entry points

This opens up opportunities like a bundle for the main application and a separate bundle for an administration panel. Things can, of course, be split any way you like. This provides numerous benefits, like: limiting the filesize of the first bundle needed to “boot” the app; the ability to share “common” functionality across bundles; and an easier way to split code along whatever lines make sense (such as features).
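A minimal sketch of that kind of split, using webpack-style options (the paths and the admin entry are illustrative, not our actual configuration):

var webpack = require('webpack');

module.exports = {
  // One bundle per entry point
  entry: {
    app: './app/bootstrap.coffee',    // main application
    admin: './admin/bootstrap.coffee' // hypothetical administration panel
  },
  output: {
    path: __dirname + '/dist',
    filename: '[name].bundle.js' // app.bundle.js and admin.bundle.js
  },
  plugins: [
    // Factor modules shared by both entries into a common bundle
    new webpack.optimize.CommonsChunkPlugin('common.js')
  ]
};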

4. Easy integration with npm and seamless handling of multiple module formats (CommonJS, AMD etcetera)

Whilst the JavaScript community has mostly settled on CommonJS as the de facto module format, there are still situations where a module will be in the AMD format, or maybe even a global “window” export. Having something that can easily handle all of these possibilities means greater interoperability with the wider community, should we wish to use 3rd party modules.
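By way of illustration (the module and global names below are made up), all of the following resolve through Webpack’s single module system:

// CommonJS, what we write day to day
var lib = require('some-commonjs-lib');

// AMD, which Webpack parses natively with no shim required
define(['some-amd-lib'], function (amdLib) {
  /* ... */
});

// A legacy script that only sets window.SomeGlobal can be adapted with a
// loader such as exports-loader, e.g. in the loaders array:
//   { test: /legacy\.js$/, loader: 'exports?SomeGlobal' }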

5. Good integration with React.js

Again, React.js has no dependency on Webpack, but it seems to be the preferred build system of the React.js community. As we use React.js, and are actively moving away from Backbone.js, it’s good to gain parity with the community.

Once the initial concept of “loaders” is grokked, they’re very easy to use, and very powerful.

6. Leaner and meaner configuration files

This means configuration files that are easier to understand and extend, as well as less duplication between build and test runner configuration.

How did you do it?

Good question. On the surface it seemed like I would just need to convert over our Require.js configuration to an equivalent Webpack configuration and Everything Would Just Work™ with a couple of tweaks. I was wrong. Very wrong.

In the end, the whole of the UI ended up getting a bit of an audit, since small and subtle things kept breaking. As we will see later on, this wasn’t Webpack’s fault. Here are some of the scenarios I dealt with.

Configuration files

First I needed to create a Webpack configuration file, based on our old Require.js configuration. Our development configuration file ended up looking like this:

var path = require('path')

var webpack = require('webpack')

module.exports = {

  // The base directory for resolving the entry option
  context: __dirname + "/app/coffeescript",

  // The entry point for the bundle
  entry: "./bootstrap.coffee",

  // Various output options, to give us a single bundle.js file with everything resolved and concatenated
  output: {
    path: __dirname + '/app/webpack',
    filename: "bundle.js",
    publicPath: '/app/webpack/',
    pathinfo: true
  },

  // Where to resolve our loaders
  resolveLoader: {
    modulesDirectories: ['node_modules']
  },

  resolve: {
    // Directories that contain our modules
    root: [path.join(__dirname, "./app/coffeescript"), path.join(__dirname, "./app/templates")],

    // Extensions used to resolve modules
    extensions: ['', '.js', '.csx', '.csx.coffee', '.coffee'],

    // Replace modules with other modules or paths (like the 'paths' option
    // with Require.js). This is for modules where we explicitly control the
    // location, as opposed to node_modules based modules.
    alias: {
      some_lib: path.join(__dirname, "some/location")
    }
  },

  // Source map option. Eval provides a little less info, but is faster
  devtool: 'eval',

  // Our loader configuration
  module: {
    loaders: [
      { test: /\.csx\.coffee$/, loaders: ['coffee', 'cjsx']}, // Chained loaders apply right to left: cjsx first, then coffee
      { test: /^(?=.*coffee)(?!.*csx).*/, loader: 'coffee' }, // Contains .coffee but NOT .csx
      { test: /\.html$/, loader: "mustache"},
      { test: /\.json$/, loader: "json"}
    ]
  },

  // Include mocks for when node.js specific modules may be required
  node: {
    fs: 'empty',
    net: 'empty',
    tls: 'empty'
  }

};

I’ve cut out a couple of options for brevity, but that’s the bulk of our configuration file.

Our production configuration extends from this configuration by adding some additional plugins options—namely things such as the UglifyJS plugin—and alters the output option to use our distribution folder. We also use source-map for our devtool option in production. Apart from that everything stays the same.
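It looks something along these lines (a sketch rather than a verbatim copy; the dist path is illustrative):

var webpack = require('webpack');
var _ = require('lodash');

// The development configuration shown above
var baseConfig = require('./webpack.base.config');

module.exports = _.merge({}, baseConfig, {
  // Write the bundle to the distribution folder instead
  output: {
    path: __dirname + '/dist',
    pathinfo: false
  },
  // Full source maps in production
  devtool: 'source-map',
  plugins: [
    new webpack.optimize.UglifyJsPlugin()
  ]
});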

I also added some scripts entries to our package.json file. When working locally we just run npm run watch, which executes webpack-dev-server with various flags for us. The webpack-dev-server is a very nifty little tool, and is just an npm install webpack-dev-server away. It’s essentially a small Node.js Express web server which uses Webpack middleware to serve bundles; it also sets up Socket.io to respond to changes in realtime.
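The relevant part of our package.json looks roughly like this (the exact flags are illustrative):

{
  "scripts": {
    "watch": "webpack-dev-server --inline --hot",
    "build": "webpack --config webpack.production.config.js",
    "test": "karma start"
  }
}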

Webpack-dev-server supports two modes for automatically refreshing the page: inline and iframe. Both of those support Hot Module Replacement. The documentation recommends pairing inline mode with Hot Module Replacement. You can enable them both by using the --inline --hot flags. Awesome, huh?

There’s also the option to just use a simple --watch flag to watch and rebundle, with none of the extra magic. In that case webpack --watch will work just fine, without the extra webpack-dev-server module. In the past it was standard to install some sort of watch module with Grunt, Gulp et cetera. You then had to configure all the paths it should watch and all the commands to be run in response to those changes. What a faff!

You’ll notice that one of the meatiest parts of the configuration file is the loaders section. This is where Webpack really shines with its simplicity. Loaders preprocess files as and when they’re required. With Grunt we would have to install a separate module for something like CoffeeScript. We would then configure our watch module to watch those .coffee files before running the CoffeeScript compilation in response to something changing.

With Webpack the watch portion is already handled, and because of this configuration line: { test: /^(?=.*coffee)(?!.*csx).*/, loader: 'coffee' }, Webpack knows that when a module is required with a .coffee extension it should run it through the coffee loader. Webpack will look for loaders in various locations, one of these being node_modules, which is where we keep ours. And that’s it!

You’ll notice we use a regex for our test property here. This is because we have both .coffee and .csx.coffee extensions. If a module has both extensions we process it with the coffee and cjsx loaders; chained loaders apply right to left, so cjsx runs first and its output is handed to coffee. Loaders are incredibly powerful because you can chain them in this fashion.

As part of our conversion I also wanted to swap us from the Grunt-based Jasmine module we were using for running tests (grunt-contrib-jasmine in our case) to Karma. Karma integrates very easily with Webpack, and allows us to reuse our same Webpack configuration file. By reusing so much of the same config it really does cut down the number of areas where something can go wrong.

We use Karma with this configuration:

var webpack = require("webpack");
var path = require("path");
var _ = require('lodash')

// The base config from above
var baseWebpackConfig = require('./webpack.base.config');

var karmaWebpackConfig = {
  resolve: {
    alias: {
      testingSpecificLib: path.join(__dirname, "some/location")
    }
  }
}

var webpackConfig = _.merge({}, baseWebpackConfig, karmaWebpackConfig);

// Karma with the Webpack preprocessor will handle entry and output
delete webpackConfig.context
delete webpackConfig.entry
delete webpackConfig.output

module.exports = {
  
  frameworks: ["jasmine", "sinon", "phantomjs-shim"],
  
  files: [
    './tests/tests.bundle.js'
  ],
  preprocessors: {
    './tests/tests.bundle.js': ['webpack', 'sourcemap']
  },

  webpack: webpackConfig,
  
  plugins: [
    require("karma-webpack"),
    require("karma-jasmine"),
    require("karma-sinon"),
    require("karma-phantomjs-shim"),
    require("karma-phantomjs-launcher"),
    require('karma-sourcemap-loader')
  ]
};

Again, I’ve cut out some of the more obvious Karma options for brevity, but this highlights the Karma and Webpack integration. By making use of the karma-webpack plugin it’s as simple as providing a webpack option. This makes use of our standard configuration, extended with some testing-specific options. The files option may seem a little funny in using ./tests/tests.bundle.js. The contents of our tests.bundle.js file look like this:

var context = require.context('./unit', true, /\.coffee$/);

context.keys().forEach(context);

module.exports = context;

This cleverly bundles up all of our unit tests for us. Here we’re looking for all files in the ./unit folder with a .coffee extension; the true argument just makes the search recursive.

When testing, npm run test delegates to karma start for us. For production we extend this configuration with some extra options. For example, we set singleRun to true, as we use that configuration in a continuous integration style.
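That extension might look something like this sketch (the file name and merge approach mirror our Webpack configs above, but are assumptions):

var _ = require('lodash');

// The Karma configuration shown above
var baseKarmaConfig = require('./karma.base.config');

module.exports = _.merge({}, baseKarmaConfig, {
  singleRun: true,        // run the suite once and exit, CI style
  browsers: ['PhantomJS'] // matches the PhantomJS launcher plugin above
});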

npm

Webpack integrates easily with npm, and will use the node_modules folder automatically to resolve modules if they can’t be found anywhere else. With this in mind, where possible, I swapped us from storing 3rd party libraries in our assets folder to using the same versions via npm. These changes were mostly seamless, and only small tweaks were needed. For example, in the case of the React.js module, if we wanted to use addons we now needed to format our require statement like this: require('react/addons').

Minified source files

Where possible I got rid of minified source files, and pulled in the full source. Webpack works better with unminified source files, and will point this out with warnings in the CLI output.

Require.js based modules

We had some modules that were heavily based around Require.js, or more accurately the AMD format. The first of these was a domReady module. I swapped this for some custom, lightweight DOM-ready code. The others were for Google-based libraries, namely Maps and Feeds. In the case of Google Maps I was able to pull in an npm module and make minimal changes. Replacing the AMD-based Google Feeds module was a case of writing a custom module. This wasn’t a complicated module by any stretch of the imagination; it was mostly just a JSONP convenience wrapper. And with that, the old modules were gone.
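For the curious, a minimal JSONP convenience wrapper along those lines might look like this (a sketch, not our exact module):

var callbackCount = 0;

module.exports = function jsonp(url, callback) {
  var callbackName = '__jsonpCallback' + callbackCount++;
  var script = document.createElement('script');

  // The endpoint invokes window[callbackName](data) once the script loads
  window[callbackName] = function (data) {
    delete window[callbackName];
    document.body.removeChild(script);
    callback(data);
  };

  script.src = url + (url.indexOf('?') === -1 ? '?' : '&') + 'callback=' + callbackName;
  document.body.appendChild(script);
};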

Breakages

From here I had to work through our compilation errors, exceptions and test failures, on an almost one-by-one basis. I’m going to highlight a few of these; it would take far too long to go through each and every one of them (most of them were specific to our application anyway; your mileage will vary).

Mustache templates

In the past we had a few syntax errors like this: {{ / (note the space). The template handling we used previously was seemingly more lenient than the Mustache loader (which uses Hogan under the hood).

Lodash errors

Somehow, in the past, we were getting away with using the _(somethingToWorkOn) Lodash syntax without calling .value() at the end. With our new setup, exceptions were being raised in the application. At first the errors seemed quite cryptic, but once I stepped through the debugger they gradually started making sense. A search for _( was able to track these uses down, and from there the usage could be changed to _.filter(somethingToWorkOn, predicate) (as an example). As we only had about 4 references to that style of syntax in our codebase, it made most sense to do away with it completely.
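As an illustration (the data here is made up):

var _ = require('lodash');

var users = [{ name: 'Alice', active: true }, { name: 'Bob', active: false }];

// Old style: _() creates a wrapper, so .value() is required to unwrap the
// result. Forgetting it hands you a wrapper object, not an array.
var active = _(users).filter(function (u) { return u.active; }).value();

// New style: a plain function call, nothing to unwrap.
var activeAgain = _.filter(users, function (u) { return u.active; });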

React.js components not rendering

These were a bit more tricky, as there were no compilation errors and no exceptions in the console. The only way I could find them was by going through the app and locating places where things weren’t rendering. In the end this came down to the use of React.js component names starting with a lowercase letter, like this: <someComponent.

As specified in the React.js documentation, component names must start with an uppercase letter. This had been working before, having “slipped through the net” for us.
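The underlying reason is how the JSX compiler treats tag casing:

// A lowercase tag compiles to a string, so React renders a DOM element:
//   <someComponent />  becomes  React.createElement('someComponent')
// An uppercase tag compiles to an identifier, i.e. the actual component:
//   <SomeComponent />  becomes  React.createElement(SomeComponent)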

Unit tests

It took a long time to convert over all of our unit tests. The main problem was that in the past we had a mocking library that was Require.js specific—Squire in our case. Whilst Squire worked well, we couldn’t use it anymore.

I swapped us over to the Webpack Inject Loader, which meant working through every mocking call and changing over the syntax. Unfortunately this couldn’t be handled in a generic, search / replace based way.

The old mocking syntax looked like this:

injector = new Squire()

injector.mock(dependencyName, mockForDependency)

And our new syntax looked like this:

SomeModuleInjector = require('inject?module/to/mock!some/module/location')

SomeModule = SomeModuleInjector({
  'module/to/mock' : {}
})

Problems like this were frustrating time-sinks at the time. However, dealing with them was a worthwhile exercise, as we got a chance to fix things that shouldn’t necessarily have worked before.

The future

Adopting Webpack has opened up lots of future possibilities for us: multiple lazy-loaded entry points, handling of CSS, and handling of assets such as images and fonts (certain loaders can automatically convert images to Base64 strings if they’re below a certain filesize; very nifty).
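A quick sketch of two of those possibilities, using webpack 1 era APIs (the paths and byte limit are illustrative):

// Lazy loading: require.ensure splits everything required inside the
// callback into a separate chunk, fetched on demand at runtime.
require.ensure(['./reports/heavyChart'], function (require) {
  var heavyChart = require('./reports/heavyChart'); // hypothetical module
  heavyChart.render();
});

// Inlining small images: url-loader emits a Base64 data URI when the file
// is under the limit, falling back to a normal file URL otherwise.
// In the loaders array:
//   { test: /\.(png|jpg)$/, loader: 'url?limit=8192' }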

We hope all this groundwork will stand us in good stead with our workflow for the next couple of years. It’s for that reason that far-reaching changes like this are a good investment.

If you’d like to read further into Webpack I recommend this wonderful cookbook by Christian Alfoni.

Also, if this kind of stuff appeals to you we are hiring a UI Engineer.
