Testing for the Web, Part I: JavaScript Unit Testing with Jenkins, JSTD, PhantomJS and Sauce

Like anyone developing software for the web, we at NetWallet care a great deal about testing. Catching errors up front during testing means that your site stays up and your customers can get their work done. That makes customers happy. Having comprehensive tests running continuously against your code also gives you the freedom to change, refactor, and improve it, knowing that problems or regressions will be caught before the changes go live. That makes developers happy.

However, testing for the web is difficult. The complexity of the HTML/CSS/JavaScript stack is greatly compounded by differences among browsers and browser versions, and then there are all the complexities of dealing with server interactions, Ajax, and so on. To make matters even worse for us at NetWallet, we are building a service that integrates with third-party sites, so we have to test the way our code interacts with numerous sites across the web whose code we don't control.

Obviously, testing will never catch every problem, but a layered testing setup is an important part of creating a reliable service. When combined with good monitoring and a continuous-deployment capability that lets us fix and respond to problems quickly, testing goes a long way toward ensuring that we stay up and running and serving our customers' needs. What do we mean by layered testing? The idea is to test code at various scales, starting with unit tests of small chunks of code (functions, classes, modules) and ranging all the way up to integration tests that exercise whole pieces of the app, including production services.

Unit-testing JavaScript in the Cloud

In the rest of this post I'll describe the test setup that we've put together for unit testing JavaScript. We use several pieces of open-source technology, coordinated by a small Python script. We can run this script on development machines to test before pushing code, and also in scheduled or triggered jobs on our Jenkins continuous-integration server. For most local test runs we use the lightning-fast PhantomJS headless browser to keep things snappy, but the setup also allows us to run our unit tests in any of the 49 browser/OS combinations supported by Sauce Labs.

The unit tests are distributed to the browsers and run using JsTestDriver (JSTD). When JSTD runs, it starts a local web server with a special 'capture' URL. To run tests in a browser, the browser is pointed at this capture URL, allowing JSTD to take control. When we then tell JSTD to run some tests, it instructs all captured browsers to load the test code and execute it, then reports the results. We can leave JSTD running with one or more captured browsers and execute multiple test runs, or we can start the JSTD server and capture a browser just for a single test run. For local development, JSTD can integrate with Eclipse to allow easy running of tests and viewing of results right next to the code.
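JSTD reads its configuration from a simple YAML file that names the server to use and the scripts to load. A minimal example (the port is JSTD's default, and the paths are illustrative, not our actual layout) might look like:

```yaml
server: http://localhost:9876

load:
  - src/*.js
  - test/*.js
```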

In order for JSTD to be of any use, of course, we need to capture some browsers. JSTD can be instructed to launch browsers that are installed locally, but this is naturally limited to the installed browser versions and the running OS. To get at other browser versions and operating systems, we use the Sauce Connect service from Sauce Labs. This service lets you fire up browsers in their cloud, control those browsers with the Selenium WebDriver protocol, and proxy requests from the browsers back through a VPN connection so that they can see local services that might be behind a firewall, including, in our case, the local JSTD server.

Another nice way to capture a browser is to use PhantomJS, a headless version of WebKit that can be run from the command line and controlled through a simple JavaScript interface. Because it's headless, it starts up quickly and can be run on a server without installing a full graphics stack. Because it's built on WebKit, it provides a full-featured, modern browser environment that's guaranteed to be similar to Chrome, Safari, and other WebKit-based browsers. And because it can be driven from the command line, our setup simply starts PhantomJS, points it at the JSTD capture URL, and lets JSTD take over to run the tests.

To manage all this we wrote a simple Python script that wrangles everything and runs our tests. The script is a bit long; there is, for example, a good chunk of code to process command-line options. But the heart of the code is very simple and consists of these few lines:
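The embedded gist with the full script isn't reproduced here, but its core can be sketched as follows. All of the server and browser wrappers below are hypothetical stand-ins for the real ones in testjs.py, and since contextlib.nested was removed in Python 3, a minimal equivalent is included so the sketch runs:

```python
from contextlib import ExitStack, contextmanager


@contextmanager
def nested(*managers):
    # Minimal Python 3 stand-in for contextlib.nested, which the
    # original (Python 2) script used to enter all contexts at once.
    with ExitStack() as stack:
        yield tuple(stack.enter_context(m) for m in managers)


@contextmanager
def jstd_server(port, config):
    # Hypothetical: the real wrapper launches the JSTD jar and
    # tears it down on exit.
    yield "http://localhost:%d/capture" % port


@contextmanager
def sauce_connect(user, api_key):
    # Hypothetical: launches the Sauce Connect tunnel.
    yield "tunnel"


@contextmanager
def browser(name, capture_url):
    # Hypothetical: starts a local or Sauce browser pointed at the
    # JSTD capture URL, and closes it on exit.
    yield name


def run_tests(config, output_dir, browsers):
    # Hypothetical: tells JSTD to run the tests in every captured
    # browser and write JUnit-format results into output_dir.
    return {name: "passed" for name in browsers}


def main(browser_names, port=9090):
    capture_url = "http://localhost:%d/capture" % port
    with nested(jstd_server(port, "tests.jstd"),
                sauce_connect("user", "api-key"),
                *[browser(n, capture_url) for n in browser_names]):
        return run_tests("tests.jstd", "results", browser_names)
```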

We use Python's with statement to manage everything, making sure that it all gets cleaned up properly in the end, even if errors occur. We simply set up a JSTD server and a Sauce Connect tunnel, point the desired browsers at the capture URL, and then run the tests. (Pythonistas will note that we're using the somewhat maligned contextlib.nested to manage all these contexts. That function is problematic when used with things like files, because it can't properly handle errors that occur while the individual context managers are being created; but in our case the contexts are already initialized and don't do anything that could raise an exception until their __enter__ method is called, so it works just fine.) The test results for all browsers are saved in JUnit format to the specified output directory, where they can be read, for example, by Jenkins for aggregation and review.

The script is quite flexible: it can either run the JSTD and Sauce Connect services itself, setting them up at the beginning and tearing them down at the end, or connect to existing servers if they are already running. In addition, we can specify any number of browsers in a simple command-line format. For example, running

python testjs.py --jstd-config tests.jstd --jstd-output results phantom

will launch PhantomJS and capture it with an already-running JSTD server, then run the tests configured in tests.jstd and save the test results in the results directory. On the other hand, running

python testjs.py \
  --jstd-run --jstd-port 9090 --jstd-config tests.jstd --jstd-output results \
  --sauce-run --sauce-user {user} --sauce-key {api_key} \
  phantom firefox-14-xp opera-12-linux

will launch JSTD on port 9090 (one of the ports proxied by Sauce Connect), launch Sauce Connect, fire up several browsers in the cloud as well as a local PhantomJS instance, and run the tests. (Note that in both of these examples we're assuming that other required configuration values are set in environment variables, for example JSTD_JAR and SAUCE_JAR to indicate the locations of the jar files for those services. See the code for details.)

So, a little bit of Python glue helps us tame the monster, making all these moving parts work together in a nice JavaScript unit-test environment that scales from local tests with PhantomJS all the way up to cloud tests across just about every browser we would want to try. It works great for us at NetWallet, and we hope it will be useful to others as well. So here it is, in all its glory:


Ember/Handlebars template precompilation with Play

One of the features we love about the Play framework is its out-of-the-box support for dealing with assets: compiling and minifying JavaScript, CoffeeScript, and LESS stylesheets. This asset compilation is not without issues; for example, it is not possible to configure the LESS compiler version, nor can you completely configure the Closure Compiler (though this should be fixed in the next version of Play). Still, it's a great system, and we're sure it will only get better as the framework matures.

Since we're using Ember, we wanted to tap into the Play asset-compilation system to precompile our Ember templates on the server before sending them to the client. This has a number of advantages over client-side compilation, allowing us to catch template errors at build time and to eliminate the runtime overhead of template compilation in the client.

The template compiler we created looks at the setting emberEntryPoint, which should be a sequence of directories containing Ember templates (note that this differs from the standard asset compilers, whose entry points are files rather than directories). For each entry-point directory, we compile all the *.handlebars template files and concatenate them into one JavaScript file to be served to the browser.

For example, suppose we have the following hierarchy of files:

app/
  assets/
    templates/
      my_view.handlebars
      widget/
        another_view.handlebars

We set emberEntryPoint to app/assets/templates; the compiled template file will then be served from /assets/javascripts/templates.pre.js (unminified) or /assets/javascripts/templates.pre.min.js (minified). On the client, after loading this script, these templates can be used by simply setting the templateName property on your Ember views, for example:
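The original snippet isn't reproduced here, but the idea is just this. (A tiny Ember stand-in is defined so the sketch is self-contained; on a real page, Ember.View comes from the ember.js library, and the view names are illustrative.)

```javascript
// Stand-in for the real Ember global, just so this sketch runs on
// its own; on a real page ember.js provides Ember.View.
var Ember = { View: { extend: function (props) { return props; } } };

// With the compiled templates script loaded, a view simply names its
// template; the name mirrors the file's path under the entry point.
var MyView = Ember.View.extend({ templateName: 'my_view' });
var AnotherView = Ember.View.extend({ templateName: 'widget/another_view' });
```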

Note that the hierarchy of template files is preserved in the compiled template names, which makes it easy to handle even complicated sets of templates without worrying about name collisions.

To make this all work, we need our asset compiler to do something like the Handlebars precompile script, but running inside Rhino, as the standard Play asset compilers do, and modified to work with the customized version of Handlebars that is embedded in Ember. This requires setting up the Rhino JS context so that Ember will run (inspired by this gist), then loading the Ember library, and finally creating our own precompile function to convert the template function objects created by Ember into strings. We concatenate these and wrap them with some boilerplate that adds the compiled templates to Ember's TEMPLATES cache at runtime.
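The shape of the generated file is roughly the following sketch. (A minimal stand-in replaces the real Ember/Handlebars globals so it runs on its own, and the compiled function bodies are elided; only the registration boilerplate is the point here.)

```javascript
// Stand-in for the real Ember/Handlebars globals so this sketch runs
// on its own; in the generated file these come from ember.js.
var Ember = {
  TEMPLATES: {},
  Handlebars: { template: function (fn) { return fn; } }
};

// Generated templates.pre.js (sketch): each precompiled template is
// registered in Ember's TEMPLATES cache under its path-derived name.
Ember.TEMPLATES['my_view'] = Ember.Handlebars.template(
  function () { /* compiled template function goes here */ });
Ember.TEMPLATES['widget/another_view'] = Ember.Handlebars.template(
  function () { /* compiled template function goes here */ });
```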

Here’s the code:

We just drop EmberCompiler.scala into the project/ directory of our app, make the necessary modifications to our Build.scala script, and Play's build system will pick everything up and start compiling our templates (if you're already in a Play shell, you'll have to tell Play to reload to pick up the changes to the build definition). It's that simple.

We're looking for excellent engineers to join our team here at NetWallet. So if you are passionate about doing great things with technology, drop us a line. And if you're interested in learning more about the technologies we're using, come to our talks at Silicon Valley CodeCamp on Play and Ember.


Simple cross-domain ajax with a wormhole

At NetWallet, we need to embed our application in a host page and communicate securely with our backend servers using cross-domain Ajax. Some of the standard techniques for performing cross-domain requests won't work for us: we can't expect every third party to proxy requests (that would be practically infeasible, not to mention insecure), nor can we use JSONP, because we need to set cookies from our domain to verify user and machine identities. Instead, we'll create a wormhole.

The basic idea is to create an iframe in the host page that loads content from our domain. This is the idea behind the 'post to iframe' trick used by Facebook, among others. As a variant of this technique, we've been using porthole.js to set up an iframe to which we can post messages and from which we can receive messages. This works, but we wanted to wrap this low-level interface in something slightly more convenient. We like jQuery's ajax method, so we'll imitate that. In the end we'll have a communication channel that lets us quickly and easily talk to another domain almost as if it were right here, which is why we call it a wormhole: it bridges time and space!

On the host page, the wormhole frontend, we create an Ember object (did I mention we're using Ember.js?) that sets up the porthole windowProxy and listens for messages from it. This object also has an ajax method that emulates jQuery.ajax, taking a standard settings object. When Wormhole.ajax is called, we create a Deferred to handle the result and store it in a dictionary, indexed by a unique request id. Then we simply send the settings object through the porthole to be handled on the other side, and return a promise that the Deferred will eventually fulfill. When the response comes back through the porthole, Wormhole.onResponse is called; it pops the matching Deferred out of the dictionary and resolves or rejects it, depending on whether the request succeeded or failed.

For convenience, we even handle Ajax requests that are made before the iframe has loaded, by stuffing those requests into a queue and then sending them when we get a message that the porthole is ready. That way clients can use the wormhole immediately after it has been created, without having to wait for it to be established.

To make this all work, the backend page, which is pulled down into the iframe from wormholeUrl, runs a simple script that sets up the porthole windowProxy on its side, sends a ready message, and then listens for requests posted from the frontend. When a request comes in, it fires off a real jQuery Ajax call, with no cross-domain hoops to jump through, and then sends a response message back through the porthole when the request is complete, indicating whether the call succeeded or failed.

Here’s the code:
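The gist isn't reproduced here, but the request/response matching described above can be sketched in a framework-free form. Everything below is a hypothetical reconstruction: `channel` stands in for the porthole windowProxy (anything with a send method and an onMessage hook), and plain Promises stand in for jQuery Deferreds.

```javascript
// Sketch of the frontend half of the wormhole: match responses to
// requests by id, and queue requests made before the iframe is ready.
function createWormhole(channel) {
  var nextId = 0;
  var pending = {};   // deferreds indexed by request id
  var queue = [];     // requests made before the backend signals ready
  var ready = false;

  channel.onMessage = function (msg) {
    if (msg.type === 'ready') {
      // The iframe has loaded: flush anything queued in the meantime.
      ready = true;
      queue.splice(0).forEach(function (req) { channel.send(req); });
    } else if (msg.type === 'response') {
      // Pop the matching deferred and settle it.
      var d = pending[msg.id];
      delete pending[msg.id];
      if (msg.success) { d.resolve(msg.data); } else { d.reject(msg.error); }
    }
  };

  return {
    ajax: function (settings) {
      var id = ++nextId;
      var promise = new Promise(function (resolve, reject) {
        pending[id] = { resolve: resolve, reject: reject };
      });
      var req = { type: 'request', id: id, settings: settings };
      if (ready) { channel.send(req); } else { queue.push(req); }
      return promise;
    }
  };
}
```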

Et voilà! We can now make cross-domain requests through our iframe using the convenient jQuery.ajax interface. We're working on interesting problems like these at NetWallet every day, and we're hiring!

