2012-07-03

Modularizing JavaScript Code with Backbone.js and RequireJS

Lately we have been doing a lot of JavaScript development and have been very happy about the improvements in libraries and frameworks. Long gone are the days of seeing JavaScript as just a necessary evil. Unfortunately, there is still legacy code lying around, even in our codebase.

As an example I present an application we developed: Semantic.hri.fi, which provides a Linked Open Data view of the www.aluesarjat.fi statistics. What we will be looking at is the SPARQL search page and, mainly, its source code.

Even with a quick glance it is easy to spot several issues that would nowadays be handled with a different approach:

  • String prototype is monkey patched with a function that is only used once
  • HTML is constructed in code via string concatenation
  • Application is not properly layered due to rendering happening in back-end data fetches
  • Hard to tell which functions are managing which area of the screen
  • File is very long
  • Global namespace is polluted with several variables

Now that we have identified some of the problem spots, how would we fix them?

String prototype is monkey patched with a function that is only used once

Monkey patching causes compatibility issues and thus is not very sensible. Luckily, an easy fix is to use a plain function that accepts the string as an argument. Another option is to use an external string manipulation library, such as Underscore.string.
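As an illustration of the difference (the `capitalize` helper here is a hypothetical example, not a function from the original source):

```javascript
// Monkey patching: every string in the page now carries this method,
// which can collide with other libraries or future standard methods.
String.prototype.capitalize = function() {
  return this.charAt(0).toUpperCase() + this.slice(1);
};
"sparql".capitalize(); // "Sparql"

// Plain function: same behavior, no global side effects.
function capitalize(str) {
  return str.charAt(0).toUpperCase() + str.slice(1);
}
capitalize("sparql"); // "Sparql"
```

The plain function is just as convenient to call, and it can be moved into a module without leaving any trace on the global String prototype.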

HTML is constructed in code via string concatenation

Instead of manually constructing HTML like this:

//...
html.push("<thead><tr>");
for (var i = 0; i < vars.length; i++){
  html.push("<th>" + vars[i] + "</th>");
}       
html.push("</tr></thead>");
// ..

I would use a templating library such as Handlebars. With Handlebars we would have a template declared in its own file like this:

...
<thead>
  <tr>
    {{#each vars}}
    <th>{{this}}</th>
    {{/each}}
  </tr>
</thead>
...

And the code to populate it with:

var template = Handlebars.compile(tableTemplate);
$('#results').html(template({vars: vars}));

Application is not properly layered due to rendering happening in back-end data fetches

Instead of doing DOM manipulation inside an AJAX success callback like this:

success: function(data){
  var defaultNamespaces = [];
  var bindings = data.results.bindings;
  for (var i = 0; i < bindings.length; i++){
    var binding = bindings[i];
    namespaces[binding["prefix"].value] = binding["ns"].value;
    defaultNamespaces.push("PREFIX ", binding["prefix"].value, ": <", binding["ns"].value, "> </br>\n");
  }
  
  $("#namespaces").html(defaultNamespaces.join(""));
  
  init();
}

I would rather just fire an event informing anyone who is interested that we are done loading. For this we use a common event aggregator called vent, which is just a JavaScript object with Backbone.Events mixed in. With vent in use, I would change the above to something like this:

success: function(data) {
  vent.trigger('query:success', data.results.bindings);
}

This lets the success callback care only about retrieving the data; how the data is processed later on is left to the subscriber(s).
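vent itself is nothing more than `_.extend({}, Backbone.Events)`. To make the pattern concrete, here is a minimal stand-in sketch of such an aggregator in plain JavaScript (not Backbone's actual implementation, which also offers off, once, listenTo and more):

```javascript
// Minimal event aggregator sketch; Backbone.Events provides
// the same on/trigger contract.
var vent = {
  _handlers: {},
  on: function(event, callback) {
    (this._handlers[event] = this._handlers[event] || []).push(callback);
  },
  trigger: function(event) {
    var args = Array.prototype.slice.call(arguments, 1);
    (this._handlers[event] || []).forEach(function(cb) {
      cb.apply(null, args);
    });
  }
};

// A view subscribes without knowing anything about the AJAX layer.
vent.on('query:success', function(bindings) {
  console.log('received ' + bindings.length + ' bindings');
});

// The success callback only publishes.
vent.trigger('query:success', [{prefix: 'foaf'}, {prefix: 'rdfs'}]); // logs "received 2 bindings"
```

The publisher and the subscriber never reference each other directly; both only know about vent, which keeps the data-fetching layer free of rendering concerns.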

Hard to tell which functions are managing which area of the screen

Looking at the code, I have a really hard time understanding which functions are responsible for managing which areas of the screen; they should definitely be bundled somehow. A great way to do this is with Backbone.Views.

I would probably first create a SparqlView, composed of a SearchView (on the left) and a SavedQueriesView. After that I would split SearchView into two separate views, QueryView and ResultsView. Below is a pseudocode example of the structure:

SparqlView
  SearchView
    QueryView
    ResultsView
  SavedQueriesView

Backbone.Views would probably be used for smaller elements as well, but these changes alone would give us a better understanding of the structure, plus all the goodies Backbone.View provides.

File is very long

This can of course be fixed by splitting the file into separate files and loading them separately. Luckily for us, we have already split the code into separate Backbone entities, so moving them into their own files is really easy:

View              Filename
SparqlView        js/views/sparql.js
SearchView        js/views/sparql/search.js
QueryView         js/views/sparql/search/query.js
ResultsView       js/views/sparql/search/results.js
SavedQueriesView  js/views/sparql/saved_queries.js

When a subview has only one parent, I find it good practice to put the subview into a subdirectory named after its parent.

Global namespace is polluted with several variables

We have split the code into separate files but have not really done anything about polluting the global namespace, and our HTML file is full of script sources. I would bring in RequireJS to sort out this mess.

With RequireJS we declare only our application entry point in the HTML:

<script data-main="main" src="js/libs/require/require.js"></script>

data-main specifies which file is used as the entry point; in our case it is main.js, and its contents are something like the following:

require(['app'], function(App) {
  App.initialize();
});

require is a function that takes an array as its first argument and a function as its second. The values in the array are RequireJS modules, which are fed into the function as arguments. So in this case the argument App is the module defined in app.js. Inside the function we call App's initialize method. Let's have a look at app.js:

define(['js/libs/jquery/jquery', 'js/views/sparql'],
  function($, SparqlView) {
  var initialize = function() {
    new SparqlView({el: $('#sparql')});
  };

  return {initialize: initialize};
});

Here we have defined a RequireJS module with two dependencies, jQuery and SparqlView, and we expose only a hash containing one function, initialize. When called, initialize creates a new SparqlView. SparqlView is defined as a RequireJS module, as are all of its subviews.

We now hopefully have a clearer view of how to bring old JavaScript code up to date.

2012-06-21

Form Data Extraction with Backbone.js, Underscore.js and jQuery

Our weapon of choice for building single-page web applications is Backbone.js. Unlike a few of its competitors, such as Ember.js and Knockout, Backbone does not support model binding out of the box. There are some libraries that add model binding support to Backbone: probably first on the block was Backbone.ModelBinding (now discontinued), followed by Backbone.ModelBinder and Backbone Bindings.

Even though these libraries make sense in some cases, we have noticed that usually when starting out, Backbone's default tools (Backbone itself, Underscore.js and jQuery) are enough to get the ball rolling, as demonstrated below.

HTML

<form>
  Name: <input type="text" name="name"/>
  Age: <input type="text" name="age"/>
  <input type="submit" value="Submit"/>
</form>

JavaScript

var UserForm = Backbone.View.extend({
  events: {'submit': 'save'},

  initialize: function() {
    _.bindAll(this, 'save');
  },
  
  save: function() {
    var arr = this.$el.serializeArray();
    var data = _(arr).reduce(function(acc, field) {
      acc[field.name] = field.value;
      return acc;
    }, {});
    this.model.save(data);
    return false;
  }
});

var userForm = new UserForm({el: $('form'), model: new User()});

To understand the actual data extraction I will explain serializeArray and reduce.

serializeArray is a jQuery method that takes a form and constructs an array of all the input elements that have a name attribute. The resulting array is in the following form:

[{name: "name", value: "John Smith"}, ...]

An array of maps is not quite what we are looking for, so we have to do another transformation.

reduce builds up a new value from the array, given a seed value (an empty map) and a function. The function's first argument, acc, is the return value of the previous call, and the second argument, field, is the current element in the array. Now we have something that can be passed to Backbone.Model.save:

{name: "John Smith", age: "34"}
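The whole extraction can be tried out on plain data; here the native Array reduce stands in for Underscore's _(arr).reduce, which behaves the same way in this case:

```javascript
// What serializeArray would produce for the form above.
var fields = [
  {name: 'name', value: 'John Smith'},
  {name: 'age', value: '34'}
];

// Fold the array of {name, value} pairs into a single attributes object.
var data = fields.reduce(function(acc, field) {
  acc[field.name] = field.value;
  return acc;
}, {});

// data is now {name: "John Smith", age: "34"}
```

Note that form values always arrive as strings; parsing age into a number would be an extra step, either here or in the model.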

2012-03-09

Scalagen - Java to Scala Conversion

Scalagen was born out of the idea that we would like to port some of our projects from Java to Scala. Instead of porting all the code manually, a tool could do most of the bulk work. We did some initial experiments with Jatran, but found it lacking.

After that we investigated options for writing such a tool ourselves. First we needed a parser to turn Java sources into abstract syntax trees. We decided to use the Javaparser framework, since it supports Java 5 source and has an easy-to-use API.

Initial Architecture

Then work began on Scalagen. The initial parts, such as a visitor implementation for the Javaparser AST that prints out the tree as Scala syntax, were written in Java. Once that was done, this visitor was used to convert the visitor class itself from Java to Scala. From then on, the implementation of Scalagen was fully Scala-based.

After that we wrote visitor implementations that perform transformations on the AST. The needed transformations included extracting static members into companion objects and changing various control statements into a more Scala-esque form. At first the visitors would simply mutate the original AST from one state to another. This caused problems with Scala's immutable collection types, so we rewrote the transforming visitors to always return new, detached AST versions.

Now the initial architecture was set up and further transformations could be written. After a while we had an impressive list of transformations, each usable via both its class and object variant:

object Primitives extends Primitives

class Primitives extends UnitTransformerBase {
  ...
}

Wrapping the Parser

Then we began to adapt the Javaparser API to be easier to use. We declared short type aliases in a singleton object to strip off redundant suffixes such as Stmt and Expr:

type Annotation = AnnotationExpr
type AnnotationDecl = AnnotationDeclaration
type AnnotationMember = AnnotationMemberDeclaration
type Assign = AssignExpr
type Binary = BinaryExpr
type Block = BlockStmt
type BodyDecl = BodyDeclaration

Then we began to write deconstructors (extractor objects) to make AST pattern matching more concise:

object FieldAccess {
  def unapply(f: FieldAccess) = Some(f.getScope, f.getField)
} 

object For {
  def unapply(f: For) = Some(toScalaList(f.getInit), f.getCompare, 
    toScalaList(f.getUpdate), extract(f.getBody))
}

object Foreach {
  def unapply(f: Foreach) = Some(f.getVariable, f.getIterable, 
    extract(f.getBody))
}

object MethodCall {
  def unapply(m: MethodCall) = Some(m.getScope, m.getName, 
    toScalaList(m.getArgs))
}

And here is something more complex, which provides both a new-less constructor via apply and a deconstructor via unapply:

object VariableDeclaration {
  def apply(mod: Int, name: String, t: Type): VariableDeclaration = {
     val variable = new VariableDeclarator(new VariableDeclaratorId(name))
     new VariableDeclaration(mod, t, variable :: Nil)
   }
  def unapply(v: VariableDeclaration) = 
    Some(v.getType, toScalaList(v.getVars))
}

Since the singleton names matched the type aliases, we had a very Scala-like meta-layer on top of the Javaparser AST classes.

Here is a fairly complex example of how the pattern matching could be used in the transforming visitors.

override def visit(nn: Foreach, arg: CompilationUnit): Node = {
  val n = super.visit(nn, arg).asInstanceOf[Foreach]
  n match {
    case Foreach(
      VariableDeclaration(t, v :: Nil),
      MethodCall(scope, "entrySet", Nil), body) => {
        val vid = v.getId.toString
        new Foreach(
          VariableDeclaration(0, "(key, value)", Type.Object),
          scope, n.getBody.accept(toKeyAndValue, vid).asInstanceOf[Block])    
    }
    case _ => n
  }   
}

This still looks quite cryptic if you are not familiar with the AST structure of Javaparser, but for those who are, it is a fairly intuitive way to match AST patterns.

Packaging

Scalagen provides direct Maven support via a plugin. You can use it directly from the command line like this:

mvn com.mysema.scalagen:scalagen-maven-plugin:0.1.3:main \
  -DtargetFolder=target/scala

and for test sources:

mvn com.mysema.scalagen:scalagen-maven-plugin:0.1.3:test \
  -DtargetFolder=target/scala

Here is the snippet for an explicit configuration in a POM:

<plugin>
 <groupId>com.mysema.scalagen</groupId>
 <artifactId>scalagen-maven-plugin</artifactId>
 <version>0.1.3</version>
</plugin>

To convert main sources run

mvn scalagen:main

and to convert test sources run

mvn scalagen:test

The conversion results should be seen as a starting point for the Java-to-Scala conversion. Some elements are not transformed correctly for various reasons and will need manual intervention.

Finally

Scalagen is an experimental effort and still has lots of rough edges, but we are open to improvement suggestions and stylistic changes.