Spark framework: The tiny framework that almost could

As part of my complete makeover of all my websites, I spent the last few weeks creating an application with the Spark framework, a non-enterprise Java micro web framework. However, I never actually launched it. I came very close, and I’ll get back to why not in a moment.

The Spark framework

The Spark framework is a tiny non-enterprise Java framework that gives you the bare minimum needed to create a web application in just a few lines of code, and not much more. It provides embedded Jetty out of the box, so you can use it without rolling your own app server, which is nice. It also supports Freemarker and JSP templates. In other words, it’s great if you either like building everything from scratch yourself (and actually have the time), or you are creating something very simple.

Just to give an impression of how minimalistic it actually is, and what it provides, here are the features:

  • Routes
  • Request
  • Query maps
  • Response
  • Cookies
  • Session management
  • Halting
  • Filters
  • Browser Redirect
  • Static files (in package and/or external)
  • Freemarker and JSP templates with views
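To give a feel for how little code that amounts to, here is a minimal sketch of a Spark application, assuming the spark-core dependency is on the classpath; the port, paths and response text are arbitrary choices of mine:

```java
import static spark.Spark.*;

public class HelloApp {
    public static void main(String[] args) {
        port(8080);                    // embedded Jetty starts automatically
        staticFileLocation("/public"); // static files served from the classpath
        get("/hello", (req, res) -> "Hello World");
    }
}
```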

My application

And this sounded like exactly what I needed. What I wanted to do was replace a whole bunch of sites with a single tiny Java application. The sites in question are currently based on static HTML, Server Side Includes and a lot of HTTP redirects in Apache, so a micro framework should have been exactly what I needed. I spent a couple of weekends creating a very minimalistic application to handle this. The main class is only about 30 lines of Java code, supported by a few POJOs. In addition, the application includes a whole bunch of Freemarker templates, three JSON files, and a few Bootstrap 3 themes. Each page is rendered through a master template for the given site, with the page template included through it. I also included a fallback for unknown sites, letting me render an “oops” page for any incoming requests to the server that don’t map to a known site. This allows the application to act as a “catch all” backend for Varnish.
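The catch-all behaviour boils down to a lookup with a default. A plain-Java sketch of the idea (the host names and template file names here are hypothetical, and the real application reads them from JSON rather than hard-coding them):

```java
import java.util.Map;

public class SiteResolver {
    // Hypothetical site registry; the real application loads this from JSON
    private static final Map<String, String> MASTER_TEMPLATES = Map.of(
            "example.org", "example-master.ftl",
            "other.example", "other-master.ftl");

    // Fall back to an "oops" template for unknown Host headers,
    // so the application can act as a catch-all backend behind Varnish
    static String masterTemplateFor(String host) {
        return MASTER_TEMPLATES.getOrDefault(host, "oops-master.ftl");
    }
}
```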

The three JSON files define the structure of the sites. One file contains entries for each page of each site, including the sub-template to be used, title, path and any other site-specific settings or content that are used across multiple pages. The second file contains redirects, and the third JSON file simply contains a list of all of my sites, since I want to include that list on multiple sites but maintain it in a single location.
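To make that concrete, an entry in the first JSON file might look roughly like this; the field names are my own guesses for illustration, not the actual format:

```json
{
  "pages": [
    {
      "site": "example.org",
      "path": "/about",
      "template": "about.ftl",
      "title": "About"
    }
  ]
}
```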

Adding a new page to a site simply involves adding a new Freemarker template and giving the page an entry in the first JSON file. Easy as can be. Adding a new redirect simply involves adding an entry to a JSON file. Finally, adding a new site involves adding new entries to the first JSON file and a couple of lines of Java code. The extra couple of lines of Java code should really not be necessary, and I was planning to change this soon, until the problem arose.

The problem

Right before launch I discovered a problem that ended up thwarting my whole effort. While testing the sites before switching DNS entries, I discovered that some 404 errors were rendered as a boring, generic Jetty 404 page. This was not my intention. My initial thought was to create a top-level “splat” route at the lowest priority level to trap any unrecognized incoming requests and spit out a custom 404 based on the site. But that broke static files, since routes always have higher priority than static files in the Spark framework. How about a filter? Include a top-level after filter that checks for 404s and responds based on that? Nope. The after filter is also applied before the static file handler. What about adding a custom 404 page to Jetty? Not possible. The Spark framework doesn’t provide any customization of the built-in Jetty server.
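The failed splat-route attempt looked roughly like this (a hedged sketch assuming spark-core on the classpath; renderNotFoundPage is a hypothetical helper of mine, not Spark API):

```java
import static spark.Spark.*;

public class CatchAllAttempt {
    public static void main(String[] args) {
        // Catch-all route intended to serve a per-site 404 page.
        // Problem: Spark matches routes before static files, so this
        // route also intercepts requests for CSS, JS and images.
        get("*", (req, res) -> {
            res.status(404);
            return renderNotFoundPage(req.host());
        });
    }

    // Hypothetical helper; the real app would render a Freemarker template
    static String renderNotFoundPage(String host) {
        return "<h1>Page not found on " + host + "</h1>";
    }
}
```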

That leaves only one option: dropping the embedded Jetty and deploying the application on an external app server, which completely undermines the whole idea of the application. If that is the only option, why not go for Spring and get all of the Spring magic? Or simply build the whole thing from scratch on top of an embedded Jetty? The whole idea of the project was a simple drop-and-run jar without any external dependencies, configuration or effort. Naturally, what I really should do is fork the project and add the option myself, but with my current TODO list that probably won’t happen for a good ten years or so.

So there you have it. Custom 404 pages combined with static content and embedded Jetty are simply not possible in the Spark framework, which is a shame. So now I’m going to spend time redoing the project in Spring Boot instead.

Conclusion

The Spark framework is a really cool concept. I really like the idea of being able to create non-enterprise applications with a minimal footprint in Java. However, the inability to configure the embedded Jetty server is a big showstopper as far as I am concerned. Until that is fixed, I’ll probably not be using the Spark framework, and will rather make do with the “enterprisey” Spring Boot framework instead.

However, if you really like making everything from scratch yourself (I do, but I’ve discovered that I never finish anything I try to make from scratch, so I’ve more or less given up on avoiding frameworks), and you like non-enterprise Java (yes, it really does exist), you most definitely want to look into the Spark framework. And hopefully you like tinkering, and decide to contribute configuration of the embedded Jetty server to the project (I really wish I had the time).

Finally, Spark is probably really great for prototyping and creating proofs of concept, and I’ll probably be using it for that in the future.

5 Responses

  1. Jason C

    I was looking into doing the same thing today and was struggling to find a solution for a little while. Ended up coming up with this:

    before((req, resp) -> {
        req.attribute("handled", false);
    });

    after((req, resp) -> {
        if (!Boolean.parseBoolean(req.attribute("handled").toString())) {
            // redirect/render 'not found' page content
        }
    });

    Then you just need to be sure to set the “handled” attribute to true whenever one of your route handlers is run (which isn’t too bad really if you leverage some helper methods and/or a subclass to render the responses).

    • Jason C

      And yes, it still allows static files to be served as well – both the before and after filters seem to get skipped entirely in this case.

    • Brendan Johan Lee

Interesting solution. I’ll have to look into this once I have some spare time. It isn’t nearly as elegant as I would have preferred, but if it works I might give it a go.

  2. Rob Eden

    You can run Spark using a manually configured Jetty instance by installing the SparkFilter. The Spark docs tell how to do this with a web.xml, but here’s the code for just coding the mapping:

    WebAppContext webapp = new WebAppContext();
    webapp.setContextPath( "/" );
    webapp.setWar( "/" );
    FilterHolder holder = webapp.addFilter( SparkFilter.class, "/*",
            EnumSet.of( DispatcherType.REQUEST ) );
    holder.setInitParameter(
            "applicationClass", "my.package.MySparkApplication" );
    server.setHandler( webapp );

    Then you can do whatever you want with the Jetty instance.

  3. Dachao

    I was able to flesh out Rob’s suggestion and have an example posted at Spark’s github:
    https://github.com/perwendel/spark/issues/197#issuecomment-213952246

