
TLDR; Downloading a file with RestAssured is as simple as taking the body of a request as a byte array and writing it to a file.

When automating I often have to download files. One very common FAQ for WebDriver is “How do I download a file with WebDriver?”.

How to download a file with WebDriver

Answer: Don’t, use an HTTP Library instead.

You can, by configuring the browser session not to prompt with a save dialog, and then clicking the download button. But usually that is more pain than it is worth, so if I have to download a file, I’ll use an HTTP library.

RestAssured is an HTTP library

RestAssured is marketed as an API testing library. And I do use it to test APIs, as I documented in my book “Automating and Testing a REST API”. I also use it as a general purpose HTTP library, and that is really how I am describing it here.

Basic code to download a file

I’ve extracted the following code from an actual set of Tactical Automated Execution code that I wrote recently.

private void writeImageIfNotExists(
                final PostDetails postDetails, 
                final File outputImagePath,
                final String image_url,
                final String fileNameMain,
                final String fileNamePostExtDetails) throws IOException {
    File outputImageFile = new File(outputImagePath.getPath(), 
                            fileNameMain + fileNamePostExtDetails);
    if (!outputImageFile.exists()) {
        Map<String, String> cookies = new HashMap<>();
        cookies.put("session_id", Secret.SESSION_ID);

        byte[] image = RestAssured.given().
                            cookies(cookies).
                        when().
                            get(image_url).
                        andReturn().asByteArray();

        // output image to file
        OutputStream outStream = new FileOutputStream(outputImageFile);
        outStream.write(image);
        outStream.close();
    }
}

The above is pretty hacky (but it is tactical, which means I wrote it for a specific purpose and may be short lived).

It basically creates a File object from a path and file name Strings.

If the file doesn’t already exist then:

I create a HashMap of cookies. Then I add one cookie, a session_id, because I’m bypassing the login process to rip out data from the system. If I was downloading a file during GUI automating then I might rip the cookie from the WebDriver session and inject the details into my RestAssured session. In the above example I copied and pasted the session id from the browser because it was tactically supporting me doing some work.

Then I make a call using RestAssured:

byte[] image = RestAssured.given().
                    cookies(cookies).
                when().
                    get(image_url).
                andReturn().asByteArray();

This makes a GET request and returns the body as a byte array.

Which I can then write to the output file.

OutputStream outStream = new FileOutputStream(outputImageFile);
outStream.write(image);
outStream.close();

This is the basic code to download a file but isn’t necessarily the best example.
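The core pattern is independent of RestAssured: fetch the body as bytes, then write those bytes to a file if it does not already exist. Here is a JDK-only sketch of the file-writing half; the DownloadSketch class and its method names are mine for illustration, and in real use the byte array would come from RestAssured’s asByteArray() or the JDK’s HttpClient.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class DownloadSketch {

    // write an already-fetched response body to a file, skipping the
    // write if the file already exists - mirroring the
    // writeImageIfNotExists logic above
    public static boolean writeIfNotExists(byte[] body, Path outputFile) {
        try {
            if (Files.exists(outputFile)) {
                return false;
            }
            Files.write(outputFile, body);
            return true;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        // in real use these bytes come from the HTTP library,
        // e.g. RestAssured's asByteArray()
        byte[] fakeBody = "pretend-image-bytes".getBytes();
        Path out = Path.of(System.getProperty("java.io.tmpdir"),
                "download-sketch-" + System.nanoTime() + ".png");
        System.out.println(writeIfNotExists(fakeBody, out)); // true - written
        System.out.println(writeIfNotExists(fakeBody, out)); // false - already there
    }
}
```

The existence check is what makes re-runs of a tactical script cheap: already-downloaded files are skipped.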

A Better Example of How to Download a File

I have created a slightly longer example and added it to my LibraryExamples project on GitHub.

You can read the code to see full details, or watch the explanatory video.

Summary though (see sample code below the summary):

  • I pass in a map of cookies and headers to allow me to authenticate easily
  • rather than return the body as a byte array, I return the Response,
    • this allows me to check that the url actually exists first, with a 200 status, which makes the whole process more reliable long term
  • I can still convert the body to a byte array when I know it exists
    • response.getBody().asByteArray()
  • If I wanted a very flexible solution, I wouldn’t assume an extension and I would use the Content-Type header to assign an extension, but in this example I just output the header to the console so you can see how to get it
    • response.getHeader("Content-Type")
  • The output file writing is a little more robust in that it catches any exceptions and reports the error.
  • If I was automating strategically I would use code more like the following, and gradually refactor it into a library to ever more generically support my strategic automating approach, e.g.
    • parameterise whether to delete the file or not
    • add the extension based on content type
    • return a boolean to show whether it downloaded correctly
    • support finding out what exception happened if it did not download correctly
    • etc.
private void downloadUrlAsFile(
                final Map<String,String> cookies,
                final Map<String,String> headers,
                final String urlToDownload,
                final File outputPath,
                final String filename) throws IOException {

    File outputFile = new File(outputPath.getPath(), filename);

    final Response response = RestAssured.given().
                                cookies(cookies).
                                headers(headers).
                              when().
                                get(urlToDownload).
                              andReturn();

    // check if the URL actually exists
    if(response.getStatusCode() == 200){

        if (outputFile.exists()) {
            outputFile.delete();
        }

        System.out.println("Downloaded an " + response.getHeader("Content-Type"));

        byte[] fileContents = response.getBody().asByteArray();

        // output contents to file
        OutputStream outStream=null;

        try {
            outStream = new FileOutputStream(outputFile);
            outStream.write(fileContents);
        }catch(Exception e){
            System.out.println("Error writing file " + outputFile.getAbsolutePath());
        }finally {
            if(outStream!=null){
                outStream.close();
            }
        }
    }
}
Step By Step Video Explaining the Code
How to download a file with RestAssured - YouTube

TLDR; Using Visual SVN, svnserve and local SVN repositories I was able to easily convert SVN to Git on Windows.

I hit every error possible when converting SVN to Git. I eventually figured out the simplest way to avoid errors during the conversion process.

Migrate SVN to Git from local file repositories

To migrate from SVN to Git I found it most reliable to use a local SVN server: having migrated from the remote SVN repository using an svn dump file, I then used svnserve so that git svn could convert the repository over the svn protocol.

svnserve -d --foreground -r .
  • then I can use git svn to clone the repo with the svn protocol
  • git svn clone svn:// -s
  • then I can add a remote server and push it to my local repo

This avoids all the errors I hit elsewhere: cpan core errors, svn file format versions above 1.6, malformed XML over HTTP, and slow HTTP connections.

This just works.

Step By Step Video Showing Conversion Process
How to convert Svn To Git Using svnserve, VisualSVN, svnadmin dump, and git svn - YouTube

TLDR; Bonobo is a free and simple to install Git Server for windows.

The Bonobo git server install page instructions don’t fully match the process I had to use to install, so I’ve documented the process here.

I will show you the steps to install a local Git server on Windows 10.

Install Steps for Git Server

The Git server used is Bonobo Git Server, which runs on IIS on Windows.



  1. use Turn Windows Features On and Off to install:
     • Internet Information Services > Web Management Tools > IIS Management Console
     • Internet Information Services > World Wide Web Services > Application Development Features > ASP .NET 4.7
     • Internet Information Services > World Wide Web Services > Common HTTP Features > Static Content


  1. download the zip file
  2. unarchive the zip file
  3. copy the contents of the zip file folder to C:\inetpub\wwwroot
  4. change the security properties of the App_Data folder to allow modify access to the IIS user
  5. check Anonymous Authentication is enabled
  6. visit http://localhost/Bonobo.Git.Server
  7. create a user
  8. amend settings to “allow user repository creation” and “allow push to create repositories”
  9. create a repo
  10. push repo to your server
useful git and shell commands used
git status
vi readme.md
git init
git status
git add -A
git commit -m "my first commit"
git status
git remote add origin http://servernameorip/Bonobo.Git.Server/test.git
git push -u origin master
Step By Step Video Showing Install of Bonobo Git Server
Install Free Git Server on Windows using IIS and Bonobo - YouTube
  • Show original
  • .
  • Share
  • .
  • Favorite
  • .
  • Email
  • .
  • Add Tags 

TLDR; Coding tips for beginners: write your code as temporary comments first, and remove syntax errors as soon as you see them.

Here are a few tips I’ve been verbally passing on when teaching people on my Java For Testers face to face training.

  • write the code you want to see as comments first
  • remove syntax errors as soon as you see them
Write the code you want to see as comments first

Create a code comment, e.g.

iterate over the list and print the name of each object in the list
and assert that when I call getAge for each object it is greater than 18

The reason for doing this is that learning Java is hard enough:

  • which shortcut keys in the IDE do I use?
  • what was that loop construct again?
  • how do I get the age?
  • what does the if statement look like?

You are trying to remember a whole bunch of stuff.

Writing down what you are trying to achieve means that you don’t have to keep that in your head at the same time.

Eventually you will stop doing this. And you will want to delete the comments when you are finished.

But I’ve seen this help people, because it stops them getting too lost.
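The comment-first example above might end up looking like this once the code is filled in underneath each comment; the Person record is a hypothetical stand-in for whatever class the training exercise uses:

```java
import java.util.ArrayList;
import java.util.List;

public class CommentsFirstExample {

    // a hypothetical Person class for illustration, not from the original post
    record Person(String name, int age) {}

    // the comments below are the "code I want to see", written first;
    // the code was then filled in underneath each comment
    static List<String> namesOfAdults(List<Person> people) {
        List<String> names = new ArrayList<>();
        // iterate over the list and print the name of each object in the list
        for (Person person : people) {
            System.out.println(person.name());
            // and assert that when I call getAge for each object it is greater than 18
            if (person.age() <= 18) {
                throw new AssertionError("expected age > 18 for " + person.name());
            }
            names.add(person.name());
        }
        return names;
    }

    public static void main(String[] args) {
        namesOfAdults(List.of(new Person("Bob", 21), new Person("Dee", 34)));
    }
}
```

Once the code works, the comments have done their job and can be deleted.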

Remove syntax errors as soon as you see them

When people don’t do this, they end up writing a bunch of code and then none of it works, and it can be hard to resolve.

As soon as you see a syntax error, fix it, to allow you to write code and harness code completion.

Sometimes that means I’ll write "" because I just want the syntax error to go away and I haven’t decided on the data yet.

Sometimes that means I’ll pass in null as the argument, because I don’t know what it should be yet, but I know it needs to be there.

Then when the line of code is syntactically correct, I make it semantically correct.

This also helps when you are using the IDE to write code with “Alt+Enter”, the more syntactically correct it is, the more the IDE will generate the code you want to see.

And using the IDE to write your code can help avoid syntax errors.
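A minimal sketch of the placeholder-first idea; describeUser is a hypothetical method, not from the training material:

```java
public class PlaceholderFirst {

    // a hypothetical method under construction, for illustration only
    public static String describeUser(String name, Integer age) {
        return name + " is " + age;
    }

    public static void main(String[] args) {
        // step 1: make the call syntactically correct with placeholder
        // values, so the syntax error goes away and code completion
        // keeps working
        String placeholder = describeUser("", null);

        // step 2: make it semantically correct once the data is decided
        String real = describeUser("Bob", 21);

        System.out.println(placeholder);
        System.out.println(real); // Bob is 21
    }
}
```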

Two Tips

Those are the first two tips that come to mind.

I’ll try to keep a list next time I do training as simple things can avoid the cognitive load associated with learning to code.


TLDR; Rather than migrating your assertions line by line, create an abstraction class to represent the new implementation and then perform an inline refactoring.

I’m experimenting with migrating my projects to JUnit 5.

Many of the “how to migrate to JUnit 5” blog posts show differences, but not a lot of strategies. I used a Branch By Abstraction strategy to migrate JUnit 4 Assertions. This allowed me to experiment with using JUnit5 assertions or AssertJ assertions.

Differences between JUnit 5 and JUnit 4

The main differences between JUnit4 and JUnit5 seem to be: annotations, rules, and assertions.

  • If I find and replace, then I can change the annotations.
  • For assertions, if they are statically imported, I could find and replace the static import statements.

But assertions have the issue that in JUnit5 the messages have to go at the end of the parameter call list.


JUnit 4 would be:

Assert.assertNotNull("name should not be null", name);

A direct conversion to JUnit 5 would be:

Assertions.assertNotNull("name should not be null", name);

But that would be incorrect, because it means: assert that the String “name should not be null” is not null, and if it is, show the error message in the variable name. Which is the opposite of the JUnit 4 test.

Assertions are the main issue I have to solve in my migration.

Useful Articles, but Not Viable Strategies

The following articles are all useful, but when they do present a strategy for migrating, that strategy involves a lot of manual work to check the assertions.

None of these seem to present a strategy I like for migrating to JUnit 5.

Going line, by line, while your code is ‘broken’ doesn’t seem like an effective approach.

Basic Steps

The first steps I took were:

  • Commenting out JUnit 4 in the pom.xml
  • Adding JUnit 5 with the backwards compatibility functionality in the vintage engine

This was a quick view of “What doesn’t work”.

For more complicated projects we might also need to add the migration support package.

You can find all the Migration tips on the JUnit 5 site

Assertion Abstractions

Since assertions are the main issue I face, I thought I might investigate assertion libraries; after all, if I’m going to have to change all the assertions, perhaps I should change them all to use an assertion library.

Perhaps migrate to AssertJ or Hamcrest first?


AssertJ uses an Assertions package and has a fluent interface, so it might even be easier for beginners because of the code completion.

AssertJ has ‘soft’ assertions to evaluate all assertions and report every failure even when one fails. This functionality is in JUnit 5 as well, but I don’t need to use JUnit 5 Assertions to get it.

AssertJ has migration scripts to help with migrating. I would have to read these to evaluate them first.


I have used Hamcrest before, but stuck with JUnit 4 because it was simpler and had more code completion.


Branch By Abstraction

Since I want to evaluate the different libraries, I could use a Branch by Abstraction approach.

Currently my code has

Assert.assertNotNull("name should not be null", name);

I only use a subset of “Assert” so I could easily create an “Assert” class which allows me to migrate to JUnit 5 without changing my code e.g.

package uk.co.compendiumdev.junitmigration.tojunit5;

import org.junit.jupiter.api.Assertions;

public class Assert {
    public static void assertNotNull(final String message, final String actual) {
        Assertions.assertNotNull(actual, message);
    }
}
If I had the above then I could simply import that into my Test Class, with no need to change the code in my Test Class, and then I would be using JUnit 5 Assertions.

If I used this throughout my code base, then instead of working on assertion migration line by line, I could use IntelliJ to perform an inline refactor for each method and it would change my JUnit 4 code from

Assert.assertNotNull("name should not be null", name);

and change it to JUnit 5

Assertions.assertNotNull(name, "name should not be null");

I haven’t seen this strategy mentioned in any of the migration blogs that I read.
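To make the argument reordering visible without any JUnit dependency, here is a dependency-free sketch of the same wrapper idea; the JUnit 5 delegation is simulated by a local method, and AssertBridge is an illustrative name:

```java
public class AssertBridge {

    // JUnit 4 style signature: message first (what the existing tests call)
    public static void assertNotNull(final String message, final Object actual) {
        // delegate with JUnit 5 argument order: actual first, message last
        junit5StyleAssertNotNull(actual, message);
    }

    // stands in for org.junit.jupiter.api.Assertions.assertNotNull(actual, message)
    static void junit5StyleAssertNotNull(final Object actual, final String message) {
        if (actual == null) {
            throw new AssertionError(message);
        }
    }

    public static void main(String[] args) {
        assertNotNull("name should not be null", "Bob"); // passes

        try {
            assertNotNull("name should not be null", null);
        } catch (AssertionError e) {
            System.out.println(e.getMessage()); // the message, not the value
        }
    }
}
```

The wrapper keeps the old call sites compiling while the delegation target decides which library actually runs the assertion.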

Branch By Abstraction to AssertJ

Because this is a Branch by Abstraction, I could experiment with different implementations. I could create an AssertJ implementation of my Assert abstraction:

package uk.co.compendiumdev.junitmigration.toassertj;

import static org.assertj.core.api.Assertions.assertThat;

public class Assert {

    public static void assertNotNull(final String message, final String actual) {
        assertThat(actual).as(message).isNotNull();
    }
}
By changing the imports I could switch between AssertJ or JUnit5 without changing my test code until I am happy to settle on one of the approaches.

I could clearly make this even more flexible by coding to an interface and having the Assert wrapper be a configurable factory which switches between the two, but I don’t need that degree of complexity.

Branch By Abstraction Strategy for Assertion Migration

This seems to be a simpler approach to assertion migration than I’ve seen mentioned.

This approach is open to me even if I statically import the methods, because I can statically import the methods from my abstraction class instead of the main library.

I’m not sure why this isn’t the most communicated migration approach.

I created a video showing this approach in action below, and you can see all the code on github:

And the Video

Migrate Junit 4 To Junit 5 Assertions Using Branch By Abstraction - YouTube

TLDR; Apply MVP principles when coding. Code to the API first. The API is internal before it is external. Unit tests with classes. In code testing with classes in combination. In code API testing. External HTTP API testing. And then, if necessary: in memory and in process HTTP API testing, and a GUI.

A long time ago, in a town which I no longer live in, I wrote a tool called Compendium-TA

Commercially that was a disaster: it was self funded, it took a long time to write and I made some poor technology decisions.

I learned MVP and API First Thinking the hard way. I’ll try and explain within.

One of the (many) things I learned was the need to create a Minimum Viable Product. Build the MVP and sell the MVP. This is now well known due to books like The Lean Startup, and The Startup Owner’s Manual. Sadly I was building my tool in 2003 and the lessons were not as widely propagated then, so I had to learn this from experience by not writing an MVP and failing as a result.

I think most people now know about MVP. But I’m not convinced that people incorporate the MVP philosophy into their development approach.

Build the API first

As an example. I go to a lot of conferences. At the conferences I speak to the tool vendors. I inevitably ask the vendors if they have an API.

Often they do, but it doesn’t cover all the functionality of the product. Sometimes they will say that “the API is coming later, the functionality is available in the GUI”.

I get nervous at this point because an API is easier and faster to explore from a testing perspective, and most of these vendors are test tool vendors. So they aren’t building their tools in a way that makes it easy for them to test the tool.

And… that isn’t even the main point about MVP and API first thinking.

The API you build first isn’t a User API

When I talk to people about building the API first they often think in terms of a REST API, or an SDK.

One of the lessons I learned while building Compendium-TA was to build an API first.

In Compendium-TA I did this by adding the Microsoft Scripting Engine into the product. This allowed me to create internal classes and code that were exposed as an API. I could then use those from the Scripting interface to experiment with the functionality from an end user perspective without writing the full GUI. It also allowed the few actual customers I had, to extend the tool for their needs.

The first API supports building

I still carry this concept through to this day. Many of my custom tools are actually quick sets of code, and they are triggered by an @Test method, and the IDE is my GUI.

Some of my tools never move beyond this level of MVP. And I’m the sole user. They are almost pure API at a code level where the API are a set of classes.

Lessons learned revisiting Compendium-TA

I look back on Compendium-TA with fondness. It did things I still have no tool to replace.

But I built it on dying technology VB6 and when I ran out of funds I didn’t have time to rebuild it or carry it forward.

I looked at it recently and with hindsight I can see that most of the tool is GUI. Much of the complexity was making the GUI scalable for different resolutions and responsive to events. I had to create my own event publish and subscribe mechanism to make it work well.

The actual core code was quite simple.

So I’ve started to rewrite portions of it - using MVP and API first thinking.

MVP API thinking

Compendium-TA had the following main functional areas:

  • user defined entity and relationship management
  • graph based diagramming and modelling using the entities
  • scripting engine
  • custom reporting and templating engine

I’ve started with the user defined entity and relationship management since that was the core of the app.

In essence it was a limited implementation of E-R modelling.

Class Level APIs

I explored this in my new code:

  • TDD of simple classes to represent the main concepts
    • entity,
    • relationship,
    • entity definition,
    • relationship definition
    • etc.

The basic classes I create are the initial API that I worked with.

I was able to make sure that I could:

  • create entity definitions,
  • instantiate the entities as instances
  • define relationships
  • create relationships between instances

And I got a feel for the viability of the approach, and saw whether I needed a complicated set of base classes. I experimented with some methods and approaches that I didn’t use, and I cut out a lot of code that seemed too complex for an MVP going forward.

e.g. at the moment I only support 1:M relationships, so if I want an M:M relationship then I create two relationships. This seems like a valid trade-off because it cuts down the complexity of the code and I can still create viable E-R models.
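As a sketch of that trade-off, modelling M:M as two 1:M relationships can be as simple as keeping two one-to-many link maps; the names here are illustrative, not the actual project API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class RelationshipSketch {

    // one named 1:M relationship: a "from" instance id maps to many "to" ids
    static class OneToMany {
        private final Map<String, Set<String>> links = new HashMap<>();

        void connect(String from, String to) {
            links.computeIfAbsent(from, k -> new HashSet<>()).add(to);
        }

        Set<String> connectionsFrom(String from) {
            return links.getOrDefault(from, Set.of());
        }
    }

    public static void main(String[] args) {
        // "project has many todos" and "todo has many projects":
        // two 1:M relationships which together behave like M:M
        OneToMany tasksOf = new OneToMany();
        OneToMany projectsOf = new OneToMany();

        tasksOf.connect("project-1", "todo-1");
        projectsOf.connect("todo-1", "project-1");
        tasksOf.connect("project-2", "todo-1");
        projectsOf.connect("todo-1", "project-2");

        System.out.println(projectsOf.connectionsFrom("todo-1").size()); // 2
    }
}
```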

@Test methods to explore the classes in combination

Because I was experimenting and not sure what I wanted to do, I used a loose form of TDD for the next exploration.

I wrote @Test methods to use the classes in combination and essentially built a DSL that helps me create a scenario in code.

This isn’t always elegant, and I can see that longer term I’m going to have to refactor the interfaces on the objects, but as an MVP the API allows me to get the job done and explore the concepts in more detail.

As an example, here is an @Test I used to explore entity instance creation.

public void createAmendAndDeleteATodoWithAGivenGUID(){

    Thing todos = todoManager.getThingNamed("todo");

    int originalTodosCount = todos.countInstances();

    String guid="1234-12334-1234-1234";

    ThingInstance tidy = todos.createInstance(guid).
            setValue("title", "Delete this todo").
            setValue("description", "I need to be deleted");

    ThingInstance foundit = todos.findInstance(guid);

    Assert.assertEquals("Delete this todo", foundit.getValue("title"));

    // delete the instance, then the count should be back to the original
    todos.deleteInstance(guid);

    Assert.assertEquals(originalTodosCount, todos.countInstances());

    try{
        foundit = todos.findInstance(guid);
        Assert.fail("Item already deleted, exception should have been thrown");
    }catch(Exception e){
        // expected - the instance no longer exists
    }
}
It isn’t particularly good TDD, e.g. I have asserts in the middle and the test does a lot of stuff.

But it allows me to explore:

  • the objects I need
  • the methods they have
  • what methods do I use in sequence to ‘do’ something
  • do I need to throw exceptions or should I rely on returned objects?
  • etc.
In Code API Testing

I realised that with a basic E-R model I could create a CRUD application fairly easily and if I was able to support an HTTP API then I might be able to automatically generate a simple REST API system from a very basic model.

To experiment with this I started to create “in code API tests”.


I don’t have an HTTP server. I have the basic concept of PUT/GET/POST etc., the concept of a url path, and a json ‘body’.

And I can write in code tests to explore the CRUD functionality as though it were driven by an HTTP API.

I started with GET because I knew that if I got that working I would be able to create an app with canned data that could be explored with GET requests and it might be possible to use the tool as a front end to “other people’s data”.

So I created relatively ugly code to drive a querying engine:

// project/_GUID_/todo/category
query = todoManager.simplequery(

Assert.assertEquals(1, query.size());

Having done that, I then wrapped the query engine in an api object to help me explore the API concept:


ApiResponse apiresponse = todoManager.api().get(
     "todo/" + paperwork.getGUID());
Assert.assertEquals(200, apiresponse.getStatusCode());

I gradually expanded this to cover GET/PUT/POST.
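The shape of such an “in code” API layer can be sketched without any HTTP server at all; this is an illustrative reimplementation, not the actual project code:

```java
import java.util.HashMap;
import java.util.Map;

public class InCodeApiSketch {

    // a response object carrying an HTTP-like status code and body,
    // with no actual HTTP involved
    static class ApiResponse {
        private final int statusCode;
        private final String body;

        ApiResponse(int statusCode, String body) {
            this.statusCode = statusCode;
            this.body = body;
        }

        int getStatusCode() { return statusCode; }
        String getBody() { return body; }
    }

    // the "in code" API: url paths map to stored json bodies
    static class Api {
        private final Map<String, String> store = new HashMap<>();

        ApiResponse get(String url) {
            String found = store.get(url);
            return found == null ? new ApiResponse(404, "")
                                 : new ApiResponse(200, found);
        }

        ApiResponse post(String url, String json) {
            store.put(url, json);
            return new ApiResponse(201, json);
        }
    }

    public static void main(String[] args) {
        Api api = new Api();
        System.out.println(api.get("todo/123").getStatusCode()); // 404
        api.post("todo/123", "{\"title\":\"a todo\"}");
        System.out.println(api.get("todo/123").getStatusCode()); // 200
    }
}
```

Because the layer speaks in status codes and bodies, the same tests keep working when a real HTTP server is later wrapped around it.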

And while the tests might be ugly, they do provide a lot of coverage that allows me to make changes in the back end without the overhead of amending a lot of TDD based tests.

// DELETE a Relationship
// DELETE project/_GUID_/tasks/_GUID_
numberOfTasks = myNewProject.connections("tasks").size();

apiresponse = todoManager.api().delete(
      "project/" + myNewProject.getGUID() + "/tasks/" + paperwork.getGUID());

Assert.assertEquals(200, apiresponse.getStatusCode());

Assert.assertNotNull("Task should exist, only the relationship should be deleted",
      todos.findInstance(FieldValue.is("guid", paperwork.getGUID())));

Once I’m comfortable with the interfaces at this level of testing, I drop down to TDD and design the underlying classes in more detail.

Experiment with a User focussed API

It was surprisingly simple to create a REST API for this.

Since I already have a DSL for the actual E-R model, I was able to automatically create the routings for the model to create a REST API.

Part of a model

Thingifier todoManager = new Thingifier();

Thing todo = todoManager.createThing("todo", "todos");
todo.definition()
        .addFields(Field.is("title", STRING), Field.is("description",STRING));

Thing project = todoManager.createThing("project", "projects");
project.definition()
        .addFields(Field.is("title", STRING), Field.is("description",STRING));

todoManager.defineRelationship(
  Between.things(project, todo), 
  AndCall.it("tasks"), WithCardinality.of("1", "*"));

I was able to generate the routings in Spark

e.g. here’s an excerpt

for(Thing thing : thingifier.getThings()){

    final String aUrl = thing.definition().getName().toLowerCase();

    // we should be able to get things
    get(aUrl, (request, response) -> {
        ApiResponse apiResponse = thingifier.api().get(aUrl);
        response.status(apiResponse.getStatusCode());
        return apiResponse.getBody();});

    // we should be able to create things without a GUID
    post(aUrl, (request, response) -> {
        ApiResponse apiResponse = thingifier.api().post(aUrl, request.body());
        response.status(apiResponse.getStatusCode());
        return apiResponse.getBody();});

    options(aUrl, (request, response) -> {
        response.header("Allow", "OPTIONS, GET, POST");
        return "";});

    delete(aUrl, (request, response) -> {
          response.status(405); return "";});
}
I have notes in a // TODO comment to push this back a level and have the routings generated at the “in code” API level rather than the Spark level (the HTTP API), as that will make it simpler to test at the code level in the future.

But I’m experimenting, and pushing it to HTTP allows me to interactively explore it via Insomnia immediately and get a feel for what it is like to work with as a REST API.

In memory and process HTTP API testing

My next ‘testing’ step would involve me spinning up the Spark app in memory and from within the same @Test sending through HTTP requests.

i.e. I would do HTTP Integration testing without packaging up or deploying the application.


Then I would start on a GUI based around the HTTP messages.

But I might not

I’ve gone quite far with the exploration quite quickly and now I have a much better idea of what I can use this for.

I might not take it to the GUI level.

There already exist other tools which do this better for GUI work:

  • Jeddict seems like a very capable tool that I will investigate to see if the MVC stack allows me to create GUI based apps quickly.
  • Evolutility seems interesting - particularly the new version, but I probably wouldn’t use it because of the technology stack it targets.

Part of the reason for building an MVP is to help you investigate and explore options without committing too much time and money into it. And help you evaluate your MVP against other products.

MVP and API Thinking Changed How I Code

MVP and “API First” thinking has influenced how I code.

I do try to write expressive code so that I can ‘see’ how the design of my classes will work together.

And I don’t always stick to a rigid class based TDD approach. I jump between low level TDD and higher level BDD/TDD/“in code” design based work.

This helps me when I’m writing applications, but also when I’m writing code to automatically execute applications. Often I’m designing the code and abstractions through usage, because they are automating other apps that would require too much work to ‘mock out’, so we execute against them directly.

Hope this helps.

All the code, in its uglified beauty is available on Github. It is a work in progress. It is an MVP.


Interestingly, I note that although I’ve written a main method, I run this from the IDE. I haven’t written any manifest instructions into the pom.xml, and yet I have tested it as a REST API over HTTP from Insomnia. MVP thinking.


  • The API for an MVP does not have to be aimed at the end user.
  • Treat the code and classes that you are building as an MVP and iterate.
  • Use your initial code to explore a problem.
  • Use your Tests in your TDD process to help you design and explore the solution, as well as the code.
  • Build your API incrementally:
    • class focussed unit tests
    • in code testing with classes in combination
    • in code API testing
    • external HTTP API Testing using Insomnia or Postman
    • in memory and process HTTP API testing
    • then add an end user GUI using the API

TLDR; Learn one programming language and you have already learned parts of other languages. You can speed up learning other languages by learning the differences.

I wrote a bunch of code in Java in my Test Tool Hub for generating CounterStrings.
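For context, a counterstring is a self-describing string in which each * is preceded by its own 1-based position, so you can read the length of a truncated string from its content. A minimal Java sketch (not the actual Test Tool Hub code):

```java
public class CounterStringExample {

    // build the counterstring right to left: each '*' is preceded by its
    // 1-based position in the final string
    public static String create(int length) {
        StringBuilder out = new StringBuilder();
        int remaining = length;
        while (remaining > 0) {
            String chunk = remaining + "*";
            if (chunk.length() > remaining) {
                // no room left for the digits, pad the front with '*'
                chunk = "*".repeat(remaining);
            }
            out.insert(0, chunk);
            remaining -= chunk.length();
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(create(10)); // *3*5*7*10*
    }
}
```

Because each chunk is placed at its own eventual position, the generated string is always exactly the requested length.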

I thought it would be useful to have it online and written in JavaScript.

JavaScript and Java have a lot of similarities. So many similarities in fact that I was able to copy and paste the code, and then make minor tweaks without too much thinking.

If you already know Java then learning these similarities and differences might help learn JavaScript faster.


Both JavaScript and Java:

  • use { and } as code block delimiters
  • use ; to end statements
  • have a Math library e.g. Math.pow
  • have if, do...while (pretty much same syntax)
  • have return statements
Differences

Classes are Functions

JavaScript now has a class object, but I tend not to use it.

In JavaScript an object can be created by using the new keyword with a function call, e.g. new FunctionCall().

So Classes are really functions.

e.g. here is a JavaScript function representing an object

function CounterStringRangeStruct(space, minX, maxX) {
    this.spaceValueInRangeTakes = space;
    this.minValInRange = minX;
    this.maxValInRange = maxX;
}

And here is a similar Java Class.

public class CounterStringRangeStruct {
    public final int spaceValueInRangeTakes;
    public final int minValInRange;
    public final int maxValInRange;

    public CounterStringRangeStruct(int space, int minX, int maxX) {
        this.spaceValueInRangeTakes = space;
        this.minValInRange = minX;
        this.maxValInRange = maxX;
    }
}

They are different: I haven’t represented the final notion in the JavaScript. But you can see the similarities.

Both languages would instantiate an object with the same syntax:

new CounterStringRangeStruct(5, 1, 2);
Methods are Functions

Methods are really functions within functions in JavaScript.

e.g. an append method on StringCounterStringCreator

function StringCounterStringCreator(){

    this.string = "";

    this.append = function(nextPart) {
        this.string = this.string + nextPart;
    };
}

And here is a similar Java Method.

public class StringCounterStringCreator implements CounterStringCreator {

    private final StringBuilder string;

    public StringCounterStringCreator(){
        string = new StringBuilder();
    }

    public void append(String nextPart) {
        string.append(nextPart);
    }
}

To convert my classes from Java to JavaScript I basically:

  • copied and pasted the Java code into a JavaScript file
  • converted the classes into functions
  • where my classes had constructors, incorporated the constructor code into the main function
  • converted all methods into functions on the main function
  • removed all the Java types from the JavaScript
  • added ; after all function definitions
  • converted my Java lists into JavaScript arrays
  • fixed syntax errors

Some code I had to change:


  • JavaScript uses unshift to add an item to the start of an array
  • be careful about types - since JavaScript is dynamically typed, I have to be careful about the code I write
    • for the app I was converting this meant that I had to enforce some integer division with Math.floor(x/y)
  • converting numbers to strings is simpler in JavaScript: number.toString()
  • Java’s String .length() method is a property in JavaScript: .length
  • remember to add this. in JavaScript, otherwise a local variable is assumed and used

But the code was much the same.

You can see code similarities if you look at my JavaScript Counterstring code:

And the Java code it is based on was:

The class names match the function names in the JavaScript.
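For context, a counterstring is a self-documenting string: each number records the position of the separator that follows it, so any truncation of the string tells you its own length (e.g. a counterstring of length 10 is *3*5*7*10*). The classes above aren’t shown in full, so here is a minimal standalone Java sketch of the idea; the class and method names are mine, not the repo’s:

```java
public class CounterStringSketch {

    // Build a counterstring of the given length, e.g. length 10 -> "*3*5*7*10*"
    public static String create(int length, char separator) {
        // build the string reversed, appending each position number reversed
        StringBuilder reversed = new StringBuilder();
        int position = length;
        while (position > 0) {
            String part = separator +
                    new StringBuilder(String.valueOf(position)).reverse().toString();
            if (part.length() > position) {
                part = part.substring(0, position); // truncate the final partial entry
            }
            reversed.append(part);
            position -= part.length();
        }
        return reversed.reverse().toString();
    }

    public static void main(String[] args) {
        System.out.println(create(10, '*'));          // *3*5*7*10*
        System.out.println(create(10, '*').length()); // 10
    }
}
```

Working backwards from the target length means each separator lands exactly at the position its preceding number claims.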

Differences That Bite

Differences that bite me because I constantly forget.

Polymorphic Methods

JavaScript does not have polymorphic method declarations, e.g.

function generateCounterString(){
    var howlong = Number(document.getElementById('lengthOfCounterString').value);
    // ...
}

function generateCounterString(howlong){
    var separator = document.getElementById('csseparator').value;
    var counterstring = new CounterString().create(howlong, separator);
    document.getElementById("status").innerText="Generated - Ready to Copy";
}

The above doesn’t work.

The second declaration overrides the first. A call to generateCounterString() is actually a call to generateCounterString(undefined).

And my code doesn’t handle that.
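Java, by contrast, resolves overloaded methods by signature at compile time, so both declarations coexist happily. A small illustrative sketch (hypothetical names, not the app’s code):

```java
public class OverloadDemo {

    // the no-argument version supplies a default and delegates
    static String generate() {
        return generate(10);
    }

    // the compiler picks this version when an int argument is supplied
    static String generate(int howlong) {
        return "length:" + howlong;
    }

    public static void main(String[] args) {
        System.out.println(generate());   // length:10
        System.out.println(generate(25)); // length:25
    }
}
```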

I default to ‘default’ argument processing code like this:

howlong = typeof howlong !== 'undefined' ? 
            howlong : Number(document.getElementById('lengthOfCounterString').value);

Clearly it is better to have a set of tests for the code when you migrate code to a different language. I didn’t have that.

  • but the Java code was well covered with unit tests
  • the functionality is pretty self contained with only one or two entry methods, so it was easy to test interactively

Organised code is easier to migrate.

  • I amended my Java code prior to migration
  • refactored the code
  • simplified code into new classes
  • added more JUnit tests on the Java code
  • simplified the algorithms

Things that bit:

  • polymorphic methods
    • different languages have different coding styles so we have to learn them
  • lack of types
    • I had to use methods to enforce integer arithmetic in JavaScript
    • JavaScript arrays are so flexible I didn’t need to use any List code
Why Migrate

Migrating code can help learn a new language.

  • you can compare the working code for the same example
  • you have to learn nuances to get simple code working
  • you concentrate on the ‘differences’ which helps learn new language features

TLDR: Spark is static, so starting it in an @BeforeClass method allows HTTP request testing to begin.

I use Spark as the embedded web server in my applications. I also run simple HTTP tests against this as part of my local maven build. And I start Spark within the JUnit tests themselves. In this post I’ll show how.

We all know that there are good reasons for not running integration tests during our TDD Red/Green/Refactor process. We also know that we can run subsets of tests during this process and avoid any integration tests. And hopefully we recognise that expedient fast automated integration verification can be useful.

What is Spark?

Spark is a small embedded web server for Java that is easy to add to your project with a single Maven dependency.

I use it for my test applications.

Spark is easy to configure within code:
get("/games/", (req, res) -> {res.redirect("/games/buggygames/index.html"); return "";});

And it will look in a resource directory for the files:

staticFiles.location("/public");

And it is easy to change the port (by default 4567):

port(1234);

And I can do fairly complicated routings if I want to, for all the HTTP verbs.

    get(ApiEndPoint.HEARTBEAT.getPath(),
                (request, response) -> {
                    return api.getHeartbeat(
                        new SparkApiRequest(request),
                        new SparkApiResponse(response)).getBody();
                });

    options(ApiEndPoint.HEARTBEAT.getPath(),
                (request, response) -> { 
                    response.header("Allow", "GET"); 
                    return "";});

    path(ApiEndPoint.HEARTBEAT.getPath(), () -> {
        before("", (request, response) -> {
            // e.g. check the incoming new SparkApiRequest(request) before routing
        });
    });

I tend to use abstraction layers so I have:

  • Classes to handle Spark routing
  • Application Classes to handle functionality e.g. api
  • Domain objects to bridge between domains e.g. SparkApiRequest represents the details of an HTTP request without having Spark bleed through into my application.
Running it for Testing

It is very easy, when using Spark, to simply call the main method to start the server and run HTTP requests against it.

String [] args = {};
Main.main(args); // Main is whatever class holds your app's main method

Once Spark is running, because it is all statically accessed the server stays running while our @Test methods are running.

I’m more likely to start my Spark using the specific Spark abstraction I have for my app:

    public void startServer() {
        server = new RestServer("");
    }

We just have to make sure we don’t keep trying to start it again, so I use a polling mechanism to do that.

Because this is fairly common code now, I have an abstraction called SparkStarter which I use.

This has a simple polling start mechanism:

public void startSparkAppIfNotRunning(int expectedPort){

    sparkport = expectedPort;

    try {
        if(!isRunning()) {
            startServer();
            waitForServerToRun();
        }
    }catch(IllegalStateException e){
        // Spark was already started, ask it which port it is on
        sparkport = Spark.port();
    }catch(Exception e){
        System.out.println("Warning: could not get actual Spark port");
    }
}

And the wait is:

private void waitForServerToRun() {
    int tries = 10;
    while(tries>0) {
        if(isRunning()){
            return;
        }
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e1) {
            e1.printStackTrace();
        }
        tries--;
    }
}

These methods are on an abstract class so I create a specific ‘starter’ for my application that knows how to:

  • check if it is running
  • start the server
    public boolean isRunning(){
        try{
            HttpURLConnection con = (HttpURLConnection)
                        new URL("http",host, sparkport, heartBeatPath).
                        openConnection();
            return con.getResponseCode()==200;
        }catch(Exception e){
            return false;
        }
    }

    public void startServer() {
        server = CompendiumDevAppsForSpark.runLocally(expectedPort);
    }

You can see an example of this in CompendiumAppsAndGamesSparkStarter.java

And in the JUnit code:

    @BeforeClass
    public static void ensureAppIsRunning(){
        CompendiumAppsAndGamesSparkStarter.
                get("localhost", "/heartbeat" ).
                startSparkAppIfNotRunning(4567); // 4567 is the default Spark port
    }

e.g. PageRoutingsExistForAppsTest.java

You can find examples of this throughout my TestingApp

Because it is static, this will stay running across all my tests.

Pretty simple and I find it very useful for the simple projects that I am working on.

Bonus Video

“Spark Java Embedded WebServer And Testing Overview”


Spark is a simple embedded Java WebServer. I can also spin it up during JUnit tests to make my testing easy.

In this video I show:

  • An overview of Spark Java Embedded Web Server
  • How to use it during JUnit execution
  • Abstraction code separating Spark from my Application
Spark Java Embedded WebServer And Testing Overview - YouTube

TLDR: when I have a small set of HTTP use-cases, and I’m working on fast in-build HTTP integration verification then I’ll probably use HttpURLConnection

I do receive a question fairly often like:

  • “Why would you ever use basic HTTP libraries rather than Rest-Assured?”
  • “When would you choose to use basic HTTP libraries instead of Rest-Assured?”

And other variants.

I’ll try to answer that in this post.

It isn’t an easy answer since I’ll know it when I see it.

But… since I’ve just implemented a bunch of @Test methods for a test app that I’m writing and they use the HttpURLConnection I will explain my reasoning for that.

Case Study 1 - Fast, Light, Local HTTP Verification

Most of my test apps use Spark as their embedded Web Server.

  • easy to use
  • fairly small
  • easy to start up for testing purposes
Locally Running Server

And by easy to start up for testing purposes I mean easy.

    @BeforeClass
    public static void ensureAppIsRunning(){
        CompendiumAppsAndGamesSparkStarter.
                get("localhost", "/heartbeat" ).
                startSparkAppIfNotRunning(4567); // 4567 is the default Spark port
    }

Clearly the above is an abstraction layer, and I’ll explain that in a future post, but I can have a single line of code in an @BeforeClass method that will quickly startup my app, listening on a port ready for me to send HTTP requests as local integration tests.

I put these in a separate package from my super fast unit tests, but these usually run fast enough that I don’t do anything special to separate them out in my build process.

Send HTTP messages

Because my test app is running locally as part of my JUnit execution, I want any @Test methods to run quickly and have minimal maintenance.

I decided to use the built-in Java HttpURLConnection rather than an external library because:


  • no additional dependencies required
  • stable API
  • minimal changes between Java versions

I recently had to change my HttpURLConnection wrapper abstractions to cope with Java 1.9 reflection changes, but that was because I had essentially hacked the HttpURLConnection to gain access to the raw message sent. Otherwise I haven’t really had to change my code very often.

With RestAssured (or any third party HTTP library), I have to be very conscious of the version I use, and I might find I have to update my project and change my code just to keep the version of Java advancing.

Using HttpURLConnection keeps my life as a programmer simple.

Why not use the Java 1.9 HTTP2 library?

And over the years I’ve built up abstraction code that helps me and that I move between projects. Code that I will eventually put into a support library, at some point, probably just when the Java 10 HTTP/2 library is released, thereby making my abstraction code instantly redundant.

I think the HTTP/2 library is still in [incubator mode](https://dzone.com/articles/an-introduction-to-http2-support-in-java-9) and will probably change for Java 10, so at the moment it feels like a third party library which would add stress into my code, rather than make things easy.

Prefer built in libraries

I try to use built-in libraries where possible to reduce the size of my project and the amount of stuff I have to learn that might not be transferable to other work.

The built-in HttpURLConnection isn’t perfect, but with a quick abstraction layer it works well in situations where you do not require a lot of flexibility and edge case processing. For my testing I will mainly issue GET and POST requests, and won’t really do much error processing.

How do you use HttpURLConnection?

I will have an @Test that looks a bit like this:

@Test
public void canAccessiframeSearch(){
    http = new BasicHttp("http","localhost", 4567);
    Assert.assertTrue(http.isPageAt("/some/page.html")); // path is illustrative
}

And you can see I’ve hidden HttpURLConnection behind a BasicHttp abstraction, so if I do want to do anything more complicated I can change that abstraction layer to delegate off to a third party library instead of using the HttpUrlConnection. And my @Test code won’t have to change.

So, what’s in the box at isPageAt?

Make an HTTP request

isPageAt basically makes an HTTP request

HttpURLConnection con = (HttpURLConnection)new URL(protocol,host, port, path).openConnection();
int status = con.getResponseCode();
if(status == 200){
    return true;
}
return false;

For a simple ping test, HttpURLConnection gives me what I need without any third party libraries and it works fast.

Running on localhost as a simple embedded server means I don’t have any complicated connections.

HttpURLConnection works well here.
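To make the ping check concrete, here is a self-contained sketch; it uses the JDK’s built-in com.sun.net.httpserver.HttpServer as a stand-in for the locally running app, and the class and route names are illustrative:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class PingCheck {

    // true if a GET to protocol://host:port/path answers 200
    public static boolean isPageAt(String protocol, String host, int port, String path) {
        try {
            HttpURLConnection con = (HttpURLConnection)
                    new URL(protocol, host, port, path).openConnection();
            return con.getResponseCode() == 200;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) throws Exception {
        // stand-in for the app: a JDK-built-in server with a heartbeat route
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/heartbeat", exchange -> {
            exchange.sendResponseHeaders(200, -1); // 200, no body
            exchange.close();
        });
        server.start();
        int port = server.getAddress().getPort();

        System.out.println(isPageAt("http", "localhost", port, "/heartbeat")); // true
        System.out.println(isPageAt("http", "localhost", port, "/missing"));   // false
        server.stop(0);
    }
}
```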

Get the Body

I do sometimes want to go beyond ping and get the body of the response. And HttpURLConnection does make you work a little harder at this point to get the response body.

    private String getResponseBody(HttpURLConnection con) {
        BufferedReader in=null;

        // https://stackoverflow.com/questions/24707506/httpurlconnection-how-to-read-payload-of-400-response
        try {
            in = new BufferedReader(
                    new InputStreamReader(con.getInputStream()));
        }catch(Exception e){
            // handle 400 exception messages
            InputStream stream = con.getErrorStream();
            if(stream!=null) {
                in = new BufferedReader(
                        new InputStreamReader(stream));
            }
        }

        String inputLine;
        StringBuffer responseBody = new StringBuffer();

        try {
            if(in!=null) {
                while ((inputLine = in.readLine()) != null) {
                    responseBody.append(inputLine);
                }
                in.close();
            }
        }catch(IOException e){
            e.printStackTrace();
        }

        return responseBody.toString();
    }

The above code:

  • reads the input stream
  • if that fails, gets the error stream
  • reads whichever one it was into a StringBuffer
  • then converts that to a String and returns it

Bit of a pain, but having copied and pasted the code from Stack Overflow into your own abstraction layers you never really have to worry about it again.
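As an end-to-end sketch of the pattern, the following self-contained example serves a 400 response with a body via the JDK’s built-in HttpServer and reads it back through the error stream (names are illustrative):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.*;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class BodyReader {

    // read the response body, falling back to the error stream for 4xx/5xx
    public static String getResponseBody(HttpURLConnection con) {
        BufferedReader in = null;
        try {
            in = new BufferedReader(new InputStreamReader(con.getInputStream()));
        } catch (Exception e) {
            InputStream stream = con.getErrorStream(); // e.g. body of a 400 response
            if (stream != null) {
                in = new BufferedReader(new InputStreamReader(stream));
            }
        }
        StringBuilder body = new StringBuilder();
        try {
            if (in != null) {
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                in.close();
            }
        } catch (IOException e) {
            // ignore for this sketch
        }
        return body.toString();
    }

    public static void main(String[] args) throws Exception {
        // stand-in server that answers 400 with a body
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/bad", ex -> {
            byte[] msg = "no such thing".getBytes();
            ex.sendResponseHeaders(400, msg.length);
            ex.getResponseBody().write(msg);
            ex.close();
        });
        server.start();
        HttpURLConnection con = (HttpURLConnection) new URL(
                "http", "localhost", server.getAddress().getPort(), "/bad").openConnection();
        System.out.println(getResponseBody(con)); // no such thing
        server.stop(0);
    }
}
```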

And so it goes on

HttpURLConnection can handle proxy connections, PUT, POST, custom headers etc. So your abstraction layer for your use case grows to cover what you need it to do.

You only have to write code to cover the situations you test for.

If you find that your abstraction layer is starting to balloon out of control and you are maintaining it more, then it is probably time to consider bringing in a third party abstraction like RestAssured to minimise your workload.


I tend to use HttpURLConnection by default, instead of a third party library like RestAssured, when:

  • running fast integration tests
  • constrained use case for the HTTP requests
  • want to keep dependencies to a minimum
  • happy to write abstraction code around HTTP requests (but not too much)
  • want to minimise impact of Java version upgrades

Also note that the @Test methods here are strategic in that they are a long-term automated execution approach. But they are also part of the early ‘build’ process.

When they are part of my ‘integration’ test process I’ll probably use RestAssured as the default choice because I will be expecting a less constrained set of use cases for the HTTP requests and I’ll probably be coding in a different project than the main code project so dependency management will be less of an issue.


You can find the code I’ve referred to here on Github if you want to trawl through the project:

And you can find a slightly bigger set of HttpURLConnection abstractions here; at some point I’ll pull these out into their own library.

Free Bonus Explainer Video
Why Java basic HTTP libraries HttpURLConnection instead of RestAssured? - YouTube

TLDR: refactored to isolate XML processing, configured XStream in code, removed all annotations, added XML header, wrote less code

I have a small REST API application which uses Spark and GSON and JAXB. I haven’t released this to Github yet but I did release some of the example externally executed [integration verification code](https://github.com/eviltester/rest-listicator-automating-examples) for it.

When trying to package this for Java 1.9 I encountered the, now standard, ‘missing JAXB libraries’ problem. So I thought I’d investigate another XML library.


I used JAXB because I didn’t need to add any additional dependencies into Maven and it was bundled by default in Java 1.8 and earlier. Now that the new module system is in place, JAXB has to be included via dependencies and this ballooned my deployable jar file from 2.9 meg to 4.3 meg.

That might not seem much but I grew up with 48K and that privation leaves a permanent ‘this seems too big’ mentality.

JAXB is a big powerful set of code; I’m only using it to serialise and deserialise a few objects.

So in reality, when I have to bundle it with my app, it is too big.

I had to add the following dependencies to my pom.xml to have JAXB functioning with Java 1.9:
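The pom.xml snippet itself isn’t shown above; the four dependencies commonly added for JAXB on Java 9 look something like this (group ids and versions are indicative, check the latest):

```xml
<dependency>
    <groupId>javax.xml.bind</groupId>
    <artifactId>jaxb-api</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>com.sun.xml.bind</groupId>
    <artifactId>jaxb-core</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>com.sun.xml.bind</groupId>
    <artifactId>jaxb-impl</artifactId>
    <version>2.3.0</version>
</dependency>
<dependency>
    <groupId>javax.activation</groupId>
    <artifactId>activation</artifactId>
    <version>1.1.1</version>
</dependency>
```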


A quick look around and I thought I’d try XStream.

  • XStream does not require any annotations - which is good because I have a mild aversion to annotations and this might help me remove the annotations in my payload objects
  • XStream seems smaller as a packaged .jar
  • XStream requires a single dependency; to continue with JAXB I had to add four.
Not Ready Yet

My application is one that I use for training and teaching REST API testing.


  • it sometimes has some hacky code
  • it hasn’t been fully refactored
  • JAXB was used in quite a few places

The first thing I had to do was refactor the code to isolate the XML processing as much as possible.

I can’t do much about the JAXB annotations, since they are on the payload classes. But I can refactor out the marshalling and unmarshalling code into a single “Payload XML Parser” object.

Fortunately I do have quite a lot of tests - far more than in the external integration test pack.

Part of the reason I use Spark is that I find it easy to spin up an instance of the app which will listen on an HTTP port so I can have HTTP tests running as part of the main project very easily.

Isolate the JAXB code

I want to isolate code like this:

try {
    JAXBContext context = JAXBContext.newInstance(ListOfListicatorListsPayload.class);
    Unmarshaller m = context.createUnmarshaller();
    ListOfListicatorListsPayload lists = (ListOfListicatorListsPayload)
                                            m.unmarshal(new StringReader(payload));
    newLists = new PayloadConvertor().transformFromPayLoadObject(lists);

} catch (Exception e) {
    e.printStackTrace();
}
Ignore the throwing away of exceptions - this is a ‘training’ app. I’m allowed to take shortcuts and have verbose console output.

And push it into something like MyXMLProcessor so that I’m creating an abstraction layer on top of the XML Payload processing, which will make it easier to test out different XML libraries if I don’t like XStream.

And so this was a simple ‘refactor to method’ approach. Which led to code like this:

try {
    newLists = new MyXmlProcessor().getListOfListicatorLists(payload);
} catch (Exception e) {
    e.printStackTrace();
}

The code behind getListOfListicatorLists at this point is exactly the same as the code that was in my PayloadConvertor.

I simply used the Find in Path to find any of the following imports, and moved the code that required the import into MyXmlProcessor as a method:

import javax.xml.bind.JAXBContext;
import javax.xml.bind.JAXBException;
import javax.xml.bind.Unmarshaller;
import java.io.StringReader;
import javax.xml.bind.Marshaller;
Migrate method by method

I then migrated each method in turn from JAXB to XStream.

I had to do the following in response to failing tests and exceptions during the conversion.

Add an XML Header when marshalling

When XStream Marshalls an Object to XML it doesn’t return an XML header.

So I added a quick hack rather than see if there was a configuration option in XStream. It seemed easier.

String header = "<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"yes\"?>";
return header + xstream.toXML(payload);
Configure in Code Rather than Annotate

I much prefer configuring in code rather than XML files or Annotations.

With XStream this meant:

public MyXmlProcessor(){
     xstream = new XStream();
     xstream.alias("lists", ListOfListicatorListsPayload.class);
     xstream.addImplicitCollection(ListOfListicatorListsPayload.class, "lists");
     xstream.alias("list", ListicatorListPayload.class);
}

The alias takes the place of the JAXB annotations.


When a class name is different from the XML element name, I need an alias instead of an annotation:

e.g. in JAXB, an annotation is required to map the payload class to its XML element:

@XmlRootElement(name = "list")
public class ListicatorListPayload {
    public String guid;
    public String title;
}

In XStream, I don’t annotate the class, I configure the XStream processor:

xstream.alias("user", UserPayload.class);

I prefer this second approach since it should allow me to handle any special cases more easily and it is a single place to configure everything.

The addImplicitCollection is to handle element to collection mapping.

@XmlRootElement(name = "lists")
public class ListOfListicatorListsPayload {
    public List<ListicatorListPayload> lists = new ArrayList<>();
}

For the above, in addition to the .alias calls to map classes to elements, I also had to map the collection to the element.

xstream.addImplicitCollection(ListOfListicatorListsPayload.class, "lists");

This happened by default in JAXB, but it seems a simple enough change for XStream.

Once all the fields were aliased, and I was using XStream to convert to and from XML, I could remove all the JAXB annotations.

XML Formatting

XStream provides nicely formatted XML. JAXB provides everything on a single line with no new lines.

This has a side-effect that it makes the API easier to work with as a training tool because the messages are formatted when viewed in a proxy without any additional parsing.

But it meant I had to change some of my assertions.

To assert on the XML generated by XStream I de-prettified it by removing all white space in the response String:

String bodyWithNoWhiteSpace = response.body.replaceAll("\\s", "");
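A tiny sketch of that de-prettifying step (note it also strips whitespace inside text nodes, which is fine for these assertions):

```java
public class DePrettify {
    public static void main(String[] args) {
        // pretty-printed XML, as XStream might emit it
        String pretty = "<list>\n  <guid>1</guid>\n  <title>a</title>\n</list>";
        // remove all whitespace before asserting on the structure
        String flat = pretty.replaceAll("\\s", "");
        System.out.println(flat); // <list><guid>1</guid><title>a</title></list>
    }
}
```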
XStream vs JAXB

I consider the code for XStream much smaller and easier to read.

UserPayload user = (UserPayload) xstream.fromXML(payload);


Compare with the JAXB equivalent:

 try {
     JAXBContext context = JAXBContext.newInstance(UserPayload.class);
     Unmarshaller m = context.createUnmarshaller();
     return (UserPayload)m.unmarshal(new StringReader(payload));
 } catch (JAXBException e) {
     e.printStackTrace();
 }

And the more general form:

    public <T> T simplePayloadStringConvertorConvert(String body, Class<T> theClass) {
        try {
            return (T)xstream.fromXML(body);
        }catch(Exception e){
            e.printStackTrace();
        }

        return null;
    }


And the JAXB version:

    public <T> T simplePayloadStringConvertorConvert(String body, Class<T> theClass) {
        try {
            JAXBContext context = JAXBContext.newInstance(theClass);
            Unmarshaller m = context.createUnmarshaller();
            return (T)m.unmarshal(new StringReader(body));
        } catch (JAXBException e) {
            e.printStackTrace();
        }

        return null;
    }

I still have to configure the XStream security before I release. And I will eventually have to remove some reflection warnings.

  • This forced me to refactor my code - which is always a good idea - and allowed me to isolate the XML conversion.
  • The packaged jar was 4.3 meg with JAXB and dropped to 3.5 meg with XStream
    • Originally it was 2.9 meg when packaged against Java 1.8
  • XStream also seems faster
Summary of Process
  • Have tests in place to support refactoring
  • Isolate existing XML code
  • Read XStream 2 minute tutorial
  • Add XStream maven dependency
  • Convert code step by step - while running the tests
  • Done
Final Notes

I have a very small app, with minor usage of XML.

I will be converting my other XML processing apps, which also use minimal XML processing to use XStream because it was so much easier to understand and use.

Your mileage may vary.

