Follow On Test Automation - Bas Dijkstra on Feedspot

Many of you probably have not noticed, but last weekend I deactivated my Twitter account. Here’s why.

I’ve been active on Twitter for around four years. In that time, it has proven to be a valuable tool for keeping up with industry trends, for staying in touch with people I know from conferences and other events, and for following a few sources of information outside testing and IT (mainly related to running or music).

Over time, though, the value I got from Twitter was slowly getting undone by two things in particular:

  1. The urge to check my feed and notifications far too often.
  2. The amount of negativity and bickering going on.

I started to notice that because of these two reasons, I became ever more restless, distracted and downright anxious, and while Twitter may not have been the only reason for that, it was definitely a large contributor.

I couldn’t write, code or create courseware for more than 10 minutes without checking my feed. My brain was often too fried in the evening to undertake anything apart from mindless TV watching, playing the odd mobile game or even more social media consumption. Not a state I want to be in, and definitely not an example I want to set for my children.

So, at first, I decided to take a Twitter break. I removed the app from my phone, I blocked access to the site on my laptop and activated Screen Time on my phone to make accessing the mobile Twitter site more of a hassle. And it worked. Up to a point.

My mind became more clear, I became less anxious. But it still felt like it wasn’t enough. There was still the anxiety, and I still kept taking the extra steps needed to check my Twitter feed on my phone. And that’s when I thought long and hard about what value I was getting from being on Twitter in the first place. After a while, I came to the realization that it simply was too little to warrant the restlessness, and that the only reasonable thing to do was to pull the plug on my account.

So that’s what I did. And it feels great.

I’m sure I’ll still stay in touch with most people in the field. Business-wise, LinkedIn was and is a much more important source of leads and gigs anyway. There are myriad other ways to keep up with new developments in test automation (blogs, podcasts, …). And yes, I may hear about some things a little later than I would have through Twitter, and I may not even hear about some of them at all. But still, leaving Twitter has so far turned out to be a major net positive.

I’ve got some big projects coming up in the next year or so, and I’m sure I’ll be able to do more and better work without the constant distraction and anxiety that Twitter gave me in recent times.

So, what’s up in the future? Lots! First of all, more training: this month is the first in which I’ve hit my revenue target purely through in-company training, and I hope there are many more to come. I’ve got a couple of conference gigs coming up as well, most notably at this point my keynote and workshop at the Agile & Automation Days in Gdańsk, Poland, as well as a couple of local speaking and workshop gigs. And I’m negotiating another big project as well, one that I hope to share more information about in a couple of months’ time.

Oh, and since I’m not getting any younger, I’ve set myself a mildly ambitious running-related goal as well, and I’m sure that the added headspace will contribute to me keeping focused and determined to achieve it. I’ll gladly take not being able to brag about it on Twitter in case I make it.

One immediate benefit of not being on Twitter anymore is that I seem to be able to read more. Just yesterday, I finished ‘Digital Minimalism’ by Cal Newport, and while this book wasn’t the reason for my account deactivation, it surely was the right read at the right moment!

So far, most of the blog posts I’ve written that covered specific tools were focused on either Java or C#. Recently, though, I got a request for test automation training for a group of data science engineers, with the explicit requirement to use Python-based tools for the examples and exercises.

Since then, I’ve been slowly expanding my reading and learning to also include the Python ecosystem, and I’ve also included a couple of Python-based test automation courses in my training offerings. So far, I’m pretty impressed. There are plenty of powerful test tools available for Python, and in this post, I’d like to take a closer look at one of them, Tavern.

Tavern is an API testing framework that runs on top of pytest, one of the most popular Python unit testing frameworks. It offers a range of features for writing and running API tests, and if there’s something you can’t do out of the box, Tavern claims to be easily extensible through Python and pytest hooks and features. I can’t vouch for its extensibility yet, though, since everything I’ve done with Tavern so far was possible out of the box. Tavern also has good documentation, which is a definite plus.

Installing Tavern on your machine is easiest through pip, the Python package installer and manager, using the following command:

pip install -U tavern

Tests in Tavern are written in YAML files. Now, you either love it or hate it, but it works. To get started, let’s write a test that retrieves location data for the US zip code 90210 from the Zippopotam.us API and checks whether the response HTTP status code is equal to 200. This is what that looks like in Tavern:

test_name: Get location for US zip code 90210 and check response status code

stages:
  - name: Check that HTTP status code equals 200
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      status_code: 200

As I said, Tavern runs on top of pytest. So, to run this test, we save it in a YAML file whose name matches Tavern’s required test_*.tavern.yaml pattern and invoke pytest with that file as an argument. When we do, pytest reports that the test passes.

Another thing you might be interested in is checking values for specific response headers. Let’s check that the response content type is equal to ‘application/json’, telling the API consumer that they need to interpret the response as JSON:

test_name: Get location for US zip code 90210 and check response content type

stages:
  - name: Check that content type equals application/json
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      headers:
        content-type: application/json

Of course, you can also perform checks on the response body. Here’s an example that checks that the place name associated with the aforementioned US zip code 90210 is equal to ‘Beverly Hills’:

test_name: Get location for US zip code 90210 and check response body content

stages:
  - name: Check that place name equals Beverly Hills
    request:
      url: http://api.zippopotam.us/us/90210
      method: GET
    response:
      body:
        places:
          - place name: Beverly Hills

Since APIs are all about data, you might want to repeat the same test more than once, but with different values for input parameters and expected outputs (i.e., do data driven testing). Tavern supports this too by exposing the pytest parametrize marker:

test_name: Check place name for multiple combinations of country code and zip code

marks:
  - parametrize:
      key:
        - country_code
        - zip_code
        - place_name
      vals:
        - [us, 12345, Schenectady]
        - [ca, B2A, North Sydney South Central]
        - [nl, 3825, Vathorst]

stages:
  - name: Verify place name in response body
    request:
      url: http://api.zippopotam.us/{country_code}/{zip_code}
      method: GET
    response:
      body:
        places:
          - place name: "{place_name}"

Even though we specified only a single test with a single stage, because we used the parametrize marker and supplied the test with three test data records, pytest effectively runs three tests, one for each record (similar to what @DataProvider does in TestNG for Java, for example).
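Under the hood, Tavern delegates this to pytest’s own parametrization mechanism. As a rough sketch of what that mechanism looks like in plain pytest (the lookup table below is a stand-in I made up for the live API, reusing the same three records):

```python
import pytest

# Stand-in for the Zippopotam.us API: maps (country_code, zip_code) to a place
# name, using the same three records as the Tavern test above.
PLACES = {
    ("us", "12345"): "Schenectady",
    ("ca", "B2A"): "North Sydney South Central",
    ("nl", "3825"): "Vathorst",
}

# One logical test, three generated test cases: pytest runs the function once
# per tuple in the list, just like Tavern does with the parametrize mark.
@pytest.mark.parametrize(
    "country_code, zip_code, place_name",
    [
        ("us", "12345", "Schenectady"),
        ("ca", "B2A", "North Sydney South Central"),
        ("nl", "3825", "Vathorst"),
    ],
)
def test_place_name(country_code, zip_code, place_name):
    # A real test would perform an HTTP GET against the API here instead
    # of a dictionary lookup
    assert PLACES[(country_code, zip_code)] == place_name
```

Running pytest against a file containing this code would report three tests, one per data record.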

So far, we have only performed GET operations to retrieve data from an API provider, so we did not need to specify any request body contents. When, as an API consumer, you want to send data to an API provider, for example when you perform a POST or a PUT operation, you can do that like this using Tavern:

test_name: Check response status code for a very simple addition API

stages:
  - name: Verify that status code equals 200 when two integers are specified
    request:
      url: http://localhost:5000/add
      json:
        first_number: 5
        second_number: 6
      method: POST
    response:
      status_code: 200

This test will POST a JSON document

{"first_number": 5, "second_number": 6}

to the API provider running on localhost port 5000. Please note that for obvious reasons this test will fail when you run it yourself, unless you built an API or a mock that behaves in a way that makes the test pass (great exercise, different subject …).
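For readers who want to try it anyway, here is a minimal sketch of such a mock using only Python’s standard library. The /add endpoint and the two field names come from the test above; the JSON response body is my own assumption, since the Tavern test only checks the status code:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class AddHandler(BaseHTTPRequestHandler):
    """Minimal mock of the addition API used in the Tavern POST test above."""

    def do_POST(self):
        if self.path != "/add":
            self.send_response(404)
            self.end_headers()
            return
        # Read and parse the JSON request body
        length = int(self.headers.get("Content-Length", 0))
        data = json.loads(self.rfile.read(length))
        # Respond with HTTP 200 and the sum of the two numbers (the response
        # body format is an assumption; the Tavern test only checks the status)
        body = json.dumps({"sum": data["first_number"] + data["second_number"]}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, format, *args):
        pass  # keep the console output clean while tests run

def serve(port=5000):
    # Start the mock on localhost; run this before executing the Tavern test
    HTTPServer(("localhost", port), AddHandler).serve_forever()
```

Starting serve() in a separate terminal (or thread) before invoking pytest should make the POST test above pass.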

So, that’s it for a quick introduction to Tavern. I quite like the tool for its straightforwardness. What I’m still wondering is whether working with YAML will lead to maintainability and readability issues when you’re working with larger test suites and larger request or response bodies. I’ll keep working with Tavern in my training courses for now, so a follow-up blog post might see the light of day in a while!

All examples can be found on this GitHub page.

Do you want to learn more about APIs and how to test them? Have you been looking for a comprehensive course that teaches you everything there is to know about testing APIs and testing software systems at the API level? If so, you might want to read on!

Many API testing courses out there focus on a specific tool or technique that you can leverage to improve your API testing efforts. While those courses definitely have their use, I feel there’s much more to it if you really want to become well-versed in testing APIs and testing systems at the API level.

That’s the reason I created a brand new, three-day masterclass that will teach you, among other things, about:

  • APIs and their role in modern software systems
  • Why to test APIs, and why to test at the API level
  • What to look for when testing APIs
  • Exploring APIs for testing purposes
  • Using tools for API test automation
  • API performance and security testing
  • API specifications and contract testing
  • Mocking APIs for testing purposes

A much more detailed description of this API testing masterclass, including a day-to-day breakdown of course contents and learning goals, can be found on the course page.

Public training 3-5 April 2019 in the Netherlands
As with all of my other training courses, this API testing masterclass will be available as an in-company training. However, I will also be delivering this masterclass at a public training event on 3-5 April, 2019. This training will take place in or near the city of Utrecht in the Netherlands, at a location that is easily accessible both by car and by public transport. More details about the location will follow soon.

Early Bird registration: To celebrate the launch of this course, I am happy to offer a reduced Early Bird registration fee for the public training event on April 3-5. If you register for the April 3-5 masterclass before March 1st of 2019, you’ll get it for just € 1095 excl. VAT, instead of the regular price of € 1495 excl. VAT. That is a price reduction of 27%.

Need to convince your manager?
Please send them this course flyer, highlighting all of the benefits and providing a summary of the training course, all neatly on a single page. And don’t forget to bring the Early Bird discount to their attention!

I’m looking forward to many successful deliveries of this API testing masterclass, and I hope to see you all at one of them.

Some words of thanks
I owe a big thank you to Maria Kedemo of Black Koi Consulting for putting the idea for this masterclass in my head at exactly the right time, as well as for discussing it further and for reviewing the content outline as it can be found on the course page.

I am also very grateful for the content outline review and valuable comments made by Elizabeth Zagroba, Angie Jones and Joe Colantonio.

Wow, another year has flown by! And what an amazing year it has been. Now that the end of the year is coming ever closer, I’d like to look back a little on this last year and look forward to what 2019 might have in store for me.

The freelance life
2018 was my first full year freelancing under the On Test Automation label. As I’ve said in previous posts, it fits me like a glove. What I’ve been especially grateful for this year is that being a freelancer has given me the freedom to choose whatever I want to spend my time on, without having to get permission from anybody else. It has also allowed me to be there for my family whenever it’s been needed, without having to deal with sick days or annual leave budgets.

Needless to say that I’ll continue to work as a freelancer in 2019!

Client work
I’ve done consultancy on a per-hour billing basis for four different clients this year, sometimes as part of a software development team, sometimes in an advisory role. I’ve noticed that the latter suits me far better, so that’s what I’ll try and keep doing in 2019. These roles are a little harder to come by, and they’re often not even publicly advertised, so I’ll have to make sure that people know where to find me in case they’re looking.

I’m happy to say that I’ll be starting a new project that sounds like a perfect fit in early January with a brand new client, where I’ll advise and support several development teams with regard to their test automation efforts for 2 days per week. I’m really looking forward to that.

Training
2018 has been the year where I finally started to increase my efforts to land more training gigs. Delivering training is what I like doing best, and I hope that in 2019 I will be able to reap what I have been sowing this year. In 2018, I delivered 17 training sessions (ranging from 2 hours to a full day) for 8 different clients. I am most proud of the two times I’ve been asked to deliver training abroad, allowing me to do one day of training in the UK (Manchester) and one day in Romania (Cluj).

For 2019, I hope to at least double the number of training sessions delivered; my ultimate goal is to deliver an average of 2 days of training per week (with the rest spent on consulting work, writing, and other things). To get to that number, I’ve started collaborating with a few training providers this year, and I hope that this pays off in 2019. I am also launching a brand new training course on January 7, one that I’ve got high hopes for, so hopefully I’ll be delivering that one a couple of times too, besides my existing training offerings.

Speaking gigs
This year has been a relatively quiet year on the speaking front. That’s fine with me, because even though I am starting to like speaking more and more, I like doing training and workshops even better, so that’s where my focus has been. Still, I have done five talks this year. Three of them in the Netherlands: at the TestNet spring conference, at a company meetup and the one I am most proud of: my very first keynote talk at the Dutch Testing Day. I’ve also delivered two talks abroad: one at the atamVIE meetup in Vienna, Austria, and one at the Romanian Testing Conference.

I would like to do another couple of talks next year, because I’m slowly learning to become a better speaker and I would love to expand on that. I have one talk scheduled so far, none other than my very first international keynote at the UKStar conference in London, UK in March. I am really, really looking forward to that one!

Conferences
Speaking of conferences, it has been a relatively quiet year on that front as well. I think I’ve attended five conferences this year, four in the Netherlands (TestNet 2x, TestBash NL and the Test Automation Day) and one abroad (the Romanian Testing Conference). At all of these conferences, I’ve been a contributor, either with a talk or with a workshop (or, in the case of RTC, both).

Next year, I would love to attend more conferences, and not necessarily as a contributor each and every time. Also, I’d like to expand my horizon and attend one or two conferences outside of the testing community. Two conferences are in my agenda already, UKStar and TestBash Netherlands, where I’ll be delivering a brand new workshop.

Writing
I’ve been relatively inactive on the writing front this year, too. I’ve published 7 articles (5 in English, 2 in Dutch) on several websites, as well as 10 blog posts on this site, including this one. Next year, I’m planning to pick up the pen more often again, both for other web sites as well as for my own blog. It will be a matter of consciously making more time for it, as that has been lacking a bit this year.

Webinars
Finally, I’ve also done four webinars this year, and I’m planning on doing a couple more of them next year. The organisers that had to suffer from my ramblings this year were Beaufort Fairmont, Parasoft, TestCraft and CrossBrowserTesting.

So, all in all, it has been a very diverse year! Which is a good thing, but also a trap I’ve been falling in. My attention has been divided over so many different things that those that I think are really important to me (training, writing) have suffered a little. That’s a lesson I’ll definitely take with me into next year.

But first, it’s time to relax a little. We’ll see each other again in 2019. I hope that it’s going to be an amazing year for all of you.

Since my last blog post that involved creating tests at the API level in C#, I’ve kept looking around for a library that would fit all my needs in that area. So far, I still haven’t found anything more suitable than RestSharp. Also, I’ve found out that RestSharp is more versatile than I initially thought it was, and that’s the reason I thought it would be a good idea to dedicate a blog post specifically to this tool.

The examples I show you in this blog post use the Zippopotam.us API, a publicly accessible API that resolves a combination of a country code and a zip code to related location data. For example, when you send an HTTP GET call to

http://api.zippopotam.us/us/90210

(where ‘us’ is a path parameter representing a country code, and ‘90210’ is a path parameter representing a zip code), you’ll receive this JSON document as a response:

{
	"post code": "90210",
	"country": "United States",
	"country abbreviation": "US",
	"places": [
		{
			"place name": "Beverly Hills",
			"longitude": "-118.4065",
			"state": "California",
			"state abbreviation": "CA",
			"latitude": "34.0901"
		}
	]
}

Some really basic checks
RestSharp is available as a NuGet package, which makes it really easy to add to your C# project. So, what does an API test written using RestSharp look like? Let’s say that I want to check whether the previously mentioned HTTP GET call to http://api.zippopotam.us/us/90210 returns an HTTP status code 200 OK, this is what that looks like:

[Test]
public void StatusCodeTest()
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest("us/90210", Method.GET);

    // act
    IRestResponse response = client.Execute(request);

    // assert
    Assert.That(response.StatusCode, Is.EqualTo(HttpStatusCode.OK));
}

If I wanted to check that the content type specified in the API response header is equal to “application/json”, I could do that like this:

[Test]
public void ContentTypeTest()
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest("nl/3825", Method.GET);

    // act
    IRestResponse response = client.Execute(request);

    // assert
    Assert.That(response.ContentType, Is.EqualTo("application/json"));
}

Creating data driven tests
As you can see, creating these basic checks is quite straightforward with RestSharp. Since APIs are all about sending and receiving data, it would be good to be able to make these tests data driven. NUnit supports data driven testing through the TestCase attribute, and using that together with passing the parameters to the test method is really all that it takes to create a data driven test:

[TestCase("nl", "3825", HttpStatusCode.OK, TestName = "Check status code for NL zip code 3825")]
[TestCase("lv", "1050", HttpStatusCode.NotFound, TestName = "Check status code for LV zip code 1050")]
public void StatusCodeTest(string countryCode, string zipCode, HttpStatusCode expectedHttpStatusCode)
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest($"{countryCode}/{zipCode}", Method.GET);

    // act
    IRestResponse response = client.Execute(request);

    // assert
    Assert.That(response.StatusCode, Is.EqualTo(expectedHttpStatusCode));
}

When you run the test method above, you’ll see that it will run two tests: one that checks that the NL zip code 3825 returns HTTP 200 OK, and one that checks that the Latvian zip code 1050 returns HTTP 404 Not Found (Latvian zip codes are not yet available in the Zippopotam.us API). In case you ever wanted to add a third test case, all you need to do is add another TestCase attribute with the required parameters and you’re set.

Working with response bodies
So far, we’ve only written assertions on the HTTP status code and the content type header value for the response. But what if we wanted to perform assertions on the contents of the response body?

Technically, we could parse the JSON response and navigate through the response document tree directly, but that would result in hard-to-read and hard-to-maintain code (see this earlier post for an example, where I convert a specific part of the response to a JArray after navigating to it and then count the number of elements in it). Since you’re working with dynamic objects, you also don’t have the added luxury of autocomplete, because there’s no way your IDE knows the structure of the JSON document you expect in a test.

Instead, I highly prefer deserializing JSON responses to actual objects, or POCOs (Plain Old C# Objects) in this case. The JSON response you’ve seen earlier in this blog post can be represented by the following LocationResponse class:

public class LocationResponse
{
    [JsonProperty("post code")]
    public string PostCode { get; set; }
    [JsonProperty("country")]
    public string Country { get; set; }
    [JsonProperty("country abbreviation")]
    public string CountryAbbreviation { get; set; }
    [JsonProperty("places")]
    public List<Place> Places { get; set; }
}

and the Place class inside looks like this:

public class Place
{
    [JsonProperty("place name")]
    public string PlaceName { get; set; }
    [JsonProperty("longitude")]
    public string Longitude { get; set; }
    [JsonProperty("state")]
    public string State { get; set; }
    [JsonProperty("state abbreviation")]
    public string StateAbbreviation { get; set; }
    [JsonProperty("latitude")]
    public string Latitude { get; set; }
}

Using the JsonProperty attribute allows me to map POCO fields to JSON document elements without names having to match exactly, which in this case is especially useful since some of the element names contain spaces, which are impossible to use in POCO field names.

Now that we have modeled our API response as a C# class, we can convert an actual response to an instance of that class using the deserializer that’s built into RestSharp. After doing so, we can refer to the contents of the response by accessing the fields of the object, which makes for far easier test creation and maintenance:

[Test]
public void CountryAbbreviationSerializationTest()
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest("us/90210", Method.GET);

    // act
    IRestResponse response = client.Execute(request);

    LocationResponse locationResponse =
        new JsonDeserializer().
        Deserialize<LocationResponse>(response);

    // assert
    Assert.That(locationResponse.CountryAbbreviation, Is.EqualTo("US"));
}

[Test]
public void StateSerializationTest()
{
    // arrange
    RestClient client = new RestClient("http://api.zippopotam.us");
    RestRequest request = new RestRequest("us/12345", Method.GET);

    // act
    IRestResponse response = client.Execute(request);
    LocationResponse locationResponse =
        new JsonDeserializer().
        Deserialize<LocationResponse>(response);

    // assert
    Assert.That(locationResponse.Places[0].State, Is.EqualTo("New York"));
}

So, it looks like I’ll be sticking with RestSharp for a while when it comes to my basic C# API testing needs. That is, until I’ve found a better alternative.

All the code that I’ve included in this blog post is available on my GitHub page. Feel free to clone this project and run it on your own machine to see if RestSharp fits your API testing needs, too.

Note: this is an updated version of an earlier post I wrote in May of last year. Since then, my understanding of Continuous Testing and of what it takes for automation to be a successful and valuable part of any Continuous Testing effort has changed slightly, so I thought it would be a good idea to review and republish that post.

Test automation is everywhere, nowadays. That’s probably nothing new to you.

A lot of organizations are adopting Continuous Integration and Continuous Delivery as a means of being able to develop and deliver software in ever shorter increments. Also nothing new.

To be able to effectively implement CI/CD, a lot of organizations are relying on their automated tests to help safeguard quality thresholds while increasing release speed. Again, no breaking news here.

However, automation in and by itself isn’t enough to safeguard quality in CI and CD. You’ll need to be able to do Continuous Testing (CT). Here’s how I define Continuous Testing, a definition greatly influenced by others that have been talking and writing about CT for a while:

Continuous Testing is the process that allows you to get valuable insights into the business risks associated with delivering application increments following a CI/CD approach. No matter if you’re building and deploying once a month or once a minute, CT allows you to formulate an answer to the question ‘are we happy with the level of value that this increment provides to our business / stakeholders / end users? ‘ for every increment that’s being pushed and deployed in a CI/CD approach.

It won’t come as a surprise to you that automated tests often form a significant part of an organization’s CT strategy. However, just having automated tests is not enough to be able to support CT. Apart from the fact that automation can only do so much (a topic I’ve discussed in several other blogs and articles), not every bit of automation is equally suitable to be used in a CT strategy. But how do you decide whether or not your automation can be used as part of your CT efforts? And when they can’t, what do you need to take care of to improve them?

In order to be able to leverage your automated tests successfully for supporting CT, I’ve come up with a model based on four pillars that need to be in place for all automated checks before they can become part of your CT process: your checks need to be Focused, Informative, Trustworthy and Repeatable (FITR).

Let’s take a quick look at each of these FITR pillars and why they are necessary when including your automation in CT.

Focused
Automated tests need to be focused to effectively support CT. ‘Focused’ has two dimensions here.

First of all, your tests should be targeted at the right application component and/or layer. It does not make sense to use a user interface-driven test to test application logic that’s exposed through an API (and subsequently presented through the user interface), for example. Similarly, it does not make sense to write API-level tests that validate the inner workings of a calculation algorithm if unit tests can provide the same level of coverage.

The second aspect of focused automated tests is that your tests should only test what they can test effectively. This boils down to sticking to what your test solution and the tools in it do best, and leaving the rest either to other tools or to testers, depending on what’s there to be tested. Don’t try and force your tool to do things it isn’t supposed to do (here’s an example).

If your tests are unfocused, they are far more likely to be slow to run, to have high maintenance costs and to provide inaccurate or shallow feedback on application quality.

Informative
Touching upon shallow or inaccurate feedback, automated tests also need to be informative to effectively support CT. ‘Informative’ also has two separate dimensions.

Most importantly, the results produced and the feedback provided by your automated tests should allow you, or the system that’s doing the interpretation for you (such as an automated build tool), to make important decisions based on that feedback. Make sure that the test results and reporting provided contain clear results, information and error messages, targeted towards the intended audience and addressing business-related risks. Keep in mind that every audience has its own requirements when it comes to this information. Developers likely want to see stack traces, whereas managers don’t. Find out who the target audience for your reporting and test results is, what their requirements are, and then cater to them as best as you can. This might mean creating more than one report (or source of information in general) for a single test run. That’s OK.

Another important aspect of informative automated tests is that it should be clear what they do (and what they don’t do), and what business risk they address. You can make the tests themselves more informative in various ways, including (but not limited to) using naming conventions, using a BDD tool such as Cucumber or SpecFlow to create living documentation for your tests, and following good programming practices to make your code more readable and maintainable.

When automated test solutions and the results they produce are not informative, valuable time is wasted analyzing shallow feedback, or gathering missing information, which evidently breaks the ‘continuous’ part of CT.

Trustworthy
When you’re relying on your automated tests to make important decisions in your CT activities, you’d better make sure they’re trustworthy. As I described in more detail in previous posts, automated tests that cannot be trusted are essentially worthless. Make sure to eliminate false positives (tests that report a failure when they shouldn’t), but also false negatives (tests that report no failure when they should).

Repeatable
The essential idea behind CT (referring to the definition I gave at the beginning of this blog post) is that you’re able to give insight into application quality and business risks on demand, which means you should be able to run your automation on demand. Especially when you’re including API-level and end-to-end tests, this is often not as easy as it sounds.

There are two main factors that can hinder the repeatability of your tests:

  • Test data. This is in my opinion one of the hardest ones to get right, especially when talking end-to-end tests. Lots of applications I see and work with have complex data models or share test data with other systems. And if you’re especially lucky, you’ll get both. A solid test data strategy should be put in place to do CT, meaning that you’ll either have to create fresh test data at the start of every test run or have the ability to restore test data before every test run. Unfortunately, both options can be quite time consuming (if at all attainable and manageable), drawing you further away from the ‘C’ in CT instead of bringing you closer to it.
  • Test environments. If your application communicates with other components, applications or systems (and pretty much all of them do nowadays), you’ll need suitable test environments for each of these dependencies. This is also easier said than done. One possible way to deal with this is by using a form of simulation, such as mocking or service virtualization. Mocks or virtual assets are under your full control, allowing you to speed up your testing efforts, or even enable them in the first place. Use simulation carefully, though, since it’s yet another moving part of your CT solution to be managed and maintained, and make sure to test against the real thing periodically for optimal results.
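As a small sketch of the simulation idea (the ExchangeRateClient-style dependency and its get_rate call are invented for this example), Python’s unittest.mock can stand in for a dependency that would otherwise make a test slow, unstable or unrepeatable:

```python
# Sketch of simulating a dependency so tests are repeatable; the
# rate client and its API are invented for illustration purposes.
from unittest.mock import Mock

class PriceConverter:
    def __init__(self, rate_client):
        self.rate_client = rate_client

    def to_euros(self, dollars):
        # Delegates to an external rate service in production.
        return round(dollars * self.rate_client.get_rate("USD", "EUR"), 2)

def test_conversion_uses_current_rate():
    # The real (shared, possibly unavailable) rate service is replaced
    # by a mock that is fully under our control.
    fake_client = Mock()
    fake_client.get_rate.return_value = 0.9
    converter = PriceConverter(fake_client)
    assert converter.to_euros(100) == 90.0
    fake_client.get_rate.assert_called_once_with("USD", "EUR")
```

The trade-off mentioned above still applies: the mock encodes an assumption about the real service’s behaviour, so it needs to be kept in sync with reality, ideally by periodically testing against the real thing.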

Having the above four pillars in place does not guarantee that you’ll be able to perform your testing as continuously as your CI/CD process requires, but it will likely give it a solid push in the right direction.


Recently (as in, over the last year and a half or so) I regularly receive questions about providing online training in addition to my in-house, in-person training offerings. Until now, I put those requests on the back burner as I was of the opinion that teaching online (either live or through prerecorded video instructions) would never be a replacement for ‘live’ training.

And then something struck me: why would it have to be an exact replacement? Why not just try it, see how it goes, learn from it and see if it’s a suitable way to conduct training?

So, when I got in touch with a test consultancy firm in the UK that was looking for training for their employees, I decided to give it a try. After some discussion, we agreed that I would deliver the first day of training in house (meaning: in Manchester), while the following modules would be delivered online, saving me a couple of trips back and forth and cutting down on overhead costs for airfare and hotels. And so it was done.

Note: I am aware that having met the students in person before delivering training online to them is a big plus. However, I believe that the lessons and the pros and cons I talk about in this blog post apply equally when you’ve never met the students in real life.

So, what did I learn in the process? Let’s see.

Preparation
I could write a whole separate article about how to properly prepare for a technical training course or workshop. In fact, I’ll be doing just that in the near future, in an article that will be published on another platform.

I won’t go into too many details here, but by far the most important thing to do when you’re about to conduct training online is to make sure that the participants are ready from the start. My preferred way of doing this is by sending detailed preparation instructions (a step-by-step guide, screenshots and all) to them at least a whole week in advance, so the participants have some time to set up their device. Additionally, I make myself available for questions and troubleshooting in case something goes wrong.

I was afraid to do this in the beginning, fearing I’d be overwhelmed with questions, but it turns out that’s not the case. For all the workshops I’ve given in the last few years, I’ve only had a couple (as in: three or four) people asking for help. That doesn’t mean that everybody else is ready to go when the workshop or training starts, but that’s a whole different kettle of fish…

The reason this is extra important when delivering training online is that you cannot just walk over to the participants and look over their shoulder to see what’s going wrong. You can do screen sharing, of course, but that’s not as efficient as taking over the controls for a bit.

So, long story short, overdo it on the preparation instructions. Be very clear in them and make sure they’re unambiguous. Have them tested by somebody else if you’re not sure everything’s clear (heck, do this even if you ARE sure).

Organization
With regards to how the training days should be organized, here are some key lessons I’ve learned from the two days of training I’ve hosted so far:

  • Group size: Where I can take around 12 people for a class that involves programming when they’re in the same room, I am glad I had only 4 participants for my online training. I think I can handle up to 6 people, but no more. Keeping track of how everybody is doing takes more effort when you see them through a webcam only, and there will probably be more questions (also because participants can’t help each other out), so it’s only fair to limit the number of attendees to make sure everybody gets the attention and the answers they deserve.
  • Type of course: Live online training works well for hands-on automation training, but probably much less so (for obvious reasons) for training courses that involve a lot of group work, discussions and presentations. I wouldn’t even know how to facilitate that online…
  • Location and connection: Make sure the participants (and you yourself as well) are in a room with good lighting and that their webcam is on, because reading facial expressions will tell you a lot about their level of engagement. Also make sure they’re in a location with a good Internet connection. Videoconferencing takes bandwidth, yet you want both video and audio to be of the highest possible quality to make sure the participants can hear and see you well.

Engagement
The hardest part about delivering training online is keeping your audience engaged. Taking training is hard enough on your energy levels when you’re in the same room as the trainer; looking at a webcam and listening to somebody who’s potentially very far away is orders of magnitude harder. Here are some tips that might help you (they worked for me!):

  • Ask the participants how they’re doing often, to the point of being annoying. Don’t lose them, don’t give them a chance to start drifting off. Make sure they are awake and engaged. In the pre-course instructions, point out that they should be well rested, and that taking a training course online is even more demanding than ‘live’ training, for both parties.
  • Consider shortening the training days (for example, teach for 6 hours instead of 8 for a day of training). Chances are high that they won’t take in anything in those last two hours anyway, simply because their energy levels are too low. Additionally, take breaks often. Even just a five-minute break for a leg stretch or a bathroom visit can help keep energy levels up. I took breaks every hour, which definitely wasn’t overdoing it.
  • Involve them. Instead of just broadcasting information all day, ask them lots of questions. When you’re doing programming exercises, ask them to share their screen and talk the rest of the audience through their solution and thinking process. Again, this helps keep them engaged. Don’t let them fall asleep!

Pros and cons
As I said earlier, online training isn’t a replacement for in-person training, at least not on a 1-on-1 basis. It’s a whole different ball game. Both have their pros and cons. Some of the benefits of delivering training online for me are:

  • It allows me to work from home. Big plus. I like driving my car, but I hate wasting time commuting. With online training, I can teach from the comfort of my own home.
  • My potential client base is many times larger. I am quite limited in the amount of travel I can do in a year, and the Netherlands is a small country, which means my client base isn’t all that large. With the possibilities of online training, though, I can deliver my courses to the entire world, potentially. Added bonus: meeting and talking to people from other countries and cultures, plus it does wonders for my English.

Sure, there are some downsides as well:

  • Not being able to walk up to people and see how they’re doing. I do this a lot when teaching in-person, but that’s not an option when online. Even with a webcam, people can hide behind their screen easier and pretend all is well. Their loss, of course, but I take pride in keeping everybody engaged.
  • It isn’t suited for every course I offer. I do more and more courses where people work in groups and have discussions, and as I said earlier, that’s not really an option when teaching online.

Having said all of this, I will definitely start offering live online training more often in the future, probably starting after the summer holidays. It’s definitely a valuable addition to the services I can offer. If you’re interested in taking one of my online courses, keep an eye out on this site for future announcements.

Oh, and in case you were wondering: so far I didn’t need any dedicated virtual classroom software to conduct the training. I used the pro version of appear.in, which requires no software to be installed at all on the client side, is a breeze to work with, allows everybody to share their screen effortlessly, has a chat to share links and stuff, basically everything I need.


Note: in my observation, scripted test execution and the type of regression test scripts I’m referring to are slowly going away, but a lot of organizations I work with still use them. Not every organization is full of testers working in a context-driven and exploratory way while applying CI/CD and releasing multiple times per day. If you’re working in one, that’s fine. This blog post probably is not for you. But please keep in mind that there are still many organizations that apply a more traditional, script-based approach to testing.

In the last couple of months, I’ve been talking regularly about some of the failures I’ve made (repeatedly!) during my career so far. My talk at the Romanian Testing Conference, for example, kicked off with me confessing that in retrospect, a lot of the work I’ve done until all too recently has been, well, inefficient at best, and plain worthless in other cases. Only slowly am I now learning what automation really is about, and how to apply it in a more useful and effective manner than the ‘just throw more tools at it’ approach I’ve been supporting for too long.

Today, I’d like to show you another example of things that, in hindsight, I should have been doing better for longer.

One of my stock answers to the question ‘Where should we start when we’re starting with automation?’ would be to ‘automate your existing regression tests first’. This makes sense, right? Regression tests are often performed at the end of a delivery cycle to check whether existing functionality aspects have not been impacted negatively as a result of new features that were added to the product. These tests are often tedious – new stuff is exciting to test, while existing features are so last Tuesday – and often take a long time to perform, and time is the one thing there often isn’t much of at the end of a delivery cycle. So, automating away those regression tests is a good thing. Right?

Well, maybe. But maybe not so much.

To be honest, I don’t think ‘start with automating your regression tests’ is a very good answer anymore, if it ever was (again, hindsight is 20/20…). It can be a decent answer in some situations, but I can think of a lot of situations where it might not be. Why not? Well, for two reasons.

Regression scripts are too long
The typical regression test scripts I’ve seen are looong. As in, dozens of steps with various checkpoints along the way. That’s all well and good if a human is performing them, but when they are turned into an automated script verbatim, things tend to fall apart easily.

For example, humans are very good at finding a workaround if the application under test behaves slightly differently than is described in the script. So, say you have a 50-step regression script (which is not uncommon), and at step 10 the application does something similar to what is expected, but not precisely the same. In this case, a tester can easily make a note, find a possible way around and move on to collect information regarding the remaining steps.

Automation, on the other hand, simply says ‘f*ck you’ and exits with a failure or exception, leaving you with no feedback at all about the behaviour to be verified in steps 11 through 50.

So, to make automation more efficient by reducing the risk of early failure, the regression scripts need to be rewritten and shortened, most of the time by breaking them up into smaller, independently executable sections. This takes time and eats away the intended increase in speed expected from the introduction of automation. And on top of that, it may also frustrate people unfamiliar with testing and automation, because instead of 100 scripts, you now have to automate 300. Or 400. And that sounds like more work!
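The difference can be sketched with an invented four-step scenario (the step functions below are placeholders, with one deliberately failing step standing in for that slightly-off application behaviour):

```python
# Invented sketch: a monolithic 'script' where one early failure hides
# all later feedback, versus independent sections that each report on their own.

def step_login():        return True
def step_search():       return False   # imagine this step fails
def step_add_to_cart():  return True
def step_checkout():     return True

STEPS = [("login", step_login), ("search", step_search),
         ("add_to_cart", step_add_to_cart), ("checkout", step_checkout)]

# Monolithic: execution stops at the first failing step, so the
# remaining steps produce no feedback at all.
def run_monolithic():
    results = []
    for name, step in STEPS:
        if not step():
            results.append((name, "FAIL"))
            return results  # early exit: later steps go unreported
        results.append((name, "PASS"))
    return results

# Independent: every section runs regardless, so one failure does not
# cost you the feedback from the others.
def run_independent():
    return [(name, "PASS" if step() else "FAIL") for name, step in STEPS]
```

The monolithic run reports on only two of the four steps; the independent version still gives you feedback on everything after the failure, which is exactly what the human tester with their workaround would have collected.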

Regression scripts are written from an end user perspective
The other problem with translating regression scripts verbatim is that these scripts are often written from an end user perspective, operating on the user interface of the application under test. Again, that’s all well and good when you’re a human, but for automation it might not be the most effective way to gain information about the quality of your application under test. User interface-driven automation is notoriously hard to write and maintain, hard to stabilize, slow to execute and relatively prone to false positives.

Here too, in order to translate your existing regression scripts into effective and efficient automation, you’ll need to take a thorough look at what exactly is verified through those scripts, find out where the associated behaviour or logic is implemented, find or develop a way to communicate with your application under test on that layer (possibly the user interface, more likely an API, a single class or method or maybe even a database table or two) and take it from there.

Sure, this is a valuable exercise that will likely result in more efficient and stable automation, but it’s a step that’s easy to overlook when you’re given a batch of regression scripts with the sole requirement to ‘automate them all’. And, again, it sounds like more work, which not everybody may like to hear.

So, what to do instead?

My advice: forget about automating your regression tests.

There. I’ve said it.

Instead, ask yourself the following three questions with regards to your testing efforts:

  1. What’s consuming my testing time?
  2. What part of my testing efforts are repetitive?
  3. What part of my testing efforts can be repeated or enhanced by a script?

The answer(s) to these questions may (vaguely) resemble what you do during your regression testing, but they might also uncover other, much more valuable ways to apply automation to your testing. If so, would it still make sense to aim for ‘automating the regression testing’? I think not.

So, start writing your automation with the above questions in mind, and keep repeating to yourself and those around you that automation is there to make your and their life easier, to enable you and them to do your work more effectively. It’s not just there to be applied everywhere, and definitely not to blindly automate an existing regression test suite.


Choices. We all make them tens of times each day. Peanut butter or cheese (cheese for me, most of the time). Jeans or slacks (jeans, definitely). Coffee or tea (decent coffee with a glass of water on the side please). And when you’re working on or learning about automation, there’s a multitude of choices you also can (and sometimes have to) make. A lot of these choices, as I see people discussing and making them, are flawed in my opinion, though. Some of them are even false dichotomies. Let’s take a look at the choices people think they need to make, and how there are other options available. Options that might lead to better results, and to being better at your job.

Do I need to learn Java or .NET? Selenium or UFT?
Creating automation often involves writing code. So, the ability to write code is definitely a valuable one. However, getting hung up on a specific programming language might limit your options as you’re trying to get ahead.

I still see many people asking what programming language they need to learn when they’re starting out or advancing in their career. If you’d ask me, the answer is ‘it doesn’t really matter’. With the abundance in tools, languages, libraries and frameworks that are available to software development teams nowadays, chances are high that your next gig will require using a different language than your current one.

As an example, I recently started a new project. So far, in most of my projects I’ve written automation in either Java or .NET. Not in this one, though. In the couple of weeks I’ve been here, I’ve created automation using PHP, Go and JavaScript. And you know what? It wasn’t that hard. Why? Because I’ve made a habit of learning how to program and of studying principles of object oriented programming instead of learning the ins and outs of a specific programming language. Those specifics can be found everywhere on Google and StackOverflow.

The same goes for automation tools. I started writing UI-level automation using TestPartner. Then QuickTest Pro (now UFT). I’ve used Selenium in a few projects. I’ve dabbled with Cypress. Now, I’m using Codecept. It doesn’t matter. The principles behind these tools are much the same: you identify objects on a screen, then you interact with them. You need to take care of waiting strategies. If you become proficient in these strategies, which tool you’re using doesn’t matter that much anymore. I’ve stopped chasing the ‘tool du jour’, because there will always be a new one to learn. The principles have been the same for decades, though. What do you think would be a better strategy to improve yourself?
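One of those shared principles, the explicit wait, can be sketched without committing to any particular tool (the poll loop below is tool-agnostic and invented for illustration; real tools ship their own equivalents, such as Selenium’s WebDriverWait):

```python
# Tool-agnostic sketch of the waiting strategy shared by most UI
# automation tools: poll a condition instead of sleeping a fixed time.
import time

def wait_until(condition, timeout=5.0, poll_interval=0.1):
    """Poll a condition until it returns a truthy value or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within {} seconds".format(timeout))

# Simulated page state standing in for 'is the element on the screen yet?'
state = {"loaded": False}

def element_present():
    return state["loaded"]
```

Whether the tool is Selenium, Cypress or Codecept, the underlying idea is the same; once you understand it, picking up the next tool’s syntax for it is the easy part.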

Identify and learn to apply common principles and patterns, don’t get hung up on a single tool or language. Choose both/and, not either/or.

Do I stay a manual tester or become an automation engineer?
Another one of the choices I see people struggling with often is the one between staying a ‘manual tester’ (a term that I prefer not to use, for all the reasons Michael Bolton gives in this blog post of his) and becoming an automation engineer. If you’d ask me, this is a perfect example of a flawed choice in the testing field. It’s not a matter of either/or. It’s a matter of both/and.

Automation supports software testing, it does not replace it. If you want to become more proficient in automation, you need to become more proficient in testing, too. I’ve only fairly recently realized this myself, by the way. For years, all I did was automation, automation, automation, without thinking whether my efforts actually supported the testing that was being done. I’ve learned since that if you don’t know what testing looks like (hint: it’s much more than clicking buttons and following scripts), then you’ll have a pretty hard time effectively supporting those activities with automation.

Don’t abandon one type of role for the other one, especially when there’s so much overlap between them. Choose both/and, not either/or.

Do I learn to write tests against the user interface, or can I better focus on APIs?
So, I’ve been writing a lot about the benefits of writing tests at the API level, not only on this blog, but also in numerous talks and training courses. When I do so, I am often quite critical about the way too many people apply user interface-driven automation. And there IS a lot of room for improvement there, definitely. That does not mean I’m saying you should abandon this type of automation altogether, just that you should be very careful when deciding where to apply it.

Like in the previous examples, it is not a matter of either/or. For example, consider something as simple and ubiquitous as a login screen (or any other type of form in an application). When deciding on the approach for writing tests for it, it’s not a simple choice between tests at the UI level or tests at the API level; rather, it depends on what you’re testing. Writing a test that checks whether an end user sees the login form and all its associated elements in their browser? Whether the user can interact with the form? Whether the data entered by the user is sent to the associated API correctly? Or whether the form looks like it’s supposed to? Those are tests that should be carried out at the UI level. Checking whether the data provided by the user is processed correctly? Whether incorrectly formatted data is handled in the appropriate manner? Whether the right level of access is granted to the user upon entering a specific combination of username and password? Those tests might target a level below the UI. Many thanks, by the way, to Richard Bradshaw for mentioning this example somewhere on Slack. I owe you one more beer.
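A minimal, invented sketch of that second category (the grant_access_level function and its user store are made up for this example): the access-granting logic behind a login form can be exercised directly, below the UI, without a browser in sight:

```python
# Invented sketch: the credential-handling logic behind a login form,
# tested directly below the UI instead of through the browser.

def grant_access_level(username, password, users):
    """Return the access level for a username/password pair, or None."""
    user = users.get(username)
    if user is None or user["password"] != password:
        return None
    return user["role"]

def test_valid_credentials_grant_configured_role():
    users = {"alice": {"password": "s3cret", "role": "admin"}}
    assert grant_access_level("alice", "s3cret", users) == "admin"

def test_wrong_password_grants_no_access():
    users = {"alice": {"password": "s3cret", "role": "admin"}}
    assert grant_access_level("alice", "wrong", users) is None
```

Tests like these run in milliseconds and don’t care what the form looks like; the UI-level tests can then focus on what only they can verify.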

Being able to make the right decision on the level and scope to write a test at requires knowing the benefits and drawbacks of each approach, as well as the possibilities the alternatives offer. It also requires the ability to recognize and apply principles and patterns to make the best possible decision.

Again, identify and learn to apply common principles and patterns, don’t get hung up on a single tool or language. Choose both/and, not either/or.

The point I’ve been trying to make with the examples above is that, like with so many things in life, being the best possible automation engineer isn’t a matter of choosing A over B. Of being able to do X or Y. What, in my opinion, will make you much better in your role is being able to do, or at least understand, A and B, X and Y. Then, extract their commonalities (these will often take the form of the previously mentioned principles and patterns) and learn how to apply them. Study them. Learn more about them. Fail at applying them, and then learn from that.

I’m convinced that this is a much better approach to sustainable career development than running after the latest tool or hype and becoming a self-proclaimed expert at it, only to have to make a radical shift every couple of years (or even months, sometimes).

Don’t become a one trick pony. Choose both/and, not either/or.
