I was asked a question over email, and I’m paraphrasing the essence as, “given our project issues what tool can we use to automatically pull out all this data and automate the app to let us test it properly?”.

Unfortunately most of the time, when I’m asked this, the answer isn’t what people want to hear.

What they want to hear is “use tool X”.

What I often have to tell them is… “no tool will save you, I would do these things instead”.

When can tools help?

Tools can help quickly when you are using standard technology and you need to interact with it in an obvious way e.g.

  • query a database you have access to
  • send HTTP requests to an API
  • compare files
  • check links on a web site
  • etc.

A tool works quickly and easily when used for well defined tasks, implemented by out of the box functionality, on standard technology, used as you would expect.

When don’t tools help?

Tools cannot help as well when:

  • we need to access the database but the development team won’t give us access
  • we want to see the data in the application but don’t know the database
  • we want to see the data but it is stored in a custom format
  • we’ve been told there is an API that the GUI uses but it isn’t documented
  • we want to scrape data off the GUI because we can’t get access to the storage
  • we want to know the versions of files installed on the server but they won’t give us access to the server
  • etc.

None of the above are really technology problems. They are attempts to bypass social dynamic issues.

Tooling as workarounds

Sometimes we look for technology solutions to help us bypass people problems because we have work to do.

I’ve certainly done that, but it requires knowledge and experience prior to finding a tool. And… really it’s hacking as a replacement for effective managing.

e.g.

  • Because the external vendors would not give me access to their tool's database to create custom reports and analysis, I accessed the database using MS Access and the default admin password, which I found in a document on a network share that I had scanned with some custom scripts looking for documentation that might contain the information I needed.
  • Because the external vendors would not give me the password to open the Excel spreadsheets containing the formulas behind the reports they were using, and which I was sure were calculating the information they passed on to us incorrectly, I bypassed the password using OpenOffice to read the data and the code used to calculate it.
  • Because the external vendors would not explain their unit testing approach or show us the internals of the application so we could assess architectural risks or API access, I used decompilation tools to gain visibility into the application, review the code, and make a risk assessment.
  • etc.

Note that I rephrased these so that the actual problem was at the front.

Because of …some social interaction and communication issues… I used my existing technical knowledge to use tools to bypass the situation and get the information I needed.

And…

  • This could have backfired and made the social interaction and communication issues worse. These were not my first approaches to solving the issue, and I was working on the social interaction and communication issues in parallel to this ‘workaround’ activity.
  • I had to have the knowledge of what to do prior to using tooling. I had a choice of tooling because I understood the technical details of the problem I was working with.
  • This would not work with all the problems I’ve faced. When I haven’t been given access to production environments, I didn’t use tools to hack into production to gain access. There is a legal and ethical dimension associated with tooling workarounds.

It would have been far easier if we had managed to solve the communication and social interaction issues.

But what if there really was a tool?

Let’s say that you don’t have the technical knowledge to hack your way around the social interaction.

If there really was a tool, and I told you what it was, you would now have additional activities and risks:

  • you need to learn how to use the tool
  • you need to learn the technology of the tool and system to understand the results
  • you may create a new maintenance problem
  • you may need to find a budget

You now have to spend time learning to incorporate the tool into your process. Chances are that you need the tool because of time limitations and were hoping that the tool can speed up your process, but, it just slowed you down.

Without the technical knowledge to fully understand the tool, how it works, and its capabilities, you might not identify the tool's technical limitations and may report information back as false positives. This frequently happens when incorporating new scanning technology into a project, e.g. security scanning software: without the knowledge to assess and interpret the results, we raise issues which are not actually issues.

If we don’t pick a tool carefully, and don’t use it effectively, then we might create a maintenance issue for ourselves that may not give us issues immediately, but later, after we have come to rely on it.

Without a budget we often only look at free and open source tools, which may not have the polished interface we need to get started quickly, or the support that could offset our lack of experience.

Tools require knowledge and experience so that we understand them, what they do, how they do it, and what alternatives we can use if the tooling itself starts to fail us.

Feature Requests

Many of the questions I am asked about over email don’t require tooling as the solution.

Some of them could require new features in the application under test to support the testing.

e.g.

  • additional reports
  • data exports
  • an admin interface to control data
  • etc.

Additional reports can often remove the need for someone to access the database or file system directly. These can often prove useful in a live environment as well, since often the information we need during testing will also be required for reconciliation in live.

Data exports can often remove the need for scraping data from the GUI. This can often prove useful in a live environment to support ad hoc requests and reporting later.

And an admin interface to control data can help configure the system into controlled states and limit the amount of data in the environment to make the testing more controllable.

But all of this requires fairly good communication and social interaction otherwise there may be no willingness or motivation to implement the features.

Tooling Isn’t The Only Solution

When tooling is the solution, the right tooling can be a marvellous addition to your process.

Tooling isn’t the only solution.

Tooling can introduce risks, so we have to be careful when we adopt tools.

One risk of tooling is that it can make the social and communication issues worse.

Tooling often seems easier to introduce, but I'm not sure I've ever seen tooling act as an effective replacement for the resolution of social and communication issues. The resolution of those issues, however, is often outside the realm of influence of the people asking for help, so they seek a technical tooling solution in the hope that one exists.

Social and communication issues can be one of the hardest things to resolve, but when resolved they lead to a better work environment and an improved long term situation.

I am often brought on site as a consultant to help with technical issues. And when I’m onsite, I also help solve problems longer term by working on the social and communication issues. You can find more about my consultancy work at eviltester.com/consultancy

Alan Richardson works as a Software Development consultant helping teams improve their development approach. Alan has performed proof of concepts for clients for automated execution, and has worked with teams to help them implement fast and effective experiments to improve their process. Alan consults, performs keynotes and tutorials at conferences, and blogs at EvilTester.com. Alan is the author of four books, and trains people worldwide in Software Testing and Programming. You can contact Alan via LinkedIn or via his consultancy web site compendiumdev.co.uk and testing web site eviltester.com.

The formal API documentation specification ecosystems have tools that can help create documentation more easily and use that documentation for automatic validation, e.g. Dredd, Swagger and Stoplight.

I’m working on an API for The Pulper, and I want to write documentation in a formal spec and use tools to validate it. Sounds Easy.

The Pulper is a simple CRUD app that is part of my TestingApp, a collection of applications that act as sample apps to practice testing on.

A version without the API is online on Heroku if you want to play with it.

I’m adding an API so that it can support API exploration and so I can hook the GUI to the API backend with JavaScript and allow a whole new class of emergent bugs to appear.

One of the things that makes The Pulper slightly different is that in the Admin drop down menu, you will find that there are multiple versions of the application that you can switch between. These offer slightly different functionality, potentially different bugs, and a different GUI experience if you are automating.

Once I’ve added an API I can have different versions of that, with different bugs etc.

Documenting and Testing

The issue with the API is that I wanted to do it a little better than my REST Listicator test app, which you can also play with online, or download as part of the TestingApp.

The documentation for this is hand crafted - which is good in that it allows errors to creep in, which have to be tested for, but isn’t the easiest thing to read to understand the API.

I suspect version 1 of The Pulper API might have hand written documentation for this reason.

Standard Documentation Formats

There are standard documentation formats for APIs. The two most popular seem to be:

  • Swagger’s OpenAPI
  • API Blueprint

You can find information about OpenAPI at

And the API Blueprint at https://apiblueprint.org/

Tools to convert between the various formats seem to exist so I don’t think it matters which one you start with.
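
To give a flavour of what a formal spec looks like, here is a minimal sketch of a Swagger 2.0 fragment for The Pulper's books endpoint. The path and summary match the Dredd transaction names shown later in this post; the rest (title, version, response descriptions) is assumed for illustration:

    {
      "swagger": "2.0",
      "info": { "title": "The Pulper API", "version": "1.0.0" },
      "paths": {
        "/apps/pulp/api/books": {
          "post": {
            "summary": "Create or amend a single or multiple books",
            "consumes": ["application/json"],
            "produces": ["application/json"],
            "responses": {
              "200": { "description": "Books amended" },
              "201": { "description": "Books created" }
            }
          }
        }
      }
    }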

Testing Documentation

Documentation forms one of the inputs into our testing models.

  • Does the stuff in the documentation exist?
  • Can we do what the documentation says?
  • Does the system look and operate like the documentation?
  • etc.

Formal documentation formats offer the possibility of tooling to help out.

And the tooling ecosystem around the API formats offers the tantalising prospect of being able to automatically test the API from the formal specification.

Testing Interprets Documentation

Tooling can help, but mostly it helps 'validate' the requests and responses against the spec, rather than test it.

I haven’t explored the tool space enough yet to see how far they can go.

The first tool I looked at was Dredd

Dredd

https://dredd.org/en/latest/

Out of the box, Dredd can take an API Blueprint Spec or a Swagger spec:

  • lint it to check that the spec is a valid format
  • issue all 2xx status code associated requests

Issuing all 2xx status code requests isn’t quite as helpful as it seems since it tries to issue POST requests to receive a 201, but does so without the data so you get a failing test. If you write the schema files well then Dredd may pick up examples in the spec but I haven’t experimented with this.

But I found it quite useful to see, out of the box with no configuration:

  • the list of requests issued
  • some of the requests actually passing
  • some valid errors where the API didn't match the spec

I think it adds value out of the box.

Dredd Hooks

Dredd has hooks to allow scripting, and I experimented with that to add payload bodies into requests and to skip any response codes I don't want to see failing. That worked well.

To find out the hook transaction names you use the --names command line parameter

 dredd swagger.json http://localhost:4567 --names

I added a simple hooks.js for using Dredd. This:

  • adds a payload for my POST books to create an item and trigger a 201 status.
  • skips a transaction I haven’t coded for yet

    var hooks = require('hooks');

    // add a request body so that the POST can create a book and trigger a 201
    hooks.before('/apps/pulp/api/books > Create or amend a single or multiple books > 201 > application/json', (transaction) => {
        transaction.request.body = JSON.stringify({
            "books": [
                {
                    "title": "The Land of Little People Terror",
                    "publicationYear": 1980,
                    "seriesId": "Issue 1",
                    "authors": [
                        {
                            "id": "4"
                        }
                    ],
                    "series": {
                        "id": "1"
                    },
                    "publisher": {
                        "id": "1"
                    }
                }
            ]
        });
    });

    // skip the 200 transaction because I haven't coded for it yet
    hooks.before('/apps/pulp/api/books > Create or amend a single or multiple books > 200 > application/json', (transaction) => {
        transaction.skip = true;
    });

Dredd looks like it has a good set of lightweight augmentation approaches for adding extra information to allow the untouched documentation to help drive some automated execution.
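
Assuming the hooks above are saved as hooks.js next to the spec, Dredd can be told to load them with the --hookfiles command line parameter, e.g.

 dredd swagger.json http://localhost:4567 --hookfiles=./hooks.js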

Tooling

I found writing the swagger specification quite time consuming with the online swagger editor http://editor.swagger.io

But it was much faster with stoplight.io

https://stoplight.io/

My current API documentation work in progress is here, but this is subject to massive change.

https://next.stoplight.io/eviltester-1/thepulper

I’m going to experiment more with the formal API documentation specifications and the tooling around them to see if there are any more useful tools and approaches I can add to my API Testing processes.

If you are interested in testing and automating APIs then you might find my book “Automating and Testing a REST API” useful. It covers testing and automating APIs from scratch and uses tools like cURL, Proxies, Postman, RestAssured and discusses abstraction layers for automating.

TLDR; Two online events, one workshop, a podcast, and a tonne of Patreon content.

Events

I presented an AMA for The Test Tribe; the recording should be available, at some point, on the Test Tribe YouTube channel.

I also presented a webinar for the Testival meetup group.

Recordings for both of the above have been added (with bonus material) to my Evil Tester Talks Online bundle.

Viv Richards and I presented a workshop on dev tools at the London Tester Gathering.

The sample apps we used were the Sweetshop, the Playground, The Pulper, and the Buggy Games. Viv wrote the Sweetshop and Playground - both of which are fun to experiment with. I wrote The Pulper and the Buggy Games. We also showcased our Useful Snippets Chrome Extension.

We kept the workshop fluid so people could use whatever apps they preferred when experimenting with the Chrome Dev Tools. I provide an introduction to Dev Tools in my Technical Web Testing 101 online course.

Test Automation U released my “Automating in the Browser using JavaScript” course, which uses TodoMVC as a sample app and shows how to automate it from within the browser itself.

And for those of you who like to be productive on Twitter, I have added additional functionality to Chatterscan.com. It is now easier to work with embedded URLs and hashtags, and saved searches now use the Twitter Saved Searches as well as local storage. It also has shortcut links to other helpful tools for managing lists. I now primarily use this as my Twitter client to maximise the value I can get from Twitter while avoiding spending too much time there.

And I also released a new podcast episode, “Automate or Die”. I will try to expand on this over the coming month with some additional podcast episodes.

AccelQ

I spent some time with the AccelQ toolset, using it to automate the TodoMVC app and get a feel for its modelling and DSL scripting capabilities.

If you work for a tool vendor and want me to spend some time building objective overview content to help showcase your tool then contact me and let me know.

Patreon Posts For June 2019

The following posts on Patreon were collated into a 65 page PDF that patrons on the $5 “scholar” tier can download, to make it easier to catch up with the content.

Twitter May 2019

I found a few interesting links that I posted to Twitter, which are listed below.

Note: I don’t summarise the content that I release to Facebook or Instagram so you might want to follow me there.

TLDR; A simple link checker running from a snippet or console has some secondary advantages like jumping to the links and showing CSP and CORB errors

I’ve been experimenting more with JavaScript and working more from the console.

It was great to see Santhosh Tuppad using a lot of JavaScript from the console in his security workshop at the London Tester Gathering. I thought his encouragement to everyone to create a local unpacked ad hoc Chrome extension was a good idea.

I created a tutorial showing how to create a Chrome extension on YouTube

And I have a couple of examples of extensions on github

Working with Chrome extensions gives you access to a few more APIs to avoid some of the constraints of cross site scripting.

External Link Checkers

I use external link checkers to crawl my site for errors.

External crawlers are important for finding the status of pages e.g. 404, 200

Building a Link Checker

As a quick experiment I wanted to see how much of a link checker I could build in JavaScript and run it from snippets.

I have uploaded all the code as a Gist.

Find all the links

In essence, what I do is:

  • find all the links
  • iterate over them

Which looks like this:

var links = document.querySelectorAll("a"); 
var linkReport = [];
links.forEach(function(link){
    var reportLine = {url: link.getAttribute('href'), status:0, message : "", element : link};
    linkReport.push(reportLine);
    // do stuff to the reportLine and link here
});
console.table(linkReport);

You could run this from the console, or add it as a snippet.

When it finishes it uses the console.table functionality to output all the objects.

The objects are created in the line:

var reportLine = {url: link.getAttribute('href'), status:0, message : "", element : link};

In this form it doesn’t really do anything, but…

…if anything caught your eye in the table, say it was link 70 in the table, then you could, in the console…

Scroll it into view:

linkReport[70].element.scrollIntoView()

And highlight it on screen with:

linkReport[70].element.style.backgroundColor = "red"

So it might be useful in that simple form.

Checking Links

I wanted to check the links by making a HEAD request.

I knew this wouldn’t work for all links because the browser would block some of the requests due to cross site scripting concerns.

Extensions like Check My Links will use Chrome APIs to avoid the XSS issues.

But I carried on regardless to see if anything interesting would happen.

I initially used XMLHttpRequests:

    var http = new XMLHttpRequest();
    http.open('HEAD', reportLine.url);

    http.onreadystatechange = (function(line, xhttp) {
        return function(){
            if (xhttp.readyState == xhttp.DONE) {
                line.status = xhttp.status;
                line.message = xhttp.responseText + xhttp.statusText;
                linksChecked++;
                console.table(xhttp);
            }
        }
    })(reportLine, http);
    http.send();

This console logs the http request as it works.

Because this is callback based, if I output the table immediately after the loop it would not have all the request statuses, so I maintain a count of links checked (linksChecked++) and added a polling mechanism after the loop:

var finishReport = setInterval(
    function(){
        if(linksChecked >= linkReport.length){
            console.table(linkReport);
            clearInterval(finishReport);
        }
    }, 3000);

This way the final console.table report is only shown when the number of links checked matches the number of links in the array.

Simple link checker using XMLHttpRequest

Giving me a simple link checker like this:

var links = document.querySelectorAll("a");
var linkReport = [];
var linksChecked = 0;
links.forEach(function(link){
    var http = new XMLHttpRequest();
    var reportLine = {url: link.getAttribute('href'), status: 0, message: "", element: link};

    http.open('HEAD', reportLine.url);
    linkReport.push(reportLine);

    http.onreadystatechange = (function(line, xhttp) {
        return function(){
            if (xhttp.readyState == xhttp.DONE) {
                line.status = xhttp.status;
                linksChecked++;
                line.message = xhttp.responseText + xhttp.statusText;
                console.table(xhttp);
            }
        }
    })(reportLine, http);
    http.send();
});
var finishReport = setInterval(
    function(){
        if(linksChecked >= linkReport.length){
            console.table(linkReport);
            clearInterval(finishReport);
        }
    }, 3000);

Again I can scroll to a link and make it visible.

One of the issues I have with Check My Links is that when a link fails it can sometimes be hard to find on screen. This way I can use JavaScript in the console to jump to it.
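
As a small extra, not in the original code, the same report array can be filtered in the console to show only the problem links, since every reportLine carries a status:

// show only the links that failed, or were blocked and left with a status of 0
console.table(linkReport.filter(function(line){
    return line.status === 0 || line.status >= 400;
}));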

Using Fetch

I thought I’d try with Fetch and see how different the output was:

    fetch(reportLine.url, {
      method: 'HEAD'
    })
    .then(function(response) {
        linksChecked++;
        reportLine.status=response.status;
        reportLine.message= response.statusText + " | " +
                            response.type + " | " + 
                            (response.message || "") + " | " +
                            (response.redirected ? "redirected | " : "") +
                            JSON.stringify(response.headers) ;
        console.table(response);
        }
    )
    .catch(function(error){
        reportLine.message = error;
        console.table(error);
        linksChecked++;
    });

This was a little easier to use and the response had more useful information, so I crudely concatenated the response fields I was interested in into the message property of the report line.

Errors

When the link checker runs it shows me all the CSP errors in the console:

VM14:1 Refused to connect to 'https://help.github.com/' because it violates the document's Content Security Policy.

And all the CORB errors:

Cross-Origin Read Blocking (CORB) blocked cross-origin response https://gist.githubusercontent.com/eviltester with MIME type text/plain. See https://www.chromestatus.com/feature/5629709824032768 for more details.

This was a useful side-effect. The table report shows me a status of 0, but I can look in the console for the other errors.

This is a useful side-effect because I don’t see these warnings with external link checkers, but it is important to be able to check that the various XSS policies are in place, or have been deliberately eased up on for some servers as appropriate.

I don’t think I have any other tools which provide me with this information easily.
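
As an aside, and not something the link checker itself does, one quick way to look at the current page's CSP from the console is to fetch the page itself and read the response header; this is a same-origin request so the header is readable (it will be null if the policy is set via a meta tag instead):

fetch(document.location.href, {method: 'HEAD'})
    .then(function(response){
        // logs the Content-Security-Policy header, or null if it is not set as a header
        console.log(response.headers.get('content-security-policy'));
    });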

Could I check status for these?

In order to try to add even more information I thought I’d see if I could check the status for anything that was throwing errors in the initial log.

So I used a quick hack that I learned in Santhosh’s workshop.

Image tags are often used for XSS to pass information to another site, but I wanted to see if that could give me any status information.

function imgreport(links){    
    links.forEach(function(link){
            if(link.status==0){
                // trigger error messages with status 
                // to the console for status of 0
                var img = new Image();
                img.src = link.url;
            }
        }
    );
}

The above function creates a new image for each link with a status of 0 and sets its src to the url of the link that failed to work with the fetch.

Would this provide more information?

It did.

With the Fetch I learned:

Access to fetch at 'https://twitter.com/eviltester' from origin 'https://www.eviltester.com' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.

For the same URL with the image I learned:

GET https://twitter.com/eviltester 403

What else could fetch do?

I had a look at the fetch documentation and saw that it could follow the redirects for me:

    fetch(reportLine.url, {
      method: 'HEAD',
      mode: 'cors',
      redirect: 'follow'
    })

So I added the url it was redirected to into the report:

        if(response.redirected){
            reportLine.redirectedTo = response.url;
        }

My final code

The final code for my link checker used the ‘fetch’ version as it had more actionable and useful information.

var links = document.querySelectorAll("a");
var linkReport = [];
var linksChecked=0;
links.forEach(function(link){
    
    var reportLine = {url: link.getAttribute('href'), status:0, redirectedTo: "", message : "", element : link};
    linkReport.push(reportLine);

    console.log("HEAD " + reportLine.url);

    fetch(reportLine.url, {
      method: 'HEAD',
      mode: 'cors',
      //mode: 'no-cors',
      redirect: 'follow'
    })
    .then(function(response) {
        linksChecked++;
        reportLine.status=response.status;
        reportLine.message= response.statusText + " | " + 
                            response.type + " | " + 
                            (response.message || "") + " | " +                            
                            JSON.stringify(response.headers) ;
        if(response.redirected){
            reportLine.redirectedTo = response.url;
        }
        console.table(response);
        }
    )
    .catch(function(error){
        reportLine.message = error;
        console.table(error);
        linksChecked++;
    });

});

function imgreport(links){    
    links.forEach(function(link){
            if(link.status==0){
                // trigger error messages with status 
                // to the console for status of 0
                var img = new Image();
                img.src = link.url;
            }
        }
    );
}

var finishReport = setInterval(
    function(){
        if(linksChecked >= linkReport.length){
            console.table(linkReport);
            imgreport(linkReport);
            clearInterval(finishReport);
        }
    }, 3000);

Not an everyday link checker

I found that a useful exercise.

The link checker report is useful to me because it does reveal issues on the page that were hinted at by icons in Chrome, but very visible in the fetch error messages.

Using console.table allows me to sort the ‘report’ in the console to make the investigation useful, and I learned a bit more about fetch.

All the code is easy to copy and paste to experiment with from this gist.

And if you wanted to learn a bit more JavaScript then:

TLDR; The AccelQ tool helps automate web applications and APIs using a DSL rather than a programming language

I spent some time becoming familiar with the AccelQ tool and used it to automate a web application. AccelQ have a Model Based and Domain Specific Language approach to automating. If you are looking for a tool to support automating but do not have coding experience on your team then AccelQ might help. And if you have a mix of skills then AccelQ also support adding your own code and expanding the tool capabilities.

I don’t always have a lot of time to evaluate new tools. AccelQ made this overview possible by paying for my time to familiarise myself with their tool and allowing me the freedom to create the overview video without their editorial control.

I created a video below to showcase some of the concepts and functionality of the tool without making it a tutorial video because it can be hard to get to grips with “what does this tool actually do?” and “how does this tool actually work?”.

Concepts

A long time ago I wrote a test modelling tool which used directed graphs to model the application, which the user could then identify paths through, to create scripts.

AccelQ take a similar approach.

There is a modelling component which allows you to model the application and identify paths through it.

The transitions through the model are Actions, described in a domain specific language which uses predictive text to show you available options.

e.g.

Click on `create todo name input` element
Enter `todo name` in the `create todo name input` field
Press special keyboard key, ENTER in the `create todo name input` field

  • `create todo name input` is a re-usable locator to identify an element on the screen
  • `todo name` is a parameter passed into the Action to make it easy to re-use the Actions

The text is built up using predictive text so you don’t type all the DSL and are prompted for the parameters etc.

I’ve found some of the editors for online DSL tools to get in the way. The AccelQ editor supported creation of the DSL scripts well.

Also, you can build up complex paths if you need to since the DSL supports flow constructs like: if/else and loops.

The Actions are also re-usable, so you can call Actions from other Actions and build up a DSL that matches your application.

This is similar to BDD tools, where you create abstraction layers that model your application.

Element Location

AccelQ have support for identifying elements in the Web Application.

This uses a Chrome extension for creating a snapshot of the state of the page, and then it helps you identify elements on the page.

The tool can identify complex location strategies and you have the ability to add your own CSS locators if you want to.

You can also use multiple location strategies for elements.

Execution

AccelQ support running the paths through cloud based Selenium providers.

I ran the paths locally from my machine: the AccelQ agent ran locally, connected to AccelQ, execution was triggered from AccelQ, and all browser execution ran locally on my machine.

Summary

I only spent a short time with the tool, about one elapsed day, familiarising myself with the functionality.

I didn’t have any more support than a customer would have.

I managed to automate the TodoMVC application and create capabilities to:

  • create a Todo
  • amend a Todo (rename, toggle status)
  • delete a Todo
  • filter Todos
  • clear Todos
  • assert on Todo text and numbers

These are the basic capabilities that would allow me to automate most of the paths through the application.

I also explored data driven tests and calling Actions from other Actions to create a set of abstractions that logically model the application.

AccelQ offer a 14 day free trial and their pricing is clear on the website, so no need to guess how much the tool costs.

Overview of AccelQ Video
An overview of AccelQ Test Automation and Modelling Tool - Web Test Automation - YouTube

I receive a lot of concerned emails and messages from testers: people concerned that their testing skills will not be enough to keep them in work, and that the future revolves around automation which they don’t know how to do. They have no programming experience. Should they learn API or UI automation? What programming language should they learn? People do feel that their career is at risk.

And that’s what I cover in this podcast.

Disturbing Trend

I think it is a slightly disturbing trend that automating is so tied up with testing.

Because it almost doubles the time it takes to learn ‘testing’ - you have to learn to automate, and code, and test. And learning how to test requires that you learn other parts of the development process - requirements, management, communication, process design, architecture. And there is a lot to learn from other disciplines: math, sociology, psychology, psychotherapy, etc. There is enough to learn and study already to become good at testing.

Personally Take Long Term View

This is also why we need to take a ‘long view’ of our testing careers. Over 20 to 30 years, you can spend 5 years on each main discipline and become pretty good at all of them.

But recruitment often takes a short term view and requires expertise in all of this in 3 - 5 years. Recruitment is broken.

Automation and automatability are independent of testing. Sometimes we conflate them because we want to automate in support of testing, so we view automating through a testing lens. But we don’t exclusively automate in support of testing, so both automating and automatability exist independently of testing.

The coding skills required to automate for testing are often viewed as simpler than production code. This is not true. To write code for automating strategically requires production level coding experience: abstraction layers, unit testing, TDD, architectural patterns.

I do think learning how to code is useful. It has been beneficial for me. But I also had 3 or 4 years commercial programming experience before starting to learn testing. And informally 6+ years messing about with computers and programming prior to that. I had the time to develop skills and I enjoyed it. No-one should be forced to learn this.

When employers mandate coding as a requirement for testing it means they don’t understand the value that testing can bring. And while it seems hard that you won’t get a job with that company, you are probably better off not getting a job with that company. Hopefully there are enough companies not having coding and automating as a job requirement to allow you to progress your career. But it does seem concerning that so many companies do require this.

Automating Does Not Have To Involve Code

There are commercial tools that allow us to achieve automated execution aims without requiring extensive training and practice and experience in programming.

These can be a viable alternative to requiring all your staff to learn how to code and they can be used strategically to achieve your aims for automating.

The tools have changed over the years and I’ve been looking at a few tools recently that are affordable and do work.

My views on this have changed. Commercial tools used to be incredibly expensive, locked down, inflexible and still required a lot of coding experience, but the coder was hamstrung by the environment. Now the technology is catching up with the user experience and allowing people to automate without the emphasis on coding experience.

How do I get a job automating without experience?

Make your learning public: blog, github.

Explain what you have learned.

I recommend starting where you are.

i.e. if you are testing APIs, then learn to automate APIs. If you are testing web apps, then learn Selenium. If you are testing desktop apps, then you can still learn Selenium with WinAppDriver.

And start by automating to support your work.

Start tactically, rather than strategically.

  • tactically - getting something done fast which we might build on, but also might throw away or not share
  • strategically - committing to an approach that you will build on and maintain long term

Find a way to automate something that you do.

It doesn’t have to be:

  • robust
  • reliable
  • well written
  • fully automated

But it can support you.

I need to code. What programming language should I learn?

What language to learn?

Depends on:

  • what you want to do and
  • who you want to learn from and
  • who you have supporting you

If you can do this in your normal work and get support from people around you then choose the languages and tools that they know. Pair with programmers if possible.

Sometimes people don’t like asking programmers. I’ve found that programmers are happy to share their knowledge and help if you are actively learning and making progress on your own.

Start small.

I don’t have anyone that can help

If you don’t have that advantage then find a teacher whose material you want to follow, and work through it.

  • youtube
  • online courses

At the same time… immerse yourself in that language. Blog posts. Twitter people. Books (o’reilly safari trial, use the library).

Read code

Immerse yourself in code.

Read code:

  • blogs
  • github
  • run other people’s code
  • hack about with other people’s code

Be prepared. Getting started is the hardest part

Getting started is the hardest part.

Scripting languages seem easy because of that, but all languages are complicated eventually, and when you start writing automated execution code strategically, you’ll be writing code.

If it is for career growth then you have the difficulty that people don’t really hire without experience so you have to demonstrate that experience.

But you might not have to

Perhaps you can extend your existing test approach to cover more technology risks.

  • Security testing
  • Technical Web and HTTP Risks

Look for tools that require less coding. Scriptless or augment with smaller scripts.

Don’t stress. Finding work takes time, regardless of how experienced or skilled you are.

Get better at what you already do.

Go deep, rather than wide.

Develop the ability to communicate the value that you and your approach adds.

Some related blog posts:

TLDR; Another month of lots of patreon content, with a few conferences and links

Seems like I’ve concentrated on work and conferences last month, as I haven’t released any blog posts. But I have been active on patreon, with 20+ posts.

I’ve been sorting out my live stream setup, to try to create smaller pieces of content more regularly, and to do more remote live events, because conference travel eats up a lot of time. You can now find me more active on Facebook at EvilTester. And I’m even streaming periodically to Twitch as EvilTester T.V.

I don’t really “friend” on Facebook with my Facebook account - I’m really using Facebook for the “page”, so if you don’t get a response to your friend request, that is the reason why.

I also updated the blog to create thumbnails for all the blog posts… all 600+ of them. Clearly I automated that, and I described how in a Patreon post. The secret was to use ImageMagick and wrap a script around it.

I then extended the script to automatically create videos from blog posts with an automatically generated voice over. I’m not sure how I will actually use this, but it was an entertaining challenge and I experimented with a bunch of text to speech libraries, APIs, and ffmpeg. My final version uses the Amazon Polly API. I suspect I’ll more likely use this to create social media videos rather than YouTube videos.

Conferences

I attended the Joy of Coding conference.

And Viv Richards and I performed an Ask Us Anything on the Ministry of Testing forums.

Videos will be released publicly eventually, although videos for Joy of Coding (and this month’s Test Tribe AMA and Testival Webinar) have already been released to Patreon.

Patreon Posts For May 2019

There were 25 or so posts released to Patreon in May 2019.

Twitter May 2019

I found a few interesting links that I posted to Twitter, which are listed in this blog post.

Note to self: creating these twitter lists takes time… add this as a feature to Chatterscan (Chatterscan.com).

Note to Note to self: knocked up an MVP - check the auto generated comment in the HTML source when you view the tweet list in Chatterscan.

Note to Note to Note to self: Thanks, that helped loads. I did not realise I had not Tweeted very much.

Note: I don’t summarise the content that I release to Facebook or Instagram.

And, if you are interested, I converted this blog post text automatically into a video. Just for chuckles. Ha Ha Ha!

I still have some bugs to fix.

2019 06 05 may 2019 summary - YouTube

TLDR; Another month of lots of patreon content, code reviews and a few blog posts and links

I’ve been concentrating on business this month, and creating a few new videos (for other companies) and conducting some code reviews.

If you want to know how I conduct code reviews then my Remote Code Reviews page explains it. This is a really good way to get experienced feedback on your projects even if you don’t want a more lengthy consultancy engagement.

I’m always happy to learn about your consultancy or mentoring needs. Remember consultancy is very cost effective because I hit your problems hard and fast or I work with your teams for short targeted engagements. You don’t hire me for 6 months, you hire me for days at a time and each engagement is tailored exclusively for you.

If you want me to work directly with your teams and help improve your approach and skill sets then you can contact me here or find more details of consultancy work on this information page.

Blog Posts

All my YouTube videos are embedded within blog posts, so I haven’t listed the videos separately.

Things that caught my eye

Things that caught my eye that I mentioned on Twitter

I mentioned a few other things on Twitter but if you scroll through my timeline for April then I’m sure you’ll find them

Patreon Posts For April 2019

There is usually more released on my Patreon site than publicly on the blog.

I release a PDF which has the full text of all my blog posts and patreon posts as a Patreon benefit to make it easier for Patreon supporters to catchup in batch mode.

If you want to keep up to date on a more regular basis you can follow me on - Linkedin, Twitter, Instagram, Facebook, Youtube

And I have a newsletter you can sign up for.

Writing a Chrome Extension is pretty easy (getting it into the Chrome Store is much harder!). In this post I describe the steps to take if your extension is rejected.

I have now released two extensions to the Chrome Store.

Both were rejected.

Multiple times.

For the same generic reasons.

Reasons

Dear Developer, Your Google Chrome item, “…” did not comply with our policies and was removed from the Google Chrome Web Store. Your item did not comply with the following section of our policy:

  • Do not post repetitive content.

  • Do not attempt to change the placement of any Product in the store, or manipulate any Product ratings or reviews, by unauthorized means, such as fraudulent installs, paid or fake reviews or ratings, or offering incentives to rate Products.

  • Do not post an app if the primary functionality is to link to a website not owned by the developer posting the app.

  • Do not post an app where the primary functionality is to install or launch another app, theme, or extension. For example, you cannot post an app if its primary function is to launch a desktop app that the user has already installed. Another example of a disallowed practice would be to post a packaged app that just launches a website.

  • Your app must comply with Google’s Webmaster Quality Guidelines

If you’d like to re-submit your item, please make the appropriate changes to the item so that it complies with our policies, then re-publish it in your developer dashboard. Please reply to this email for issues regarding this item removal.

My extensions did none of this.

The Quality Guidelines are too vague to be any help.

If you Google the error message you will see other people in the same situation. This doesn’t help, but it makes you feel less alone.

What to do?

You receive the rejection in an email, so reply to the email, nicely asking:

“Can you provide more information about why this failed the review and what I have to do to fix it?”

Or words to that effect.

If you are told to resubmit, then resubmit, expecting to be rejected, because you will likely go through this process multiple times.

I went through this cycle. With no changes to my extension. 9 times from 26th Feb 2019 until it was accepted into the Chrome Store on the 1st of April.

Why?

I suspect it is automatically rejected.

You might get lucky.

If you don’t then keep trying.

Summary
  • first google the error message
  • you’ll see other people in the same situation
  • if changes you need to make are not obvious then email support asking for a specific reason
  • keep doing this, and submitting and asking, until you get a reason
  • eventually, if you persist, you can make it

Video

I’ve created a video showing the steps I took.

Chrome Extension Rejected? What to do next. - YouTube

If you are interested in writing a Chrome Extension then check out all blog posts in this category Chrome Extension and see the videos in this Chrome Extension Playlist

Writing a Chrome Extension is pretty easy (getting it in the Chrome Store is much harder!). In this post I describe the steps to release an extension.

I have now released two extensions to the Chrome Store.

Neither went in on the first try, but both eventually made it after adjusting the description and adding some more screenshots.

I have described the basic process for releasing the extension.

After following this process you may have to refine your entry if it is rejected during the submission process. I’ll describe that process in a later blog.

I have a video showing the full process at the bottom of the post. This text is basically a summary.

You can find the code for both my Chrome Extensions on Github and you’ll see all the images, descriptions and manifest files that I created there if you want to model my final submissions.

Release process summary

Up to date instructions and directions will be on the Chrome Store at developer.chrome.com/webstore/publish. The process I followed was:

  • I created an icon in Gimp - 128x128, 48x48, 16x16

  • Add icons to the manifest (see the sketch after this list)

  • Write a description

  • Create screenshots for the application of 1280x800 or 640x400

  • I created a separate google account because the developer email address needs to be public

  • Create a zip file of the extension, I zipped my code so that the manifest was in the root of the zip i.e. not in a subfolder when unzipped

  • “Add New Item” from the Chrome Developer Dashboard

  • paste in all the descriptions, icons and screenshots
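
As an illustration of the “add icons to the manifest” step above, the icons entry in manifest.json looks something like this; the file names here are hypothetical, use whatever you named your icon files:

    {
      "name": "My Extension",
      "version": "1.0",
      "manifest_version": 2,
      "icons": {
        "16": "icon16.png",
        "48": "icon48.png",
        "128": "icon128.png"
      }
    }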

Over the submission process I:

  • refined the description
  • added a few more screenshots
  • added a YouTube video showing the plugin in action

Video

I’ve created a video showing this in action.

How to release a Chrome Extension to the Chrome Webstore - YouTube

Code

And you can find the source code on Github.

If you are interested in writing a Chrome Extension then check out all blog posts in this category Chrome Extension and see the videos in this Chrome Extension Playlist
