
We’re pleased to announce that TestRail 6.0 is now available, and has been released to all users of our cloud infrastructure.

In TestRail 6.0, we’re very excited to present brand new Modern and Dark Mode user interface themes, as well as support for Docker containerized on-premise installations.

All-New User Interface

With TestRail 6.0 you get not just one, but two brand new TestRail themes! We are extremely proud to present you with our new Modern theme. This will be the default TestRail theme going forward.

And, for those of you who prefer things a little moodier… We’re also pleased to introduce a brand new Dark theme.

 

If you want to switch UI themes, you can do so in your personal User Settings area. Also, if you were happy with TestRail just the way it was… Don’t worry, we’ve got you covered too! You can just switch back to Classic mode any time you feel like it.

Docker Support

Also with TestRail 6.0, we’re happy to introduce support for installing your on-premise TestRail instance using Docker containers.

Using Docker containers to configure, install and manage your TestRail instance streamlines the whole process of downloading and setting up TestRail on your own server. No longer do you need to fiddle around with the various aspects of setting up a web server, PHP, a database, IonCube, etc. From now on, installing TestRail can be as simple as downloading the Docker containers and spinning them up according to your preferred configuration.

For complete Docker installation instructions, please refer to the guide here: Docker Overview

How to Get TestRail 6.0

New Customers

A 30-day fully functional trial version of TestRail can be requested here. Choose a trial hosted on our fast and secure servers, or download TestRail to install on your own server.

If you have an active trial and wish to create a subscription for TestRail Cloud, you can do so from within TestRail using the menu option Administration > Subscription. Or, if you want to order TestRail Server licenses, you can do so at our web store.

Existing Customers

TestRail Cloud instances are automatically updated to the latest version. To verify your version, use the menu option TestRail Help > About TestRail.

Registered customers using the on-premise version of TestRail can download the full version from our customer portal. Then, update to the new version as usual by installing it over your existing TestRail installation. There’s no need to remove your existing installation. The database upgrade wizard starts automatically when you access TestRail with your web browser. Please see the update instructions for details.

If you’re using Docker, please refer to the instructions here: Docker Overview

Upgrading to TestRail Enterprise

If you’re interested in our Enterprise package, please email us for a trial or quote via contact@gurock.com, or you can use the contact form.

Please ensure that you make a backup of your current on-premise installation before upgrading to the new version.


The post Announcing TestRail 6.0 with UI Enhancements and Docker Support appeared first on Gurock Quality Hub.


This is a guest post by Nishi Grover Garg.

A task board is a physical or virtual chart containing all current team tasks at hand and their progress over time. For an agile team, all sprint tasks can be represented on the task board, and their flow over various stages can be tracked in the daily standup meeting. Task boards are a great way to visually represent pieces of work and their status.

Besides helping to organize and track work and being the focal point of the iteration and relevant meetings, task boards can have numerous other benefits for an agile team. Here are four additional ways they can help.

Four benefits of task boards

1. Customize your process

Though task boards often start with the basic To Do, In Progress and Done stages, teams are free to design their task boards based on their own way of showing progress through the phases of a task. My team's task board had each task flow through all the relevant phases.

Another team may decide to have tasks for each activity, like Design Discussion, Coding, Reviews, Test Creation and Test Execution, and then move each user story along in the three stages of To Do, In Progress and Done.

Teams get to decide their own way of visualizing their work and find the best way to collaborate. This enhances a team's interaction and understanding of the process.

An agile sprint is considered a success only if all tasks taken up by the team are completed with the desired quality by the end of the sprint.

2. Visualize your Scrum process

Teams new to agile may get easily overwhelmed by the fast pace. Task boards help them visualize their process and their work. Representing each user story in the form of smaller tasks breaks down the expected work into understandable and easily doable chunks. Seeing these small chunks of work move ahead on the task board every day also gives the team a sense of progress. The sprint seems achievable, and agile begins to work easily!

3. Improve commitment and visibility

Once tasks are put up on the sprint task board, the task owners are committed to deliver. As the sprint progresses, everyone feels a sense of ownership in getting their task moving ahead. If tasks linger in the In Progress stage for too long, the owner must answer to the team. Any impeding factors can be discussed and addressed to help the task move along.

Progress is tracked every day, and at the end of the sprint, if a task is not completed, the ownership is clearly visible, so the owner can explain the situation. On the other hand, if a task gets completed ahead of schedule, that too is visible and can be a small win for the owner as well as the entire team!

4. Facilitate team interactions

Task boards are the focal point of all team meetings and discussions. If the board is used correctly and placed close by, team members can gather around it for discussions about tasks. My team would not even wait for our evening standup meeting to move tasks ahead on the board once they were done! It acts as a communication channel as well as a motivating factor.


Physical and Virtual Task Boards

Though physical task boards are the most collaborative, distributed teams may find it difficult to work with them, so they may instead use virtual task boards. Free online tools and websites help in creating your own task board and sharing it with the team, so all members can update the status of their tasks periodically. Trello is a popular online task board creation tool, and it's easy to use and customizable. Virtual task boards also result in less wasted time, easy tracking, and the ability to maintain the history, comments, and conversations for each task.
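
To make the mechanics concrete, here is a minimal sketch of a virtual task board as a data structure. The column names, class, and methods are invented for illustration; they are not Trello's API or any particular tool's:

```python
from datetime import datetime

class TaskBoard:
    """A tiny virtual task board: named columns, tasks, and a change history."""

    def __init__(self, columns):
        self.columns = list(columns)        # e.g. ["To Do", "In Progress", "Done"]
        self.tasks = {}                     # task name -> current column
        self.history = []                   # (timestamp, task, from_column, to_column)

    def add(self, task):
        self.tasks[task] = self.columns[0]  # new tasks start in the first column
        self.history.append((datetime.now(), task, None, self.columns[0]))

    def move(self, task, column):
        if column not in self.columns:
            raise ValueError(f"unknown column: {column}")
        previous = self.tasks[task]
        self.tasks[task] = column
        self.history.append((datetime.now(), task, previous, column))

# Each team customizes its own stages, as described above:
board = TaskBoard(["To Do", "Design Discussion", "Coding", "Review", "Testing", "Done"])
board.add("Login page validation")
board.move("Login page validation", "Coding")
print(board.tasks, len(board.history))
```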

Whatever your preferred approach, give task boards a try for the benefit of your agile team.


Nishi is a corporate trainer, an agile enthusiast and a tester at heart! With 11+ years of industry experience, she currently works with Tyto software as an Evangelist and Trainings Head. She is passionate about training, organizing testing community events and meetups, and has been a speaker at numerous testing events and conferences. Check out her blog where she writes about the latest topics in Agile and Testing domains.

The post Four Ways Task Boards Help an Agile Team appeared first on Gurock Quality Hub.


This is a guest post by Carol Brands.

Back in January, I wrote about how we determined the minimal amount of testing required to meet our release date for a struggling project. But what could we do to make sure we don’t fall into the same trap for our next release?

This is the story of how we’re using post-release testing to practice a different approach that might help us avoid getting trapped by a deadline in the future.

Identifying the Issues

Our first order of business was recognizing what processes led to the project running behind in the first place. We identified the following three issues:

1. Backlog items did not go through enough review between their initial creation and when they were handed off to testers.

For the majority of the project, backlog items were written by the product manager in collaboration with the development manager. They worked at developing acceptance criteria that were understandable and testable without dictating implementation. However, often when these backlog items were passed on to the test team, we had questions.

We identified acceptance criteria that were fine in isolation but needed to be adjusted to reflect other work or features that they interacted with. Most of our backlog items were defined by their relationship to our legacy product, and we often found workflows available in our legacy product that hadn’t been considered.

Finding these problems during the testing phase of development meant that we experienced a lot of churn. We were writing defects constantly, and we often needed to send entire backlog items back into development.

2. Only testers were responsible for testing.

In the early stages of development, it became clear that the development capacity on the team was outpacing the test capacity. We had five developers working on the project and only two testers. Development began well before the test team got involved, so we began work on the project with over 50 stories already marked “Ready for Test.”

At that time, we thought that all the stories would have the same testing priority, so we started with the earlier stories and tried to work our way through the pile. Over time, it became clear that we wouldn't be able to catch up to development, so we switched strategies to testing the most recently developed stories. This was better, but nothing could overcome the fact that we had more than twice as many developers as testers. This compounded the problems around backlog items being insufficiently reviewed prior to entering the testing phase.

3. Defects were being triaged infrequently via defect reports, not discussion.

By the time we reached the end of the testing cycle, we needed to be careful of what changes we chose to include in our release. Any unnecessary changes would take time to fix and test, and we didn’t have time.

We decided that we needed to document all potential defects so that they could be reviewed by the product manager, and then he could choose which defects would be fixed and which would be deferred. Since the backlog items hadn’t been reviewed by testers, we were finding lots of defects, all of which needed a detailed report so they could be reviewed by the product manager, whose time was limited by the approaching release date and other competing priorities.


Cleaning Up Our Mess

In our particular market, it’s not uncommon to have some lead time between a release and the first sale. Throughout our stakeholder conversations leading to the release date, we discussed using that time to complete additional testing. We decided to use this as a practice run for a different way of working.

First, we enlisted all the developers to assist with testing. We gave some basic directions:

  1. Explore thoroughly using a test environment, as opposed to testing the “happy path” on a local environment.
  2. Write down what you test. You should be able to answer the question, “Did you test this specific scenario?”
  3. If you find a defect, first talk to the product manager about it:
    • If the defect is going to be fixed, write a very basic defect report — just enough to give the product manager something to write release notes from.
    • If the defect is going to be deferred, write a full defect report so that when we make a decision about the defect in the future, there’s enough information to base the decision on.

Using this way of working, we’ve improved developers’ understanding of testing, they’ve practiced being responsible for testing backlog items, and we’ve increased the level of communication happening during development and testing. This feels like a good first step in getting developers more involved in the full development process, rather than treating the testing phase as something that belongs only to the testers.

We hope to iterate on these practices on our next project by including testers in backlog reviews before they go into development, and asking developers to participate in the testing phase as soon as we see more “Ready for Test” backlog stories than available testers. As we bring this project to a close, I am grateful for the lessons we learned from our mistakes.

All-in-one Test Automation
Cross-Technology | Cross-Device | Cross-Platform

Carol Brands is a Software Tester at DNV GL Software. Originally from New Orleans, she is now based in Oregon and has lived there for about 13 years. Carol is also a volunteer at the Association for Software Testing.

The post Tester’s Diary: Getting Ahead With Post-Release Testing appeared first on Gurock Quality Hub.


This is a guest post by Matthew Heusser.

Ten years ago Bernie Berger wrote “A Day In The Life Of A Software Tester” for Software Test and Performance Magazine.

Yes, Magazine. As in, printed on paper. I know. It was a crazy time.

The amazing thing about the day-in-the-life article was how little actually got done. Bernie spends all day waiting: waiting for answers on what the software should do, waiting for a fix, waiting for a new build, waiting to restart the webserver. If it was possible to wait for something, he was probably going to wait for it. The waiting was so painful that it helped inspire the book “How To Reduce The Cost of Software Testing”. The general advice of my chapter was to never, ever report “blocked” for status – there was always something you could do to influence the outcome.

Ten years later, an on-demand test environment can solve 90% of those problems … and that might be an understatement.

By on-demand, I mean at any given time, a tester can bring up a test environment to test against with the latest software. In this context, I generally mean a web server, possibly with a test database, just for the tester. In the case of a mobile app, it might mean pushing a new build onto a phone, but it is more likely to mean testing against a build with the latest microservice.

The first way the on-demand environment helps is with continuous integration and testing.

Continuous Integration and Testing

Most of the teams I work with today have continuous integration (CI). They perform a build every time there is a commit. Sometimes they run unit tests; sometimes they even make sure the tests pass. What I see less of is what some call continuous testing — actually building a full test environment for each build and performing some minimal amount of end-to-end testing.

That end-to-end test work will find out:

  • If a change broke the webserver
  • Or the database
  • Or the login
  • Or some other major area of functionality, minutes after the problem is introduced.
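
In practice, that minimal end-to-end pass can be a short smoke-test script the CI job runs against each freshly built environment. This is only a sketch; the URL, endpoints, and credentials below are placeholders, not from the article:

```python
import sys
import requests

BASE = "http://build-1234.test.example.internal"   # per-build environment (placeholder)

def check(name, fn):
    try:
        fn()
        print(f"PASS {name}")
        return True
    except Exception as exc:
        print(f"FAIL {name}: {exc}")
        return False

results = [
    # Did the change break the webserver?
    check("webserver", lambda: requests.get(BASE, timeout=5).raise_for_status()),
    # ...or the database? (assumes a health endpoint that pings it)
    check("database", lambda: requests.get(f"{BASE}/health/db", timeout=5).raise_for_status()),
    # ...or the login?
    check("login", lambda: requests.post(
        f"{BASE}/login",
        data={"user": "ci-bot", "password": "placeholder"},
        timeout=5).raise_for_status()),
]

# A non-zero exit fails the build, so the CI system can notify the committer.
sys.exit(0 if all(results) else 1)
```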

Moreover, because each build is tied to a specific commit, the CI system knows who broke it, and can notify them by email or instant message. That means any serious problem should be fixed and a new build introduced within ten minutes or so.

Going back to Bernie's day in the life: if he had all his major problems fixed in fifteen minutes or so, I expect he would have gotten the same day's work done by 9:00 AM. That isn't much of an exaggeration. The organizations I've worked with that have implemented this sort of system see a massive reduction in “waiting”, reducing the cycle time to get a card across the “board” from weeks to days, or days to hours.

This is, as the saying goes, a good start.

Dockerized Test Environments

We had on-demand test environments when I was at Socialtext, over a decade ago. They were based on virtual machines, were slow, and took a fair amount of time to populate. By virtual machine, I mean a copy of the entire disk image of the whole computer, taking up a great deal of CPU, disk, and memory. In order to “run”, the host computer needs to swap that entire virtual machine in and out of memory.

What if it didn’t?

Containers fit the virtual machine image into a much more compact space. They run on the same operating system and share a great deal of disk and OS material with the host. You can think of a container as just a diff from your own machine, which means there is less to swap out. Containers start from a “frozen” operating system image that you spin up.

The first step with containers is to generate them for each build, or at least for any branch on demand, within a few minutes. To borrow a phrase, you can “take one down, pass it around” between development and test.
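
As a sketch of what that can look like, here is the Docker SDK for Python spinning up a throwaway environment for one build. The registry, image tag, port, and environment variable are hypothetical; a real setup would take these from your CI configuration:

```python
import docker  # the Docker SDK for Python: pip install docker

client = docker.from_env()

# Start a container from this build's image (hypothetical registry and tag).
container = client.containers.run(
    "registry.example.com/myapp:build-1234",
    detach=True,
    ports={"8080/tcp": None},        # let Docker choose a free host port
    environment={"APP_ENV": "test"},
    remove=True,                     # auto-clean when the container stops
)

container.reload()                   # refresh attributes to read the mapped port
host_port = container.attrs["NetworkSettings"]["Ports"]["8080/tcp"][0]["HostPort"]
print(f"On-demand test environment listening on http://localhost:{host_port}")

# ... point your tests at it, then tear it down:
container.stop()
```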

Once the organization has on-demand test environments, it can put them in a cluster and run the cluster in production.

The On-Demand Cluster

Imagine putting test servers in the same place as production servers – within the same cluster. Employees and testers are sent to the servers running the test code, while customers go to the stable version of the software. As new versions roll out, we send testers to even newer versions of the software. This creates a new form of end-to-end test – the running of synthetic transactions in production, for both test and production users. A synthetic transaction is a fancy way of testing in production, either with a fake user for whom no real processing will occur, or continually performing read-only activities. This is basically a sort of super-powered monitoring, modeled on the user.
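
A synthetic transaction doesn't have to be exotic; it can be a scheduled script that walks read-only paths as a fake user and raises an alarm on any deviation. Everything named below (the URL, endpoints, and monitoring user) is a placeholder:

```python
import time
import requests

BASE = "https://app.example.com"    # production URL (placeholder)
SYNTHETIC_USER = {"user": "synthetic-monitor", "password": "managed-elsewhere"}

def synthetic_browse():
    """Read-only walk through the same flow a real user would take."""
    session = requests.Session()
    session.post(f"{BASE}/login", data=SYNTHETIC_USER, timeout=10).raise_for_status()
    catalog = session.get(f"{BASE}/catalog", timeout=10)
    catalog.raise_for_status()
    assert "results" in catalog.json(), "catalog returned an unexpected shape"

while True:                         # run continuously, like monitoring
    start = time.time()
    try:
        synthetic_browse()
        print(f"OK in {time.time() - start:.2f}s")
    except Exception as exc:
        print(f"ALERT: synthetic transaction failed: {exc}")   # page a human here
    time.sleep(60)
```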

Clusters have all sorts of capabilities for versioning, rollout, redundancy, and scalability. The most common open-source container cluster tool is Kubernetes, supported by IBM, Amazon, and Google’s Cloud offerings.

Where are you in the march toward on-demand test environments — and what is your next step?


Matthew Heusser is the Managing Director of Excelon Development, with expertise in project management, development, writing, and systems improvement. And yes, he does software testing too.

The post The On-Demand Test Environment appeared first on Gurock Quality Hub.


This is a guest post by Peter G. Walen.

People are talking about how AI is coming and how it will change everything around how we interact with and test software. They might be missing something important, like how AI is already impacting our lives and our work.

People who know me often list me as an “anti-automation” person. This is interesting to me because I use various “automation” tools to assist me in my work nearly every day. I prefer to think of myself as one who focuses on “do what makes sense, given the best information and the best tools for the job available at the time the decision is made.”

What I am opposed to is the idea of the new, cool, buzz-word laden solution being hailed as the next great thing that will make the world a better place. Years ago, the early automation tools that did a record and playback were hailed in just such a manner. I was skeptical of them. The next iterations of test automation tools were also hailed as fixing or avoiding the problems of earlier tools.

I want to see real, repeatable evidence, not hand-wavy advertisements posing as “solid research,” before I'm willing to consider something “new.” I suspect this is because I have seen too many people, teams and companies burned by trusting these reports.

Which makes my writing this all the more interesting.

AI, artificial intelligence, from HAL in 2001 to Skynet in the Terminator movies and VIKI in “I, Robot,” has been the boogeyman countering the “technology makes everything better” trope in popular culture. Robots, and by extension AI, will take away everyone's jobs, from assembly line workers to call centers and now, apparently, to knowledge workers in software.

The scary dystopian future many people fear colors all of us. From a zombie apocalypse to a robot/machine apocalypse, we, somehow, use these unsettling images as “entertainment.” The (original) Godzilla movies were based on the fear of what technology would do – these others are not very different.

And yet, we embrace technology all around us and convince ourselves that the Luddites were wrong and that technology is pretty cool. That is where I generally land. Yes. There are things we must be aware of. We as technology workers and members of the broader society do have a responsibility.

AI in Software Testing

What does that have to do with AI and software testing?

Everything.

Mostly because it is all around us. We are using its fledgling forms to do our jobs better, and to shape and hone our own application of this new-ish technology. From internet searches on the correct way to structure a query we aren't familiar with, or one we know isn't working, to developing ideas to help our teams work better and more efficiently, we use AI.

But, Pete, search engines aren’t really AI. What’s your point?


What Makes AI, AI?

AI is driven by the combination of large volumes of data, significant (massive might be a more appropriate word) amounts of computing power and the underlying algorithms that power and drive the actual processing. This also describes a search engine. It describes smaller things like automatic braking functions in “smart” cars and autonomous vehicles. It describes the calculations that allow aircraft to fly reliably with little interaction with human beings.

When we humans get it wrong, it can go horribly wrong. When we get it right, no one notices after the first couple of encounters. It becomes mundane and the glossy newness fades. And we forget the amazement we first had because we expect it to work each and every time, without question.

What AI Can Look Like

(even when we don’t think that is what it is)

We can write scripts to test specific scenarios. We can have them make calls as needed to other pieces of software. We can anticipate what the responses should be, at least most of the time. To accommodate test environment limitations, we can mock those calls and responses, and return the correct response for the normal, usual calls. If we observe and trap the responses to less normal or unusual calls, we can verify how they behaved and confirm what the correct response should be for these unusual conditions. We can then build in conditional logic to handle those conditions so we can mock the responses from the called systems.

We have a crude form of AI when we do so.
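
A small illustration of that idea using Python's standard unittest.mock: the mocked external system returns canned responses for the normal calls and traps the unusual ones for later review. The service name and payloads are invented for illustration:

```python
from unittest.mock import Mock

# Canned responses for the normal, usual calls we have already confirmed.
KNOWN_RESPONSES = {
    ("get_account", 42): {"status": "active", "balance": 100},
    ("get_account", 7):  {"status": "closed", "balance": 0},
}

unusual_calls = []     # trap anything we did not anticipate

def fake_external_service(operation, account_id):
    """Conditional logic standing in for the real downstream system."""
    key = (operation, account_id)
    if key in KNOWN_RESPONSES:
        return KNOWN_RESPONSES[key]
    unusual_calls.append(key)                    # observe and record the odd call
    return {"status": "unknown", "balance": None}

external = Mock(side_effect=fake_external_service)

print(external("get_account", 42))   # normal path: canned response
print(external("get_account", -1))   # unusual path: trapped for human review
print("needs review:", unusual_calls)
```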

This mocking approach gives us the opportunity to control the responses we receive from the external applications, the ones we don't have a direct responsibility to test, and to make sure the predicted responses, at least, work in our environment. It also allows us to isolate the conditions we need to look for, and gives us a focal point when we actually run tests against the same external systems without any mocks.

We have another form of AI here.

When we have conditions to trap “exceptions,” even without any mocks to external systems, where we define what “pass” looks like and what “fail” looks like, the more broadly we can define those conditions, the more likely we are to find interesting things to investigate. We can let humans do the interesting work of investigation and review, so their senses and observations are not dulled by the massive amount of mundane, uninteresting information that would otherwise flow past us.

This is another level of AI.
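
Read as code, the idea is a classifier over observed results: pass the mundane automatically, fail the clearly broken, and queue only the interesting leftovers for a human. The fields and thresholds here are made up for illustration:

```python
def classify(result):
    """Broadly defined pass/fail, plus a bucket for humans to investigate."""
    if result["status_code"] == 200 and result["latency_ms"] < 500:
        return "pass"           # mundane: nobody needs to look at this
    if result["status_code"] >= 500:
        return "fail"           # clearly broken: report it automatically
    return "investigate"        # odd but not broken: route it to a person

observed = [
    {"status_code": 200, "latency_ms": 120},
    {"status_code": 200, "latency_ms": 4900},   # a slow success: interesting!
    {"status_code": 503, "latency_ms": 30},
]

for result in observed:
    print(classify(result), result)
```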

With these things stated, then, what does the future of AI in testing really hold? That, I'm afraid, I must leave to Doc Emmett Brown of the “Back to the Future” movies: “The future hasn't been written yet! It can be whatever we make it!”

We can free the humans to do creative work in testing, in investigation and understanding conditions and causes we have not yet accounted for.

This level of deep introspection is extremely hard, non-linear and ill-defined within most human brains. Teaching software to do it will be a challenge. Until that happens, we are safe from AI, and all those horrific dystopian ends will not happen.

Or at least, not because of AI.


Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance and the American Society for Quality (ASQ) and an active participant in software meetups and frequent conference speaker.

The post AI in Testing: The Future is Here appeared first on Gurock Quality Hub.


This is a guest post by Pete Walen.

Many organizations want the benefits of automated testing. They believe that the “next step” in testing is to build automated versions of what they manually run. This seems logical, but is it going to give them the improvements they want or need?

The Challenge of Automating Manual Tests

Recently I was approached by three different people with very similar problems. All of them work in organizations that are trying to “go Agile” and for their leadership, this means “Automate All the Things.”

When I talked with them at a local coffee shop, they each described very similar problems, with causes that also seemed very similar. Some arose from how the manual test cases were developed. Some were the result of taking “functional” tests from when features were developed or modified and moving them, as written, into the “regression suite,” where they joined other tests with similar origins. Others were simple “happy path” confirmation scripts, intended to make sure some piece of functionality continued to run.

Some tests seemed to be inherited legacy scripts whose purpose was no longer recalled and which the people working with the software did not understand. All three people described almost exactly the same problem: loads of manual tests they were expected to run on a regular basis (weekly, monthly, per release), and not enough time to run them and still do good testing of the new development on which they were “supposed” to be spending much of their time.

One of these, a friend who is a fairly new test lead, complained that she barely had time to get her team to run through the tests on a single OS/platform combination, let alone the plethora they were supposed to support. Her “solution” was to run the suite of tests on a single OS/platform combination each time, and the next time run them on a different one.

I was not sure this was doing what she hoped it would do. I was gratified that she was certain it was not, but it was “the best her team could do” given the constraints they had to work with.

The obvious solution was “automate the tests so they can be run more efficiently.” This seems like a reasonable approach, and I said so to each of them. And then I made a clarifying statement, “As long as you have some idea what the purpose of the test is. What do you expect this test/script to tell you?”

Many organizations have huge test suites, functional, regression, performance, whatever, and will boast about how many test cases or suites they have and run on a regular basis.

Automating those suites can be very problematic. The crux of the issue is “Why are we running this test?” We can have a general idea about what we want to check on a regular basis. We can also be focused on the core functions of the software working under specific conditions. Identifying those conditions takes time and effort. Simply falling back on “this is how we always do it” may seem reasonable at first, but to me, this is often a clue to underlying problems.


Evaluating Tests – Why does this test exist?

When I'm looking at how to test something, whether it is to test new functionality, verify that existing functionality has not been impacted by recent changes, or look at the performance or security of the software, the core question I try to address in each test is “What can this test tell me (or the product team) that other tests cannot or are not likely to tell me?”

Many times, I’ve seen loads of tests added to test suite repositories that are simply clones of scenarios that already exist in the repository. These tests are calling out to be reviewed, refined, trimmed and perhaps pruned to keep the full test suite repository as relevant and as atomic as possible.

Having redundant tests for variations in implementation environment, platform or OS may seem thorough, but are people really using them as intended? Having worked in environments where that seemed reasonable, we very quickly abandoned it, because changing one process or script task point usually resulted in updating multiple scripts that were effectively identical.

This sort of made sense in the late 1980s and early 1990s (when I was first wrestling with this). Then, the hack was to track each script's execution and result in a spreadsheet with each possible configuration listed. Now there are much better options (ahem, e.g., Ranorex).
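
Today, rather than cloning a near-identical script per configuration, one parameterized test can run across the whole matrix; here's a pytest-flavored sketch, in which Session is an invented stand-in for whatever driver your tool actually provides:

```python
import pytest

class Session:
    """Stand-in for a real UI/app driver (hypothetical)."""
    def __init__(self, platform, browser):
        self.platform, self.browser, self._ok = platform, browser, False
    def login(self, user, password):
        self._ok = bool(user and password)   # stub: a real driver would hit the UI
    def is_logged_in(self):
        return self._ok

CONFIGS = [
    ("windows", "chrome"),
    ("windows", "firefox"),
    ("macos", "safari"),
    ("linux", "chrome"),
]

@pytest.mark.parametrize("platform,browser", CONFIGS)
def test_login_flow(platform, browser):
    # One script, many environments: update it once and every configuration
    # picks up the change, instead of editing four near-identical copies.
    session = Session(platform, browser)
    session.login("user", "secret")
    assert session.is_logged_in()
```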

The interesting thing is that those types of tests, redundant ones intended to be exercised on multiple environments, at least exist with some level of understanding of what they are for. Oftentimes, others are run simply because they are in the list of scripts to be run.

What About the “Automate Everything” Approach?

My concern with the “automate everything” idea, at least when it comes to regression tests, is that the same level of thoughtless conformance will exist among the people writing the code to do the automation. No one will ask why these tests are needed. No one will ask what the differences between the tests are. No one will understand what it is they are “automating.” Finally, very probably (at least in the instances where I've seen the “automate everything” model implemented), no one will ask any of these questions for a long, long time after the “automated testing” is implemented.

When looking at functional testing, testing being done to exercise new code or modified code, I’ve seen many instances where the people doing the work have an attitude approaching “check the box.” They write a quick test in the platform of choice without considering what it is they are doing, or why. Many will look for a simple happy path to be able to automate, without looking at potential areas of concern. When asked, the response I’ve seen tends to focus on “main functionality” and not “edge cases.”

Yet odd behavior is not often found on the simple, “happy” path. In my experience, exercising the application with an attitude of “what happens if this should occur…” leads to unusual results. Sometimes they have value to reproduce and add to the manual or automated test suites. Sometimes they do not. It is in these cases, which demand consideration of how to create the scenario, set up the environment and define the sequence of events, that the greatest value is found.

Recommendations

To make any form of testing meaningful and valuable to the organization, thoughtful consideration is needed. Favor the tests that would provide the greatest amount and value of information. The significance of the information produced can, and should, help drive the decisions around exercising the tests, let alone taking the time to write the automated scripts that carry the tests forward.

Once created, test suites must be reviewed periodically to make sure the tests present are still relevant and fulfill the purpose they were intended to serve.

Do I think automated testing is important? Yes. Absolutely. I find it invaluable in many, many scenarios. Using the right tool for the purpose at hand is terribly important for you to be able to have any level of confidence in the results.

Make sure your automation tests make sense. Make sure you are using automated testing for those things that the tools at hand are capable of testing and giving reliable results.


Pete Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance and the American Society for Quality (ASQ) and an active participant in software meetups and frequent conference speaker.

The post Moving from Manual to Automated Tests appeared first on Gurock Quality Hub.


Johanna Rothman, known as the “Pragmatic Manager,” provides frank advice for your tough problems. She helps leaders and teams see problems, resolve risks, and manage their product development. Johanna is the author of fourteen books and hundreds of articles. Find the Pragmatic Manager, a monthly email newsletter, and her blogs at jrothman.com and createadaptablelife.com.

To read Johanna’s articles published on the Gurock Quality Hub, click on any of the links below:

Title Publication Date
Understand Your Geographically Distributed Agile Team 2019/06/11
Discover Long Feedback Loops 2019/05/14
Testing is Not a Shared Service: How Testers Can Help… 2019/03/26
Test Manager’s Role in an Agile Organization 2019/02/21
Reframe Test Failure to Learn Early 2019/01/17
Agile Tester as Servant Leader 2018/11/29
Leading Your Agile Team Regardless of Your Role 2018/10/19
Experimenting Your Way to Better Test Automation 2018/09/21
Release Criteria at All Levels: Turtles All the Way Down 2018/08/16
Lead Your Agile Culture Change 2018/06/28
Leading the Agile Culture Change 2018/06/15
MVE and MVP: Defining the Difference 2018/05/02
Using the Best of Agile and Lean for Your Project 2018/03/16
Continual Planning: The Driver of Continuous Testing… 2018/02/28
Feedback & Feedforward for Continuous Improvement 2018/01/24


The post Meet the Author: Johanna Rothman appeared first on Gurock Quality Hub.


This is a guest post by Michael G. Solomon PhD CISSP PMP CISM.

Blockchain technology is one of the most popular new topics in technology circles today. A blockchain solution can offer transparency, security, fault tolerance, auditability and, in some cases, reduced operational costs. It seems that everyone wants to roll out a new blockchain offering. But in spite of the attractive benefits, does it really make sense for your organization?

Blockchain technology is a huge shift in the way organizations design, test and deploy applications and data. Perhaps more than ever before, good design and comprehensive testing must precede the initial deployment of your new blockchain app. The stakes are high to make sure the benefits outweigh the additional upfront effort.

Let’s take a look at what blockchain does well and how you can decide if it would be a good choice for your next project.

What Blockchain Technology Is Good For

The most important decision when initiating a blockchain project is determining whether it’s a good solution for the problems your organization faces. When blockchain is a good fit, it can offer benefits that you just can’t realize with other technologies. On the other hand, if it isn’t a good fit, you can easily waste valuable time and money and end up with an app that doesn’t deliver what was promised. Careful upfront analysis and planning is important to make sure your project is a good fit for blockchain technology and that your organization is ready for the challenge.

A key benefit of blockchain technology is data transparency. Blockchain design relies on the fact that multiple nodes maintain full copies of the data, distributed across the blockchain network. Data that is distributed can be accessed by more users and provide more value. Advanced analysis is most beneficial when analysts have access to current and pertinent data. That differs from many existing technology solutions that build data isolation into their design. Isolating data and controlling access to it generally makes it easier to ensure data integrity and confidentiality, but transparency is sacrificed in the process. In traditional systems, data is often available to a limited number of users. Blockchains can provide integrity and confidentiality, as well as maintain data transparency.

Blockchains use agreed-upon rules, called consensus protocols, to provide data integrity. Public blockchains tend to use more intensive consensus algorithms, like Bitcoin and Ethereum's Proof of Work (PoW) protocol, to provide agreement among untrusted nodes. Blockchains can also use encryption to provide confidentiality. Private and hybrid blockchains, such as Hyperledger Fabric, have central governance that makes encryption key management easier than in public blockchain environments.

Another attractive benefit of blockchain technology is the fact that once a block is added to the blockchain, it cannot change. (Actually, data in blocks can change, but any changes to existing blocks break the cryptographic links between blocks. A broken link makes it immediately obvious to all nodes that one copy of the blockchain is now invalid. Since making an unauthorized change would be immediately detected, there is no good reason to do so.) Trusting that each block's original state is maintained makes auditing and investigations easier and more reliable. If trusting historical data is important, a blockchain may be a good option.
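
That tamper-evidence property is easy to demonstrate: each block stores the hash of its predecessor, so editing any historical block breaks every link after it. A minimal sketch with Python's hashlib (not any particular blockchain's format):

```python
import hashlib
import json

def block_hash(block):
    """Hash the block's contents, including the link to its predecessor."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev_hash": prev_hash}

# Build a three-block chain.
genesis = make_block("genesis", "0" * 64)
b1 = make_block({"from": "alice", "to": "bob", "amount": 5}, block_hash(genesis))
b2 = make_block({"from": "bob", "to": "carol", "amount": 2}, block_hash(b1))
chain = [genesis, b1, b2]

def verify(chain):
    """Every node can recompute the links; any edit is immediately obvious."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(verify(chain))                 # True
chain[1]["data"]["amount"] = 500     # tamper with history...
print(verify(chain))                 # False: the broken link exposes the change
```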

Examples of Good Blockchain Use Cases

One of the best ways to learn about places where blockchain technology fits well is to examine successful use cases. Take a look at how organizations have successfully implemented blockchain and see if you notice recurring themes. If you want to sample existing enterprise blockchain use cases, check out IBM and Oracle. They each provide enterprise blockchain solutions and provide a view into how blockchain is being used today.

You’ll find lots of different types of use cases, but three tend to emerge over and over. One of these three use cases might be a good place for your organization to start exploring blockchain.

Carrying out financial transactions

Blockchain technology was designed from the very beginning to support financial transactions. It supports cryptocurrency well, but it does far more than that. Any type of financial transaction can benefit from a blockchain. The technology can not only manage the transfer of digital assets, but also enforce strict rules to ensure that all parties in a transaction behave properly.

The first generation of blockchains, such as Bitcoin, basically only recorded transactions. The second generation, starting with Ethereum, added smart contracts. Smart contracts are auto-executing computer code that must run on every blockchain network node to govern access to the blockchain. Every node must run smart contract code to complete transactions, and all smart contracts are guaranteed to produce the same result on all nodes. Smart contracts make it possible to write programs that allow users to enter into complex financial transactions without requiring any third party to mediate.

The third generation of blockchains, such as Hyperledger Fabric and Ethereum Enterprise, add scalability and support for enterprise infrastructure. This newest blockchain wave allows developers to create applications to conduct commerce at enterprise scale. The ability to automate transactions and remove middlemen gives enterprises the ability to open markets to individuals that may not have been able to access them in the past. Blockchain opens up a whole new world of opportunities.

Handling supply-chain movement

Another area in which blockchain shines is in its ability to track the transfer of digital assets through a series of owners. Almost all the products you and I buy pass through multiple hands before we make that final purchase. Blockchain apps can track products from the producer all the way to the consumer.

The process of moving products from producer to consumer is called a supply chain. Blockchain technology supports supply chain apps quite well. Producers can submit their goods to the supply chain, get paid, and even track their goods all the way to the consumer. On the flipside, consumers can trace their products all the way back to the original producer. Do you want to verify that your coffee came from a grower that you support? Blockchain can do that.

Creating a digital identity

A digital identity on a blockchain is a permanent claim that is paired with some real-world entity along with a set of attestations. An attestation is a type of evidence that the claimed identity is valid and legitimate. For humans, this could include biometric attributes (like fingerprints or retina scans). But devices can have identities, too. Smart contracts can enforce many types of rules to govern transactions, and those rules don't all have to be for humans.

Suppose you want to buy frozen mixed vegetables. How do you know that your vegetables stayed frozen all the way from the producer to your store? A blockchain app could require that the freezer truck used to transport your vegetables report its trailer temperature every 15 minutes. If the temperature ever rises above freezing, the smart contract could invalidate the shipment and refuse payment. The truck actually becomes a participant in the supply chain and needs an identity.
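
As a sketch of that rule (real smart contracts would be written in a contract language such as Solidity or Fabric chaincode; the fields here are invented):

```python
FREEZING_C = 0.0

def validate_shipment(readings):
    """Apply the cold-chain rule to the truck's reported (minute, °C) readings."""
    for minute, temp_c in readings:
        if temp_c > FREEZING_C:
            # The contract invalidates the shipment and refuses payment.
            return {"valid": False, "release_payment": False,
                    "reason": f"above freezing at minute {minute}: {temp_c}°C"}
    return {"valid": True, "release_payment": True, "reason": None}

print(validate_shipment([(0, -18.0), (15, -17.5), (30, -16.9)]))   # paid
print(validate_shipment([(0, -18.0), (15, 2.1)]))                  # refused
```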

Deciding if Blockchain Makes Sense

Now that you’ve learned a little about what blockchain does well and how others are using blockchain, does it make sense for you? That all depends on what you want to do.

Remember that figuring out if blockchain makes sense for your project is a process, not a simple answer. The first step is to really understand your project and its unique requirements. If you don’t have a handle on the details of your project, deciding whether or not to use blockchain is going to be a lot harder.

A good place to start is a paper published by Karl Wüst and Arthur Gervais, titled “Do you need a Blockchain?” Their paper does a good job of describing aspects of applications that help you decide if blockchain is a good fit for your project. As you read the paper, you answer six questions that will help determine whether your project would be a good candidate for blockchain or should use a more traditional solution.

Each question focuses on a different aspect of how your application needs to store data and support users. In a nutshell, you'll find that if you need to store state data provided by multiple writers without relying on a trusted third party, a blockchain may be a good solution. Further questions about trust and public verifiability help you determine whether a public, private or hybrid blockchain is the better choice.
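
A rough paraphrase of that decision flow as code; this is a simplification from memory of the Wüst and Gervais flowchart, so read the paper itself for the exact questions and branches:

```python
def blockchain_recommendation(
    need_to_store_state,           # does the app store state at all?
    multiple_writers,              # does more than one party write that state?
    can_use_trusted_third_party,   # is an always-available trusted party acceptable?
    all_writers_known,
    all_writers_trusted,
    public_verifiability_required,
):
    """Rough paraphrase of Wüst & Gervais, 'Do you need a Blockchain?'"""
    if not need_to_store_state or not multiple_writers:
        return "no blockchain needed"
    if can_use_trusted_third_party:
        return "no blockchain needed"
    if not all_writers_known:
        return "public (permissionless) blockchain"
    if all_writers_trusted:
        return "no blockchain needed"
    return ("public permissioned blockchain" if public_verifiability_required
            else "private permissioned blockchain")

print(blockchain_recommendation(True, True, False, True, False, False))
# -> private permissioned blockchain
```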

Whether to use blockchain technology is an important decision for any development effort. That decision changes the way your organization designs and develops the application, and it has a monumental impact on how you test. Since you can’t change anything once you put it on the blockchain, including smart contract code, testing becomes crucial to success.

Take the time to really determine if you need blockchain. If you find that blockchain technology is a good fit and you’re prepared to roll up your sleeves and develop your first app, you’ll open the door to a new technology that can provide amazing benefits.



Article written by Michael Solomon PhD CISSP PMP CISM, Professor of Information Systems Security and Information Technology at University of the Cumberlands.

The post Does Blockchain Make Sense for Me? appeared first on Gurock Quality Hub.


This is a guest post by Jim Holmes.

Years ago Mike Cohn coined the phrase Test Automation Pyramid to describe how teams should view mixing the various types of automated testing. Having a solid grasp of working with different automated test types is critical to help you keep your overall test automation suite lean, low-maintenance, and high-value.

Automated Test Types

While there are many types of automated tests, I’m focusing this article on three specific kinds:

  • Unit Tests: Tests which focus on a specific part of one method or class, such as exercising boundaries in an algorithm, or specific cases for validations. Unit tests never cross service boundaries. As such, they’re generally short, concise, and blisteringly fast.
  • Integration/Service/API Tests: These tests explicitly hit flows across service boundaries. As such, they invoke web services and/or calls to the database. Because they cross service boundaries, they're generally much slower than unit tests. Integration tests should never spin up or utilize the user interface.
  • Functional/User Interface Tests: UI tests drive the application’s front end. They either spawn the application itself (desktop, mobile, etc.) or launch a browser and navigate the web site’s pages. These tests focus on user actions and workflows.
Understanding a Good Mix

I love Mike Cohn's Test Pyramid as a metaphor reminding me that different aspects of the system are best covered by different types of tests. Some people take exception to its makeup, as they don't like the notion of being forced to a certain ratio of tests; however, I view the Pyramid as a good starting point for a conversation about how tests for the system at hand should be automated.

For example, validating a computation for multiple input values should NOT be handled via numerous UI tests. Instead, that's best handled by unit tests. If the computation is run server-side, then a tool like NUnit might be used to write and run the tests. If those computations are in the browser as part of some custom JavaScript code, then a tool such as Jasmine or Mocha could handle those.

Integration tests, also oft-referred to as service or API tests, should handle validations of major system operations. This might include basic CRUD (Create, Retrieve, Update, Delete) actions, checking proper security of web service endpoints, or proper error handling when incorrect inputs are sent to a service call. It’s common for these tests to run in the same framework/toolset as unit tests–NUnit, JUnit, etc. Frequently, additional frameworks are leveraged to help handle some of the ceremonies around invoking web services, authentication, etc.

User Interface testing should validate major user workflows and should, wherever possible, avoid testing functionality best handled at the unit or integration test level. User interface automation can be slow to write, slow to run, and the most brittle of the three types I discuss here.

With all this in mind, the notion of the Test Automation Pyramid helps us visualize an approach: Unit tests, the base of the pyramid, should generally make up the largest part of the mix. Integration tests form the middle of the pyramid and should be quite fewer in number than unit tests. The top of the pyramid should be a relatively limited number of carefully chosen UI tests.

It’s important to emphasize that there is NO ACROSS-THE-BOARD IDEAL MIX of tests. Some projects may end up with 70% unit tests, 25% integration, and 5% UI tests. Other projects will have completely different mixes. The critical thing to understand is that the test mix is arrived at thoughtfully and driven by the team’s needs — not driven by some bogus “best practice” metric.

It's also critical to emphasize that the overall mix of test types evolves as the project completes features. Some features may require more unit tests than UI tests. Other features may require a higher mix of UI tests and very few integration tests. Good test coverage is a matter of continually discussing what types of testing are needed for the work at hand, and understanding what's already covered elsewhere.


A Practical Example

Now that the stage is set, let's walk through a practical example, in this case one drawn from a real-world project I worked on a few years ago. The project is a system for managing configurations of product lines. The notion is that a manufacturer needs to create a matrix of specific models to build, and the various configurations of those particular models.

In this example, I’ll use a line of several refrigerator models, each with various options and configurations. The matrix needs to be loaded from storage, various calculations run as part of the product owner’s inputs, then the results saved back to storage.

A rough example of the UI might look similar to an Excel spreadsheet:

Sample Product Management UI

Business-critical rules for the app might include totaling up configuration selections to make sure that subtotals match the overall “Total Units Produced”, and highlighting the cell in red when subtotals don't match, either too many or too few. Other business-critical rules might include ensuring only authorized users may load or save the matrix.

With all this in mind, a practical distribution might look something like this:

Unit Tests (only one model shown)

Model A: 100 Total Units
  •  Standard Icemaker 67, Awesome 33, Subtotal == 100 no error (inner boundary)
  •  Standard Icemaker 68, Awesome 33, Subtotal == 101 Error (outer boundary)
  •  Standard Icemaker 67, Awesome 34, Subtotal == 101 Error (outer boundary)
  •  Standard drawers 80, Deluxe 20, Subtotal == 100 no error (inner boundary)
  •  Standard drawers 81, Deluxe 20, Subtotal == 101 Error (outer boundary)
  •  Standard drawers 80, Deluxe 21, Subtotal == 101 Error (outer boundary)
  •  Standard door tray 35, Bonus door tray 65, Subtotal == 100 no error (inner boundary)
  •  Standard door tray 36, Bonus door tray 65, Subtotal == 101 Error (outer boundary) 
  •  Standard door tray 35, Bonus door tray 66, Subtotal == 101 Error (outer boundary)
  • In-door screen sub-configuration, 90 units
    • Wifi units 80, no wifi 10, subtotal == 90 no error (inner boundary)
    • Wifi units 81, no wifi 10, subtotal == 91 Error (outer boundary)
    • Wifi units 80, no wifi 11, subtotal == 91 Error (outer boundary)
    • USB 70, no USB 20, subtotal == 90 no error (inner boundary)
    • USB 71, no USB 20, subtotal == 91 Error (outer boundary)
    • USB 70, no USB 21, subtotal == 91 Error (outer boundary)
  • Screen 90 units, no screen 10 units, subtotal == 100 no error (inner boundary)
  • In-door screen 90, no screen 11 units, subtotal == 101 Error (outer boundary)
  • In-door screen 91, no screen 10 units, subtotal == 101 Error (outer boundary)
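
A few of these boundary cases, written as a parameterized unit test. The article mentions tools like NUnit; this sketch uses pytest instead, and validate_subtotal is a hypothetical stand-in for the app's real validation routine:

```python
import pytest

def validate_subtotal(subtotal, total_units):
    """Hypothetical stand-in for the model's subtotal rule."""
    return subtotal == total_units   # False (error) when over or under

@pytest.mark.parametrize("standard,awesome,expect_ok", [
    (67, 33, True),    # subtotal == 100, no error (inner boundary)
    (68, 33, False),   # subtotal == 101, error (outer boundary)
    (67, 34, False),   # subtotal == 101, error (outer boundary)
])
def test_standard_icemaker_boundaries(standard, awesome, expect_ok):
    assert validate_subtotal(standard + awesome, 100) is expect_ok
```
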
Integration Tests

The test run setup loads a baseline grid with valid values, then executes tests by invoking web services as an authorized and an unauthorized user. Include basic CRUD operations (Create, Retrieve, Update, Delete).

Basic Create test
  • Invoke “save” service call as an unauthorized user; the web service returns the expected HTTP error code (e.g., HTTP 403)
  • Invoke “save” service call as an authorized user; check the database and validate that the updated JSON was indeed saved
Basic Retrieve test
  • Invoke “load” service call as an unauthorized user; the web service returns the expected HTTP error code (e.g., HTTP 403)
  • Invoke “load” service call as an authorized user; the web service returns the expected JSON based on the baseline dataset
Basic Update test, using the baseline dataset with a few updated values
  • Invoke “save” or “update” service call as an unauthorized user; the web service returns the expected HTTP error code (e.g., HTTP 403)
  • Invoke “save” or “update” service call as an authorized user; check the database and validate that the updated JSON was indeed saved
Basic Delete test
  • Invoke “delete” service call as an unauthorized user; the web service returns the expected HTTP error code (e.g., HTTP 403)
  • Invoke “delete” service call as an authorized user; check the database to ensure the matrix/configuration was properly deleted
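
As a concrete sketch, here is the Basic Retrieve pair with Python's requests library; the URL, tokens, and response fields are placeholders, not the project's real API:

```python
import requests

BASE = "https://app.example.com/api"             # hypothetical service URL
AUTHORIZED = {"Authorization": "Bearer <valid-token>"}
UNAUTHORIZED = {"Authorization": "Bearer <expired-token>"}

def test_load_rejects_unauthorized_user():
    r = requests.get(f"{BASE}/matrix/modelA", headers=UNAUTHORIZED, timeout=10)
    assert r.status_code == 403                  # expected HTTP error code

def test_load_returns_baseline_for_authorized_user():
    r = requests.get(f"{BASE}/matrix/modelA", headers=AUTHORIZED, timeout=10)
    assert r.status_code == 200
    body = r.json()                              # expected JSON from the baseline dataset
    assert body["model"] == "A"
    assert body["total_units"] == 100
```
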
User Interface Tests

Check major operations, such as:

  • Log on as an authorized user, ensure default data loads
  • Edit one cell, save grid, check the database to ensure data was properly updated
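
And the first UI check sketched with Selenium's Python bindings; every locator and URL here is invented, since they depend on the real page:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_authorized_login_loads_default_data():
    driver = webdriver.Chrome()
    try:
        driver.get("https://app.example.com/login")     # hypothetical URL
        driver.find_element(By.ID, "username").send_keys("product.owner")
        driver.find_element(By.ID, "password").send_keys("not-a-real-password")
        driver.find_element(By.ID, "log-on").click()
        # Ensure the default matrix loaded: the grid shows Model A's total.
        cell = driver.find_element(By.CSS_SELECTOR, "#grid .model-a .total-units")
        assert cell.text == "100"
    finally:
        driver.quit()
```
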
Final Thoughts

My test list is frankly sketchy. I’ve had to use a somewhat contrived example, and I’ve left off plenty of test cases I might consider automating based on discussions with the team. I know some readers might have objections to the particular mix, and that’s OK — as long as you’re developing your own ideas of the sorts of test coverage you’d like to see!

Point being, use these examples as a starting point for evaluating how you’re mixing up your own automated testing. Work hard to push appropriate testing to the appropriate style of tests, and by all means, focus on keeping all your tests maintainable and high-value!



Jim is an Executive Consultant at Pillar Technology where he works with organizations trying to improve their software delivery process. He’s also the owner/principal of Guidepost Systems which lets him engage directly with struggling organizations. He has been in various corners of the IT world since joining the US Air Force in 1982. He’s spent time in LAN/WAN and server management roles in addition to many years helping teams and customers deliver great systems. Jim has worked with organizations ranging from startups to Fortune 10 companies to improve their delivery processes and ship better value to their customers. When not at work you might find Jim in the kitchen with a glass of wine, playing Xbox, hiking with his family, or banished to the garage while trying to practice his guitar.

The post Practical Test Coverage appeared first on Gurock Quality Hub.


This is a guest post by Nishi Grover Garg.

Agile focuses on motivated individuals acting together toward a common goal. Consequently, agile needs people to collaborate and requires complete transparency, communication, and cooperation, within and across teams. But at the same time, individuals instinctively try to outperform others in order to stand out in their teams.

This transition from individual responsibility to collective ownership is often the hardest part of the cultural shift that teams face when adopting agile.

Let’s look at ways to encourage healthy competition, more cooperation, and a sense of community among agile teammates.

Show People the Part They Played

Many people may be involved in working on a single feature or part of a sprint. The overall success of that feature will be defined by the sum of all their efforts. But if there is a key owner or contributor who laid the groundwork or did the initial research or design that led to its overall better quality, you should highlight them in front of the team and say a few words of appreciation in the sprint review or retrospective.

Find a way to show people that their individuality is respected even though the success belongs to the entire team. Focus on the overall picture but highlight the business side of the benefits of their efforts, too.

Have Coworkers Appreciate Each Other

Peers’ approval and appreciation matters to all of us. During a sprint roundup or a daily standup, have people tell their stories about who helped them the most and who they feel contributed to different areas in an outstanding manner. Ask developers and testers to name each other for their remarkable contributions.

You should strive to make this as non-political and casual as possible. Employees are not being judged, just being appreciated for their accomplishments. The ones who put in extra effort should get recognized so that they do not feel lost in the crowd. This also ensures that average performers do not hide behind the overall success of the team, but instead get motivated to do better in order to stand out to their peers and managers.


Measure Personal Growth

When measuring success, it is frequently said that the team’s success defines their individual performance. As long as individuals are contributing in their own ways, their role or position shouldn’t matter. However, many employees find it difficult to get used to that way of thinking.

Many managers find it harder, too, owing to the fact that agile does not foster relying on numbers, hours or other easily trackable metrics to measure an individual’s performance. In fact, the whole idea of measuring individual performance is against the spirit of agile.

What we can watch for is a person’s individual growth over time and their efforts at continuous improvement. How are they helping other teammates in their tasks and assisting them to learn and grow? Have they bettered themselves over the last period, and are they motivated to continue to do so?

To compete with others, the first competition must be with yourself. And when others see you improving and excelling, it creates a healthy environment for them to try the same. Encourage people to take up new courses and training, participate in community events, and engage in other areas apart from their daily tasks. Give them a platform to grow, and then see who takes the most advantage of these avenues.

Offer Extra Initiatives

Our teams are always in need of new education in the form of technology, tools or research for upcoming projects. We can have a list of such initiatives and make them up for grabs by anyone who feels motivated or has some extra time or personal knowledge in that area.

The ones who take these initiatives can be given some grace in their regular tasks and some guidance on how to go about them. Later, they can present their learnings and findings to the entire team, giving them a new platform to showcase their skills and see the benefit of all their extra effort.

This helps the entire team by encouraging the sharing of knowledge, leading to better collaboration, and fostering healthy competition among teammates to try out new things.

Collaboration and Competition

It is a human tendency to try to be better than others in order to make ourselves stand out. We need to attempt to overcome this tendency to achieve the core cooperative nature of agile. However, although we want people to act in collaboration in agile teams, we still want to foster some sense of competition. This balance between collaboration and competition keeps our agile teams engaged, motivated and continuously learning.


Nishi is a corporate trainer, an agile enthusiast and a tester at heart! With 11+ years of industry experience, she currently works with Tyto software as an Evangelist and Trainings Head. She is passionate about training, organizing testing community events and meetups, and has been a speaker at numerous testing events and conferences. Check out her blog where she writes about the latest topics in Agile and Testing domains.

The post Co-opetition Among Agile Teammates appeared first on Gurock Quality Hub.
