Cross browser testing can turn out to be stressful and time consuming if performed manually. Imagine the amount of manual effort required to test an application on multiple browsers and versions. In fact, a significant share of test estimation effort is accounted for by checking the application under test for compatibility across multiple browsers.
The problem is not restricted to manual effort alone; it also demands a proper test bed comprising various platforms, multiple browsers and their versions.
Working in an agile environment does not give testers the leverage to perform cross browser testing manually on multiple browsers. With limited time, you end up picking only a few browsers and leaving the rest untested, thereby risking your application under test.
In such scenarios, you start automation testing from scratch, which helps you test your application on multiple browsers with minimal effort and time. You may already be acquainted with cross browser testing through automation tools, but have you ever explored the idea of performing browser compatibility testing in parallel, using Selenium with TestNG or any other framework of your choice?
So First and Foremost, What Is Parallel Testing & How Does It Benefit Us?
Parallel testing in Selenium helps us perform automated cross browser testing across numerous browsers and their corresponding versions at the same time with the help of an automation tool. It helps provide better coverage in the shortest span of time without sacrificing quality. Parallel testing in Selenium can be achieved via different automation tools.
In this article, I will show you how to perform parallel testing in Selenium with TestNG.
There are two ways of achieving parallel testing in Selenium with TestNG. One is setting the ''parallel'' attribute in the testng.xml file for the desired tests or methods, while the other is setting the parallel attribute to true on your DataProvider annotated method.
I will be showcasing the latter in this article. In order to run parallel tests in Selenium with TestNG, we need multiple browsers of different versions set up on multiple platforms, and a way to access them. This can be achieved with Selenium Grid, using which you can run multiple tests simultaneously over numerous browsers, operating systems, versions and resolutions.
Parallel testing in Selenium Grid with TestNG helps you run your tests on multiple browsers, operating systems and machines at the same time. It uses a hub-node concept: tests are triggered from a single machine called the hub, while execution is performed by the machines connected to it, known as nodes, which have different platforms and browsers configured on them. There can be multiple nodes, but only one hub is present in the grid setup. Parallel testing in Selenium Grid with TestNG helps reduce execution time drastically.
In order to design scripts on a hub-node based architecture we need to be aware of two important concepts:
  • DesiredCapabilities
  • RemoteWebDriver
DesiredCapabilities
DesiredCapabilities is a class in the org.openqa.selenium.remote package. It helps set properties of the browser such as the browser name, version and platform. The setCapability() method of the DesiredCapabilities class is one of the important methods used for performing parallel testing in Selenium with TestNG, or any other framework, on different machine configurations. It helps set the device name, platform version and name, the absolute path of the app under test, etc. Other DesiredCapabilities methods include getBrowserName(), setBrowserName(), getVersion(), setVersion(), getPlatform(), setPlatform(), getCapability() etc.
The below code snippet sets the browser, platform and version number for performing parallel testing in Selenium Grid with TestNG:
DesiredCapabilities capability = new DesiredCapabilities();
capability.setPlatform(Platform.WIN8_1);
capability.setBrowserName("firefox");
capability.setVersion("58");

RemoteWebDriver
RemoteWebDriver is an implementation of the WebDriver interface. Other implementations include ChromeDriver, FirefoxDriver, InternetExplorerDriver etc. When running your tests locally you opt for these browser-specific drivers, whereas RemoteWebDriver needs to be configured and can then run your tests on an external machine.
RemoteWebDriver lets you connect to a Selenium server and send requests to it, which in turn drives the browser on that machine. RemoteWebDriver is a client-server combination which can be used whether the development and execution environments run on the same or different machines. It is the best option when performing parallel testing in Selenium Grid with TestNG or any other framework of your choice. The only requirement for RemoteWebDriver to work is that it should point to the URL of the grid.
So if you are using any driver apart from RemoteWebDriver, the communication with the WebDriver is assumed to be local. For example:
WebDriver driver= new FirefoxDriver();
The above statement will launch the Firefox browser on your local machine, but if you are using RemoteWebDriver then you need to mention where the Selenium Grid is located and which browser you intend to use for your test. For example:
WebDriver driver = new RemoteWebDriver(new URL("http://localhost:8080/wd/hub"), DesiredCapabilities.firefox());
The above statement signifies that your Selenium server is running on localhost at port 8080 and the browser that will be instantiated is Firefox. All you have to do is change the URL to point to the machine you wish to run your tests on.
The major difference between any local driver and RemoteWebDriver is that with a local driver the browser opens on your machine and you can view the steps performed by your script, whereas with RemoteWebDriver you cannot watch the browser open and perform the actions.

Problems Of Performing Parallel Testing in Selenium Grid
The major drawback of parallel testing in Selenium Grid is the limited access to the platforms, browsers and versions needed to run your cross browser tests. You only have access to those platforms, browsers and versions on which your nodes are running, and as you connect a larger number of nodes to the hub, the performance of the Selenium Grid degrades drastically.
Added to this is the time and effort spent on the initial setup for distributed testing. Moreover, the cost of setting up those multiple machines is another drawback of opting for parallel testing in Selenium Grid.
To curb these drawbacks, people are now moving to cloud-based platforms to support parallel testing in Selenium Grid. A lot of platforms are available in the market that provide access to many browsers, versions and platforms in one place, with no architecture setup required and no additional cost to be borne for the machine setups.
One such platform is LambdaTest, which helps you run Selenium scripts on a scalable, secure and reliable cloud-based Selenium Grid with access to 2000+ browser and resolution combinations across multiple platforms. It provides detailed access to your tests with support for screenshots, console logs, videos, network logs etc. It makes your automation flow smoother with on-the-go integrations with multiple bug-tracking tools like Jira, Slack, Trello etc., and a detailed dashboard that gives you an overview of the number of tests run, concurrent sessions, minutes consumed, queues etc. With LambdaTest, you can easily perform parallel testing in Selenium Grid with TestNG.
Now let's dig into how to perform parallel testing in Selenium Grid with TestNG on the cloud.
Performing Parallel Testing In Selenium Grid With TestNG On LambdaTest
To guide you through, I have built a Selenium script which verifies whether the LambdaTest homepage opens after the user types 'lambdatest' on Google. The following steps are needed to kickstart the process:
  • A LambdaTest account – you can sign up from here. They have multiple packages based on your needs, which you can take a look at here.
  • LambdaTest Username, access key and URL to connect to.
  • Setup of Selenium jars, TestNG and the platform you opt to write your tests on.
After you sign up and log in to the LambdaTest platform, you can access your username and access key from the settings tab under the profile section. The first time, you may need to generate your access key. Once it has been generated, copy it and keep it stored to use in your scripts.


Below is the code snippet highlighting how parallel testing in Selenium Grid with TestNG runs on multiple browsers and platforms via the cloud.

import org.openqa.selenium.By;
import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.Keys;
import org.openqa.selenium.Platform;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.testng.annotations.AfterTest;
import org.testng.Assert;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

import java.net.MalformedURLException;
import java.net.URL;
import java.util.concurrent.TimeUnit;

public class Cross_browser {

    public static final String username = "sadhvisingh24";
    public static final String auth_key = "r3lUGd0gNBPOV5RCmtxpJp1ZA1VOE1z5I5Gq4LFOaUU9Au7lSw";
    public static final String URL = "@hub.lambdatest.com/wd/hub";
    public RemoteWebDriver driver;

    @Test(dataProvider = "Set_Environment")
    public void login(Platform platform_used, String browser_Name, String browser_Version) {

        DesiredCapabilities capability = new DesiredCapabilities();
        capability.setPlatform(platform_used);
        capability.setBrowserName(browser_Name);
        capability.setVersion(browser_Version);
        capability.setCapability("build", "cross_browser");
        capability.setCapability("name", "cross_browser");
        capability.setCapability("network", true);  // to enable network logs
        capability.setCapability("visual", true);   // to enable screenshots
        capability.setCapability("video", true);    // to enable video
        capability.setCapability("console", true);  // to enable console logs

        try {
            driver = new RemoteWebDriver(new URL("https://" + username + ":" + auth_key + URL), capability);
        } catch (Exception e) {
            System.out.println("Invalid grid URL " + e.getMessage());
        }

        try {
            driver.manage().timeouts().implicitlyWait(30, TimeUnit.SECONDS);
            driver.get("https://www.google.com/");
            driver.findElement(By.xpath("//input[@class='gLFyfgsfi']")).sendKeys("lambdatest", Keys.ENTER);
            driver.findElement(By.xpath("//*[@id='rso']/div[1]/div/div/div/div/div[1]/a")).click();

            String url = driver.getCurrentUrl();
            Assert.assertEquals(url, "https://www.lambdatest.com/");

            System.out.println("I am at the LambdaTest page");
        } catch (Exception e) {
            System.out.println(e.getMessage());
        }
    }

    @DataProvider(name = "Set_Environment", parallel = true)
    public Object[][] getData() {
        Object[][] Browser_Property = new Object[][]{
                {Platform.WIN8, "chrome", "70.0"},
                {Platform.WIN8, "chrome", "71.0"}
        };
        return Browser_Property;
    }

    @AfterTest
    public void tearDown() {
        driver.quit();
    }
}

In the above code I have defined the variables containing the username, access key and the URL to connect to, and have used RemoteWebDriver. A method named 'getData' has been defined with the @DataProvider annotation, where the different platforms, browsers and versions I wish to test my script on have been mentioned.
Note the attribute defined for the @DataProvider annotation, which plays the key role. As mentioned above, here I am setting the attribute parallel to true for performing parallel testing in Selenium with TestNG on the defined browser-platform sets.
Now comes the main @Test annotated method, which takes the parameters returned by the data provider method. In this method the DesiredCapabilities class is used to set the platform, browser name, version and some other attributes provided by LambdaTest, like the build name, name of the test etc. It also helps to set parameters like video, logs, screenshots etc. to true for your tests, in case you need them.

With RemoteWebDriver, you specify the grid host you wish to connect to, which in our case is a combination of our username, access key and hub URL. Then follows the simple validation of the LambdaTest homepage.
Once the script is done, you can run it. As I mentioned, you may not be able to see the actions in a browser, but you can see the status of your tests on the LambdaTest dashboard once the tests finish.
Below is the screenshot of the TestNG report post execution:

Below is the screenshot of the LambdaTest build details, showcasing the test results on the different platforms the tests were executed on. You can play the video to watch the steps performed by your script:

The network tab screenshot displaying the requests sent and their corresponding responses:

The log tab:

Your automation timeline screenshot displaying the number of automation suites run:

The automation analytics screenshot displaying the number of tests passed, failed, time duration etc:
An overview of the dashboard access:
Here I chose to perform parallel testing in Selenium Grid with TestNG on two browsers at once; you can opt for even more at the same time.
The main essence of the whole idea is to save time and effort while getting better test coverage, which parallel testing in Selenium helps us achieve; it is probably the much-needed requirement of the hour, when new browser versions are launched every now and then.
As the user market grows and people reach for endless devices, verifying your application's compatibility with all those platforms and browsers in the least possible time becomes crucial for us as testers. Opting for these cloud-based techniques with the help of automated cross browser testing tools is probably the most fruitful path we can take.

Closing the article with the below quote:
“The bitterness of poor quality remains long after the sweetness of meeting the schedule has been forgotten.” (Karl Wiegers)

I have contributed this same article to LambdaTest on their blog: https://www.lambdatest.com/blog/speed-up-automated-parallel-testing-in-selenium-with-testng/

Author: Sadhvi



Being in the software industry as part of quality assurance, you are always expected to carry a quality stick to ensure quality is maintained to the 'T'. We are always asked to put ourselves into the shoes of the customer and ensure the product/project meets expectations with the highest quality achieved.
But the actual irony lies where all our quality metrics boil down to quantitative numbers and terms like bugs logged, number of test cases written, number of test cases executed, time spent on testing, URLs tested, browsers checked for cross browser testing, defect leakage etc.
We have designed a working system where we are asked to place quality over quantity but are eventually analyzed on a quantitative approach. I believe focusing on a quantitative approach to testing is unfair to your software testing team, and even if we follow the quantitative approach there has to be a systematic way to judge the individual effort put in on the basis of our software testing metrics.


Is It Okay To Justify Our Software Testing Metrics With Numbers Alone?

We need to question ourselves on how to justify our testing approach when every path is visualized quantitatively. This is one of the reasons why the quality of testing has been reducing drastically. A simple example could be when you measure your team's efficiency or efficacy by the number of bugs logged.

The very first approach every team member would take is to find as many bugs as they can, anywhere in the application. I know many would argue: how does it matter, as long as we are finding bugs in the web application? But eventually, this is where the quality of testing comes into place.

Looking at the agile software development approach we work with these days, with shrinking cycles and testing being pushed to the end of each cycle, all we are left with is the seemingly high pressure to test applications in the shortest span available. We perform all that risk-based testing and smoke testing for each and every release to ensure the application provides a seamless experience to its end users.

Driving the team with this numbering system won't help in such crucial situations. It's not the number that counts but the essence of each bug, and the level of disruption it can cause if let through to the customer. So even though we may raise those 'x' number of bugs into our bucket, we may have gone tremendously wrong in delivering a quality product, because of the mindset we laid down while hunting through our application. This is one of the biggest reasons to consider while defining our software testing metrics, where quality should outweigh quantity any time.

Today the approach to maintaining quality is completely different and speaks only in terms of stakeholders' satisfaction. Quality is completely customer driven. Quality equals profits as per stakeholders. The higher the quality, the higher the level of predictability for the software, which means one can take a risk in terms of pricing play in the market.

The stakeholder would know where they stand in terms of stability of the product and how far they can push the product into the market. But this gets lost completely in the way we start working as we kick-start the project: collecting requirements, defining the scope of testing, team coordination and allocation, testing activities etc. We tend to forget our real mission as testers, which is 'the primary objective of building the project', the sole reason behind making the project, which is to solve the problems of end users.
However, the important question should be: are we testing to ensure those problems are resolved? If not, do we provide frequent feedback to our stakeholders to help them gain better insight into the project? It's important to keep questioning ourselves: 'Is this what the customer expects?' or 'Is there a better way to resolve the same problem?' Just taking requirements from the client and building them does not fulfil our job. As we start working on the project we need to sit with our client to understand their expectations from the project and how they visualize its quality aspect.

For example, if your client is more focused on the branding aspect, then even a pixelated logo issue would be high severity for you, if not for the developer. If they are aiming to build a financial application, then maybe UI and UX are a lesser concern compared to the security of their users' data. Here the 'objective' is the key. This is something we need to instill in ourselves as testers. This should be your team's OKR (Objectives and Key Results) rather than those number-driven metrics.

OKR is a popular leadership process which helps individuals, teams and organizations work together to fulfil their goals in one unified direction, by setting up objectives across teams and the organization. Such OKRs help to focus on productivity and drive the company culture.

Quality Is Subjective & May Change From Customer to Customer.

It's important how we lay down our testing efforts in alignment with these objectives. In fact, this helps us drive our decisions of what to fix and what not to. Hence it all comes down to the bottom line: the clearer the picture you have of your stakeholders' views and the mission of their project, the better you can build and prioritize your testing efforts. Analyzing your risk by questioning yourself 'what does your customer want' will help you drive quality. This may sound more like a business analyst's roles and responsibilities, but let's not forget that we as testers need this elementary skill too. Our testing strategy is driven by these very analyses.
So let's focus on writing a sufficient number of test cases that drive your customer and project objectives rather than on writing a large number of test cases with no major crux in them. Highlight the high severity issues rather than just filling your bucket with umpteen minor bugs. Give precedence to the risks that can bring down your customer, not your evaluation matrix.

Quality was and will be the indisputable winner. Quantifying your testing process won't suffice in any way, but of course the question remains unanswered for all those organizations which, somewhere down the line, need a measurement: 'how do we measure quality then?' The main intent of having those metrics was to focus on how to bring those numbers up, down, or keep them at the same level, to achieve quality. But as humans are humans, we take this numbers business more seriously and are driven by it, because those numbers have been marked as our growth evaluation. Hence, it is important to remember what drives us as testers and how to build our evaluation matrix. If you are concerned about addressing browser compatibility testing, then here is an article which would help you evaluate a cross browser compatibility matrix for your testing workflow.

So we know by now that quality testing is better than quantity testing. The way to make sure you step in the right direction is by recruiting the right software testers for your team and by imbibing the concept of quality testing, not quantity testing, in the software testing team that you already have.


How To Judge A Quality Tester Apart From The Others?

Having been in the industry for seven years now, and after mentoring so many budding testing professionals, my whole idea of measuring individuals on a quality basis has always derived from their ability to analyze the business requirements, break them down into smaller chunks and ensure those are built and work as intended.

It's always been the intent of the tester that mattered to me rather than the numbers he/she gives me in terms of test cases or bugs. I have always preferred people who ask questions and understand the meaning of priority over people who 'just test'.

The most common behaviour I have observed in many testers is that they start writing test cases as soon as the story/requirement gets allocated to them. They tend to forget the basic foundation step, which is to sit and analyze the requirements mentioned in the story. They forget to question themselves by putting on the shoes of an end user and walking through the workflows end users may actually follow, figuring out the impacted areas and thinking through all the validations the user may go through during the flow.

I always insist on making a checklist before I begin writing effective test cases; this helps in proper test coverage. Another important aspect is backtracking, be it the requirements or the bug that occurred. This helps ensure requirements are not left out and that one can find the root cause of a bug, which helps reduce the occurrence of such bugs. Good bug reporting and a positive attitude also help in the making of a good tester.

These are some of the quality aspects I usually push among my team. As far as their measurement is required, I put them into the skills section and mark individuals against them rather than against those metrics.

Some of the qualities of good testers include:

  • Understanding business priorities and severity.
  • Ability to dig into the system and think it through.
  • Following the quality processes and, if required, bringing in corrective measures for further improvement.
  • Being a quick and constant learner.
  • Passion for testing.
  • Good communication skills.
  • Analytical ability.
  • Being cooperative and working in unison with other team members.

There could be more highly effective skills for becoming a successful software tester. Due to this metrics marathon, we as testers tend to imbibe some 'not required' or 'bad' qualities, and trust me, these are very common these days, for instance:

  • Performing testing based on assumptions.
  • Reporting bugs without analysis.
  • Poor business analysis skills.
  • Lack of customer insight.
  • Poor communication skills.
  • Incompetence in following processes.
  • Fear of rejection of work or thoughts.

The key is to weed out these qualities and bring out the positive ones in your team. Here is how I encourage my team to bring out the best of their qualities:

  • Conduct seminars on a regular basis to help them become proficient in writing bug reports, and relay how they could test better without falling into pre-assumptions.
  • To improve poor communication skills, I insist they be looped into internal calls with developers. This helps boost their confidence and understanding of the inbound and outbound process flows of the web application, aiding quality for continuous testing and software testing metrics.
  • Fear of rejection often victimizes freshers and young software testers. This can be dealt with easily through cooperative management. I make sure never to criticize them for their mistakes; rather, I suggest how they could test the web application or product better. This way I remove any obstacle that holds my software testers back from finding bugs.
  • To keep spirits high, I conduct award/gift ceremonies on a monthly or quarterly basis to recognize phenomenal effort. Doing so, I have seen testers become more competitive with each other, in a healthy manner of course.
  • I believe shift-left testing helps boost product quality. Incorporating shift-left testing with continuous testing can do wonders in terms of time, resources and money. I encourage young software testers to be a part of shift-left testing too. It helps them understand test scenarios right from the client requirement gathering phase. A good understanding of the SRS document helps them stick to quality in terms of software testing metrics.

Don’t Drift Away From Your Objectives!

I have seen it happen many a time that software testers focus too much on raising their bug count and, as a result, end up drifting away from the objective of the functionality they were meant to test. I am sure you must have experienced the same too!
Well, quality test cases are very critical, and if we don't stick to our objectives and report bugs just to increase the daily bug log count, then we might end up overshadowing the critical, quality test cases.

Think of it as setting up the right OKRs (Objectives and Key Results) for the test department. If you are a QA lead or manager responsible for aligning the testing team with release management, then it becomes very critical to set up the right goals for your test department.
You can mark the above-discussed positive skills as your primary objectives and measure your team on that basis. This helps bring about improvement in your team members and further growth, which will have a direct impact on your project/product.

Our OKRs or evaluation matrix should be built to answer questions like:

  • What do we want and value?
  • What problems do we perceive and how do we recognize them?

With those clearly defined OKRs we can deliver a quality product as a team, rather than comparing and analyzing valueless numbers (metrics) across team members. Having said so, collecting those numbers to improve your quality objective should be the only essence. Believe it or not, numbers drive the psychology of people, hence it's important how we frame and utilize them.

So let's not push quantity to drive quality. Quality should be driven on its own terms and should remain the one and only major aspect of achieving customer satisfaction.

Closing the article with the below quote:
“The principle objective of software testing is to give confidence in the software.” – Anonymous


Author: Sadhvi






As you start with automation, you may come across various approaches, techniques, frameworks and tools you can incorporate in your automation code. Sometimes such versatility leads to greater complexity in the code rather than providing better flexibility or a better means of resolving issues. While writing automation code it's important that we can clearly portray our objective of automation testing and how we are achieving it. Having said so, it's important to write 'clean code' for better maintainability and readability. Writing clean code is not an easy cup of tea either; you need to keep in mind a lot of best practices. The topics below highlight 8 silver linings one should acquire to write better automation code.
1. Naming Convention
This is indeed one of the thumb rules to keep in mind as we move from manual to automation, or in fact while writing code in any programming language. Following proper naming conventions helps in easier understanding and maintenance of code. This applies to variables, methods, classes and packages. For example, your method name should be specific to what it is intended for. A 'Register_User()' method indicates that the method performs user registration. Clearly defined method names add to the easy maintenance and readability of the script. The same extends to variable naming. I have noticed many people naming variables a, b, c etc., or even WebElements as WebElement1, WebElement2 and so on. This gives no clue to anyone reading the variable name as to what it is intended for.
Below is an example showing when naming goes wrong:
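A minimal sketch of this anti-pattern (the locators and test data are placeholders, not from a real application):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class Example {

    WebDriver driver;

    // Nothing in these names tells the reader what the method or the elements are for.
    public void method1() {
        WebElement web1 = driver.findElement(By.id("first-name"));
        WebElement web2 = driver.findElement(By.id("email"));
        WebElement web3 = driver.findElement(By.id("password"));
        web1.sendKeys("John");
        web2.sendKeys("john@example.com");
        web3.sendKeys("Password123");
        driver.findElement(By.id("submit")).click();
    }
}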



The above code shows how 'method1' gives no clue as to what the method actually does. Also, all web elements are named web1, web2 and so on; the reader cannot identify which web element captures which field.
A correct way of representation can be marked as follows for the same above code:
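A minimal sketch of the corrected version, using the same placeholder locators and data:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

public class RegistrationTest {

    WebDriver driver;

    // The method name and the element names now describe exactly what is being done.
    public void Register_User() {
        WebElement firstNameField = driver.findElement(By.id("first-name"));
        WebElement emailField = driver.findElement(By.id("email"));
        WebElement passwordField = driver.findElement(By.id("password"));
        WebElement registerButton = driver.findElement(By.id("submit"));

        firstNameField.sendKeys("John");
        emailField.sendKeys("john@example.com");
        passwordField.sendKeys("Password123");
        registerButton.click();
    }
}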




Here the method name 'Register_User' clearly indicates that the method contains code related to the registration of a user. Similarly, all web elements and variables are given names which relate to the fields they capture.
Using camel casing for method and variable names is usually encouraged, for better clarity in terms of readability and maintainability of the script.

2. The Three R’s-Reduce, Reuse and Recycle
It's important to ensure your methods are broken down into the smallest chunks of user scenarios. They should cover simple, single flows. Do not over-complicate your methods by covering multiple functionalities in a single method. For example, a login feature needs the user to be registered on the application. Keep your register feature in another method and, if required, call that method in your login method. Reducing the complexity of the methods leads to easier maintainability of the code.
Also, reuse your methods wherever required; do not copy-paste the same code into different methods. This leads to unnecessary duplication and redundancy in the code. Increasing the lines of code does not mean you have written good code. Refactoring and optimizing your code is key to writing stable, robust and better automation code.
Recycling is another useful tip for writing better automation code. I have seen people who automate legacy systems avoid changing the existing method in the automation framework, and instead rewrite another method whenever there is a change in existing functionality. This simply makes the framework brittle. Always update the existing methods whenever the flow changes. It has its own challenges, where a new user may not be aware of the dependencies the method may have, but I believe we should always plan for the longer term rather than chasing shorter goals.
Below is an example of how the login code is simplified into a small chunk of functionality, with a separate registration method reused for easier simplification of the whole process.
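A minimal sketch of the idea (locators and data are placeholders); the login method reuses the registration method instead of duplicating its code:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginTest {

    WebDriver driver;

    // One small method, one responsibility: register the user.
    public void Register_User(String email, String password) {
        driver.findElement(By.id("signup-email")).sendKeys(email);
        driver.findElement(By.id("signup-password")).sendKeys(password);
        driver.findElement(By.id("register")).click();
    }

    // The login flow reuses the registration method rather than copy-pasting its code.
    public void Login_User(String email, String password) {
        Register_User(email, password);
        driver.findElement(By.id("login-email")).sendKeys(email);
        driver.findElement(By.id("login-password")).sendKeys(password);
        driver.findElement(By.id("login")).click();
    }
}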



3. Structure Your Tests Well
Well, this is indeed one of the major actionable insights for better automation code. Structured tests are not only easier to understand, they also take less effort to maintain. Structuring your tests with the help of a framework adds value to your work and reduces the maintenance effort in the long run. You can control the flow of your application via the annotations provided by frameworks like JUnit and TestNG. For example, an annotation like @BeforeClass can hold your time-intensive activities, like connecting to the database or setting up the browser, in a single method associated with that annotation. This lets an automation tester know right away what that method does and when it is called. Just imagine: your setup process is clear and sorted out from the other pieces of your code. Similarly, an @AfterClass annotation helps you perform cleanup activities like disconnecting from the database, closing your current browser sessions etc.
Below is an example highlighting a better structuring approach using the TestNG framework:
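A minimal sketch of such a structure, assuming a local Chrome setup and a placeholder URL:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterClass;
import org.testng.annotations.BeforeClass;
import org.testng.annotations.Test;

public class LoginSuite {

    WebDriver driver;

    @BeforeClass
    public void setUp() {
        // Time-intensive setup runs once before any test in this class:
        // starting the browser, connecting to test data sources, etc.
        driver = new ChromeDriver();
        driver.get("https://www.example.com/login"); // placeholder URL
    }

    @Test(priority = 1)
    public void verifyLoginWithValidCredentials() {
        // test steps for a valid login go here
    }

    @Test(priority = 2)
    public void verifyLoginWithInvalidCredentials() {
        // test steps for an invalid login go here
    }

    @AfterClass
    public void tearDown() {
        // Cleanup runs once after all tests in this class have finished.
        driver.quit();
    }
}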






Deciding which annotations should be associated with which test method is important. With clear dependencies and priorities defined, the tests and code can be structured based on the flow of the application.
4. Thorough Validation Of Your Tests
Being a QA you know it is all about validating that your expected and actual results meet; the same stands for your automation code. If your script does not talk in terms of validation, creating it will never make sense nor be of any essence. Ideally, every user action should be validated, just as your test case steps are, whether it is validating the visibility of an element, typography, textual representation, redirection to a page, any kind of visual validation, or even evaluating results from the database.
Even if your validation fails, make sure a failure message is displayed so that one can find out what went wrong. The biggest mistake we make when validating our code is writing it only for the case where the validation passes. We never contemplate what may happen if the code fails or does not behave as expected, and what would be needed to proceed.
If you wish to break the test as soon as your validation fails and jump to the next test, you can use hard assertions, whereas if you wish to validate multiple checks on the same page, you can opt for soft assertions. Which assertion to use depends completely upon the use case.
Below is an example of assertions performed on a login page. Two different methods are created: one where the user logs in with valid credentials, and another ensuring the user is not logged in with invalid credentials and an error message is displayed.
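A minimal sketch along those lines, assuming the driver is initialised in a setup method; the locators, URLs and credentials are placeholders:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.testng.Assert;
import org.testng.annotations.Test;
import org.testng.asserts.SoftAssert;

public class LoginValidationTest {

    WebDriver driver; // assumed to be initialised in a setup method

    // Hard assertion: the test stops at the first failed check.
    @Test
    public void verifyLoginWithValidCredentials() {
        driver.findElement(By.id("email")).sendKeys("valid.user@example.com");
        driver.findElement(By.id("password")).sendKeys("ValidPassword1");
        driver.findElement(By.id("login")).click();
        Assert.assertEquals(driver.getCurrentUrl(), "https://www.example.com/dashboard",
                "User was not redirected to the dashboard after a valid login");
    }

    // Soft assertions: all checks are evaluated before the test is marked failed.
    @Test
    public void verifyLoginWithInvalidCredentials() {
        driver.findElement(By.id("email")).sendKeys("invalid.user@example.com");
        driver.findElement(By.id("password")).sendKeys("WrongPassword");
        driver.findElement(By.id("login")).click();

        SoftAssert softAssert = new SoftAssert();
        softAssert.assertTrue(driver.findElement(By.className("error")).isDisplayed(),
                "Error message is not shown for invalid credentials");
        softAssert.assertEquals(driver.getCurrentUrl(), "https://www.example.com/login",
                "User should stay on the login page after a failed login");
        softAssert.assertAll(); // reports all collected failures together
    }
}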




There could be different approaches to covering your multiple validation checks: either you can make a different method for each validation, like I did above, or you can choose to make all validations in a single method under a try-catch block.

5. Sleep Does Not Improve Stability
The biggest myth we tend to believe, especially when we are new to this automation business, is that providing an ample amount of wait in our script, necessary or unnecessary, will make it execute smoothly. On the contrary, it makes our script flaky and increases the overall execution time. The major problem with this type of static sleep is that we are not aware of the load on the machine on which the tests run, and hence these waits may still lead to timeouts. Therefore Thread.sleep() should be avoided for maintaining better automation code. A better approach is conditional waiting, wherein the script waits, like a human would, until a certain condition is met, for example until a certain element becomes visible.
Explicit and fluent waits are the more adaptable options for developing better automation code.
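A short sketch of the conditional-wait approach (Selenium 3 style constructor; the search-box locator targets Google's search field for illustration):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class ExplicitWaitExample {
    public static void main(String[] args) {
        WebDriver driver = new ChromeDriver();
        driver.get("https://www.google.com/");

        // Explicit wait: poll until the element is visible (or the timeout expires),
        // instead of pausing the whole script with Thread.sleep().
        WebDriverWait wait = new WebDriverWait(driver, 20); // Selenium 3 style constructor
        WebElement searchBox = wait.until(
                ExpectedConditions.visibilityOfElementLocated(By.name("q")));
        searchBox.sendKeys("lambdatest");

        driver.quit();
    }
}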
6. Making Your Tests, Data Driven
Testing becomes more effective when performed across multiple forms of data, and the same is true when writing better automation code for a web application or any other software. In automation, the key is to run your test code through multiple forms of data rather than writing a different test script for each data set. This is easily achieved via a data-driven testing framework. It lets you store the test data input in an external source such as CSV files, Excel files, text files, XML files or even ODBC repositories. This data is called into your scripts and run against the same test code again and again. This reduces redundancy and gives faster execution compared to manual effort. The approach also makes your tests more realistic, as you always have the advantage of changing your test data and running it over and over again on the same test code, thereby helping discover new bugs. Another benefit is a reduction in the number of test scripts you have to add, speeding up your test cycles.
In keeping with this, it also makes the scripts easier to maintain. Any hardcoded values in your code tend to break when the application changes. An easy way to handle this is to make all your hardcoded components variable-driven. For example, all locators can be kept out of the code by storing their respective values in an Excel sheet and calling them in your script. In case any of your locators breaks, one just needs to change the locator value in the Excel sheet and need not touch the script at all.
A basic example of data-driven testing is:
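A minimal sketch, assuming the Apache POI library is on the classpath; the file path and sheet layout are placeholders:

import java.io.FileInputStream;

import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class DataDrivenLoginTest {

    // Reads login credentials from an external Excel sheet (the path and sheet
    // layout are assumptions), so test data can change without touching the code.
    @DataProvider(name = "loginData")
    public Object[][] getLoginData() throws Exception {
        try (FileInputStream file = new FileInputStream("testdata/login_data.xlsx");
             Workbook workbook = WorkbookFactory.create(file)) {
            Sheet sheet = workbook.getSheetAt(0);
            int rows = sheet.getPhysicalNumberOfRows();
            Object[][] data = new Object[rows][2];
            for (int i = 0; i < rows; i++) {
                data[i][0] = sheet.getRow(i).getCell(0).getStringCellValue();
                data[i][1] = sheet.getRow(i).getCell(1).getStringCellValue();
            }
            return data;
        }
    }

    // The same test method runs once for every row supplied by the data provider.
    @Test(dataProvider = "loginData")
    public void login(String email, String password) {
        // login steps using the supplied email and password go here
    }
}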





The above code shows data being pulled from Excel for different login credentials. The same can be extended to XPaths, where the XPath values can also be pulled from Excel. The key point addressed by the data-driven approach is removing hardcoded values from our code, making it variable-oriented, and running the same piece of code across multiple sets of inputs.

7. Don’t Miss Out On Reporting!
Automation code won't do you much good if it does not report the results to you. In order to optimize your work as an automation engineer, it's important to know which test passed and which failed, accompanied by screenshots. The best ROI you can show your stakeholder is via reporting. Sharing detailed reports provides visibility and reduces the time you spend verifying your test execution scripts. You can achieve reporting through various techniques like TestNG HTML report generation, JUnit report generation, or by using the Extent Reports library.
The below code shows an example where, post completion of the login functionality, a screenshot is taken as proof that the validation passed; below that is a sample of the TestNG report generated post-execution:
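A minimal sketch of the screenshot capture, assuming the Apache Commons IO library is on the classpath; the dashboard URL and file path are placeholders:

import java.io.File;

import org.apache.commons.io.FileUtils;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.testng.Assert;
import org.testng.annotations.Test;

public class LoginReportTest {

    WebDriver driver; // assumed to be initialised in a setup method

    @Test
    public void loginAndCaptureEvidence() throws Exception {
        // ... login steps go here ...
        Assert.assertEquals(driver.getCurrentUrl(), "https://www.example.com/dashboard");

        // Capture a screenshot once the validation has passed and save it to disk,
        // so the evidence can be attached to the execution report.
        File source = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
        FileUtils.copyFile(source, new File("screenshots/login_success.png"));
    }
}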



8. Don’t Forget Cross Browser Testing!
All web applications today support multiple browsers and versions. It's important that your code targets multiple browsers rather than being written for one specific browser. Running code on a single browser takes away the cross browser compatibility of your application. To ensure your application offers a seamless user experience across all the major browsers, we can extend our automation to perform cross browser testing. Frameworks like TestNG make it easy to execute a test across various browsers.
Below is the code displaying how to run automation code on multiple browsers via TestNG:
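A minimal sketch of a browser-parameterised test; the parameter value is supplied from the parameter tag in testng.xml, and the login URL is a placeholder:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.testng.annotations.Parameters;
import org.testng.annotations.Test;

public class CrossBrowserLoginTest {

    // The browser name comes from the testng.xml parameter, so the same
    // test method runs against both Chrome and Firefox.
    @Test
    @Parameters("browser")
    public void loginTest(String browser) {
        WebDriver driver;
        if (browser.equalsIgnoreCase("chrome")) {
            driver = new ChromeDriver();
        } else {
            driver = new FirefoxDriver();
        }
        driver.get("https://www.example.com/login"); // placeholder URL
        // login steps go here
        driver.quit();
    }
}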





The above code shows a method which takes the browser as a parameter and sets up the corresponding browser driver. Using the TestNG XML file we pass different browsers as parameters, so the login feature code runs on both Firefox and Chrome.
That was all from my end. I hope these tips will serve as useful, actionable insights for writing better automation code. Feel free to share the tips that helped you deliver better automation code. Cheers!

Closing the article with the below quote from Rod Michael:
"If you automate a mess, you get an automated mess."

Author: Sadhvi Singh


JUnit is an open-source unit testing framework that helps to test units of code. It is mainly used for unit testing Java projects; however, it can be used with Selenium WebDriver to automate testing of web applications. To be precise, JUnit is a unit testing framework for Java that helps write test cases in a more structured and better format. Selenium and JUnit can be used independently of each other, though combining both helps in writing test cases in a more structured way. This article walks through automating an application using JUnit and Selenium with a simple script.
We will be walking through the following sections:

  • Downloading JUnit jars
  • Adding the jars to your Selenium project
  • Incorporating JUnit annotations and methods into your first Selenium script
  • Cloud testing via Selenium and JUnit


Step 1 – Downloading JUnit Jars

JUnit jar files can be downloaded from https://github.com/junit-team/junit4/wiki/Download-and-Install. The major jar files included are:
  • junit.jar
  • hamcrest-core.jar

Download and save these files in your system.


Step 2 – Adding Jars To Your Selenium Project

In order to add the JUnit external jar files to your project, you need to have Eclipse set up and the Selenium jar files downloaded. To install Eclipse you can refer to its official website https://www.eclipse.org/downloads/ and download the version appropriate to your operating system. Post Eclipse setup, you can download the Selenium jar files from the official website https://www.seleniumhq.org/download/. In order to create your Selenium WebDriver scripts you need the language-specific bindings. In our case we are using Java, though Selenium supports multiple languages like C#, Ruby, Python and JavaScript along with Java. Download the Selenium jar files and include them in your Eclipse workspace; now we need to include the downloaded JUnit jar files. Follow the below steps to do so:
  • Right click on your created project and select Properties:




  •  Click on Java build path from the options:





  • Click on the 'Add External JARs' button, add your downloaded JUnit jar files and then click 'OK':
This adds the JUnit jar files to your Selenium project. The major class files/source files that are commonly used from these JUnit jars are:

  • Assertions
  • Annotations
  • Parameterized
  • Ignore
  • TestListeners etc

For a detailed list of the class files or source files, refer to http://www.java2s.com/Code/Jar/j/Downloadjunitjar.htm. Now let's incorporate JUnit into your Selenium project.


Step 3 – Incorporating JUnit Into Your Selenium Script


We will be creating our first simple JUnit Selenium script on https://www.lambdatest.com/.
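The script goes roughly along these lines; this is a minimal sketch in which the sign-up locator and the expected URL fragment are assumptions:

import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.BeforeClass;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SignUpTest {

    static WebDriver driver;

    @BeforeClass
    public static void setUp() {
        // Opens the Chrome browser before any test runs.
        driver = new ChromeDriver();
    }

    @Test
    public void verifyFreeSignUp() {
        driver.get("https://www.lambdatest.com/");
        // The locator below is an assumption for illustration.
        driver.findElement(By.linkText("Free Sign Up")).click();
        // Validate the redirection after clicking the sign-up button.
        Assert.assertTrue(driver.getCurrentUrl().contains("register"));
    }

    @AfterClass
    public static void tearDown() {
        // Closes the browser after all tests have executed.
        driver.quit();
    }
}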





Details:


The above script opens the browser with https://www.lambdatest.com/ and clicks on the 'free sign up' button to register. Post registration, the script checks the URL it is redirected to in order to ensure successful registration. We have used two classes of JUnit: one is the Annotations class and the other is Assertions.

The script consists of three sections:


  • @BeforeClass – This annotation runs the piece of code before any test in the class. As you can see, here we have opened the Chrome browser before performing any action on it. The main actions are performed in the method marked with the @Test annotation.

  • @Test – This test method carries the functionality where the application is opened and the registration process is carried out. To validate the result we have used the assertion class, where we verify the success of the registration process using the current URL. The @Test annotated code runs after the @BeforeClass and @BeforeTest methods and before the @AfterTest and @AfterClass methods.

  • @AfterClass – This annotation tells JUnit to run the piece of code once all the tests have been executed. This annotated method usually carries the process of closing the browser once all the action items have been performed.
Attaching video of the execution of the above script:
                                   
                                         



Testing Cloud Applications Via JUnit And Selenium



Cloud testing means testing cloud-based applications that are hosted on the cloud. Using Selenium along with JUnit to test cloud-hosted web apps becomes the most obvious choice due to its capability to run scripts on multiple browsers and platforms with whichever supported language one chooses. Using JUnit provides a more structured way of writing your tests, which leads to better maintenance, quality and effectiveness compared to not using any framework. Using Maven and Jenkins with Selenium and JUnit makes for a more powerful, smoother, end-to-end automation experience for cloud-based applications, thereby lending robustness to the whole flow.



Closing my article with the below quote from Jim Hazen

“It’s automation, not automagic.”




Author: Sadhvi Singh



As we move towards rapid development cycles and quicker deliveries to market, driven by agile methodology, performing manual testing seems time-consuming, repetitive and prone to human error. The requirement to implement automation testing from scratch fits the business owing to its flexibility: greater coverage of functionality with less time to market, and early discovery of issues compared to manual tests. Having said so, manual testing in itself plays an important role in the software development cycle and cannot be replaced completely with automation testing. But transitioning from manual to automation is the need of the hour. At first, the idea of starting automation testing from scratch may seem intimidating. You may get stormed with questions such as: how to start and where to start from? I am going to highlight some key notes for you to keep in mind as you plan to start automation testing from scratch.
Why We Hesitate While Switching From Manual To Automation Testing?

Automation testing is considered a widely used parameter to overcome manual testing issues and probably to rule them out to the maximum. Performing the transition isn't a piece of cake and may run into multiple blockers along the pathway. A few challenges one would face are:

  • Too Preoccupied With Upcoming Release Activities: We are too busy writing effective manual test cases and devising test strategy, due to back-to-back planned releases, to plan a time window for developing automation test scripts.
  • Complexity, Time and Resources: Any new process comes with time spent learning new areas and methods. If you are starting automation testing from scratch, then training resources and giving them ample time means that, initially, the expectation of higher efficiency from automation may translate into more time consumed and more monetary impact.
  • Test Stability and Scalability: For starting automation from scratch, it is important that the AUT be stable, so that repeated re-work is avoided and maintenance stays easy. If the AUT is not stable it can lead to major re-work and lower efficiency in terms of quality. It's also important to have a scalable automation suite, which otherwise becomes redundant and repetitive as the regression testing suite grows after every release.
  • Automation of Processes: Automation is not only about automating the tests; it is also about having an approach and pathway for reporting, cleaning test data, and setting up and tearing down various environments. Turning a blind eye to those may not return the quality and the metrics we were supposed to achieve from this transition.
  • Corporate Mindset: The organization or senior management can be a difficult key to unlock when it comes to showing the immediate effects of automation on the whole process. Automation acts as a long-term goal, not a short-term one. Persuading management while promising quick benefits of automation can be tricky. Providing a better roadmap can act beneficially.

Why Should You Switch From Manual To Automation Testing?

This is one of the important questions your team must answer. The decision to implement automation testing from scratch should be based on the current issues you face while testing your application, and not merely because your team or you were fascinated by the word automation. Taking the right decision at the right time is important for better quality achievement and ROI. The below factors highlight the key areas where you need automation.

→ Moving from manual to automation testing can help you with these testing types:

  • Regression testing: An ever-increasing regression suite which needs to be executed after every release to ensure no new or older functionality is tampered with.
  • Complex functionalities: Complex calculative areas which are prone to human error.
  • Smoke testing: Running an automation suite for the major functionalities helps assess the quality of a build. Based on the suite results, teams save time by deciding whether the build needs in-depth testing or not.
  • Data-driven testing: Functionalities that need to be tested with multiple sets of data.
  • Cross-browser testing: This is one of the bigger issues that arises when it comes to supporting an application on multiple browsers and versions, or in responsive testing for validating a website's RWD (Responsive Web Design). Running manual tests over and over on multiple browsers takes a lot of effort, time and investment. Automating the application and running those tests on multiple browsers in parallel makes testing quicker, more efficient and less monotonous. A cross browser testing tool such as LambdaTest can help teams ensure their applications are functional and cross browser compatible across the broadest range of browsers, versions and devices.
  • Repetitive tests: Tests that are relatively repetitive and unchanged from one test cycle to another.
  • Performance testing: Manual alternatives do not exist, and tool support is needed.

A very important prerequisite to kick-start automation testing from scratch is to ensure the application under test (AUT) is stable in all terms. An unstable application with too many frequent changes will require a lot of maintenance effort, thereby leading to larger investment and lower ROI. Automation testing may seem fascinating to start with, but figuring out the pain areas that should drive automation for the organization is important. A project in its initial stages may not require automation at all and may rely completely on manual testing.

→ Some key areas where manual testing still prevails over automation testing:

  • Subjective testing: For an application that should be tested from the usability and UI perspective, manual testing is a more viable option than automation.
  • New and frequently changing functionalities: As mentioned above, new and changing functionalities may lead to more effort in automating and maintaining scripts, and may lead to a waste of time.
  • Strategic development: Some modules or functionalities may need a strategic approach to test the various viable business flows a user may opt for. Working through them manually may be more profitable and provide better coverage than automation.

How To Start Your Automation Process?

The very first step to consider while transitioning from manual testing to automation testing is to define a proper scope for automation testing. 100% automation is one of the myths related to automation, so defining its scope is a very important element in distinguishing what to automate and how much to automate.

What To Automate?
The answer to this question lies in the following criteria:
Based on the frequency of testing: If you have frequent releases hitting the market, it is more important to automate your smoke testing and regression testing first, as that helps speed up testing cycles and quickens time to market with less manual intervention.
Business and technical priority: Based on business needs and complexity, testers can split out the functionalities that need automation support first compared to others. Areas with less business priority can be removed from the automation scope.
What can be automated: This depends upon a lot of areas; usability aspects, for example, cannot be automated, and tool dependency can also limit what can be automated. Aspects like an application supporting multiple browsers should be prioritized for automation testing to save time on cross-browser testing.

How To Automate?
One basic fundamental that a team or an organization often overlooks is that not all tests can be automated. Instead of targeting the unrealistic goal of 100% automation for your application under test, set a target for the portion of tests that you wish to automate. If you are new to automation testing, you can start by moving just a few percent of your tests from manual to automation. The key is to start small. Writing smaller test cases will help you maintain and reuse them in future areas of the application you wish to automate. Mapping your test cases to each method or function will help provide better coverage. Labeling your test cases also helps in easier identification, so the team can figure out which ones to automate and which ones not to; this also helps in better reporting. As I said before, do not aim for 100% automation. Rather, when starting automation testing from scratch, it is better to explore new areas of the application via manual means and create a risk plan for what needs to be automated and what does not, based on business priorities. Also, create a list of browsers and devices with the help of web analytics to understand your end-user preferences as you start automation testing from scratch. This helps ensure you are covering your application from a cross browser compatibility point of view as well.
A clear distinction of which areas should remain manual is as important as deciding what should be automated. Keeping these criteria in mind while deciding the scope of automation helps to evaluate automation in the long run and provides better ROI when you plan to start automation testing from scratch.

Acknowledging The Unreliable Areas For Automation Testing

There are a few testing techniques which, if done manually, yield more powerful results compared to automation, or cannot be achieved via automation at all. As I said earlier, assuming everything can be automated is a myth and should not be preached. The following testing techniques are better done manually than via automation:
  • Exploratory testing – In the real world, users intend to explore the application rather than follow the standard streamlined workflows we intend to automate. Exploratory testing cannot be automated, as it tends to follow a free-form process which can only be achieved via the human thought process.
  • User experience testing – Automation tools can't fully capture the emotions, the likelihood of usage, the eye-soothing experience etc. of the application's users. For example, if you need to experience your users' issues, or the areas where they face difficulty in using the application, that can only be achieved through human experience, not via a tool.
  • Accessibility testing – This testing helps to evaluate how accessible your application is for end users. A tool cannot measure the accessibility rate; this can only be achieved through manual testing, by analyzing the experience through the workflow or through application usage.

Selecting The Right Automation Tool
Automation testing is highly tool-dependent. Deciding which tool to use for automating your application depends on multiple factors, such as:
The domain of your application: Tool selection depends largely on whether the application under test is web-based or mobile-based. For a web UI application you can go for tools like Selenium or QTP, while for a mobile application you can go for tools like Appium, Robotium, etc.
Programming Experience: This is driven by the comfort level of the team. Choose from the programming languages your testers are most comfortable with, for example Java, JavaScript, Ruby, C# and many more.


Open Source or Commercial: This factor is governed more by the organizational perspective than by an individual's preference when starting automation testing from scratch, since budget constraints come into play. A few open source tools are Selenium, Appium, etc., while commercial tools include LoadRunner, QTP, etc.

Below is an apt quote on this by Bill Gates:
The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency.

Author: Sadhvi Singh


API testing is often underrated in the fast-paced Agile world. With frequent changes and early time to market, testing an application through its UI is more acclaimed than testing the API from a functional point of view. Since the world these days is more biased towards looks than internal functioning, API testing usually takes a back seat. But API testing not only reduces the churn rate of the software lifecycle, with bugs caught early in the development cycle, it also helps ship the project or product to market faster.

Let's look into how API testing helps reduce churn rate, enables quicker time to market, and how to fit it into an Agile environment.

What is API Testing?

API stands for Application Programming Interface. Let's first understand where the API fits in an application. An application consists of three layers:

Data Layer – This is where data is stored and from where it is retrieved, via the database and file systems.

Logic Layer – This is the brain of the application, processing data between the layers it is sandwiched between. The complete business logic resides here; it processes the data and sends the required information to the presentation layer. This layer is made up of APIs.

Presentation Layer – Also known as the client layer. This layer is visible to the end user and provides access to the application's web pages. It communicates with the logic layer and passes on the information provided by the end user.

Therefore, when people talk about APIs not having a user interface, it is because they live in the logic layer and take care of the functionality of the application. Testing those APIs is known as API testing.


Testing the APIs


API testing is not rocket science. It is equivalent to testing an application, just without a user interface; the approach from a functionality perspective remains the same. For example, if you are testing a sign-up (create user) API, the approach of testing the different parameters (fields) is unchanged. Let's say creating a user requires an email address and a password. As a tester, you would look at mandatory checks on those fields, email format, password complexity, etc. The same checks should be applied while performing API testing, verifying the response (result) for each scenario. API testing can be performed via various tools available in the market like Postman, SoapUI, Swagger, etc. A small code-level sketch of such a check follows below.
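
For readers who prefer scripting such checks over using a tool, here is a minimal, hedged sketch in Java using the standard java.net.http client (Java 11+). The endpoint URL, JSON payload and expected status code are assumptions made purely for illustration; substitute the details of your own sign-up API:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class SignUpApiCheck {

        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Illustrative payload: a sign-up request missing the mandatory password field
            String body = "{\"email\": \"user@example.com\"}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.com/api/users"))   // hypothetical endpoint
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

            // Mandatory-field check: the API should reject the request (e.g. 400 Bad Request)
            // rather than create the user
            if (response.statusCode() == 400) {
                System.out.println("PASS: missing password was rejected -> " + response.body());
            } else {
                System.out.println("FAIL: unexpected status " + response.statusCode());
            }
        }
    }

The scenarios you would exercise through the UI, such as mandatory fields, email format and password complexity, translate one-to-one into request payloads and response assertions like this one.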



Testing the API before the UI is built helps uncover functionality issues earlier, which means they get fixed earlier and fewer bugs are discovered after the UI is built. Once the UI is available, the tester can focus more on the UI/UX of the application, providing faster closure of the testing cycle. This leads to a smaller churn cycle in the SDLC and quicker time to market. Having said that, as a tester it is still important to perform a quick functionality check once the UI has been developed.

Please note: API testing can also help with security testing, performance testing and data volume creation. Since this article deals with fitting API testing into Agile, I will focus on that front; my upcoming articles on API testing will cover security and performance testing via APIs in detail.


Where to fit your API testing in Agile?

Agile is a fast-paced, incremental development methodology where smaller chunks of functionality, in the form of user stories, are built and delivered to the end user in iterations. Because it is fast paced and iterative, the scope of testing as an activity keeps shrinking due to the many issues testers encounter in Agile (refer to my article Agile Testing Challenges and their Remedies), and bringing API testing into the picture becomes even more difficult and painstaking.

A key point in Agile is that development and testing teams work in parallel so that all stories in a sprint are closed and delivered. To encourage a smoother workflow, teams can collaborate so that each story has both backend and frontend tasks defined. As testers, we should create test cases taking into account both the backend (API) and frontend perspective of the application. As the backend task gets ready we can kick-start API testing, and once the UI part is complete we can take over full-fledged testing.

Some projects or organizations have QA test APIs as separate technical tasks. But it is important to remember those tasks will never be counted as part of a deliverable from an end-user perspective, owing to their lack of visibility to the customer. Hence it is a better approach to split your user stories into backend and frontend tasks, so that both API and UI issues can be logged against the same story, which produces better statistics at the end of the sprint for your sign-off and defect analysis reports.

Since API testing takes a certain amount of effort, it is important to ensure no redundant testing is performed once the UI is ready. Because of this overhead, many organizations do not encourage API testing. For example, I have come across testers who test the functionality in depth at the API level and then run the same set of tests all over again once the UI is ready. Of course, this double-checks against any possible bug leakage, but it adds to the testing effort and largely defeats the purpose of having performed API testing in the first place. It is important to ensure the same sets of tests are not repeated at both the API and UI levels; a quick functionality check, though, is recommended once the story is fully ready with the user interface.

As part of user story testing, API testing is encouraged, but as part of the regression cycle, opting out of API testing suits both the testing team and the Agile cadence. Since in-depth testing has already been performed at the API level, regression can be performed at the frontend level with the various functionality checks alongside it.



Once API testing becomes part of the testing lifecycle, the other Agile activities will accommodate it accordingly. It is important to understand where to introduce API testing in your Agile project and where not to. API testing sounds like a fancy term to many testers, but its usage and benefits should be weighed upfront. There may be scenarios where lack of time, project needs or budget make choosing a harder quality parameter such as API testing seem unreasonable. But, having said that, API testing does raise your quality levels, uncovers bugs earlier and speeds time to market as the team learns to work with it.

Quote of the day:

“Testing is a skill. While this may come as a surprise to some people it is a simple fact.” — Fewster and Graham



Author: Sadhvi Singh




With the ever-increasing demand for mobile apps and their growing presence on our phones for all day-to-day activities, the importance of mobile testing is booming. Given the huge scope of mobile testing, building a good strategy and knowing which areas to focus on helps narrow that scope and deliver quality testing. It is especially important to keep an eye on installation, uninstallation and update of the mobile app, all of which should be smooth and seamless.

A few areas to focus on as verification points while performing installation/update/uninstallation testing are:

Installation Testing:

Installation testing is performed to ensure the application installs successfully and works as expected. This is the end user's first point of interaction with your app, which, needless to say, should be smooth. A few key scenarios to look out for while installing applications:

  1. Storage Space Availability:
Testing how your application installs under different storage space conditions is important, and monitoring the installer package's behaviour during low disk space plays a key role. End users should be shown proper error messages about space availability issues. Usually these are taken care of by the OS and the Play Store/App Store, but handling them at the code level is also good practice. Testing installation with both sufficient and insufficient storage space should be one of the test scenarios for installation testing.

     2. Storage Location:

Certain apps work best when stored in the device's internal memory while others work best when stored on an SD card. If the app is not saved in the right designated memory, installation may fail with an unknown error. Developers can provide an attribute during development that defines where the app should be installed; for example, if the app is meant to be installed in external storage, it should install there and not in internal storage.
Hence it is important to test the app both ways, with the attribute defined and without it, and to try installing the app in both external and internal storage. It is also important to ensure that your app offers the option to install to or move to the SD card, as users may run out of memory.


  3. Track Your Application Size

App size is an important parameter, as it is one of the factors users consider while installing an application. Larger apps are usually less preferred, either because of limited space on the device or because of restrictions enforced by the OS. For example, iOS has historically limited cellular-network downloads to around 50 MB, beyond which it expects users to install heavier applications over a stable Wi-Fi connection. Hence it is important to keep track of your application's size across releases. For Android builds, Android Studio's APK Analyzer lets you compare multiple APKs on different parameters, one of which is size. After monitoring, you can propose that your developers optimize the app size. Also, verify the app size both before and after installation.

  4. Network Throttling

Network variances can often lead to unpredictable application behaviour. It is important to verify how your application behaves under different network conditions while it is installing, so whenever installing the application, try switching to different networks like 2G, 3G, 4G or Wi-Fi and ensure the application installs under each. Also evaluate installation behaviour when no network is present; the user should see an appropriate error message like 'Kindly check your network connection' or 'Waiting for network connection'.

 5. Verify app installation across minimum and maximum versions

Apps are built to support a defined minimum OS version. Sometimes developers end up supporting a higher API version than the one defined in the requirements, so it is important to verify installation on the minimum version the app claims to support. Occasionally, libraries used during development no longer support the version the app is supposed to be compatible with, leading to installation issues, which is another reason to test on the minimum version.
It is important to test app installation not just on the minimum defined version but also on the latest versions released in the market. This helps confirm your application's compatibility across versions and surfaces any issues during installation.

Updating your application

App updates are also a crucial factor in mobile testing, and one we as testers usually neglect, yet the update path is just as important as installation. Any failure end users hit while updating can break the proper functioning of the current version of the app, or leave users uninterested in an app stuck on an outdated version. The pointers below are important when testing application updates:

  • Forceful Update: Sometimes updates are mandatory, either because the current app can no longer function (stale libraries, third-party changes, etc.) or because a crucial feature changes the business flow of the application. In such cases a forceful update is pushed to those app versions. For example, an app at version 1.2 was no longer supported by the developers because of a deprecated library, so a forceful update was presented to end users: either they updated the application or, if they chose not to, the application was auto-killed. Hence testing updates from stale versions is also important.

  • Automatic Versus Manual Update: Android and iOS can update your apps automatically and save you time. But you may have come across apps that were not auto-updated; this happens when manual intervention is required from the end user, typically because the app now requires new permissions or its terms and conditions have changed and the user must agree before future updates. Such scenarios should also be tested: keep the phone's auto-update setting on and verify that the app does not get auto-updated when manual intervention is required.

  • Data Restoration: It is important to ensure all user activities and data remain intact before and after an update. It has been seen across many applications that user data was lost after an update, for example edited videos or files from the mobile device. Hence this is another criterion to watch while testing application updates. Here I am referring mainly to applications that work on data stored on the device itself; e-commerce apps usually map individual data to the logged-in user ID, in which case this scenario may not apply.
       Note: When the app is updated while an instance is running in the background, the update should proceed and the background instance should be auto-killed.

Uninstallation Testing:

This is the testing performed to ensure the application is successfully removed from the user's mobile device and that all components of the app are removed during the uninstallation process. After uninstallation, ensure no app-related data remains in the app folder, all files were removed during uninstallation itself, and the device is restored to a stable state.


With an app's reach to end users just a click away, and its removal from the device just as easy, it is important that we as testers understand the weight of these flows and validate them from time to time. Focusing on functionality alone may not suffice for mobile apps. Put yourself in the shoes of end users and validate every possible interaction with the app, starting with installation and ending with uninstallation. The more seamless the experience, the more likely your app gets used, because in the end what matters is the experience.


Closing the article with the below quote:

“I think the biggest change, and the one we’re already starting to see take shape, is that globally the majority of internet usage will be done via a mobile device and for most people the mobile web will be their primary - if not their only - way of experiencing the internet.”
- Peter Rojas, Co-founder of Engadget and Gizmodo 

Author: Sadhvi Singh







Have you ever faced the challenge of setting up testing processes in your organization, or are you in an organization with no testing process at all, in dire need of one but with no idea where to start? An organization running with no defined testing processes is like a person running on a treadmill with no targets defined: there is no clue when to stop or how much has been achieved. Streamlined processes help an organization grow and define the quality parameters achieved across its services or products. So how do you introduce testing processes in your organization?

Before we get into the details of how to start, it is important to understand that every organization has its own work ethic, culture and processes, and the testing process you set up will vary accordingly. The key perspective to keep while laying down testing processes is that testing is not a silver bullet that will resolve all problems; nor does it mean developers no longer need to test because testers are there now; nor that testers are tornadoes that destroy everything you create. Before anything else, you need to work as a team first, creating a "test friendly" environment where everyone is aware that they are all working together to deliver a quality product on time, every time.




Analyze your work culture and environment:
This is the stepping stone, or rather the base foundation, of setting up your testing process. It is important to analyze how your organization works and is structured. Closely monitor the activities carried out from the start to the end of the project(s), and note down the bottlenecks that impact quality, for example back-to-back unplanned releases or too many issues reported by stakeholders. After writing down the bottlenecks, prioritize them based on impact and start working on them. Please note, setting up processes and implementing them is not a one-day activity; it is a learn-and-grow process and takes time to deliver the results you expect.



Introduce the testing cycle in the project(s)

After identifying the bottlenecks, the first thing to ensure is that the cycle of testing starts, no matter where your project currently is (once, of course, you have a fair understanding of the project and requirements you wish to test). Let's say you are mid-release and have stories or requirements in hand that are ready to be delivered; just perform ad-hoc testing. What purpose will this serve? It gives you visibility and clarity on the testing space you have, and it makes the team aware that development alone will not suffice for delivery and that everything moving forward will be tested. This also helps build collaboration among team members.



Create your test plan and test strategy

Do not try to backtrack things at this stage as you enter testing. The main vision should be that once the testing cycle is introduced, everything starts falling into place. Work on smaller chunks, one bottleneck at a time. Create your test plan to bring your releases on track, and communicate to team members when testing will start, what the testing scope is, and what prerequisites you need in order to test. For example, all requirements/stories to be delivered in a given release should be provided to the tester upfront; the development closure date for those requirements should be shared in advance so the QA can plan testing activities; a test environment should be set up for the QA to test the deliverables; and so on. Similarly, define your test strategy and share it with your team: what forms of testing will be performed (system testing, regression testing, smoke testing, etc.), what outputs are produced, and what cycle will be followed for delivery of the requirement to the customer.
For details on how to build a test plan, you can refer to my article: https://qavibes.blogspot.com/2018/06/all-about-test-plan.html


Define your testing outcomes and deliverables

No outcomes means nothing achieved. Outcomes should be mapped to each testing activity performed. For example, if system testing is performed across requirements, the outcome per requirement is either bugs logged against it or closure of the requirement after testing. Every activity should have a defined deliverable; for example, after system testing, a summary report highlighting the number of requirements tested and the number of logged issues with their severity should be shared with stakeholders. There should be defined parameters stating that, once the desired outcomes are reached, the team can enter the next cycle of testing. For example, if the remaining issues can be deferred given their business criticality, then, after approval, the testing team can kick-start regression before delivering the build to production; once the regression outcome is positive and the build has QA sign-off (the deliverable), the team can push it to production. Each outcome helps the team move to the next actionable plan, with defined deliverables for each step. This puts testing checks across the software, making the process more robust and structured and ensuring a stable, quality product.


Documenting and Tracking

In testing, documentation plays an important role. Once you have dived into the testing cycle with defined deliverables, it is important to start documenting. Test artefacts such as the smoke document, test scenarios, test cases and the regression suite should be produced for each requirement and kept up to date. This documentation helps provide better test coverage across your requirements. Providing test cases at an early stage also helps developers visualize the requirement in more depth and ensure their code handles the documented scenarios. Documentation also adds to the stability of the product, since all previous functionality is backed by documentation, and it spreads awareness among new joiners who want better visibility into the software.
Tracking your requirements until they are ready to be shipped is equally important. Tracking bugs through their lifecycle helps ensure no major breakage remains in the application, and tracking all testing activities helps raise early warning signals for the software under test so that constructive action can be taken.



Reinventing the wheel

Once the basic testing processes are set up and you know what follows what, it is important to keep evolving them. Introducing tool support, testing metrics, standard documentation requirements, root cause analysis, etc. gives a deeper understanding of quality and makes the team more focused on achieving it. As your testing processes become more skilled and streamlined, the quality bar of your product automatically rises. Sometimes we get onto the wrong track by focusing on areas that do not need attention at an early stage; for example, I have met people who want to bring a test tool into the organization as the starting point of introducing testing. We need to understand that test tools increase the efficiency of our work but do not define a testing process; they merely improve your testing methodology.

The testing process defined in an organization should be built on the foundation of the quality vision you want to achieve. Organizations will have different testing processes based on work ethic, culture, needs, budget and constraints; while building a testing process it is important to dig into these areas first and then start defining it. What is standard elsewhere may not fit your organization and will need to be adapted. Defined processes should never become barriers in the company's or team's path; they should act only as a catalyst to speed things up and help deliver quality.

Closing the article with the below quote:

“While we may understand the problem, it is critical for us to change our thinking, our practices, and continuously evolve to stay ahead of the problems we face.”— Mike Lyles


Author: Sadhvi Singh



These days it is all about Agile. As we move towards faster deliveries, rapid development and expeditious change, it is becoming harder and more exhausting to maintain quality to the required standard: either you compromise with the current situation or you stretch yourself to a 'T' to make things fall into place. Agile has provided many lucrative benefits, but along with them come problems in every phase we deal with, whether development, design or testing. On the testing front, compared with the waterfall model, many problems have surfaced and we are all struggling to find or work towards solutions. To keep up with this development pace, we need to improve our testing processes and methodologies by finding solutions to the different Agile testing challenges.

Below are the challenges Agile testers usually face:

Shrinking Testing Cycle

With shorter sprints and several stories per sprint, the testing cycle gets squeezed as development time eats into testing time. A major symptom is stories getting delivered to QA only at the end of the sprint. This usually happens when sprint planning is focused on the development viewpoint rather than considering both development and testing. Such a shrinking testing cycle leads to inadequate testing and bugs discovered at later stages like regression, or even in production, where they become expensive to fix.

How to overcome the challenge?

The first step to ensure the testing cycle is not overridden by the development cycle is that, during sprint planning, both testing and development efforts are estimated appropriately so testers get sufficient time. Along with that, QAs should prioritize the stories in a sprint so that stories with higher business impact are tested rigorously, while lower-impact ones are covered through their major scenarios, giving better coverage in less time and lowering the risk of bugs slipping through.

Changing Requirements/ Unexpected Sprint Changes

This challenge is something to expect in an Agile environment. When requirements change mid-sprint, or a requirement is added or removed, it becomes difficult for the team to cope. Such last-minute changes create chaos, turn a planned sprint into an unplanned one, and leave the team frustrated and pressured. Completing the work for the changed requirements, or scrapping work already done, can turn into a nightmare and cause further quality issues.


How to overcome the challenge?

The best approach is to ensure the product owner and scrum master are aware of the increased scope, and either remove a low-priority story from the sprint to balance its velocity or, if business needs require delivering the sprint with the increased scope, make sure the tester is well versed with the changes the requirement brings and performs risk-based testing accordingly. You can refer to my article Evaluating Risks, "The Risk-Based Testing".


Insufficient information on stories

Many times we come across stories that are not well defined, or where only a one-liner requirement is provided. This happens when the requirement is unclear even at the product owner's end, and a prototype may be needed before more clarity can be derived. Such insufficient information leads to rework from both a development and a testing perspective, and the lack of clarity may also result in insufficient test coverage.


How to overcome the challenge?

For stories with insufficient information, the QA can brainstorm with the team to discuss what needs to be built and create test cases based on that. Where not much information can be derived, the QA can create high-level scenarios and start testing rather than waiting for more detail. Once an overview is available from the developed prototype, the QA can update the test cases for the story and continue testing accordingly.

Broken Code due to Frequent builds

With consecutive builds and releases delivered back to back, the probability of code breakage increases with every build. Functionality that ran smoothly in the previous build gets impacted, injecting bugs into the software. Because testing is limited to the designated stories, such bugs escape and are discovered at later stages, which increases the churn of the development cycle.


How to overcome the challenge?

It is crucial that, while creating test cases, the QA keeps in mind all the areas impacted by the story, pulls the required test cases from those earlier stories and executes them along with the current ones. This ensures impacted areas are tested upfront along with the new features, so bugs in those areas are discovered early rather than during the regression phase, making regression smoother and reducing churn. Also keep in mind that after every build delivery a quick smoke test should be run to ensure no major features are broken, giving the QA a green flag to perform in-depth story testing once the smoke test passes. An automated smoke test can speed up this process and brings stability to the AUT.


Inadequate testing documentation and test coverage

Due to continuous development, integration and releases, and the limited time available for testing, it becomes difficult to create the required documentation such as the requirement traceability matrix, test scenarios or requirement documents. This leads to an insufficient flow of information to team members and freshers joining the project, long-term gaps in detailed project knowledge, and scenarios being missed while creating or executing tests. And with major documents like the RTM unavailable due to time constraints, it becomes difficult to trace against requirements, which may result in improper test coverage.


How to overcome the challenge?

When there is no time to document requirements or test scenarios, a better approach for spreading information is to keep detailed KT recordings available so new team members can gain a comprehensive understanding of the project. Similarly, where documents like the RTM or test scenarios are missing, the QA can make sure all test cases are linked to stories so proper coverage is still achieved. A few testing metrics can also confirm that coverage has reached the required level; you can refer to my article on testing metrics for more details.


Lack of Communication

This area can become a bottleneck when coordination and communication among team members are missing. Quite often the QA team is unaware of changes made by the development team, and so cannot factor them in when evaluating the risk of those changes. This eventually leads to more bugs discovered at later stages and unfocused testing by the QA team. Similarly, if there is a communication gap in how requirements are conveyed to the teams by the product owner, the same kind of testing issues arise.


How to overcome the challenge?

The team should sit together and discuss the changes made at the code level so everyone is in sync and can evaluate the system more thoroughly when making sensitive or risk-based decisions, and then plan testing activities accordingly. Raising issues or asking for clarification when requirements are conveyed by the product owner gives more visibility into the requirement, not just to the QA but to the whole team.


Inability to refine testing due to continuous testing approach

Testing is no longer a phase, as in the waterfall model, but a continuous activity that runs from user story analysis to test case creation to execution, followed by regression testing and smoke testing after delivery. As one cycle completes, the QA moves to the next release, where the same set of activities is carried out again. With such back-to-back, engrossing activity, it becomes difficult to take time out to analyze pitfalls or areas of improvement that could make testing more refined and structured. This leads to stale processes and, ultimately, depreciating software quality.


How to overcome the challenge?

It is highly important that Agile practices like sprint retrospectives are carried out, as they inspect the key areas that need improvement and can be worked on. Similarly, key metrics like defect leakage and defect rejection rate help draw inferences about the testing processes and refine them further. Evolving better test strategies for the application under test, within the time available, also improves software quality. For example, an application that must be tested across multiple browsers and mobile devices can be strategically covered by applying equivalence partitioning to the most used browsers and versions. Such strategies and practices refine the testing process within the limited time available for testing.



Agile methodology has delivered many productive benefits to the software industry. Better visibility for the end customer and a faster route to market have opened the door to more customers embracing technology. It is therefore important to keep finding solutions to the ever-emerging challenges, so that quality remains intact with this methodology and its benefits can be reaped. Closing the article with a quote that speaks to the Agile challenges one encounters:

“We may encounter many defeats but we must not be defeated.” – Maya Angelou

You will inevitably run into challenges along your Agile journey; the key is to learn from them and overcome them through standups and retrospectives.
Author: Sadhvi Singh












In a fast-paced, technology-driven environment, maintaining and achieving quality is highly significant. But how do we quantify that quality has been achieved in the terms the project, the organization or the relevant standards require? Does raising bugs against an application satisfy our need to achieve quality? Does creating and executing test cases ensure the product has met the quality parameters? Quantifying quality is much more than that: it is about assurance of quality via testing metrics. This article covers the key performance indicators (KPIs) of quality achievement, i.e. the testing metrics to establish for evaluating quality.

When we talk about a project, what end goal do we want for it? As an organization or individual working on a project, we all strive for a timely, quality-oriented and cost-effective delivery that satisfies customer needs and expectations. So, ideally, those end results should be turned into performance indicators that confirm they were met. Let's dig into the key metrics one should track to measure quality.

Defect Leakage:

This is one of the most important parameters for measuring your team's quality. As the name signifies, it is based on the number of defects found in the production or UAT environment and raised by the customer or end users: bugs that were not found or logged by the QA team but were encountered in the field. As an important ingredient of quality control, defect leakage should ideally be zero. Defect leakage is computed as:

    Defect Leakage = (Total bugs found in production/UAT) / (Total bugs logged by the QA team - Total invalid bugs) * 100%

This parameter can be computed per tester, taking the number of bugs logged by each QA and the number of bugs leaked into production/UAT from the module that QA was assigned to test. The higher the defect leakage, the bigger the quality issues in the software.
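
As a worked illustration with made-up numbers: if the QA team logged 200 bugs, of which 20 were rejected as invalid, and 9 bugs were later reported from production/UAT, then defect leakage = 9 / (200 - 20) * 100% = 5%.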

Defect Rejection Ratio:

Another key performance indicator is the defect rejection ratio. Testers should be measured on how many valid bugs the QA team raises, which is directly tied to the QA's understanding of the requirements. A high number of invalid bugs points to an inability to understand requirements, and it wastes development time validating issues that turn out to be invalid. Defect rejection ratio is computed as:

    Defect Rejection Ratio = (Total number of invalid bugs raised / Total number of bugs logged) * 100%

Ideally this percentage should also be zero. As it increases, the effectiveness and efficiency of testing come into question, and confidence in the testing, and in the tester, drops sharply. This parameter can also be evaluated individually for each QA member in the team.

Defect Density:

This parameter serves as a key performance indicator for both the development and the testing team, but in two completely different ways. Before discussing those approaches, what is defect density? It is the number of bugs found per requirement/module. Defect density is computed as:

    Defect Density = Total number of defects logged / Total number of modules or requirements
                                  
From a software quality perspective, this number should be close to zero, or ideally zero, meaning no bugs at all. How is this parameter read from the testing and development perspectives? From a development perspective, defect density should be low, as that means fewer bugs across requirements and hence higher software quality. Conversely, a higher defect density can indicate more rigorous, deep-driven testing across requirements. That said, a low bug count does not necessarily mean testing was less rigorous, so this metric must be used in conjunction with other metrics to yield meaningful results. Ideally, we should all aim for a low defect density for a high-quality product.

In Agile terms, defect density can be taken as the total number of bugs logged across stories. It can be traced per developer, using the number of bugs logged against the requirements each developer worked on, and likewise per QA, using the number of bugs logged by that QA against the requirements assigned to them for testing.
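
As a worked illustration with made-up numbers: if a sprint contained 8 stories and 24 bugs were logged against them in total, defect density = 24 / 8 = 3 defects per story. Tracking the same ratio per developer or per QA gives the individual view described above.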

Defect Severity:

This metric evaluates the number of bugs logged by severity: critical, major, medium, minor/trivial/cosmetic. When used for performance evaluation, it maps individuals by the quality of the bugs they raise in terms of business importance. That said, the metric is controversial in its own right: logging a higher number of minor/trivial bugs, or a lower number of critical ones, should not by itself mark an individual as inefficient; it depends entirely on the application and how well it was coded. A better approach is to use this metric in conjunction with others to get real insight into quality achievements.

Defect severity can be calculated per severity level as that level's share of all bugs logged in the AUT:

    Critical %                = (Total critical bugs logged / Total bugs logged in the AUT) * 100%
    Major %                   = (Total major bugs logged / Total bugs logged in the AUT) * 100%
    Medium %                  = (Total medium bugs logged / Total bugs logged in the AUT) * 100%
    Minor/Trivial/Cosmetic %  = (Total minor, trivial or cosmetic bugs logged / Total bugs logged in the AUT) * 100%

Test Case Efficiency/Test Case Effectiveness:

Apart from defect counts, test case effectiveness is another important KPI criterion, and a complex one to evaluate. How is test case efficiency/effectiveness measured? Usually in two ways: via peer review, and via analysis of the bugs discovered. After test cases are created and submitted for peer review, the reviewer examines them; any missing scenarios or test cases are flagged as comments, the creator updates the cases, and those misses are tracked against this metric. Similarly, bugs logged by QA during ad-hoc testing, or logged in production/UAT, that trace back to missing test cases are also counted. Test case efficiency/effectiveness is then calculated as:

    Test Case Efficiency/Effectiveness = (Total number of test cases missed / Total number of test cases written) * 100%


Again, the lower the percentage, the better the efficiency and effectiveness of the individual writing the test cases. This metric is useful for assessing beginners and freshers in testing on their ability to achieve coverage, and for giving them direction and guidance for improvement. Evaluating it can be complex and time consuming, but once it is part of the process it becomes a routine activity with a high yield, taking quality measurement to another level.
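
As a worked illustration with made-up numbers: if 150 test cases were written for a release and peer review plus escaped bugs later traced back to 6 missing cases, then test case efficiency/effectiveness = 6 / 150 * 100% = 4%.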


Test Effort:

This metric is usually computed for test leads/test managers and has two components: time variance and cost variance. Before every project starts, the testing effort is estimated in terms of time and cost; after the project completes, the actual time and cost are evaluated. The variance between the two gives the test effort metric, calculated as:

    Time Variance = (Actual time spent in the testing cycle - Estimated time for testing) / Time estimated for closing the testing cycle * 100%

Ideally the variance should be zero. In practice we may encounter variances even greater than 50%, which puts a question mark over the estimation done by the individual; if such variances appear regularly, estimation should be re-thought before the next round of testing activities begins. A negative variance, meanwhile, indicates overestimation, which can also be tracked through the KPIs.

The second variance to consider is cost variance. People often confuse it with the time variance above, but it is not the same. For example, suppose testing was planned to finish in 3 months with 4 resources, and it did indeed close in three months, but with 5 resources. The time variance is unaffected, yet the cost changed because a resource was added. So cost variance is as important as time variance. It is calculated as:

    Cost Variance = (Actual cost spent in testing - Cost estimated for the testing cycle) / Cost estimated for the testing cycle * 100%

Ideally this percentage should also be zero; larger variances point directly to problems in project management and estimation, and should be reviewed, controlled and monitored in time to keep them from growing. As a test effort metric, both variances should be considered hand in hand.
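
Continuing the example above with illustrative figures: if the estimate was 4 resources for 3 months (12 person-months) and the actual was 5 resources for 3 months (15 person-months), then cost variance = (15 - 12) / 12 * 100% = 25%, while time variance remains 0%.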

The metrics above play a major role in evaluating quality and achieving quality goals, and there are many more that can flesh out the performance indicators. While establishing test metrics in an organization, it is important to be clear about what conclusion each parameter supports; if no conclusions or actionable items come out of a metric, it should be eliminated then and there, since it is not serving its purpose. Another aspect to keep in mind is that whatever metrics are rolled out become the mindset of employees, and their work evolves around them. So choosing the right set of metrics for the organization is crucial and worth careful consideration.

We can evaluate KPIs on the parameters above and place individuals or teams into different zones per release, quarter or year, and we can lay out an improvement path and goals for the next releases or quarters based on these KPIs. The task of QA is not just to ensure quality work is carried out, but also to ensure it is achieved to the degree defined.

Closing the post with the below quote on KPIs:

“Not everything that can be counted counts, and not everything that counts can be counted.”
– Albert Einstein


Author: Sadhvi Singh









