The Orion Symposium gives the right of presentation to all groups discriminated against in our society. One of those groups we all ignore is hackers. Now Mr X from the PZRZYo Hacker Society will, in hologram form, remotely tell us more about the development of viruses and the problems in their testing. You’ll hear about the state of hacking today and in the future. Bonus: What qualities should a metropolitan hacker possess to have a successful career?

….

A hologram appears and a disguised voice begins to speak.

Hacking Today

Hacking and viruses are around you everywhere and all the time, and you are unlikely to be aware of them. Have you ever gone to celebrate Great Chik Chik Eel Day and then taken the Trans-galactic Express home? If you live on one of the 20 planets in the Chik Chik Beta system, the answer is most likely ‘Yes’. But has it ever been the case that the tickets had been sold out or cost 3,000 Galactic Dollars (100 times more than normal)? Tickets for an almost infinitely long express train should theoretically never sell out (at least this is what its creator claims). People think it’s a bug of the train, but more often than not it’s simple hacking. Why? A hacker bot buys up all almost-infinite tickets for microtebis and then offers them to wealthy pensioners desperate to get home at hundreds of times higher prices.

Anyone who owns a modern flat today definitely has a smart sink. The smart sink reads your current emotions and releases liquid at the ideal temperature and pressure to refresh you and relieve your concerns or meet whatever other needs you have – washing the dishes, your face, etc. A couple of tebis ago, there was a bubble in the rates of interior decorators, followed by a period where they all went out of business. How are they now some of the highest-paid professionals, and you have to make an appointment for three tebis in the future? Our hacker group was commissioned to hack all sinks in the Centaur T sector and flood all homes. Wet furniture is not that great, right? Good marketing, don’t you think?

If you pilot a tirob (HGV spaceship), you probably listen to one of the most popular reality shows in interstellar space – Spicy and True: Hacked en-route by an honest bolt. This is a radio reality show, where hackers hack people’s conversations live on air, with their algorithm selecting the spiciest scandals or declarations of love. The presenters give a live commentary on what’s happening. You can even bet on how it’d all turn out, or you can just phone in and ask for a shout-out to Kolyo for his birthday.

As hackers, we also often participate in election campaigns. This is a more expensive service, as a lot of heuristics have to be used in coming up with new ways of manipulating the masses. There are two main approaches to winning elections: attract new voters to your cause (if you have one) or (if you don’t have a cause and just want to absorb funds from the Stellar Union) vilify your competition as much as possible. One tebi ago, the group which had attacked Chik Chik Beta attacked it again. The opposition had hired them to malign the government. As you know, it is common practice to hold barbecue banquets just before an election. The contract hackers hacked all the smart grills and burnt all the appetisers to a crisp. So the voters were very disappointed and decided they should have lunch all day on election day. See how a smart grill can bring down the government?!

Problems in Virus Testing

When anyone hears the word ‘hacker’, they imagine some nerd in a movie taking down a drone or gaining access to an SPS (stellar power station) network in 3 presses of the keyboard. In reality, hacking and writing viruses is a complex and time-consuming task. In the next section, we’ll explain that you should have many more qualities than simply being a very good coder. First, to take down a military drone or hack into a factory network, you need to find out the technology on which their software system is based. Since there are over 100,000 programming languages, to be a very good hacker, you’d usually specialise in a small number of them. Not that there are no genius hackers who can work with half of them, but the point is that even they don’t know everything.

There are many viruses that have made their creators very rich bourgeois. But to become a rich bourgeois, your virus must first work properly. Do you have any idea how hard it is to test a polymorphic virus? In short, those types of viruses encrypt themselves and, at the same time, alter their code to hide from anti-virus and anti-pest software. But since Agile methodology is used for their development, you need to test your virus iteratively, i.e. you need to insert hidden ways of detecting it while you’re running the tests. If you skip this step, you’ll infect your machine and you’ll have to destroy it (we recommend incineration with a blowtorch at 1,056.66 degrees and pouring carbidetox acid from Lake Chakarunga over the remains). You need to remove those “back doors” before you go into PROD, otherwise security software might find and use them. But if you remove them and accidentally get infected with your own virus – DISASTER! Your progeny will steal your money and send it back to your account. Could be worse, yes, but for every legal or illegal trans-galactic banking transaction, you usually pay 5 intergalactic cents. This recursive siphoning will eventually cost you your house, mansion, island and possibly a kidney, if you have one, plus the bath in which they’ll take it out (don’t worry, the ice usually comes free of charge).

No matter what virus you’re writing, if it’s intended to infect the masses, you’ll have to test it on all possible combinations of hardware and operating systems (and there are hundreds of them – just like the number of corporate ketchup and gherkin producers ...)

Inevitably, there are also a lot of side costs for security software. You may ask why a hacker, of all people, needs security software. Well, you need to test whether various antivirus programs detect your virus. If it fails the automatic test, then you’ll need to update it and test it again. And speaking of updating, you must be sure that this functionality in your application works properly before you go into PROD. Otherwise, it’s game over for your brilliantly beautiful virus. And you must have adequate protection from other hacker groups. You usually don’t want someone to steal your virus source code. And at $100 per license, you save 3 tebis on development and support of something that people devised long before you. It’s much better to be a bourgeois hacker writing your viruses in peace than having to engage in such bot-based activities. Taking into account the listing price of Aldebaran gherkins and the share price of the Andromeda South SPS, you’d save an average of $56,700 on self-development of security software by buying an off-the-shelf package. You can instead invest those dollars in hiring at least 31.33 djondjonbolcheta to comprehensively test your creations.

Note (for bourgeois hackers): A lot of being a hacker entails participating in a club founded by other bourgeois hackers. That’s all well and good, but there are certain requirements. The most important one is to be a gentleman and follow the etiquette down to a tee, i.e. knowing how to drink tea properly. I’ll show you a version abridged 1,456-fold of this part of the code of conduct (you can read the full version in “The Most Complete Pocket Guide for the Bourgeois Hacker Gentlemen with 100,000 Tips - 46th Edition”). Tea is always drunk before 4 pm Kra Kra planetary time. It’s a good idea to buy a watch that shows that time even if it’s 49 hours and 5 microtebis on your planet. This is a legacy reason, as the first such gentleman hackers hail from there. The cup is also a very important part of the ritual. It should be exactly 125 m and 34 g. Always brew loose-leaf tea, rather than teabags. Serve with a pot of hot water/milk and a bowl of sugar. It’s very important to have a 27 cm cloth napkin. Always drink it in a sitting position with your back straight. It’s proper etiquette to wait at least 7 microtebis before you drink first. It actually used to be considered disrespectful to drink first when you’re a guest. Always reach with one of your right arms or limbs. It should be finished before 38 microtebis have passed, when it would’ve gone completely cold. This is also the perfect length of an intellectual conversation, because everything after is just an unnecessary stretching of time. So I recommend – if you want to be part of our society, please learn to make and drink tea properly!!! (Those fruit-flavoured abominations with caramel bits are not tea, and if we witness such an obscenity – be warned – not only will we reject your application, but we’ll also steal all your source code!)

Qualities of a Successful Hacker

Of course, without being really good at writing code, you can’t be a hacker. Unless you outsource development, i.e. hire some djondjonbolche to bring your ideas to life at a lower price. But you still have to be technically literate.

You have to be a very good analytical thinker, since you’ll have to find out exactly what you are hacking and what technologies and languages it is written in. You must be ready to root around refuse bins and arrange shredded confidential documents like a puzzle to retrieve the information. Very often, you need to have at least a few friends, at least one of whom has to go undercover in the establishment that will be hacked if it’s of the brick-and-mortar type: SPS, bank, factory, etc. That is, you must have some basic social skills. At other times, you may need to react faster and bribe certain officials. The study of sociology and psychology can be quite helpful in such cases. You’ll be highly dependent on those officials, so you’ll have to take them out to the right places with the right type of men/women, and you’ll have to know the colour of the cocktail you’ll buy for them.

Besides drinking tea, every self-respecting bourgeois hacker must be able to write grammatically. So help you God if there are spelling mistakes in the phishing message to the wealthy pensioners who have to send you money because they think you’re their son who has broken his clavicle on a hiking trip in the Kra Kra mountains.

Hackers must have a very good understanding of emotions. As you know, we use many emotion-capturing devices. In order to use the emotions after you’ve stolen them, you need to be able to understand them. By reading a person’s emotions, you can find out their passwords or figure out which pyjamas would move them emotionally and sell them at a triple price through your offshore company for stylish gentleman’s pyjamas, the latest Italian fashion.

Last but not least, you must be a financial wizard. You won’t become rich if you don’t know how to hide your hard-earned money. The galactic tax office is ever vigilant and should not be underestimated. It’s always good to know how to avoid some tax or other. And if you can also invest in bonds/shares, so much the better for you. You have to be the king of offshore dealings and Caymans-based companies.

How to Earn Like a Hacker?

There are different ways to earn like a hacker, depending on whether you are mediocre or not. If you don’t want to fully dedicate yourself to the profession, so that you can continue to have a social life and the occasional quiet drink of tea, here are some ideas. You can hack the algorithms that compose songs for the galactic charts. This way, you can short the company’s stock if the song is very bad and earn yourself a tidy sum. Of course, if you have no idea whether a song is a hit-in-the-making or not, it’s better to consult with a specialist in confidence.

Another idea off the top of my head is to do business in baby upgrades – still quite fashionable and popular. With good marketing, you’ll find customers in the more suburban parts of the galaxy. The idea is that babies usually come out of the labs with factory-locked capabilities. That is, when they grow up and decide to unlock better memory, vision or RAM, this is not an actual upgrade; rather, the capability is simply unlocked with a code. For modest amounts remitted to the Caymans, you can unlock everything in early childhood if the parents want a wunderkind. Of course, every individual has their own personal features, which is how we’re all different, but nobody can argue with 100GB RAM. If you’re not competent enough to do the full unlocking, a cheaper option would be to hack the transport system of the hospital/veterinary unit. Once in, you simply have to redirect a fully unlocked baby to another location and swap them. It’s unlikely that anyone would notice. Possibly... Just make sure they are of the same species, otherwise it’ll be embarrassing.

If you’re in the top 10% of hacking capabilities, then there’s a chance to earn a quick buck for one or two planets with a personal ocean and a view of three suns of different colours (the rarer colour combinations are more expensive, but the tan they can give you is very interesting. For example, my favourite rare combination is purple #6c4675 + pink #e3b1d2 + orange #FF6600). You can also make such money during hybrid warfare by penetrating the enemy’s SPS systems. Another way is to offer some of the most wanted types of viruses:

virus affecting the ruling algorithms by changing their morals;

virus that alters and modifies combat simulators (we are talking about the simulations we mentioned in the previous lecture, which simulate the entire galaxy and theoretical combat);

virus increasing the fuel consumption of all starships ordered by one of the fuel cartels (they’re known to give very large bonuses + a lifetime supply of free fuel)

Hacking in the Future

One of the directions in which hacking is going is that the intergalactic government is considering legalising the practice, following the example of prostitution and drugs. It happens anyway and we can’t stop it however hard we try – so let’s tax it at least! So, very soon, if you hire one of us, you may also receive an invoice by mail and would have to pay VAT.

Another popular thing happening right now, a new movement so to speak, is hacking as a service. This means you pay a monthly subscription, whether you use our services or not, but you have a personal hacker on retainer. He can get you a “free” parking space in the crowded mall in your neighbourhood or flood your neighbour’s flat – the one who makes a lot of noise at night. Or he can simply ensure you always have a day or two of paid leave left.

The power goes off and the hologram disappears. (Obviously, the aim is to avoid the asking of inconvenient questions).

The post Testing in the Galaxy- Chapter 6: Virus testing, hacking today and more appeared first on Automate The Planet.


Nowadays, more and more companies are building test automation frameworks based on WebDriver and Appium for testing their web and mobile projects. A big part of why there are so many flaky tests is that we don't treat our tests as production code. Moreover, we don't treat our framework as a product. I will present to you all the challenges we faced during the testing of our test automation framework. In the first article of the series, I gave you an overview of our test infrastructure: what tools we use for source control, package storage, CI and test results visualization. In this second part, we will discuss the various types of testing that we did to verify that each aspect of our framework is working as expected: functional, compatibility, integration, usability, installability and many more.

What Do We Have to Test?

Before I share with you the details of our test infrastructure, I need to explain what we have to test. Our test automation framework is written in .NET Core in C#. This way we achieve the cross-platform requirement. Also, it has modules for API, desktop, web, Android, iOS and load testing. Generally, there are two ways to use the framework. Most of our clients use it by installing NuGet packages. (NuGet is the package manager for .NET. NuGet client tools provide the ability to produce and consume packages.) To ease the process of configuring the projects, we provide a Windows UI installer and CLI installers for Windows and OSX. The UI installer also adds various project and item templates and snippets for a better user experience in the Visual Studio IDE.

The second way of using the framework is if you are an enterprise user and have access to the full source code. The usage is slightly different, since in that edition we have removed all security protections like licensing and obfuscation. So, we need to test it separately.

Functional Testing

Most of our functional tests are written with our framework against demo apps. For the web, for example, we have developed mocked local web pages so that the tests can be as fast as possible. The same is valid for the API tests, which run against a local web service started together with the tests. As for the mobile tests, we tried different approaches: real devices, emulators, simulators and cloud providers. We wanted to execute them using the cloud providers, but the tests were slower, and if you use a service which provides real devices, the device is not always free. So, for a start, we decided to use emulators, and simulators in the case of iPhone.

The first type of functional tests we have checks whether the framework can work with the various UI controls: buttons, text boxes, date pickers, etc.

[TestMethod]
[TestCategory(Categories.Chrome)]
public void DateSet_When_UseSetDateMethodWithDayBiggerThan9_Chrome()
{
    var dateElement = App.ElementCreateService.CreateById<Date>("myDate");
    dateElement.SetDate(2017, 11, 15);
    Assert.AreEqual("2017-11-15", dateElement.GetDate());
}

We developed our framework so that, if it is not used correctly, it throws an appropriate exception with a specific message saying what went wrong and what to do. This is something that needs to be validated.

[TestMethod]
public void DateSetThrowsArgumentException_When_Month0_Edge()
{
    var dateElement = App.ElementCreateService.CreateById<Date>("myDate");
    Assert.ThrowsException<ArgumentException>(() => dateElement.SetDate(2017, 0, 1));
}

As mentioned, one of the key things that differentiates frameworks from software libraries is extensibility. For each control, we support different hooks/events to which you can subscribe and execute your own logic. Here is how we test it.

[TestMethod]
public void SettingDateCalled_BeforeActuallySetDate()
{
    Date.SettingDate += AssertValueAttributeEmpty;
    var dateElement = App.ElementCreateService.CreateById<Date>("myDate");
    dateElement.SetDate(2017, 7, 6);
    Assert.AreEqual("2017-07-06", dateElement.GetDate());
    Date.SettingDate -= AssertValueAttributeEmpty;

    void AssertValueAttributeEmpty(object sender, ElementActionEventArgs args)
    {
        Assert.AreEqual(string.Empty, args.Element.WrappedElement.GetAttribute("value"));
    }
}

We have other logic which allows the user to change globally how a particular method behaves. For example, if you are not happy with how we have implemented button.Click(), you can change how it is done, say by using JavaScript.

[TestMethod]
public void LocallyOverriddenActionCalled_When_OverrideGetDateGloballyNotNull()
{
    Date.OverrideGetDateGlobally = (u) => "2017-06-01";
    var dateElement = App.ElementCreateService.CreateById<Date>("myDate");
    Date.OverrideGetDateLocally = (u) => "2017-07-01";
    Assert.AreEqual("2017-07-01", dateElement.GetDate());
    Date.OverrideGetDateGlobally = null;
}

For some features, such as BDD logging, we use pure unit tests that mock some of the dependencies instead of running browsers and devices. BDD logging is a feature that, once the test completes, creates a human-readable log of what happened.

private Mock<IBellaLogger> _mockedLogger;

public override void TestInit()
{
    _mockedLogger = new Mock<IBellaLogger>();
    App.RegisterInstance(_mockedLogger.Object);
}

[TestMethod]
public void CreatesBDDLog_When_ClickButton()
{
    var button = App.ElementCreateService.CreateByIdContaining<Button>("button");
    button.Click();
    _mockedLogger.Verify(x => x.LogInformation(
        It.Is<string>(t => t.Contains("Click control (ID = button)"))), Times.Once());
}

Here we use a mock to verify that the correct message is saved.

Another type of test checks the locating of elements. In the case of Android, we have advanced logic which scrolls down until the element is visible before performing the action.

[TestClass]
[Android(Constants.AndroidNativeAppPath,
    Constants.AndroidDefaultAndroidVersion,
    Constants.AndroidRealDeviceName,
    Constants.AndroidNativeAppAppExamplePackage,
    ".view.ControlsMaterialDark",
    AppBehavior.RestartEveryTime)]
public class ElementCreateSingleElementTests : AndroidTest
{
    [TestMethod]
    public void ElementFound_When_CreateByText_And_ElementIsNotOnScreen()
    {
        var textField = _mainElement.CreateByText<TextField>("Text appearances");
        textField.EnsureIsVisible();
    }
}

Learning Tests

What are learning tests? These are not tests that test your product or framework directly. Instead, if your framework uses a 3rd-party library such as WebDriver or Appium to automate the products, then you probably know that the usage changes from time to time and not everything is as stable as you might wish. Learning tests use these low-level libraries in the ways you will later use them in your framework. Once you make these tests pass, you can verify whether an upgrade of the low-level library was successful or not.

[TestMethod]
public void OrientationTest()
{
    IRotatable rotatable = ((IRotatable)_driver);
    rotatable.Orientation = ScreenOrientation.Portrait;
    Assert.AreEqual(ScreenOrientation.Portrait, rotatable.Orientation);
}

[TestMethod]
public void LockTest()
{
    _driver.Lock();
    Assert.AreEqual(true, _driver.IsLocked());
    _driver.Unlock();
    Assert.AreEqual(false, _driver.IsLocked());
}

Features Not to Automate

Many features are hard to automate, or even when it is doable, the cost outweighs the return on investment. We have different approaches for such features. For example, the framework includes troubleshooting features that take screenshots and videos in case of test failure. Instead of trying to automate a check of whether the video is OK, we use these features ourselves during the analysis of failed tests, if there are any.

[VideoRecording(VideoRecordingMode.OnlyFail)]
[Android(Constants.AndroidNativeAppPath,
    Constants.AndroidDefaultAndroidVersion,
    Constants.AndroidDefaultDeviceName,
    Constants.AndroidNativeAppAppExamplePackage,
    ".view.Controls1",
    AppBehavior.ReuseIfStarted)]
public class VideoRecordingTests : AndroidTest

Upgradability Testing

Once there is a new version of Appium, WebDriver or any other 3rd-party library we use, we first execute all learning tests. If there are any problems, we investigate and try to work around them. If we cannot, we submit an issue to the repository maintainers. If all learning tests are green, then we upgrade our code and execute all other tests to verify that the upgrade works.

All APIs change, and so does the API of our framework. We try not to introduce breaking changes; instead, we leave the old API in place for a couple of releases, mark it as obsolete and give instructions on what needs to be changed.

It is a good practice not to delete old versions of the API immediately; instead, give your users time to migrate their code to the new versions. In such cases, we mark the old API with the ObsoleteAttribute, providing an explanation of why the code is deprecated and what should be used instead.

Here is an example from the official WebDriver C# bindings GitHub repository.

public interface ITimeouts
{
    TimeSpan ImplicitWait { get; set; }

    [Obsolete("This method will be removed in a future version. Please set the ImplicitWait property instead.")]
    ITimeouts ImplicitlyWait(TimeSpan timeToWait);
}

Installability Testing

As mentioned, we need to make sure the UI installer works correctly. To do so, we have a checklist of things to verify so that we know everything runs smoothly.

Also, as I told you previously, we have CLI installers for both platforms. 

dotnet new -i Bellatrix.Web.GettingStarted

dotnet new Bellatrix.Web.GettingStarted

This is how the project is created in an empty folder. To test this behavior, we created CI builds that perform these steps and verify that the projects can be built after the installation.
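A minimal sketch of what such a CI build definition could look like, assuming an Azure DevOps YAML pipeline (the CI system used elsewhere in this series); the two dotnet commands come from above, while the trigger, pool image and step names are assumptions for illustration:

```yaml
# Hypothetical pipeline: installs the CLI template, scaffolds a project
# in an empty folder and verifies that the generated project builds.
trigger:
  - RELEASE

pool:
  vmImage: 'windows-latest'

steps:
  - script: dotnet new -i Bellatrix.Web.GettingStarted
    displayName: 'Install the getting-started template'
  - script: |
      mkdir empty-project
      cd empty-project
      dotnet new Bellatrix.Web.GettingStarted
      dotnet build
    displayName: 'Scaffold in an empty folder and build'
```

If the scaffolded project fails to compile, the build step fails and the pipeline goes red, which is exactly the signal we want from an installability check.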

Portability Testing

Portability means “the ease with which the software product can be transferred from one hardware or software environment to another”.

In the case of frameworks, it means being able to run the same tests using a different browser, Android version, screen size, etc., with the tests continuing to work. We promise such behavior to our users, so we need to check it.

[TestClass]
[Android(Constants.AndroidNativeAppPath,
    Constants.AndroidDefaultAndroidVersion,
    Constants.AndroidGalaxyDeviceName,

Nowadays, more and more companies are building test automation frameworks based on WebDriver and Appium for testing their web and mobile projects. A big part of why there are so many flaky tests is that we don't treat our tests as production code. Moreover, we don't treat our framework as a product. I will present to you all the challenges we faced during the testing of our test automation framework. First, I will describe what we had to test; then I will give you an overview of our test infrastructure: what tools we use for source control, package storage, CI and test results visualization. In the second part, we will discuss the various types of testing that we did to verify that each aspect of our framework is working as expected: functional, compatibility, integration, usability, installability and many more.
Before we talk about how to test a test automation framework, please review a couple of definitions explaining what exactly test automation frameworks are and what makes them different from software libraries.

What Do We Have to Test?

Before I share with you the details of our test infrastructure, I need to explain what we have to test. Our test automation framework is written in .NET Core in C#. This way we achieve the cross-platform requirement. Also, it has modules for API, desktop, web, Android, iOS and load testing. Generally, there are two ways to use the framework. Most of our clients use it by installing NuGet packages. (NuGet is the package manager for .NET. NuGet client tools provide the ability to produce and consume packages.) To ease the process of configuring the projects, we provide a Windows UI installer and CLI installers for Windows and OSX. The UI installer also adds various project and item templates and snippets for a better user experience in the Visual Studio IDE.

The second way of using the framework is if you are an enterprise user and have access to the full source code. The usage is slightly different, since in that edition we have removed all security protections like licensing and obfuscation. So, we need to test it separately.

Test Infrastructure 

We use Azure DevOps for CI. We also use its Git support for source control. We made different spike projects to choose the most appropriate CI and test results visualization tools. We experimented heavily with a combination of Jenkins and Allure for test reporting. However, it turned out that the Jenkins agents kept disconnecting, and the overall maintenance cost was more significant. As for Allure, it had some integration with Azure DevOps, but it didn’t work out of the box for us.

So, the most straightforward and stable setup we found is Azure DevOps build agents which run on test agent machines executing the tests; afterwards we use the built-in test results visualization/dashboarding functionality of Azure DevOps.

We store our PROD framework NuGet packages at NuGet.org. Since we release major versions with lots of new features and improvements, we needed storage for the versions under development. We store these packages at MyGet.com, which is de facto our test environment.
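The split between the two feeds can be expressed in a project's NuGet.config. A sketch, assuming the standard public feed URL; the MyGet feed name and URL here are hypothetical, not the team's actual feed:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- PROD packages come from the public NuGet.org feed. -->
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <!-- Pre-release packages under development come from the MyGet test feed
         (feed name/URL are illustrative assumptions). -->
    <add key="myget-test" value="https://www.myget.org/F/framework-test/api/v3/index.json" />
  </packageSources>
</configuration>
```

A test project can list only the MyGet source so that it always exercises the under-development packages, while client-facing samples resolve from NuGet.org.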

I mentioned that our framework is built to be cross-platform. So, we have a couple of test agent machines. Most of the tests are executed against Windows, but we also purchased an OSX machine in the cloud. You would be surprised how many problems there are when you try to run the tests for the first time against a Mac machine. Many things need to be considered, like how you store paths to files in config files and such.

Branching Strategy

Part of each successful test strategy is how you manage the versions of your source code. As mentioned, we use Git. We have one RELEASE branch which holds the current stable LIVE version of the framework. After we release a new version, we check the priorities in the backlog and road map. Based on them, we pick the top 3 features and create a new branch from RELEASE, for example LOAD_TESTING_RELEASE_5. This is the current working branch. Since the new development may continue for weeks, we still need to maintain the LIVE version, e.g. fix critical bugs or add support for new driver versions for the latest browsers. For such cases, we create one more branch called MAINTENANCE, cloned from RELEASE. We make all of the fixes and improvements there, run all of the tests against it, and if everything runs smoothly, it is merged to RELEASE. Finally, we release all packages, installers, etc. You can think of it as a service pack release.
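The flow above can be sketched with plain Git commands. This is a minimal local demo in a throwaway repository: the branch names come from the article, while the commit messages and the empty commits are illustrative stand-ins for real work:

```shell
set -e
# Throwaway repository standing in for the framework's real one.
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.email=ci@example.com -c user.name=CI commit -q --allow-empty -m "stable LIVE version"
git branch -M RELEASE                       # RELEASE holds the current stable LIVE version
git branch LOAD_TESTING_RELEASE_5 RELEASE   # working branch for the top backlog features
git branch MAINTENANCE RELEASE              # clone of RELEASE for LIVE fixes
# A critical fix lands on MAINTENANCE and, once all tests pass, is merged to RELEASE:
git checkout -q MAINTENANCE
git -c user.email=ci@example.com -c user.name=CI commit -q --allow-empty -m "support new driver version"
git checkout -q RELEASE
git merge -q MAINTENANCE
git branch
```

The final `git branch` shows all three branches, with RELEASE now containing the maintenance fix, i.e. the "service pack" state described above.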

The last piece of the puzzle is how we handle the enterprise edition that I told you about. In the other editions, we have calls to licensing services, making sure that the proper license is installed, and on each build we call a 3rd-party tool to obfuscate all of our DLLs so that nobody can reverse engineer them. For each RELEASE and FEATURE_RELEASE branch, we maintain a second copy where we merge all changes. There we have removed all of the mentioned security checks. We have builds that verify this code as well, since the merge is usually a manual operation and problems may occur. Also, the way people use the framework is slightly different.

We have a separate repository holding all packages that deliver the drivers for different browsers, demo projects and templates. We call this repository INFRASTRUCTURE_SHARE. It contains many executables and other large files. We keep it separate from the primary source code since it gets larger and larger as it accumulates versions of the executables. If we put all of this in the main repository, the builds would get much slower, downloading GBs of data each time.

Release Automation

As you can imagine, with so many moving parts (packages, VS templates, VS snippets, NuGet templates, drivers, demo projects, UI installers), it can get quite complicated not to forget about something. To avoid human errors and to speed up the release process, we strive to automate most of it. We built a tool called Release Manager which we use for these tasks.

It has a CLI version as well, so that we can use it in CI builds. It can build all projects and obfuscate them. After that, it can publish the generated packages to the specified environment, LIVE or TEST. It can produce the two types of templates from the specified branch, and much more.

In the next part of the series, I will share the different types of testing that we do.

The post How to Test the Test Automation Framework- Test Infrastructure appeared first on Automate The Planet.


In this article, part of the Design & Architecture Series, we will talk about handling test data across environments in automated tests. We will discuss why hard-coding data in tests is considered a bad practice leading to flaky tests. The solution will be to use configuration files based on the build configuration. Moreover, we will look into ways to parameterize your tests and run them multiple times based on rows in a CSV file.

Hard-Coding Test Data

Let's start by discussing why hard-coding test data in the tests is a bad practice. Here is an example from automated tests of the eBay shopping process that I developed to explain the Template Method design pattern and its usage in automated tests.

public partial class ItemPage : WebPage
{
    public ItemPage(IWebDriver driver)
        : base(driver)
    {
    }

    protected override string Url => "http://www.ebay.com/itm/";

    public void ClickBuyNowButton()
    {
        BuyNowButton.Click();
    }

    public double GetPrice() => double.Parse(Price.Text);
}

In the above code, the URL of the page is hard-coded. However, 99% of the time we need to run our tests against different test environments, where we use different URLs and data. It is not very practical to copy-paste the page object and just change the URL, right?

Another common practice is to place such data in a class full of constants.

public class ClientInfoDefaultValues
{
    public const string EMAIL = "angelov@yahoo.com";
    public const string FIRST_NAME = "Francoa";
    public const string LAST_NAME = "Sonchifolia";
    public const string COMPANY = "Moon AG";
    public const string COUNTRY = "France";
    public const string CITY = "Paris";
    public const string ADDRESS1 = "9 Rue Mandar";
    public const string PHONE = "+33186954328";
    public const string ZIP = "75002";
}

We have a similar problem with this piece of code because the accounts and the data can vary across test environments.

The above approaches lead to flaky tests in a couple of ways. I have seen many tests where the data is copy-pasted for each of them. Another bad thing that can happen, as mentioned, is duplicating some of the core logic, such as page objects. All of this copy-pasting increases maintenance time. Moreover, if part of the test data has to change at some point, you may miss replacing it if it is not in a central place.

Using Test Data from Configuration Files

Here I will show you how we can utilize .NET Core's native support for JSON configuration files to solve the problem.

The techniques that I will present are based on the code of our test automation framework BELLATRIX, where we use a similar approach for handling test environment data.

Project Configuration

First, you need to install the following NuGet packages.

  • Microsoft.Extensions.Configuration.Json
  • Microsoft.Extensions.Configuration
  • Microsoft.Extensions.Configuration.Binder
Creating Configuration Infrastructure

Next, we need to create the JSON config file where we will store the test data. I want to be able to configure three different aspects of the automated tests: WebDriver timeouts, URL settings, and billing default values. For each of them, I will create a separate section in the JSON file. Create a new JSON file named testFrameworkSettings.json:

{
    "webSettings": {
        "elementWaitTimeout": "30",
        "chrome": {
            "pageLoadTimeout": "120",
            "scriptTimeout": "5",
            "artificialDelayBeforeAction": "0"
        },
        "fireFox": {
            "pageLoadTimeout": "30",
            "scriptTimeout": "1",
            "artificialDelayBeforeAction": "0"
        },
        "edge": {
            "pageLoadTimeout": "30",
            "scriptTimeout": "1",
            "artificialDelayBeforeAction": "0"
        },
        "internetExplorer": {
            "pageLoadTimeout": "30",
            "scriptTimeout": "1",
            "artificialDelayBeforeAction": "0"
        },
        "opera": {
            "pageLoadTimeout": "30",
            "scriptTimeout": "1",
            "artificialDelayBeforeAction": "0"
        },
        "safari": {
            "pageLoadTimeout": "30",
            "scriptTimeout": "1",
            "artificialDelayBeforeAction": "0"
        }
    },
    "urlSettings": {
        "ebayUrl": "http://www.ebay.com/",
        "amazonUrl": "https://www.amazon.com/",
        "kindleUrl": "https://read.amazon.com"
    },
    "billingInfoDefaultValues": {
        "email": "angelov@yahoo.com",
        "company": "Moon AG",
        "country": "France",
        "firstName": "Francoa",
        "lastName": "Sonchifolia",
        "phone": "+33186954328",
        "zip": "75002",
        "city": "Paris",
        "address1": "9 Rue Mandar"
    }
}

For each test environment, we will have a separate copy of this file where you can change the data. For example, the initial testFrameworkSettings.json can hold the data for our DEV environment, testFrameworkSettings.Debug.json the data for the LOCAL dev environment, and so on. The structure of the files is based on the build configurations of your solution, so for each test environment you will have a separate build configuration. When we change the build configuration, the correct file will be copied to the bin folder and read by the code.

Next, we need to edit the project's MSBuild file (in VS 2019, double-click the project node) and add the following piece.

<ItemGroup>
    <None Update="testFrameworkSettings.$(Configuration).json">
        <CopyToOutputDirectory>Always</CopyToOutputDirectory>
    </None>
</ItemGroup>

Based on the configuration, the right JSON file will be copied to the output directory.

The next step is to create a class that allows us to access the values in the right config file.

public sealed class ConfigurationService
{
    private static ConfigurationService _instance;

    // The constructor is private so that the class can only be used through Instance.
    private ConfigurationService() => Root = InitializeConfiguration();

    public static ConfigurationService Instance
    {
        get
        {
            if (_instance == null)
            {
                _instance = new ConfigurationService();
            }

            return _instance;
        }
    }

    public IConfigurationRoot Root { get; }

    public BillingInfoDefaultValues GetBillingInfoDefaultValues()
    {
        var result = Root.GetSection("billingInfoDefaultValues").Get<BillingInfoDefaultValues>();
        if (result == null)
        {
            throw new ConfigurationNotFoundException(typeof(BillingInfoDefaultValues).ToString());
        }

        return result;
    }

    public UrlSettings GetUrlSettings()
    {
        var result = Root.GetSection("urlSettings").Get<UrlSettings>();
        if (result == null)
        {
            throw new ConfigurationNotFoundException(typeof(UrlSettings).ToString());
        }

        return result;
    }

    public WebSettings GetWebSettings()
        => Root.GetSection("webSettings").Get<WebSettings>();

    private IConfigurationRoot InitializeConfiguration()
    {
        var filesInExecutionDir = Directory.GetFiles(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location));
        var settingsFile =
            filesInExecutionDir.FirstOrDefault(x => x.Contains("testFrameworkSettings") && x.EndsWith(".json"));
        var builder = new ConfigurationBuilder();
        if (settingsFile != null)
        {
            builder.AddJsonFile(settingsFile, optional: true, reloadOnChange: true);
        }

        return builder.Build();
    }
}

There are a couple of important notes about it. First, it is implemented as a singleton, which means that you can access the values without creating a new instance each time. Next, the first-time initialization and loading of the file happen in the InitializeConfiguration method, where we get the first file matching the criteria. This is why it is crucial to copy the correct JSON file. Then we add the JSON file to the builder. For each section of the config, we have a separate DTO class, which is the C# representation of the data, and a distinct method that reads that section.
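To see the binding mechanism in action without touching the file system, here is a minimal, self-contained sketch that binds a urlSettings section to a DTO using the in-memory provider from the same Microsoft.Extensions.Configuration packages, standing in for testFrameworkSettings.json (the UrlSettings shape is assumed from the JSON above):

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

public sealed class UrlSettings
{
    public string EbayUrl { get; set; }
    public string AmazonUrl { get; set; }
}

public static class ConfigBindingDemo
{
    public static string GetEbayItemUrl()
    {
        // In-memory stand-in for testFrameworkSettings.json, for illustration only.
        // The "section:key" paths mirror the nested JSON structure.
        var config = new ConfigurationBuilder()
            .AddInMemoryCollection(new Dictionary<string, string>
            {
                ["urlSettings:ebayUrl"] = "http://www.ebay.com/",
                ["urlSettings:amazonUrl"] = "https://www.amazon.com/"
            })
            .Build();

        // GetSection + Get<T> performs the same binding the ConfigurationService relies on.
        var urlSettings = config.GetSection("urlSettings").Get<UrlSettings>();
        return urlSettings.EbayUrl + "itm/";
    }
}
```

A page object can then build its Url property from such a bound DTO instead of a hard-coded string.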

Here is how the class that represents the billingInfoDefaultValues section looks.

public sealed class BillingInfoDefaultValues
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string Country { get; set; }
    public string Address1 { get; set; }
    public string City { get; set; }
    public string Phone { get; set; }
    public string Email { get; set; }
    public string Company { get; set; }
    public string Zip { get; set; }
}

Day 3, Lecture Four of the specialised conference titled “Orion Symposium”. The subject’s very interesting, especially if you love freelance services and make a living from them. We’ll talk about small purple tanks for personal use. In general, they’re quite different from those used by professional armies. Those differences affect their testing, as we’ll see later on. We’ll discuss in more detail, perhaps in a separate lecture, how to test more complex professional military weapon systems (after you sign a declaration of non-disclosure of military secrets and agree to “third party processing of your personal data”).

Let’s start with the question, “What the f**k is a personal tank for freelance services?” Professional armies mainly conduct battles using autonomous weapon systems, such as drones, tanks, submarines and ships, sometimes controlled by sophisticated algorithms that do not require human intervention. But those innovations caused the financial and economic crisis of 45 tebis ago. They brought layoffs and mass unemployment among military staff. Not to mention the psychological traumas suffered by so many ex-soldiers, deprived of the means to pour out their aggression and have their egos stroked. This led to mass protests organised by all military trade unions. It was decided to allocate quotas for private armies, which would consist of freelance renegades. Now you just have to download the “War Me” application, which will show you the closest military confrontations. Join the battle and, if your involvement is successful, a certain amount of cryptodollars will be transferred to your account, depending on the amount of damage you’ve inflicted.

The Difference between Freelance Renegades and Professional Armies

As I mentioned earlier, professional armed forces consist of a set of various weapon systems run by “smart” algorithms. They necessitate the use of different testing methods. We’ll have a more detailed lecture on this, but now I’ll only outline some of the most basic points.

First, you can work as a test-algo-psychoanalyst. This is a branch of the QA profession, where you psychoanalyse the master algorithm and its subordinate algorithms. You ask it different moral questions, whether something is good or bad, and try to convince it that a certain action doesn’t follow the requirements, that it’s a bug, rather than a feature request. You have to explain to it that evaporating the jellyfish memory lake is not particularly moral. You can do so by focusing on future economic performance and the drop in share prices of gherkin factories. (If you’ve not heard of jellyfish memory lakes –  they’re the latest data storage and ultrafast cloud computing technology where, instead of using quantum computers, biomechanisms harvest the power of small Quar jellyfish. You’ll be surprised at how much computer power you need to organise your entire gherkin business. For instance, calculations of triple integrals and solutions of differential equations to predict whether a certain curve of the jar will increase the company’s market share and boost its stock price on the intergalactic exchange.)

Another difference is where and how the basic functional testing of these systems is performed, because they can very often cause quite a lot of damage. One solution is to use simulations. There are simulation systems of various complexity. The most powerful one is kept under the direct management of the Red Pepper Centaur B corporation. It develops most of the new weapon systems for the Ministry of Defence and Attack. Through offshore companies, they also effectively have a monopoly on the energy market. Their subsidiary, Stellar Energy, owns 85% of the stellar power grid. They’re building stellar power stations using Dyson sphere technology. (A sphere covering the whole area of a star and consisting of devices that absorb energy and then inject it into the intergalactic power grid.) The largest simulation complex, SIMGAL, is located near a six-star power station called Andromeda South (just like some tattoos don’t always have a very profound meaning, so does SIMGAL come from the simple contraction of “simulation of the galaxy”). Sixty percent of the energy generated by the neighbouring stations is consumed by SIMGAL. This facility simulates everything in the known parts of our galaxy with a delay of just a few days, so that all available information can be gathered and processed in real time. Where does this information come from? Very simple – from all of us. The government’s development of a human memory upgrade with a permanent record of everything that happens isn’t just for our own convenience. Rewinding the events we’ve shot makes it very easy to find out what was said and by whom 3 tebis ago. But all this information is also sent to the servers of the Ministry of Defence and Attack, where it is processed and sent to the complex. So SIMGAL has enough data to simulate almost everything that is known.
There are, of course, many other systems that track galactic space, the movement of asteroids, planets, stars and black holes, as well as all sorts of gravitational anomalies. Analysts are constantly studying the vast amount of information and predicting certain risks that may exist. Your role as a tester is to invent scenarios that use the risks and place them in the simulation. This way you can monitor what our master algorithm would do. If it doesn’t do well, you can tell the algo-psychoanalysts to talk to it. (If you ask me, this is an ideal job opportunity for you if you like leisurely jobs and hulling industrial quantities of sunflower seeds.)

Functional Testing of Tanks for Freelancers

As I’ve shown you, simulation testing is a costly business affordable only to large corporations. Not to mention that you, as a freelance renegade, won’t always have money for expensive weapon systems. A popular recent tactic used by new start-ups in the industry to gain a foothold on the market is to offer free tanks under the FREEMIUM business model.

You get your free tank and all you have to do in exchange is put up advertising banners and broadcast ads on the large screens mounted on it. When your shells destroy a building, the tank also fires a parachute-strapped package that sponsors/advertisers have filled with promotional brochures – from restoration of building insulation to laying new power lines to your new mansion if you decide to move. Roadside billboards will be personalised and you’ll need to open at least 10 ads per hour to avoid paying interest. Be careful what you buy – if you buy voodoo needles, you’ll only see such ads (from small voodoo needles to voodoo cleavers for use in serious sowing and prosperity rituals) (*you’d get somewhat depressed if the road was long).

Tanks have different aspects that need testing. For example, check if the guarantee really has sufficient coverage. It has to be adapted to different planetary gravitations because you can’t fly your tank without being able to control it.

In other words, someone needs to drive for 20 tebis and do it on a fairly large scale at that. You have to test how accurate the cannon is at different distances set in the interface: -1, 0, 1, 100, 101, int. Max, abg, $%&2. You can, of course, run isolated testing using paint-filled projectiles, rather than real shells. It goes very well with the construction of new neighbourhoods, where you can also try painting a new building green if the test is passed. You have to be careful which planet you are testing on, because in certain neighbourhoods you may be fined for making too much noise, no matter how modern and supposedly silent your tank is.

Most such tanks run on stellar energy and gravity pads that protect paving stones, surfacing materials and the environment. But if you want a full vintage feel, you can order an old diesel model with chain tracks. To be honest, I have no idea what you’d do with it, since its interface isn’t compatible with intergalactic motorways, so you’ll only be able to drive it wherever someone assembles it for you. In addition, you’ll be fined for leaving tracks on the road and for making real loud noise.

However, it’s still very popular on planets where ploughing is necessary. There’s nothing like barging in through the gate with a crash and bang on a tank. “This will always earn your workers’ respect.” (Quotation from “133 Management Tactics for Enhanced Production”.)

It’s vital to test the driver's chair. It has to be extremely comfortable. Imagine something digging into your back while you’re driving for half a tebi along motorways. You should measure the driver’s height very well. If the seat isn’t long enough, you may well get a pain in the neck or kidney. And when you think about it, these days massages and rehabilitation are quite expensive. You’ll have to go to an all-inclusive spa hotel at Hoholulu, and as you know, that’ll cost you as much as another tank. (Unless you recommend one of our vacancies to a colleague or acquaintance, as mentioned in the sponsorship message during the previous lecture. For that, you’ll get a free trip and an all-inclusive package.) But it’s much easier to just measure your seat.

The real PROD testing is done by paying a fee and having a city generated from containers or whatever site you need. Of course, the different models are tailored for different terrains and battles. It makes a difference whether you’re fighting in a city or wetlands. Best testing practices suggest you should use your temporarily cloned version to test the tank. Of course, there are consulting outsourcing companies that can do the same thing for you if you don’t have all the necessary knowledge. After all, you’re just a client who wants a tank. Androids can’t be used because, as we already told you, they only get involved in endeavours that further science in one way or another, and here there’s no way of twisting the semantics to make carrots from cabbage.

There are many more extras in personal tanks than in autonomous weapons. For example, a refrigerator and a mustard and pickle generator. (You still have to eat something through the endless stellar nights.) It’s also not a bad idea to pay for an entertainment system and a little mothernet access – at least enough to visit your favourite social media. Otherwise, you’d be gambling with your mental health. Some models include a 1-hour-per-week virtual discussion with a psychoanalyst. It’s always better to go for a check-up before you completely go off the rails.

We can give you a recommendation for our new model and if you put down a deposit now, we’ll also give you free travel insurance and 95% off on your accident insurance. (This, of course, does not apply to battles when you’re in the tank or near it*)

*(this should be read very quickly) This distance is much greater than the theoretically possible range of your tank’s guns. In the past, we have had occasions where a small djondjonbolche goes into the tank while you are in the toilet and then shoots you or runs you over by accident.

The post Testing in the Galaxy- Chapter 5: “Tank and Algorithm Testing” appeared first on Automate The Planet.


In this series, we will define the basic terms that every developer needs to know about testing. The purpose is to give all team members a shared understanding of the fundamental terminology of quality assurance and all related processes. This will improve communication and the quality of reviews, and it will further increase the testing capabilities of each member. In this part, we will talk about the basic concepts and terminology of unit testing.

As part of the professional services we provide at BELLATRIX, we consult companies and help them improve their QA process and set up an automated testing infrastructure. After the initial review process and improvement recommendations, some companies need to hire new talent to help them scale up the solutions we provided. This article is part of a training we did for a company we consulted, so that we could educate all of their developers.

Introduction

Unit testing has been floating around since the early days of the Smalltalk programming language in the 1970s. Its goal is to improve code quality while gaining a deeper understanding of the functional requirements of a class or method.

Unit of Work

Unit of Work- a unit of work is the sum of actions that take place between the invocation of a public method in the system and a single noticeable end result by a test of that system. A noticeable end result can be observed without looking at the internal state of the system and only through its public APIs and behavior.

An end result is any of the following:

  • The invoked public method returns a value (a function that’s not void).
  • There’s a noticeable change to the state or behavior of the system before and after invocation that can be determined without interrogating private state. (Examples: the system can log in a previously nonexistent user, or the system’s properties change if the system is a state machine.)
  • There’s a callout to a third-party system over which the test has no control, and that third-party system doesn’t return any value, or any return value from that system is ignored. (Example: calling a third-party logging system that was not written by you and you don’t have the source to.)

This idea of a unit of work means, to me, that a unit can span as little as a single method and up to multiple classes and functions to achieve its purpose.

public void ClearCart()
{
    var cartItems = _appDbContext
        .ShoppingCartItems
        .Where(cart => cart.ShoppingCartId == ShoppingCartId);
    _appDbContext.ShoppingCartItems.RemoveRange(cartItems);
    _appDbContext.SaveChanges();
}
Command-Query Separation

First described by Bertrand Meyer. 

Command-Query Separation- a method should be a command or a query, but not both. 

Command- a method that can modify the state of the object but that doesn’t return a value.

public void ClearCart()
{
    var cartItems = _appDbContext
        .ShoppingCartItems
        .Where(cart => cart.ShoppingCartId == ShoppingCartId);
    _appDbContext.ShoppingCartItems.RemoveRange(cartItems);
    _appDbContext.SaveChanges();
}

Query- a method that returns a value but that does not modify the object.

public List<ShoppingCartItem> GetShoppingCartItems()
{
    return ShoppingCartItems ??
           (ShoppingCartItems =
               _appDbContext.ShoppingCartItems
                   .Where(c => c.ShoppingCartId == ShoppingCartId)
                   .Include(s => s.Pie)
                   .ToList());
}
Why Is This Principle Important?

There are a number of reasons, but the primary one is communication. If a method is a query, we shouldn't have to look at its body to discover whether we can use it several times in a row without causing a side effect.
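To make the distinction concrete, here is a small illustrative sketch (not from the article's codebase) of a method that violates Command-Query Separation, next to a compliant split:

```csharp
public sealed class Counter
{
    private int _count;

    // Violates CQS: calling it twice in a row gives different results
    // because the "query" also mutates state.
    public int NextValue() => ++_count;

    // CQS-compliant version: a command that only mutates...
    public void Increment() => _count++;

    // ...and a side-effect-free query that can be called repeatedly.
    public int Current => _count;
}
```

With the split version, reading Current any number of times never changes the answer, which is exactly the guarantee the principle is after.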

End Result- We should not look into the private state of an object.

private string _shoppingCartId;

public static ShoppingCartService GetCart(IServiceProvider services)
{
    ISession session = services.GetRequiredService<IHttpContextAccessor>()?
        .HttpContext.Session;
    var context = services.GetService<AppDbContext>();
    string cartId = session.GetString("CartId") ?? Guid.NewGuid().ToString();
    session.SetString("CartId", cartId);
    return new ShoppingCartService(context) { _shoppingCartId = cartId };
}

We can also check whether a third-party system is called. In the example below, that is the _logger.

[HttpGet("{id}")]
public async Task<IActionResult> GetPie(int id)
{
    try
    {
        // ... code that retrieves pieDto omitted ...
        return Ok(pieDto);
    }
    catch (Exception ex)
    {
        _logger.LogCritical($"Exception while ... id {id}.", ex);
        return StatusCode(500, "A problem happened.");
    }
}
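One common way to verify such a callout without the real logging system is a hand-rolled fake. The types below are illustrative stand-ins, not the article's actual classes:

```csharp
using System;
using System.Collections.Generic;

// Illustrative abstraction over the third-party logger.
public interface IPieLogger
{
    void LogCritical(string message, Exception ex);
}

// Fake implementation that records calls instead of logging anywhere.
public sealed class FakeLogger : IPieLogger
{
    public List<string> Messages { get; } = new List<string>();

    public void LogCritical(string message, Exception ex) => Messages.Add(message);
}

public sealed class PieService
{
    private readonly IPieLogger _logger;

    public PieService(IPieLogger logger) => _logger = logger;

    public string GetPie(int id)
    {
        try
        {
            // Simulate the failure path of the controller action above.
            throw new InvalidOperationException("boom");
        }
        catch (Exception ex)
        {
            _logger.LogCritical($"Exception while getting pie id {id}.", ex);
            return "A problem happened.";
        }
    }
}
```

A test then asserts on fakeLogger.Messages: the noticeable end result is the call to the third-party system, not any internal state.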
Properties of a Good Unit Test

A unit test should have the following properties:

  • It should be automated and repeatable.
  • It should be easy to implement.
  • It should be relevant tomorrow.
[TestMethod]
public void ReturnCorrectDate_When_OneItemPresent()
{
    var shoppingCartService = new ShoppingCartService();
    shoppingCartService.AddToCart(new Pie(), 100);
    var item = shoppingCartService.GetShoppingCartItems().First();
    Assert.AreEqual(item.Created.Minute, DateTime.Now.Minute);
}

Anyone should be able to run it at the push of a button. It should run quickly. It should be consistent in its results.

[TestMethod]
public void ReturnCorrectDate_When_OneItemPresent()
{
    var shoppingCartService = new ShoppingCartService();
    shoppingCartService.AddToCart(new Pie(), 100);
    var item = shoppingCartService.GetShoppingCartItems().First();
    Assert.AreEqual(item.Created.Minute, DateTime.Now.Minute);
}

It should be fully isolated (runs independently of other tests).

[TestMethod]
public void AddPieToCart()
{
    var shoppingCartService = new ShoppingCartService();
    shoppingCartService.AddToCart(new Pie(), 100);
}

[TestMethod]
public void ReturnCorrectDate_When_OneItemPresent()
{
    var shoppingCartService = new ShoppingCartService();
    var item = shoppingCartService.GetShoppingCartItems().First();
    Assert.AreEqual(item.Created.Minute, DateTime.Now.Minute);
}

When it fails, it should be easy to detect what was expected and determine how to pinpoint the problem.


What Is an Integration Test?

Integration Testing- testing a unit of work without having full control over all of it, using one or more of its real dependencies, such as time, network, database, threads, random number generators, and so on.

I consider integration tests to be any tests that aren't fast and consistent and that use one or more real dependencies of the units under test. For example, if the test uses the real system time, the real file system, or a real database, it has stepped into the realm of integration testing. If a test doesn't have control of the system time, for example, and it uses the current DateTime.Now in the test code, then every time the test executes, it's essentially a different test because it uses a different time. It's no longer consistent.

[TestMethod]
public void ReturnCorrectDate_When_OneItemPresent()
{
    var shoppingCartService = new ShoppingCartService(new AppDbContext());
    shoppingCartService.AddToCart(new Pie(), 100);
    var item = shoppingCartService.GetShoppingCartItems().First();
    Assert.AreEqual(item.Created.Minute, DateTime.Now.Minute);
}

The above test is an integration test because we use two real dependencies: AppDbContext and DateTime.Now.
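To turn such a test back into a consistent unit test, the time dependency can be injected. Here is a minimal sketch with hypothetical IClock/FixedClock types that are not part of the article's code:

```csharp
using System;

// Abstraction over the system clock so tests can control time.
public interface IClock
{
    DateTime Now { get; }
}

// Production implementation delegates to the real DateTime.Now.
public sealed class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

// Test implementation always returns the same instant.
public sealed class FixedClock : IClock
{
    private readonly DateTime _now;

    public FixedClock(DateTime now) => _now = now;

    public DateTime Now => _now;
}

public sealed class ShoppingCartItemDemo
{
    public DateTime Created { get; }

    // The item asks the injected clock instead of calling DateTime.Now directly.
    public ShoppingCartItemDemo(IClock clock) => Created = clock.Now;
}
```

Now the assertion compares against the exact instant the test chose, so the test produces the same result on every run.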

Legacy Code

Regression- a regression is one or more units of work that once worked and now don’t.

Legacy Code- source code that relates to a no-longer supported or manufactured operating system or other computer technology.

Legacy Code- any older version of the application currently under maintenance.

Legacy Code- it often refers to code that’s hard to work with, hard to test, and usually hard to read.

Legacy Code- code that works. Code that has no tests!

Final Definition- Unit Test

A unit test is an automated piece of code that invokes the unit of work being tested, and then checks some assumptions about a single end result of that unit.

Control Flow

Control Flow Code- any piece of code that has some logic in it.

public decimal GetShoppingCartTotal()
{
    var total = _appDbContext.ShoppingCartItems
        .Where(c => c.ShoppingCartId == ShoppingCartId)
        .Select(c => c.Pie.Price * c.Amount)
        .Sum();
    return total;
}

Control flow code has one or more of the following: an if statement, a loop, a switch or case, calculations, or any other type of decision-making code.

public static ShoppingCartService GetCart(IServiceProvider services)
{
    ISession session = services.GetRequiredService<IHttpContextAccessor>()?
        .HttpContext.Session;
    var context = services.GetService<AppDbContext>();
    string cartId = session.GetString("CartId") ?? Guid.NewGuid().ToString();
    session.SetString("CartId", cartId);
    return new ShoppingCartService(context);
}

Properties are good examples of code that usually doesn’t contain any logic and so doesn’t require specific targeting by the tests.

public string ShoppingCartId { get; set; }
public List<ShoppingCartItem> ShoppingCartItems { get; set; }

Once you add any check inside a property, you’ll want to make sure that logic is being tested.

private string _shortDescription;

public string ShortDescription
{
    get => _shortDescription;
    set
    {
        // Illustrative guard logic: the original snippet is truncated here.
        if (string.IsNullOrEmpty(value))
        {
            throw new ArgumentException("The short description cannot be empty.");
        }

        _shortDescription = value;
    }
}

In this series, we will define the basic terms that every developer needs to know about testing. The purpose is to give all team members a shared understanding of the fundamental terminology of quality assurance and all related processes. This will improve communication and the quality of reviews, and it will further increase the testing capabilities of each member. In this part, we will talk about the different testing types.

As part of the professional services we provide at BELLATRIX, we consult companies and help them improve their QA process and set up an automated testing infrastructure. After the initial review process and improvement recommendations, some companies need to hire new talent to help them scale up the solutions we provided. This article is part of a training we did for a company we consulted, so that we could educate all of their developers.

Definitions

System Testing- the process of testing an integrated system to verify that it meets specified requirements.

Component Testing- the testing of individual software components.

Component- a minimal software item that can be tested in isolation.

Integration Testing- testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.

What does the product do?

Functional Testing- testing based on an analysis of the specification of the functionality of a component or system.

How well does the product behave?

Non-functional Testing- testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

Acceptance Testing- formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

Acceptance Criteria- the exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.

Off-the-shelf Software- a software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.

Acceptance Testing

The focus is on the customer's perspective and judgment, and the customer is actually involved. It is performed in a customer-like environment. Finding defects is not the main focus.

For the integration of a commercial off-the-shelf (COTS) software product into a system, a purchaser may perform only integration testing at the system level and at a later stage acceptance testing.

Operational Testing
  • Disaster recovery
  • Testing of backup/restore
  • User management
  • Maintenance tasks
  • Security vulnerabilities
Alpha Testing

Takes place at the developer's site. A cross-section of potential users and members of the developer's organization are invited. Developers observe the users and note problems.

Beta Testing

Sends the system to a cross-section of users who install it and use it under real-world working conditions. The users send records of incidents with the system to the development organization.

Functional Testing

The function of a system (or component) is what it does.

  • Referred to as black-box testing
  • May be performed at all levels
  • Can be done from two perspectives:
  • Requirements-based
  • Uses a specification of the functional requirements
  • Business-process-based
  • Uses knowledge of the business processes
  • Use cases are a very useful basis for test cases
Non-Functional Testing

How well or with what quality the system should carry out its function.

  • Performed at all test levels
  • Includes, but is not limited to:
  • Performance testing
  • Load testing
  • Stress testing
  • Usability testing
  • Maintainability testing
  • Reliability testing
  • Portability testing
Non-Functional Quality Characteristics 

Reliability- the ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.

Reliability Testing- the process of testing to determine the reliability of a software product.


Usability- divided into the sub-characteristics understandability, learnability, operability, attractiveness and compliance.

Usability Testing- testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.


Efficiency- the capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions.

Efficiency Testing- the process of testing to determine the efficiency of a software product.


Maintainability- the ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.

Maintainability Testing- the process of testing to determine the maintainability of a software product.


Portability- the ease with which the software product can be transferred from one hardware or software environment to another.

Portability Testing- the process of testing to determine the portability of a software product.

Re-testing (Confirmation testing)

After a defect is detected and fixed, the software should be re-tested to confirm that the original defect has been successfully removed. This is called confirmation testing.

Regression Testing
  • Retest of a previously tested program
  • Needed after modifications of the program
  • Tests for newly introduced faults as a result of the changes made to the system
  • May be performed at all test levels
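The difference between confirmation and regression testing can be sketched in a few lines. `slugify` is a hypothetical function where a defect ("spaces are not replaced") was just fixed; the first test confirms the fix, while the existing tests act as the regression suite guarding unchanged behaviour:

```python
def slugify(title):
    return title.strip().lower().replace(" ", "-")

def test_spaces_replaced():          # confirmation test: re-runs the failing case
    assert slugify("Hello World") == "hello-world"

def test_lowercased():               # existing tests form the regression suite:
    assert slugify("ABC") == "abc"   # they guard behaviour that must not change

def test_trimmed():
    assert slugify("  abc  ") == "abc"

for test in (test_spaces_replaced, test_lowercased, test_trimmed):
    test()
print("confirmation + regression passed")
```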
Maintenance Testing

Maintenance Testing- testing the changes to an operational system, or the impact of a changed environment on an operational system.

Main Types
• Adaptive maintenance (planned) – the product is adapted to new operational conditions
• Corrective maintenance (ad hoc) – defects are eliminated
Testing After Maintenance
• Anything new or changed should be tested
• Testing is needed even if only the environment is changed
• Regression testing is required – the rest of the software should be tested for side effects
• The scope is related to the risk of the change, the size of the existing system and the size of the change
• May be done at any or all test levels and for any or all test types
• Key activity – impact analysis

    Impact Analysis- the assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.

Triggers for Maintenance Testing
• Planned modifications
  • Enhancement modifications (adapting software to the user's wishes, for instance by supplying new functions or enhancing performance)
  • Changes of environment (planned operating system or database upgrades)
• Corrective and emergency changes
• Migration (from one platform to another)
  • Should include an operational test of the new environment
• Retirement of a system
  • May include testing of data migration or archiving, if long data-retention periods are required
    Static Testing

    Static Testing- testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.

    Static Analysis- analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.

Benefits of static testing include: early detection of defects prior to test execution; early warning about suspicious aspects of the code or design through the calculation of metrics, such as a high complexity measure; identification of defects not easily found by dynamic testing; improved maintainability of code and design; and prevention of defects, provided lessons are learned during development.
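A toy static-analysis pass can be written with Python's standard `ast` module: it counts branch points per function without ever executing the code. The sample source and the threshold of 3 are arbitrary illustrative values:

```python
import ast

SOURCE = """
def simple(x):
    return x + 1

def branchy(x):
    if x > 0:
        for i in range(x):
            if i % 2:
                x += i
    while x > 100:
        x -= 1
    return x
"""

BRANCH_NODES = (ast.If, ast.For, ast.While)

def branch_count(func_node):
    # Walk the function's syntax tree and count branching constructs.
    return sum(isinstance(n, BRANCH_NODES) for n in ast.walk(func_node))

tree = ast.parse(SOURCE)
for func in [n for n in tree.body if isinstance(n, ast.FunctionDef)]:
    count = branch_count(func)
    flag = "  <-- review: high complexity" if count > 3 else ""
    print(f"{func.name}: {count} branch points{flag}")
```

Real static-analysis tools compute richer metrics (cyclomatic complexity, coupling, coding-standard violations), but the principle is the same: the code is analyzed, never run.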

Dynamic Testing

    Software is executed using a set of input values and its output is then examined and compared to what is expected.

• Can only be applied to software code
• Detects defects and determines quality attributes of the code
• Dynamic Testing- testing that involves the execution of the software of a component or system

    Dynamic testing and static testing are complementary methods, as they tend to find different types of defects effectively and efficiently.

    The post Testing for Developers- Testing Types appeared first on Automate The Planet.


In this series, we will define the basic terms every developer needs to know about testing. The purpose is to give all team members a shared understanding of the fundamental terminology of quality assurance and all related processes. Later, this will improve communication and review quality, and it will further increase the testing capabilities of each member. In this part, we will talk about the fundamental test process.

As part of the professional services we provide at BELLATRIX, we consult companies and help them improve their QA process and set up an automated testing infrastructure. After the initial review and improvement recommendations, some companies need to hire new talent to help scale up the solutions we provided. This material was part of a training we delivered for a company we consulted, so that all of their developers could be educated.

    Terms

    Test Basis-  all documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.

Test Condition- an item or event of a component or system that could be verified by one or more test cases, e.g., a function, transaction, feature, quality attribute, or structural element.

    Test Design- the process of transforming general testing objectives into tangible test conditions and test cases.

Test Suite- a set of several test cases for a component or system under test, where the post-condition of one test is often used as the precondition for the next one.

    Test Log- a chronological record of relevant details about the execution of tests.

A while ago I created a detailed article describing how to create test logs: Be Better QA- Start Creating Test Logs.
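A test log is simply a chronological record of execution details. A minimal sketch, with illustrative field names rather than any standard format:

```python
from datetime import datetime, timezone

test_log = []

def log_step(test_name, action, outcome):
    # Append a timestamped record so the log stays chronological.
    test_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "test": test_name,
        "action": action,
        "outcome": outcome,
    })

log_step("login_test", "open login page", "page displayed")
log_step("login_test", "submit valid credentials", "dashboard shown")

for entry in test_log:
    print(f'{entry["timestamp"]} | {entry["test"]} | '
          f'{entry["action"]} -> {entry["outcome"]}')
```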

    Testware- artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.

    Test Planning

    Determine the scope and risks and identify the objectives of testing. Determine the test approach (techniques, test items, coverage, identifying and interfacing with the teams involved in testing, testware). Determine the exit criteria.

    Test Analysis and Design
    • Review the test basis
    • Identify gaps and ambiguities in the specifications
    • Prevent defects appearing in the code
    • Evaluating testability of the test basis and test objects
    • Identifying and prioritizing test conditions based on analysis of test items, the specification, behaviour and structure
    • Designing the test environment set-up and identifying any required infrastructure and tools

Part of test design is identifying the necessary test data to support the test conditions and test cases: initial conditions, actions, expected results.


    Test Implementation and Execution

Execute the test suites and individual test cases, manually or by using execution tools. Log the outcome of test execution. Compare actual results with expected results. Report discrepancies as incidents and analyze them in order to establish their cause.

    Re-testing- testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

    Regression Testing- testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

    Testing- the activity that initially finds failures in a software item.

    Debugging- the process of finding, analyzing and removing the causes of failures in software.

Typically, following the debugging cycle, the fixed code is tested: the fix itself is re-tested, and regression testing is applied to the surrounding unchanged software.

    Exit Criteria- the set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
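Exit criteria can be checked mechanically at the end of a test cycle. The thresholds below (95% pass rate, zero open blocking bugs) are example values that would be agreed with stakeholders, not universal rules:

```python
def exit_criteria_met(passed, total, open_blocking_bugs,
                      min_pass_rate=0.95):
    # Testing may be declared complete only if the agreed pass rate
    # is reached and no blocking bugs remain open.
    pass_rate = passed / total if total else 0.0
    return pass_rate >= min_pass_rate and open_blocking_bugs == 0

print(exit_criteria_met(passed=190, total=200, open_blocking_bugs=0))  # True
print(exit_criteria_met(passed=199, total=200, open_blocking_bugs=1))  # False
```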

    Validation vs Verification

In every development life cycle, a part of testing is focused on verification testing and a part is focused on validation testing.

    Validation

    Are we building the right system?

    Does the product (or a part of it) solve its task?

    Is this product suitable for its intended use?

    Verification

    Are we building the system right?

    Does the product meet its specification?

    Has it been achieved correctly and completely?

    The post Testing for Developers- Fundamental Test Process appeared first on Automate The Planet.


In this series, we will define the basic terms every developer needs to know about testing. First, we will start by defining what exactly testing is and discussing the basic terms. The purpose is to give all team members a shared understanding of the fundamental terminology of quality assurance and all related processes. Later, this will improve communication and review quality, and it will further increase the testing capabilities of each member.

As part of the professional services we provide at BELLATRIX, we consult companies and help them improve their QA process and set up an automated testing infrastructure. After the initial review and improvement recommendations, some companies need to hire new talent to help scale up the solutions we provided. This material was part of a training we delivered for a company we consulted, so that all of their developers could be educated.

    Definitions

Two definitions of software quality can be found in the IEEE Standard Glossary of Software Engineering Terminology:

    Software Quality- The degree to which a system, component, or process meets specified requirements.

    Software Quality- The degree to which a system, component, or process meets customer or user needs or expectations.

    Testing is a process rather than a single activity. It includes all life cycle activities such as static, dynamic testing, planning, preparation, evaluation of software products and related work products.

    Main Objectives for Testing
    • Identification: Identify defects in the software
    • Confidence: Enable stakeholders to gain confidence in the system’s level of quality
    • Decision-making: Provide information for decision-making
    • Prevention: Prevent future bugs by improving the development process and standards
    • Compliance: Help meet standards
      • Contractual or legal requirements
      • Industry-specific standards
    Cost of Failure

Correcting a problem at the requirements stage may cost $1; correcting the same problem post-implementation may cost thousands of dollars. In extreme cases, a software or systems failure may cost LIVES.

    Terms

    Anomaly- Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. [IEEE1044]

    Bug/Defect/Fault/Problem- A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Failure- the actual deviation of the component or system from its expected delivery, service or result. Also called an 'external fault'.

    Defect/Fault Masking- An occurrence in which one defect prevents the detection of another.

    How Much Testing Is Enough?

    Too much testing can delay the product release and increase the product price. Insufficient testing hides risks of errors in the final product. We need a test approach which provides the right amount of testing for the project. We can vary the testing effort based on the level of risk (technical and business risks) in different areas.

Seven Testing Principles

1- Testing Shows Presence of Defects

Testing can show that defects are present, but it cannot prove that there are no defects. Appropriate testing reduces the probability of undiscovered defects remaining in the software.

    2- Exhaustive Testing Is Impossible

Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Risk analysis and priorities should be used to focus testing efforts.
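A quick back-of-the-envelope calculation shows why. With illustrative numbers — a form with 10 independent fields, each accepting 5 valid values — the input space already approaches ten million combinations, before even considering invalid values or preconditions:

```python
fields = 10
values_per_field = 5

# Every field varies independently, so combinations multiply.
combinations = values_per_field ** fields
print(combinations)  # 9765625
```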

    3- Early Testing

    Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives. The later a bug is found – the more it costs!

    4- Defect Clustering

Testing effort should be focused proportionally to the expected, and later observed, defect density of modules. A small number of modules usually contain most of the defects discovered during pre-release testing, or show the most operational failures.

    5- Pesticide Paradox

The same tests repeated over and over again tend to lose their effectiveness, and previously undetected defects remain undiscovered. Test cases need to be regularly reviewed and revised, and new and different tests need to be written.

    6- Testing Is Context Dependent

    Testing is done differently in different contexts.

    Example: safety-critical software is tested differently from an e-commerce site

    7- Absence-of-errors Fallacy

Finding and fixing defects does not by itself help if the system built is unusable and does not fulfill the users' needs and expectations.



    The post Testing for Developers- What Is Testing? appeared first on Automate The Planet.


One of the essential tasks every QA engineer should master is how to log bug reports properly. Many people are confused about what information to include in such reports, which is why I decided to create an article discussing the crucial fields of an issue report. Moreover, we will look into bug statuses and an upgraded status workflow. I say 'upgraded' since it is a bit more complicated than usual, but I will explain why I added the additional statuses. You will also find information about bug taxonomy fields, which can later help you calculate various quality metrics that can be used to improve the QA process in the future. I will write a dedicated article about quality metrics and how to calculate and visualize them.

As part of the professional services we provide at BELLATRIX, we consult companies and help them improve their QA process and set up an automated testing infrastructure. After the initial review, we give recommendations for how to improve their QA process. Sometimes these include changes to how the automated tests are executed, or complete refactoring. However, there are times when, before upgrading the test automation, we need to improve the manual functional testing. Below you will read about the bug tracking strategy we proposed to a client of ours.

    Bug Statuses

For Triage – once a bug is created, it goes into this status. Each day, a group of senior people (senior QA + senior dev) meets to go through all bugs for triage and decide whether each one is a bug or not. If they agree that the problem is a bug, they choose what happens with it: whether it will be analyzed immediately, archived, or deferred.

Analysis – before the actual fixing comes this analysis phase. A developer reads the issue description thoroughly and starts debugging or searching for where the problem is. This is when various issue fields are populated, such as the reason for the bug (root cause analysis) and other bug taxonomy groupings which we later use for various reporting measurements. Once the problem is located, depending on the time needed for the fix, the bug goes into Fixing or Deferred status. If the problem cannot be reproduced, we move the issue to Cannot Reproduce status, and after that we start to monitor it (moving it to Monitoring status).

Deferred – we set bugs to this status if they will be considered for fixing later, e.g., they are not high enough priority to be fixed as soon as possible. If we plan new development on a certain feature, before planning all the work we can check all deferred bugs for it and see whether we will fix some of them.

Archived – we agreed that the problem is a real bug, but we don't set it to Deferred because we decided that, no matter how many times we return to it, it won't be important enough to fix. However, we keep track of all these bugs for logging purposes; they can help us decrease the number of duplicate bugs.

Communicate – some logged problems are hard to classify: it is unclear whether they are real issues or not, and most often there isn't anything related to them in the documentation or requirements. If we decide that the problem is worth the time to investigate, we first ask a product owner what he thinks and how the feature should behave.

Cannot Reproduce – when the analysis process starts, it is often hard to reproduce some of the more complex issues, even if they are perfectly described in the report. We set this status for reporting purposes only. Usually, such bugs have first been monitored for some time, if we agreed that they were worth further analysis.

Monitoring – if a bug cannot be reproduced during the analysis process so that its root cause can be found, we can agree to spend some time monitoring whether the bug gets reproduced by someone. If, for example, two weeks pass and nobody can reproduce it, we move it to Cannot Reproduce status.

Reopened – before logging a bug, we check whether such a bug has already been logged. Sometimes we find that it exists but is in Done status. In that case, instead of logging a new bug, we move the existing one to Reopened status, which saves us the time to populate all fields and, more importantly, gives us information for some quality measurements.

Fixing – we set an issue to this status once it is clear what the root cause is and we agree there is enough time to fix it.

Code Review – once the bug is fixed and tested locally by the developer, he makes a pull request and asks a colleague to perform a code review.

Integration Testing – once the bug passes the code review process, the fix is deployed to the DEV environment, where it can be re-tested/regression tested by the bug reporter, in most cases QA.

Failed Testing – during the re-testing phase it is possible to observe that the bug was not fixed. In that case it is not returned immediately to Fixing, but instead set to the Failed Testing status, which we again use to gather some quality metrics.

Integration Testing – the fix is tested again on the TEST environment, integrated with other stories under development.

For Deployment – once we verify that the bug is fixed, we can deploy it to LIVE.

    Bug Workflow
    Bug Fields
    Main Fields

Title – a meaningful, short explanation of what the problem is.

Description – a full description of the problem.

Actual Results – the observed results of the test.

Expected Results – the expected behavior of the tested functionality.

Steps to Reproduce – all steps needed to reproduce the issue: login, click the forecast button, etc.

Environment – the environment on which the problem was observed. Give all relevant details about the setup if required.

Assignee – who will be responsible for analyzing and fixing the issue.

Reporter – who reported and logged the bug.

Attachments – attach a screenshot if the problem is UI related. If the bug involves a complex workflow, record a video. You can add any relevant dumps or other files.

Priority – the level of (business) importance assigned to an item, e.g. a defect; the urgency to fix.
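The main fields above can be sketched as a simple record type. The field names mirror the list and are not tied to any particular tracker's schema; the sample values are invented:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    title: str
    description: str
    actual_results: str
    expected_results: str
    steps_to_reproduce: List[str]
    environment: str
    assignee: str
    reporter: str
    priority: int                                  # 1 (highest) .. 4 (lowest)
    attachments: List[str] = field(default_factory=list)

bug = BugReport(
    title="Forecast button does nothing on DEV",
    description="Clicking Forecast does not load the forecast widget.",
    actual_results="Nothing happens; no network request is sent.",
    expected_results="The 7-day forecast is displayed.",
    steps_to_reproduce=["login", "click forecast button"],
    environment="DEV, Chrome 120",
    assignee="dev.lead",
    reporter="qa.engineer",
    priority=2,
)
print(bug.title)
```

Making every field mandatory in the type (only attachments is optional here) is one way to enforce complete reports at logging time.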

    Priority Levels

In decreasing order of importance, the available priorities are:

1 – Blocks development and/or testing work; production could not run; causes crashes, loss of data, or a significant memory leak. Any problem that prevents the project from being built automatically receives priority 1. It requires immediate action.

2 – Major or important loss of function that needs urgent action.

3 – Low or less important loss of function that does not need urgent action.

4 – A small problem with no functionality impact.

    Severity- the degree of impact that a defect has on the development or operation of a component or system.

Severity Levels

Each severity level is described by why QA should assign it, an example, what QA should do, and what DEV should do.

Blocking
• Why: QA cannot do manual testing / smoke automation tests are failing
• Example: functional/configuration issues that lead to a yellow screen, a missing module, etc.
• QA: log a bug or send an email + verbal notification
• DEV: resolve as soon as possible

Critical
• Why: a main part of a feature is not working as expected
• Example: core functional/unusable UI issues such as a form not submitting, the Buy button not working, or a form not syncing data
• QA: log a bug + notification
• DEV: give high attention

High
• Why: it is not recommended to release without this fixed
• Example: functional/broken UI – UI differences from the design, validations, etc.
• QA: log a bug
• DEV: do this after all Blocking and Critical bugs

Medium
• Why: good to be fixed if we have time
• Example: minor functional/minor UI issues – off-the-happy-path scenarios, some responsive issues for specific resolutions/browsers, etc.
• QA: log a bug
• DEV: do this if the story's estimated time is not reached

Low
• Why: documentation purposes
• Example: URL with .
• QA: log a bug
• DEV: check this out

    Bug Taxonomy Fields

All the fields below help us categorize bugs, providing meaningful statistics and metrics which can later be used to improve the overall quality and development processes.

Root Cause Analysis – after the initial analysis, the developer who leads the fixing should describe what he found: the actual reason for the observed behavior.

Root Cause Reason – a category for grouping bugs by root cause, such as missing requirements, unclear requirements, missed during code review, insufficient knowledge of a specific technology, or something else.

Later, grouping by this field can help us spot problems in certain areas of our workflow, for example code review, the requirements phase, or testing.

Root Cause Type – a category for grouping by a more technical type: DB, UI components, API integration, and so on.

Grouping by this field can later help us see whether problems are related to a specific technical area where we can refactor or do more training.

Bug Appearance Phase – describes in which phase of the process the bug appeared: requirements, code review, DEV testing, integration testing, system tests, etc.

Later, we will use this field to see which phase helps us most, and to rethink whether we should invest in certain practices.

Bug Origin – indicates whether the bug was caught during our internal process or reported by clients.

It is used to calculate an important metric measuring the ratio of PROD bugs vs. internally caught bugs, which should usually be less than 5%.
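A sketch of that metric, using the article's 5% target; the bug counts are made-up example data:

```python
def prod_escape_ratio(prod_bugs, internal_bugs):
    # Share of all logged bugs that escaped to production.
    total = prod_bugs + internal_bugs
    return prod_bugs / total if total else 0.0

ratio = prod_escape_ratio(prod_bugs=3, internal_bugs=97)
print(f"{ratio:.1%}")    # 3.0%
print(ratio < 0.05)      # True: within the 5% target
```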

Functional Area – specifies in which area of the product the bug appeared: ticket submission, order creation, invoice generation, etc.

When we have more data, we can use it to see in which areas most bugs appear. This way we can optimize the estimation process, or propose refactoring of the code if that is the reason for the issues.

    The post How to Write Good Bug Reports And Gather Quality Metrics Data appeared first on Automate The Planet.
