Software development success is often achieved in a team of engineers, and part of this success comes from thorough, comprehensive testing. In some settings, source code is shared among everyone on a given team, but in many cases the source code is secured, and only parts of it are available to a given engineer. Imagine the case where you have a distributed team of test engineers, none of whom have access to the application’s source code. Ideally, we would like to collate the manual tests of each test engineer and the unit tests of the developers into one global coverage report, to analyze the testing efforts of the whole team. In most code coverage tools, this sort of blackbox testing is not possible: without the source code, the tool cannot produce coverage metrics. In Squish Coco, it is not only possible, but easy and quick to achieve. In this week’s tip, we’ll cover how to handle multi-user testing in Squish Coco in order to get the coverage of all tests from all engineers, even if the source code is not available to everyone on the team.

To demonstrate this, we will be using a C++ parsing program that acts as a simple calculator for standard expressions. This program is packaged with all versions of Squish Coco. In this example, we have a master user who has access to the source code, and additional test engineers who do not have access to the source code but are completing manual tests. The developer in charge of the source code has written a number of unit tests.

Unit Tests

To begin, the developer executes the unit tests:

$ make clean
$ ./instrumented make tests

We can open the coverage report within the CoverageBrowser:

$ coveragebrowser -m unittests.csmes -e unittests.csexe

In the condition coverage view, we can see the coverage is at 45.581%. Taking a slightly deeper look into one of the functions, factorial, we see a 0.000% condition coverage. It’s clear that no unit tests have been written for the factorial function. 

Distributing the Application

The developer will now ship the instrumented application to the manual testers. First, he or she instruments the application, so that the coverage of the manual tests will be recorded.

$ ./instrumented make

This generates an executable for the parser program, called parser. The developer will ship this to the testing team for further testing. Whenever this program is run, and manual tests are entered into the parser, a *.csexe file will be generated. This is the execution report. 
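For example, a tester’s session with the instrumented parser might look like this (a sketch; the prompt and output format follow the manual-test transcript shown later in this article):

$ ./parser
> 2+3
	Ans = 5

After such a session, the parser.csexe execution report appears alongside the executable.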

Creating a Blackbox Database

If the intent is to keep the source code secure, the developer can create a ‘blackbox’ database in which execution reports can be imported, but no source code is shown. This is achieved through the following:

$ cmmerge -o parser_blackbox.csmes --blackbox parser.csmes

Manual Tests

With the executable distributed to the testing team, the first test engineer issues some tests to cover the factorial function:

> 3!

	Ans = 6

> 0!

	Ans = 1

> 1.54!

	Error: Integer value expected in function factorial (col -1)

And a parser.csexe file is now generated. 

This can be opened in the CoverageBrowser:

$ coveragebrowser -m parser_blackbox.csmes -e parser.csexe      

…where “parser.csexe” is the execution report generated by running the manual tests.

We see that, by using a blackbox database, the source code is not viewable, and only a reduced amount of information is shown (e.g., the global coverage, a window for comments, etc.). Click Save before exiting the CoverageBrowser.

Merging the Results

Once the manual testers have completed their work, it is up to the developer to merge all coverage data in order to get a full scope of the coverage. 

The developer will first need to import his or her unit tests. In other words, load the execution report:

$ cmcsexeimport -m unittests.csmes --title='UNIT-TESTS' unittests.csexe

Finally, the developer can merge all reports into one, called “all_tests.csmes.”

$ cmmerge -o all_tests.csmes -i parser.csmes parser_blackbox.csmes unittests.csmes

Opening this in the CoverageBrowser, we can examine the combined results.

Note that in the bottom right-hand corner of the window, we can toggle the execution view to show coverage levels from manual tests and unit tests (located under CppUnit).

We can also verify that factorial is now 100% covered, owing to our manual tests.

Generalizing to Multiple Testers

The example above covered only one developer-tester pair, but the same steps can be generalized to multiple testers. 

To collate all results, issue the following:

$ cmmerge -o all_tests.csmes -i parser.csmes unittests.csmes parser_blackbox.csmes parser_blackboxUSER2.csmes ... parser_blackboxUSERN.csmes

Note that each user should have different names for their tests. In our example, we named the first tester’s efforts “TESTER1.”

The post Multi-User, Blackbox Testing with Squish Coco appeared first on froglogic.


A C or C++ program always includes header files that are provided by the compiler or the operating system. When they are instrumented by Coco, sometimes unexpected error messages occur.

This post explains why this happens, when this happens, and what to do about it.

A Concrete Example

Here is an example of such an error. It occurs during the compilation of the TextEdit example for Coco with MinGW on a Windows machine:

mingw32-make[2]: Entering directory 'C:/Users/Someone/textedit/textedit_v1'
csg++ -c -o textedit.o textedit.cpp
c:/mingw32/include/c++/bits/random.h:73: col 44: Warning (Squish Coco): syntax error, unexpected ';', expecting '}'
c:/mingw32/include/c++/bits/random.h:117: col 45: Warning (Squish Coco): syntax error, unexpected ';', expecting '}'
c:/mingw32/include/c++/bits/random.h:200: col 4: Warning (Squish Coco): syntax error, unexpected '}', expecting $end
c:/mingw32/include/c++/bits/random.h:6066: col 2: Warning (Squish Coco): syntax error, unexpected '}', expecting $end

More error messages follow, and the compilation fails:

Fatal Error (Squish Coco): Could not generate object file

This is a very specific kind of failure: the error messages do not occur in the source files of the project.

One can see in the first line of the compilation log that the project resides in the directory C:\Users\Someone\textedit, but the error occurs in the file C:\mingw32\include\c++\bits\random.h, which is outside the project. The file is instead a header file of the C++ standard library that comes with MinGW.

Related to this is another notable difference from other Coco errors: the error is compiler-specific. When the same project is compiled with, for example, Visual Studio, the error does not show up.

How to Handle it

As in many cases in which Coco cannot handle a file, we must exclude it from instrumentation. As we will soon see, it is better to exclude all system header files. The error messages give us the information we need.

In the example, MinGW is installed in the directory C:\mingw32, and the paths to all its system header files begin with “C:\mingw32\include”. The simplest way to exclude these files is therefore to add the compilation option

--cs-exclude-file-abs-wildcard=C:\mingw32\include\*
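One way to pass this option (a sketch, assuming a make-based build in which the CoverageScanner compiler wrapper csg++ is used, as in the log above) is to append it to the compiler flags:

# Illustrative Makefile fragment: pass the exclude option on every compile
CXX       = csg++
CXXFLAGS += --cs-exclude-file-abs-wildcard=C:\mingw32\include\*

Options prefixed with --cs- are consumed by the CoverageScanner rather than forwarded to the underlying compiler.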

But why is this necessary? And why can’t Coco handle this case on its own?

Files that are Excluded

The answer is that in most cases, Coco does exclude system header files on its own. But to exclude them, Coco needs to know where they are.

In the following situations, the location of the system header files is known to Coco, and they are excluded automatically:

  • Under Unix, when a built-in compiler like gcc or clang is used. The system headers are then in the directory /usr/include, and it and its subdirectories are excluded automatically.
  • Under Windows, when Visual Studio or MSBuild are used. Coco then knows how to find the system header files that come with the compiler and how to exclude them.

For other build systems, the system directories must be given explicitly. The most common cases are:

  • MinGW, as we have seen, and,
  • Most cross-compiler toolkits, under Linux and also under Windows.

For cross-compiler toolkits it is often also necessary to create a special version of the compiler wrapper. The documentation shows how this is done.

Why System Header Files Must Be Excluded

We have not yet explained why files like random.h cause an error.

This is a trade-off in the design of Coco. System header files may contain compiler-specific code or syntax that is never used in application code. One could extend Coco’s parser to handle these cases, but it would be:

  1. An enormous effort, because the work needs to be done for every compiler, and,
  2. Unnecessary, since the goal of a coverage measurement tool is to measure the coverage of an application and not of the system libraries.

Coco is therefore built to work with standard C and C++ code, and it automatically tries to exclude the system header files. The only disadvantage of such an approach is that sometimes Coco cannot find the header files.

What Happens When a File is Excluded

When Coco is instructed to exclude a file from instrumentation, the CoverageScanner reads it in an accelerated mode. It does not parse it and does not insert any instrumentation code in it.

When a header file is excluded, it is ignored in all cases in which it is read by the compiler, in reaction to an #include statement.

Other Reasons to Exclude Files

Even if a file can be parsed by Coco, it may be a good idea to exclude it. In C++, this is especially true for template libraries like the Standard Template Library. Such a library contains template code for data structures, like std::vector or std::map. These classes are built for efficiency, and when they are instrumented, the additional code may slow down the instrumented program considerably. This is the other reason why in the example we excluded the whole MinGW library directory tree and not just a few files with unusual syntax.

The same is true for C and C++ libraries in general. Some of them contain template data structures, and most contain inline code. (Note that newer versions of C support the inline keyword too.) They will be instrumented by default and may slow down the program; in any case, they will show up in the coverage report even if they do not belong there.

Other Languages

For completeness:

  • C# does not have an include statement, therefore the problem does not occur.
  • For QML coverage, the Qt library files are excluded by default. When the environment variable COCOQML_DEFAULT_BLACKLIST is set to 0, they are included again, as shown below.
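For example, to re-include the Qt library files in a QML coverage run, set the variable before starting the instrumented application:

$ export COCOQML_DEFAULT_BLACKLIST=0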

The post Coco and System Header Files appeared first on froglogic.


Here at QASource, our 700+ engineers work with a variety of tools daily to test various software applications in diverse industries for different businesses, from startups to Fortune 500 companies. So it’s safe to say we’ve seen our fair share of software testing tools.

Today’s trends in automated testing have most engineers looking for open-source and commercial tools that support GUI automation for a variety of technologies. So when it comes to finding the right tool for automating application test cases for such a broad technology spectrum, choices can be slim.

But Squish changes that. It is now one of the most popular tools for automated GUI regression testing. In addition to the robust cross-platform support, it provides a handy toolkit for automating web, embedded, desktop, and mobile applications—all in one place, without having to resort to a handful of different tools.

This versatility is what our QA engineers like most about Squish, in addition to these other benefits:

  1. Data- and Behavior-Driven testing. Automation teams are shifting their focus toward Data- and Behavior-Driven testing, and Squish supports a variety of data sources and is 100% compatible with the Gherkin language.
  2. Support for popular scripting languages. Squish supports JavaScript, Python, Perl, Ruby and Tcl for script authoring, allowing QA personnel to use a familiar language when writing test scripts. What’s more, these languages are vastly more expressive than trees or lists of steps, and offer the advantage of reusing a large ecosystem of existing, built-in modules.
  3. Visual Verification. Squish supports hybrid verification, using its own algorithm to verify combinations of object properties along with screenshot verification. This helps reduce false positives and allows for quick troubleshooting.
  4. Image-based testing. There are cases where Object-based recognition is not suitable due to unsupported toolkits or custom UI controls. Squish provides robust Image-based recognition methods to handle such applications. Some of these features include a per-pixel tolerance, image cross-correlation and Fuzzy Image search.
  5. Distributed Batch Execution. QA teams commonly run test suites on multiple workstations for faster execution. Squish supports distributed batch execution to help QA teams finish testing with better efficiency.  
  6. Integrated Development Environment. Squish offers its own powerful, Integrated Development Environment (IDE), which allows teams to write and debug their test scripts smoothly.
  7. Extensive integration. Squish pairs with various Continuous Integration (CI) tools like Jenkins, so batches can be scheduled from anywhere. It also integrates with other top tools like Ant and Maven.
  8. Record and playback easily. New users can easily record the script for their application and convert it into code in a supported language.
  9. Helpful customer support. An active, responsive and efficient customer support team helps resolve any questions and address customization requirements during testing.

Testing Windows, web, embedded, or mobile applications? Squish can help. froglogic offers customizable packages based on your testing needs, whether that is a combined package for Windows and Qt, Windows and web, or another combination. Squish also lets you switch between all of these application types to automate your end-to-end test scenarios.

Squish provides plenty of automation power for teams looking to test applications across multiple platforms, allowing work to get done faster and engineers to have clearer insight into reporting. Plus, the ability to contact a responsive, knowledgeable support team is a huge bonus!

We have a lot of experience with automated testing tools, and we feel confident in recommending Squish for your automated testing needs. If you’re looking for a team to deliver test automation services, we may be able to help. We’d love to learn more about your current testing project, and you can find us at QASource.com.

The post QASource: How Squish Simplifies Cross-Platform Testing appeared first on froglogic.


GUI test executions have to be configured for various purposes. Test engineers might want to configure a test to use specific test data as input to the Application Under Test. Or the Application Under Test itself has to be executed with a variation of program arguments.

While many tests are self-contained, there are cases in which the configuration is done externally, for example through a scheduler that drives the test executions, such as a Continuous Integration server. In these cases, the squishrunner executable can be used to pass arguments to the test scripts. The test script may access those arguments and adapt its behavior based on their values.

Passing Script Arguments to squishrunner

The squishrunner option that supports this is called --scriptargs. It has to be specified at the very end of the complete squishrunner command call. All arguments which are specified after --scriptargs are passed to the executed script.

The following example shows a squishrunner call that passes two arguments to the test script:

./squishrunner --testsuite /path/to/suite --testcase tst_case1 --reportgen xml3.2,/path/to/result --scriptargs argument1 argument2

Accessing these arguments in the test script depends on the scripting language that is used. Assuming that Python is used, we need to use the sys module which provides a list of command line arguments through the argv variable. The following snippet shows how to log all arguments to the Squish Test Result. Visit the Squish documentation to learn how to access arguments in JavaScript test scripts.

import sys

def main():
   test.log('Number of arguments: ' + str(len(sys.argv)) + ' arguments.')
   test.log('Argument List: ' + str(sys.argv))

When executing the snippet above, you will notice that the first field, argv[0], holds the name of the script itself. This is common practice in many scripting languages. The first argument we pass to the squishrunner --scriptargs option can be accessed through argv[1].
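To make the indexing concrete, here is a minimal sketch (the logged messages are illustrative):

import sys

def main():
    # sys.argv[0] holds the test script itself; --scriptargs values start at index 1
    test.log("Script name: " + sys.argv[0])
    if len(sys.argv) > 1:
        test.log("First script argument: " + sys.argv[1])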

Configuring an Example Application through Script Arguments

Let’s get back to one of the initial examples: we want to pass program arguments to configure the AUT. Let’s assume these arguments are --nolauncher and --autoconnect. The command for such an application startup could look like this:

./exampleApplication --nolauncher --autoconnect

To pass these arguments to the test script, the squishrunner call would look like this:

./squishrunner --testsuite /path/to/suite --testcase tst_case1 --reportgen xml3.2,/Users/flo/tmp/result --scriptargs --nolauncher --autoconnect

Finally, we need to access the arguments in the test script itself and pass them to the startApplication call:

import names
import sys

def main():
   autCommand = "exampleApplication"
   for i in range(1, len(sys.argv)):
      autCommand = autCommand + " " + sys.argv[i]
   startApplication(autCommand)
   ...

The post GUI Test Configuration Through Script Arguments appeared first on froglogic.


Students at Collegium Balticum in Poland participated in a two-day intensive course on the Squish GUI Tester, led by Dr. Marcin Caryk of infinIT Codelab. During the lectures, students first learned the basic Squish functions, including creating test suites and test cases and recording their first test scripts. On the second day, course attendees began writing their own tests in Python using built-in Squish libraries, and explored the Object Map, various UI controls, and Squish’s more advanced functions in greater depth. Students reported being happy to learn and use the tool, noting especially the breadth of applications it covers. Lectures will take place again with a new set of students next year, and we at froglogic look forward to hearing feedback from the next round of attendees.

The post “Squish is a powerful tool.” Students at Collegium Balticum Get Their First Training on the Squish GUI Tester appeared first on froglogic.


Python Virtual Environments are great tools for separating different Python configurations on the same machine. Configuring Squish to use a virtual environment gives you the freedom to install packages and make changes that aid your testing efforts without compromising your existing Python installation(s).

Here we will show you how to set up Squish to use a Python Virtual Environment for testing.

First, install Python (version 3.3 or higher) to use the virtual environment functionality. For Squish to work with your Python installation, it must be compiled against the same Python version. This is done either by asking our support staff to prepare a binary Squish edition with that Python version or by compiling Squish from source.

A quick note on the configuration flags for compiling Squish with Python 3:

$ <SQUISH_SOURCES>/configure … --disable-python --enable-python3 --with-python3=<PYTHON3_DIR_PATH>

At this point, make sure that the environment variables (PATH, PYTHONPATH, PYTHONHOME) point towards the Python installation from which you are creating a virtual environment. Then you can execute:

$ pip install virtualenv
$ python -m venv <PYTHON_VENV_PATH>

(The venv module used here ships with Python 3.3 and later, so installing the separate virtualenv package is only needed if you prefer that tool.)
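Once created, the environment can be activated and populated without touching the system Python (a generic sketch; the package name is just an illustration):

$ source <PYTHON_VENV_PATH>/bin/activate
(venv) $ pip install requests

On Windows, the activation script lives under <PYTHON_VENV_PATH>\Scripts\ instead.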

After the virtual environment is created, configure Squish to use it by altering the paths.ini in <SQUISHDIR>/etc/.

In LibraryPath change @(SQUISH_PREFIX)/python to <PYTHON_VENV_PATH>/bin
(<PYTHON_VENV_PATH>/Scripts on Windows).
Set Scripting/PythonHome to "<PYTHON_VENV_PATH>".
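Put together, the relevant entries in paths.ini might then look roughly like this (an illustrative sketch only; the key names come from the two steps above, and the rest of the file layout may differ in your installation):

LibraryPath = "...;<PYTHON_VENV_PATH>/bin;..."
[Scripting]
PythonHome = "<PYTHON_VENV_PATH>"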

More detailed instructions on how to change the Python installation used by Squish are available in the Squish documentation. Now you are good to go!

The post Setting Up a Python Virtual Environment with Squish appeared first on froglogic.


In some scenarios, you might need to test multiple instances of your application at the same time, for example if multiple desktop clients access a server. The Squish GUI Tester is capable of handling such scenarios from within one test script. We will demonstrate how to write a test script for this with the network-chat example from Qt.

The Interface of the Chat

You can identify the application you are sending your events to by using the ApplicationContext API. To change the current context, you call setApplicationContext with the new context.

def main():
    main = startApplication("network-chat")
    sub = startApplication("network-chat")
    snooze(2)
    setApplicationContext(main)
    message = "Hello. How are you?"
    sendMessage(message)
    setApplicationContext(sub)
    verifyMessageSent(message)

def verifyMessageSent(message):
    textedit = waitForObject({"type": "QTextEdit"})
    raiseWindow()
    if message in str(textedit.plainText):
        test.passes("\"" + message + "\" was sent")
    else:
        test.fail("Message was not sent.", message)

def sendMessage(message):
    lineedit = waitForObject({"type": "QLineEdit"})
    mouseClick(lineedit, 43, 3, Qt.NoModifier, Qt.LeftButton)
    type(lineedit, message)
    type(lineedit, "<Return>")

This setup requires a snooze, because the Qt example takes some time to register the other instance. Without it, some test runs end with the message not being sent correctly.

But if you now execute a mouse click on the message input field, it might fail because another window is in the way. When startApplication starts the processes for the two instances, the windows are stacked on top of each other. It is therefore necessary to bring one instance to the foreground.

Bringing the Correct Window to the Foreground

If you want to automate the instances correctly, it is mandatory that there is no window blocking your AUT. Moving one instance to the side would be one way, but it is easier to just bring the window of the active context to the foreground.

The code for this is simple. For Python you need a workaround to call the raise function, because raise is a reserved keyword in this language.

def raiseWindow():
    o = waitForObject({"type":"ChatDialog"})
    o.show()
    getattr(o, "raise")()
    o.activateWindow()

If you have more instances you are working with, it is better to adjust your functions to switch to the right context. You can see a modified version of the verifyMessageSent function below.

def verifyMessageSent(context, message):
    setApplicationContext(context)
    textedit = waitForObject({"type": "QTextEdit"})
    raiseWindow()
    if message in str(textedit.plainText):
        test.passes("\"" + message + "\" was sent")
    else:
        test.fail("Message was not sent.", message)

We have made the same adjustment to the sendMessage function, which you can see in the complete code at the end of this article.

Now you can properly switch between, and automate, two instances of the same application.

In case you want to perform the test with more instances, you can get the other context references when you start the instances. Another option would be to use the function applicationContextList and iterate over the available application context objects.
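For example, a check could then be run against every running instance in one go (a minimal sketch using the applicationContextList function mentioned above):

def verifyOnAllInstances(message):
    # Run the verification once per running application context
    for context in applicationContextList():
        verifyMessageSent(context, message)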

Conclusion

You can see the complete code for the test case below. Note that the chat program recognizes every Ethernet adapter as a potential client, so a message may be received more than once; the verifyMessageSent function fails if the client did not receive the message exactly once. When using this for other applications, you need to adjust the real names; otherwise Squish will not find the right objects.

import names

def main():
    main = startApplication("network-chat")
    sub = startApplication("network-chat")
    snooze(2)
    message = "Hello. How are you?"
    sendMessage(main, message)
    verifyMessageSent(sub, message)

def verifyMessageSent(context, message):
    setApplicationContext(context)
    item = waitForObject({"type": "QTextEdit"})
    raiseWindow()
    if message in str(item.plainText) and str(item.plainText).count(message) == 1:
        test.passes("\"" + message + "\" was sent")
    else:
        test.fail("Message was not sent correctly.",
                  "Message sent: " + str(message in str(item.plainText)) +
                  "\nMessage received once: " + str(str(item.plainText).count(message) == 1))

def sendMessage(context, message):
    setApplicationContext(context)
    mouseClick(waitForObject({"type": "QLineEdit"}), 43, 3, Qt.NoModifier, Qt.LeftButton)
    type(waitForObject(names.chatDialog_lineEdit_QLineEdit), message)
    type(waitForObject(names.chatDialog_lineEdit_QLineEdit), "<Return>")

def raiseWindow():
    o = waitForObject({"name": "ChatDialog", "type": "ChatDialog"})
    o.show()
    getattr(o, "raise")()  # 'raise' is a reserved keyword in Python, hence getattr
    o.activateWindow()

The post Testing Multiple Instances of the Same Application appeared first on froglogic.


The ability to work with remote devices is one of Squish’s core features. While extremely powerful, it does not come without disadvantages. Interactive test script recording requires interaction with both the Squish IDE and the AUT, and debugging a faulty test script usually requires a view of the current AUT state. This can be tedious if the two are running on different machines, and it can become especially difficult if both systems are not in the same physical location.

Common Problems in Existing Remote Control Tools

Many platforms support a variety of dedicated remote control tools that could be used alongside Squish to resolve that problem. However, these tools come with some problems of their own:

  • Interoperability. If the controlled and the controlling platform are radically different, it may be difficult to find software that works on both systems.
  • Squish compatibility. Both remote control applications and the Squish GUI Tester use artificial UI events for their operation. This can lead to unwanted interference between them.
  • Setup. A separate set of tools may require a lot of work to become operational. This may include purchasing the license(s), installation and configuration, network setup, etc. It may need to be redone each time a new test system is added.
  • Availability. If you are testing a multi-platform AUT, you will probably need remote access to various kinds of test systems. In case the chosen software does not support some of them, you may be forced to use a heterogeneous solution which includes separate pieces of software and is difficult to use and maintain. Some embedded and mobile platforms do not offer any remote control software at all.

To help overcome some of these problems and to speed up the development of tests, the upcoming Squish release will include a remote control feature specifically tailored for GUI testing.

An Android AUT controlled by Squish IDE

The feature is designed as a testing aid and requires a connection to a working AUT on the target system. The data required for its operation is embedded within regular Squish communication. This means that the remote control can be used with minimal setup effort. In most cases, it should be just one click away.

Limitations

The comfort of working with a far-off device over remote control depends directly on the available bandwidth. Despite the lossless compression used on the video stream, slow connections may prove insufficient for comfortable work. In such a case, you may opt for lossy compression of the image data, which requires less data to be transferred at the cost of some image distortion. The screenshots used for image search and verification are still sent using lossless compression – just as you’re used to.

Remote control should be available on any platform supported by Squish. However, on some of the platforms it cannot be used without additional setup.

As far as we know, no currently available Wayland compositor offers features required for remote controlling the user session. We are currently working on a set of plugins for all popular compositors. In order to access remote systems using Wayland, a corresponding plugin will have to be installed and enabled.

Remote control of a Gnome shell Wayland desktop with the froglogic plugin

Summary

The ability to see and to control remote test systems will let you avoid the need to move between different physical test setups or to install and maintain additional software. It will always remain compatible with Squish, and it will make recording and inspection of tests on multiple target systems easier and faster. It will grant the test writers instant access to devices in remote locations and minimize the need for interaction with the physical controls of the device.

The post Upcoming Squish Feature: Remote Control of Test Devices appeared first on froglogic.


Sometimes we can’t launch our application from our test case, which is often the case if we perform tests on embedded devices, where the AUT is launched during device startup.

In such cases, we simply attach to the AUT at the beginning of each test. At the same time, we want to keep our test cases independent, so bringing back the AUT to the initial state is part of the cleanup of the previous test case.

Unfortunately, if something unexpected happens during execution, our AUT might get stuck in an unknown state or become frozen. The only way to bring it back to the desired point may be to restart the device.

In the following Python example, we show one possible implementation of this approach.

The rebooting will happen in the init() function. This way we are independent of the previously executed test outcome.
We will use the paramiko Python module to establish an SSH connection and execute a reboot command on the device. The paramiko module is not part of the Squish Python package, but can be easily installed with pip.
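For example:

$ pip install paramiko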

  1. Define the init() function, which is called automatically before a test case’s main() function:
def init():
	test.startSection("Test initialization")
	attachToApplication("AUT")
	if is_initial():
		test.log("AUT is in the initial state. Continue test execution.")
	else:
		test.log("AUT is not in the initial state. Reboot test environment")
		reboot()	
		waitForAut(timeout_s=20)			
	test.endSection()

2. Define the is_initial() function that checks whether the AUT is in the initial state:

def is_initial(timeout_s=20):
    """ custom code that checks if your application is in the initial state """
    try:
        waitForObject(names.initial_screen, timeout_s*1000)
        return True
    except:
        return False

3. Define the reboot() function that connects to the embedded device over SSH and sends a reboot command:

import paramiko

def reboot():
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(hostname=DEVICE_IP, username=DEVICE_USERNAME, password=DEVICE_PASSWORD)
    ssh_stdin, ssh_stdout, ssh_stderr = ssh.exec_command("sudo shutdown -r now")
    if ssh_stdout.channel.recv_exit_status() > 0:
        raise RuntimeError("AUT reboot failed")

4. Define the waitForAut() function that waits for the device to finish rebooting and for the AUT to become available for attaching again:

from datetime import datetime

def waitForAut(timeout_s=20):
    start = datetime.now()
    while True:
        try:
            attachToApplication("AUT")
            return  # attached successfully, the AUT is up again
        except RuntimeError:
            if (datetime.now() - start).total_seconds() > timeout_s:
                raise RuntimeError("Can't attach to the AUT")

Please note that you can use a global cleanup() and init() to implement this solution.

The post Rebooting a Remote Test Environment From a Test Script appeared first on froglogic.


Skyguide, headquartered in Geneva, Switzerland, is a company with a longstanding history of contributing to the development of Swiss aviation. Today, skyguide provides air navigation services for Switzerland and its adjacent countries, ensuring the safety of civil and military air navigation through management and monitoring of the Swiss airspace. Providing safe, reliable and efficient air navigation, skyguide guides a record of over 1.2 million flights a year through Europe’s most complex airspace.

“What’s really key for us, having to do end-to-end integration testing and not normally having access to all the source code, is a tool like Squish that can talk to an application on Linux and one on Windows…it provides exactly what we need.”

We sat down with Mr. Duncan Fletcher and Mr. Geoffroy Carlotti, two Test Automation Engineers at skyguide, to learn about their company’s longstanding history of using Squish to test a diverse set of applications. Engineers at skyguide follow a Behavior-Driven Development (BDD) paradigm for their automation efforts. That is, a methodology that centers around stories written in a common language that describe the expected behavior of an application, allowing technical and non-technical users to participate in the authoring of feature descriptions, and therefore tests. The engineers we spoke to are responsible for foundationally defining this BDD framework for other teams within the company, thereby allowing technical and business people to participate in test automation.

“The fact that we’ve reduced the needed framework down to one tool is one reason we chose Squish.”

Engineers at skyguide are no strangers to advanced automation techniques using Squish. In one application they test, described Fletcher and Carlotti, they use a multi-pronged approach, combining localized OCR with localized image search and Windows object recognition. This application, written in C++ and running on Windows, is essentially the flight radar system displayed in front of air traffic controllers. An important detail of this radar system is the algorithm by which a flight shows onscreen. As Mr. Carlotti noted, there are hundreds of rules to take into account to display a flight properly on a radar screen. Even the color of the flight data follows certain rules to avoid drawing attention away from the air traffic controller looking at the screen. One benefit brought by Squish was that the process to test this application became streamlined via automation. “It’s impossible to test all these cases manually, so this is huge,” reported Mr. Carlotti. The engineers noted that, in general, the applications tested within the company are highly diverse, which in turn made Squish stand out to them for its ability to test such a varied set of applications within one framework.

“Another huge benefit is that [with BDD] there is living documentation.”

Both technical and non-technical project stakeholders benefit from the BDD approach set up by the engineers at skyguide. At a fundamental level, Mr. Fletcher and Mr. Carlotti are developing the BDD framework to be available both to testers and to those who write requirements. In this way, each person on the team can view the test results, understand them and react accordingly. A forward-looking goal for these engineers is to involve more end-users and business people in the BDD scenarios; that is, to approach GUI testing in a way that is holistic in its setup and comprehensive in its involvement of all sides of the business. Mr. Fletcher noted that, while the team still does a good portion of manual tests, skyguide focuses increasingly on automation. Wrapping up our insightful meeting, Mr. Fletcher and Mr. Carlotti praised the excellent customer support from our technical team and noted that they greatly looked forward to the next major release of Squish, version 6.5.

The post Creating Safer Skies: How skyguide Uses the Squish GUI Tester to Improve Safety and Efficiency of the Swiss Airspace appeared first on froglogic.
