Continuous Testing Blog for Mobile And Web Apps - Eran Kinsbruner
Eran Kinsbruner is an influential blogger and speaker at global conferences such as StarEast, StarWest, DevOps West, AndroidSummit, Eurostar, Automation Guild, and QAI Quest. Through this blog, he aims to provide valuable practices to developers and testers at organizations large and small.
We are already deep into 2019, and it is quite clear that this year is set to be pivotal for multiple growing trends and technologies.
In this short blog, I’d like to “unfold” (hint, hint) some of them, and highlight both their benefits and the impact they might have on software testing.
1. Foldable Smartphones
2. Android Q and iOS 13 Releases
3. Transition to 5G Networks
4. The Rise of AI and ML in Test Automation
This trend, while debatable in its ability to deliver on expectations, is becoming reality with the Huawei Mate X launch this June and the formal support from Google in its upcoming Android Q release.
When considering the tests needed to cover foldable smartphones, we ought to look into the following:
Multiple device/OS combinations and form factors on which your app needs to be installed, run, and updated
The varying level of support for multi-window/multi-resume functionality across Android OS families (see figure below)
App types: native vs. responsive web (especially for Android; Android tablets are not as popular as iPads, hence the experience and app quality are unknown)
Competing resources: battery and memory consumption when three apps run in parallel in the foreground
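To make the multi-window point concrete, a posture-change check can be sketched as follows. This is a minimal illustration only: `FoldableDriver` is a hypothetical stand-in for a real Appium driver on a foldable device, not an actual API.

```python
# Hypothetical sketch: verify that the layout stays usable across a
# fold/unfold resize (multi-window / multi-resume). FoldableDriver is an
# illustrative stub, not a real Appium class.

class FoldableDriver:
    """Minimal stand-in for an Appium-like driver on a foldable device."""
    def __init__(self):
        self.window = {"width": 840, "height": 1960}  # unfolded posture

    def fold(self):
        self.window = {"width": 420, "height": 1960}  # folded (cover screen)

    def get_window_size(self):
        return dict(self.window)

def layout_is_usable(size, min_width=320):
    # A real test would locate key elements; here we only gate on viewport width.
    return size["width"] >= min_width

driver = FoldableDriver()
assert layout_is_usable(driver.get_window_size())  # unfolded
driver.fold()
assert layout_is_usable(driver.get_window_size())  # folded
```

In a real suite, the same assertion set would run once per posture, with the resize triggered on a physical foldable or an emulator profile.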
I was happy to contribute recently to an article published by Adrian Bridgwater of Forbes on the complexities foldables will introduce to app developers, testers, and the mobile space in general.
Some insights on what to cover in apps running on foldables are below, taken from a live webinar that I ran with Perfecto a few weeks back.
Android Q and iOS13 Releases
After a few years of stagnation from an innovation standpoint, iOS13 together with Android Q (10) is about to introduce changes that can boost the digital transformation.
These include the above-mentioned foldable support, better AI/ML/AR/VR APIs and support, UI/UX usability and productivity features (see the bubbles support in Android below), consolidation of tablet and smartphone apps (mainly iOS), and more.
Android Q will also allow users to create a shareable QR code to share their Wi-Fi networks with friends.
Additional privacy, security, performance and bug fixes are also going to be part of Android Q, as well as iOS13.
As for iOS13-specific changes, we should expect a bunch of new features, but also potential EOL for legacy devices, as listed in the summary visual below.
Transition to 5G Networks
While mobile networks have evolved, and LTE is a dominant network around the globe, we are seeing wide deployments of the new 5G standard.
The new network is set not only to boost network performance and speed, but also to enable new technologies around media streaming, smart home, IOT, AR/VR, and many more.
This introduces new opportunities for app developers across market verticals, as well as challenges in ensuring smooth UX/performance and compatibility across the different generations of network platforms (3G, 4G, etc.).
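A quick back-of-the-envelope model shows why performance budgets should be re-validated per network generation. The throughput figures below are rough assumptions for illustration, not measurements.

```python
# Rough sketch: the same 5 MB payload under assumed 4G vs 5G throughput.
# Throughput numbers are illustrative assumptions only.

def transfer_seconds(payload_mb, mbps):
    # Convert megabytes to megabits, then divide by megabits-per-second.
    return (payload_mb * 8) / mbps

PAYLOAD_MB = 5
t_4g = transfer_seconds(PAYLOAD_MB, mbps=30)    # assumed typical LTE
t_5g = transfer_seconds(PAYLOAD_MB, mbps=300)   # assumed early 5G deployment

assert t_5g < t_4g
print(f"4G: {t_4g:.2f}s, 5G: {t_5g:.2f}s")
```

The point is not the exact numbers but the test implication: UX timing thresholds tuned for 3G/4G conditions need to be revisited, and ideally tests should run under simulated network conditions for each generation.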
The Rise of AI and ML in Test Automation
Testing is one of the biggest bottlenecks and challenges for teams who are trying to mature their Agile and DevOps processes. Tests are too slow to automate, flaky and inconsistent in their results, cannot cover sufficient functionalities and more.
We are seeing a clear adoption and growth in tools that support AI/ML algorithms aiming to stabilize the overall test automation through better object locator maintenance, ease of test creation through smart recording and more.
With all of the above, 2019 is set up perfectly to drive the market to new heights with regard to both test automation capabilities and digital technologies.
There’s no doubt that the test automation space is undergoing transformation. Machine Learning (ML), Deep Learning and Artificial Intelligence (AI) are being leveraged more and more as part of the test authoring and test analysis.
While the space is still growing from a maturity stand-point, it is a great time for practitioners (developers and test engineers) to start understanding the key use cases and implications of ML/AI on the overall test automation.
Organizations today look at ML and AI to help solve and remediate issues around the following:
Object recognition and handling dynamic object locators
Transcribe speech – Accessibility and similar use cases
Make quality related decisions based on data
Identify Trends and/or Patterns within the DevOps pipeline
Security use cases – Identify signatures in apps and documents e.g.
Not all ML/AI solutions offer codeless test authoring; there are solutions like Test.AI that offer AI capabilities for Appium test coding through additional lines of code.
My colleague Uzi Eilon (CTO at Perfecto) and I prepared the following comparison table showing some differences in the workflow and skills required to use ML/AI codeless (in most cases) test automation tools versus traditional test frameworks like Selenium, Appium, Espresso, etc.
As can be seen in the above table, there isn’t one solution that is better than the other, but rather a set of requirements a practitioner or a team needs to address prior to either switching to a new model, sticking with the current one, or combining both.
To highlight a few benefits of codeless testing, we can look at the speed of test authoring, the lower bar from a skillset perspective, and more stable test scenarios due to self-healing code and object identification.
To highlight benefits of the traditional test automation space, we can look at a more mature ecosystem, better integrations and open-source hooks, and a higher skillset that enables greater coding flexibility, advanced testing types, and support for all kinds of apps.
In this blog there is no decision or bottom line other than this: be familiar and aware of the market transformation, start thinking of the use cases that slow you down, and see which tools in the AI/ML space can be used to address these pains.
I started running my “Continuous Testing Blog” in May 2012 with a key objective to focus on practices and guidelines for mobile app testing. As the market evolved and I gained more experience and accomplishments, I grew this blog to cover both mobile apps and web apps (responsive, progressive).
So, what is so great about the 2nd book? – In one word – Everything!
I developed this book in collaboration with some of the brightest leaders in the DevOps and testing industry, including key experts from CloudBees, Tricentis, Testim.IO, Test.AI, NowSecure, and Perfecto (a few experts from Perfecto: Brad Hart, Yoram Mizrachi, Tzvika Shahaf, Rotem Kaner, Genady Rashkovan, Roy Nuriel), and with individuals like Joe Colantonio, Jonathan Lipps, Nikolay Advodkin, Greg Sypolt, T.J. Maher, Brian Reed, Mike Lyles, and Alan Page.
Another great thing about this book is that all of the profits from it go to the code.org organization to support a great cause.
How To Read the Book?
The book addresses all DevOps practitioners, including software developers, testers, operations managers, and IT/business executives. It covers almost every digital platform for testing, including real mobile devices and emulators/simulators, desktop web (responsive, progressive), IoT, OTT, and chatbots, and touches on various testing techniques like BDD, ATDD, exploratory testing, and security testing, with deep dives into frameworks like Espresso, XCUITest, Appium, Gauge, React Native app testing, and many more.
It consists of the following four sections:
Fundamentals of Continuous Testing The first section of the book is fully focused on the definition of CT, the ways to build a CT plan and measure it, the role of each testing methodology within the DevOps pipeline, the continuous operation place within DevOps, the ways that practitioners can leverage smart test data to drive business-critical decisions, and in addition, an advanced overview on orchestrating the entire DevOps pipeline
Continuous testing for web apps This section is fully dedicated to continuous testing of web applications that includes responsive web (RWD), progressive web apps (PWAs), leveraging headless browsers for testing web apps, accessibility testing 101, introduction to an innovative BDD framework (Gauge) for web testing, and more
Continuous testing for mobile apps This section is focused on advanced mobile native apps techniques that cover the leading frameworks like Appium, Espresso, XCUITest, and in addition, provides a practical guide to testing complex react native apps using Appium.
Advancing continuous testing The final section of the book is all about the future of continuous testing with a focus on machine learning and artificial intelligence and their roles in enhancing traditional testing practices. Also, this section addresses testing for IOT and OTT devices, and leveraging mock test data for CT.
Truly exciting times for me and the entire industry.
With this blog, I wish again to extend my great appreciation and thanks to the above-mentioned companies and individual contributors who made this journey a success.
For those of you who are going to purchase and read the book, I would appreciate it if you could help by posting a review on Amazon.
As of the time of writing this blog, the book is positioned as #2 in the Hot New Releases books under the Software Development category on Amazon.
We keep hearing about new solutions for test automation and continuous testing. Such solutions aim to speed up test automation authoring as well as reduce the maintenance associated with these tests as the product evolves.
With this trend, many software quality engineers, SDETs, and test automation architects are asking themselves whether their jobs are at risk and what the future holds for them.
Prior to answering these questions and re-profiling the modern tester's role, let's examine the terms ML/AI, the algorithms behind these tools, and the tools landscape as of today.
Artificial Intelligence: Sometimes called machine intelligence, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans and other animals. In computer science AI research is defined as the study of “intelligent agents”: any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals. The term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.
Machine Learning: Is a subset of artificial intelligence in the field of computer science that often uses statistical techniques to give computers the ability to “learn” (i.e., progressively improve performance on a specific task) with data, without being explicitly programmed. In a recent blog post from Mabl, ML was also defined as follows: “Machine learning is the process of continuously presenting a machine with a well-defined data sample so that behavior can be developed.”
Common Methods for Developing ML/AI
Gradient Descent – A first-order iterative optimization algorithm for finding the minimum of a function. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or approximate gradient) of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.
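The iteration described above can be sketched in a few lines. This is a toy example on a one-dimensional quadratic whose minimum is known, just to show the "step against the gradient" rule.

```python
# Minimal gradient descent on f(x) = (x - 3)**2, whose minimum is at x = 3.

def gradient_descent(grad, x0, lr=0.1, steps=200):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step proportional to the negative gradient
    return x

grad = lambda x: 2 * (x - 3)       # derivative of (x - 3)**2
x_min = gradient_descent(grad, x0=0.0)
assert abs(x_min - 3.0) < 1e-6     # converges to the known minimum
```

Flipping the sign of the update (`x += lr * grad(x)`) gives the gradient ascent variant mentioned above.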
Convolutional Neural Networks – In machine learning, a convolutional neural network (CNN, or ConvNet) is a class of deep, feed-forward artificial neural networks, most commonly applied to analyzing visual imagery. The convolutional layer is the core building block of a CNN. The layer’s parameters consist of a set of learnable filters, that have a small receptive field, but extend through the full depth of the input volume. During the forward pass, each filter is convolved across the width and height of the input volume, computing the dot product between the entries of the filter and the input and producing a 2-dimensional activation map of that filter. As a result, the network learns filters that activate when it detects some specific type of feature at some spatial position in the input.
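The convolution described above can be illustrated with a toy "valid" 2-D convolution in pure Python (strictly speaking, a cross-correlation, which is what CNN layers actually compute). The filter here is a hand-made vertical-edge detector, so the activation map lights up where a dark-to-bright boundary appears.

```python
# Toy "valid" 2-D convolution: slide a small filter over the input and take
# dot products, producing an activation map of that filter.

def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# Dark (0) on the left, bright (1) on the right; the edge is in the middle.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]   # fires on a vertical dark-to-bright transition
print(conv2d_valid(image, kernel))  # → [[0, 2, 0], [0, 2, 0]]
```

A CNN learns many such filters from data instead of hand-coding them, but the sliding dot-product mechanics are the same.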
Backchaining is a technique used in teaching oral language skills, especially with polysyllabic or difficult words. The teacher pronounces the last syllable, the student repeats, and then the teacher continues, working backwards from the end of the word to the beginning. For example, to teach the name ‘Kinsbruner‘ a teacher will pronounce the last syllable: – ner, and have the student repeat it. Then the teacher will repeat it with ––bru– attached before: –bru-ner, after which all that remains is the first syllable: Kins-bru-ner
Lookahead-Based Algorithms – Used for induction of decision trees, allowing a trade-off between tree quality and learning time.
Forward-Backward Algorithm – An inference algorithm that computes the posterior marginals of all hidden state variables given a sequence of observations. The algorithm makes use of the principle of dynamic programming to efficiently compute, in two passes, the values that are required to obtain the posterior marginal distributions. The first pass goes forward in time while the second goes backward in time; hence the name forward–backward algorithm. In the first pass, the algorithm computes a set of forward probabilities which provide the probability of ending up in any particular state given the first observations in the sequence. In the second pass, the algorithm computes a set of backward probabilities which provide the probability of observing the remaining observations given any starting point. These two sets of probability distributions can then be combined to obtain the distribution over states at any specific point in time given the entire observation sequence.
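The two passes can be sketched for a tiny two-state hidden Markov model. The transition, emission, and initial probabilities below are made-up numbers for demonstration only.

```python
# Tiny forward-backward pass for a 2-state HMM, illustrating the forward
# pass, backward pass, and their combination into posterior marginals.

def forward_backward(obs, trans, emit, init):
    n, S = len(obs), len(init)
    # Forward pass: alpha[t][s] ∝ P(obs[:t+1], state_t = s)
    alpha = [[init[s] * emit[s][obs[0]] for s in range(S)]]
    for t in range(1, n):
        alpha.append([
            emit[s][obs[t]] * sum(alpha[t-1][p] * trans[p][s] for p in range(S))
            for s in range(S)
        ])
    # Backward pass: beta[t][s] = P(obs[t+1:] | state_t = s)
    beta = [[1.0] * S for _ in range(n)]
    for t in range(n - 2, -1, -1):
        beta[t] = [
            sum(trans[s][nx] * emit[nx][obs[t+1]] * beta[t+1][nx] for nx in range(S))
            for s in range(S)
        ]
    # Combine and normalize to get the posterior marginals per time step.
    post = []
    for t in range(n):
        raw = [alpha[t][s] * beta[t][s] for s in range(S)]
        z = sum(raw)
        post.append([r / z for r in raw])
    return post

trans = [[0.7, 0.3], [0.4, 0.6]]   # state transition matrix (rows sum to 1)
emit = [[0.9, 0.1], [0.2, 0.8]]    # emission probabilities per state
init = [0.5, 0.5]                  # initial state distribution
posteriors = forward_backward([0, 1, 0], trans, emit, init)
assert all(abs(sum(p) - 1.0) < 1e-9 for p in posteriors)
```

Each row of `posteriors` is a properly normalized distribution over states at that time step, given the entire observation sequence.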
ML and AI Tools Landscape
In the mobile and desktop web testing landscape today, we see a few vendors starting to operate.
Without drilling into each specific solution, here is a list of vendors that offer ML/AI solutions:
Mabl – desktop web ML test automation solution that leverages a chrome add-on trainer to build code-less test automation
Testim.IO – Desktop and Mobile Android code-less test automation solution that also uses a browser add-on to build robust test automation that aims to address the problem of dynamic objects
TestCraft – web based code-less selenium test automation solution for testing web apps
Applitools – visual test automation and monitoring for mobile and web apps
Perfecto – Cloud based solution for testing mobile native apps and web based applications, offers AI based reporting with error classifications, and analytics to optimize the entire CI/CD workflows.
What Does All of the Above Mean to Modern Testers?
As mentioned above, there is a plethora of tools evolving these days that aim to solve test authoring, analysis, and maintenance problems. While these are all awesome initiatives that will position testing higher and smarter in the overall DevOps process, this does not translate into the extinction of the tester. Each of the above tools, as well as new tools yet to rise, exists to help testers become more agile, smarter, and more efficient.
AI and ML today mean the following to Test Engineers:
A change in mindset – aid humans rather than replace them
Training on modern ML/AI tools and techniques is required TODAY
Use the tools to solve complex testing activities, but don't neglect existing testing methods; they are still relevant, and not all are supported by ML/AI
Keep humans in control of these tools and evolve productivity – become a DevOps champion by embracing new tools and leading innovation
Modify working processes accordingly (Go/No-Go criteria)
AI/ML tools solve specific rather than holistic problems – keep that in mind
Match proper AI/ML tools to existing pains in test automation
iOS12 has already been in Beta 2 for almost a week, and the industry is two months away from this major iOS GA release.
It's a great time to start planning for the testing and development implications this new OS will bring to your project, and to look at this from a continuous testing (CT) standpoint.
Continuous Testing & iOS12
CT refers to the ability to test in an automated way each code change at any phase of your development life-cycle, from early design stages, through coding, build-acceptance, regression, and production.
So how does it relate to iOS12? If there is no solid automation in place today, then identifying issues that are specific to the iOS12 beta in advance will be quite hard to accomplish, regardless of the code changes the team is making to the app. Many changes in performance, UI, and functionality across devices and apps are being introduced in iOS12, and covering them in an Agile process without automation will simply be unrealistic.
Testing iOS native apps today is mainly doable through Apple's XCUITest framework and Appium (which relies on XCUITest underneath), so for any existing iOS11 and iOS10 test code that your teams are currently using, they should start debugging these scripts against an iPhone running the iOS12 beta to identify any issues. Such tests might be running from within the continuous integration (CI) process per code commit or via a scheduled trigger today, therefore early debugging and analysis of these processes can ensure a smooth transition to iOS12 once it is out.
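Retargeting an existing suite at the beta mostly comes down to the desired capabilities. The sketch below uses the Appium Python client's XCUITest capability names; the device name, UDID, and app path are placeholders you would replace with your own.

```python
# Sketch: point an existing Appium (XCUITest-backed) suite at an iOS12
# beta device. UDID and app path below are placeholders.

ios12_caps = {
    "platformName": "iOS",
    "platformVersion": "12.0",     # the beta under test
    "automationName": "XCUITest",
    "deviceName": "iPhone X",
    "udid": "<device-udid>",
    "app": "/path/to/YourApp.ipa",
}

# With an Appium server running, the suite would connect like this:
# from appium import webdriver
# driver = webdriver.Remote("http://localhost:4723/wd/hub", ios12_caps)

assert ios12_caps["platformVersion"].startswith("12")
```

Running the same capability set with `platformVersion` "11.x" and "10.x" in a CI matrix is one way to keep the transition covered across OS generations.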
What to Focus on From a Testing Perspective in iOS12
iOS12 will introduce a large set of innovation to both the platform itself, new devices, new watchOS (5) version, performance improvements, new look & feel and more. To help teams focus on the most important things as they prepare for the iOS12 GA, I would recommend considering the following:
Device coverage: iOS12 will be supported across all of the iOS11-supported devices in the market today, as well as new devices that will be launched in September, like the new iPhone X, iPhone 9, and potentially new iPads. This means that from a lab perspective, device coverage should be broader than today and reach back a few generations, up to the iPhone 5S, to be able to assess the quality and performance of apps across these devices. In addition, the adoption of a new iOS takes between four and six months, hence having the proper combinations of iOS12 and iOS11 across the various devices is key for continuous testing and quality assurance. Do not forget about key devices that are still quite popular in the market and are running iOS10.3.3 (iPad 4th gen, iPhone 5C) and iOS9.3.5 (iPad Mini, iPhone 4S); validate based on your usage analytics and defect history whether these device families should still be supported or not.
New UI enhancements: In iOS12, Apple will make changes to the UI in various ways. Testing the application across devices from a UI perspective on iOS12, as well as on the previous iOS11/10, is obviously critical. In iOS12 Beta 2 and in its upcoming GA, especially on iPhone X, there is a new way of terminating an app that is different from iOS11: instead of the long press and swipe up that was the traditional way of killing an app in iOS11, now only a swipe up is required. In addition, notifications, app usage, and battery usage screens were added and, wherever relevant to your app, need to be tested. Finally, the new “Siri Suggestions” feature has implications on apps and recently used apps, and needs to be taken into consideration from a testing perspective, since it adds another place in the UI from which to launch your apps.
App usage and UX: In a similar way to Google's digital wellbeing capabilities in Android P, Apple implemented the same in iOS12. The new OS version will continuously measure and monitor the end user's time spent and battery consumption for their top used applications under a new UI feature called “Screen Time”. This enhancement has a few implications:
There will be heightened and growing competition over time spent per app, which means apps need to perform well, fast, and with great quality.
Battery consumption will be a “thing” for end users; hence, app developers should leverage the new iOS12 capabilities and optimize their apps accordingly.
Continuous testing with the new “USB Restriction” mode: In iOS11.4.1, Apple already rolled out the implementation of USB restricted mode that was originally supposed to launch as part of iOS12. Having this feature early gives developers and testers the opportunity to assess what happens when testing the app continuously and then leaving the device idle for an hour. Once that happens, the USB port will be “blocked” for security reasons, which has implications on the next testing cycle. Perfecto figured out a way to bypass this limitation for testing purposes only, allowing users to continue testing on iOS devices regardless of idle time, as well as unlock the devices via fingerprint and/or Face ID.
There is time for practitioners (developers and testers) to get hands-on experience with the iOS12 dev preview and public beta versions, assess the impact on their supported apps (B2B/B2C), and tune their testing and development roadmap, processes, and lab, so that when the GA is out in September, they will be in a perfect spot to announce full support for this new platform. This will reduce R&D pressure, delight their customers, and prevent potential business loss.
We live in a competitive era where meeting customer expectations and demands is the key to winning over your competition. Digital media is the battlefield, and a fast release cycle is your weapon. The need to release software at a rapid pace makes continuous integration (CI) and continuous deployment (CD) key to driving the frequency at which code is pushed to production.
Throughout this process, it is important for testing to start right from the requirements phase and continue all the way to production deployment and monitoring. This is what we call “Continuous Testing” (CT). Automation is a vital component of CT and starts early in the software development lifecycle. There have been various challenges related to test automation in a CT/CI/CD environment, such as skill set, authoring, execution, and scaling of automated tests. But the biggest challenge that still continues to haunt teams is maintenance.
For example, say the team has written 500 tests. The next day, they come in only to find out that half of them have failed due to multiple factors. They slowly start realizing that this is a regular occurrence and that they have been spending about half their time just trying to fix these failed tests. Does this ring a bell?
These failures are a result of the agile methodologies organizations use today and the constant changes developers make to the application under test (AUT). They are usually related to the use of static locators in the application: a developer changes the name, id, or another attribute of an element, and the tests immediately break due to this change. As a result, testers have to troubleshoot the problem by identifying which element has changed, then update the test scripts accordingly and re-run the test to ensure it passes. This wastes a lot of cost, time, and effort.
I've recently seen an innovative approach to this challenge using “Dynamic Locators”. This is a strategy where we get multiple attributes for every single element of the AUT we interact with through our tests and create a list of location strategies. When the test runs and detects a change in an element's property, it can look for the next best location strategy from the list instead of failing, thus making the test adaptive to changes in the AUT. For example, if a developer changes the name of a “Login” button to “Submit”, instead of the test breaking due to this change, the test looks for the next best location strategy for the button, which could be an id, class, tag, or any other attribute that has already been extracted from the Document Object Model (DOM). This is how dynamic locators can help in creating more stable tests. As a result, the authoring and execution of tests is really fast, and far less time is spent on maintaining them.
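The fallback idea can be sketched in a few lines. This is a simplified illustration of the concept, not any vendor's actual implementation; the stubbed "page" stands in for a real DOM lookup.

```python
# Simplified sketch of the "dynamic locators" idea: keep an ordered list of
# location strategies per element and fall back to the next one when a
# strategy stops matching, instead of failing the test outright.

def find_with_fallback(find, strategies):
    """`find` maps (by, value) to an element or None; try strategies in order."""
    for by, value in strategies:
        element = find(by, value)
        if element is not None:
            return element
    raise LookupError(f"No strategy matched: {strategies}")

# Stub page: the developer renamed the button, so the name locator fails,
# but the id locator (extracted earlier from the DOM) still works.
page = {("id", "submit-btn"): "<button>", ("name", "Login"): None}
login_strategies = [
    ("name", "Login"),          # primary strategy, now broken
    ("id", "submit-btn"),       # fallback extracted from the DOM
    ("css", ".btn-primary"),    # further fallback
]

element = find_with_fallback(lambda by, v: page.get((by, v)), login_strategies)
assert element == "<button>"
```

In a real tool the strategy list is harvested automatically from the DOM at authoring time and re-ranked on each run, which is where the ML comes in.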
Testim.io, for example, utilizes dynamic locators and helps with easy authoring, execution, and maintenance of automated tests. The Artificial Intelligence (AI) under the hood analyzes the DOM in real time and extracts all the object trees and properties. Finally, the AI decides the best location strategy for locating a particular element based on this analysis, as it has already extracted multiple attributes for each element. It also supports cross-browser testing using the same location strategy. In fact, they have a webinar coming up where I will be speaking on the same topic. Check it out here – Not Another Browser Version… Continuous Cross Browser Testing for a Changing World
In summary, as we go through the continuous testing cycle, it is important to pay attention to problems due to maintenance; take corrective actions immediately and prevent this being a bottleneck for faster release cycles.
Please feel free to share your thoughts and experiences you have had in a CI/CD/CT environment below in the comments section.
Those who continuously follow my blogs and webinars know that I constantly follow the market trends in order to recommend best coverage requirements.
The mobile landscape fragmentation isn’t new, and continuously presents a challenge to mobile application developers and testers.
In this post, I'll share the most up-to-date coverage recommendations for mobile and web.
Coverage Within the DevOps Pipeline and Continuous Testing
While this blog provides the top mobile devices and tablets for iOS and Android, not all of them should be tested at each stage of the DevOps pipeline. As you progress your development and move into E2E and regression testing, your coverage of both test scenarios and target devices will obviously need to scale compared to the few representative devices that you'd typically use for unit testing and basic build acceptance testing.
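One simple way to encode this staged scaling is a mapping from pipeline stage to device set. The stage names and device picks below are illustrative examples drawn from the list later in this post, not a prescription.

```python
# Illustrative mapping of pipeline stages to device coverage: a couple of
# representative devices early on, scaling up for E2E/regression.

COVERAGE_BY_STAGE = {
    "unit":       ["iPhone 7", "Galaxy S7"],
    "acceptance": ["iPhone 7", "iPhone X", "Galaxy S7", "Galaxy S8"],
    "regression": ["iPhone 7", "iPhone 7 Plus", "iPhone X", "iPhone SE",
                   "Galaxy S7", "Galaxy S8", "Pixel 2", "Huawei P9 lite"],
}

def devices_for(stage):
    # Default to the mid-tier set for unknown stages.
    return COVERAGE_BY_STAGE.get(stage, COVERAGE_BY_STAGE["acceptance"])

assert len(devices_for("unit")) < len(devices_for("regression"))
```

A CI system can then fan out the same test suite across `devices_for(stage)` so that early stages stay fast while regression gets the full matrix.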
iOS and Android Landscape Overview
As the market is pending the GA of Android P, and next week the announcement and perhaps 1st dev preview of iOS12, it’s important to be on top of the updated market stats. iOS market share is fully dominated by iOS11, followed by nearly 20% of devices and tablets that are running iOS10.3.3. About 5% are running mostly on iOS9.3.5.
As iOS12 starts to roll out with dev previews and beta versions, it’s highly recommended to start catching up, understand the new features and changes to the platforms, as well as try and upgrade at least one device to the new iOS12 and validate the compatibility of your test automation code within and outside of your CI.
The Android OS landscape is also divided between four to five major OS versions, with the new Android P just around the corner. ~30% of Android devices are running Android 7.x, 25.5% are on Android 6.x, and 21% are on Android 5.x. While Android 8.x accounts for only ~5% of the market, it is a must to cover as the latest GA OS version. As for Android 4.x (KitKat), which holds a bit more than 10% of the market share, it depends on your specific app, the geography in which your end users operate, and of course the usage analytics that your app/web is showing for this OS.
As mentioned above regarding iOS12, the same goes for Android P. For Android P, Google is making this version available for the first time to leading device vendors other than Nexus/Pixel.
A recently published article also shows the varying reasons for iOS vs. Android application crashes (there are a few debates regarding the objectivity of the data, especially around iOS performance).
Top Android and IOS Devices to Test On (Globally)
Each geography has its own mobile usage patterns and popular devices. That is why I release the Factors magazine referenced above almost every one to two quarters, to guide coverage across 17 different countries.
Globally, these are the most popular smartphones and tablets to include in your test lab.
Apple iPhone 7
Samsung Galaxy S7
Apple iPhone 7 Plus
Samsung Galaxy S8
Apple iPhone 6S
Samsung Galaxy S8 Plus
Apple iPhone 8
Samsung Galaxy S7 Edge
Apple iPhone X
Huawei P9 lite
Apple iPhone 5C
Samsung Galaxy Note 8
Apple iPad 4
Apple iPad Air 2
Google Pixel 2
Apple iPad Pro 9.7
Apple iPad Mini
Samsung Galaxy Tab S3, 9.7’’
Apple iPad Pro 12.9
Xiaomi Redmi Note 4
Apple iPhone SE
Apple iPhone 5S
Motorola Moto G5
Sony Xperia XA1
Android P Beta
Nokia 6 (NEW!)
As the market evolves, it is very important for dev and test teams to follow the above trends, and validate that their lab is fully aligned with the market from both devices and OS versions.
As we approach the midpoint of 2018, it is a great time for quality managers, developers in the mobile and web industry, and executives to refresh the list of people and blogs they follow, or at least validate that they are not missing an important one – it's free to follow :).
Why is that important? The market is undergoing dramatic changes towards DevOps, continuous testing, and modern testing processes, and the below folks (myself included, @ek121268) can be a great supporting asset in a practitioner's journey to mature CI, CD, and CT.
In the past years, I was humbled to be in the major lists of thought leaders and influencers to follow around continuous testing, test automation, and mobile. Having been in the industry for quite some time, I hope I have a close-to-ultimate list of people and blogs for you to take note of.
This week, Appium 1.8 was released. This release isn’t just another release, and for practitioners who develop Appium test automation, being familiar with the changes is imperative.
Below is a nice high level summary of the changes in Appium 1.8, taken from a recent webinar I hosted this week where Jonathan Lipps from CloudGrey.IO gave a deep dive into the below changes and more.
Most of the above points are well articulated in the webinar recording; however, here are a few changes to note and be familiar with:
Appium 1.8 and above is now based on the W3C WebDriver protocol, as opposed to the JSON Wire Protocol implementation that exists in earlier versions. From a testing perspective, there is full backward compatibility, and Appium can detect the relevant protocol. With that in mind, moving forward it would make sense to migrate to the new version (install Appium easily through npm install -g appium).
Enhanced support for app management features – this translates into a much easier way for test automation engineers to install, upgrade, and remove apps as part of their test code
New ‘OtherApps’ capability added – this allows test engineers to specify, as part of the new capability, additional apps that should be installed and reside on the device under test. This use case is relevant for apps under test that use 3rd-party apps (like Facebook) and require them to be installed on the device
iOS Screen recording is now supported on iOS Simulators
Support for Android instant apps added in Appium 1.8 – this allows Android test engineers to validate that an instant app (an app that is launched via a link and does not require installing the entire app) works as expected.
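As an example of the new 'otherApps' capability described above, the sketch below shows how companion apps could be pre-installed alongside the app under test. The app paths are placeholders, and the capability is encoded as a JSON array of paths, which is my understanding of the 1.8 behavior; verify against the release notes for your Appium version.

```python
# Sketch of the Appium 1.8 'otherApps' capability: pre-install companion
# apps that the app under test depends on. Paths below are placeholders.

import json

caps = {
    "platformName": "Android",
    "automationName": "UiAutomator2",
    "app": "/path/to/app-under-test.apk",
    # JSON array of additional app paths to install on the device.
    "otherApps": json.dumps(["/path/to/facebook.apk"]),
}

# With an Appium server running, the session would start as:
# from appium import webdriver
# driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)

assert json.loads(caps["otherApps"]) == ["/path/to/facebook.apk"]
```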
During the webinar, I also ran a short survey about what is missing in Appium and what practitioners would love to see in future versions – below are the results.
The majority of users still struggle to implement cross-platform test automation using Appium, as well as to automate specific device gestures and sensors – these comments were shared with Jonathan and will be considered.
I strongly recommend starting to learn from the above content what Appium 1.8 holds for you and your projects, and leveraging some of the new capabilities.
I would like to thank again Jonathan Lipps for his great session and insights – if you're not subscribed to his weekly newsletter, please do so: https://appiumpro.com/