
A data warehouse adopts a three-tier architecture. The bottom tier is the warehouse database layer, the middle tier is the online analytical processing (OLAP) server, and the top tier is the front-end user interface layer. We will discuss the data warehouse architecture in detail here.

Multitier Architecture of a Data Warehouse

The image below shows the three-tier architecture of a data warehouse.

Let us discuss each of the layers in detail.

  1. Database Layer: The bottom tier is the warehouse database layer, most often a relational database system. Data from various external sources and operational databases is fed into this layer. Before this data is loaded, preprocessing is applied: the data is extracted, cleaned, and transformed by back-end tools. The cleaned data is then loaded and periodically refreshed to keep the data warehouse up to date.

The extraction of data from external sources is done through gateways, which generate SQL code. Examples of gateways are ODBC (Open Database Connectivity) and OLE DB (Object Linking and Embedding Database). The database layer also contains metadata, which stores information about the data warehouse.

  2. OLAP Server: The OLAP server is either a relational OLAP (ROLAP) server or a multidimensional OLAP (MOLAP) server. A ROLAP server maps multidimensional operations onto relational operations, while a MOLAP server implements multidimensional operations directly.
  3. User Interface Layer: This layer provides the tools needed for querying and reporting.
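The extract–clean–transform–load flow that feeds the bottom tier can be sketched in Python; the field names and cleaning rules below are illustrative assumptions, not part of the article:

```python
# Minimal ETL sketch: extract rows from an operational source,
# clean and transform them, then load them into the warehouse layer.

def extract(source_rows):
    """Pull raw records from an operational source (here, a list of dicts)."""
    return list(source_rows)

def clean(rows):
    """Drop records with missing required keys and strip stray whitespace."""
    cleaned = []
    for row in rows:
        if row.get("customer_id") is None:
            continue  # discard unusable records
        cleaned.append({k: v.strip() if isinstance(v, str) else v
                        for k, v in row.items()})
    return cleaned

def transform(rows):
    """Standardize encodings, e.g. unify region codes to upper case."""
    for row in rows:
        row["region"] = row["region"].upper()
    return rows

def load(rows, warehouse):
    """Append the prepared rows to the warehouse store."""
    warehouse.extend(rows)
    return warehouse

operational = [
    {"customer_id": 1, "region": " east "},
    {"customer_id": None, "region": "west"},   # dirty record, will be dropped
    {"customer_id": 2, "region": "west"},
]
warehouse = load(transform(clean(extract(operational))), [])
print(warehouse)
```

In a real back-end tool each stage would read from gateways and write to the relational warehouse database, but the pipeline shape is the same.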
The Architecture of a Data Warehouse

The basic architecture of a data warehouse

The image above shows a simple single-tier architecture of a data warehouse. The various components of this architecture are:

  1. Data source: The operational systems are the systems used for day-to-day transactions. Data processing in these systems takes place in a manner that maintains data integrity. The data from these systems is operational data, which carries a great deal of information about the company. Some forms of operational data can be:
  2. Warehouse: The warehouse contains metadata, raw data, and summary data.
    • Metadata: Metadata is data about other data and data structures, such as objects, business rules, and processes. In a data warehouse, metadata defines the warehouse objects. It is used to generate the scripts that build and populate the data warehouse, and to locate the contents of the warehouse. The functions of metadata are illustrated in the image below:
    • Raw Data: This is unprocessed data from the data sources. It is converted into information through selection, extraction, and organization. For example, the POS (point of sale) systems in supermarkets generate huge volumes of raw data every day, which must be processed before it yields useful results. The information derived from raw data can feed predictive analytics.
    • Summary Data: This component is very important, as it precomputes long-running computations in advance and stores the results. Summary tables store aggregated and summarized data for optimal performance, and a large share of business decision making relies on summarized data. Summarization happens in multidimensional space over one or more dimensions, and aggregation combines large amounts of detailed data. Example: analyzing accounts across four dimensions: customer, region, month, and service.
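A minimal sketch of such a roll-up, aggregating a few illustrative detail records over two of the dimensions (region and month); the records and field names are made up for the example:

```python
from collections import defaultdict

# Precompute a summary table: total amount per (region, month),
# rolled up from detailed account records.
detail = [
    {"customer": "A", "region": "east", "month": "Jan", "service": "web", "amount": 10},
    {"customer": "B", "region": "east", "month": "Jan", "service": "db",  "amount": 5},
    {"customer": "A", "region": "west", "month": "Feb", "service": "web", "amount": 7},
]

summary = defaultdict(int)
for row in detail:
    # Aggregate over customer and service, keeping region and month.
    summary[(row["region"], row["month"])] += row["amount"]

print(dict(summary))
```

The warehouse stores this precomputed table so that later queries read a handful of summary rows instead of re-scanning all the detail.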
Data Warehouse Architecture with Staging

The staging area of the data warehouse is a temporary space where data from the sources is held. This area is needed for timing: the source systems deliver their data at different times, and staging lets it accumulate before loading. The staging component consolidates the data, cleans it, and aligns it to the correct place.

Data Warehouse Architecture with Staging and Data Mart

The Data Mart is a subset of the data warehouse that focuses on a single line of business. While the data warehouse contains company-wide data from all departments, a data mart contains the data of a single department such as Sales, Inventory, or Marketing. Each department owns its data mart, including its hardware, software, and data, which makes it easier for departments to maintain their own data.

What are Data Warehouse Models?

From an architectural point of view, there are three data warehouse models:

  1. Enterprise Data Warehouse: This data warehouse holds data from all the departments of an organization and spans the entire enterprise. It integrates data from all operational systems and external sources. Its size ranges from a few gigabytes to hundreds of gigabytes or terabytes, and it contains both detailed and summarized information. Implementing this warehouse requires extensive modeling and may take years of building. It is typically built on mainframes or parallel architecture platforms.
  2. Data Mart: Data marts are subsets of a data warehouse that focus on a specific group. The scope of a data mart is limited to particular subjects. Data marts are implemented on Unix/Linux or Windows-based servers. Their implementation also requires complex business modeling, but they can be built in a few weeks. Data marts are of two types:
    1. Dependent Data Mart: populated directly from the corporate data warehouse.
    2. Independent Data Mart: populated from operational systems, external data sources, or directly from departmental data.
  3. Virtual Warehouse: A virtual warehouse is a set of summarized views over operational databases. It is easy to build but requires additional capacity on the operational database servers.
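The virtual-warehouse idea can be sketched as a summarized SQL view over an operational table; the table, columns, and data below are invented for illustration:

```python
import sqlite3

# A "virtual warehouse" as a summarized SQL view over an operational table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, amount REAL)")
con.executemany("INSERT INTO sales VALUES (?, ?)",
                [("east", 10.0), ("east", 5.0), ("west", 7.0)])

# The view is computed on demand from the operational data -- nothing is
# copied or stored, which is why it is cheap to build but adds query load
# to the operational server.
con.execute("""CREATE VIEW sales_summary AS
               SELECT region, SUM(amount) AS total
               FROM sales GROUP BY region""")

rows = con.execute("SELECT region, total FROM sales_summary ORDER BY region").fetchall()
print(rows)
```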
Data Warehouse Development

Data warehouses are developed with one of three approaches:

  1. Top-down Approach: This approach is systematic and minimizes integration issues, but maintaining consistency in the data model is a challenge. It also lacks flexibility, takes a long time to build, and is costly.
  2. Bottom-up Approach: This approach designs, builds, and deploys independent data marts. It is a flexible, low-cost solution but can create integration issues when data from multiple data marts is fed into the data warehouse.
  3. Evolutionary Approach: This is an incremental, evolutionary model. First, a high-level data model is created within one to two months, showing a company-wide, consistent, and integrated view of data from the different departments; this high-level model reduces integration issues. Second, independent data marts are implemented alongside the data warehouse using the same data model. Third, distributed data marts can be constructed by integrating the different data marts. Finally, a multitier data warehouse is built, which can then populate dependent data marts. The image below shows this approach to data warehouse development.
Conclusion

A data warehouse is a subject-oriented, nonvolatile, time-variant, integrated database that supports history and analysis of a subject rather than transaction processing. The data warehouse architecture has different components, each playing an important role. Metadata keeps details about the warehouse data. Summarized data presents views of operational data for better performance. Data marts are like data warehouses but focus on a single subject; they are populated either directly from the enterprise data warehouse or externally from operational databases, flat files, etc.


The post Data Warehouse and It’s Architecture appeared first on Software Testing Class.


In this tutorial we are going to discuss data warehousing concepts in depth, along with the architecture and components of a data warehouse.

Data warehouses are multidimensional databases that generalize and consolidate data. A data warehouse is a data repository maintained separately from the operational databases. It provides an integrated platform for collecting data from a variety of applications, and a platform for information processing and analysis of the accumulated historical data.

Data warehouses contain historical data, unlike transactional databases, which contain current information.

According to William H. Inmon, a leading architect in the construction of data warehouse systems, “A data warehouse is a subject-oriented, integrated, time-variant, and nonvolatile collection of data in support of management’s decision making process”.

What is Data warehousing?

Data warehousing is a technique by which businesses use their data in the decision-making process. It provides the tools and architecture business executives need to systematically understand their data, gain customer insights, and improve their businesses.

So we can say that data warehousing is the construction and use of data warehouses.

Data warehousing is a powerful concept in today's fast-evolving world. Many organizations are focusing on building large data warehouses and using them to make strategic decisions and grow their customer base.

Features of a Data warehouse

The properties that distinguish a data warehouse from other data stores, such as relational databases, transactional databases, and file systems, are:

  1. Subject-Oriented: A data warehouse focuses on the analysis and modeling of collected historical data for decision making, unlike relational databases, which mainly focus on day-to-day information processing. Data warehouses give a clear, simple, and concise view of a particular area such as sales, customers, or suppliers by excluding data that is not useful to the decision-making process.
  2. Integrated: Data in a data warehouse comes from multiple heterogeneous sources such as online transactions, flat files, and relational databases. Since different sources have different naming conventions and encoding formats, data warehouses use data cleaning and data integration techniques to maintain consistency. Data cleaning is applied to remove noisy data.
  3. Time-Variant: A data warehouse stores the historical information of an organization, typically spanning 5 to 10 years of data, so a time element is always present in the warehouse.
  4. Nonvolatile: A data warehouse differs from an operational database in that it does not require mechanisms such as concurrency control, recovery, and transaction processing. Only "data loading" and "data access" operations are performed on a data warehouse, because it is stored separately from the operational environment.
How is information from Data warehouse used by organizations?

Businesses use data warehouses to support many decision-making processes, such as:

  1. Customer-centric decisions: Companies analyze customer buying trends, spending trends, buying times, total time in store, customer likes and dislikes, etc.
  2. Product analysis: Companies use data warehouses to decide how products should be placed on shelves and to track the sale of a product by year, quarter, and month or in a particular geographical area. Based on this data they make strategic decisions.
  3. Increasing profits: Companies analyze their transactions to increase profits.
  4. Reducing costs: Companies use data warehousing to improve their business models and strategies and to manage customer relationships, reducing unnecessary costs.
  5. Integration from multiple sources: A data warehouse lets companies collect data from multiple heterogeneous, autonomous sources and integrate it on a single solid platform. Access to the data is very efficient, which helps in important decision-making processes.
Query Driven Approach vs Update Driven Approach in Data warehousing

In traditional databases, integration from multiple heterogeneous sources is complex, expensive, and inefficient. It requires filtering and complex integration processes, and it requires building "wrappers" and "integrators" over the databases. When a client queries the database, metadata is used to translate the query from the local format into a format understood by all the heterogeneous sites. The queries are then interpreted and sent to the local query processors, and the results coming back from them are collected and integrated into a global result. This approach is called the query-driven approach.

Data warehousing instead uses the update-driven approach. In this approach, the information from multiple heterogeneous sources is integrated in advance and stored, ready for direct analysis. This approach gives high performance because the data is copied, preprocessed, integrated, restructured, and summarized into one data store. Also, day-to-day query processing is not affected, because the warehouse is maintained separately.
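The contrast can be sketched in Python: under the update-driven approach, normalization and integration happen once, ahead of time, and queries then run against the local, pre-integrated copy. The sources and field names are assumptions for illustration:

```python
# Sketch of the update-driven approach: data from heterogeneous sources is
# integrated and stored in advance, so queries read the local copy rather
# than being translated and fanned out to each source at query time.
source_a = [{"id": 1, "sales": 100}]    # e.g. a flat file
source_b = [{"ID": 2, "SALES": 40}]     # a different naming convention

def integrate(*sources):
    """Normalize field names up front and merge everything into one store."""
    store = []
    for rows in sources:
        for row in rows:
            store.append({k.lower(): v for k, v in row.items()})
    return store

warehouse = integrate(source_a, source_b)   # done ahead of query time

def total_sales(store):
    """Queries run directly on the pre-integrated copy."""
    return sum(row["sales"] for row in store)

print(total_sales(warehouse))
```

In the query-driven approach, `total_sales` would instead have to translate the query for each source, dispatch it, and merge the partial results at query time.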

Operational database vs Data warehouse

A data warehouse is kept separate from transactional and relational databases so as to maintain high performance in both.

  1. Operational databases are maintained for simple query tasks such as searching records, indexing, and optimizing queries, while data warehouses handle complex queries, large amounts of data, and multidimensional views. Running such big tasks on an operational database would degrade its performance.
  2. An operational database allows multiple transactions to run at the same instant, and mechanisms such as concurrency control, recovery, and locking protocols are used to maintain its consistency. Analytical processing in a data warehouse has read-only access to the data: it summarizes and aggregates it, so these mechanisms are not needed in a data warehouse.
  3. Operational databases do not store historical data, while the data warehouse stores mostly historical data. The data warehouse is useful in decision making because it contains huge volumes of records that have been cleaned, organized, and integrated, whereas operational databases contain raw data that must be preprocessed before analysis.

Thus, looking at the functionality, structure, and kind of data present in each, it is important to maintain the operational database and the data warehouse separately. Many RDBMS vendors are now optimizing their systems for analytical processing, so in the near future this gap between the RDB and the DW is likely to shrink.

Conclusion

A data warehouse is a data store supporting decision-making activities. It stores the historical data that enterprises use to build strategies for their business processes.

A DW is constructed from multiple heterogeneous data sources. The data from these sources is collected, cleaned, preprocessed, integrated, and summarized. A DW supports multidimensional views that can be accessed by complex queries. The management and utilization of a data warehouse is sometimes referred to as a warehouse DBMS.


The post What is Data Warehousing? appeared first on Software Testing Class.


Introduction

Usability testing is a must-have software testing technique: it is, in effect, advice from the end user on how to make the software design robust. An organization can, depending on its requirements, choose any of the techniques below to carry out usability testing and yield a high-quality software product. Usability testing ensures that the software performs well and is fit for use by the end user after going to production. It is often ignored in the software testing world, yet it directly impacts software quality and helps projects meet the highest standards. Usability testing can also reduce project cost, as it mitigates the need for patching or frequent bug fixes for issues that would otherwise remain hidden and be caught later in production.

Usability testing can unveil many defects and detect problems in the software before they are diagnosed by the testing team or reported by the end user. It can easily be merged with integration testing, letting an organization reap the combined benefits of both kinds of testing in less time while ensuring high product quality. Usability testing is also essential from a business reputation point of view: nobody in the market wants a buggy product; everybody embraces a stable, high-quality end product.

The following are the best usability testing methods, recommended for regular use to ensure a high-quality software product.

CARD SORTING Technique:

It is one of the most inexpensive methods and is used widely in the industry. It is a very effective method for usability testing and design. In this technique, the project team creates a special deck of cards, where each card details a specific topic related to the requirements and the end user's conception of the software product. The end users are then asked to organize the cards analytically, based on their past experience with similar software products. The task should be repeated by different end users multiple times in order to surface alternative ideas.

After the analysis is complete, the end users are expected to explain the choices they made. The card sorting technique helps the project team conceptualize the best design based on the input provided by the targeted end users. The technique is very adaptable and fits any stage of the software development life cycle.

COGNITIVE WALK-THROUGH Technique:

In this technique, a group of skilled developers or end users is tasked with analyzing every possible condition offered by the software in a step-by-step walk through the process. The expert team analyzes each step thoroughly and writes success stories and failure stories for all of the possible scenarios. Finally, the entire project team jointly studies why the success stories succeeded and why the failure stories failed. This input gives a clear picture of how to maximize usability across every possible outcome.

HEURISTIC EVALUATION Technique:

The heuristic evaluation technique is comparable to a peer review. In this technique, a group of expert developers tests the software interfaces to unveil design issues based on their past experience. These experts assess the software interfaces, measure usability, and draw conclusions about the efficiency of the interfaces against important benchmarks: compatibility with real-life problems and circumstances, use of well-established signs and vocabulary, flexibility, the user's ability to solve problems without help from technical support, interface consistency, and so on.

This method is quite expensive, as it requires recruiting expert developers employed specifically for usability testing, but it is one of the most reliable methods recommended for usability testing. The technique is widely practiced in larger organizations where software quality is the key to maintaining the organization's reputation worldwide.

PARTICIPATORY DESIGN Technique:

The participatory design technique allows the end users to participate directly in the software development process. A group of expert, archetypal end users is selected by the development team based on the project's interface benchmarks. These end users act as consultants and provide feedback throughout the development process: they specify their requirements as end users, raise an alarm if they find the software interface inappropriate, and assess aspects of the end product in real time. They also deliver innovative ideas from time to time and share their perspective, helping to create compatible user interfaces for the software product.

The end design suggested by the consultants and approved by the expert developers is the most efficient design for the software product and yields a very high-quality result. This technique is widely practiced when the software is designed for a particular profession. For example, in a banking application where an online trading module needs to be designed, the module can best be designed with the traders' input, because the traders, being the end users, know their own requirements and how they would use the trading module in their day-to-day work; their participation in the development of the banking software is therefore a must.

TASK ANALYSIS Technique:

In this technique, the steps end users take to reach a certain goal with the software product are observed. This gives the developers an idea of the end users' requirements, and they can learn to implement the exact end goals the users need from the software at hand. After the complete task has been analyzed, there is always scope to improve the overall steps that make up an end goal, and those improvements can be incorporated into the software under development. The technique is applicable at any software development stage. It is highly effective and comprehensive, as it does not need a lot of time or resources to analyze the complete scenario and design the software system.

Conclusion

Usability testing is a must-have software testing technique: it is, in effect, advice from the end user on how to make the software design robust. An organization can, depending on its requirements, choose any of these techniques to carry out usability testing and yield a high-quality software product.


The post The Best Usability Testing Methods to Use on A Regular Basis appeared first on Software Testing Class.


Introduction

In this article, we are going to discuss the impact of machine learning and artificial intelligence on software testing. The introduction of machine intelligence will be a game changer for overcoming the growing challenges in software testing. It is the common belief of many software organizations that within the coming 5 years, AI and machine learning will have a significant impact on them.

AI-enabled testing can be thought of as an extension of test automation, for the reasons below.

Why AI Won’t Kill Software Testing?

  • We are never going to get rid of manual testing completely, because no software in this world is developed without any bugs. Though today every organization has many robust test automation tools, manual testing is always a part of the testing strategy; through it we can ensure high-quality user experiences.
  • Over time, as software becomes more complex with each iterative release, test automation is the best approach to meet the need for frequent regression tests over a large number of test cases in very little time. The automation approach helps unveil a large number of defects quickly with minimal testing effort.
  • In the future, when testing is AI-enabled, automation will become even smarter than before: we can feed in large quantities of data and make the machine learn to generate accurate test results and unveil defects in the system. AI can play a very important role in vulnerability assessment through enhanced security, automatic code reviews, and automatic creation of test cases. QA engineers just need to feed the algorithms, along with historical data, in order to increase defect discovery rates. AI can also provide real-time testing feedback on both functional and non-functional aspects of the system under test.

The evolution of testing from manual to AI-enabled testing may be on the rise, but in the true sense it is not at all going to kill the actual need for QA engineers, for the following reasons.

#1: An organization can use AI-based automation tools (such as Eggplant AI) to cover the basic testing aspects of mobile apps; such tools can easily help discover defects by auto-generating test cases through learning algorithms and executing them against the app. But this approach covers only the basic testing aspects of the product development life cycle. If every organization chooses this path alone, it will miss the marvelous value that highly qualified QA engineers add to product testing: assessing system scalability, security and risk management, system performance, test documentation for the project, compliance, and the tracking of various metrics. These are all human jobs suited to highly qualified QA engineers, not to an AI-based testing tool.

#2: On the other hand, if highly qualified QA engineers start using AI-enabled testing, it will add even more stars to the product testing and the software quality. AI can help QA engineers mitigate human errors, unveil testing areas that are often missed while preparing test cases, discover defects early, and so on. AI can help in the creation of automated test cases by feeding the algorithm and the historical data into the system; the QA engineers can then reconcile the test cases and add more value to the overall testing. In other words, AI-enabled testing can be used as an addendum to QA testing, but it cannot replace the QA engineers.

=> The real change AI-enabled testing brings is the need for highly qualified QA engineers who can deal with AI systems and machine learning. The machine can be fed the algorithm and the historical data to generate test cases and user experiences automatically, but if the software system undergoes a change, how will the machine behave with that data, and who will correct or review the decisions made by the AI-enabled tools? The answer is simple: only a well-qualified engineer can make good use of this technology. It should therefore be seen as a collaboration between QA engineers and AI-enabled tools rather than a replacement of QA engineers by AI-enabled tools.

=> Self-learning patterns through neural networks can help in testing, but again they cannot replace the experience of QA engineers. Neural networks can be trained when put into learning mode, but that does not mean they have accrued ample experience to replace highly qualified QA engineers. A neural network in continuous learning mode cannot be expected to do, say, the security testing that is far better handled by a QA engineer dedicated to that type of testing.

=> AI-enabled testing is no doubt going to bring the revolution of traditional software testing into a new digital age. In that digital age, AI-enabled testing will become a core part of QA (Quality Assurance) for ensuring software or product quality, but human testers will still be required, because only a human fully understands the needs of other humans; the machine does not. Machine learning is still very far from developing the common sense that comes naturally to humans. Therefore, AI and machine learning can in no way replace software testing by QA engineers in the product development life cycle.

Conclusion

AI and machine learning are niche technologies, and they are entering every aspect of human life very rapidly. In the coming years, the use of AI-enabled testing tools by highly qualified QA engineers can add more value to an organization than before. There is a need for continuous enhancement of QA skills, and now is the time to start learning AI and machine learning in order to use these technologies in software testing and add more value to the quality of the software product.


The post Why AI Won’t Kill Software Testing? appeared first on Software Testing Class.


API stands for Application Programming Interface. An API is a generic piece of software that accepts input parameters and provides the desired output based on specific business logic. API development requires strict testing in terms of security, business-logic processing, valid input data parameters, data types, etc. If the testing of an API is not conducted thoroughly, the API will be flawed with a number of issues, and those issues can lead to malfunctioning of partner applications and even security breaches throughout its lifespan.

To help conduct API testing thoroughly, we are going to discuss nine common errors that frequently occur during API testing. We will not only discuss these errors but also provide simple solutions that may help improve API testing methodologies, health, and test results.

Misbehaving Entries:

Often an API works fine when tested individually with the set of required input parameters, but starts misbehaving and malfunctioning when integrated with a partner. This is because the partner may be sending 'NULL' values for certain required fields, which can be difficult to figure out during integration. The solution is simple: during testing, we should have test cases covering the behavior of the API when it receives 'NULL' or errant entries as input parameters. The API should send a response back to the partner with an appropriate error message stating that the input data from the partner application was incorrect, making clear that the API itself is working correctly.
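A minimal sketch of this guard in Python; the field names, status codes, and messages are assumptions, not a real API:

```python
# Guarding an API against NULL/errant required fields: validate the input
# before running business logic and return a structured error instead of
# failing somewhere downstream.
REQUIRED_FIELDS = ("customer_id", "order_id")

def handle_request(payload):
    missing = [f for f in REQUIRED_FIELDS if payload.get(f) is None]
    if missing:
        # Tell the partner exactly which inputs were bad; the API itself is fine.
        return {"status": 400,
                "error": f"invalid input: {', '.join(missing)} must not be null"}
    return {"status": 200, "result": f"order {payload['order_id']} accepted"}

print(handle_request({"customer_id": None, "order_id": 7}))
print(handle_request({"customer_id": 3, "order_id": 7}))
```

A test suite would assert both branches: the null input must yield the error response, and a complete input must succeed.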

Invalid Response:

API responses can indicate success, like HTTP 200, or failure, like 404 (resource not found). Sometimes the format returned by the API is not digestible by the partner application because the number of fields varies. The solution is simple: the set of fields in the response should be clearly defined for both success and failure messages, and should be tested for consistency across all kinds of API responses.
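A small test sketch of that consistency check; the agreed field set and the stub responses are assumptions for illustration:

```python
# Both success and failure responses should expose the same agreed-upon
# fields so the partner application can always parse them the same way.
AGREED_FIELDS = {"status", "message", "data"}

def success_response():
    return {"status": 200, "message": "OK", "data": {"id": 1}}

def failure_response():
    return {"status": 404, "message": "resource not found", "data": None}

for resp in (success_response(), failure_response()):
    # Any added or dropped field is schema drift and should fail the test.
    assert set(resp.keys()) == AGREED_FIELDS, f"schema drift in {resp}"
print("response schemas consistent")
```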

Caching API Response:

An API acts as a black box: it accepts input parameters and returns a response for the business function triggered. The partner application may choose to cache the API's output for a repeating set of input parameters. But if the API's output changes frequently for the same inputs, the cached result at the partner application becomes stale and conveys incorrect information. The solution is simple: though the API works as expected, the partner application has to decide what results to cache and what not to. Results that change frequently, as with live data, should not be cached, while results that are not expected to change often, such as a product image or description, can be cached at the partner application.

Handling False Negative Response:

A response returned with HTTP 200 is considered a success, but such a response can still contain NULL values; this is the case of a false negative. The partner application will read such responses as successful, but do the NULL values in the response make any sense to it? This is where real test coverage against false negative responses is required.
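A test suite can flag such responses explicitly, for example with a helper like this hypothetical `is_false_negative` check:

```python
# Sketch: an HTTP 200 whose business payload is all NULL is a false
# negative -- technically a success, but it conveys no usable data.
def is_false_negative(status_code, body):
    return status_code == 200 and all(v is None for v in body.values())

assert is_false_negative(200, {"account": None, "balance": None})
assert not is_false_negative(200, {"account": "A-1", "balance": 10})
assert not is_false_negative(404, {"account": None})  # explicit failure
```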

Team Communication Failure:

As an API grows with user experience and business changes, API maintenance becomes very important, and this is where good team communication is required. It should never happen that changes made to the API suddenly start impacting all partner applications. Any change to the API or a partner application should be well communicated, implemented, integrated, and tested. The standard API interface document should also be versioned and updated from time to time in order to avoid bad development practices.

Non-standard Coding Approach:

The API development team should agree on a standard approach for input parameters and output response parameters, and any deviation from that standard should lead straight to rejection of the input by the API, or of the response by the partner application. Sometimes developers accept blank or null input or output values, which can cause problems in the long run. The data type, whether a field is mandatory, its range, thresholds, etc. should be clearly defined; testing should verify the API against these standards, and any deviation from them should not be acceptable by any means.
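Such a standard can be expressed as a declarative parameter spec (type, mandatory, range) that both developers and testers validate against. The spec contents below are made up purely for illustration:

```python
# Sketch: a declarative parameter spec (type, mandatory, range) that both
# the API and the tests validate against. Spec contents are illustrative.
SPEC = {
    "quantity": {"type": int, "mandatory": True, "min": 1, "max": 100},
    "note": {"type": str, "mandatory": False},
}

def validate(params):
    """Return a list of violations of the agreed standard (empty = valid)."""
    errors = []
    for name, rule in SPEC.items():
        if name not in params or params[name] is None:
            if rule["mandatory"]:
                errors.append(f"{name}: missing")
            continue
        value = params[name]
        if not isinstance(value, rule["type"]):
            errors.append(f"{name}: wrong type")
        elif "min" in rule and not rule["min"] <= value <= rule["max"]:
            errors.append(f"{name}: out of range")
    return errors

assert validate({"quantity": 5}) == []
assert validate({}) == ["quantity: missing"]
assert validate({"quantity": 500}) == ["quantity: out of range"]
```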

Ensure Character Set:

The API should specify the accepted character set, such as ASCII or Unicode, for its input and output parameters. This ensures that the partner application interacts with the API using the agreed character set, and any character received outside the agreed range should lead to straight rejection. The response language, such as English, French, or Spanish, should also be agreed well in advance. Our test cases should cover all such requirements for the agreed character sets as well as languages.
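A minimal sketch of such a character-set gate, assuming plain ASCII is the agreed set (the actual agreed encoding would come from the interface document):

```python
# Sketch: gate any parameter whose characters fall outside the agreed
# character set. ASCII is assumed here purely as an example.
def within_charset(text, encoding="ascii"):
    try:
        text.encode(encoding)
        return True
    except UnicodeEncodeError:
        return False

assert within_charset("order-123")
assert not within_charset("café")  # 'é' is outside plain ASCII
```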

API Compatibility With Partner Application:

APIs are built with partner application compatibility in mind. Any release on either the API side or the partner application side should be regression tested against all existing test cases, on top of the new functionality added to the API or partner application. In other words, every release of the API or the partner application should always meet the compatibility criteria.

Use Your Testing Skills:

An API can have many hidden issues which can only be unveiled through the skills of an experienced testing team. Testers are advised to execute negative scenarios in order to catch defects that traditional testing practices would miss. Testers can also do monkey testing to try to break the API, which gives the developer great room to code an efficient API that is robust and smart.

Conclusion

In API testing, defects are inevitable for any API that is not well coded and well tested. It is the call of the tester to design efficient test cases, avoid the common errors discussed in this article, and leverage their testing skills in order to deliver an efficient, well-tested end product for production.


The post 9 Common Errors Made During API Testing appeared first on Software Testing Class.


Introduction

Mantis Bug Tracker provides configuration management that lets administrators customize many MantisBT features. In this tutorial, we will study each function covered under “Manage Configurations” in Mantis.

  1. To reach the “Manage Configurations” page, go to “Manage” -> “Manage Configurations”.
  2. The tab which opens up is “Permissions Report”. This page shows the permissions granted to the various user roles by MantisBT. Default permissions are pre-set and can be altered by the admin:
  3. Attachments: “Capability” depicts the action, and the checked boxes show which roles are permitted to perform it. For example, deleting attachments can only be carried out by the developer, manager, and administrator.
  4. Filters: Among the filter functions on the “View Issues” page, “Save filters” can only be used by the developer, manager, and administrator.
  5. Projects: Only the admin has the privilege to create projects.
  6. Custom Fields: The “manage custom fields” and “link field to project” features are assigned to user roles here.
  7. Others: Other functions, such as sending reminders and managing users, are assigned here.
  • Configuration Report: This tab allows the administrator to set the configuration for all projects as well as for a specific project. The supported databases are MySQL, PostgreSQL, MS SQL Server, and Oracle. The configuration is set in config_inc.php. The filter option allows searching for configurations by “username”, “project name”, and “configuration option”.

The filtered configurations are shown in the “Database Configuration” area.

  • Workflow Thresholds: This tab configures the properties of bugs, the privileges of each user role, and who can alter those privileges. These changes can be applied at the project level as well as globally.

The administrator is:

  • Able to set the access levels for user roles: For example, for issues, the access levels allowed to create an issue are reporter, updater, developer, manager, and administrator.
  • Able to set who can alter the access levels for user roles: The administrator can set which user role may alter the values set in the point above. For example, “Manager” can be selected from the dropdown; a manager will then be able to select the user roles that can create an issue for this particular project.
  • Properties related to reporting a bug: The statuses to set when a new issue is created, reopened, or resolved, the resolution for a reopened issue, and when an issue becomes read-only are selected here. The administrator also selects which user role can alter these values.
  • Notes: Functions related to notes, such as add, edit, and delete, can be assigned to a user role, and the privilege to change these values can also be assigned to a user role.
  • Others: Other functions, such as the changelog, roadmap, and reminders, can be set here.

After the above properties are set, the user clicks the “Update Configuration” button at the end of the page.

  • Workflow Transitions: This tab allows the administrator to set the workflow of the bug life cycle, i.e. the order of bug statuses. These features are also set at the project level and can vary between projects. The main functions performed in this tab are:
  • Thresholds that affect the workflow: These reflect the status displayed when a new bug is created, resolved, or reopened. The user role that has the privilege to change each status can be set here. For example, “Status to which a new issue is set” is selected as “new”, and “Who can change this value” is set to “administrator”.
  • Workflow: In this section the bug life cycle workflow is set:
    • The statuses that may follow the current status, forming the workflow, are selected here.
    • The default value depicts which value is shown at the top of the dropdown by default.
    • “Who can change the workflow” reflects the user role that can set the workflow.
    • For example: if the current status is “new”, the next status could be “feedback”, “acknowledged”, “confirmed”, “assigned”, “resolved”, or “closed”, and the default status is set to “feedback”.
  • Access levels: For each status, the administrator can configure the user role that can change an issue to that status. For example, the status “new” can be set by a minimum access level of reporter, while only a “manager” may see “feedback” as a status to change the issue to. This also helps define the workflow of the bug cycle, since a status can be changed only if the user role is allowed to do so. The admin can also set the user role that can alter these values.

Click the “Update Configuration” button to save the changes made above.

  • Email Notifications: This tab lets the administrator configure the emails sent out from the Mantis account to users. These settings are also made at the project level. The administrator checks the boxes to decide:
  • Who receives a notification for any update, delete, reopen, handler change, note, or relationship change on a bug.
  • Email notifications are also sent when the status of a bug changes, e.g. “new”, “feedback”, “acknowledged”, “confirmed”, “resolved”, and “closed”.
  • Email notifications can be sent to the user who reported the issue, who is handling the issue, who is monitoring the issue, who added issue notes, and to the category owner. The admin ticks which users should receive the email.
  • In addition, certain user roles can be marked to receive email notifications, namely “viewer”, “updater”, “manager”, “reporter”, “developer”, and “administrator”.
  • The administrator also selects the user role that can alter the above values.
  • For example: the email notification for a reopened bug is sent to the user who reported the issue, the user who is handling the issue, and the category owner (the category of the project).

The reporter, admin, and manager of this project will receive an email about reopened bugs in this project.

To save the changes made, click the “Update Configuration” button. Clicking “Delete Project Specific Settings” will delete all previously saved settings.

  • Manage Columns: This tab shows the columns available in Mantis in their variable-name format. The administrator can customize columns for:
  • All available columns: Enter or delete the column names he wants to show in his Mantis account.
  • View Issues columns: The columns shown on the “View Issues” page.
  • Print Issue columns: The columns printed when an issue is printed.
  • CSV columns: The columns included when issues are downloaded in CSV format.
  • Excel columns: The columns included when issues are downloaded in Excel format.
  • These settings can be made globally for all projects or only for the current project.
  • The user can also copy the above configuration from one project to another.
  • Important points to note:
  • A specific user (a user role other than administrator) can also set his email preferences from the “My Account” page: go to “My Account” -> “Preferences”. These preferences override the admin settings for a project. The admin can also set these individually for a user from the “Manage Users” tab.
  • Users can also manage columns from their own account by going to “My Account” -> “Manage Columns”. Custom field names are added to “All available columns” and the names are case-sensitive, so to enter any custom field the user should reference “All available columns” and copy the name from there.
  • The configuration of “Workflow Thresholds”, ”Workflow Transitions”, “Email Notifications”, can be set at
    •  Project Specific Level and also
    •  “All Projects Level”.

Taking an example of “Email Notifications”,

  • “All Projects” configurations are depicted in blue and
  • “Project specific” settings are shown in green.
  • To achieve this configuration, go to the top menu bar, select “All Projects” from the dropdown (settings will now be made for all projects at the global level), and click “Update Configuration”.
  • Next, go to the top of the window again, select a project name such as “Test Project”, configure the notifications, and click “Update Configuration”. These settings will be reflected at the project level.

Conclusion

Manage Configurations in Mantis Bug Tracker provides a wide range of settings of workflows, email notifications, the access level of user roles, bug lifecycle statuses, etc. These settings are achievable at Global level for all Projects as well as Project Specific level. Additionally, users can manage email notifications and columns individually from their accounts. Thus, Mantis Bug Tracker is a powerful tool for customizations related to a single project and all projects.


The post Tutorial #8: Manage Configuration in Mantis Bug Tracker appeared first on Software Testing Class.

Introduction

Today, software methodology has migrated from the phased SDLC to Agile in order to make the model more adaptive to last-minute changes. Although the Agile methodology is practiced widely across the software industry, the testing phase remains very important and has no substitute. Just as we write a test strategy under SDLC, the same is expected for the Agile model. A test strategy in the Agile model cannot be unstructured; it must be concise and to the point.

When writing a test strategy, irrespective of the SDLC or Agile model, it is very important to consider the following aspects of the project under test.


1. Understand the end users:

It is very important to know the end users before writing the test strategy. Knowing the target audience, such as adults or elderly people, helps decide on the look and feel of the UI. This is something for which test automation cannot fetch an appropriate result; only an experienced manual tester can judge it. It also gives an idea of the extent of automation and manual testing required for the project. Therefore, it is very important to know the end users.


2. Exploratory testing is a good idea:

Exploratory testing helps the tester understand an application under test in very little time and enables the fastest discovery of defects. When an application is handed over for testing, the testers are expected to understand its end-to-end flow, which can only be achieved through manual and exploratory testing. Once the application is known to the testers, it becomes quickest for them to write the test automation scripts and strike a balance in the overall testing of the application.


3. Scope prioritization of key features:

The test strategy document should define the priority for testing the key features. The product owner decides the budget for the project, and the test strategy should be feasible within that budget, whether the key features are tested through automation or manually. Testing should be completed within the allocated budget without any compromise on testing the key features.


4. Knowledge of the system architecture:

It is very important for the testing team to have knowledge of the system architecture. It provides an idea of how to test the system and what to test in it, and lays down the roadmap for testing the entire system from the point of view of the end users. It is advised to align the test architecture with the system’s high-level architecture, so the test architecture can easily adapt to any last-minute change under the Agile methodology.


5. Outline the test automation scope:

We cannot achieve 100% test automation for any project, so it is important to outline the percentage of test automation versus manual testing required. Test automation is recommended for regression across the iterative releases of a software product, but there should be a strong focus on testing the functional aspects of the application the first time, either by automation or manually. Sometimes both approaches are combined: the manual testing done the first time is automated for future regression runs, saving test case execution time and speeding up the regression testing process.


6. Beware of flaky tests in your test automation:

Test automation results become unreliable when flaky tests creep in. Test automation scripts should be diligently exercised against valid test data to make sure they are not flaky. If a flaky test is observed in the suite, it should be diagnosed and eliminated immediately, before it causes the test results to spiral out of control.
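A common source of flakiness is unseeded randomness; pinning the seed (or injecting fully fixed test data) makes the test deterministic. A minimal Python sketch, with a hypothetical `sample_order_ids` helper:

```python
import random

# Sketch: unseeded randomness makes a test flaky; a fixed seed (or fully
# fixed test data) makes the same run reproducible every time.
def sample_order_ids(population, k, seed=None):
    rng = random.Random(seed)          # isolated, seedable RNG
    return rng.sample(list(population), k)

a = sample_order_ids(range(1000), 5, seed=42)
b = sample_order_ids(range(1000), 5, seed=42)
assert a == b  # deterministic: the test cannot flake on this data
```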


7. Choose independent data for the testing:

There is a saying in the computer world: GIGO (Garbage In, Garbage Out). It is therefore very important to choose independent data which is reliable and can generate accurate test results. Incorrect test data is one of the contributing factors that make test automation flaky. Many options are available today, such as DbUnit and in-memory databases, which help generate independent data for unit tests or test automation scripts.
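For example, Python’s standard library can provide exactly this kind of independent data via an in-memory SQLite database, so each test run creates and owns its own data:

```python
import sqlite3

# Sketch: an in-memory SQLite database gives each test run independent,
# reliable data with no dependence on a shared environment.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO orders (id, status) VALUES (?, ?)",
                 [(1, "NEW"), (2, "SHIPPED")])

# The test asserts against data it created itself, so GIGO is avoided
rows = conn.execute("SELECT status FROM orders ORDER BY id").fetchall()
assert rows == [("NEW",), ("SHIPPED",)]
conn.close()
```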

8. Automate only the eligible test cases, not all:

It is advised to refrain from automating test cases that do not need it; manual testing is not a bad option for those cases. Forcing automation of test cases where it is not required is a contributing factor to an incorrect test strategy and should be avoided.


9. Automation of the test environment:

Many organizations now focus on having a fully automated test environment that looks similar to the production environment. Such an environment follows the release and build process in a CI/CD pipeline (Continuous Integration and Continuous Deployment). It is very useful in the test automation process, where a deployed build is automatically subjected to the test automation suite and generates test results automatically. Moreover, such an environment can be used for performance testing when its hardware configuration is chosen to match the production hardware.


10. Quick insight into the quality of your software application:

The test environment should provide quick feedback on the overall quality of the application for every iterative release, by following a stable-tests, green-build policy. The generated test report should capture the list of test cases executed, with a summary of the passed and failed cases. The report should also record the cause of each test failure, to determine whether failures were introduced by recent code changes in the new build or by environmental issues.

Conclusion

A test strategy is a must-have document for testing any software application, irrespective of the model adopted for the software testing.


The post Define Your Test Strategy Because Agile Does Not Mean Unstructured appeared first on Software Testing Class.



Introduction

Execution of test scripts using the Selenium API is not always smooth; the test script developer encounters unavoidable scenarios that often break the script and generate unexpected test results. Such unexpected results or events are known as exceptions. In this article, we discuss in detail the 15 common Selenium exceptions frequently encountered by test script developers during the execution of test automation scripts using the Selenium API.

What are the exceptions in the Selenium?

An exception in Selenium can be defined as an uncommon or unexpected event that occurs during the execution of a test script or test suite. An exception is a runtime error caused by an unexpected event or result that disturbs the usual control flow of the test script. An exception is also referred to as a fault.
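The idea is language-independent: an unhandled exception aborts the run, while a handled one lets the script record the failure and continue. A plain-Python sketch (no Selenium required), where `LookupError` merely stands in for a Selenium exception such as `NoSuchElementException`:

```python
def login():
    # a step that succeeds
    return "ok"

def click_missing_button():
    # a step that fails; LookupError stands in for NoSuchElementException
    raise LookupError("element not found")

results = []
for name, action in [("login", login), ("click", click_missing_button)]:
    try:
        action()
        results.append((name, "PASS"))
    except LookupError as exc:  # handle the exception, keep the run alive
        results.append((name, f"FAIL: {exc}"))

assert results == [("login", "PASS"), ("click", "FAIL: element not found")]
```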

15 Common Selenium Exceptions:

The following 15 Selenium exceptions are the ones most frequently encountered by test automation developers.

=> ElementClickInterceptedException:

This exception occurs when the click command cannot be completed because the element at the given XPath (or other locator) is concealed by another element, such as an overlay, so the driver’s click would land on the wrong element. It can be avoided by waiting until the obscuring element disappears, or by locating the correct, visible element in the DOM before clicking.

=> InvalidElementStateException:

This exception occurs in Selenium when we execute a command on an element in the DOM that is in an invalid state, so the command cannot complete the required operation.

=> UnknownMethodException:

This exception occurs in Selenium when the requested command matches a known URL but does not match a method for that URL.

=> ElementNotInteractableException:

This exception occurs when the locator is asked to act on an element in the DOM which is not interactable, such as an attempt to click a disabled button or to enter text into a read-only text box.

=> ConnectionClosedException:

This is one of the most common Selenium exceptions; it is thrown by the Selenium API when the driver gets disconnected while executing the current script.

=> JavascriptException:

This exception occurs in Selenium when the JavaScript supplied by the user for execution has a problem in its syntax or semantics.

=> ElementNotSelectableException:

This exception occurs when the locator is asked to select an element in the DOM which is not selectable, such as an attempt to select a disabled checkbox or radio button.

=> InvalidCoordinatesException:

This exception occurs when the coordinates given for an interaction operation are invalid and cannot be used by the locator in Selenium.

=> InvalidSessionIdException:

This exception occurs in Selenium when the given session ID is not included in the list of active sessions. It implies that the session is inactive or does not support the current operation, resulting in the invalid session exception.

=> JsonException:

This exception occurs in Selenium when the developer attempts to get the session capabilities but the session could not actually be created.

=> InvalidSwitchToTargetException:

This exception occurs in Selenium when the target frame or window to be switched to does not exist. In the absence of the target frame or window, the system raises this exception.

=> MoveTargetOutOfBoundsException:

This exception occurs in Selenium when the target provided to the ActionChains move() method is invalid, e.g. moving the control outside the document, which results in a move-target-out-of-bounds exception.

=> UnreachableBrowserException:

This exception occurs in Selenium when the browser cannot be opened through the Selenium script, or when the browser has crashed for known or unknown reasons.

=> NoAlertPresentException:

This exception occurs in Selenium when the test script attempts to switch to an alert that is not present.

=> NoSuchAttributeException:

This exception occurs in Selenium if the requested attribute of the element selected by the locator cannot be found in the element’s current DOM.

Conclusion

When you are working with Selenium WebDriver and notice any of the above 15 common Selenium exceptions, don’t panic; just read the explanation provided for the exception and look for the remedy. These exceptions are very commonly received by test script developers, and once recognized they can be resolved easily by fixing the actual root cause, or by selecting the element and its attributes from the DOM in the most appropriate way. A smart test script rarely fails and yields reliable test results throughout the test automation run.


Good Luck with your Selenium Test scripts!

The post Top 15 Common Selenium Exceptions You’ve Probably Seen appeared first on Software Testing Class.


The Agile methodology does not follow a phased approach like the SDLC (Software Development Life Cycle) and therefore requires much less documentation to complete a project. At the same time, Agile must not be misunderstood as having no documentation; instead, we document only those details that are actually required to run the project, and nothing more. The following documentation approaches are recommended for the Agile methodology.

1. Working software over all-inclusive documentation:

The end goal of the Agile methodology is to get the project working in less time and with minimal project documentation. Agile is adaptable to ongoing changes in the project requirements. Therefore, all-inclusive documentation is not required to build the software product; only the key information that impacts the project, such as user stories, end-user experience, and the tasks and processes needed to accomplish the project, is required. This does not call for the structured documentation followed in the SDLC methodology.

2. Fewer project-related artifacts can be better:

The Agile methodology does not require a complete library of project documents; it requires only the project-related artifacts that are actually important. As we know, writing detailed documents in a prescribed format takes time and impacts the delivery timelines of the various project phases. In Agile, we can easily save this time by writing minimal documents as the project needs them, instead of writing all-inclusive documents for each project phase as in SDLC.

3. The balance between documentation and discussion:

In the Agile methodology, the prime focus is given to discussion instead of documentation. The scrum master sets up a daily Scrum call to discuss open issues and track their status. Since all parties are present on the call at the same time, issue resolution is expedited, avoiding the back and forth that happens when requirements are only documented. In Agile, requirements can be changed at any point of time if they were captured incorrectly; such flexibility is not available in SDLC, where dealing with a last-minute requirement change costs a lot of effort and time. The Agile model therefore keeps a balance between documentation and discussion in order to get the best output for the project. Such measures make the project run smoothly, and the sprint can be completed within the planned timelines.

Documentation Criteria in Agile Methodology

While preparing the required project documents for the Agile Methodology, the following things should be kept in the mind.

  1. Essential: We should document only the essential details, not detailed stories that may not be useful at all. The document should have just enough detail to run the project.
  2. Valuable: We should document only the valuable information that is actually required now, not what we might want later. In other words, we should capture the details required in the near future, not every detail we might need at some later point of the project. Because the Agile model adapts to requirement changes, documentation in Agile only needs to look at the near future.
  3. Timely: Documents should not be written to a phase plan; they should be produced in a just-in-time (JIT) manner, i.e. prepared when we actually need them and made available on time.
Process Description

In the Agile methodology, we should define each process from the end-user perspective, including details about its inputs and outputs. We can define the project processes by simply gathering answers to the following questions.

  1. Who is the end user? We should know who the end user of the software product is: whether they have technical knowledge, and which age group they fall under (adult, veteran, etc.).
  2. What do the end users need? We should clearly define the expectations of the end users, which paints a clear picture of the project requirements.
  3. How do you deliver it to the end user? Here we should provide the solution to the end users’ requirements, in terms of the look and feel of the product UI and the reports the end user desires.
  4. How do you know when they’re ready for it? We should learn about the end users’ past experience and the way they use their existing software products. This provides an idea of their expectations for the new software product.
  5. How do you produce it? In this section we should define the required technologies, the book of work, and a modular approach to turn the end users’ requirements into a solution.
  6. What inputs do you need to produce it? We should define the set of user inputs along with the expected outputs based on the end users’ requirements. Listing the inputs gives a crystal-clear idea of how to process them into outputs. After all, we are building the software product for the end users, not for ourselves, so this step is very important and must be documented diligently.
Conclusion

The documents for Agile projects should be emergent. Whatever documentation is created for the project is useful to the entire team, so it is the whole team’s responsibility to maintain it in a centralized location, such as SharePoint, that is accessible to everyone on the project. Documents created under the Agile methodology should yield a high return on the actual time invested in writing them, as the Agile model is very time-sensitive.


The post Documentation in Agile: How Much and When to Write It? appeared first on Software Testing Class.


MantisBT is an open source bug tracking tool used to track bugs across various projects. In this tutorial, we will walk through a complete demo of the Mantis Bug Tracker features.

Signup for new account

After successful installation of Mantis, the administrator can configure the settings for new account signup.
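As a sketch, the signup behavior is controlled by standard options in MantisBT's `config_inc.php`; the values shown below are examples, not required settings:

```php
<?php
// Signup-related settings in config_inc.php (MantisBT standard options).
$g_allow_signup              = ON;  // show the "Signup for a new account" link
$g_enable_email_notification = ON;  // needed so the activation email is sent
$g_validate_email            = ON;  // reject malformed email addresses at signup
```

With `$g_allow_signup` set to `OFF`, only the administrator can create accounts from the Manage Users page.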

Signup Form

Signup form will be displayed when a new user clicks on “Signup for a new account”.

Successful signup

After successful signup, an email will be sent to the registered user with a link to activate the account.

Account Registration Email

The registered email address will receive the account activation email.

Password Reset

The password reset can be done by clicking on “change password”.

Password Reset Email

The email for password reset will be sent on registered email address.

Login with username and password.
My View Page

Upon successful login, the “My View” page opens by default. Initially, it will be empty.

Create Project

Click on Manage -> Create Project.

Report Issue

The project is successfully created. Go to Report Issue.

View Issues

After successfully submitting an issue, you will be redirected to “View Issue” Page.

Invite Users

Click on Invite User at the top of the page and fill in the details; an email invite will be sent to the user at the entered email address.

Add Users to Project

Add the user to the project from the Edit User page.

Account Settings

To change account settings such as your password, go to My Account.

Select Project

To change project, click on project name at the top bar and select the project from dropdown.

My View page

The My View page is updated as projects are added and issues are reported. Each project’s issues are labeled with a separate color so they can be distinguished easily.

Change Status

To change the status of any issue, click the issue link on the My View page. This opens the “View Issue Details” page, where the user can also change the “assign to” field or edit any other details of the bug.

Manage

To view the “Site Information”, click on “Manage”. The administrator can see the site details here.

Create Version

The user can create a version for the project from Manage Projects. Click the project link, then scroll down the “Edit Project” page to find “Versions”.

Custom Field

To create custom fields for any project, go to the Manage Custom Fields page and add a custom field.

Link Custom Field to Project

Once a custom field is successfully created, the user can link it to a project from the Edit Custom Field page.

Settings of Custom Field

The default settings of the custom field can be set from the Edit Custom Field page.

Summary

A summary of the complete account, across all projects and all issues, can be seen on the Summary page. Click the Summary tab.

Search Issues

To search for an issue, the user can search directly from the search box at the top of the page. The search can be done only by bug ID.

Send Reminder

To send a reminder regarding any particular issue, the user can click the “send a reminder” button, then select the users and enter a message.

Issue History

To view issue history, go to the “View Issue Details” page and click “Jump to History”. The page will scroll down to the history of the bug, which shows all the activities performed on it: creation, assignment, status changes, and so on.

Email Notifications

The administrator can configure email notifications from the config_inc.php file in the Bitnami folder. The file can be accessed at “C:\Bitnami\mantis-2.19.0-0\apps\mantis\htdocs\config”.

Configure the SMTP settings for your email provider by copying the email-related variables from the installdir/apps/mantis/htdocs/config/config_inc.php.sample file into the installdir/apps/mantis/htdocs/config/config_inc.php file.
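As a minimal sketch, the copied block in config_inc.php typically looks like the following. The variable names are standard MantisBT email settings; the host, port, and credentials shown are placeholders for illustration and must be replaced with your provider's values:

```php
<?php
// Example SMTP configuration for config_inc.php.
// smtp.example.com and the credentials below are placeholders.
$g_phpMailer_method          = PHPMAILER_METHOD_SMTP; // send via SMTP, not PHP mail()
$g_smtp_host                 = 'smtp.example.com';    // your provider's SMTP server
$g_smtp_port                 = 587;                   // 587 for STARTTLS, 465 for SSL
$g_smtp_connection_mode      = 'tls';                 // '', 'ssl', or 'tls'
$g_smtp_username             = 'user@example.com';    // '' for an open relay
$g_smtp_password             = 'your-app-password';
$g_enable_email_notification = ON;                    // enable notifications globally
$g_from_email                = 'mantis@example.com';  // sender address on outgoing mail
```

After saving the file, file a test issue and confirm the notification arrives; if it does not, check the SMTP credentials and the connection mode/port pairing first.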

Conclusion

Mantis is an easy-to-use bug tracking tool, available for free on the internet. It offers many features to its users, such as email notifications, access control, customization, easy-to-install plugins, and reporting and summarization. It is easy to install, with many support forums available on the internet. The Mantis web-based bug tracker is highly recommended for logging and tracking bugs in software projects.


The post Tutorial #10: Demo of Mantis Bug Tracker Features appeared first on Software Testing Class.

