Analyze the following highly simplified procedure:
Ask: "What type of ticket do you require, single or return?"
IF the customer wants "return"
    Ask: "What rate, Standard or Cheap-day?"
    IF the customer replies "Cheap-day"
        Say: "That will be 11:20"
    ELSE
        Say: "That will be 19:50"
ELSE
    Say: "That will be 9:75"
Now decide the minimum number of tests needed to ensure that all the questions have been asked, all combinations have occurred, and all replies have been given.
The answer is 3. Can someone please help me justify why the answer is 3? I don't know how to explain it.
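To see why three tests suffice, note that the procedure has exactly three distinct paths through its IF statements. A minimal Ruby sketch of the pricing logic (the method name `ticket_price` is mine; the fares are taken from the procedure):

```ruby
# Each distinct path through the IF statements is one test case.
def ticket_price(ticket_type, rate = nil)
  if ticket_type == "return"
    if rate == "Cheap-day"
      "11:20" # path 1: return, Cheap-day
    else
      "19:50" # path 2: return, Standard
    end
  else
    "9:75"    # path 3: single
  end
end

# Three tests hit every question, every branch and every reply:
puts ticket_price("return", "Cheap-day")
puts ticket_price("return", "Standard")
puts ticket_price("single")
```

A Cheap-day return, a Standard return, and a single together ask both questions, take every branch, and produce all three replies; dropping any one of them leaves one reply untested, so 3 is the minimum.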
Test object: a big monolith application (~500k loc) developed in Java for the last 15 years. Big and (probably overly) complicated backend + web frontend. There are many business processes implemented in the app. Currently there are 10 teams working on it.
Tests: there are unit tests and integration tests implemented. Code coverage is at around 70%.
There are also MANY automated system and e2e tests implemented in a commercial test tool.
There are way too many system and e2e tests. They were added over the years, as at first no other tests were being written; only after a while did someone think of adding Java-based unit and integration tests.
Apart from the code coverage of the unit and integration tests, the test coverage is a big unknown. The knowledge of which functionality is tested by which system test exists only in the heads of the testers. There is no real way to know which tests need to be fixed when a new story is implemented, other than running the regression test suite and looking at what broke.
Question: In my opinion we need to get rid of a big chunk of the system tests ASAP, as they are no longer maintainable, they are flaky, and they take way too long to run (~24h concurrently on 50 VMs). There are dead tests that test removed functionality, many duplicates, etc.
In order to do that, I would first like to know the test coverage, so I can tell which tests are useless and can be deleted safely.
I am looking for a way to determine/estimate the test coverage in such a project.
The obvious way of connecting each test with its corresponding user story/requirement will not work, as for many of the implemented functions there are no documented requirements.
The monolith is currently being refactored into roughly independent modules. The current idea is to sort the system tests by the module they belong to, and then to sort the tests within each module by the business process they exercise. With this I would have a database of tests broken down by module and then by business process, and thus a rough test coverage estimate per process and module. I'm still not convinced this is the best approach, though.
So does this make sense? How else could the test coverage be determined/estimated?
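For what it's worth, even a trivial tagged inventory would give the per-module/per-process counts described above. A minimal sketch; the test names, modules, and processes below are all hypothetical:

```ruby
# Hypothetical inventory: each system test tagged with a module and a business process.
TESTS = [
  { name: "checkout_smoke", mod: "billing", process: "checkout" },
  { name: "invoice_pdf",    mod: "billing", process: "invoicing" },
  { name: "login_legacy",   mod: "users",   process: "authentication" },
].freeze

# Count tests per module, then per business process within each module.
coverage = TESTS.group_by { |t| t[:mod] }.transform_values do |ts|
  ts.group_by { |t| t[:process] }.transform_values(&:size)
end

puts coverage
```

Zero-count cells in such a breakdown point at untested processes; unusually large cells point at likely duplicates worth reviewing first.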
I'm implementing app authorisation at the minute. All pages will be protected except for a few, e.g. login, register. I won't be testing all pages, just one key one.
When a guest (not logged in) visits a protected page, e.g. the dashboard, they should be asked to log in and then be redirected to the dashboard upon successful login.
This is what I've managed to write in RSpec so far, but I don't feel it's clear or intuitive:
RSpec.describe "Attempting to access the dashboard", type: :system do
  context "when not logged in" do
    it "asks them to login"

    context "with a subsequent successful login" do
      it "shows the dashboard"
    end
  end
end
which results in:
1) Attempting to access the dashboard when not logged in asks them to login
2) Attempting to access the dashboard when not logged in with a subsequent successful login shows the dashboard
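Those two output lines come from RSpec concatenating the nested `describe`/`context` strings with each `it` description, which is why the nesting reads awkwardly. A tiny sketch of that naming rule, using the strings from the spec above:

```ruby
# RSpec builds each example's full description by joining the enclosing
# describe/context strings with the it-description, outermost first.
describe_chain = ["Attempting to access the dashboard", "when not logged in"]

example1 = (describe_chain + ["asks them to login"]).join(" ")
example2 = (describe_chain + ["with a subsequent successful login",
                              "shows the dashboard"]).join(" ")

puts example1
puts example2
```

Reading each joined string aloud is a quick check of whether a `context` label fits: if the sentence is clumsy, the nesting probably is too.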
I am capturing URL and Path from the s3URL in the previous response. Sometimes the S3URL is not generated, and then URL and Path are not found. In this case I want to hide the error, because it is an application issue, not a scripting issue.
Been in the industry for over a year, got the wonderful opportunity to join the team despite no prior background, loving and enjoying every moment so far actually. My only dilemma is that there were no senior QAs to pass down their knowledge to me.
Our teams are divided into frontend and backend, and I am part of the frontend team. As I mentioned, this is my first ever role, with no senior QAs passing down their knowledge. We have existing projects, but I felt our automation wasn't that great, in the sense that most of the tests are flaky. I was fairly new to the role, so I didn't know much about automation strategy apart from the idea that "Automation is good! You don't need to test things manually!"
After a while, I realised there is some truth to that, but it doesn't necessarily mean automating everything, which was my previous approach: trying to automate as much functionality as possible each sprint.
Now we are starting a new project, and I am again the sole QA for the frontend team. I would like to know how other teams approach their manual UI regression testing, and also how you go about approaching automation.
Manual: We all know the UI is very fragile, and there are lots of bits and pieces to it. From my experience, anything, and I mean ANYTHING, could happen. So how do you keep track of every single thing?
Automation: From what I've learned, UI automation is very flaky and should be kept to a minimum due to the ever-changing nature of the frontend. Since I am focused solely on the frontend, how do you decide what to automate?
Any input would be appreciated. I would love to hear how other companies approach their frontend testing.
Can anyone please give suggestions about benchmark metrics, i.e. average response time, throughput, and 90th-percentile values for websites? I mean industry standards, in the sense that an application shouldn't exceed these values.
For example, a default average response time of 3 seconds.
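For context, here is how those three metrics relate to raw samples. A small sketch with made-up response times (the numbers are illustrative only; the 3-second figure above is a commonly quoted rule of thumb, not a standard):

```ruby
# Made-up sample: 10 response times (seconds) collected over a 10-second window.
response_times = [1.2, 0.8, 2.5, 1.1, 3.0, 0.9, 1.4, 2.1, 1.0, 1.6]
duration = 10.0 # seconds of load, assumed

average    = response_times.sum / response_times.size          # mean response time
throughput = response_times.size / duration                    # requests per second
# 90th percentile: the value 90% of samples fall at or below.
p90        = response_times.sort[(0.9 * response_times.size).ceil - 1]

puts format("avg=%.2fs throughput=%.1f/s p90=%.2fs", average, throughput, p90)
```

Note how one slow outlier barely moves the average but shows up clearly in the percentile, which is why percentile targets are usually more useful than a single average-time threshold.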