
Follow Stack Exchange - Software Quality Assurance & T.. on Feedspot


So, I have a program code base and want to use it to log in to a website. Currently this is what I'm dealing with:

// Send a POST request with (URL, post data, content type) and store the HTTP response in Response
var Response = Request.Post(LOGIN_URL, "{post data goes here}", "content type goes here");

So if I understand correctly, a POST request is when you ask a website to accept certain data you're sending it. To complete this POST request the program needs a few things:

Login URL - the URL where you log in (simple).

Post data - data relating to the login boxes etc.? Bit unsure on this one.

Content type - the format of a page. I'm guessing it's something similar to this: "application/x-www-form-urlencoded".

Many thanks :)

Edit: I mainly wanted to clarify my understanding, but also:

What format do I need to put the post data in, and how can I filter out unnecessary post data (if necessary)?
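To illustrate what the post data format usually is for a login form: with the content type "application/x-www-form-urlencoded", the body is just the form fields joined as key=value pairs. A minimal sketch in Python (the field names "username" and "password" are assumptions; check the name= attributes of the real form's input elements):

```python
# Sketch: what "post data" looks like for a typical HTML login form.
# Field names below are hypothetical -- inspect the actual login form
# (or the browser dev tools Network tab) to find the real ones.
from urllib.parse import urlencode

post_data = {
    "username": "myuser",
    "password": "secret",
}

# With Content-Type: application/x-www-form-urlencoded, the request body
# is the fields encoded as key=value pairs joined by "&":
body = urlencode(post_data)
print(body)  # username=myuser&password=secret
```

Only the fields the form actually submits are necessary; anything else in the page (hidden analytics fields aside) can usually be dropped from the post data.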


Analyze the following highly simplified procedure:

Ask: "What type of ticket do you require, single or return?"
IF the customer wants "return"
    Ask: "What rate, Standard or Cheap-day?"
    IF the customer replies "Cheap-day"
        Say: "That will be 11:20"
    ELSE
        Say: "That will be 19:50"
    ENDIF
ELSE
    Say: "That will be 9:75"
ENDIF

Now decide the minimum number of tests needed to ensure that all the questions have been asked, all combinations have occurred, and all replies have been given.

The answer is 3. Please help me understand how the answer is 3; I don't know how to justify it.
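The key is that the nested IFs give exactly three distinct paths through the dialogue, and one test per path exercises every question, every combination, and every reply. A sketch of the procedure as a Python function, with one assertion per path (the function name and signature are my own):

```python
# The procedure has two nested IFs, so there are exactly three paths:
#   1. return + Cheap-day -> inner IF, true branch
#   2. return + Standard  -> inner IF, false branch
#   3. single             -> outer IF, false branch (rate is never asked)
def ticket_price(ticket_type: str, rate: str = "") -> str:
    """Mirror of the simplified ticket-office procedure."""
    if ticket_type == "return":
        if rate == "Cheap-day":
            return "That will be 11:20"
        else:
            return "That will be 19:50"
    else:
        return "That will be 9:75"

# Minimum test set: one test per path covers all questions,
# all combinations and all replies.
assert ticket_price("return", "Cheap-day") == "That will be 11:20"
assert ticket_price("return", "Standard") == "That will be 19:50"
assert ticket_price("single") == "That will be 9:75"
```

Fewer than three tests must leave one of the three Say replies unexercised, so three is the minimum.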


Test object: a big monolithic application (~500k LOC) developed in Java over the last 15 years. Big and (probably overly) complicated backend plus web frontend. Many business processes are implemented in the app. Currently there are 10 teams working on it.

Tests: there are unit tests and integration tests implemented. Code coverage is at around 70%. There are also MANY automated system and e2e tests implemented in a commercial test tool.

Issues:

  1. There are way too many system and e2e tests. They were added over the years, as at first no other tests were being written. Only after a while did someone think of adding Java-based unit and integration tests.

  2. Apart from the code coverage of the unit and integration tests, the test coverage is a big unknown. The knowledge of which functionality is tested by which system test is kept only in the testers' heads. There is no real way to know which tests need to be fixed when a new story is implemented, other than running the regression test suite and looking at what broke.

Question: In my opinion we need to get rid of a big chunk of the system tests ASAP, as they are no longer maintainable, are flaky, and take way too long to run (~24 h, running concurrently on 50 VMs). There are dead tests that test removed functionality, many duplicates, etc.

To do that, I would first like to know the test coverage, to be able to tell which tests are useless and can be deleted safely.

I am looking for a way to determine/estimate the test coverage in such a project.

The obvious way of connecting each test with the corresponding user story/requirement will not work, as for quite many of the implemented functions there are no documented requirements.

The monolith is currently being refactored into roughly independent modules. The current idea is to sort the system tests by the module they belong to. The tests belonging to one module can then be further sorted by the business process they test inside that module. With this I would have a database of tests broken down by module and then by business process, and thus a rough test coverage estimate per process and module. This still does not convince me as being the best approach, though.

So does this make sense? How else could the test coverage be determined/estimated?
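As a sketch of what that module/process database could look like: assuming you can export, per system test, the set of Java packages it exercises (for example by running each test against a JaCoCo-instrumented backend and dumping the execution data per test), the grouping by module becomes mechanical. All package, module, and test names below are invented for illustration:

```python
# Hedged sketch: group system tests by module, given per-test coverage data.
# Assumes an external step produced, for each test, the set of Java packages
# it touched (e.g. per-test JaCoCo execution data). All names are invented.
from collections import defaultdict

# Package-prefix -> module mapping, derived from the ongoing modularisation.
PACKAGE_TO_MODULE = {
    "com.acme.billing": "billing",
    "com.acme.orders": "orders",
}

# Per-test covered packages, exported from the coverage tool.
COVERED = {
    "LoginSmokeTest": {"com.acme.orders.ui"},
    "InvoiceE2ETest": {"com.acme.billing.core", "com.acme.orders.api"},
}

def module_of(package: str) -> str:
    """Map a covered package to its module via prefix matching."""
    for prefix, module in PACKAGE_TO_MODULE.items():
        if package.startswith(prefix):
            return module
    return "unknown"

tests_by_module = defaultdict(set)
for test, packages in COVERED.items():
    for pkg in packages:
        tests_by_module[module_of(pkg)].add(test)

print(dict(tests_by_module))
```

With this inverted index, tests covering no current module (or only "unknown" packages) are candidates for dead tests, and tests with identical covered-package sets are candidates for duplicates, which directly supports the pruning you describe.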


I'm implementing app authorisation at the minute. All pages will be protected except for a few, e.g. login, register. I won't be testing all pages, just one key one.

Behaviour Required: When a guest (not logged in) visits a protected page, e.g. the dashboard, they should be asked to log in and then be redirected to the dashboard upon successful login.

This is what I've managed to write in RSpec so far, but I don't feel it's clear or intuitive:

RSpec.describe "Attempting to access the dashboard", type: :system do
  context "when not logged in" do
    it "asks them to login"

    context "with a subsequent successful login" do
      it "shows the dashboard"
    end
  end
end

which results in:

  1) Attempting to access the dashboard when not logged in asks them to login

  2) Attempting to access the dashboard when not logged in with a subsequent successful login shows the dashboard

How could it be improved?


I am capturing the URL and path from the s3URL in the previous response. Sometimes the S3 URL is not generated, and then the URL and path are not found. In this case I want to suppress the error, because it is an application issue, not a scripting issue.


Let me start with my background.

I've been in the industry for over a year. I got the wonderful opportunity to join the team despite having no prior background, and I'm loving and enjoying every moment so far. My only dilemma is that there were no senior QAs to pass down their knowledge to me.

Our teams are divided into frontend and backend, and I am part of the frontend team. As I mentioned, this is my first ever role, with no senior QAs passing down their knowledge. We have existing projects, but I felt our automation wasn't that great, in the sense that most of it is flaky. I was fairly new to the role, so I didn't know much about automation strategy apart from the idea that "Automation is good! You don't need to test things manually!"

After a while, I realised there is some truth to that, but it doesn't necessarily mean automating everything, which was my previous approach: trying to automate as much functionality as possible each sprint.

Now we are starting a new project, and I am again the sole QA for the frontend team. I would like to know how other teams approach their manual UI regression testing, and also how you go about approaching automation.

Manual: We all know that UI is very fragile, and there are lots of bits and pieces to it. From my experience, anything, literally ANYTHING, could happen. So how do you keep track of every single thing?

Automation: From what I've learned, UI automation is very flaky and should be kept to a minimum due to the ever-changing nature of the frontend. Since I am focusing solely on the frontend, how do you decide what to automate?

Any input would be appreciated. I would love to hear how other companies approach their frontend testing.

Thanks in advance


Is there a way to get the scripts in Java when I record using Katalon Studio?


According to the ISTQB Foundation Level, a test charter is the following:

"An instruction of test objectives and possible test ideas on how to test. Test charters are often used in explorative testing. See also explorative testing".

But the question I ask myself is: how should a test charter be constructed in the first place?

Which points result from the exploratory test?

Can the charter also be decided on during the test, or should it be defined only before the exploratory test begins?


How do I handle a dynamic request URL path in JMeter? This request is not recorded in the script; it appears dynamically only in the login page response, in two places.


Can anyone please give suggestions about benchmark metrics, i.e. average response time, throughput, and 90th percentile values, for websites? Are there industry-standard limits that an application shouldn't exceed, e.g. a default average time of 3 seconds?

What are the default throughput and 90th percentile values?
