Nice to meet you all here. I have a question about software testing; I am quite new, with around 6 months in software testing.
My question is:
Can we create an automated test case if the test case requires the device to reboot?
You can split your test into two pieces: one that runs before the reboot and one that runs after it. If the second piece needs specific data, save the output of the first piece to persistent storage and load it again during the second piece.
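As a hedged sketch in Java (the file path, the saved value, and the JUnit wiring are all assumptions; how the reboot is triggered and how the runner is re-launched afterwards depends entirely on your setup), the hand-off between the two pieces can be as simple as writing the first piece's output to persistent storage and reading it back in the second:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.nio.file.Files;
    import java.nio.file.Path;

    import org.junit.jupiter.api.Test;

    class RebootSurvivingTest {

        // Hypothetical location that survives the reboot.
        private static final Path STATE_FILE = Path.of("/var/tmp/reboot-test-state.txt");

        // Piece 1: runs before the reboot and saves whatever the second piece needs.
        // In practice this piece and the one below would live in separate suites,
        // selected by the runner before and after the reboot respectively.
        @Test
        void beforeReboot_savesState() throws Exception {
            String deviceConfigId = "cfg-1234";       // e.g. output of the setup steps
            Files.writeString(STATE_FILE, deviceConfigId);
        }

        // Piece 2: runs after the device is back up and loads the saved state.
        @Test
        void afterReboot_usesSavedState() throws Exception {
            String deviceConfigId = Files.readString(STATE_FILE);
            assertEquals("cfg-1234", deviceConfigId); // then continue verifying behaviour
        }
    }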
If I have UI automation tests, why do I need to write unit tests?
If I need to check that a method returns some output for a given input, for example the result of an addition that is then displayed in a view, why do I need a unit test if I can confirm that the output in the view is correct (or incorrect) through a UI automation test?
Unit tests and end-to-end tests (UI tests) serve two different purposes.
Unit tests tell you when a unit of code (module, class, function, interface) has an issue.
End-to-end tests tell you how that failure affects the end-to-end output.
Let's use an analogy to understand why we need both.
Suppose you were manufacturing a car by assembling different components like the carburettor, gearbox, tyres, crankshaft, etc. All these parts are made by different vendors (think developers).
When the car fails to work as expected, will you need to test individual components to figure out where the problem originates?
Will testing components before assembling the car save you time and effort?
Typically, you want to make sure each component works as expected (unit tests) before you add it to your car.
When the car does not work as expected, you test each component to find the root cause of the problem.
This typically works by creating an assembly line (CI pipeline). Your testing strategy looks like this:
test individual components
test if they work when interfaced with other components
test the car once all components are assembled together.
This testing strategy is what we call a testing pyramid in programming.
Reading this might give you more insight: https://martinfowler.com/bliki/TestPyramid.html
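To make the contrast concrete for the addition example in the question, here is a hedged Java sketch (the Calculator class, element IDs, and URL are invented for illustration): the unit test exercises the add() method directly, while the UI test drives a browser with Selenium and only checks what ends up in the view.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;

    class AdditionTests {

        // Hypothetical unit under test.
        static class Calculator {
            int add(int a, int b) { return a + b; }
        }

        // Unit test: talks to the code directly, fails fast and points at the unit.
        @Test
        void addReturnsSum() {
            assertEquals(5, new Calculator().add(2, 3));
        }

        // End-to-end/UI test: drives the running application through the browser.
        @Test
        void sumIsDisplayedInView() {
            WebDriver driver = new ChromeDriver();
            try {
                driver.get("http://localhost:8080/calculator");  // assumed URL
                driver.findElement(By.id("a")).sendKeys("2");     // assumed element IDs
                driver.findElement(By.id("b")).sendKeys("3");
                driver.findElement(By.id("add")).click();
                assertEquals("5", driver.findElement(By.id("result")).getText());
            } finally {
                driver.quit();
            }
        }
    }

When addReturnsSum fails, the broken unit is obvious; when only sumIsDisplayedInView fails, the defect could be anywhere between the code, the view, and the wiring, which is the point the pyramid above makes.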
Two reasons immediately come to mind as to why you would want unit tests despite having automation tests.
Unit tests make ruthless code refactoring a much less daunting challenge and mitigate its risk.
Unit tests provide invaluable documentation of the code and of what each module does (automation tests don't give you this). When the code changes, the unit tests change with it, unlike stale documentation in some wiki or doc that never gets updated as the code continues to change and evolve over time.
In addition to Nishant's and James's answers: with UI/end-to-end tests it is much harder to test for certain error conditions.
First of all, you need to understand that unit test cases and user interface (UI) test automation are two different concepts. In unit test cases, you write test cases per unit and test them module by module: you're actually testing each module separately.
Test automation, on the other hand, covers end-to-end testing. It tests your end-to-end inputs and their respective outputs. Both have their own advantages, so you need to use both on your product to make sure it is bug-free. Let's better understand the need for unit tests with the help of an example:
You're building a chat app. For the app, you're integrating different modules like login, registration, sending and receiving messages, message history, etc. Now, suppose there are multiple developers working on this product: each developer has worked on a different module. In this scenario, you need to join all the modules into the system flow to make the complete product. When you integrate all the modules, you find that the product is not able to store messages. So, now you need to test each module independently, because you can't tell which specific module didn't work.
To avoid such cases, it's better to test each module before merging it with the others. This is called unit testing. If unit testing is done correctly, you will catch the bug immediately. Once all the unit test cases pass, you can finally start integrating modules.
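As a hedged illustration in Java (the MessageStore class and its methods are invented for the example and are not part of the original answer), a per-module unit test for the message-storage module of such a chat app might look like this:

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.ArrayList;
    import java.util.List;

    import org.junit.jupiter.api.Test;

    class MessageStoreTest {

        // Hypothetical module under test: stores and retrieves chat messages.
        static class MessageStore {
            private final List<String> messages = new ArrayList<>();
            void save(String message) { messages.add(message); }
            List<String> history() { return List.copyOf(messages); }
        }

        // Tests the module in isolation, before it is integrated with login,
        // registration, or any UI code.
        @Test
        void savedMessageAppearsInHistory() {
            MessageStore store = new MessageStore();
            store.save("hello");
            assertEquals(List.of("hello"), store.history());
        }
    }

If this test fails, you know the storage module is broken before any integration or UI automation runs.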
Unit testing is generally executed as part of an assembly line (CI pipeline). Your product usually works if you create a good testing strategy and write solid test cases. The flow looks a bit like this:
Test individual modules
Start integrating and testing each functionality and see if it's working or not
Run UI automation test cases on the product once you have integrated all the modules
In the end, if all test cases pass, that means your system is ready to work flawlessly.
I have a web application that I need to test continuously and rigorously, and I want to automate the testing process in JIRA.
I use a JIRA Cloud subscription.
How do I implement the requirements below in JIRA?
1 - Write Use Cases (user stories) and save them as items in JIRA so that I can easily find, search, and filter them (as I can with Issues, for example).
2 - Create Test Cases by recording them while I run the test manually the first time (like recording a macro in Excel), then re-run the recorded test cases anytime I want, capturing the output each time I run them.
Each Test Case created should be linked to its parent Use Case.
Each Use Case can have many Test Cases linked with it.
A Test Case could be associated with multiple Use Cases.
3 - Run all recorded Test Cases in batches, capture the output for each run of each test case, and then manually judge whether the test case succeeded or failed for that run.
Kindly advise.
Some, but not all, of what you describe is possible with out-of-the-box JIRA.
It is possible to create a custom issue type of 'Test Case'. You can give this issue type all the fields that are appropriate for a test case.
Having the custom issue type makes it easier to do searches (e.g. search for all open issues of type 'Test Case' on a project).
JIRA allows you to have many-to-many relationships using issue links. Unfortunately, searching using issue links is a pain unless you have a plugin like Script Runner. Script Runner gives you functions like hasLinks, linkedIssueOf and epicsOf.
If you want to do more sophisticated linking of actual tests with JIRA then it would be worth considering some of the test plugins, such as Zephyr. This plugin allows you to create and execute tests from within JIRA.
Another thing that is worth considering is JIRA integration with source control systems. For example, JIRA has good integration with GitHub. It would be possible to store your test cases under source control and then link them to JIRA issues as a part of the commit process.
New tickets in JIRA can also be created using a REST API call; below are a few links that cover ticket-creation calls with examples. Hope this helps!
https://developer.atlassian.com/jiradev/jira-apis/jira-rest-apis/jira-rest-api-tutorials/jira-rest-api-example-create-issue
https://docs.atlassian.com/jira/REST/cloud/
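As a rough sketch only (the site URL, credentials, project key, and field values below are placeholders; check the current JIRA Cloud REST documentation for the exact payload and authentication scheme), creating an issue of a custom 'Test Case' type from Java could look roughly like this:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class CreateJiraIssue {
        public static void main(String[] args) throws Exception {
            // Placeholders: replace with your own site, credentials, and project key.
            String site = "https://your-domain.atlassian.net";
            String auth = Base64.getEncoder()
                    .encodeToString("you@example.com:API_TOKEN".getBytes());

            // Minimal payload for an issue of the custom 'Test Case' type.
            String body = "{ \"fields\": {"
                    + " \"project\": { \"key\": \"PROJ\" },"
                    + " \"summary\": \"Checkout page rejects invalid card numbers\","
                    + " \"issuetype\": { \"name\": \"Test Case\" } } }";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(site + "/rest/api/2/issue"))
                    .header("Authorization", "Basic " + auth)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }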
What are unit testing, black-box testing, and white-box testing? I googled, but all the explanations I found were very technical. Can anyone answer this question in a simple way with an appropriate example?
In black box testing, you don't care how the internals of the thing being tested work. You invoke the exposed API and check the result; you don't care what the thing being tested did to give you the result.
In white box testing, you do care how the internals of the thing being tested work. So instead of just checking the output of your thing, you might check that variables internal to the thing being tested are correct.
Unit testing is a way of testing software components. The "Unit" is the thing being tested. You can do both black and white box testing with unit tests; the concept is orthogonal to white/black-box testing.
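A minimal Java sketch of that distinction, using a made-up Counter class (none of this comes from the answer itself): the black-box style test only checks the value returned by the exposed API, while the white-box style test also inspects internal state.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class CounterTest {

        // Hypothetical unit under test.
        static class Counter {
            int hits;                          // internal state
            int increment() { return ++hits; }
        }

        // Black-box style: only the exposed API and its result are checked.
        @Test
        void incrementReturnsNewValue() {
            Counter c = new Counter();
            assertEquals(1, c.increment());
        }

        // White-box style: the test also inspects internal state directly.
        @Test
        void incrementUpdatesInternalHitCount() {
            Counter c = new Counter();
            c.increment();
            assertEquals(1, c.hits);
        }
    }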
A very non-technical explanation lacking any details... here it comes.
Blackbox Testing: Testing an application without any knowledge of how its internals work.
Whitebox Testing: Testing an application with knowledge of how its internals work, such as by having the source code side by side while you are doing your test.
Unit Testing: This is where you create tests that interact directly with your application code. You call a function in your application and assert that it returns value X. Unit tests are usually, but not always, created by the developers themselves, whereas if a company does whitebox and blackbox testing, that can be done by anyone.
This is a very basic explanation.
Black Box Testing:
Tester is a human and not the developer
Tester does not know how the system was implemented *
Tester will report an issue when the response from the system to any step of the test is not the expected result.
White Box Testing:
Tester is a human and not the developer
Tester does know how the system was implemented *
Tester will report an issue when the response from the system to any step of the test is not the expected result and is more likely to detect an issue with the test case itself or with the system despite receiving expected results.
Unit Testing:
Tester is usually code that tests a particular module within a system. For example, in Java, a project may have a class named Student and a test class named StudentTest. For each of the functions in Student (like getGrades), StudentTest might have 0 or more functions to test them (like getGradesTest). This is just one such way to go about it.
The testing code typically only knows the expected output for various inputs to a portion of the system.
Unit tests are often run before submitting code, or run automatically when building an application for deployment. The goal is to prevent as many bugs as possible from being introduced into the system when adding, changing or removing functionality.
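As a hedged sketch of the Student/StudentTest pattern described above (the grade-handling logic is invented purely for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.util.List;
    import org.junit.jupiter.api.Test;

    class StudentTest {

        // Minimal stand-in for the Student class described above.
        static class Student {
            private final List<Integer> grades;
            Student(List<Integer> grades) { this.grades = grades; }
            List<Integer> getGrades() { return grades; }
        }

        // One test function per behaviour of getGrades, as the answer describes.
        @Test
        void getGradesTest() {
            Student student = new Student(List.of(90, 85, 70));
            assertEquals(List.of(90, 85, 70), student.getGrades());
        }
    }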
* The amount of knowledge known between a black box tester and a white box tester varies from organization to organization. For example, what I consider usability testing, another company might call black box testing. A white box tester in some companies might be another developer (developer QA), whereas another organization may not allow any testing sign-offs to be completed by a developer. A black box tester could be someone who just has a list of instructions they need to follow and validate, or it could be someone who generally knows how the system works, but just not at a particularly detailed level. For example:
A black box tester may or may not identify an issue despite a test case that matches expectations, like an e-commerce test case that omits the step of collecting a guest checkout shipping address.
Essentially, white box and black box testing are rarely implemented strictly. Most organizations have unit tests, developer testing (which may or may not be formally documented - it depends upon the implications of a failure), QA testers (black, white, and every shade of gray in between), and user testing / business sign-off (the people who should be involved throughout the project, but in poorly run organizations only show up at the beginning and end, and send a completed project back to design right before deployment).
Blackbox Testing: This is always user- or client-based testing, where testing is done based on the requirements provided. This testing is done by testers only.
Whitebox Testing: This verifies the flow of the code base: testing the flow of condition statements, loop statements, etc. This is mainly from the developer's perspective.
Unit Testing: This is part of white box testing, as you test each method in the code with your test data and assert the results. Nowadays this is also done by testers, and companies look for this skill in testers who are able to understand the code and algorithms.
I am working on a C++ application that uses computer vision techniques to identify various types of objects in a sequence of images. The (1000+) images have been hand-classified, so we have an XML file for each image containing a description of where the objects are actually located in the images.
I would like to know if there is a testing framework that can understand/graph results from tests that are numeric, in this case some measure of the error in the program's classification of the images, rather than just pass/fail style unit tests.
We would like to use something like CDash/CTest for running these automated tests, and viewing over time how improvements to the vision algorithms are causing the images to be more correctly classified.
Does anyone know of a tool/framework that can do this?
I think you should distinguish between Unit testing and algorithm performance (=accuracy and/or speed) evaluation. You should apply both, but separately.
Unit testing should tell you whether your code does what it's supposed to do. Not sure if/how you can unit test the whole chain from a raw image to extracted objects, but you should be able to test the "units" (modules/methods/classes) individually that are combined to do the job. Unit tests should give you "fail" or "pass". If a speed optimization changes the code's behavior, the unit test should tell you this. For unit testing there are plenty of frameworks available (I like Google Test, but there are many others).
Your question seems to aim more at the second part: evaluating the quality of your algorithm. I personally love TeamCity, which is mainly intended as a Java/.NET continuous integration server, but you can easily use it with C++ too. I wrote a few lines of code in our shop to output Google Test results in a TeamCity format, making use of their service API. Each time someone checks in a new revision, TeamCity executes the build (which can be a Visual Studio solution, Ant, a command-line script, or others). The results are visible to all teammates through a nice web UI. Furthermore, you can report custom build statistics. This can be used for anything like performance testing of your algorithms. You simply output a line like
##teamcity[buildStatisticValue key='detectedObjectsPercent' value='88.3']
on the console from your application (which must be configured to run in each build) and TeamCity will store these values and provide a nice graph (values over time) on the web user interface.
Don't forget to set up your custom chart as described here.
I think TeamCity is really simple to set up, so just give it a try! I even like it when I work on a project just by myself!
What you are describing is a typical computer vision/ image processing testing application framework. Although I have designed and used several such systems over the years, they were/are all proprietary.
Such a general purpose testing tool should have variable tolerances, different measures of Type I/II errors and error rates, total summaries and also case-by-case identification of problems. It should also provide different views to different users - for example, while debugging, a programmer needs different data than the release/project manager.
A DB-driven back-end and automated test suites enhanced with statistical plots would be great too!
Unfortunately, I do not know of any such testing frameworks available.
I have always had it in my mind to start an open-source project for such a system, but time and resources are scarce, and I was never sure of the actual desirability of such a system (though I am quite sure that it can be made general purpose to suit the needs of many applications).
I would be very interested to know if there is real interest in such a system, it may get the wheels of this project moving...
I think you will have to write your own code at this time.
I work as a tester for an organisation that has a web service as its critical application. Currently we load massive amounts of test data through the web front end, as this is how it would be done in the real world.
This gives the data an amount of legitimacy and prevents errors in the format of the data. However, it is very time-consuming to load the data in this way, and I often wonder if loading data directly into the database would be more productive.
Have other people faced this decision? Which option did you opt for? Is there another solution that would give both speed and legitimacy to the data?
This comes from a developer's perspective rather than a tester's, so it may or may not apply.
I can't speak for the organization as a whole, but in our project we have spent some time creating "real-world-like" data that we load into the test database using SQL scripts. This data is a combination of real data from the production environment and data that is tailored to represent specific "problem situations" in our product.
The scripts are run automatically as part of building our software and are used by automated integration tests, driven by a unit testing framework. These tests will test finding, creating, editing and deleting data through various interfaces that are available.
During such a build and test run the test database is reset and reloaded with data on a number of occasions. This is done in order to remove dependencies between tests; one test should not rely on data created or modified by another test, and also because the data for some tests might differ from that data of other tests. A majority of all tests are executed based on the same test data though.
Setting up this test data (and maintaining it) has been (and is sometimes) somewhat of a headache, but in the long run it has worked well in our case.
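A minimal sketch of how such a reset-and-reload step can be wired into JUnit-style integration tests, assuming hypothetical script paths and connection details (none of these names come from the original answer, and it assumes the JDBC driver accepts multi-statement scripts):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    import org.junit.jupiter.api.BeforeEach;
    import org.junit.jupiter.api.Test;

    class CustomerIntegrationTest {

        // Hypothetical connection details for the dedicated test database.
        private static final String TEST_DB_URL = "jdbc:postgresql://localhost/testdb";

        // Reset and reload the test data before each test so that no test depends
        // on data created or modified by another test.
        @BeforeEach
        void resetTestData() throws Exception {
            try (Connection conn = DriverManager.getConnection(TEST_DB_URL, "test", "test");
                 Statement stmt = conn.createStatement()) {
                stmt.execute(Files.readString(Path.of("sql/reset_schema.sql")));
                stmt.execute(Files.readString(Path.of("sql/load_realworld_like_data.sql")));
            }
        }

        @Test
        void findsCustomerLoadedByScript() throws Exception {
            // ... exercise the application's find/create/edit/delete interfaces here ...
        }
    }

Each test then starts from the same known data set, which is what removes the dependencies between tests that are mentioned above.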
In most situations like this, testers prefer to load test data using scripts, because it is not possible (too time-consuming) to load the data through the UI. A key point for testers: each test should be performed on either a single row of data or the whole DB data. So for better testing, the way to go is to write scripts that load the data.
One more point here: writing the script is a one-time investment to load data for the whole project.
In the end, we decided to migrate from a system where data is set up via the front end to a direct data-insert system, but we keep an eye on the data to make sure it stays real-world. This has worked well, and the tests run much faster.
I had the same problem with test data. In our organization, there is a batch job which populates the database with production-like data.
I coordinate with functional testers to get the test data for my LoadRunner scripts.