I'm not very familiar with the concept of testing in programming languages, although I know the basic idea and some of the principles for testing code, such as unit tests. I haven't written any tests myself yet, but the general idea is more or less clear. When it comes to Robotic Process Automation, however, I get stuck on how to properly test my workflows.
If I have modules that don't interact with any user interface, I can clearly create a test environment: a function that is passed some arguments and returns a result, which is then compared to the expected one.
But what are the best practices for testing the parts of a workflow that interact with a user interface and involve clicks, typing into fields and so on?
If anyone has experience creating automated tests in RPA, for instance in UiPath, I would be grateful to see it explained. Any ideas, regardless of how much hands-on experience they come from, would be highly appreciated.
By the way, anyone who has worked with UiPath may have noticed that they developed the so-called ReFramework, which, according to them, follows best practices for RPA deployment. This framework contains a test folder and some test modules, but I don't understand how they work or how I should adjust them to fit a program I have developed myself.
Thanks for the question.
I am an RPA developer, and I also test workflows, though not from a dedicated tester's perspective.
If you look closely, there are many things to test.
Case 1
Since you said you are dealing with a web portal, you will probably use the Click activity. It has a property called Selector, which is auto-generated and identifies the UI element. A selector contains many attributes, and some of them may hold static values; relying on those is bad practice.
Let's take an example: a selector for a Submit button. In it, the idx and uipath_custom_id attributes hold static values that may change between runs, while the name 'Submit' and the class never change. As a tester, you can catch this kind of mistake made by the developer.
Keep in mind: never leave static values or numbers in any selector attribute. Use wildcards (* and ?) instead:
https://studio.uipath.com/v2017.1/docs/selectors-with-wildcards
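For illustration, here is a hedged sketch of what such a selector might look like; the tag and attribute values below are invented, not taken from the original post:

    <!-- Fragile auto-generated selector: idx and uipath_custom_id hold static values -->
    <webctrl tag='BUTTON' name='Submit' class='btn-primary' idx='2' uipath_custom_id='17' />
    <!-- More robust: keep the stable name and class, wildcard or drop the volatile attributes -->
    <webctrl tag='BUTTON' name='Submit' class='btn*' />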
It can also happen that a web page has two buttons with the same name and the same class, so the generated selectors are nearly identical except for the ID; you need to take care of this as well, since the ID always changes.
Always keep your workflows small, use the appropriate activities and keep business logic in separate Sequence activities. You can also review how well the flow is optimized.
If you are dealing with other applications, such as Excel or SAP, check that they are closed once the work is done.
These are the kinds of things you can test.
Thanks
It would be better if you described your scenario so that the community can help you with specific test cases. :)
I have a web application that I need to test continuously and rigorously, and I want to automate the testing process in JIRA.
I use JIRA Cloud subscription.
How do I implement the requirements below in JIRA?
1 - Write Use Cases (user stories) and save them as items in JIRA so that I can easily find, search and filter them (just as I can with Issues, for example).
2 - Create Test Cases by recording them while I run the test manually the first time (like recording a macro in Excel), and then be able to re-run them any time I want, recording the output of each run.
Each Test Case created should be linked to its parent Use Case.
Each Use Case can have many Test Cases linked to it.
A Test Case could be associated with multiple Use Cases.
3 - Run all recorded Test Cases in batches, capture the output for each run of each test case, and then manually judge whether the test case succeeded or failed for that run.
Kindly advise.
Some, but not all of what you describe is possible with out-of-the-box JIRA.
It is possible to create a custom issue type of 'Test Case'. You can give this issue type all the fields that are appropriate for a test case.
Having the custom issue type makes it easier to do searches (e.g. search for all open issues of type 'Test Case' on a project).
JIRA allows you to have many-to-many relationships using issue links. Unfortunately, searching by issue links is a pain unless you have a plugin like ScriptRunner. ScriptRunner gives you JQL functions like hasLinks, linkedIssuesOf and epicsOf.
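As a hedged sketch of the kind of JQL this enables (the project key, issue type names and link type below are invented for illustration), the first query finds open test cases using only the custom issue type, while the second uses ScriptRunner's linkedIssuesOf to find everything linked to a particular use case:

    project = WEB AND issuetype = "Test Case" AND resolution = Unresolved
    issueFunction in linkedIssuesOf("issuetype = 'Use Case' AND key = WEB-12", "is tested by")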
If you want to do more sophisticated linking of actual tests with JIRA then it would be worth considering some of the test plugins, such as Zephyr. This plugin allows you to create and execute tests from within JIRA.
Another thing that is worth considering is JIRA integration with source control systems. For example, JIRA has good integration with GitHub. It would be possible to store your test cases under source control and then link them to JIRA issues as a part of the commit process.
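As a rough illustration of that last point (the issue key and message are made up), putting the JIRA issue key in the commit message is enough for the integration to link the commit to the issue:

    git commit -m "WEB-42 add regression test script for the login form"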
New tickets in JIRA can also be created using a REST API call; below are a few links that describe the issue-creation call, with examples. Hope this helps!
https://developer.atlassian.com/jiradev/jira-apis/jira-rest-apis/jira-rest-api-tutorials/jira-rest-api-example-create-issue
https://docs.atlassian.com/jira/REST/cloud/
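As a minimal sketch of the create-issue call those links describe, assuming the custom 'Test Case' issue type mentioned above exists and using an invented project key and summary, you POST a JSON body like this to your instance:

    POST /rest/api/2/issue
    {
      "fields": {
        "project":   { "key": "WEB" },
        "summary":   "Login form rejects invalid passwords",
        "issuetype": { "name": "Test Case" }
      }
    }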
I have a web application in Vaadin. It has some forms, search fields, buttons, etc., and is backed by a SQL database. I have been using Selenium, Sahi Open Source and some other tools for automated GUI testing.
Problem is: recording GUI actions for automated testing isn't really useful, because it feels like more manual work than automation; I need to record the tests manually anyway.
Question is: is there a better way to test a web application? How do you test your web applications? Is there any free tool that automatically detects bugs in my web application?
This won't be possible unless someone invents sentient AI, and even that might not be enough. In our company we have a separate QA department (intelligent human beings), and they keep asking questions like "how are we supposed to test this flow" and "is this the expected response".
Without a test that is aware of the business flow, you're limited to what bots do: randomly crawl through the site and try to hit a "500 page", and that is not enough. If you're tired of writing tests, you can:
Use a static code analysis tool (like JSHint) to check whether your code is written in a "best practices" way
Use your users as testers (have a streamlined release process and an error-reporting mechanism so you can address production bugs as quickly as possible)
Hire someone to write the tests for you
There are lots of ways to test software. None of them are fully automatic: they all require that actual behavior be compared against expected behavior, and expected behavior cannot be inferred automatically by a machine. It must instead be prescribed and defined by humans, and then translated into a form a machine can understand and use to determine whether it matches the actual behavior.
Here is a starting point to start reading about other ways to test your software:
https://en.wikipedia.org/wiki/Software_testing
I am working on a C++ application that uses computer vision techniques to identify various types of objects in a sequence of images. The (1000+) images have been hand-classified, so we have an XML file for each image containing a description of where the objects are actually located in the images.
I would like to know if there is a testing framework that can understand/graph results from tests that are numeric, in this case some measure of the error in the program's classification of the images, rather than just pass/fail style unit tests.
We would like to use something like CDash/CTest for running these automated tests, and viewing over time how improvements to the vision algorithms are causing the images to be more correctly classified.
Does anyone know of a tool/framework that can do this?
I think you should distinguish between Unit testing and algorithm performance (=accuracy and/or speed) evaluation. You should apply both, but separately.
Unit testing should tell you whether your code does what it's supposed to do. Not sure if/how you can unit test the whole chain from a raw image to extracted objects, but you should be able to individually test the "units" (modules/methods/classes) that are combined to do the job. Unit tests should give you "fail" or "pass". If a speed optimization changes the code's behavior, the unit test should tell you this. For unit testing there are plenty of frameworks available (I like Google Test, but there are many others).
Your question seems to aim more at the second part: evaluating the quality of your algorithm. I personally love TeamCity, which is mainly intended as a Java/.NET continuous integration server, but you can easily use it with C++ too. I wrote a few lines of code in our shop to output Google Test results in TeamCity format, making use of their service messages API. Each time someone checks in a new revision, TeamCity executes the build (which can be a Visual Studio solution, Ant, a command-line script or other things). The results are visible to all teammates through a nice web UI. Furthermore, you can report custom build statistics. This can be used for anything, including performance testing of your algorithms. You simply output a line like
##teamcity[buildStatisticValue key='detectedObjectsPercent' value='88.3']
on the console from your application (which must be configured to run in each build) and TeamCity will store these values and provide a nice graph (values over time) on the web user interface.
Don't forget to set up your custom chart as described in the TeamCity documentation.
I think TeamCity is really simple to set up, so just give it a try! I even like it when I work on a project just by myself!
What you are describing is a typical computer vision/ image processing testing application framework. Although I have designed and used several such systems over the years, they were/are all proprietary.
Such a general purpose testing tool should have variable tolerances, different measures of Type I/II errors and error rates, total summaries and also case-by-case identification of problems. It should also provide different views to different users - for example, while debugging, a programmer needs different data than the release/project manager.
A DB-driven back-end and automated test suites enhanced with statistical plots would be great too!
Unfortunately, I do not know of any such testing frameworks available.
I have always had it in my mind to start an open-source project for such a system, but time and resources are scarce, and I was never sure of the actual desirability of such a system (though I am quite sure that it can be made general-purpose enough to suit the needs of many applications).
I would be very interested to know if there is real interest in such a system; it may get the wheels of this project moving...
I think you will have to write your own code at this time.
This question is more or less programming language agnostic. However as I'm mostly into Java these days that's where I'll draw my examples from. I'm also thinking about the OOP case, so if you want to test a method you need an instance of that methods class.
A core rule for unit tests is that they should be autonomous, and that can be achieved by isolating a class from its dependencies. There are several ways to do it and it depends on if you inject your dependencies using IoC (in the Java world we have Spring, EJB3 and other frameworks/platforms which provide injection capabilities) and/or if you mock objects (for Java you have JMock and EasyMock) to separate a class being tested from its dependencies.
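As a minimal sketch of that isolation idea using EasyMock, which the question mentions (the interface, class and method names below are invented for illustration):

    import static org.easymock.EasyMock.*;
    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class InvoiceServiceTest {

        @Test
        public void totalIsReadThroughTheMockedRepository() {
            // Hypothetical dependency (an interface), mocked so the unit test never touches a database
            InvoiceRepository repository = createMock(InvoiceRepository.class);
            expect(repository.totalForCustomer(42L)).andReturn(100.0);
            replay(repository);

            // Hypothetical class under test, with its dependency injected through the constructor
            InvoiceService service = new InvoiceService(repository);
            assertEquals(100.0, service.totalForCustomer(42L), 0.001);

            verify(repository);
        }
    }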
If we need to test groups of methods in different classes* and see that they are well integrated, we write integration tests. And here is my question!
At least in web applications, state is often persisted to a database. We could use the same tools as for unit tests to achieve independence from the database. But in my humble opinion I think that there are cases when not using a database for integration tests is mocking too much (but feel free to disagree; not using a database at all, ever, is also a valid answer as it makes the question irrelevant).
When you use a database for integration tests, how do you fill that database with data? I can see two approaches:
Store the database contents for the integration test and load them before starting the test. Whether it's stored as an SQL dump, a database file, XML or something else would be interesting to know.
Create the necessary database structures by API calls. These calls are probably split up into several methods in your test code and each of these methods may fail. It could be seen as your integration test having dependencies on other tests.
How are you making certain that database data needed for tests is there when you need it? And why did you choose the method you chose?
Please provide an answer with a motivation, as it's in the motivation the interesting part lies. Remember that just saying "It's best practice!" isn't a real motivation, it's just re-iterating something you've read or heard from someone. For that case please explain why it's best practice.
*I'm including one method calling other methods in (the same or other) instances of the same class in my definition of unit test, even though it might technically not be correct. Feel free to correct me, but let's keep it as a side issue.
I prefer creating the test data using API calls.
In the beginning of the test, you create an empty database (in-memory or the same that is used in production), run the install script to initialize it, and then create whatever test data used by the database. Creation of the test data may be organized for example with the Object Mother pattern, so that the same data can be reused in many tests, possibly with minor variations.
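A minimal sketch of the Object Mother idea, with invented domain classes:

    // Central factory that knows how to build valid, reusable test data
    public class TestCustomers {

        public static Customer activeCustomer() {
            Customer customer = new Customer("Alice", "alice@example.com");
            customer.activate();
            return customer;
        }

        public static Customer customerWithUnpaidInvoice() {
            Customer customer = activeCustomer();
            customer.addInvoice(new Invoice(250.0, false)); // false = not yet paid
            return customer;
        }
    }

A test then persists these canned objects through the normal API (for example customerDao.save(TestCustomers.customerWithUnpaidInvoice())) before exercising the code under test, possibly tweaking a field or two for its specific variation.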
You want to have the database in a known state before every test, in order to have reproducible tests without side effects. So when a test ends, you should drop the test database or roll back the transaction, so that the next test can recreate the test data always in the same way, regardless of whether the previous tests passed or failed.
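A minimal JUnit 4 sketch of that known-state rule, assuming an in-memory HSQLDB reachable over plain JDBC (the URL and the placeholder test are examples, not part of the original answer):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class CustomerPersistenceIT {

        private Connection connection;

        @Before
        public void startFromKnownState() throws SQLException {
            connection = DriverManager.getConnection("jdbc:hsqldb:mem:testdb", "sa", "");
            connection.setAutoCommit(false);
            // run the install script and create the test data here
        }

        @After
        public void undoEverything() throws SQLException {
            connection.rollback();   // the next test recreates its data from scratch
            connection.close();
        }

        @Test
        public void insertedCustomerCanBeRead() throws SQLException {
            // exercise the code under test against 'connection' here
            assertTrue(connection.isValid(1));
        }
    }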
The reason why I would avoid importing database dumps (or similar), is that it will couple the test data with the database schema. When the database schema changes, you would also need to change or recreate the test data, which may require manual work.
If the test data is specified in code, you will have the power of your IDE's refactoring tools at hand. When you make a change which affects the database schema, it will probably also affect the API calls, so you will need to refactor the code using the API anyway. With nearly the same effort you can also refactor the creation of the test data - especially if the refactoring can be automated (renames, introducing parameters etc.). But if the tests rely on a database dump, you would need to manually refactor the database dump in addition to refactoring the code which uses the API.
Another thing related to integration testing the database, is testing that upgrading from a previous database schema works right. For that you might want to read the book Refactoring Databases: Evolutionary Database Design or this article: http://martinfowler.com/articles/evodb.html
In integration tests, you need to test with a real database, because you have to verify that your application can actually talk to the database. Isolating the database as a dependency means that you are postponing the real test of whether your database was deployed properly, whether your schema is as expected and whether your app is configured with the right connection string. You don't want to discover problems with these when you deploy to production.
You also want to test with both pre-created data sets and an empty data set. You need to test both the path where your app starts with an empty database containing only your default initial data and begins creating and populating data, and the path with well-defined data sets that target specific conditions you want to test, such as stress, performance and so on.
Also, make sure that you have the database in a well-known state before each test. You don't want dependencies between your integration tests.
Why are these two approaches presented as mutually exclusive?
I can't see any viable argument for not using pre-existing data sets, especially particular data that has caused problems in the past.
I can't see any viable argument for not programmatically extending that data with all the possible conditions that you can imagine causing problems, and even a bit of random data for integration testing.
In modern agile approaches, Unit tests are where it really matters that the same tests are run each time. This is because unit tests are aimed not at finding bugs but at preserving the functionality of the app as it is developed, allowing the developer to refactor as needed.
Integration tests, on the other hand, are designed to find the bugs you did not expect. Running with somewhat different data each time can even be good, in my opinion. You just have to make sure your test preserves the failing data if you get a failure. Remember, in formal integration testing, the application itself will be frozen except for bug fixes, so your tests can be changed to test for the maximum possible number and kinds of bugs. In integration, you can and should throw the kitchen sink at the app.
As others have noted, of course, all this naturally depends on the kind of application that you are developing and the kind of organization you are in, etc.
It sounds like your question is actually two questions: should you exclude the database from your testing, and when you do use a database, how should you generate the data in it?
When possible I prefer to use an actual database. Frequently the queries (written in SQL, HQL, etc.) in CRUD classes can return surprising results when confronted with an actual database. It's better to flush these issues out early on. Often developers will write very thin unit tests for CRUD, testing only the most benign cases. Using an actual database for your testing can exercise all kinds of corner cases you may not even have been aware of.
That being said, there can be other issues. After each test you want to return your database to a known state. At my current job we nuke the database by executing all the DROP statements and then completely recreating all the tables from scratch. This is extremely slow on Oracle, but can be very fast if you use an in-memory database like HSQLDB. When we need to flush out Oracle-specific issues we just change the database URL and driver properties and then run against Oracle. If you don't have this kind of database portability, Oracle also has some kind of database snapshot feature which can be used specifically for this purpose: rolling back the entire database to some previous state. I'm not sure what other databases offer.
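As a rough sketch of that switch (the file name, keys and host are invented), keeping the connection details in a properties file means the same test suite can run against either database without code changes:

    # db-test.properties
    jdbc.url=jdbc:hsqldb:mem:testdb
    jdbc.driver=org.hsqldb.jdbcDriver
    # to flush out Oracle-specific issues, point the suite at a real instance instead:
    # jdbc.url=jdbc:oracle:thin:@dbhost:1521:XE
    # jdbc.driver=oracle.jdbc.OracleDriver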
Depending on what kind of data will be in your database, either the API or the load approach may work better or worse. When you have highly structured data with many relations, APIs will make your life easier by making the relations between your data explicit, and it will be harder for you to make a mistake when creating your test data set. As mentioned by other posters, refactoring tools can take care of some of the changes to the structure of your data automatically. Often I find it useful to think of API-generated test data as composing a scenario: a user/system has done steps X, Y, Z, and the tests go on from there. These states can be reached because you can write a program that calls the same API your user would use.
Loading data becomes much more important when you need large volumes of data, when you have few relations within your data, or when there is consistency in the data that cannot be expressed using APIs or standard relational mechanisms. At one job I worked at, my team was writing the reporting application for a large network packet inspection system. The volume of data was quite large for the time. In order to trigger a useful subset of test cases we really needed test data generated by the sniffers; that way, correlations between the information about one protocol would line up with information about another protocol. It was difficult to capture this in the API.
Most databases have tools to import and export delimited text files of tables. But often you only want subsets of them, which makes using data dumps more complicated. At my current job we need to take some dumps of actual data which gets generated by MATLAB programs and stored in the database. We have a tool which can dump a subset of the database data and then compare it with the "ground truth" for testing. It seems our extraction tools are constantly being modified.
I've used DBUnit to take snapshots of records in a database and store them in XML format. Then my unit tests (we called them integration tests when they required a database), can wipe and restore from the XML file at the start of each test.
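A hedged sketch of that wipe-and-restore step with DBUnit (the snapshot file name is invented, and jdbcConnection stands for whatever java.sql.Connection the test class already holds); this fragment would typically be called from a setup method:

    import java.io.File;
    import org.dbunit.database.DatabaseConnection;
    import org.dbunit.database.IDatabaseConnection;
    import org.dbunit.dataset.IDataSet;
    import org.dbunit.dataset.xml.FlatXmlDataSetBuilder;
    import org.dbunit.operation.DatabaseOperation;

    private void restoreSnapshot(java.sql.Connection jdbcConnection) throws Exception {
        IDatabaseConnection dbUnitConnection = new DatabaseConnection(jdbcConnection);
        IDataSet snapshot = new FlatXmlDataSetBuilder().build(new File("customers-snapshot.xml"));
        // CLEAN_INSERT deletes the contents of the tables in the data set, then re-inserts the snapshot rows
        DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, snapshot);
    }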
I'm undecided whether this is worth the effort. One problem is dependencies on other tables. We left static reference tables alone, and built some tools to detect and extract all child tables along with the requested records. I read someone's recommendation to disable all foreign keys in your integration test database. That would make it way easier to prepare the data, but you're no longer checking for any referential integrity problems in your tests.
Another problem is database schema changes. We wrote some tools that would add default values for columns that had been added since the snapshots were taken.
Obviously these tests were way slower than pure unit tests.
When you're trying to test some legacy code where it's very difficult to write unit tests for individual classes, this approach may be worth the effort.
I do both, depending on what I need to test:
I import static test data from SQL scripts or DB dumps. This data is used in object load (deserialization or object mapping) and in SQL query tests (when I want to know whether the code will return the correct result).
Plus, I usually have some backbone data (config, value to name lookup tables, etc). These are also loaded in this step. Note that this loading is a single test (along with creating the DB from scratch).
When I have code which modifies the DB (object -> DB), I usually run it against a live DB (in memory or a test instance somewhere). This is to ensure that the code works, not to create any large number of rows. After the test, I roll back the transaction (following the rule that tests must not modify the global state).
Of course, there are exceptions to the rule:
I also create large numbers of rows in performance tests.
Sometimes, I have to commit the result of a unit test (otherwise, the test would grow too big).
I generally use SQL scripts to fill the data in the scenario you discuss.
It's straightforward and very easily repeatable.
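A hedged sketch of such a script, with invented tables, that stays repeatable by clearing the data before inserting it:

    -- test-data.sql
    DELETE FROM orders;
    DELETE FROM customers;
    INSERT INTO customers (id, name, email) VALUES (1, 'Alice', 'alice@example.com');
    INSERT INTO orders (id, customer_id, total) VALUES (10, 1, 250.00);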
This will probably not answer all your questions, if any, but I made the decision in one project to do unit testing against the DB. I felt in my case that the DB structure needed testing too, i.e. did my DB design deliver what is needed for the application. Later in the project when I feel the DB structure is stable, I will probably move away from this.
To generate data I decided to create an external application that filled the DB with "random" data; I created person-name and company-name generators, etc.
The reason for doing this in an external program was:
1. I could rerun the tests on data already modified by earlier test runs, i.e. making sure my tests were able to run several times and that the data modifications made by the tests were valid.
2. I could if needed, clean the DB and get a fresh start.
I agree that there are points of failure in this approach, but in my case, since for example person generation was part of the business logic, generating data for the tests was actually testing that part too.
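For what it's worth, a tiny sketch of the kind of generator meant here (the names and structure are invented):

    import java.util.Random;

    // Small external helper that produces "random" but realistic-looking person names
    public class PersonNameGenerator {
        private static final String[] FIRST = { "Alice", "Bob", "Carol", "Dan" };
        private static final String[] LAST = { "Svensson", "Nguyen", "Smith", "Garcia" };
        private final Random random = new Random();

        public String next() {
            return FIRST[random.nextInt(FIRST.length)] + " " + LAST[random.nextInt(LAST.length)];
        }
    }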
Our team confronted the same question recently.
Previously, we were using SpecFlow for integration testing. With SpecFlow, QA can write each test case and populate the necessary test data into the DB inside it.
Now QA want to use Postman to test the API, so how can they populate the data? One solution is creating APIs for populating it. Another is syncing historical data from prod to the test environment.
I will update my answer once we try the different solutions and decide which one to go with.