I'm not very familiar with the concept of testing in programming languages, although I know the basic idea and some of the principles, such as unit tests. I haven't written any tests myself yet, but the general idea is more or less clear. When it comes to Robotic Process Automation, however, I get stuck on how to properly test my workflows.
If I have modules which don't interact with any interface, then I can clearly create a test environment: wrap the logic in a function, pass it some arguments, and compare the result with the expected one.
But what are the best practices for testing the parts of a workflow which interact with the interface and involve clicks, type-into actions, and all those things?
If anyone has experience creating automated tests in RPA, for instance in UiPath, I would be grateful to see it explained. Any ideas, regardless of how much hands-on experience they come from, would be highly appreciated.
By the way, anyone who has worked with UiPath may have noticed that they developed the so-called ReFramework, which, in their words, follows best practices in RPA deployment. This framework includes a test folder and some test modules, but I don't understand how they work or how I should adjust them to match a program I developed myself.
Thanks for the question.
I am an RPA developer, and I have tested workflows as well, though not from a dedicated tester's perspective.
If you look closely, there are many things to test.
Case #1
As you said, you are dealing with a web portal, so you will probably use the Click activity. It has a property called Selector, which is auto-generated and identifies the UI element. A selector can contain many attributes whose values are static, and relying on those is bad practice.
Let's take an example. The selector markup in the original post was stripped by the page, so this is a plausible reconstruction:

    <webctrl idx='2' uipath_custom_id='118' name='Submit' class='submitButton' />

Here the idx and uipath_custom_id attributes are static values that may change from run to run, but the name Submit and the class never change. So as a tester, this is exactly the kind of mistake you can catch in a developer's workflow.
Keep in mind: never rely on static values or numbers in any selector attribute. Instead, use the wildcards * and ?:
https://studio.uipath.com/v2017.1/docs/selectors-with-wildcards
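For instance, the selector above could be made robust by keeping only the stable attributes and wildcarding the volatile ones (again a hypothetical sketch in the same selector format):

    <webctrl idx='*' name='Submit' class='submitButton' />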
It can also happen that two buttons on a web page have the same name and the same class, so the generated selectors are nearly identical except for the ID; you need to take care of this case as well, since IDs always change.
Always keep your workflows small, use the proper activities, and keep business logic in separate Sequence activities; such things you can test. You can also test how well the flow is optimized.
If you are dealing with another application such as Excel or SAP, check that you close it after your work is done.
Thanks
It would help if you described your scenario, so the community can suggest concrete test cases. :)
I have a web application which I need to test continuously and rigorously, and I want to automate the testing process in JIRA. I use a JIRA Cloud subscription. How do I implement the requirements below in JIRA?
1 - Writing Use Cases (user stories) and having them saved as items in JIRA so that I can easily find, search, and filter them (the same as I can do with Issues, for example).
2 - Creating Test Cases by recording them while I run the test manually the first time (the same as recording a macro in Excel), and then being able to re-run the Test Cases anytime I want, recording the output of each run.
Each Test Case created should be linked to its parent Use Case.
Each Use Case can have many Test Cases linked to it.
A Test Case could be associated with multiple Use Cases.
3 - Running all recorded Test Cases in batches, capturing the output for each run of each Test Case, and then manually judging whether the Test Case succeeded or failed for that run.
Kindly advise.
Some, but not all, of what you describe is possible with out-of-the-box JIRA.
It is possible to create a custom issue type of 'Test Case'. You can give this issue type all the fields that are appropriate for a test case.
Having the custom issue type makes it easier to do searches, e.g. finding all open issues of type 'Test Case' on a project with a JQL query such as: project = MYPROJECT AND issuetype = "Test Case" AND status = Open.
JIRA allows you to have many-to-many relationships using issue links. Unfortunately, searching on issue links is a pain unless you have a plugin like Script Runner, which gives you JQL functions like hasLinks, linkedIssuesOf and epicsOf.
If you want to do more sophisticated linking of actual tests with JIRA then it would be worth considering some of the test plugins, such as Zephyr. This plugin allows you to create and execute tests from within JIRA.
Another thing that is worth considering is JIRA integration with source control systems. For example, JIRA has good integration with GitHub. It would be possible to store your test cases under source control and then link them to JIRA issues as a part of the commit process.
New tickets in JIRA can also be created using a REST API call; below are a few links covering issue-creation calls (including sub-tasks) with examples. Hope this helps!
https://developer.atlassian.com/jiradev/jira-apis/jira-rest-apis/jira-rest-api-tutorials/jira-rest-api-example-create-issue
https://docs.atlassian.com/jira/REST/cloud/
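As a minimal sketch of such a call (the instance URL, credentials, and project key are placeholders; the payload follows the issue-creation format from the tutorial above), using Java 11's built-in HTTP client:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class CreateJiraIssue {
        public static void main(String[] args) throws Exception {
            // JIRA Cloud uses basic auth with an account email and API token
            String auth = Base64.getEncoder()
                    .encodeToString("user@example.com:API_TOKEN".getBytes());

            // Minimal payload: project key, summary, and the custom issue type
            String json = "{\"fields\": {"
                    + "\"project\": {\"key\": \"MYPROJECT\"},"
                    + "\"summary\": \"Test Case: login with valid credentials\","
                    + "\"issuetype\": {\"name\": \"Test Case\"}}}";

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://your-domain.atlassian.net/rest/api/2/issue"))
                    .header("Authorization", "Basic " + auth)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }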
I have a web application in Vaadin. It has some forms, search fields, buttons, etc., and it is backed by a SQL database. I have been using Selenium, Sahi Open Source and some other tools for automated GUI testing.
Problem is: recording GUI actions for automated testing isn't really useful, because it seems like more manual work than automation; I need to record the tests manually anyway.
Question is: Is there any better way to test a Web Application? How do you test your web application? Is there any free tool which automatically detects bugs in my web application?
This won't be possible unless someone invents sentient AI, and even that might not be enough. In our company we have a separate QA department (they're intelligent human beings), and they keep asking questions like "how are we supposed to test this flow" and "is this the expected response".
Without tests that are aware of the business flow, you're limited to what bots do: randomly crawl through the site and try to trigger a "500 page", and that is not enough. If you're tired of writing tests (see the sketch after this list for what a flow-aware test looks like), you can:
Use a static code analysis tool (like JSHint) to check whether your code is written in a "best practices" way
Use your users as testers (have a streamlined release process and an error-reporting mechanism, so you can address production bugs as quickly as possible)
Hire someone to write the tests for you
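For contrast, a business-flow-aware test has the expected behavior encoded by a human. A minimal sketch in Java with Selenium and JUnit 5 (the URL and element IDs are hypothetical):

    import org.junit.jupiter.api.*;
    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.chrome.ChromeDriver;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class SearchFlowTest {
        private WebDriver driver;

        @BeforeEach
        void setUp() { driver = new ChromeDriver(); }

        @AfterEach
        void tearDown() { driver.quit(); }

        @Test
        void searchAndOpenFirstResult() {
            // The test encodes the business flow: search, open a result, verify the detail page
            driver.get("https://example.com");
            driver.findElement(By.id("search")).sendKeys("blue widget");
            driver.findElement(By.id("searchButton")).click();
            driver.findElement(By.cssSelector(".result a")).click();
            assertTrue(driver.getTitle().contains("blue widget"),
                    "Detail page should show the searched product");
        }
    }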
There are lots of ways to test software. None of them are fully automatic: they all require comparing actual behavior against expected behavior, and expected behavior cannot be inferred automatically by a machine. It must be prescribed and defined by humans, then translated into a form a machine can use to decide whether the actual behavior matches.
Here is a starting point to start reading about other ways to test your software:
https://en.wikipedia.org/wiki/Software_testing
My question is a very basic one, but I want to know exactly what happens behind the scenes and how. Let's say I am given a problem to code. A user submits code in some language (I'd like to focus on C or C++ here); the code then gets tested against various test files on the server side. How does this happen? As far as I understand from searching, there must be code on the server side that accepts the user's solution as a file, runs that file against various test files (which contain test cases matching the input and output specified in the problem description), and matches the output. Is that it? Or is there something I am mistaken about?
Suppose I have a very simple program that adds two numbers and I want to test a user's code for it: what exactly do I have to do? I am asking from the implementation point of view, i.e. I want to actually build and run the same thing on my machine, much the same way online judges do.
PS: I am not asking this in order to host a contest or anything; I'm just curious and want to learn.
I would divide this into two sub-goals:
learning automated testing
setting up an application which accepts the user's submissions, runs the automated tests against them, and reports feedback
You could start to get deeper insight by setting up automated tests for some program in your favourite programming language.
Use a search engine to e.g. look for "automated c++ testing".
Once you have managed to set up a few automated tests on a local machine, you can then progress to the second goal.
For example, you could set up a Jenkins instance and learn how to add automated tests to it.
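To make the run-and-compare step from the question concrete, here is a minimal sketch in Java; the ./solution binary, the test file names, and the 2-second limit are all assumptions:

    import java.io.File;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.concurrent.TimeUnit;

    public class SimpleJudge {
        public static void main(String[] args) throws Exception {
            // Run the submitted (pre-compiled) program with the test input on stdin
            ProcessBuilder pb = new ProcessBuilder("./solution");
            pb.redirectInput(new File("test1.in"));
            pb.redirectOutput(new File("test1.actual"));
            Process p = pb.start();

            // Enforce a time limit so an infinite loop cannot hang the judge
            if (!p.waitFor(2, TimeUnit.SECONDS)) {
                p.destroyForcibly();
                System.out.println("Time Limit Exceeded");
                return;
            }

            // Compare the actual output against the expected output file
            String actual = Files.readString(Path.of("test1.actual")).strip();
            String expected = Files.readString(Path.of("test1.out")).strip();
            System.out.println(actual.equals(expected) ? "Accepted" : "Wrong Answer");
        }
    }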
In another question I asked whether mutation testing is useful in practice. As I didn't get any answers that satisfied me, I want to check the current tools myself. So I need an overview of the currently existing mutation-testing frameworks. Which of them are the most usable, and why?
I program in Java, so I would prefer Java tools, but I would risk a look at interesting frameworks for other languages.
I want to integrate this into an automated build process, so I would prefer tools that can be executed from the command line.
There is also PIT, which can be hooked into your build via a Maven plugin or a command-line interface.
It provides much nicer reports than the other available tools, with combined mutation and line coverage. It also runs considerably faster than the source-based tools for Java such as Jester, and about twice as fast as Jumble.
Unlike Jumble and Javalanche, it also works with all the major mocking frameworks (Mockito, JMock, EasyMock, PowerMock and JMockit).
(Disclosure: I'm the author.)
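To illustrate what these tools actually check, here is a hand-written Java sketch (the Discount class is hypothetical; PIT's conditionals-boundary mutator really does turn > into >=):

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class Discount {
        // A mutation tool would, e.g., mutate ">" into ">=" and rerun the tests
        static double apply(double price, int quantity) {
            if (quantity > 10) {
                return price * 0.9;
            }
            return price;
        }
    }

    class DiscountTest {
        @Test
        void boundaryKillsTheConditionalMutant() {
            // Exactly 10 gets no discount; 11 does. Testing the boundary
            // "kills" the > to >= mutant, i.e. makes the mutated code fail.
            assertEquals(100.0, Discount.apply(100.0, 10));
            assertEquals(90.0, Discount.apply(100.0, 11));
        }
    }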
I know it's an old thread, but it's still an answer to the question. I'm working with some friends on an open-source .NET mutation testing framework called NinjaTurtles, which you can find on CodePlex and on NuGet. The main project website is here.
I only know of two frameworks, but they're both for Java :)
Jester
Jumble
I haven't used either of them, I'm afraid.
CREAM is a tool for C#/.NET
http://galera.ii.pw.edu.pl/~adr/CREAM/index.php
For Ruby there is Heckle, and a newcomer called Boo_hiss.
For the .NET community there is NesTer, but it has some serious limitations, e.g. it only supports C# and NUnit.
It does not appear to be actively maintained either, but it might be a starting point.
I took a look at Jester (the actual source code), and it seems to me that it does not support very many mutations. There is a file in there where these mutations are specified. I might be wrong about that, but what I definitely did not like was the mix between launching the tool from the command line and the little GUI feedback window. Why not give feedback on the command line, like JUnit does when run outside an IDE?
Jumble is another matter :). It has a simple command-line interface and comes with an Eclipse plugin too. The feedback is all text in the console. I am happy with this tool, and I plan to write an Ant target to add it to my project's continuous integration.
I am also looking at Javalanche, but have not tried it yet.
I'll have news in a few weeks.
This might also be of some interest, from Microsoft Research: https://pex4fun.com/
You can try µJava. I haven't used it, but it looks like mutation testing might be an interesting way to evaluate test suites.
MμClipse only supports JUnit 3 and is no longer maintained.
Jester, for its part, is laborious and requires complicated configuration; plus, it is not maintained anymore.
The best tool I could find is Javalanche.
I wrote an entire article about this!
Jester does provide a file defining the mutations, and they are limited. To some degree, you can add your own mutations to that file.
I've experimented with Jumble and Jester and I found that Jumble provides more mutations and better documentation. Additionally, I've had quick responses from the project owners when I've emailed them. One drawback to Jumble is that it operates on the bytecode using BCEL. That presents something of a learning curve for many developers.
My company, State Farm, wrote an Ant task that we may contribute back to the Jumble project. Based on what I've read in their mailing lists, others are working on an Ant task for Jumble too.
I'm looking at Javalanche as well. I’ll be glad to share what I know when I’m done.
I am working on testing my GUI, and I am not entirely sure of the best approach. My GUI is built using a traditional MVC framework, so I can easily test the logic parts without bringing up the GUI itself. However, when it comes to testing the GUI's functionality, I am not sure whether I should test GUI components individually or mainly focus on functional testing of the whole system. It is a pretty complex system in which testing the GUI frequently involves sending a message to the server and then observing the response on the GUI. My initial thought is that functional testing is the way to go here, since I need the whole system running to really test the UI. Comments on this would be appreciated.
Other GUI-testing tools I can offer are:
Thoughtworks White,
PyWinAuto,
AutoIt,
AutoHotKey.
One thing to keep in mind when trying to automate GUIs is that the only way you can do that is to build the GUI with automation in mind. Push back early in the project on devs who think their GUIs should not support testability, and happily expose all the hooks that can help automation as your testing needs require.
You have (at least) two issues: the complexity of the environment (the server) and the complexity of the GUI.
There are many tools for automating GUI testing. All of them are more or less fragile and require almost constant maintenance in the face of a changing layout. There is benefit to be gained from using them, but it's a long-term benefit.
The environment, on the other hand, is an area that can be tamed. If your application is architected using the Dependency Injection/Inversion technique (where you 'inject' the server component into the application), then you can use a mock of the relevant server interfaces to script test cases.
Combining these two techniques will allow you to automate GUI testing.
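A minimal sketch of the mocking half in Java, with Mockito and JUnit 5 (the ServerGateway interface and OrderPresenter are hypothetical stand-ins for your own server interface and GUI logic):

    import static org.mockito.Mockito.*;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    class OrderPresenterTest {
        // Hypothetical interface the GUI uses to talk to the server
        interface ServerGateway {
            String fetchStatus(int orderId);
        }

        // Hypothetical presenter: GUI logic with the server injected
        static class OrderPresenter {
            private final ServerGateway server;
            OrderPresenter(ServerGateway server) { this.server = server; }
            String statusLabel(int orderId) {
                return "Status: " + server.fetchStatus(orderId);
            }
        }

        @Test
        void displaysStatusFromServer() {
            // Inject a mock server, so no real environment is needed
            ServerGateway server = mock(ServerGateway.class);
            when(server.fetchStatus(42)).thenReturn("SHIPPED");

            OrderPresenter presenter = new OrderPresenter(server);
            assertEquals("Status: SHIPPED", presenter.statusLabel(42));
            verify(server).fetchStatus(42); // the correct interaction occurred
        }
    }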
Depending on where in the spectrum of MVC (an overused term) you sit, testing the view could range from a mechanical process of ensuring that the correct model methods are called in response to the correct inputs, to testing some client-side validation, to who knows what.
A lot of the patterns that have evolved out of MVC (I'm thinking of Passive View and Supervising Controller) strive to make the view require very little testing, because it's really just wiring user inputs to the presenter or model (depending on the exact variant of the pattern you're using).
"testing the GUI frequently involves sending a message to the server and then observing the response on the GUI" This statement worries me.
I'm immediately thinking that the GUI should be tested using a mock or stub of the server to test that the correct interactions are occurring and the GUI responds appropriately.
If you need automated functional tests of the server, I don't see the need to have the GUI involved in those.
Mercury QuickTest Pro, Borland SilkTest, and Ranorex Recorder are some GUI testing tools.
If your application is web-based you can write tests using tools like WatiN or Selenium.
If your application is Windows .NET based, you could try White.
My advice: forget traditional GUI testing. It's too expensive. Coding the tests takes a lot of time, the tools aren't really stable so you will get unreliable test results, and the coupling between the code and the tests is so strong that you'll spend a lot of time on maintenance.
The new trend is to skip the GUI tests. See the Model-View-Presenter pattern from Fowler as a guideline.
The clearest way I can say this is:
Don't waste your time writing automated GUI tests.
Especially when you're working with an MVC app. In your case, when you send a message to the server, you can make sure the right message number comes back and be done. You can add some additional cases, or another test entirely, to make sure the GUI converts the message IDs into the right strings, but you only need to run that test once.
We do incorporate GUI testing in our project, and it has its side effects. The developers, however, follow one critical design principle: keep the GUI layer as thin as possible!
That means no logic in the GUI classes. Separate it out into presentation models responsible for input validation and the like.
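A minimal sketch of testing such a presentation model in Java (the LoginModel class and its validation rule are hypothetical):

    import static org.junit.jupiter.api.Assertions.*;
    import org.junit.jupiter.api.Test;

    // Hypothetical presentation model: validation lives here, not in GUI widgets
    class LoginModel {
        boolean isValid(String user, String password) {
            return user != null && !user.isBlank()
                    && password != null && password.length() >= 8;
        }
    }

    class LoginModelTest {
        @Test
        void rejectsShortPasswords() {
            // Plain JUnit; no GUI toolkit or display needed
            LoginModel model = new LoginModel();
            assertFalse(model.isValid("alice", "short"));
            assertTrue(model.isValid("alice", "longenough"));
        }
    }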
For testing on a Unix machine we use the Xvfb server as the DISPLAY when running the tests.
Try the hallway usability test. It's cheap and useful: go to the nearest hallway, grab the first person that passes, make them sit at your computer and use your software. Watch over their shoulder, you will see what they try to do, what frustrates them, and so on. Do this a few times and notice the patterns.
What you're looking for is "acceptance testing." How you do it depends on the frameworks you're using, what type of application you are creating and in what language. If you google your particular technology and the above phrase, you should find some tools you can use.
I've found WinTask to be a very good way to do GUI testing. Provided you don't constantly change the way the OS refers to each element of the UI, WinTask addresses the UI elements by name, so even if the layout changes, the UI elements can still be pressed / tweaked / selected.
Don't miss the 'U' in 'GUI'
I mean: if what you're trying to test is that everything works right and as it was planned to work, then you may follow Seb Rose's answer.
But please don't forget that a USER interface has to be made with USERS in mind, and not just ANY user but the TARGET USERS the application was made for. So, once you are sure everything works the way it has to, put every single view/screen/form through a test with a team of users representing every group that may use your application: advanced users, administrators, MS Office users, low-profile computer users, high-profile computer users... Then gather every user's critiques, combine them, retouch your GUI if necessary, and go back to user testing again.
For SIMPLE web-based GUI testing, try iMacros (a simple Firefox plug-in; it has a cool feature to send the entire test to another person).
Note that SIMPLE is spelled in capitals ...