I have recently been getting back into Python and I am a bit rusty. I have started working with the testing framework nose. I am trying to find out what functions are available for use with this framework.
For example, when I used RSpec in Ruby, if I wanted to find out what "options" were available when writing a test case, I would simply go to the link below and browse the docs until I found what I needed:
https://www.relishapp.com/rspec/rspec-expectations/docs/built-in-matchers/comparison-matchers
Now when I try to do the same for nose, Google keeps sending me to:
https://nose.readthedocs.io/en/latest/writing_tests.html#test-functions
Although the documentation is informative, it's not really what I'm looking for.
Is there a Python command I can use to discover the available testing options, or another place where good, up-to-date documentation is kept?
All the assertions nose/unittest provide should be documented:
https://docs.python.org/2.7/library/unittest.html
In addition to the docs, the code will always tell the truth. You could check out the library's source code, or drop into a debugger inside your test method:
import pdb; pdb.set_trace()
And then inspect the test object for available assertions:
dir(self)
help(unittest.skip)
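For example, the assertion helpers can be listed without even entering the debugger; a quick sketch using only the standard unittest module:

```python
import unittest

# Every assert* helper defined on TestCase is available as
# self.assertSomething(...) inside a test method.
assertions = sorted(name for name in dir(unittest.TestCase)
                    if name.startswith("assert"))
print(assertions)
```

Running this prints the full menu (assertEqual, assertIn, assertRaises, and so on), which is effectively the "matchers list" you were browsing for in the RSpec docs.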
Related
I want to use image-based verification in one of my tests. I have read the froglogic How-To page and tried to use the functions explained there (waitForImage, findImage), but when I execute my test I get the following error:
NameError: global name 'waitForImage' is not defined
I tried to follow the froglogic tutorial on YouTube to insert the image verification manually, but my Squish GUI is different from the version used in the video.
The page explains that you need to install the Tesseract for Squish package in order to find text, but it does not specify whether you need it for image recognition as well. I am contacting the froglogic team and my IT team, but it might take a few days to get a response.
I am using Squish for Java 6.2
I am wondering if that is the reason for my problem, or whether I am skipping some other configuration step.
The image search feature was added in Squish 6.3. You will need to upgrade to at least Squish 6.3 to use waitForImage and findImage.
How to control output from Twisted-trial tests?
I have looked at different solutions, but I'm quite new to testing, so I can't find a fitting one or can't use it correctly.
In general, I am trying to set up an automated testing system for my project, like BuildBot. But BuildBot doesn't suit me because it reacts only to the "on change sources" hook from Mercurial, and I want to use other hooks too.
On THIS page from BuildBot documentation I found this information:
One advantage of trial is that the Buildbot happens to know how to parse trial output, letting it identify which tests passed and which ones failed. The Buildbot can then provide fine-grained reports about how many tests have failed, when individual tests fail when they had been passing previously, etc.
Does that mean there is no way other than parsing information from the test output?
Other possible solutions?
Besides, I looked through the Twisted documentation and found the IReporter interface.
Is that a solution, and if it is, how can I use it?
If it isn't, are there any other solutions?
P.S. Please note that the tests have already been written, so I can only run them and can't modify the source code.
You can format output from trial arbitrarily by writing a reporter plugin. You found the interface for that plugin already - IReporter.
Once you write such a plugin, you'll be able to use it by adding --reporter=yourplugin to your trial command line arguments.
You can see the list of reporter plugins that are already available on your system using trial --help-reporters. If you have python-subunit installed then you'll see subunit which is a machine-parseable format that might already satisfy your requirements. Unfortunately it's still a subunit v1 reporter and subunit v2 is better in a number of ways. Still, it might suffice.
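If you do end up scraping trial's plain-text output instead, a rough sketch is below. The `testname ... [OK]` / `[FAIL]` line shape is an assumption based on trial's verbose reporter, so verify it against your trial version and prefer a reporter plugin or subunit where you can:

```python
import re

# Matches result lines of the shape trial's verbose reporter prints,
# e.g. "mypackage.test_mod.MyTest.test_one ...        [OK]".
# The exact format is an assumption -- check it against your trial version.
RESULT_RE = re.compile(r"^\s*(\S+)\s*\.\.\.\s*\[(OK|FAIL|ERROR|SKIPPED)\]\s*$")

def parse_trial_output(text):
    """Return a dict mapping test name -> result string."""
    results = {}
    for line in text.splitlines():
        match = RESULT_RE.match(line)
        if match:
            results[match.group(1)] = match.group(2)
    return results

sample = """
mypackage.test_mod.MyTest.test_one ...                        [OK]
mypackage.test_mod.MyTest.test_two ...                      [FAIL]
"""
print(parse_trial_output(sample))
```

This is brittle compared to a real IReporter plugin (a formatting change in trial breaks it), which is exactly why the plugin route is the better answer.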
While coding with Python's unittest module, I have found it useful to mark tests to be skipped on execution (see the unittest.SkipTest exception in Python).
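For reference, the unittest behaviour I mean looks like this (the defect URL in the skip reason is a made-up placeholder):

```python
import unittest

class WiblyTest(unittest.TestCase):
    def test_wibly(self):
        # Marks the test as skipped rather than passed or failed.
        raise unittest.SkipTest("http://my.defect.tracking.software/#4321")

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(WiblyTest).run(result)
print(len(result.skipped))  # → 1: the test lands in result.skipped, not in failures
```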
Is there anything similar in Boost.Test?
I am implementing my tests using Boost version 1.49.0 and I want to add something like:
BOOST_AUTO_TEST_CASE(test_wibly)
{
throw boost::???::skip_test("http://my::defect.tracking.software/#4321");
}
Basically, this should not count the test as passed or failed, but as "skipped", and it should appear so in the output.
If there is nothing like it, where can I find some resources on how to implement it myself (on top of Boost.Test)?
The documentation has a section on skipping tests, but it refers to skipping a test suite if a previous test fails.
As far as I know there is no way to do this with Boost Test.
I have run across the NCBI C++ Toolkit, which has an enhanced version of Boost Test that adds this sort of capability. I haven't had an occasion to try it yet, so I can't vouch for it.
I am unit testing the client side of a GWT + SmartGWT application.
First I tested with GwtTestCase. Too slow for unit testing a huge application. GwtTestSuite doesn't help; it still takes too much time to execute. (What's more, it asked me to start a browser while testing.)
Then gwt-test-utils: a great framework. Sadly, my Javassist version is 3.5 and it needs at least 3.11. Javassist is used by Gilead, so I can't touch it. So, no gwt-test-utils...
I saw Selenium. That's just great. With the HtmlUnit driver, it's fast and useful: the simplest way to test a webapp. The problem here is that SmartGWT generates its own IDs when it generates the web page, so I can't get the TextItems and fill them, since those IDs are constantly changing. I found that this could be solved by using setID() before the initialization of the widget, but that's the ID of the scLocator and not an HTML ID. Selenium doesn't want to work with scLocators.
=> Is there a simple way to make Selenium accept scLocators?
(And when I say simple, it must be simple... I'm not a specialist in Java...)
Could someone help me unit test the application? It's coded in Java, it's huge, and I have to cover ~70% of the code (25k lines of code).
Some more specs :
Maven is used to compile.
I'm not touching at the server side.
Tests must be faster than GwtTestCase and GwtTestSuite :/
I hope my problem is clear. I'm not a native English speaker, so sorry for any mistakes I may make :x
We provide Selenium extensions and a user guide for them in the SDK, under the "selenium" directory at top level.
If you download 3.1d (from smartclient.com/builds) there's even more documentation including some samples for JUnit.
Do not use ensureDebugId() (it won't have any effect at all). Never try to work with DOM IDs (it won't work).
Best practices information in the Selenium user guide explains when you should use setID().
I found that it could be solved by using setID() before the initialization of the widget. But that's the ID of the scLocator and not an HTML ID.
Why don't you try:
widget.ensureDebugId("some-id");
From the Java docs for ensureDebugId():
Ensure that the main Element for this UIObject has an ID property set, which allows it to integrate with third-party libraries and test tools.
<defaultUserExtensionsEnabled>true</defaultUserExtensionsEnabled>
<userExtensions>[path to user-extensions.js]</userExtensions>
There we go. I managed to make it work. (With the selenium-maven-plugin, in the <configuration> tag.)
Thanks for your help though.
Is there an automated tool to generate reports containing information about unit tests when using Sikuli? The data I want would be things such as pass/fail status, a trace to where/why a test failed, and a log of events.
Ended up using the HTMLTestRunner tool; it was far easier than anything else I found and met the criteria I needed. (There is also an XML version, XMLTestRunner.)
HTMLTestRunner is an extension to the Python standard library's unittest module. It generates easy to use HTML test reports.
http://tungwaiyip.info/software/HTMLTestRunner.html
Another useful tool I found was Robot Framework, which can be integrated with Sikuli, but it is more complicated and requires a lot of research and reading of documentation.
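HTMLTestRunner plugs into the standard unittest runner slot, so wiring it up is just swapping the runner. A minimal sketch, with the stdlib TextTestRunner standing in for it here so the snippet is self-contained (the HTMLTestRunner constructor arguments shown in the comment are taken from its docs, so verify them against your copy):

```python
import io
import unittest

class ScreenTests(unittest.TestCase):
    """Placeholder suite; your Sikuli-driven tests go here."""
    def test_pass(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(ScreenTests)

# With HTMLTestRunner on your path you would instead do something like:
#   runner = HTMLTestRunner.HTMLTestRunner(stream=open("report.html", "wb"),
#                                          title="Sikuli tests")
# (arguments assumed from its docs -- check your version).
stream = io.StringIO()
runner = unittest.TextTestRunner(stream=stream, verbosity=2)
result = runner.run(suite)
print(result.testsRun, result.wasSuccessful())
```

The pass/fail status, failure tracebacks, and per-test log all come from the unittest result object, which is what HTMLTestRunner renders into its HTML report.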