Import (migrate) tests from TestLink to Zephyr - testlink

What is the best way to import tests (test cases, suites) from TestLink to Zephyr?

I found this answer, which seems to address your question:
https://answers.atlassian.com/questions/184845/importing-test-cases-to-zephyr-add-on-from-test-link
You need an importer plugin, which can be found here:
http://community.yourzephyr.com/viewtopic.php?f=15&t=1048

You can download the Zephyr Testcase Importer to import Excel or XML files into Zephyr. As far as I know, this importer utility performs a 'flat' import of all test cases, so you will lose all information about test plans, test suites and test execution history.

Related

Nosetests and finding out all possible library functions

I have recently been getting back into Python and I am a bit rusty. I have started working with the testing framework nose, and I am trying to find out what functions are available for use with this framework.
For example when I used rspec in ruby if I wanted to find out what "options" I have available when writing a test case I would simply go to the link below and browse the doco until I found what I needed:
https://www.relishapp.com/rspec/rspec-expectations/docs/built-in-matchers/comparison-matchers
Now when I try and do the same for nose, Google keeps sending me to:
https://nose.readthedocs.io/en/latest/writing_tests.html#test-functions
Although the doco is informative, it's not really what I'm looking for.
Is there a python command I can use to find possible testing options or another place where good up to date documentation is stored?
All the assertions nose/unittest provide should be documented here:
https://docs.python.org/2.7/library/unittest.html
In addition to the docs, the code will always tell the truth. You could check out the library source code, or drop into a debugger inside your test method:
import pdb; pdb.set_trace()
Then inspect the test method for the available assertions:
dir(self)
help(unittest.skip)
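Since nose tests can subclass unittest.TestCase, another quick way to see every assertion available is to introspect the class itself, without entering a debugger at all. A minimal sketch:

```python
import unittest

# List every assert* helper that unittest.TestCase (and therefore any
# nose test subclassing it) makes available.
assertions = sorted(
    name for name in dir(unittest.TestCase) if name.startswith("assert")
)
print(assertions)
```

From there, help(unittest.TestCase.assertEqual) shows the docstring for any individual helper.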

Django unit tests failing when run with other test cases

I'm getting inconsistent behavior with Django unit tests. On my development machine, using SQLite, the tests pass if I run the tests for my two apps separately, but if I run manage.py test to test everything at once, I consistently get failures on two tests.
On my staging server which uses Postgres, I have a particular test that works when testing it individually (e.g. manage.py test MyApp.tests.MyTestCase.testSomething), but fails when running the entire test case (e.g. manage.py test MyApp.tests.TestCase).
Other related StackOverflow questions seem to have two solutions:
Use Django's TestCase instead of the Python equivalent
Use TransactionTestCase to make sure the database is cleaned up properly after every test
I've tried both to no avail. Out of frustration, I also tried using django-nose instead, but I was seeing the same errors. I'm on Django 1.6.
I just spent all day debugging a similar problem. In my case, the issue was as follows.
In one of my view functions I was using the Django send_mail() function. In my test, rather than having it send me an email every time I ran my tests, I patched send_mail in my test method:
from mock import patch
...
def test_stuff(self):
    ...
    with patch('django.core.mail.send_mail') as mocked_send_mail:
        ...
That way, after my view function is called, I can test that send_mail was called with:
self.assertTrue(mocked_send_mail.called)
This worked fine when running the test on its own, but failed when it was run with other tests in the suite. The reason it fails is that when it runs as part of the suite, other views are called beforehand, causing the views.py file to be loaded and send_mail to be imported before I get the chance to patch it. So when send_mail is called in my view, it is the actual send_mail that runs, not my patched version. When I run the test alone, the function gets mocked before it is imported, so the patched version ends up being imported when views.py is loaded. This situation is described in the "where to patch" section of the mock documentation, which I had read a few times before, but now understand quite well after learning the hard way...
The solution was simple: instead of patching django.core.mail.send_mail I just patched the version that had already been imported in my views.py - myapp.views.send_mail. In other words:
with patch('myapp.views.send_mail') as mocked_send_mail:
...
This took me a long time to debug, so I thought I would share my solution. I hope it works for you too. You may not be using mocks, in which case this probably won't help you, but I hope it will help someone.
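The "where to patch" behaviour can be reproduced without Django at all. The sketch below fakes a views module that has already imported its mail function; all the names here (demo_views, notify, send_mail's signature) are illustrative, not from the answer's actual project:

```python
import sys
from types import ModuleType
from unittest.mock import patch

# Build a stand-in for views.py that has already imported send_mail.
views = ModuleType("demo_views")
sys.modules["demo_views"] = views

def original_send_mail(subject):
    return "sent: " + subject

views.send_mail = original_send_mail          # like: from ... import send_mail
views.notify = lambda: views.send_mail("hi")  # the "view" under test

# Patch the name as bound in the views module; patching the original
# definition site would leave views.send_mail pointing at the real one.
with patch("demo_views.send_mail", return_value="mocked") as mocked_send_mail:
    outcome = views.notify()

print(outcome)                  # the mock ran, not original_send_mail
print(mocked_send_mail.called)  # and it recorded the call
```

Exiting the with block restores the original function, so other tests still see the real send_mail.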
Besides using TestCase for all your tests, you need to make sure you start and stop any patching that was done in your setup methods:
def setUp(self):
    self.patcher = patch('my.app.module')
    self.mock_module = self.patcher.start()
def tearDown(self):
    self.patcher.stop()
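A self-contained version of that start/stop pattern, using json.dumps as a stand-in for whatever function your app actually patches (a sketch, not the poster's real code):

```python
import json
import unittest
from unittest.mock import patch

class PatchedTest(unittest.TestCase):
    def setUp(self):
        # start() activates the patch; keep the patcher so tearDown can undo it
        self.patcher = patch("json.dumps", return_value="MOCKED")
        self.mock_dumps = self.patcher.start()

    def tearDown(self):
        # Without stop(), the mock would leak into every later test.
        self.patcher.stop()

    def test_sees_the_mock(self):
        self.assertEqual(json.dumps({}), "MOCKED")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PatchedTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
print(json.dumps({"a": 1}))  # the real function is back after the run
```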
I had the same thing happen today with a series of tests. I had 23 regular django.test.TestCase tests and then one django.contrib.staticfiles.testing.StaticLiveServerTestCase test. That final test would always fail when run with the rest of them, but pass on its own.
Solution
For the 23 regular TestCase tests I had actually implemented a subclass of the regular TestCase, so I could provide the tests with some common functionality specific to my application. In the tearDown method I had failed to call the super method. Once I called the super method in tearDown, it worked. So the lesson here is to make sure you are cleaning up properly in your tearDown methods.
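That fix can be sketched with plain unittest; AppTestCase below stands in for the custom base class (the class names and fixture attribute are hypothetical):

```python
import unittest

class AppTestCase(unittest.TestCase):
    """Shared base class adding app-specific helpers to every test."""

    def setUp(self):
        super().setUp()
        self.fixtures = ["some", "shared", "state"]

    def tearDown(self):
        self.fixtures.clear()
        # The crucial line: without it, the parent class's own cleanup
        # (per-test database rollback, in Django's TestCase) never runs.
        super().tearDown()

class DemoTest(AppTestCase):
    def test_fixtures_present(self):
        self.assertEqual(len(self.fixtures), 3)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DemoTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```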

parsing django test results

I recently wrote some tests for one of my Django projects. What I now want to do is call the test command from a script.
I am looking to parse the test results and save them. Is that at all possible with the Django testing framework?
The easiest way would be to use a standard test output format, such as JUnit XML, for which there are already libraries. Right now, I'm using django-jenkins, which provides a nice output that I can view in our CI tool.
If you'd like to roll your own solution, I'd recommend writing your own test runner and customizing the suite_result method.

Exporting sikuli unit test data as a report

Is there an automated tool to generate reports containing information about unit tests when using sikuli? The data I want would be things such as pass/fail, a trace to where/why it failed, and a log of events.
I ended up using the HTMLTestRunner tool; it was far easier than anything else I found and met the criteria I needed. (There is also an XML version, XMLTestRunner.)
HTMLTestRunner is an extension to the Python standard library's unittest module. It generates easy-to-use HTML test reports.
http://tungwaiyip.info/software/HTMLTestRunner.html
Another useful tool I found was Robot Framework, which can be integrated with Sikuli, but it is more complicated and requires a lot of research and reading of documentation.

In Django (on Google App Engine), should I call main.py when running Unit Tests?

I have a Django application on the Google App Engine, and I would like to start writing unit tests. But I am not sure how to set-up my tests.
When I run my tests, I get the following error:
EnvironmentError: Environment variable DJANGO_SETTINGS_MODULE is undefined.
ERROR: Module: tests could not be imported.
This seems pretty straightforward - my Django settings have not been initialized. Setup of the Django environment on Google App Engine happens in main.py (specified in app.yaml), but this obviously does not get called for unit tests. Should my unit tests start by calling main() in main.py? I am not sure.
You should probably just export the environment variable at the main entry point into your tests. Depending on your setup, you may be able to do that by importing your main.py file, but it's probably just as easy to add the os.environ['DJANGO_SETTINGS_MODULE'] line to the file you use to run your tests.
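A minimal sketch of that approach, assuming a top-level settings module named "settings" (substitute your project's actual settings path):

```python
import os

# Must run before anything imports Django; otherwise django.conf raises
# the "DJANGO_SETTINGS_MODULE is undefined" error from the question.
os.environ["DJANGO_SETTINGS_MODULE"] = "settings"

print(os.environ["DJANGO_SETTINGS_MODULE"])
```

Putting this at the very top of the test entry-point file guarantees it runs before any Django import.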
This might be a little hard depending on how you've got your tests set up. Are you using a test runner like nose or Django's test suite tools?
My ultimate solution to this issue was to add the os.environ['DJANGO_SETTINGS_MODULE'] line immediately before using the only real Django function I use, template.render_to_string().
I kept having issues with it getting unset when I set it in the header of a given .py file, so I realized that just setting it each time would ensure it is always right.
What a frustrating problem. I really wish there was a simple setting somewhere (perhaps in app.yaml) that would pick the Django version, and set this variable right.
Oh well.