I have to create a database in the SetUp event and drop it in the TearDown event. The flow works when I don't use TestCaseSource, but the moment I use TestCaseSource, the execution order of the NUnit test cases changes.
My database is not created (you could say the SetUp event is not called), but I have to use TestCaseSource to pull data from a table that is created by SetUp and dropped in TearDown.
Please suggest how to deal with this type of scenario. I am using VS 2013.
Thanks in advance.
I think what you are saying is that using TestCaseSource results in attempting to pull data from a database that has not been created yet (in a SetUp method).
This is just the way NUnit works; see https://github.com/nunit/nunit/issues/141
Maybe you could have TestCaseSource return the query/queries you want to test (instead of the data), and execute the query in the test (after your SetUp has run)?
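A minimal C# sketch of that approach, using a hypothetical TestDatabase helper for the SetUp/TearDown work; the source yields SQL strings, so nothing touches the database at discovery time:

using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class CustomerTableTests
{
    [SetUp]
    public void CreateDatabase() { TestDatabase.Create(); }   // hypothetical helper

    [TearDown]
    public void DropDatabase() { TestDatabase.Drop(); }       // hypothetical helper

    // Yields queries rather than data, so TestCaseSource never touches the
    // database during test discovery (which runs before SetUp).
    private static IEnumerable<string> Queries()
    {
        yield return "SELECT COUNT(*) FROM Customers";
        yield return "SELECT COUNT(*) FROM Orders";
    }

    [TestCaseSource("Queries")]
    public void QueryReturnsData(string sql)
    {
        // By the time the test body runs, SetUp has created the database.
        var result = TestDatabase.ExecuteScalar(sql);         // hypothetical helper
        Assert.That(result, Is.Not.Null);
    }
}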
I am using the library gozk to interface my application with a production zookeeper server. I'd like to test that the application creates the correct nodes, that they contain the correct content in various cases, and that the DataWatch and NodeWatch are set properly:
i.e., the application performs exactly as it should based on the node and data updates.
Can I have a mock zookeeper server created and destroyed during unit tests only, with the ability to artificially create new nodes and set node contents?
Is there a different way than manually creating a zookeeper server and using it?
A solution already exists for Java.
I would recommend putting the code of yours that calls zookeeper behind an interface.
Then during testing you sub in a 'mockZookeeperConn' object that just returns values as though it were really connecting to the server (but the return values are hardcoded).
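A minimal Go sketch of that idea, assuming your application only needs a couple of zookeeper operations; the package, interface, and type names here are illustrative, not gozk's:

package zkclient

import "errors"

// Conn is the narrow interface the application codes against.
type Conn interface {
    Create(path string, data []byte) error
    Get(path string) ([]byte, error)
}

// mockConn satisfies Conn with an in-memory map instead of a real server.
type mockConn struct {
    nodes map[string][]byte
}

func newMockConn() *mockConn {
    return &mockConn{nodes: make(map[string][]byte)}
}

func (m *mockConn) Create(path string, data []byte) error {
    m.nodes[path] = data
    return nil
}

func (m *mockConn) Get(path string) ([]byte, error) {
    data, ok := m.nodes[path]
    if !ok {
        return nil, errors.New("node does not exist: " + path)
    }
    return data, nil
}

Your tests can then assert on the contents of the mock, while production code receives a real gozk-backed implementation of the same interface.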
@Ben Echols's answer is very good.
Going further, you can try "build constraints".
You can configure different build tags on real-zk and mock-zk code (sketched after the list below).
For example, we configure "product" for real-zk code and "mock" for mock-zk code.
Thus there are two ways to run the unit tests:
go test -tags mock if there isn't a zk env.
go test -tags product if there is an available zk env.
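A minimal sketch of how the two files might be tagged; the file names, the Conn interface, and the constructors are illustrative (on Go toolchains older than 1.17, the equivalent constraint line is // +build product or // +build mock):

// zk_real.go (compiled only with: go test -tags product)

//go:build product

package zkclient

func newConn() Conn { return newRealConn() } // wraps the real gozk connection

// zk_mock.go (compiled only with: go test -tags mock)

//go:build mock

package zkclient

func newConn() Conn { return newMockConn() } // the in-memory mock sketched above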
I have designed a job in Talend. The job fetches data from a database, converts it into JSON, and uploads that JSON to a server. I want to write test cases for my job, the way we write unit tests in Java projects. I have searched a lot for how to write test cases for a Talend job but did not find anything. If anyone knows how to test a Talend job, please suggest.
You can simply create a job which calls your job (either via tRunJob, or tSoap if your job is SOAP-exposed):
Init your database
call your job
check the result on the server (or mock the server call by overriding context parameters)
use tAssert to make your check
use tAssertCatcher->tLogRow to print the test result
I made a CI (internal project) for our project with a basic Java application, which is a telnet wrapper around the Talend Command Line API (listJob, runJob...) and generates a JUnit XML result file. Everything is called by Jenkins.
It seems that nothing really exists to perfectly test Talend jobs :-(
Good luck.
In Talend 6.0.1 I found a tab named "Test Cases"; it seems new to me. At https://help.talend.com/display/TalendRealtimeBigDataPlatformStudioUserGuide60EN/6.10+Testing+Jobs+using+test+cases you can find an explanation of writing such test cases. I'm not sure if it's what you wanted, but I will have a look at it.
For end-to-end testing, we run two versions of the job, ask the user which version to compare with which, dynamically create the table on the fly, and compare the results on the DB side. This is just an attempt.
Yeah, there is no JUnit OOB (out of the box).
I tried modifying the test cases that were generated for my entities, but when I run the tests, they don't leave the data in the tables. I tried modifying persistence.xml (changing the DDL generation from none to create-tables), but when I run the tests, it throws exceptions because it's trying to update/delete rows that have foreign key dependencies.
Am I using the wrong tool for this? I was hoping I'd be able to run my tests and be left with a database in a known state. Am I using the tool wrong?
It's probably something to do with the fact that unit test transactions are rolled back after the unit test is done. Maybe it has something to do with @TransactionConfiguration(defaultRollback=true); a sketch of flipping that default is below, after the links.
I found this other post that might shed some light
How to rollback a database transaction when testing services with Spring in JUnit?
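A minimal sketch of turning that rollback off so the data survives the run, assuming the Spring TestContext framework with JUnit 4; the context file and the Customer entity are illustrative:

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.transaction.TransactionConfiguration;
import org.springframework.transaction.annotation.Transactional;

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:test-context.xml") // illustrative context file
@Transactional
@TransactionConfiguration(defaultRollback = false)  // commit instead of rolling back
public class CustomerPersistenceTest {

    @PersistenceContext
    private EntityManager em;

    @Test
    public void savedCustomerSurvivesTheTestRun() {
        em.persist(new Customer("Alice")); // Customer is an illustrative entity
        // With defaultRollback = false the transaction commits, so the row
        // is still in the table after the test finishes.
    }
}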
I'm having a hard time customizing the test database setup behavior. I would like to achieve the following:
The test suites need to use an existing database
The test suite shouldn't erase or recreate the database; instead, it should load the data from a MySQL dump
Since the db is populated from a dump, no fixtures should be loaded
Upon finishing tests the database shouldn't be destroyed
I'm having a hard time getting the test suite runner to bypass database creation.
Fast forward to 2016, and the ability to retain the database between tests has been built into Django. It's available in the form of the --keepdb flag to manage.py test:
New in Django 1.8. Preserves the test database between test runs. This
has the advantage of skipping both the create and destroy actions
which can greatly decrease the time to run tests, especially those in
a large test suite. If the test database does not exist, it will be
created on the first run and then preserved for each subsequent run.
Any unapplied migrations will also be applied to the test database
before running the test suite.
This pretty much fulfills all the criteria you have mentioned in your question. In fact, it even goes one step further: there is no need to import the dump before each and every run.
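For example, from the project root:

python manage.py test --keepdb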
This TEST_RUNNER works in Django 1.3
from django.test.simple import DjangoTestSuiteRunner as TestRunner

class DjangoTestSuiteRunner(TestRunner):
    def setup_databases(self, **kwargs):
        pass

    def teardown_databases(self, old_config, **kwargs):
        pass
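To activate it, point the TEST_RUNNER setting in your test settings module at the class (the module path here is illustrative):

TEST_RUNNER = 'myproject.testrunner.DjangoTestSuiteRunner'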
You'll need to provide a custom test runner.
The bits you're interested in overriding on the default django.test.runner.DiscoverRunner are the DiscoverRunner.setup_databases and DiscoverRunner.teardown_databases methods. These two methods are involved in creating and destroying test databases and are executed only once. You'll want to provide test-specific project settings that use your existing test database by default, and override these methods so that the dump data is loaded and the test database isn't destroyed.
Depending on the size and contents of the dump, a safe bet might be to just create a subprocess that pipes the dump to your database's SQL command-line interface (sketched below); otherwise you might be able to obtain a cursor and execute queries directly.
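A minimal sketch of such a runner, loading a MySQL dump through the mysql command-line client; the dump path, credentials, and database name are illustrative:

import subprocess

from django.test.runner import DiscoverRunner


class ExistingDatabaseTestRunner(DiscoverRunner):
    def setup_databases(self, **kwargs):
        # Pipe the dump into the mysql CLI instead of letting Django
        # create and migrate a fresh test database.
        with open('/path/to/dump.sql', 'rb') as dump:
            subprocess.check_call(
                ['mysql', '--user=testuser', '--password=testpass', 'test_db'],
                stdin=dump,
            )

    def teardown_databases(self, old_config, **kwargs):
        # Leave the existing database untouched after the run.
        pass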
If you're looking to get rid of fixture loading completely, you can provide a custom base test case that extends Django's default django.test.testcases.TestCase, with the TestCase._fixture_setup and TestCase._fixture_teardown methods overridden to be no-ops.
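A minimal sketch of that base class; these are private hooks, so this is fragile across Django versions:

from django.test import TestCase


class NoFixtureTestCase(TestCase):
    def _fixture_setup(self):
        pass  # skip fixture loading (and the transaction that wraps it)

    def _fixture_teardown(self):
        pass  # skip fixture teardown/rollback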
Caveat emptor: this runner will make it impossible to facilitate tests for anything but your application's sources. It's possible to customize the runner to create a specific alias for a connection to your existing database and load the dump, then provide a custom test case that overrides TestCase._databases_names to point to its alias.
I haven't found a way to test REST methods for my application automatically, while using setUp and tearDown methods to preserve the uniqueness of each test.
gaetestbed gives me a clean datastore between tests, and httplib2 allows me to easily call REST methods and parse their responses; but in order to do so, a local instance of my application must be running on port 8080 and called for each test. This defeats the purpose of a gaetestbed-like refresh of the datastore, since data is preserved between calls.
I could start and stop the GAE server within the setUp and tearDown methods, but this seems wasteful and time-consuming. Is there a better way?
Using gaetestbed, or your own unittest code like this, simply instantiate your handler classes directly, using a mocked/fake environment dictionary, and call the methods (initialize() and get()/post()/etc. in the case of webapp) directly.
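A minimal sketch of that technique for the old google.appengine.ext.webapp framework; MyRestHandler and its route are illustrative stand-ins for your own handlers:

import unittest

from google.appengine.ext import webapp


class MyRestHandler(webapp.RequestHandler):
    def get(self):
        self.response.out.write('{"status": "ok"}')


class MyRestHandlerTest(unittest.TestCase):
    def test_get_returns_ok(self):
        # Build the request from a fake WSGI environment instead of
        # running a dev server on port 8080.
        request = webapp.Request.blank('/api/status')
        response = webapp.Response()

        handler = MyRestHandler()
        handler.initialize(request, response)
        handler.get()

        self.assertEqual('{"status": "ok"}', response.out.getvalue())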
When doing functional tests using REST methods, we ended up writing helper calls to clean out the internal caches and force our engine to sync with the database.
I haven't used gaetestbed, but I would have thought that you could flush the datastore between tests?
"Use case: develop and test locally with real data
Restore to: your local development server.
Once you have restored the data to your local development server, I would highly suggest that you take a backup of your datastore.
You can find your datastore files in a temporary folder on your local machine (e.g., on my Mac, it's /var/folders/bz/bzDU030xHXK-jKYLMXnTzk+++TI/-Tmp-/). To find the folder on your own machine, flush the datastore (./manage.py flush) and you will see the path to your datastore folder printed in the resulting output."