I haven't found a way to test the REST methods of my application automatically while using setUp and tearDown methods to preserve the isolation of each test.
gaetestbed gives me a clean datastore between tests, and httplib2 lets me easily call REST methods and parse their responses; but to do so, a local instance of my application must be running on port 8080 and be called in each test. This defeats the purpose of a gaetestbed-style refresh of the datastore, since data is preserved between calls.
I could start and stop the GAE server within the setUp and tearDown methods, but this seems wasteful and time-consuming. Is there a better way?
Whether you use gaetestbed or your own unittest code, simply instantiate your handler classes directly, using a mocked/fake environment dictionary, and call their methods (initialize() and get()/post()/etc., in the case of webapp) directly.
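As a self-contained sketch of this direct-call pattern: the FakeRequest/FakeResponse classes and HelloHandler below are illustrative stand-ins, not webapp's real classes, but the shape (initialize() plus get()/post() methods) mirrors a webapp handler.

```python
import unittest

# Minimal stand-ins for webapp's request/response objects; in a real test
# you would build webapp's own Request/Response instead. These fakes are
# assumptions for a self-contained sketch.
class FakeRequest:
    def __init__(self, path):
        self.path = path

class FakeResponse:
    def __init__(self):
        self.body = ""
    def write(self, text):
        # Mirrors writing to the response body (response.out.write in webapp).
        self.body += text

# Example handler under test, written in the webapp style.
class HelloHandler:
    def initialize(self, request, response):
        self.request = request
        self.response = response
    def get(self):
        self.response.write("hello from %s" % self.request.path)

class HelloHandlerTest(unittest.TestCase):
    def test_get(self):
        handler = HelloHandler()
        handler.initialize(FakeRequest("/hello"), FakeResponse())
        handler.get()  # call the method directly -- no dev server, no port 8080
        self.assertEqual(handler.response.body, "hello from /hello")
```

Because nothing here touches a running server or the real datastore, each test starts from a clean slate by construction.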
When doing functional tests with REST methods, we ended up writing helper calls to clear the internal caches and force our engine to sync with the database.
I haven't used gaetestbed, but I would have thought that you could flush the datastore between tests?
"Use case: develop and test locally with real data
Restore to: your local development server.
Once you have restored the data to your local development server, I would highly suggest that you take a backup of your datastore.
You can find your datastore files in a temporary folder on your local machine (e.g., on my Mac, it's /var/folders/bz/bzDU030xHXK-jKYLMXnTzk+++TI/-Tmp-/). To find the folder on your own machine, flush the datastore (./manage.py flush) and you will see the path to your datastore folder printed in the resulting output."
I'm the lead programmer for unit testing at my business, and I would like to create a copy of the database for the unit tests to access. I'm told I can export the database from phpMyAdmin or MySQL Workbench (the latter of which doesn't offer an obvious way to export), but I'm not sure how to connect that copy to the unit tests. If someone could explain the process, from exporting all the way to making the unit tests use that exported copy, I would be very appreciative. Even if you only know some of the steps in between, that would still be helpful at this point.
Whoever suggested that you export the database was suggesting that you then import it into another server running in a completely independent testing environment. You would configure a MySQL instance as the QA or testing server and, when running the unit tests, point them at the test server instead of the production data. Exactly how you'd do that depends on the unit test system you're using and your network environment.
A much less robust solution is to copy the data to a testing database on the same server. Since it has a different database name, you can safely interact with it instead of the production data. Within phpMyAdmin, there is a copy-database feature in the Operations tab. In this case, you'd have to modify your tests to connect to the new database name.
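As a sketch, the switch can be as simple as choosing the database name from an environment variable; all names here (the databases, the variable) are hypothetical placeholders for your own setup.

```python
import os

# Hypothetical names: "myapp" is the production database, "myapp_test" is
# the copy made via phpMyAdmin's copy-database feature.
PROD_DB = "myapp"
TEST_DB = "myapp_test"

def db_config():
    """Return connection settings; tests set MYAPP_ENV=test to get the copy."""
    name = TEST_DB if os.environ.get("MYAPP_ENV") == "test" else PROD_DB
    return {"host": "localhost", "user": "app", "database": name}
```

Your test runner would export MYAPP_ENV=test before running, so production code paths never see the test database name.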
I'm trying to write unit tests for my Node application. I'm using a PostgreSQL database for development and SQLite for the tests. However, SQLite does not understand some PostgreSQL features such as to_tsvector, and sometimes I get a "SQLite database locked" problem. So I'm considering running a dedicated database server for testing, both locally and on the build server. Is that a good solution? I found some alternatives that mention using a Docker container:
testing with docker.
So what is a suitable solution for running Postgres tests locally and on the build server without running into database locks?
I would avoid using the database in the unit tests, as they are then dependent on an external system:
Part of being a unit test is the implication that things outside the code under test are mocked or stubbed out. Unit tests shouldn't have dependencies on outside systems. They test internal consistency as opposed to proving that they play nicely with some outside system.
Basically, mock any calls to the database so you don't need to use one.
However, if you really must use a Postgres database, you should use the official image in a Compose file and initialize it with your schema. You can then connect to it from your tests in a known state, etc.
As for the database lock, it may disappear once you switch from SQLite to Postgres; you may also want to check whether your tests involve any concurrency.
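A minimal Compose file for this could look like the sketch below; the image tag, credentials, and port are placeholders, and the ./initdb directory is assumed to hold your schema .sql files, which the official postgres image runs on first start.

```yaml
version: "3"
services:
  test-db:
    image: postgres:13
    environment:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: app_test
    ports:
      - "5433:5432"   # expose on 5433 so it can't clash with a local postgres
    volumes:
      - ./initdb:/docker-entrypoint-initdb.d   # schema scripts run at first boot
```

Run `docker-compose up -d test-db` before the test suite, point the tests at port 5433, and tear the container down (or recreate it) to get back to a known state.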
I am using the gozk library to interface my application with a production ZooKeeper server. I'd like to test that the application creates the correct nodes, that they contain the correct content in various cases, and that the DataWatch and NodeWatch are set properly:
i.e. the application performs exactly what should based on the node and data updates.
Can I have a mock ZooKeeper server created and destroyed during unit tests only, with the ability to artificially create nodes and set node contents?
Is there a way other than manually creating a ZooKeeper server and using it?
A solution already exists for Java.
I would recommend putting the code of yours that calls ZooKeeper behind an interface.
Then, during testing, you sub in a 'mockZookeeperConn' object that just returns values as though it were really connecting to the server (but with the return values hardcoded).
@Ben Echols's answer is very good.
Going further, you can try "build constraints" (build tags).
You can configure different build tags on real-zk and mock-zk code.
For example, we configure "product" for real-zk code and "mock" for mock-zk code.
Thus there are two ways to run the unit tests:
go test -tags mock if there isn't a zk env.
go test -tags product if there is an available zk env.
I want to run my tests against a distinct PostgreSQL database, as opposed to the in-memory database option or the default database configured for the local application setup (via the db.default.url configuration variable). I tried using the %test.db and related configuration variables (as seen here), but that didn't seem to work; I think those instructions are intended for Play Framework v1.
FYI, the test database will have its schema pre-defined and will not need to be created and destroyed with each test run. (Though I don't mind if it is re-created and destroyed on each run, I don't want to use "evolutions" to do so; I have a single SQL schema file I'm using at this point.)
Use alternative configuration files during local development to override DB credentials (and other settings), i.e. as described in the other answer (Update 1).
Tip: using different kinds of databases in development and production quickly leads to errors and bugs, so it's better to install the same DB locally for development and testing.
We were able to implement Play 1.x style configs on top of Play 2.x - though I bet the creators of Play will cringe when they hear this.
The code is not quite shareable, but basically, you just have to override the "configuration" method in your GlobalSettings: http://www.playframework.org/documentation/api/2.0.3/scala/index.html#play.api.GlobalSettings
You can check for some system or conf setting like "environment.tag=%test" and then rewrite every config of the form "%test.foo=bar" into "foo=bar".
We are using Redisco for our models, and I am writing some tests for them; however, Redis keeps filling up, so each test adds more data to Redis.
Is there a way to clear Redis for each test, and what are the best practices when testing (using Redis and Redisco)?
- EDIT -
This is the solution I went with in the end, and I want to share it with others who might have the same question.
To make sure each test case runs against a clean Redis instance, start each test case by running
from redis import Redis

redis = Redis()
redis.flushall()
As people have commented below, make sure you don't run the tests against a production instance of Redis.
I would recommend running a second Redis instance for testing (e.g. on a different port) so that you can't accidentally drop production data from Redis when running tests.
You could then use a custom BaseTestClass that overrides your project's settings in the setUp method (you can also empty your Redis DBs there) so that they point to the test Redis instance (hopefully you've defined your Redis connections in your project's settings), and have all your test classes inherit from this base class.
The standard way of dealing with side effects such as connecting to a database in unit tests is to provide a mock implementation of the data layer during the test. This can be done in many ways: you could use a different Redis instance, or dynamically override methods to report to your test rather than actually manipulating the database, etc.
Dependency Injection is a pattern used for this kind of problem, more often in static languages like Java, but there are tools for Python too; see http://code.google.com/p/snake-guice/