How to mock a ZooKeeper server for unit tests in golang?

I am using the gozk library to interface my application with a production ZooKeeper server. I'd like to test that the application creates the correct nodes, that they contain the correct content for various cases, and that the DataWatch and NodeWatch hooks are set properly:
i.e. that the application does exactly what it should based on node and data updates.
Can I have a mock ZooKeeper server that is created and destroyed during unit tests only, with the ability to artificially create new nodes and set node contents?
Is there a way other than manually setting up a ZooKeeper server and using it?
A solution already exists for Java.

I would recommend putting the code of yours that calls ZooKeeper behind an interface.
Then during testing you sub in a 'mockZookeeperConn' object that just returns values as though it were really connecting to the server (but the return values are hardcoded).
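Here is a minimal sketch of that idea in Go; the Conn interface and mockConn type are hypothetical names for illustration, not part of gozk:

package zk

// Conn is the small surface of the ZooKeeper client that the
// application actually uses.
type Conn interface {
	Get(path string) (data string, err error)
	Create(path, value string) error
}

// mockConn satisfies Conn without any network traffic; tests can
// pre-seed nodes and inspect what the application wrote.
type mockConn struct {
	nodes map[string]string // node path -> content
}

func (m *mockConn) Get(path string) (string, error) {
	return m.nodes[path], nil
}

func (m *mockConn) Create(path, value string) error {
	m.nodes[path] = value
	return nil
}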

@Ben Echols's answer is very good.
Going further, you can try "build constraints" (build tags).
You can configure different build tags on real-zk and mock-zk code.
For example, we configure "product" for real-zk code and "mock" for mock-zk code.
Thus there are two ways to run unit tests:
go test -tags mock if there isn't a zk env.
go test -tags product if there is an available zk env.
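For illustration, the real and mock constructors might live in two files guarded by build tags (the file names and the newConn helper are assumptions, building on the sketch above):

// conn_real.go (compiled only with: go test -tags product)
//go:build product

package zk

func newConn() Conn {
	// dial the real ZooKeeper ensemble via gozk here
	return nil
}

// conn_mock.go (compiled only with: go test -tags mock)
//go:build mock

package zk

func newConn() Conn {
	return &mockConn{nodes: map[string]string{}}
}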

Related

Seeding data for acceptance testing of AWS Serverless application

I'm starting to teach myself serverless development, using AWS Lambda and the Serverless CLI. So far, all is going great. However, I've got a snag with acceptance testing.
What I'm doing so far is:
Deploy stack to AWS with a generated stage name - I'm using the CI job ID for this
Run all the tests against this deployment
Remove the deployment
Deploy the stack to AWS with the "Dev" stage name
This is fine, until I need some data.
Testing endpoints without data is easy - that's the default state. So I can test that GET /users/badid returns a 404.
What's the typical way of setting up test data for the tests?
In my normal development I do this by running a full stack - UI, services, databases - in a local Docker Compose stack, and the tests can talk to them directly. Is that the process to follow here - have the tests talk directly to the varied AWS data stores? If so, how do you handle multiple (DynamoDB) tables across different CF stacks, e.g. for testing the UI?
If that's not the normal way to do it, what is?
Also, is there a standard way to clear out data between tests? I can't safely test a search endpoint if the data isn't constant for that test, for example. (If data isn't cleared out then the data in the system will be dependent on the order the tests run in, which is bad)
Cheers
Since this is about acceptance tests, those should be designed to care less about the architecture (the tech side) and more about the business value. After all, such tests are supposed to be black box. Speaking from experience with both SLS and mSOA, the test setup and challenges are quite similar.
What's the typical way of setting up test data for the tests?
There are many ways/patterns to do the job here, depending on your context. The ones that worked best for me are:
Database Sandbox, to provide a separate test database for each test run.
Table Truncation Teardown, which truncates the tables modified during the test to tear down the fixture (see the sketch after this list).
Fixture Setup Patterns, which help you build your prerequisites depending on the needs of the test run.
You can look at Fixture Teardown Patterns for a "standard way to clear out data between tests".
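A rough sketch of Table Truncation Teardown against DynamoDB in Go (the table name, the "id" key attribute, and the choice of the v1 AWS SDK are assumptions for illustration):

package teardown

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

// truncateTable deletes every item in the given table by scanning only
// the key attribute and issuing a DeleteItem per key. Fine for small
// test tables; never point it at production data.
func truncateTable(table string) error {
	db := dynamodb.New(session.Must(session.NewSession()))
	out, err := db.Scan(&dynamodb.ScanInput{
		TableName:            aws.String(table),
		ProjectionExpression: aws.String("id"), // assumed partition key
	})
	if err != nil {
		return err
	}
	for _, key := range out.Items {
		if _, err := db.DeleteItem(&dynamodb.DeleteItemInput{
			TableName: aws.String(table),
			Key:       key,
		}); err != nil {
			return err
		}
	}
	return nil
}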
Maybe you don't need to
"have the tests talk directly to the varied AWS data stores",
as you might create an unrealistic state, if you can just hit the APIs/endpoints to do the job for you. For example, instead of managing PutItem calls against multiple DynamoDB tables, simply hit the register-new-user API. More info on the Back Door Manipulation layer here.
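For instance, a test could seed its data through the public API of the freshly deployed stage; a minimal sketch, assuming a hypothetical POST /users endpoint:

package acceptance

import (
	"bytes"
	"fmt"
	"net/http"
)

// seedUser creates a user through the deployed API instead of writing
// to DynamoDB directly, so the data passes through the real validation path.
func seedUser(baseURL, name string) error {
	body := bytes.NewBufferString(fmt.Sprintf(`{"name":%q}`, name))
	resp, err := http.Post(baseURL+"/users", "application/json", body)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusCreated {
		return fmt.Errorf("seeding user: unexpected status %d", resp.StatusCode)
	}
	return nil
}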

SOAPUI ability to switch between database connections for test suite

A bit of background:
I have an extensive amount of SOAPUI test cases which test web services as well as database transactions. This worked fine when there were one or two different environments as i would just clone the original test suite, update the database connections and then update the endpoints to point to the new environment. A few changes here and there meant i would just re-clone the test cases which had be updated for other test suites.
However, I now have 6 different environments which require these tests to be run and as anticipated, i have been adding more test cases as well as changing original ones. This causes issues when running older test suites as they need to be re-cloned.
I was wondering whether there was a better way to organise this. Ideally i would want the one test suite and be able to switch between database connections and web service endpoints but have no idea where to start with this. Any help or guidance would be much appreciated.
I only have access to the Free version of SOAPUI
Here is how I would go about achieving this.
There is an original test suite which contains all the tests, but it is configured to run against a single server. As you mentioned, you cloned the suite for a second database schema and changed the connection details, and now it turns out there are more databases that need to be tested.
Keep your project with the single required test suite. Wherever the database server details are provided, replace the actual values with property expansions for the connection details.
In the JDBC step, change the connection string from:
jdbc:oracle:thin:scott/tiger@//myhost:1521/myservicename
to:
jdbc:oracle:thin:${#Project#DB_USER}/${#Project#DB_PASSWORD}@//${#Project#DB_HOST}:${#Project#DB_PORT}/${#Project#DB_SERVICE}
You can define the following properties in a file and name it accordingly. Say the following properties describe the database hosted on host1: name the file host1.properties. When you want to run the tests against the host1 database, import this file as project-level custom properties.
DB_USER=username
DB_PASSWORD=password
DB_HOST=host1
DB_PORT=1521
DB_SERVICE=servicename
Similarly, you can keep as many property files as you want and import the respective file before you run against the respective database server.
You can use this property file not only for the database, but also for web services hosted on different servers, such as staging, QA, and production, without changing the endpoints. All you need to do is set the property expansion in the endpoint.
Update based on the comment
When you want to do the same for web services, go to the service interface -> Service Endpoints tab, add a new endpoint ${#Project#END_POINT}/context/path, and click the Assign button. Select 'All requests and Test Requests' from the drop-down. You may also remove the other endpoints.
Add a new property END_POINT to your property file with a value such as http://host:port. This also gives you an advantage if you want to run the tests against https, say https://host:port.
And if you have multiple services/WSDLs which are hosted on different servers, you can use a unique property name for each service.
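For example, a hypothetical staging.properties could carry both the endpoint and the database details (all values are placeholders):

END_POINT=https://staging-host:8443
DB_USER=stage_user
DB_PASSWORD=stage_secret
DB_HOST=staging-db-host
DB_PORT=1521
DB_SERVICE=stageservice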
Hope this helps.

django clear redisco for testing

We are using Redisco for our models, and I am writing some tests for them; however, Redis keeps filling up, so each test adds more data to Redis.
Is there a way to clear Redis for each test, and what are the best practices when testing (using Redis and Redisco)?
- EDIT -
This is the solution I went with in the end, and I want to share this with others who might have the same question
To make sure each test case is running on a clean Redis instance, start each test case by running:

from redis import Redis

redis = Redis()    # connects to localhost:6379, db 0 by default
redis.flushall()   # removes every key from every database on this server

As people have commented below, make sure you don't run the tests against a production instance of Redis.
I would recommend running a second Redis instance for testing (e.g. on a different port) so you are also not able to accidentally drop any production data from your Redis when running tests.
You could then use a custom BaseTestClass which overrides your project's settings (in the setUp method, where you can also empty your Redis DBs) so that they point to the other Redis instance (hopefully you've defined your Redis connections in your project's settings), and have all your test classes inherit from this base class.
The standard way of dealing with side effects such as connecting to a database in unit tests is to provide a mock implementation of the data layer during the test. This can be done in many ways: you could use a different Redis instance, or dynamically override methods to report to your test rather than actually manipulating the database, etc.
Dependency Injection is a pattern used for this kind of problem, more often in static languages like Java, but there are tools for Python too; see http://code.google.com/p/snake-guice/
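To make the pattern concrete in a statically typed language (a Go sketch with made-up names; the same shape applies in Python):

type Store interface {
	FlushAll() error
}

// Service gets its data layer injected, so a test can pass in a fake
// Store instead of one backed by a real Redis server.
type Service struct {
	store Store
}

func NewService(store Store) *Service {
	return &Service{store: store}
}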

How do I unit test REST methods in App Engine?

I haven't found a way to test the REST methods of my application automatically while using setUp and tearDown methods to keep each test isolated.
gaetestbed gives me a clean datastore between tests, and httplib2 allows me to easily call REST methods and parse their responses; but in order to do so, a local instance of my application must be running on port 8080 and called by each test. This defeats the purpose of a gaetestbed-like refresh of the datastore, since data is preserved between calls.
I could start and stop the GAE server within the setUp and tearDown methods, but this seems wasteful and time-consuming. Is there a better way?
Using gaetestbed, or your own unittest code, simply instantiate your handler classes directly with a mocked/fake environment dictionary, and call the methods (initialize() and get()/post()/etc. in the case of webapp) directly.
When doing functional tests using REST methods we ended up writing helper calls to clean out the internal caches and force our engine to sync with the database.
I haven't used gaetestbed, but I would have thought that you could flush the datastore between tests?
"Use case: develop and test locally with real data
Restore to: your local development server.
Once you have restored the data to your local development server, I would highly suggest that you take a backup of your datastore.
You can find your datastore files in a temporary folder on your local machine (e.g., on my Mac, it's /var/folders/bz/bzDU030xHXK-jKYLMXnTzk+++TI/-Tmp-/). To find the folder on your own machine, flush the datastore (./manage.py flush) and you will see the path to your datastore folder printed in the resulting output."

Automate test of web service communication

I have an application that sends messages to an external web service. I build and deploy this application using MSBuild and CruiseControl.NET. As CCNet builds and deploys the app, it also runs a set of tests using NUnit. I'd now like to test the web service communication as well.
My idea is that as part of the build process a web service should be generated (based on the external web service's WSDL) and deployed to the build server's local web server. All the web service should do is receive the message and place it on the file system, so I can then check it using ordinary NUnit, for example. This would also make development easier, as new developers would only have to run the build script to be up and running (and would not have to spend time setting up a connection to the third-party service).
Are there any existing utilities out there that easily mock a web service based on a WSDL? Anyone done something similar using MSBuild?
Are there other ways of testing this scenario?
I just started looking into http://www.soapui.org/ and it seems like it will work nicely for testing web services.
Also, maybe look at adding an abstraction layer in your web service, where each service call directly calls a testable method (outside of the web scope)? I just did this with a bigger project I'm working on, and its testability is working nicely.
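To illustrate that abstraction layer (sketched in Go rather than .NET; all names are made up):

package svc

import (
	"io"
	"net/http"
)

// handleMessage is the thin web-facing wrapper: it only reads the
// request and delegates to the real logic.
func handleMessage(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	if err := processMessage(body); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	w.WriteHeader(http.StatusOK)
}

// processMessage holds the business logic and can be unit tested
// directly, with no web server involved.
func processMessage(msg []byte) error {
	// business logic goes here
	return nil
}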
In general, a very good way to test things like this is to use mock objects.
At work, we use the product TypeMock to test things like Web Service communication and other outside dependencies. It costs money, so for that reason it may not be suitable for your needs, but I think it's a fantastic product. I can tell you from personal experience that it integrates very well with NUnit and CCNet.
It's got a really simple syntax where you basically say "when this method/property is called, I want you to return this value instead." It's great for testing things like network failures, files not being present, and of course, web services.
Take a look at NMock2. It's an open-source mocking library that allows you to create "virtual" implementations of interfaces that support rich and deep interaction.
For example, if your WS interface is called IService and has a Data GetData() method, you can create a mock that requires the method to be called once and returns a new Data object:
var mockery = new Mockery();
var testService = mockery.NewMock<IService>();

Expect.Once
    .On(testService)
    .Method("GetData")        // IService.GetData, per the example above
    .WithNoArguments()
    .Will(Return.Value(new Data()));
At the end of the test, call mockery.VerifyAllExpectationsHaveBeenMet() to assure that the GetData method was actually called.
P.S.: don't confuse the "NMock2" project with the "NMock RC2", which is also called "nmock2" on sourceforge. NMock2-the-project seems to have superseded NMock.
This might also be something - MockingBird. Looks useful.
At my workplace we are using TypeMock and NUnit for our unit testing.