I have two servers: one works as an API and the other retrieves data from the API (the web UI). If I run any unit tests for views in the web UI, they create objects in the API. How do I delete the objects in the API after testing is complete? Can you suggest any ways to deal with this issue?
If you are testing, you should have a separate system that does not impact anything being used in production.
You could use mocks to create the intended responses from the API, for example.
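As a rough sketch (the names below, such as webui.views.api_client.create_object and the URL, are assumptions about your code, not anything standard), a Django view test might patch the API client so the test never touches the real API and there is nothing to clean up afterwards:

```python
# Minimal sketch, with invented names: the web UI view is assumed to call the
# API through a helper exposed as api_client in webui.views.
from unittest import mock

from django.test import SimpleTestCase


class CreateViewTests(SimpleTestCase):
    @mock.patch("webui.views.api_client.create_object")
    def test_create_view_posts_to_api(self, mock_create):
        # The mock stands in for the real API call, so no object is ever
        # created on the API server and nothing needs deleting.
        mock_create.return_value = {"id": 123, "name": "example"}

        response = self.client.post("/objects/new/", {"name": "example"})

        # The redirect is just what this hypothetical view does on success.
        self.assertEqual(response.status_code, 302)
        mock_create.assert_called_once_with(name="example")
```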
Related
I have a booking app that can deal with both local and remote API bookings. Our logic (for example, for pricing and availability) follows two very different pathways. We obviously need to test both.
But running regular tests against a remote API is slow. The test environment provided manages a response in 2-17 seconds. It's not feasible to use this in my pre_commit tests. Even if they sped that up, it's never going to be fast and will always require a connection to pass.
But I still need to test our internal logic for API bookings.
Is there some way that, within a test runner, I can spin up a little webserver (quite separate from the Django website) that serves a reference copy of their API? I could then plug that into the models we're dealing with and query against it locally, at speed.
What's the best way to handle this?
Again, I need to stress that this reference API should not be part of the actual website, unless there's a way of adding views that only apply at test time. I'm looking for clean solutions. The API calls are pretty simple. I'm not looking for verification or anything like that here, just that bookings made against an API are priced correctly internally, handle availability issues, and so on.
For your testing purposes, you can mock the API call functions.
You can see more here:
https://williambert.online/2011/07/how-to-unit-testing-in-django-with-mocking-and-patching/
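As a rough sketch of what the article describes (the module, model, and function names below are assumptions about your booking code, as is the 10% markup), you can patch the call that talks to the remote booking API and test your internal pricing and availability logic against canned responses:

```python
# Sketch only: bookings.models.Booking and bookings.models.fetch_quote are
# invented names standing in for your own booking code and its API call.
from decimal import Decimal
from unittest import mock

from django.test import TestCase

from bookings.models import Booking


class RemoteBookingPricingTests(TestCase):
    @mock.patch("bookings.models.fetch_quote")
    def test_remote_price_includes_our_markup(self, mock_fetch_quote):
        # Canned response standing in for the slow remote test environment.
        mock_fetch_quote.return_value = {"price": "100.00", "available": True}

        booking = Booking(source="remote", nights=2)

        # Only our own pathway runs here; no network connection is needed.
        # (Assumes a hypothetical 10% markup on the quoted price.)
        self.assertEqual(booking.total_price(), Decimal("110.00"))
        self.assertTrue(booking.is_available())
```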
I have a standard web application. It contains a UI (which has a web service API), which references a business layer, which in turn references a SQL-based DAL. These layers have good coverage of unit tests, which use mocking to replace their dependencies.
I also have an API library which allows users to access the WS via code (and handles a lot of the issues like credentials, URLs, etc.). I want to write unit tests for my library.
Currently the only way I can do this is to write tests referencing the library and populate the database using the same mechanism I use to test the DAL. However this approach is clearly flawed as it doesn't test my API library classes - it tests the entire stack!
How can I insert a mock under the WS (which is in a different project)? I'm currently using MBUnit and MOQ to test.
EDIT: My unit tests currently test two things:
That the API correctly translates the WS objects into the local objects I pass to the consumer
That the fields of the transport objects were populated correctly
Your API really seems to be doing two things, so I'd place these responsibilities in two distinct modules and test them separately:
A Translator module. Translates API calls into web service proxy calls and maps back responses from the web service to appropriate data structures defined in the API. To test it, use a mock of the Adapter module described below.
An Adapter / Proxy module whose job is to call the real web service. You can test it using integration tests. If the only way to do it is populate a database and exercise the full stack, you might want to move these tests out of your main test suite into a less frequent, long-running test suite.
Similar approaches are discussed here: http://blog.8thlight.com/eric-smith/2011/10/27/thats-not-yours.html and there: http://nf2p.com/dot-net/mocking-web-service-proxies-using-the-adapter-pattern-to-allow-unit-testing/
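The question is in C# with Moq, but the split itself is language-agnostic. Here is a minimal Python sketch of the same idea (all class, method, and field names are invented), where the translator is unit tested against a mocked adapter:

```python
# Illustrative sketch of the Translator / Adapter split, with made-up names.
from dataclasses import dataclass
from unittest import mock
import unittest


@dataclass
class LocalBooking:
    reference: str
    total: float


class WebServiceAdapter:
    """Thin proxy around the real web service; covered by integration tests."""

    def get_booking(self, reference):
        raise NotImplementedError("calls the real web service")


class BookingTranslator:
    """Maps web-service payloads to local objects; covered by unit tests."""

    def __init__(self, adapter):
        self.adapter = adapter

    def fetch_booking(self, reference):
        payload = self.adapter.get_booking(reference)
        return LocalBooking(reference=payload["Ref"], total=float(payload["Total"]))


class BookingTranslatorTests(unittest.TestCase):
    def test_maps_ws_fields_to_local_object(self):
        adapter = mock.Mock(spec=WebServiceAdapter)
        adapter.get_booking.return_value = {"Ref": "ABC123", "Total": "99.50"}

        booking = BookingTranslator(adapter).fetch_booking("ABC123")

        self.assertEqual(booking.reference, "ABC123")
        self.assertEqual(booking.total, 99.50)
        adapter.get_booking.assert_called_once_with("ABC123")
```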
I am trying to write some unit tests for my service layer, which I think I am doing reasonably well. The service layer has a dependency on a repository, so I am mocking the repository using RhinoMocks, which means I am testing the service layer "WITHOUT" hitting the database, which is great.
Now I need to test my repository layer. This has a direct connection to a database, so I have to test it against the database, don't I? I have no other option but to test it that way?
If I test another implementation of the repository that doesn't hit the database, then I am not testing my actual implementation.
I have managed to mock out all the lower layers, so anything that depended on code that takes a while to run (i.e. the repository) was mocked out. The result is that all my tests for the layers above the repository complete quickly and do not hit the database.
The problem is what to do now with the repository itself. I have to test it, but it has a dependency on a SQL database.
Well, the general answer goes like this: I would write unit tests that verify the logic of the repository layer, break the SQL dependency out into a new class, and mock that class in the tests of the repo. If the repository layer contains only a SQL connection and no logic, there is nothing to unit test in my opinion; in that case you are better served by integration tests with the database connected.
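As a rough sketch of that split (all names invented, and shown in Python rather than .NET purely for brevity): the repository keeps the mapping logic, while the raw SQL execution sits behind a small gateway class that the repository test mocks out:

```python
# Illustrative only: the raw SQL access lives in SqlGateway, the mapping
# logic lives in UserRepository, and the unit test mocks the gateway.
from unittest import mock
import unittest


class SqlGateway:
    """Owns the actual database connection; tested with integration tests."""

    def fetch_all(self, sql, params=()):
        raise NotImplementedError("runs the query against the real database")


class User:
    def __init__(self, user_id, name):
        self.user_id = user_id
        self.name = name


class UserRepository:
    def __init__(self, gateway):
        self.gateway = gateway

    def find_by_name(self, name):
        rows = self.gateway.fetch_all(
            "SELECT id, name FROM users WHERE name = ?", (name,)
        )
        return [User(user_id=row[0], name=row[1]) for row in rows]


class UserRepositoryTests(unittest.TestCase):
    def test_maps_rows_to_user_objects(self):
        gateway = mock.Mock(spec=SqlGateway)
        gateway.fetch_all.return_value = [(1, "alice")]

        users = UserRepository(gateway).find_by_name("alice")

        self.assertEqual(len(users), 1)
        self.assertEqual(users[0].user_id, 1)
        self.assertEqual(users[0].name, "alice")
```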
Mocking code you don't own is a bad practice, so I think the best option for you is to test repositories via acceptance/integration tests.
You certainly can test your repository layer without talking to a database. Most ADO.net classes are mockable, if you are careful about how you create them and you are careful to couple to interfaces instead of concretions. Unfortunately, ADO.net was created before mocking was a very popular practice, and it is still a bit of a pain to do.
The real question in my mind is whether you should try to mock them. The benefits of mocking are twofold: they run faster, and they force you to encapsulate more details about your database (making it easier to switch out db technologies, if you ever want to do it). The benefits of functional tests are that they also test your database layer (stored procedures, etc), they are arguably easier to write, and they are easier to maintain in the sense that if a db change is made, integration tests notice automatically, instead of you hunting the mocked out tests down.
I would say the "best" approach would be to test them both with mocks and with the real database, since this gives you the best of both worlds. However, it is of course quite costly.
I am building a web application that uses the database for Users, Security/roles, and to store content.
It seems a little daunting to me to begin on the road of unit testing because I have to make sure my database has been initialized properly for my tests to run.
What are common practices to help in this regard?
i.e. while developing/testing, I might delete a user, but for my test to pass that user has to be in the database, along with his profile, security settings etc.
I know I can create a setup script, something to recreate the database, etc.
I don't want to end up spending my entire time maintaining my tests and ensuring my database is in sync.
Or is that the cost of Unit Testing/TDD?
The solution is mocking. Mocks "replace" the connection: the unit under test "connects" to the mock and executes its statements, and the mock returns ordinary result sets (or the like).
After the test, the mock can give you a list of all the methods that were called by the unit under test. See Easymock.org.
As the others said: tests that need a DB connection aren't unit tests, so drop the connection and do it locally with mock objects.
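The answers above mention EasyMock (Java), but the idea is the same everywhere. A small Python sketch with unittest.mock (the DAO class here is invented): the mock plays the part of the connection and afterwards reports exactly which methods and statements were used:

```python
# Sketch only: MemberDao is an invented class; the mock plays the role of a
# DB-API connection/cursor and records every call made to it.
from unittest import mock
import unittest


class MemberDao:
    def __init__(self, connection):
        self.connection = connection

    def rename(self, member_id, new_name):
        cursor = self.connection.cursor()
        cursor.execute(
            "UPDATE members SET name = ? WHERE id = ?", (new_name, member_id)
        )
        self.connection.commit()


class MemberDaoTests(unittest.TestCase):
    def test_rename_issues_expected_update(self):
        connection = mock.Mock()
        cursor = connection.cursor.return_value

        MemberDao(connection).rename(42, "Bob")

        # The mock tells us which statements the unit under test executed.
        cursor.execute.assert_called_once_with(
            "UPDATE members SET name = ? WHERE id = ?", ("Bob", 42)
        )
        connection.commit.assert_called_once()
```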
It's not a unit test if you are testing more than one unit.
Usually you'll have one component (your page, or the business layer) talking to a data layer object that is responsible for actually connecting and querying the database. My recommendation is to develop a unit test for the first component, using dependency injection to pass in a mock version of the DataLayer (which acts on hardcoded data, or a List you pass in, etc). This way you are testing your higher level code in isolation from the other components.
Then you are free to develop other unit tests (and integration tests) for the data layer to ensure that it is handling its job (writing to the database) correctly.
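A minimal sketch of that arrangement (all names invented): the higher-level component takes the data layer as a constructor argument, and the test passes in a tiny hand-written fake backed by a plain list instead of a database:

```python
# Illustrative sketch: ReportService is the higher-level component under test;
# FakeOrderDataLayer is a hand-rolled stand-in backed by a plain list.
import unittest


class ReportService:
    def __init__(self, data_layer):
        self.data_layer = data_layer

    def total_revenue(self):
        return sum(order["amount"] for order in self.data_layer.get_orders())


class FakeOrderDataLayer:
    def __init__(self, orders):
        self._orders = orders

    def get_orders(self):
        return list(self._orders)


class ReportServiceTests(unittest.TestCase):
    def test_total_revenue_sums_all_orders(self):
        fake = FakeOrderDataLayer([{"amount": 10}, {"amount": 15}])
        self.assertEqual(ReportService(fake).total_revenue(), 25)
```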
We use an in-memory database (hsqldb) for our unit tests. In the setup we populate it with test data, then before each test case we start a transaction and after each test case we roll the transaction back. This means each test case starts with a clean database.
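That answer is Java/hsqldb; the same per-test transaction-rollback pattern can be sketched with Python's built-in sqlite3 in-memory database (the schema and data here are invented):

```python
# Sketch: one shared in-memory database, a transaction started before each
# test and rolled back afterwards, so every test sees the original fixtures.
import sqlite3
import unittest


class RepositoryTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # isolation_level=None means autocommit, so we control transactions
        # ourselves with explicit BEGIN/ROLLBACK.
        cls.conn = sqlite3.connect(":memory:", isolation_level=None)
        cls.conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        cls.conn.execute("INSERT INTO users (name) VALUES ('alice')")

    def setUp(self):
        # Start a transaction before each test ...
        self.conn.execute("BEGIN")

    def tearDown(self):
        # ... and roll it back afterwards, discarding the test's changes.
        self.conn.execute("ROLLBACK")

    def test_insert_is_rolled_back_afterwards(self):
        self.conn.execute("INSERT INTO users (name) VALUES ('bob')")
        count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 2)

    def test_sees_only_fixture_data(self):
        count = self.conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
        self.assertEqual(count, 1)
```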
It sounds like you actually want functional/integration testing. For Web applications I recommend you look into Selenium or Canoo WebTest.
These are also great for automating tasks you do on the Web. I have a set-up suite and a tear-down suite that create business entities and testing users through the admin interface as well as tests for the customer-facing site.
Michael Feathers argues that tests that communicate with databases aren't unit tests by definition. The main reason for this is the point you bring up: unit tests should be simple and easy to run.
This isn't to say that you shouldn't test database code. But you don't want to consider them unit tests. Thus, if you do any database testing, you want to separate the tests from the rest of your unit tests.
Since I use Doctrine for my PHP database work, and since Doctrine has a query abstraction layer (called DQL), I can swap out back ends without having to worry too much about compatibility issues. So in this case, at the beginning of my unit tests I would just load the schema and fixtures into a SQLite DB, test my models, and discard the SQLite DB at the end of testing.
This way I've tested my models and data access to make sure their queries are formed correctly.
Now testing the specific database instance to make sure the current schema is correct is a different story and IMHO probably doesn't belong in your Unit Tests, so much as it belongs in your deployment task list.
The cost of unit testing/TDD is that you have to alter your design so that you have a clean separation between the database and the domain layer, so that you can create a fake that will allow you to create tests that don't hit the database.
But that cleaner design is just the beginning of the cost. After that you have to create tests that both help you make the code work right the first time and alert you when anyone breaks something that used to work.
And after you have a good fundamental design with tests that protect your existing functionality, you'll find yourself cleaning up code to make it easier to work with, with confidence you aren't breaking things along the way.
And so on and so on... The costs of unit testing/TDD just keep piling up over time.
For Java, you may also want to look into dbunit. http://www.dbunit.org/
Often, I find myself wanting to write a unit test for a portion of code that accesses HTTP resources as part of its normal function. Have you found any good ways to write these kinds of tests?
Extract the part that accesses the HTTP resources out of your main code and create an interface for that new component. In your test, mock the interface and return data that you can control reliably.
You can test the HTTP access as an integration test.
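A small sketch of that shape (names invented): the HTTP access sits behind a tiny fetcher function, the production implementation uses urllib, and the unit test injects a mock in its place:

```python
# Sketch: WeatherClient is the code under test; the HTTP access is isolated
# behind a fetch(url) callable so the test never touches the network.
import json
import unittest
from unittest import mock
from urllib.request import urlopen


def http_fetch(url):
    """Production fetcher; only this function talks to the network."""
    with urlopen(url) as response:
        return response.read().decode("utf-8")


class WeatherClient:
    def __init__(self, fetch=http_fetch):
        self.fetch = fetch

    def current_temperature(self, city):
        body = self.fetch(f"https://api.example.com/weather?city={city}")
        return json.loads(body)["temperature_c"]


class WeatherClientTests(unittest.TestCase):
    def test_parses_temperature_from_response(self):
        fetch = mock.Mock(return_value='{"temperature_c": 21.5}')

        client = WeatherClient(fetch=fetch)

        self.assertEqual(client.current_temperature("Oslo"), 21.5)
        fetch.assert_called_once_with("https://api.example.com/weather?city=Oslo")
```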
This is typically a function I would mock out for the tests... I don't like my tests depending on anything external... even worse if it is an external resource I have no control over (such as a 3rd party website).
Databases are one of the few external resources I often won't mock... I use DBUnit instead.
I recently had to write a component that accessed a wiki and did some basic text scraping. The majority of tests I wrote validated the correct HTTP response code. As far as validating the actual resource goes, I would save an offline version of a known resource and check that the algorithm is gathering/processing the correct data.
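For the offline-copy approach, the test can simply read the saved file from a fixtures directory and run the scraping code against it (the file name and the extract_title function below are invented):

```python
# Sketch: extract_title stands in for the real scraping code under test;
# tests/fixtures/known_page.html is a saved offline copy of a real page.
import pathlib
import re
import unittest


def extract_title(html):
    """Invented example of the scraping logic being tested."""
    match = re.search(r"<title>(.*?)</title>", html, re.IGNORECASE | re.DOTALL)
    return match.group(1).strip() if match else None


FIXTURES = pathlib.Path(__file__).parent / "fixtures"


class ScraperTests(unittest.TestCase):
    def test_extracts_title_from_saved_page(self):
        # The saved copy never changes, so the expected value is stable and
        # the test needs no network connection.
        html = (FIXTURES / "known_page.html").read_text(encoding="utf-8")
        self.assertEqual(extract_title(html), "Known Page Title")
```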
Depending on which language or framework you're using, it may be straightforward to start up a locally-running HTTP server which serves up the resources you want.
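In Python, for example, the standard library alone is enough to start a throwaway server on a random free port for the duration of a test (the JSON body here is just a stand-in for whatever resource you need to serve):

```python
# Sketch: a tiny in-process HTTP server, started on a random free port for
# the duration of the test and shut down afterwards.
import threading
import unittest
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b'{"status": "ok"}'  # stand-in for the real resource
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet


class LocalServerTests(unittest.TestCase):
    def setUp(self):
        # Port 0 asks the OS for any free port.
        self.server = HTTPServer(("127.0.0.1", 0), StubHandler)
        self.port = self.server.server_address[1]
        threading.Thread(target=self.server.serve_forever, daemon=True).start()

    def tearDown(self):
        self.server.shutdown()
        self.server.server_close()

    def test_fetches_resource_from_local_server(self):
        with urlopen(f"http://127.0.0.1:{self.port}/resource") as response:
            self.assertEqual(response.status, 200)
            self.assertIn(b'"ok"', response.read())
```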