Suppose I have several services represented by classes ServiceA, ServiceB, ServiceC etc.
These services have configuration and other data stored in SQL, for example in a ServiceConfiguration table.
Are there any best practices for linking ServiceA to its corresponding configuration and data in SQL, other than hard-coding the Id in the class?
And any thoughts on testability?
I can verify that, for example, ServiceA returns true for IsValidServiceForId(1) -- but if its configuration Id were to change in SQL for some unknown reason, the test would still pass.
I could add to the test an assertion verifying that I can get a configuration record for a service matching the name "Service A", but that would introduce an extra dependency into my test class: I would then also be testing the ServiceConfigurationRepository (for example).
Edit
I thought this would be a platform-agnostic question, but for info I'm using C#, .NET 3.5, NUnit for testing, and NHibernate with SQL Server.
If these services are directly hitting the DB for their config, and you need to test them, abstract away the configuration.
When testing, you give your services a mocked configuration.
When actually running the app, you give your services the real configuration, fetched from the DB.
Sorry for being so vague, but without any code, it's hard not to be.
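To make it less vague, here is a minimal sketch of that abstraction (in Go for brevity; the same shape works in C# with an interface injected through the constructor, and the ConfigStore, ServiceConfig, and fakeConfigStore names are hypothetical, not from any framework):

package services

// ConfigStore abstracts where service configuration comes from,
// so the services never hit the database directly.
type ConfigStore interface {
    ConfigFor(serviceName string) (ServiceConfig, error)
}

type ServiceConfig struct {
    ID      int
    Name    string
    Enabled bool
}

// ServiceA depends only on the abstraction, not on SQL.
type ServiceA struct {
    config ServiceConfig
}

func NewServiceA(store ConfigStore) (*ServiceA, error) {
    cfg, err := store.ConfigFor("Service A")
    if err != nil {
        return nil, err
    }
    return &ServiceA{config: cfg}, nil
}

// fakeConfigStore is what tests pass in: no SQL, hardcoded values.
type fakeConfigStore struct{ cfg ServiceConfig }

func (f fakeConfigStore) ConfigFor(name string) (ServiceConfig, error) {
    return f.cfg, nil
}

A test hands ServiceA a fakeConfigStore with a known Id, so the assertion exercises the service's own logic; only the SQL-backed implementation of ConfigStore needs an integration test against the real ServiceConfigurationRepository.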
Related
I'm contributing to an open source project that in this case uses some Azure cloud functionality, but the same general problem applies to any cloud API. I want to write tests for my code, but the results of the tests depend on something having happened in the cloud service, and for that to happen I need to supply credentials to the service. In a private project I could simply add my cloud credentials to the testing environment, but for public/open source projects I can't do this. I can test locally easily enough, but this project uses CI (as do many OSS projects), so that isn't really an option.
One approach seems to be using mocks or something similar, but that doesn't actually seem to test that things are happening as they should, and strikes me as a mostly pointless way to achieve 100% coverage.
Are there any 'virtual test cloud' environments that can be spun up to create an identical interface to the cloud service in question, but only for testing? How do these deal with side effects (the code in question creates a DNS entry, and ideally would test for the actual existence of a DNS entry using the system's resolver rather than another cloud call)?
How do people do this kind of testing?
I start with a spike solution to learn how to pass the required credentials. With this knowledge, I can TDD an acceptance test to call a simple API and get a "success" result.
I exclude the credentials from my repository. Instead, I include a template file with instructions.
From there, I drop down to unit tests to TDD sending requests and receiving responses. I don't test actual communication with any service. Instead:
Test the contents of requests.
Create responses and test how they're handled. This makes it really easy to test all sorts of error conditions.
Once I've TDD'd credentials, requests, and responses, I use what I call a spike test to confirm that everything is in fact working. Basically, this is non-automated confirmation using anything I can quickly hack together.
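To make the request/response part concrete, here is a minimal sketch in Go using the standard net/http/httptest package. The createDNSEntry client and its JSON body are hypothetical stand-ins for whatever cloud API the project wraps:

package cloud

import (
    "fmt"
    "io"
    "net/http"
    "net/http/httptest"
    "strings"
    "testing"
)

// createDNSEntry is a hypothetical thin client for the cloud API:
// it POSTs a DNS record to the service at baseURL.
func createDNSEntry(baseURL, credential, name string) error {
    req, err := http.NewRequest(http.MethodPost, baseURL+"/dns",
        strings.NewReader(fmt.Sprintf(`{"name":%q}`, name)))
    if err != nil {
        return err
    }
    req.Header.Set("Authorization", "Bearer "+credential)
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    if resp.StatusCode != http.StatusCreated {
        return fmt.Errorf("unexpected status %d", resp.StatusCode)
    }
    return nil
}

// The test inspects the contents of the outgoing request and feeds back
// a canned response, with no real credentials or network access.
func TestCreateDNSEntry(t *testing.T) {
    var gotAuth, gotBody string
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        gotAuth = r.Header.Get("Authorization")
        b, _ := io.ReadAll(r.Body)
        gotBody = string(b)
        w.WriteHeader(http.StatusCreated)
    }))
    defer srv.Close()

    if err := createDNSEntry(srv.URL, "fake-credential", "www.example.org"); err != nil {
        t.Fatalf("createDNSEntry: %v", err)
    }
    if gotAuth != "Bearer fake-credential" {
        t.Errorf("Authorization = %q, want the bearer token", gotAuth)
    }
    if !strings.Contains(gotBody, "www.example.org") {
        t.Errorf("request body %q does not mention the record name", gotBody)
    }
}

The same fake server can return error status codes or malformed bodies, which makes the error-condition tests just as cheap as the happy path.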
I am using the gozk library to interface my application with a production ZooKeeper server. I'd like to test that the application creates the correct nodes, that they contain the correct content in various cases, and that the DataWatch and NodeWatch are set properly:
i.e. that the application does exactly what it should based on the node and data updates.
Can I have a mock ZooKeeper server created and destroyed during unit tests only, with the ability to artificially create new nodes and set node contents?
Is there an alternative to manually creating a ZooKeeper server and using it?
A solution already exists for Java.
I would recommend putting your code that calls ZooKeeper behind an interface.
Then, during testing, you substitute a 'mockZookeeperConn' object that returns values as though it were really connecting to the server, but with the return values hardcoded.
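A rough sketch of that shape (the ZkConn interface and its two methods are hypothetical; gozk's real API is larger, but the substitution pattern is the same):

package app

// ZkConn is the narrow slice of ZooKeeper behaviour the application uses.
// Production code wraps a real gozk connection behind it; tests use the mock.
type ZkConn interface {
    Create(path string, data []byte) error
    Get(path string) ([]byte, error)
}

// mockZookeeperConn records writes and serves hardcoded reads, so tests
// can assert which nodes were created and with what content.
type mockZookeeperConn struct {
    nodes map[string][]byte
}

func newMockZookeeperConn() *mockZookeeperConn {
    return &mockZookeeperConn{nodes: make(map[string][]byte)}
}

func (m *mockZookeeperConn) Create(path string, data []byte) error {
    m.nodes[path] = data
    return nil
}

func (m *mockZookeeperConn) Get(path string) ([]byte, error) {
    return m.nodes[path], nil
}

Application code accepts a ZkConn; a test passes in the mock, runs the code under test, and then inspects m.nodes to verify the right paths were created with the right contents.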
@Ben Echols's answer is very good.
Going further, you can try "build constraints" (build tags).
You can configure different build tags on real-zk and mock-zk code.
For example, we configure "product" for real-zk code and "mock" for mock-zk code.
Thus there are two ways to run the unit tests:
go test -tags mock when no ZooKeeper environment is available.
go test -tags product when a real ZooKeeper environment is available.
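Building on the ZkConn sketch above, the two implementations might live in files selected by build tags (file and function names are hypothetical; //go:build is the modern spelling of the older // +build comment):

// File conn_real.go, compiled only with -tags product:

//go:build product

package app

import "errors"

func dial(servers string) (ZkConn, error) {
    // a real implementation would open a gozk connection here
    return nil, errors.New("real ZooKeeper connection omitted in this sketch")
}

// File conn_mock.go, compiled only with -tags mock:

//go:build mock

package app

func dial(servers string) (ZkConn, error) {
    return newMockZookeeperConn(), nil
}

Code that calls dial compiles against exactly one implementation, chosen by the tag passed to go test.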
We have a SOAP web service that is in production and contains a large number of methods. As part of a project we are adding new methods to that web service; note we are not amending the existing methods.
What I am trying to determine is whether I need to regression test the existing methods to verify they have not been impacted by the addition of the new methods.
Yes. If you change your web service, the only proper way to make sure none of the changes have impacted existing operations is a regression test.
If you use a testing tool like SOAPUI you can automate this for every build you make. (Regression) testing should be a standard step after any new build to ensure software quality.
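For example, the testrunner script that ships with SOAPUI (the free version included) can run a suite headlessly, so a CI build step might look something like this (the project file and suite names are placeholders):

testrunner.sh -s "RegressionSuite" -f ./reports MyService-soapui-project.xml

A failing assertion fails the run, so any regression introduced by the new methods surfaces on the next build.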
A bit of background:
I have a large number of SOAPUI test cases which test web services as well as database transactions. This worked fine when there were one or two environments, as I would just clone the original test suite, update the database connections, and then update the endpoints to point to the new environment. A few changes here and there meant I would just re-clone the test cases which had been updated for other test suites.
However, I now have 6 different environments which require these tests to be run, and as anticipated, I have been adding more test cases as well as changing original ones. This causes issues when running older test suites, as they need to be re-cloned.
I was wondering whether there is a better way to organise this. Ideally I would want one test suite and be able to switch between database connections and web service endpoints, but I have no idea where to start with this. Any help or guidance would be much appreciated.
I only have access to the Free version of SOAPUI.
Here is how I would go about achieving this.
You have an original test suite which contains all the tests, configured to run against one server. As you mentioned, you cloned the suite for a second database schema and changed the connection details, and have now realised there are more databases that need to be tested against.
Keep one project with the required test suite. Wherever the database server details are provided, replace the actual values with property expansions for the connection details.
In the JDBC step, change the connection string from:
jdbc:oracle:thin:scott/tiger@//myhost:1521/myservicename
to:
jdbc:oracle:thin:${#Project#DB_USER}/${#Project#DB_PASSWORD}@//${#Project#DB_HOST}:${#Project#DB_PORT}/${#Project#DB_SERVICE}
You can define the following properties in a file and name it accordingly. Say the following properties relate to the database hosted on host1; name the file host1.properties. When you want to run the tests against the host1 database, import this file as project-level custom properties.
DB_USER=username
DB_PASSWORD=password
DB_HOST=host1
DB_PORT=1521
DB_SERVICE=servicename
Similarly, you can keep as many property files as you want and import the respective file before running against the respective database server.
You can use property files not only for databases, but also for web services hosted on different servers such as staging, QA, and production, without hand-editing the endpoints. All you need to do is use a property expansion in the endpoint.
Update based on the comment
When you want to use the same approach for web services, go to the service interface -> Service Endpoints tab, add a new endpoint ${#Project#END_POINT}/context/path, and click the Assign button. Select All requests and Test Requests from the drop-down. You may also remove the other endpoints.
Add a new property END_POINT to your property file with a value such as http://host:port. This also gives you the advantage of being able to run the tests against https, say https://host:port.
And if you have multiple services/wsdls hosted on different servers, you can use a unique property name for each service.
Hope this helps.
We have a Web Application Project (dozens, actually) that has a testing project attached to it. In the testing project I have a simple unit test which exercises a couple of methods.
Running locally, the unit test executes and works.
However, when our TFS Build server attempts to execute the test, it fails with an error about an invalid path for the AspNetDevelopmentServerHost attribute. Other team members can execute it just fine.
The problem is that the root of my TFS workspace is set to c:\projects\, while one of the team members has theirs set to c:\tfs2008\. The TFS Build server, on the other hand, sets the pathToWebRoot variable to "c:\blahblah\Release_PublishedWebsites...", which results in a bad path.
Due to the number of projects we have, I can't have everyone reset an environment variable every time they switch projects.
So, what are the best practices with regard to unit testing web projects in a team environment? The MSDN article was, in true Microsoft fashion, less than helpful.
You should specify the string %pathtowebroot%\WebSiteName in the pathToWebApp parameter of the AspNetDevelopmentServer or AspNetDevelopmentServerHost attribute.
We use:
<TestMethod(), _
HostType("ASP.NET"), _
AspNetDevelopmentServerHost("$(SolutionDir)\MyWebProject", "/"), _
UrlToTest("http://localhost:44444/")> _
Public Sub Test()
    AssertStuff()
End Sub
The $(SolutionDir) means it works fine in everyone's environment, and all the tests can be checked in. We do have to change it each time we automatically create a new test, but we're writing the test anyway, so it's not too much of a hardship.
It's possible to set environment variables during the build to alter the path and still have it work in a local environment.
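For instance, a build step could pin the variable before invoking the tests (the paths here are placeholders):

rem Make the web root explicit so %pathtowebroot% resolves the same everywhere
set PathToWebRoot=C:\projects\MySolution
mstest /testcontainer:MyWebProject.Tests\bin\Release\MyWebProject.Tests.dll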
How to: Parameterize the URL for a Web Performance Tests Web Server
Testing Web Sites and Web Services in a Team Environment