Web Service Unit Testing - web-services

This is an interesting question that I am sure a lot of people will benefit from.
A typical web service will return a serialized complex data type, for example:
<orgUnits>
  <orgUnit>
    <name>friendly name</name>
  </orgUnit>
  <orgUnit>
    <name>friendly name</name>
  </orgUnit>
</orgUnits>
The VS2008 unit testing seems to want to assert an exact match of the return object, that is, it passes only if the target and actual objects are identical in both structure and content.
What I would like to do instead is assert only that the structure is fine and no errors exist.
To simplify the matter somewhat: in the web service method, if any error occurs I throw a SOAPException.
1. Is there a way to test based on the return status alone?
2. The best-case scenario would be to compare the document trees of target and actual, and assert that the structure is sound rather than that the content matches.
Thanks in advance :)

I think that this is a duplicate of WSDL Testing
In that answer I suggested SoapUI as a good tool to use

An answer specific to your requirement would be to compare the serialized (XML) versions of the objects instead of the objects themselves.
Approach in your test case (a sketch follows the steps below):
Let's say you are expecting a return like expected.xml from your web service.
1. Invoke the service and get an actual object. Serialize that to an actual.xml.
2. Compare actual.xml and expected.xml using a library like xmldiff, which compares structural and value changes at the XML level.
3. Based on the output of xmldiff, determine whether the web service passed the test.
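As a rough illustration, assuming XMLUnit in Java as the diffing library (the file name and test class are placeholders), the comparison could look something like this; to ignore element content entirely you could plug in a custom DifferenceEvaluator:

import static org.junit.Assert.assertFalse;
import org.junit.Test;
import org.xmlunit.builder.DiffBuilder;
import org.xmlunit.builder.Input;
import org.xmlunit.diff.Diff;

public class OrgUnitServiceTest {
    @Test
    public void responseMatchesExpectedXml() {
        // actualXml would come from serializing the web service's return value
        String actualXml = "<orgUnits><orgUnit><name>friendly name</name></orgUnit></orgUnits>";

        Diff diff = DiffBuilder.compare(Input.fromFile("expected.xml"))
                .withTest(actualXml)
                .ignoreWhitespace()   // tolerate formatting differences
                .build();

        assertFalse(diff.toString(), diff.hasDifferences());
    }
}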

Related

Unit Tests: How to Assert? Asserting results returned or that a method was called on a mock?

I am trying to find out the best way to assert. Should I create an object with what I expect to be returned and check that it is equal to the actual result?
Or should I run the method against a mock to ensure that the method was actually called?
I have seen this done both ways, I wondered if anyone has any best practices for this.
Of course, it's quicker and easier to write a unit test that asserts a method was called on the mock, but quicker and easier is not always the best way - although sometimes it can be.
What does everyone assert on: that a method has been called, or the results that were returned?
Of course it's not best practice to have more than one assert in a unit test, so maybe the answer is to assert both the results and that the method was called? I would then create two unit tests: one to check the results and one to check that the method was called.
But now, thinking about this, maybe that is going too far; if I get a result then I suppose I can assume that my mocked method was called.
In between testing that a method has been called and testing the value that it returns is another possibly more important test: that it was called with the correct parameters.
A very common case these days would be a method you're writing that uses some HTTP library to retrieve data from a REST API. You don't want to actually make HTTP requests in your tests, so you mock the HTTP client's get() method. On the one hand, your mock might just return some canned JSON response (using RSpec in Ruby as an example):
http_mock.stub(:get).and_return('{"result": 5}')
You then test that you can properly parse this and return some correct value based on the response.
You could also test that the HTTP get() method is called, but it's more important to test that it's called with the correct parameters. In the case of an API, your method probably has to format a URL with query parameters and you need to test that it does that correctly. The assertion would look something like (again with RSpec):
http_mock.should_receive(:get).with('http://example.com/some_endpoint?some_param=x')
This test is really a prerequisite to the previous one; simply asserting that get() was called wouldn't confirm much, whereas asserting on the parameters would, for instance, tell you whether you were formatting the URL correctly.

RESTful search. Return actual resources or URIs?

Pretty new to all this REST stuff.
I'm designing my API and am not sure what I'm supposed to return from a search query. I was assuming I would just return all objects that match the query in their entirety, but after reading up a bit about HATEOAS I am thinking I should be returning a list of URIs instead?
I can see that this could help with caching of items, but I'm worried that there will be a lot of overhead generated by the subsequent multiple HTTP requests required to get the actual object info.
Am I misunderstanding? Is it acceptable to return object instances instead of URIs?
I would return a list of resources with links to more details on those resources.
From RESTful Web Services Cookbook (2010) by Subbu Allamaraju:
Design the response of a query as a representation of a collection resource. Set the appropriate expiration caching headers. If the query does not match any resources, return an empty collection.
IMHO it is important to always remember that "pure REST" and "real world REST" are two quite different beasts.
How are you returning the list of URIs from your query in the first place? If you return e.g. application/json, this certainly does not tell the client how it is supposed to interpret the content; therefore, the interaction is already being driven by out-of-band information (the client magically already knows where to look for the data it needs) in conflict with HATEOAS.
So, to answer your question: I find it quite acceptable to return object instances instead of URIs -- but be careful because in the general case this means you are generating all this data without knowing if the client is even going to use it. That's why you will see a hybrid approach quite often: the object instances are not full objects (i.e. a portion of the information the server has is not returned), but they do contain a unique identifier that allows the client to fetch the full representation of selected objects if it chooses to do so.
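A hybrid search response along those lines might look something like this (a purely illustrative sketch; the field names are made up): each item carries a few summary fields plus a link the client can follow to fetch the full representation.

{
  "query": "name:friendly*",
  "results": [
    { "id": 17, "name": "friendly name",   "href": "/orgUnits/17" },
    { "id": 42, "name": "friendly name 2", "href": "/orgUnits/42" }
  ]
}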

How to test save functions in CakePHP Models?

Just writing some tests for my CakePHP application and I am currently trying to work out the best way to test some functions in my models which end up saving data.
Should I just assertTrue or should I pull the data out of the database and assert the expected result against what is in the database?
There are some values that I have to generate programmatically based on the user's input, so I need to be sure they are right. Should I do a find operation within the test and check that its results are what I expect?
Use one of the expects to check what your save method returns. I assume you're not talking about save() itself, because that is already tested within the core. So expect in your test whatever your method should return and see if it passes.
And yes, you can do a find within the test to check that your record was saved with the correct values. So find() it and use expectEqual() to compare the result to whatever you expected.
By the way, it's always good to check the core tests to get an idea of how to test certain things or use mock objects. :)

(Java) How can I pass Schema-validated XML documents as parameters between distributed components (e.g. web services or sockets)?

Here is a description of the scenario; I would also appreciate any comments on the approach used.
The core of my application is a set of web services backed by a P2P database. One service accepts a simple XML-based record (I have designed a generic schema for it). The service processes this data (mainly creating keys based on certain criteria) and passes the original data, along with the created keys, to a listening SocketServer in one of the listening P2P nodes. This key/data pair is routed to the proper node, which stores the data (associated with the key as an ID) in an XML database.
A second service accepts a query document that is structured based on the same schema, but with optional values that would be used for searching and matching from the previously stored ones. So the second service would pass this query (with the proper keys) to the P2P part, get back the results and pass them back to the service client.
E.g. if the original record submitted to the first service was <attr1>value1</attr1> <attr2>value2</attr2> (an attribute list along with some other metadata mandated by the schema), then the second service should retrieve that record if the query received was <attr2>value2</attr2>.
(I could later think about using more complex XPath or XQuery queries, as the underlying XML database allows, instead of exact value matches, but that is not important at this stage. There is also a third service I am working on, but it depends on getting the first two into proper shape first.)
So my questions are:
1) What data type should I use as the parameters of the web services? How to utilize my schema for this usage? I was considering various XML binding frameworks (especially JAXB and SDO) for this but didn't know how to proceed.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The service would still accept documents of the main schema type, but the inner attribute list would be based on a template: say template1 only requires attributes whose values are ints, while template2 requires a float and a string. The current JSP-based prototype creates this template manually, as an XML document assembled by hand (<> tags dispersed in text), and there is no type checking whatsoever, so I thought I could do better!
3) Is it possible to generate a quick web app prototype for simple access to this system (again by using the schema and templates to edit the appropriate XML message structures)? What I am looking for is for the (human) user to choose a template and then just "fill in the blanks" and submit; no need for any fancy look and feel.
4) Can I or how can I also use this XML message type for communicating across sockets?
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
I currently have a rudimentary implementation (from previous developers) that was meant for a subset of my current requirements (I am improving on the services and adding new derived ones), but there was no schema or validation, and the data is passed around as basic strings, giving weak typing and manual parsing that is difficult to update. The reason I want to move to stronger typing is to be able to introduce changes to the data schema that propagate through the whole system easily. Basically, I want the system to be as loosely coupled to the data format/schema as possible; the current prototype is so coupled to the data that I am finding it extremely difficult to change the data without breaking the system.
My initial investigation led me to consider JAXB, but it supports only static typing (it cannot create schemas/types dynamically at runtime, which I want in order to persist them for later usage). So I came across SDO, which has both dynamic and static typing. The problem is just that there is not enough community and/or examples of this approach, so it seems risky: the examples of the Apache Tuscany and EclipseLink implementations are very scarce, I could not find complete examples that are not 5+ years old (like this http://www.ibm.com/developerworks/java/library/j-sdo/), and few of them tackle the XML use case of SDO (most seem to focus on its relational usage).
This is my first time asking for programming help (here and elsewhere) so please bear with me. I searched a lot on the net but could not find anything useful, only pieces here and there that did not add up.
Any comment or hint is really appreciated.
trfndr
EDIT
I forgot one thing: how would the search service get back the results? Since it is opening a client socket connection, there is no way to get back any results synchronously. The current implementation tackles this by having the service client open a listening socket on a random port and putting this contact info in the query document. After the search web service sends the query to the P2P part, it finishes. The P2P part sends the results as a WS call to another service, which sends them back to the service client's socket. I don't like this approach much; is there a more elegant solution?
I lead the EclipseLink JAXB & SDO implementations and represent Oracle on those specifications, so hopefully I can help you out. This question is very similar to a talk I'm giving at JavaOne in September.
1) What data type should I use as the parameters of the web services? How to utilize my schema for this usage? I was considering various XML binding frameworks (especially JAXB and SDO) for this but didn't know how to proceed.
This depends on what web service framework you are using. JAXB is much easier to use with JAX-WS, and while JAXB is still easier to use with JAX-RS, SDO is a possible alternative.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The service would still accept documents of the main schema type, but the inner attribute list would be based on a template: say template1 only requires attributes whose values are ints, while template2 requires a float and a string. The current JSP-based prototype creates this template manually, as an XML document assembled by hand (<> tags dispersed in text), and there is no type checking whatsoever, so I thought I could do better!
I'm not 100% sure what you mean here, but the following may be helpful:
Using @XmlAnyElement to Build a Generic Message
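As a minimal sketch of that idea (the class and field names are illustrative, not taken from the linked article), a generic message type can keep its template-specific content as raw elements:

import java.util.List;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlAnyElement;
import javax.xml.bind.annotation.XmlRootElement;
import org.w3c.dom.Element;

// Generic envelope: the key is a fixed, typed field, while the payload can hold
// any elements, which can be validated separately against a template schema.
@XmlRootElement(name = "message")
@XmlAccessorType(XmlAccessType.FIELD)
public class GenericMessage {

    private String key;

    @XmlAnyElement(lax = true)   // unknown content is kept as DOM Elements
    private List<Element> payload;
}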
3) Is it possible to generate a quick web app prototype for simple access to this system (again by using the schema and templates to edit the appropriate XML message structures)? What I am looking for is for the (human) user to choose a template and then just "fill in the blanks" and submit; no need for any fancy look and feel.
JAX-RS is a nice framework for creating quick prototypes. Below is an example I created:
Part 1 - The Database
Part 2 - Mapping the Database to Objects
Part 3 - Mapping the Objects to XML
Part 4 - The RESTful Service
Part 5 - The Client
4) Can I or how can I also use this XML message type for communicating across sockets?
I prefer frameworks like JAX-RS that communicate over the HTTP protocol.
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
My preference is to use an EJB session bean for the service. If you are interacting with a database then you can leverage the Java Transaction API (JTA) to manage your database transactions.
Part 4 - The RESTful Service
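A rough sketch of that combination (the class and path names here are hypothetical, not taken from the linked example): a stateless session bean exposed as a JAX-RS resource, so each call runs inside a container-managed (JTA) transaction.

import javax.ejb.Stateless;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Stateless
@Path("/records")
public class StoreService {

    @POST
    @Consumes(MediaType.APPLICATION_XML)
    @Produces(MediaType.APPLICATION_XML)
    public String store(String recordXml) {
        // validate against the schema, create the keys, forward to the P2P node...
        return recordXml;   // placeholder: echo the stored record back
    }
}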
SDO
EclipseLink is the SDO 2.1.1 (JSR-235) reference implementation. We have some examples posted below. If you are looking how to do something specific, I will try to post a relevant example.
http://wiki.eclipse.org/EclipseLink/Examples/SDO
JAXB
JAXB is static. It is also more popular than SDO. Recognizing this, in EclipseLink we have implemented a dynamic JAXB feature. It gives you the dynamic aspect of SDO with a JAXB slant.
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/Dynamic
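A minimal sketch of bootstrapping Dynamic MOXy from a schema at runtime (the schema file name and the derived class name "org.example.Record" are placeholders and depend on your schema):

import java.io.FileInputStream;
import javax.xml.bind.Marshaller;
import org.eclipse.persistence.dynamic.DynamicEntity;
import org.eclipse.persistence.jaxb.dynamic.DynamicJAXBContext;
import org.eclipse.persistence.jaxb.dynamic.DynamicJAXBContextFactory;

public class DynamicJaxbDemo {
    public static void main(String[] args) throws Exception {
        // Build the context from the template schema at runtime; no generated classes.
        FileInputStream xsd = new FileInputStream("template1.xsd");
        DynamicJAXBContext ctx =
                DynamicJAXBContextFactory.createContextFromXSD(xsd, null, null, null);

        // Create and populate an instance of a type described only by the schema.
        DynamicEntity record = ctx.newDynamicEntity("org.example.Record");
        record.set("attr1", "value1");

        Marshaller marshaller = ctx.createMarshaller();
        marshaller.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        marshaller.marshal(record, System.out);
    }
}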
EDIT #1
Since you are dealing with JAX-WS and your model is almost entirely dynamic, I think you should skip the JAXB binding altogether. In the following link see the section "Switching Off Data Binding"
http://java.sun.com/developer/technicalArticles/xml/jaxrpcpatterns3/
This will give us the body of the message as a javax.xml.transform.Source object. We will need to process the XML based on the dynamic templates. SDO would be a good choice here. You can constantly add new types to the HelperContext using XML schemas.
helperContext.getXSDHelper().define(schema1, null);
helperContext.getXSDHelper().define(schema2, null);
You will be able to unmarshal the Source from the web service as follows:
XMLDocument doc = helperContext.getXMLHelper().load(source, null, null);
DataObject rootDataObject = doc.getRootObject();
// XPath-style property access into the dynamically typed data graph
String someValue = rootDataObject.getString("attr3/childAttr/anotherChildAttr");
You will also be able to use the XMLHelper to marshal your objects to XML when calling another service.

What's a good design pattern for web method return values?

When coding web services, how do you structure your return values? How do you handle error conditions (expected ones and unexpected ones)? If you are returning something simple like an int, do you just return it, or embed it in a more complex object? Do all of the web methods within one service return an instance of a single class, or do you create a custom return value class for each method?
I like the Request/Response object pattern, where you encapsulate your arguments into a single [Operation]Request class, which has simple public properties on it.
Something like AddCustomerRequest, which would return AddCustomerResponse.
The response can include information on the success/failure of the operation, any messages that might be used by the UI, possibly the ID of the customer that was added, for example.
Another good pattern is to make these all derive from a simple IMessage interface, where your general end-point is something like Process(params IMessage[] messages)... this way you can pass in multiple operations in the same web request.
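Roughly sketched (in Java here, since the original example is C#-flavoured; all the names are illustrative):

// Marker interface so that several operations can travel in one web request.
public interface IMessage { }

class AddCustomerRequest implements IMessage {
    public String name;
    public String email;
}

class AddCustomerResponse implements IMessage {
    public boolean success;
    public String message;      // optional text the UI can show
    public Long customerId;     // ID of the customer that was added, if any
}

interface CustomerService {
    AddCustomerResponse addCustomer(AddCustomerRequest request);

    // Generic end-point in the Process(params IMessage[]) style.
    IMessage[] process(IMessage... messages);
}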
+1 for Ben's answer.
In addition, I suggest considering that the generic response allow for multiple error/warning items, to allow the reply to be as comprehensive and actionable as possible. (Would you want to use a compiler that stopped after the first error message, or one that told you as much as possible?)
If you're using SOAP web services then SOAP faults are the standard way to return error details, where the fault messages can return whatever additional detail you like.
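For example (a hypothetical JAX-WS/SAAJ sketch; the fault detail element and names are made up), a service can attach arbitrary detail to the fault it raises:

import javax.xml.namespace.QName;
import javax.xml.soap.SOAPException;
import javax.xml.soap.SOAPFactory;
import javax.xml.soap.SOAPFault;
import javax.xml.ws.soap.SOAPFaultException;

public class OrderFaults {

    // Builds a client fault whose detail element carries the extra information.
    static SOAPFaultException invalidOrder(String reason) {
        try {
            SOAPFault fault = SOAPFactory.newInstance().createFault(
                    "Invalid order document",
                    new QName("http://schemas.xmlsoap.org/soap/envelope/", "Client"));
            fault.addDetail()
                 .addChildElement(new QName("http://example.com/faults", "reason"))
                 .addTextNode(reason);
            return new SOAPFaultException(fault);
        } catch (SOAPException e) {
            throw new RuntimeException(e);
        }
    }
}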
SOAP faults are standard practice where the calling application is a SOAP client. There are cases, such as a COM client using XMLHTTP, where the SOAP is parsed as XML and SOAP faults cannot be easily handled. Can't vote yet but another +1 for #Ben Scheirman.