I have a bunch of objects that are not annotated for JAX-WS, along with an XSD that describes them. I am writing a web service interface for them, but I do not wish to generate all of those JAXB objects from the XSD; I would rather use DynamicJAXBContext to create and bind objects at run time. My motivation is to keep the static memory footprint down and to reuse the objects that are already available. About 100+ objects are involved in the XSD, and I don't want to duplicate that object footprint just for JAXB.
Options:
1) Use a logical handler to intercept the JAX-WS message and use a DynamicJAXBContext to marshal and unmarshal the payload (a sketch follows below).
2) Use the EclipseLink MOXy libraries to wrap your JAX-WS service.
I am open to ideas. I can certainly try these out; I just wanted to check whether I am heading towards a stone wall. Hell, if there is no response, then I am bound to crash into it and learn from it. Any input is appreciated.
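For option 1, here is a minimal sketch of what the handler could look like, assuming EclipseLink MOXy is on the classpath; the schema name entities.xsd and the property name "id" are placeholders for whatever your XSD actually defines:

import java.io.InputStream;
import javax.xml.bind.JAXBElement;
import javax.xml.bind.JAXBException;
import javax.xml.transform.Source;
import javax.xml.ws.handler.LogicalHandler;
import javax.xml.ws.handler.LogicalMessageContext;
import javax.xml.ws.handler.MessageContext;
import org.eclipse.persistence.dynamic.DynamicEntity;
import org.eclipse.persistence.jaxb.dynamic.DynamicJAXBContext;
import org.eclipse.persistence.jaxb.dynamic.DynamicJAXBContextFactory;

public class DynamicPayloadHandler implements LogicalHandler<LogicalMessageContext> {

    private final DynamicJAXBContext context;

    public DynamicPayloadHandler() throws JAXBException {
        // Build the dynamic context once from the existing XSD (placeholder name)
        InputStream xsd = getClass().getResourceAsStream("/entities.xsd");
        context = DynamicJAXBContextFactory.createContextFromXSD(xsd, null, null, null);
    }

    @Override
    public boolean handleMessage(LogicalMessageContext ctx) {
        boolean outbound = (Boolean) ctx.get(MessageContext.MESSAGE_OUTBOUND_PROPERTY);
        if (!outbound) {
            try {
                // Unmarshal the payload into a DynamicEntity instead of a generated class
                Source payload = ctx.getMessage().getPayload();
                Object result = context.createUnmarshaller().unmarshal(payload);
                if (result instanceof JAXBElement) {
                    result = ((JAXBElement<?>) result).getValue();
                }
                DynamicEntity entity = (DynamicEntity) result;
                Object id = entity.get("id"); // properties are accessed by name
            } catch (JAXBException e) {
                throw new RuntimeException(e);
            }
        }
        return true;
    }

    @Override
    public boolean handleFault(LogicalMessageContext ctx) { return true; }

    @Override
    public void close(MessageContext ctx) { }
}

Whether JAX-WS then lets you hand a DynamicEntity back as the payload depends on how the endpoint is declared (a Provider<Source> endpoint is the easiest fit), so treat this purely as a starting point.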
I have a set of soap webservices that are tightly coupled to an application within the same architecture but I need it to also be an API for other applications to hook into.
At the moment, the services use a parameter (and method) structure like this
Entity GetEntity(int entityId)
Entity GetEntityByName(string entityName)
.... etc.
In the case of creates I use:
void CreateEntity(Entity entity)
I am wondering though would it be better to do it like this:
EntityResponse GetEntity(EntityRequest requestObj) .....
and in the requestObj I have id and entityName, and depending on what the user supplies, I can perform either function.
and for the create it would be:
CreateEntityResponse CreateEntity(CreateEntityRequest requestObj).
My thinking is that by doing it like this, the API can change internally (add new parameters, etc.) without immediately breaking any integration that has already been done.
I think there are several design principles that you may want to consider:
1) Database Entity vs Data Transport Object DTO
It looks like those entities come directly from a database mapping? Exposing your entities directly as the API basically gives you a fancy SQL query browser. It's not necessarily wrong, but you will achieve better decoupling if you expose DTOs in the API.
The DTOs can be more future-proof and more generic than the entities.
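A rough sketch of the difference, with made-up names, might look like this:

import javax.persistence.Entity;
import javax.persistence.Id;

// Database entity: mirrors the table, including columns the API should not leak
@Entity
public class CustomerEntity {
    @Id
    private Long id;
    private String name;
    private String internalCreditRating; // internal detail
    // getters/setters omitted
}

// DTO exposed by the API: only what clients need, free to evolve separately
public class CustomerDto {
    private Long id;
    private String displayName;
    // getters/setters omitted
}

The mapping between the two is the place where the API can absorb internal changes without breaking clients.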
2) SOAP vs REST
If you want to achieve a maximum of future-proofing, you might want to consider REST. With REST you have more options to extend the API later.
For instance, if you look at APIs like Facebook's, you purely pass in parameters and then receive in return a key-value map of the parameters you passed in. So it is very generic.
In SOAP you would always end up defining all of those eventual attributes upfront. You basically need to introduce placeholders et cetera.
There is certainly a reason why SOAP is a good contract protocol, and it has advantages: code-generating tools are more mature, and lots more. But with REST you can be even more flexible, while losing some of the goodies you had in SOAP.
This is also a very good read:
https://www.mulesoft.com/lp/whitepaper/api/secrets-great-api
or generally the RAML design spec from Mule is a very powerful tool when it comes to designing REST APIs.
I've been struggling with understanding a few points I keep reading regarding RESTful services. I'm hoping someone can help clarify.
1a) There seems to be a general aversion to generated code when talking about RESTful services.
1b) The argument that if you use a WADL to generate a client for a RESTful service, then when the service changes, so does your client code.
Why I don't get it: Whether you are referencing a WADL and using generated code or you have manually extracted data from a RESTful response and mapped them to your UI (or whatever you're doing with them) if something changes in the underlying service it seems just as likely that the code will break in both cases. For instance, if the data returned changes from FirstName and LastName to FullName, in both instances you will have to update your code to grab the new field and perhaps handle it differently.
2) The argument that RESTful services don't need a WADL because the return types should be well-known MIME types and you should already know how to handle them.
Why I don't get it: Is the expectation that for every "type" of data a service returns there will be a unique MIME type in existence? If this is the case, does that mean the consumer of the RESTful services is expected to read the RFC to determine the structure of the returned data, how to use each field, etc.?
I've done a lot of reading to try to figure this out for myself so I hope someone can provide concrete examples and real-world scenarios.
REST can be very subtle. I've also done lots of reading on it, and every once in a while I went back and read Chapter 5 of Fielding's dissertation, each time finding more insight. It was as clear as mud the first time (although some things made sense), but it only got better once I tried to apply the principles and use the building blocks.
So, based on my current understanding let's give it a go:
Why do RESTafarians not like code generation?
The short answer: if you make use of hypermedia (+links), there is no need.
Context: explicitly defining a contract (WADL) between client and server does not reduce coupling enough: if you change the server, the client breaks and you need to regenerate the code. (IMHO even automating that is just a patch over the underlying coupling issue.)
REST helps you decouple on different levels. Hypermedia discoverability is one of the good ones to start with. See also the related concept HATEOAS.
We let the client “discover” what can be done from the resource we are operating on instead of defining a contract before. We load the resource, check for “named links” and then follow those links or fill in forms (or links to forms) to update the resource. The server acts as a guide to the client via the options it proposes based on state. (Think business process / workflow / behavior). If we use a contract we need to know this "out of band" information and update the contract on change.
If we use hypermedia with links there is no need for a "separate contract". Everything is included within the hypermedia – why design a separate document? Even URI templates are out-of-band information, but if kept simple they can work (Amazon S3 does this).
Yes, we still need common ground to stand on when transferring representations (hypermedia), so we define our own media types or use widely accepted ones such as Atom or microformats. Thus, with the constraints of the basic building blocks (links + forms + data – hypermedia) we reduce coupling by keeping out-of-band information to a minimum.
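As a made-up illustration, an order representation could carry the next allowed actions as named links, so the client discovers them instead of reading them from a contract:

<order xmlns="http://example.com/orders" id="42" status="pending">
  <total currency="EUR">99.50</total>
  <link rel="self"    href="http://example.com/orders/42"/>
  <link rel="payment" href="http://example.com/orders/42/payment"/>
  <link rel="cancel"  href="http://example.com/orders/42"/>
</order>

If the business rules change so that a pending order can no longer be cancelled, the cancel link simply disappears; no contract has to be redistributed.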
At first it seems that going for hypermedia does not change the impact of change :) But there are subtle differences. For one, if I have a WADL I need to update another document and deploy/distribute it. With pure hypermedia there is no such impact, since everything is embedded. (Imagine changes rippling through a complex interweave of systems.) As per your example, having FirstName + LastName and adding FullName does not really impact the clients, but removing First + Last and replacing them with FullName does, even in hypermedia.
As a side note: The REST uniform interface (verb constraints - GET, PUT, POST, DELETE + other verbs) decouples implementation from services.
Maybe I'm totally wrong, but another possibility might be a "psychological kickback" against code generation: WADL makes one think of the WSDL (contract) part of "traditional web services" (WSDL + SOAP) / RPC, which goes against REST. In REST, state is transferred via hypermedia, not via RPC method calls that update state on the server.
Disclaimer: I've not read the referenced article in detail, but it does give some great points.
I have worked on API projects for quite a while.
To answer your first question.
Yes, if the service's return values change (e.g., First Name and Last Name become Full Name), your code might break. You will no longer get the first name and last name.
You have to understand that the WADL is an agreement. If it has to change, then the client needs to be notified. To avoid breaking client code, we release a new version of the API.
Version 1.0 will keep First Name and Last Name without breaking your code. We then release version 1.1, which has the change to Full Name.
So, in short: the WADL is there to stay. As long as you use that version of the API, your code will not break. If you want to get the full name, then you have to move to the new version. With the many code generation plugins on the market, generating the code should not be an issue.
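For example (hypothetical URLs and payloads), both versions can live side by side while clients migrate:

GET /api/v1.0/person/42   ->   { "firstName": "John", "lastName": "Doe" }
GET /api/v1.1/person/42   ->   { "fullName": "John Doe" }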
To answer your next question, on why a WADL is not needed and how you get to know the MIME types:
A WADL is for code generation and serves as a contract. With it, you can use JAXB or any mapping framework to convert the returned XML or JSON into generated bean objects.
Without a WADL, you don't need to inspect every element to determine its type. For JSON, for example, you can easily do this:
var obj = jQuery.parseJSON('{"name":"John"}');
alert( obj.name === "John" );
Let me know if you have any questions.
Okay, this might be a pretty lame and basic question, but it has been stuck in my head since I never had the chance to work on web services.
We can get the same text-based response (XML, JSON, etc.) from our server with a very basic/simple implementation (say, a servlet), so why does someone have to develop a web service?
What is the exceptional thing a web service gives you over a simple HTTP response?
At a basic level, you are quite correct, from a low level point of view, it's just text (XML) on a socket.
For simple web services, a servlet is adequate (I'm writing one of these as we speak).
When talking about something like SOAP and WS-* web services, however, there is a lot of boilerplate processing, and features available from the standards that web service toolkits expose as higher-level operations.
A simple example is data marshaling. If you treat the payload purely as XML, then your service basically has to process the XML by hand -- parse it, evaluate it, populate your internal models, etc.
Contrast this to something like this from Java EE:
@WebService
public class PersonService {
    public Person getPerson(String personId) {
        Person p;
        ...
        return p;
    }
}
The web service stack will convert your Person object into a SOAP-compliant XML blob. It will also produce a WSDL that you can use to generate client code (on many platforms: .NET, PHP, etc.) that calls the web service.
In the end, your client and server have only a few lines of code, while the frameworks do all of the grunt work of parsing, marshaling, and publishing for you.
So, the value of the WS stack is that it handles much of the bureaucracy of writing WS-* compliant web services.
It's no panacea, but for many modern implementations, SOAP <-> SOAP remote processing can be a mostly cross-platform, drag-and-drop affair.
It depends. If your web service needs to answer a simple yes/no question like "does this username exist?", then returning yes, no, 0, 1, etc. may be enough. If you have one that returns all the faculty attributes, XML or JSON may be appropriate because of their structured nature. They are a little less prone to parsing errors than plain text.
I have been playing around with Apache CXF, in particular the various data bindings it supports: JAXB (the default), MTOM, Aegis and XMLBeans. Since all of these are supported, I suppose each has its merits. I came up with these:
Obviously, MTOM is to be preferred where large attachments are involved.
JAXB depends on annotations, so it is less suited when modification of classes is restricted.
Aegis has no wsdl2java tool, so it is less suited for "contract-first" development, i.e. start with a WSDL and generate your Java code from that.
It appears that Aegis provides slightly more control over the mapping between Java classes and XML through its declarative syntax in Class.aegis.xml files. On the other hand, I couldn't come up with any scenarios where JAXB did not do the trick.
I found this question juxtaposing JAXB and XMLBeans, but it doesn't give a comprehensive overview:
JAXB vs Apache XMLBeans
Besides these naive, a priori considerations, do you have any blood-and-guts experiences that would support the use of any other binding besides JAXB? I'm asking from a CXF point of view, but if any other options come to mind (e.g. Castor) please don't hesitate to elaborate.
If starting from scratch to create a WSDL first web service, then I definitely would recommend sticking with JAXB 95% of the time (maybe even higher). It's definitely the best tested databinding in CXF and performs quite well.
Where the other databindings come in are usually for one of two cases:
1) Java-first use cases where you have something already written in Java that you want to expose as a web service with little to no modification to the code. Aegis has its strengths here, as it's designed to handle a wider range of things than JAXB. However, if you CAN modify the code, adding JAXB annotations usually isn't that hard (see the sketch after this list). If you have mostly normal "beans", it's not a big deal.
2) Existing applications that use a particular mapping. If you have existing applications that expect XMLBeans beans (or SDO beans if using 2.3-SNAPSHOT of CXF, or JiBX beans if following the GSoC project), then using the other databindings can help by removing any mapping needed from JAXB to those object models.
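For case 1, the annotations on a typical bean are fairly light; a minimal sketch with a hypothetical Person class looks like this:

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "person")
public class Person {

    private String firstName;

    // Only needed if the XML name should differ from the property name
    @XmlElement(name = "first-name")
    public String getFirstName() { return firstName; }

    public void setFirstName(String firstName) { this.firstName = firstName; }
}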
Hope that helps a little.
Remember that JAXB is a specification, so there are multiple implementations: Metro (the reference implementation), MOXy (I'm the tech lead), etc.
JAXB can be used starting from Java classes or from an XML schema. If you have classes that cannot be modified, individual JAXB implementations offer extensions to handle that. See MOXy's externalized metadata:
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/EclipseLink-OXM.XML
JAXB was designed to work with MTOM attachments; see the attachment marshaller/unmarshaller.
MOXy has XPath-based mappings, which offer full control over your object-to-XML binding; see:
http://bdoughan.blogspot.com/2010/07/xpath-based-mapping.html
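A small sketch in the spirit of that post, with hypothetical class and element names:

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;
import org.eclipse.persistence.oxm.annotations.XmlPath;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Customer {

    // Maps this field to <contact-info><phone-number>...</phone-number></contact-info>
    @XmlPath("contact-info/phone-number/text()")
    private String phoneNumber;
}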
Instead of returning a common string, is there a way to return classic objects?
If not: what are the best practices? Do you transpose your object to xml and rebuild the object on the other side? What are the other possibilities?
As mentioned, you can do this in .NET via serialization. By default all native types are serializable, so this happens automagically for you.
However, if you have complex types, you need to mark them with the [Serializable] attribute. The same goes for complex types used as properties.
So for example you need to have:
[Serializable]
public class MyClass
{
    public string MyString { get; set; }

    // the property's type must itself be marked serializable
    public MyOtherClass MyOtherClassProperty { get; set; }
}

[Serializable]
public class MyOtherClass
{
    public string MyOtherString { get; set; }
}
If the object can be serialised to XML and can be described in WSDL then yes it is possible to return objects from a webservice.
Yes: in .NET they call this serialization, where objects are serialized into XML and then reconstructed by the consuming service back into its original object type or a surrogate with the same data structure.
Where possible, I transpose the objects into XML - this means that the Web Service is more portable - I can then access the service in whatever language, I just need to create the parser/object transposer in that language.
Because we have WSDL files describing the service, this is almost automated in some systems.
(For example, we have a server written in pure python which is replacing a server written in C, a client written in C++/gSOAP, and a client written in Cocoa/Objective-C. We use soapUI as a testing framework, which is written in Java).
It is possible to return objects from a web service using XML. But web services are supposed to be platform and operating system agnostic. Serializing an object simply allows you to store and retrieve it from a byte stream, such as a file. For instance, you could serialize a Java object, convert that binary stream (perhaps via Base64 encoding into a CDATA field), and transfer it to the service's client.
But the client would only be able to restore that object if it were Java-based. Moreover, a deep copy is required to serialize an object and have it restored exactly. Deep copies can be expensive.
Your best route is to create an XML schema that represents the document and create an instance of that schema with the object specifics.
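For instance, a minimal schema for a hypothetical person document could look like this, and each platform then binds it to its own native types:

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/person"
           xmlns="http://example.com/person"
           elementFormDefault="qualified">
  <xs:element name="person">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="firstName" type="xs:string"/>
        <xs:element name="lastName" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>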
.NET automatically does this with objects that are serializable. I'm pretty sure Java works the same way.
Here is an article that talks about object serialization in .NET:
http://www.codeguru.com/Csharp/Csharp/cs_syntax/serialization/article.php/c7201
@Brian: I don't know how things work in Java, but in .NET objects get serialized down to XML, not Base64 strings. The web service publishes a WSDL file that contains the method and object definitions required for your web service.
I would hope that nobody creates web services that simply return a Base64 string.
Daniel Auger: "As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place."
lomax: "I have to disagree with this as it's a somewhat narrow comment. Using a webservice that can serialize domain objects to XML means that it makes it easy for clients that work with the same domain objects, but it also means that those clients are restricted to using that particular web service you've exposed and it also works in reverse by allowing other clients to have no knowledge of your domain objects but still interact with your service via XML."
@Lomax: You've described two scenarios. Scenario 1: the client rehydrates the XML message back into the exact same domain object. I consider this to be "returning an object". In my experience this is a bad choice, and I'll explain why below. Scenario 2: the client rehydrates the XML message into something other than the exact same domain object. I am 100% behind this; however, I don't consider it to be returning a domain object. It's really sending a message or DTO.
Now let me explain why true/pure/non-DTO object serialization across a web service is usually a bad idea. An assertion: in order to do this in the first place, you either have to own both the client and the service, or provide the client with a library so that they can rehydrate the object back into its true type. The problem: this domain object, as a type, now exists in and belongs to two semi-related domains. Over time, behaviors may need to be added in one domain that make no sense in the other, and this leads to pollution and potentially painful problems.
I usually default to scenario 2. I only use scenario 1 when there is an overwhelming reason to do so.
I apologize for being so terse with my initial reply. I hope this clears things up to a degree as far as what my opinion is. Lomax, it would seem we half agree ;).
JSON is a pretty standard way to pass objects around the web (it is a subset of JavaScript). Many languages feature a library that will convert JSON into a native object - see, for example, simplejson in Python.
For more JSON libraries, see the JSON web page.
As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place.
"As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place."
I have to disagree with this as it's a somewhat narrow comment. Using a webservice that can serialize domain objects to XML means that it makes it easy for clients that work with the same domain objects, but it also means that those clients are restricted to using that particular web service you've exposed and it also works in reverse by allowing other clients to have no knowledge of your domain objects but still interact with your service via XML.