We would like to create a contract-first web service with WSO2 BRS based on a given XSD. The object model of the web service generated by WSO2 BRS must remain conformant to this XSD. The strategy we've tried so far is to generate Java libraries based on the XSD and then have the BRS reason over those Java libraries in the BRS project.
The problem is that the object model of the resulting web service exposed by the BRS no longer conforms to that of the original XSD. It seems that something goes wrong in the "translation" XSD -> Java object -> XML. The Java object generators I've tried so far are JAXB and wsdl2java.
What do we need to do in order to create a "true" contract-first web service with WSO2 BRS?
Best regards,
Georg and Philip
When creating a rules service to be deployed on WSO2 BRS, that service should contain three components: (1) a JAR containing the Java classes for the facts and results, (2) a Drools file that defines the rules for the use case (.drl), and (3) a rule service configuration (.rsl). So if you want to follow a contract-first approach, the only part you can drive from the XSD is the Java fact classes; the Drools file and the service configuration still have to be part of the final rules service. In other words, the approach you have tried is essentially the only way. If you are having issues with the translation, it may be easier to let an IDE like Eclipse do it for you: http://theopentutorials.com/examples/java/jaxb/generate-java-class-from-xml-schema-in-eclipse-ide/
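For illustration, here is a rough sketch of the kind of fact class JAXB (xjc) would generate from the schema; the Customer element and its fields are hypothetical stand-ins for whatever your XSD actually defines. It is the @XmlType/@XmlElement metadata on the generated classes that keeps the Java object model tied to the schema, so it should be left intact when the classes are packaged into the facts JAR.

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;
import javax.xml.bind.annotation.XmlType;

// Hypothetical fact class mirroring a <Customer> element in the XSD.
@XmlRootElement(name = "Customer")
@XmlAccessorType(XmlAccessType.FIELD)
@XmlType(name = "Customer", propOrder = { "firstName", "lastName" })
public class Customer {

    @XmlElement(required = true)
    private String firstName;

    @XmlElement(required = true)
    private String lastName;

    // getters and setters omitted for brevity
}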
I am currently trying to implement a web-service assignment given by my college.
My assignment is:
Consider a case where we have two web services - an airline service and
a travel agent - and the travel agent is searching for an airline.
Implement this scenario using web services and a database.
To do that, as a newbie, I tried to follow the steps given in this link.
I opened NetBeans 7.0 Beta 2 and followed the steps exactly as given in that link.
But while working through the "Deploying and Testing the Web Service" steps, when I tried to run the CalculatorWSApplication, I noticed that javax.ejb.Stateless is undefined.
And I have three questions:
I have a basic knowledge of JSP, HTML and web services. Please give me a basic idea/schema of the assignment so that I can proceed with the next steps and the implementation.
How can I get rid of the missing EJB file?
Generally, .java files refer to the libraries present in the JRE, so why does CalculatorWS.java in this program refer to this path: C:\users\MuthuGanapathy\.netbeans\7.0beta2\var\cache\index\s3\java\14\gensrc\javax\
Let me try to answer your questions:
First of all: you don't really need knowledge of JSP and HTML for creating web services. If you are interested in additional knowledge, rather have a look at subjects like SOAP, WSDL or XML (on which SOAP and WSDL files are based). You can find good information at w3schools.
As stated in your assignment's requirements, you'll have to combine your service with a database, and therefore you'll have to face the fact that web services aren't able to send every kind of data. For example, if you intend to use some kind of JPA, you won't easily be able to send entities between client and server via a web service (though it's possible).
For that reason, my approach would be to send simple datatypes between client and server and, on the server side, build the complex objects.
This would force me to code at least three classes (one for each web service and one for communication with the database).
Airline WS:
import javax.jws.WebMethod;
import javax.jws.WebService;

@WebService
public class Airline {

    @WebMethod
    public String stuffToDo(String simpleInput) {
        // do your stuff: build the complex object from the simple input received
        Object complexObject = simpleInput; // placeholder for your real domain object
        persistOrSelectData(complexObject);
        return "success";
    }

    private boolean persistOrSelectData(Object complex) {
        // database stuff here, delegated to the DAO
        DBdao.doStuff(complex);
        return true;
    }
}
TravelAgent WS:
// same structure as shown above
DB class:
public class DBdao {

    public static void doStuff(Object complex) {
        // get a DB connection and INSERT, SELECT, UPDATE
    }
}
In this scenario you don't even have to use a class from the javax.ejb package, but I understand that this could be necessary :).
I don't really use NetBeans, so I can only speculate. I think that your problems 2.) and 3.) are related to each other.
The javax.ejb classes normally come from the Java EE server libraries rather than the plain JDK, and they should be picked up through the server library/target runtime you're using, as configured in your IDE.
Have you assigned the server library to your project?
Have you tried pointing your NetBeans installation to your JDK path as shown here and here?
It could also be that your project does not have a reference to the Java system library.
Last but not least:
There are several ways for testing your webservice:
You use NetBeans, therefore I assume that you deploy your project on a GlassFish server.
After deployment you can navigate to your project inside the admin GUI and click the link for viewing the endpoints. In the next window you're able to either follow a link pointing to the generated WSDL or to a tester.
You can write your own client by either following the tutorial provided or, for a more general approach, you can use this.
Use soapUI for testing (it's available as a standalone application or as an IDE plugin).
I hope this helps, have fun!
I've been struggling with understanding a few points I keep reading regarding RESTful services. I'm hoping someone can help clarify.
1a) There seems to be a general aversion to generated code when talking about RESTful services.
1b) The argument that if you use a WADL to generate a client for a RESTful service, then when the service changes, so does your client code.
Why I don't get it: whether you are referencing a WADL and using generated code, or you have manually extracted data from a RESTful response and mapped it to your UI (or whatever you're doing with it), if something changes in the underlying service it seems just as likely that the code will break in both cases. For instance, if the data returned changes from FirstName and LastName to FullName, in both instances you will have to update your code to grab the new field and perhaps handle it differently.
2) The argument that RESTful services don't need a WADL because the return types should be well-known MIME types and you should already know how to handle them.
Why I don't get it: Is the expectation that for every "type" of data a service returns there will be a unique MIME type in existence? If this is the case, does that mean the consumer of the RESTful services is expected to read the RFC to determine the structure of the returned data, how to use each field, etc.?
I've done a lot of reading to try to figure this out for myself so I hope someone can provide concrete examples and real-world scenarios.
REST can be very subtle. I've also done lots of reading on it, and every once in a while I went back and read Chapter 5 of Fielding's dissertation, each time finding more insight. It was as clear as mud the first time (although some things made sense), but it only got better once I tried to apply the principles and use the building blocks.
So, based on my current understanding let's give it a go:
Why do RESTafarians not like code generation?
The short answer: if you make use of hypermedia (+ links), there is no need.
Context: explicitly defining a contract (WADL) between client and server does not reduce coupling enough: if you change the server, the client breaks and you need to regenerate the code. (IMHO even automating that is just a patch over the underlying coupling issue.)
REST helps you to decouple on different levels. Hypermedia discoverability is one of the good ones to start with. See also the related concept HATEOAS.
We let the client "discover" what can be done from the resource we are operating on, instead of defining a contract up front. We load the resource, check for "named links" and then follow those links or fill in forms (or links to forms) to update the resource. The server acts as a guide to the client via the options it proposes based on state. (Think business process / workflow / behavior.) If we use a contract we need to know this "out of band" information and update the contract on change.
If we use hypermedia with links there is no need to have a "separate contract". Everything is included within the hypermedia – why design a separate document? Even URI templates are out-of-band information, but if kept simple they can work, as Amazon S3 shows.
Yes, we still need common ground to stand on when transferring representations (hypermedia), so we define our own media types or use widely accepted ones such as Atom or microformats. Thus, with the constraints of basic building blocks (links + forms + data, i.e. hypermedia) we reduce coupling by keeping out-of-band information to a minimum.
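To make the "named links" idea a bit more concrete, here is a minimal, hypothetical JAX-RS 2.0 sketch (the Order resource and the cancel/payment transitions are made-up examples, not a prescribed design). It advertises the next legal actions as HTTP Link headers rather than describing them in an out-of-band contract:

import java.net.URI;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/orders")
public class OrderResource {

    @GET
    @Path("/{id}")
    public Response getOrder(@PathParam("id") String id) {
        String representation = "<order id=\"" + id + "\"/>"; // stand-in for the real representation
        // The server proposes the transitions that are valid for the current state;
        // the client simply follows the links it is offered.
        return Response.ok(representation)
                .link(URI.create("/orders/" + id + "/cancel"), "cancel")
                .link(URI.create("/orders/" + id + "/payment"), "payment")
                .build();
    }
}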
At first it seems that going for hypermedia does not change the impact of change :) But there are subtle differences. For one, if I have a WADL I need to update another document and deploy/distribute it. Using pure hypermedia there is no such impact, since it's embedded. (Imagine changes rippling through a complex interweave of systems.) As per your example, having FirstName + LastName and adding FullName does not really impact the clients, but removing FirstName + LastName and replacing them with FullName does break clients, even in hypermedia.
As a side note: The REST uniform interface (verb constraints - GET, PUT, POST, DELETE + other verbs) decouples implementation from services.
Maybe I'm totally wrong, but another possibility might be a "psychological kickback" against code generation: WADL makes one think of the WSDL (contract) part of "traditional web services (WSDL + SOAP)" / RPC, which goes against REST. In REST, state is transferred via hypermedia, not via RPC-style method calls that update state on the server.
Disclaimer: I've not read the referenced article in detail, but it does give some great points.
I have worked on API projects for quite a while.
To answer your first question:
Yes. If the service's return values change (e.g. First Name and Last Name become Full Name), your code might break. You will no longer get the first name and last name.
You have to understand that a WADL is an agreement. If it has to change, then the client needs to be notified. To avoid breaking the client code, we release a new version of the API.
Version 1.0 will keep First Name and Last Name without breaking your code. We will release version 1.1, which will carry the change to Full Name.
So, in short: the WADL is there to stay. As long as you use that version of the API, your code will not break. If you want to get the full name, then you have to move to the new version. With the many code generation plugins on the market, generating the code should not be an issue.
To answer your next question about why not use a WADL and how you get to know the MIME types:
A WADL is for code generation and serves as a contract. With it you can use JAXB or any mapping framework to convert the XML or JSON payload into the generated bean objects.
Without a WADL, you don't need to inspect every element to determine the type. You can easily do something like this:
var obj = jQuery.parseJSON('{"name":"John"}');
alert( obj.name === "John" );
Let me know, If you have any questions.
I have been playing around with Apache CXF, in particular the various data bindings it supports: JAXB (the default), MTOM, Aegis and XMLBeans. Since all of these are supported, I suppose each has its merits. I came up with these:
Obviously, MTOM is to be preferred where large attachments are involved.
JAXB depends on annotations, so it is less suited when modification of classes is restricted.
Aegis has no wsdl2java tool, so it is less suited for "contract-first" development, i.e. start with a WSDL and generate your Java code from that.
It appears that Aegis provides slightly more control over the mapping between Java classes and XML through its declarative syntax in Class.aegis.xml files. On the other hand, I couldn't come up with any scenarios where JAXB did not do the trick.
I found this question juxtaposing JAXB and XMLBeans, but it doesn't give a comprehensive overview:
JAXB vs Apache XMLBeans
Besides these naive, a priori considerations, do you have any blood-and-guts experiences that would support the use of any other binding besides JAXB? I'm asking from a CXF point of view, but if any other options come to mind (e.g. Castor) please don't hesitate to elaborate.
If starting from scratch to create a WSDL first web service, then I definitely would recommend sticking with JAXB 95% of the time (maybe even higher). It's definitely the best tested databinding in CXF and performs quite well.
Where the other databindings come in is usually one of two cases:
1) Java-first use cases where you have something already written in Java that you want to expose as a web service with little to no modification to the code. Aegis has its strengths here, as it's designed to handle a wider range of things than JAXB. However, if you CAN modify the code, adding JAXB annotations usually isn't that hard. If you have mostly normal "beans", it's not a big deal (see the short sketch after this list).
2) Existing applications that use a particular mapping. If you have existing applications that are expecting XMLBeans beans (or SDO beans if using 2.3-SNAPSHOT of CXF, or JiBX beans if following the GSoC project), then using the other databindings could help by removing any needed mappings from JAXB to those object models.
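As a rough illustration of how little it can take to JAXB-enable an existing bean in the Java-first case (the Booking class and its fields are hypothetical):

import javax.xml.bind.annotation.XmlRootElement;

// An existing, plain bean made JAXB-friendly with a single class-level annotation;
// by default every public getter/setter pair is mapped to an XML element.
@XmlRootElement
public class Booking {

    private String passengerName;
    private String flightNumber;

    public String getPassengerName() { return passengerName; }
    public void setPassengerName(String passengerName) { this.passengerName = passengerName; }

    public String getFlightNumber() { return flightNumber; }
    public void setFlightNumber(String flightNumber) { this.flightNumber = flightNumber; }
}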
Hope that helps a little.
Remember that JAXB is a specification, so there are multiple implementations: Metro (the reference implementation), MOXy (I'm the tech lead), etc.
JAXB can be used starting from Java classes or from an XML schema. If you have classes that cannot be modified, individual JAXB implementations offer extensions to handle that. See MOXy's externalized metadata:
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/EclipseLink-OXM.XML
JAXB was designed to work with MTOM attachments; see the attachment marshaller/unmarshaller.
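A minimal sketch of what that can look like on a mapped class, assuming a hypothetical Report type with a binary payload:

import javax.activation.DataHandler;
import javax.xml.bind.annotation.XmlMimeType;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement
public class Report {

    private DataHandler content;

    // When MTOM is enabled, JAXB hands this property to the attachment
    // marshaller so it travels as a binary attachment instead of base64 text.
    @XmlMimeType("application/octet-stream")
    public DataHandler getContent() { return content; }

    public void setContent(DataHandler content) { this.content = content; }
}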
MOXy has XPath-based mappings, which offer full control over your object-to-XML binding; see:
http://bdoughan.blogspot.com/2010/07/xpath-based-mapping.html
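For a flavour of that, a hedged sketch using MOXy's @XmlPath extension annotation (the Person class and the XPath shown are made up for illustration):

import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;
import org.eclipse.persistence.oxm.annotations.XmlPath;

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
public class Person {

    // MOXy maps this single Java field deep into the XML document via an
    // XPath expression, something plain JAXB annotations cannot express.
    @XmlPath("contact-info/address/street/text()")
    private String street;
}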
The situation:
We have a library project that houses much of our code for the various integrations we work on. Many of the integrations consume web service apis, and my supervisor doesn't want 5 gazillion web service references added to the project.
What we generally do, then, is add a reference in a new project, copy the References.vb into the solution, and just call the generated code. Not terribly convenient if changes are made to the service, but it works.
Recently, I ran into a problem where we have to use 3 web services for the same integration. 2 of these contain the same class definitions; however, they're in different namespaces because they belong to different services. This became a problem for me because one of the services searches for a user based on user ID, and the other pulls back blocks of users. Both return an object, or a list of objects, that is exactly the same semantically. And I need to process the data the same way, whether it came from one service or the other.
My solution was to strip out the duplicated classes in the service proxies and replace them with classes that inherit from common base classes. This allowed me to work with both objects as if they were the same; however, it required modifying the generated web service proxy. Therefore this change will need to be made every time I regenerate the proxy.
I'm curious what you all might think a better solution to this would be.
You're going to regret playing games with copying Reference.vb and editing generated files.
Switch to WCF and you'll be able to tell it you want to reuse the types, instead of having multiple types that are more or less the same.
BTW, they would be "less" the same if not all of the web references are updated at the same time after a server change.
The other option would be to build an abstraction layer on top of the pre-generated web service proxies, such that when you make the calls to the abstraction layer you can always use the same objects, as they are squeezed into (and out of) the web service proxies inside the abstraction layer. This would also allow for unit testing :)
I think you really should be looking at WCF for .NET 3.5+, but for .NET 2.0 look at something like WSCF (Web Services Contract First), which defines the contracts in XML and generates a set of libraries reusable across services. E.g. you define a MyCompany.WS.Common namespace and use that namespace in multiple projects. The code generation then builds a shared library of types which get used across all the web services. We use this extensively in our .NET 2 solutions and it's great. We had to do some additional work around the code generation to get it to fit into our build process, but once that was done we never looked back.
We're migrating to .NET 3.5 over time, so WSCF will become obsolete for us.
Here's the link to the thinktecture site for WSCF.
wsdl.exe with the /sharetypes switch allows the same types to be used across multiple service definitions, provided the wire signatures are identical. I was unable to use it in my situation, though, because the various WSDL contracts were carelessly namespaced.
Instead of returning a common string, is there a way to return classic objects?
If not: what are the best practices? Do you transpose your object to xml and rebuild the object on the other side? What are the other possibilities?
As mentioned, you can do this in .NET via serialization. By default all native types are serializable, so this happens automagically for you.
However, if you have complex types, you need to mark the class with the [Serializable] attribute. The same goes for complex types used as properties.
So for example you need to have:
[Serializable]
public class MyClass
{
    public string MyString { get; set; }
    public MyOtherClass MyOtherClassProperty { get; set; }
}

// the complex type used as a property is marked on its own class
[Serializable]
public class MyOtherClass
{
    // ...
}
If the object can be serialised to XML and can be described in WSDL then yes it is possible to return objects from a webservice.
Yes: in .NET they call this serialization, where objects are serialized into XML and then reconstructed by the consuming service back into its original object type or a surrogate with the same data structure.
Where possible, I transpose the objects into XML - this means that the Web Service is more portable - I can then access the service in whatever language, I just need to create the parser/object transposer in that language.
Because we have WSDL files describing the service, this is almost automated in some systems.
(For example, we have a server written in pure python which is replacing a server written in C, a client written in C++/gSOAP, and a client written in Cocoa/Objective-C. We use soapUI as a testing framework, which is written in Java).
It is possible to return objects from a web service using XML. But web services are supposed to be platform and operating system agnostic. Serializing an object simply allows you to store and retrieve an object from a byte stream, such as a file. For instance, you can serialize a Java object, convert that binary stream (perhaps via Base64 encoding) into a CDATA field and transfer it to the service's client.
But the client would only be able to restore that object if it were Java-based. Moreover, a deep copy is required to serialize an object and have it restored exactly. Deep copies can be expensive.
Your best route is to create an XML schema that represents the document and create an instance of that schema with the object specifics.
.NET automatically does this with objects that are serializable. I'm pretty sure Java works the same way.
Here is an article that talks about object serialization in .NET:
http://www.codeguru.com/Csharp/Csharp/cs_syntax/serialization/article.php/c7201
@Brian: I don't know how things work in Java, but in .NET objects get serialized down to XML, not Base64 strings. The web service publishes a WSDL file that contains the method and object definitions required for your web service.
I would hope that nobody creates web services that simply create a Base64 string.
Daniel Auger:
"As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place."
lomax:
"I have to disagree with this, as it's a somewhat narrow comment. Using a web service that can serialize domain objects to XML makes it easy for clients that work with the same domain objects, but it also means that those clients are restricted to using that particular web service you've exposed. It also works in reverse, by allowing other clients to have no knowledge of your domain objects but still interact with your service via XML."
@Lomax: You've described two scenarios. Scenario 1: the client rehydrates the XML message back into the exact same domain object. I consider this to be "returning an object". In my experience this is a bad choice, and I'll explain why below. Scenario 2: the client rehydrates the XML message into something other than the exact same domain object. I am 100% behind this, however I don't consider this to be returning a domain object. It's really sending a message or DTO.
Now let me explain why true/pure (non-DTO) object serialization across a web service is usually a bad idea. An assertion: in order to do this in the first place, you either have to be the owner of both the client and the service, or you have to provide the client with a library so that it can rehydrate the object back into its true type. The problem: this domain object, as a type, now exists in and belongs to two semi-related domains. Over time, behaviors may need to be added in one domain that make no sense in the other, and this leads to pollution and potentially painful problems.
I usually default to scenario 2. I only use scenario 1 when there is an overwhelming reason to do so.
I apologize for being so terse with my initial reply. I hope this clears things up to a degree as far as what my opinion is. Lomax, it would seem we half agree ;).
JSON is a pretty standard way to pass objects around the web (it is a subset of JavaScript). Many languages have a library which will convert JSON into a native object - see for example simplejson in Python.
For more JSON libraries, see the JSON webpage.
As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place.
"As others have said, it is possible. However, if both the service and client use an object that has the exact same domain behavior on both sides, you probably didn't need a service in the first place."
I have to disagree with this, as it's a somewhat narrow comment. Using a web service that can serialize domain objects to XML makes it easy for clients that work with the same domain objects, but it also means that those clients are restricted to using that particular web service you've exposed. It also works in reverse, by allowing other clients to have no knowledge of your domain objects but still interact with your service via XML.