Add SOAP to an existing GWT solution - web-services

I am looking for a clean way to add service-oriented access to an existing GWT application (client + RemoteService-based server). All the services are already in place, annotated with @RemoteServiceRelativePath. It would be nice to be able to simply add the @WebService annotation and have access to them both via RPC and via XML/JSON/etc.
The real problem is that extending the current application to support clients other than the existing GWT one is hard because of GWT's obfuscation. It also leads to unneeded coupling between client and server, since both have to be deployed at the same time because of the generated .gwt.rpc files.
I would like to reuse the existing RemoteService interfaces to define web services and connect to them with new clients via a plain-text protocol. Additionally, I would like to port the existing GWT client to the same protocol.
Is it possible to do this while using the same interfaces and implementation, just by adding annotations (something like the sketch below)?
What would be the best way to port the existing client to a plain-text protocol - RequestBuilder? Or just injecting a new serialization implementation that does XML/JSON?
I don't even know where to start with this, which is why I'm asking. Maybe it would be better to rewrite all the services and port everything at once, but that would break everything until the migration is finished.
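Roughly what I have in mind is something like the sketch below (the service and method names are made up, and I have no idea yet whether the two stacks actually tolerate sharing one interface like this):

```java
import javax.jws.WebService;

import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;

// Hypothetical example: keep the existing GWT RPC annotation and add the
// JAX-WS one, hoping the same contract can serve both kinds of clients.
@RemoteServiceRelativePath("orders")
@WebService
public interface OrderService extends RemoteService {
    String findOrderStatus(String orderId);
}
```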

We've taken a different approach, since the coupling GWT creates between server and client is not all bad: it gives you a nice integration and you don't have to think too much about communication issues etc.
For that, our application had a frontend tier which consisted of the full GWT stack (client + server coupling), and on the server side we connected via Spring and RPC to the service layer.
That way you can use the benefits of Spring and you don't lose the comfort of GWT.
But I would like to hear if somebody has already gone other ways ;)

This is rather late and GWT is not the wonderchild it once was. However, for the sake of tying loose ends here's the solution I went for:
create a Java generator that parses all model files (classes shared between client and server) through reflection and generates a Java file that reads/writes SOAP objects
bootstrap the above into a generic Java handler that handles native objects plus arrays, sets and maps
write the service that can deal with the XML generated by the files above
It sounds a bit terse and a bit complicated, but it 'only' took ~1 month to write the code that reliably converts >200 objects to their XML representation, automatically. The added benefit is that it allows mocking and cross-platform clients/servers.
As a summary, the generated code creates new 'fromXML' and 'toXML' methods that populate the fields exposed as public members (get/set) in the given class. So, given MyClass, it would generate the MyClassSerializer and MyClassDeserializer Java classes that implement those SOAP-specific methods and also publish themselves to a 'dispatcher'. So whenever that dispatcher sees a 'MyClass' it knows where to get the ser/deser functions from.
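To make the shape of that generated code a bit more concrete, here is a heavily simplified, hand-written sketch (MyClass, the dispatcher and the flat XML layout are all invented for illustration; the real generated code handles nesting, nulls, collections and so on):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A model class shared between client and server (placeholder).
class MyClass {
    private long id;
    private String name;
    public long getId() { return id; }
    public void setId(long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// The 'dispatcher': maps a class to the function that serializes it.
class SerializerDispatcher {
    private static final Map<Class<?>, Function<Object, String>> toXml = new HashMap<>();

    static void register(Class<?> type, Function<Object, String> serializer) {
        toXml.put(type, serializer);
    }

    static String serialize(Object value) {
        return toXml.get(value.getClass()).apply(value);
    }
}

// What a generated MyClassSerializer could boil down to: it only touches
// the public get/set members and registers itself with the dispatcher.
class MyClassSerializer {
    static {
        SerializerDispatcher.register(MyClass.class, o -> toXML((MyClass) o));
    }

    static String toXML(MyClass value) {
        return "<MyClass><id>" + value.getId() + "</id>"
             + "<name>" + value.getName() + "</name></MyClass>";
    }
}
```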

Related

BizTalk and the best way to call a web service

I am writing a BizTalk orchestration that will need to call a web service, probably multiple web services, and probably more than once. I see two options before me: one, consume the WSDL in a separate code project and call the web services from code in an expression shape; or two, consume it from BizTalk, get schemas, etc. and call through request/response ports. What is the best practice here? On the one hand, if the WSDL is updated it will be easier to update the code than the schemas and ports, and it seems like a lot of clutter and work to build enough ports for multiple web service calls. On the other hand, all the tuning you can do at the port level (retries being one) makes calling the web service robust.
Also see this question, which discusses a 3rd option, viz. using Add Service Reference in BizTalk as an alternative method to import XSDs.
IMO you would be defeating the point of using BizTalk by using .NET proxies to handle integration. For example:
You are hard-coding the protocol (WCF), and now need to marshal request and response messages to/from your custom code. With a send port, any request-response mechanism can be configured at deployment time - especially useful for unit and integration testing.
You will be losing all of the benefit of BizTalk's message delivery mechanisms, such as retries, backup transports, resume suspended messages, different maps for different ports, and arguably the whole pub sub ability, (e.g. what if multiple listeners want to listen to the responses from the called web services?)
Where will you store the WCF serviceModel config settings, such as the endpoint etc? i.e. You've lost the flexibility of binding files.
etc.
So, TL;DR: always use the WCF adapters in BizTalk.
However, that said, I agree that updating generated items when the consumed service changes can be messy. FWIW, we mitigate some of this as follows:
Always create a separate, empty folder in which to import all the generated artifacts.
Leave all the generated items 'as-is', i.e. don't be tempted to move the dummy .odx, or delete it (since it has the preconfigured Port Types)
Unfortunately this leaves the below actions which still need to be manually applied:
Remember to change the visibility of the port types to public if the artifacts are in a separate assembly from your orchestrations
Promoted and distinguished properties on the imported schemas need to be reapplied (e.g. remember to document them with screenshots after any change). Possibly this can be simplified or automated by saving and re-pasting the <xs:annotation> section of the schema.
If you are using message contracts in your WCF service, and are reusing the same referenced messages across multiple applications, you will need to manually delete the duplicates created by the add generated items and then re-reference the existing schemas. (e.g. we have a standard 'response' message back to all BizTalk calls)
Interestingly, you can in fact have a mixture of both. Check out this approach by Saravana Kumar!
It uses a passthrough receive and consumes a web service using the DLL on the send port, without going through the pain of creating schemas and web ports.
This gives all the power of BizTalk (routing responses, send port configuration, etc.) and still the flexibility to change the schema without much fuss.

How to develop stateful servers in Java EE without JAX-WS

I'm developing a web service in Java EE using Apache Tomcat, and so far I have written some basic server-side methods and a test client. I can successfully invoke methods and get results, but every time I invoke a method the server constructor gets called again, and I also can't modify the instance variables of the server using the set methods. Is there a particular way to make my server stateful without using JAX-WS or the EJB @Stateful annotation?
There is a little bit of a misconception here. A stateful EJB maintains a session between one client and the server, so the EJB state still wouldn't be shared between various clients.
You can expose only stateless and singleton EJBs as JAX-WS web services.
The best option is to use a database for storing all bids and, when the auction is finished, choose the winning one (a rough sketch follows after the list below).
If you want to use a file it is fine, as long as you like to play with issues like:
synchronizing access to that file from many clients
handling transactional reads and writes
resolve file corruption problems
a bunch of other problems that might happen if you are sufficiently unlucky
Sounds like a lot of work, which can be done by any sane database engine.
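A rough sketch of the database-backed, stateless approach (assuming a full Java EE container rather than plain Tomcat; the Bid entity and the service/method names are invented for illustration):

```java
import javax.ejb.Stateless;
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

// Hypothetical JPA entity holding one bid; the database keeps the state.
@Entity
class Bid {
    @Id @GeneratedValue Long id;
    long auctionId;
    String bidder;
    double amount;

    protected Bid() { }
    Bid(long auctionId, String bidder, double amount) {
        this.auctionId = auctionId;
        this.bidder = bidder;
        this.amount = amount;
    }
}

// Stateless web service: no auction state in fields, every call hits the database.
@Stateless
@WebService
public class AuctionService {

    @PersistenceContext
    private EntityManager em;   // injected by the container

    @WebMethod
    public void placeBid(long auctionId, String bidder, double amount) {
        em.persist(new Bid(auctionId, bidder, amount));
    }

    @WebMethod
    public Double highestBid(long auctionId) {
        return em.createQuery(
                "select max(b.amount) from Bid b where b.auctionId = :id", Double.class)
                 .setParameter("id", auctionId)
                 .getSingleResult();
    }
}
```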

Socket Server vs. Standard Servers

I'm working on a project of which a large part is server side software. I started programming in C++ using the sockets library. But, one of my partners suggested that we use a standard server like IIS, Apache or nginx.
Which one is better in the long run? When I program it in C++ I have direct access to the raw requests, whereas with a standard server I need to use a scripting language to handle the requests. In any case, which is the better option and why?
Also, when it comes to security against things like DDoS attacks etc., do the standard servers already have protection? If I wanted to implement it in my socket server, what is the best way?
"Server side software" could mean lots of different things, for example this could be a trivial app which "echoes" everything back on a specific port, to a telnet/ftp server to a webserver running lots of "services".
So where in this gamut of possibilities does your particular application lie? Without further information, it's difficult to make any suggestions, but let's see..
Web services, i.e. your "server side" requirement is to handle individual requests and respond having done some set of business logic. Typically communication is via SOAP/XML, and this is ideal if you have web-based clients (though nothing prevents you from accessing these services via standalone clients). Typically you host these on web servers as you mentioned, and often they are easiest written in Java (I've yet to come across one that needed to be written in C++!)
Simple web site - slightly different from the above; it responds to HTML GET/POST requests and serves up static or dynamic content (I'm guessing this is not what you're after!)
Standalone server which responds to something specific, here you'd have to implement your own "messaging"/protocols etc. and the server will carry out a specific function on incoming request and potentially send responses back. Key thing here is that the server does something specific, and is not a generic container (at which point 1 makes more sense!)
So where does your application lie? If 1 or 2, use Java or some scripting language (such as Perl/ASP/JSP etc.). If 3, you can certainly use C++, and if you do, use a suitable abstraction such as boost::asio and Google Protocol Buffers - save yourself a lot of headache...
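For option 1, publishing a minimal SOAP service in Java takes only a few lines with JAX-WS (the class name and URL below are just examples, and in production you would normally host the same class in a container such as Tomcat or JBoss rather than from main()):

```java
import javax.jws.WebMethod;
import javax.jws.WebService;
import javax.xml.ws.Endpoint;

// Minimal sketch of option 1: a business method exposed as a SOAP web service.
@WebService
public class EchoService {

    @WebMethod
    public String echo(String message) {
        return "echo: " + message;   // the "business logic"
    }

    public static void main(String[] args) {
        // Publishes the service and its WSDL at the example URL.
        Endpoint.publish("http://localhost:8080/echo", new EchoService());
    }
}
```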
With regards to security, of course bugs and security holes are found all the time; however, the good thing with some of these open-source projects is that the community will tackle and fix them. Let's just say you'll be safer using them than your own custom hand-rolled implementation - the likelihood that you'll be able to address all the issues they have encountered in the years they've been around is very small (no disrespect to your abilities!)
EDIT: now that there's a little more info, here is one possible approach (this is what I've done in the past, and I've used Java most of the way):
The client-facing server should be something reliable, especially if it's exposed over the internet, so here I would use a proven product - something like Apache is good, or IIS (depends on which technologies you have available). IMHO I would go for JBoss AS - a really powerful and easily customisable piece of kit that integrates really nicely with lots of different things (all Java of course!). You could then have a simple bit of Java which delegates to your actual server processes that do the work.
For the server processes you can use C++ if that's what you are comfortable with.
There is one key bit which I left out, and that is how 1 and 2 talk to each other. This is where you should look at an open-source messaging product (at an even higher level than asio or protocol buffers), and here I would look at something like ZeroMQ or Red Hat Messaging (both are MQ messaging products). The great advantage of this type of "messaging bus" is that there is no tight coupling between your servers. With your own hand-rolled implementation you'll be writing lots of boilerplate to get the interaction to work just right; with something like MQ you'll have multi-platform communication without having to get into the details. You will save yourself a lot of time and bother if you elect to use something like that. (BTW, there are other messaging products out there, and some are easier to use - such as Tibco RV or EMS etc. - however they are commercial products and licenses will cost a lot of money!)
With a messaging solution your servers become trivial, as they simply handle incoming messages and send messages back out again, and you can focus on the business logic...
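As a taste of how small the server side becomes on a messaging bus, here is a minimal request/reply worker using JeroMQ, the pure-Java ZeroMQ implementation (the endpoint and message format are made up; in the setup above this piece could just as well be the C++ process using the native libzmq binding):

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

// Sketch of a back-end worker on the "messaging bus": it only receives
// requests and sends replies; the transport details are ZeroMQ's problem.
public class WorkerServer {
    public static void main(String[] args) {
        try (ZContext context = new ZContext()) {
            ZMQ.Socket socket = context.createSocket(SocketType.REP);
            socket.bind("tcp://*:5555");                 // example endpoint
            while (!Thread.currentThread().isInterrupted()) {
                String request = socket.recvStr();       // blocks until the frontend sends work
                socket.send("processed: " + request);    // business logic would go here
            }
        }
    }
}
```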
my two pennies... :)
If you opt for the 1st solution in Nim's list (web services), I would suggest you have a look at WSO2's Web Services Framework for C++, Axis C++, or the Axis2/C web services framework (if you are not restricted to C++). Web services might be the best solution for your requirement, as you can quickly build them and use them either as processing or proxy modules on the server side of your system.

Communication between client class library and web service / web service and server class library

Wondering what others do / best practice for communicating between layers. This question relates to communication between layers 2-3 and 3-4.
Our Basic Architecture (in order) as follows:
UI
Front End Business Classes
Web Services
Back End Business Classes
DAL
The web services are just a façade that adds logging and authentication in front of the back-end class libraries.
As such, the web service is passed a request object that includes the parameters required by the web method along with the user credential (the user credential, for example, is stored in a base class as we will always need to pass it to the web service), and it responds with response objects (which have things such as status and a message if the call failed, along with the object required). Both request and response use a custom generic class/interface when only one result is returned; otherwise a class needs to be created.
Sometimes it makes sense to do this for the response object at layer 4 (though we don't use a request object unless a lot of parameters need to be passed), in which case we just have an adapter class in layer 3 which returns this to the client. For consistency I have considered doing this all the time, though I think it may be overkill.
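To make the envelope idea concrete, here is a stripped-down sketch of what the generic request/response classes could look like (sketched in Java for brevity - the shape is the same with .NET generics; the property names and the UserCredential type are invented):

```java
import java.io.Serializable;

// Placeholder credential type that every request carries.
class UserCredential implements Serializable {
    String userName;
    String token;
}

// Generic request envelope: credential plus whatever the web method needs.
class ServiceRequest<T> implements Serializable {
    UserCredential credential;
    T payload;
}

// Generic response envelope: status/message plus the single result object.
class ServiceResponse<T> implements Serializable {
    boolean success;
    String message;
    T result;
}
```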
So to reiterate the question: what are the best practices for communicating between layers? Should/do people use the method outlined above (it works well for us), and should layers 3-4 implement a similar method to layers 2-3?
Possible considerations:
currently everything is coded in house by a team of developers, some client code may be outsourced in the future
future web services will be WCF-based (not sure if that affects the design other than coding to interfaces, which I would prefer anyway).
We use .net
For the sake of completeness:
It seems a good idea to have the response/request objects in the class library; that way, if you want to change the web service to WCF, there is less work to do.

Testing a gSOAP server

In a normal client/server design, the client can execute functions implemented on the server-side. Is it possible to test a gSOAP server by connecting an extra client to it?
I have not used gSOAP, but from reading the documentation it allows you to write both clients and servers, so you can write a test client to test the service.
However, if you are planning to offer the service to clients written in .NET or Java, I would recommend that you write the test client in one of those. This way you will know for certain that it is possible to use the service from one of these clients. You might also find that .NET or Java clients are easier to write if your server is designed in a specific way; your test client will help you find this out.
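For example, a small dynamic JAX-WS test client in Java might look like the sketch below (the WSDL URL, namespace, service/port names and payload are placeholders that would come from the WSDL your gSOAP server publishes):

```java
import java.io.StringReader;
import java.net.URL;

import javax.xml.namespace.QName;
import javax.xml.transform.Source;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;

// Sketch of a Java test client that posts a raw XML payload to a SOAP service.
public class SoapTestClient {
    public static void main(String[] args) throws Exception {
        URL wsdl = new URL("http://localhost:8080/calc?wsdl");       // example WSDL location
        QName serviceName = new QName("urn:calc", "CalcService");    // from the WSDL
        QName portName = new QName("urn:calc", "CalcPort");          // from the WSDL

        Service service = Service.create(wsdl, serviceName);
        Dispatch<Source> dispatch =
                service.createDispatch(portName, Source.class, Service.Mode.PAYLOAD);

        // Send one example request and dump the response XML to stdout.
        Source request = new StreamSource(new StringReader(
                "<ns:add xmlns:ns=\"urn:calc\"><a>1</a><b>2</b></ns:add>"));
        Source response = dispatch.invoke(request);
        TransformerFactory.newInstance().newTransformer()
                .transform(response, new StreamResult(System.out));
    }
}
```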
Sure it is - use SoapUI to generate client connections and data. It's free.
To add to the other comments: testing a gSOAP server can be easily done offline using IO redirect. When you invoke soap_serve() without any sockets set up prior to this call, the server engine will simply accept data from standard input and write data to standard output. This is a great way to hit an offline server implementation hard with XML data patterns for testing before deploying the server online. The gSOAP tool even generates example XML messages that you can use for this purpose.