When providing a web services API (well, let's say SOAP), do you provide a library wrapper along with it to make it "easier" for people to use? Or do you just package up a WSDL and documentation for it and let people figure out what to do with it?
What do people usually do? I've seen a bunch of examples where the wrapper is provided, but it has always seemed counter-productive to me.
WSDL is easily discoverable (all functions & types are declared), so there is usually no need to offer any package with it, and minimal documentation suffices (apply an XSL stylesheet to the WSDL and it's usually enough :) ). My theory about why libraries/wrappers appear is that it's directly related to security measures and required authentication hashes (usually: concatenate some fields with a secret and hash the result), about which the provider simply doesn't want to answer the same questions over and over.
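To illustrate that last point, such request signing usually boils down to something like the following (a minimal C++ sketch assuming OpenSSL; the field names are hypothetical):

```cpp
// Minimal request-signing sketch: concatenate selected request fields
// with a shared secret, hash the result, and send the hex digest along
// with the request. Field names here are hypothetical.
#include <openssl/sha.h>
#include <cstdio>
#include <string>

std::string sign_request(const std::string& user_id,
                         const std::string& timestamp,
                         const std::string& secret) {
    const std::string payload = user_id + timestamp + secret;
    unsigned char digest[SHA256_DIGEST_LENGTH];
    SHA256(reinterpret_cast<const unsigned char*>(payload.data()),
           payload.size(), digest);
    char hex[2 * SHA256_DIGEST_LENGTH + 1];
    for (int i = 0; i < SHA256_DIGEST_LENGTH; ++i)
        std::snprintf(hex + 2 * i, 3, "%02x", digest[i]);
    return std::string(hex, 2 * SHA256_DIGEST_LENGTH);
}
```

Once a provider has answered "which fields, in which order, hashed how?" a few dozen times, shipping a small wrapper starts to look cheaper than the support burden.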
Audience matters, I think: if you want your run-of-the-mill hobby coder to be able to use your service, providing a package can win you that many more users. If you're more into business-to-business services, the web service usually has to be integrated into some larger package, and most libraries would be futile.
That being said, of the web services I came across: about 60% of the libraries provided were hopeless spaghetti code fit for the bin, 30% were not code I'd use but could clear up some questions not answered by the documentation, and only about 10% were fit to integrate into a project (or the project was small and/or messy enough to be no worse for it).
How are you going to support multiple web-service stacks (JAX-WS, Axis2, CXF, etc.)? My choice: WSDL/XSD. In practice I've had a service built with JAX-WS and a client built with Axis2. And I don't want to build the client which you are going to use; I don't even know your preferred web-service stack or your JVM version limitations. For example, I can call a web service from Java 1.4, where there are no annotations, so it's not possible to use a client library built with annotations for Java 1.5. So shipping the WSDL is the right way to let consumers build their own ws-client, instead of providing a generated client library.
I am building a server-client application that involves heavy signal processing (e.g. FFT). I have a working application written in C++/Qt, where everything (signal processing and other calculations) is done on the client and the server just sends raw data. Now I feel it would be easier to implement these features on the server, so that maintenance becomes easier.
As I am doing signal processing, I think I should stick to C++ for performance. But I am open to new ideas.
Constraints:
I need type checking, so JavaScript is out of the question.
Scaling means adding more servers, and each server will have at most 10-12 users. So hardware cost is important; I cannot use x number of i7 processors.
No option of using cloud services.
So, right now my question is as follows:
How can I create web services using C++ for a Linux server? (Although cross-platform support is not essential, I would appreciate it if I can achieve it.)
EDIT [02:09:2015]
Right now, I think the choice is between POCO and the C++ REST SDK. I feel I should go for the C++ REST SDK, mainly because it has only those features that I need. Also, it is supported by Microsoft and uses Boost internally, so I feel that in the future it might be well integrated with the standard.
You could use the cross-platform POCO library to implement an HTTP server; it is really straightforward with this framework, and they have a lot of examples. You can also use JSON serialization (like the rapidjson library) to implement a REST service on top of HTTP - this way your web service will be accessible from most modern web frameworks.
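For a feel of what that looks like, here is a minimal POCO HTTP server sketch (port and payload are illustrative, error handling omitted):

```cpp
// Minimal POCO HTTP server returning a JSON payload.
#include <Poco/Net/HTTPServer.h>
#include <Poco/Net/HTTPServerParams.h>
#include <Poco/Net/HTTPRequestHandler.h>
#include <Poco/Net/HTTPRequestHandlerFactory.h>
#include <Poco/Net/HTTPServerRequest.h>
#include <Poco/Net/HTTPServerResponse.h>
#include <Poco/Net/ServerSocket.h>
#include <iostream>

class ResultHandler : public Poco::Net::HTTPRequestHandler {
public:
    void handleRequest(Poco::Net::HTTPServerRequest&,
                       Poco::Net::HTTPServerResponse& resp) override {
        resp.setContentType("application/json");
        resp.send() << R"({"status":"ok","fft_bins":1024})";  // illustrative payload
    }
};

class Factory : public Poco::Net::HTTPRequestHandlerFactory {
public:
    Poco::Net::HTTPRequestHandler*
    createRequestHandler(const Poco::Net::HTTPServerRequest&) override {
        return new ResultHandler;  // POCO deletes the handler after the request
    }
};

int main() {
    Poco::Net::HTTPServer server(new Factory,
                                 Poco::Net::ServerSocket(8080),
                                 new Poco::Net::HTTPServerParams);
    server.start();
    std::cout << "Listening on :8080, press Enter to quit\n";
    std::cin.get();
    server.stop();
}
```

The bare `new` calls are idiomatic here: POCO's `HTTPServer` takes ownership of the factory and parameter objects.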
You might want to take a look at the C++ REST SDK, an open-source, cross-platform API from Microsoft.
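A minimal sketch of the same idea with the C++ REST SDK's `http_listener` (URL and JSON fields are illustrative):

```cpp
// Minimal C++ REST SDK (cpprestsdk) server exposing a GET endpoint.
#include <cpprest/http_listener.h>
#include <cpprest/json.h>
#include <iostream>

using namespace web;
using namespace web::http;
using namespace web::http::experimental::listener;

int main() {
    http_listener listener(U("http://0.0.0.0:8080/api/status"));

    listener.support(methods::GET, [](http_request request) {
        json::value body;
        body[U("status")] = json::value::string(U("ok"));
        request.reply(status_codes::OK, body);  // reply is sent asynchronously
    });

    listener.open().wait();   // start listening
    std::cout << "Listening on :8080, press Enter to quit\n";
    std::cin.get();
    listener.close().wait();
}
```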
Like #nogard suggested, I also recommend POCO for now. It's the most serious and feature-rich solution. Given that you mentioned Qt, I also suggest you take a look at Tufão.
EDIT:
I forgot to mention a comparison of mine of C++ HTTP server frameworks.
If you handle HTTP requests directly, you might lose the functionality that web servers already do well, the things they were built to do. I had a similar issue; what I did was wrap my Qt C++ code inside a PHP extension. In your case you can do the same: wrap your logic inside whatever technology you are about to use, whether it's PHP, .NET, Java or anything else.
I have an interesting conundrum. I have been challenged with identifying the most suitable process for creating a "browser front-end" to an existing multi-user application built within the TigerLogic/Pick D3 environment. My research indicates there are many ways to do this, but I am struggling to decide which method is best or where to start. I have "played" with a few technologies, but a commitment to one is needed to get started.
These methods include:
Creation of a complex web service using the MVS Toolkit, and engineering a client from the WSDL either from scratch or using maven/wsimport. Tests indicate there is a lot more to this process than originally thought for a simple WSDL.
Development of a Java-based web app that harnesses the MVSP Java API - I am not a Java developer, so this means learning a new language. Development would most likely take place in Eclipse.
Using TigerLogic's FlashCONNECT - resulting in additional expenditure for clients, so not preferable, and more or less ruled out.
There is also the .NET option - but I have ruled this out on the basis of needing portability.
My question is, has anyone else out there done anything like this and could you share your experience? My first task is to build a web-app that will reliably give me the D3 TCL prompt in a browser that I can customise.
I am not sure there is a definitive answer here but would like peoples thoughts and will label the most useful as the answer.
What path you choose depends in part on your existing skill-set and whether that fits in with your portability needs. It is very difficult to give you a concrete answer without knowing in which part of the chain you need the portability.
It is, however, possible to develop a web-browser front-end using .NET which will run on Linux or Windows, so I don't see an issue with portability here. Your web server will have to be Windows-based, but it shouldn't matter whether D3 is running on Linux or Windows at the server end, or whether the client desktops are running Linux or Windows.
You could try TigerLogic's MVSP .NET API, but I do not know if it has the power to deliver based on your needs. I believe you may find that mv.NET from Bluefinity could fulfill your needs. This is, in my opinion, the leading product on the MultiValue market for achieving the goals you have in mind. It will mean spending money, of course, but for that you will get a very powerful set of tools. Also, the cost of investing in a good tool could end up smaller than the cost in time, effort and potential complications of trying to do this piecemeal without spending any extra money. I am sure FlashCONNECT would also do the job. You would have to weigh up the cost of the different options to find out which one is right for you, both technically and financially.
Not knowing whether or not you have .NET in your skill-set, I don't know whether the .NET option would be easier for you. It is however technically possible.
I would suggest using Rocket's D3 (formerly TigerLogic D3) .NET APIs to create a RESTful Web API service that you can consume with JavaScript or any other web technology; and if you need to call it from a D3 subroutine (in case you ever do), use the MVS Toolkit.
Requirements though are D3 9.0 or later.
I've used all of the technologies described, and many more to interface with D3. I agree with #Glenn and will add... I understand you're edging away from .NET. That's fine, you don't need it. But consider that most LAMP implementations separate the DBMS servers from the web servers. That topology introduces short delays between the tiers but decouples them in case you want to use multiple web servers or multiple databases - a common topology even with D3 / MV.
I have a client where we have a Java/Grails front-end over Linux, with all data queries filtered through a single, elegant data provider class that's abstracted from application logic. That uses a web service call which I wrote in Java, calling to a .NET web service. The service is easily generated/modified, as is the client from the WSDL. From there IIS carries inbound queries to D3 via mv.NET, and at this point it doesn't matter if the D3 DBMS is in Linux or Windows. My web service could have as easily been in Linux with Java but it would then lack a pooling mechanism - see below.
If you want all Linux then you can go with the MVSP Java library. TigerLogic (now acquired by Rocket Software) committed to a PHP binding for MVSP some months ago. Rather than wait, one of my clients created a PHP wrapper around mv.NET, though MVSP is as easy. So the resulting application is essentially LAMP, but with the M = Multivalue. I have written code like this too - we can write a wrapper in any language which exposes a useful API and abstracts both connectivity method and OS dependencies. In other words it doesn't matter what languages we want to use or what OS's are involved. That part is rather trivial and subject to change later. It's better to focus on the application than the communications.
You can also go off the menu, so to speak, and create your own Java/PHP wrapper around the OS-level d3tcl command, which is a script/wrapper around the d3 executable. This allows you to open a connection yourself and pass in commands.
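The suggestion above is a Java/PHP wrapper; purely to illustrate the mechanism, here is the same idea sketched in C++ (it assumes a `d3tcl` executable on the PATH and only covers sending commands):

```cpp
// Sketch of a wrapper around the OS-level d3tcl command: spawn the
// executable once and pass commands to its stdin over a pipe.
// Reading results back would need a second pipe (or a pty), omitted
// here for brevity.
#include <cstdio>
#include <stdexcept>
#include <string>

class D3TclSession {
public:
    D3TclSession() : pipe_(popen("d3tcl", "w")) {
        if (!pipe_) throw std::runtime_error("failed to start d3tcl");
    }
    ~D3TclSession() { if (pipe_) pclose(pipe_); }

    void send(const std::string& command) {
        std::fputs((command + "\n").c_str(), pipe_);  // one command per line
        std::fflush(pipe_);   // push the command through immediately
    }

private:
    std::FILE* pipe_;
};
```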
Whatever option you select, you need to consider that opening and closing a DBMS connection is a slow process. You do not want to script a login around every data request. You do want to open a connection and keep that open persistently, while your client code accesses and releases that persistent connection as required. This is why we like mv.NET and FlashCONNECT. With MVSP and other mechanisms you need to create your own persistence model. You'll also need to manage a pool of connection resources - what happens when you get 10 simultaneous queries, or just 1 short one after one long one? You don't want queries to back up, you don't want to reject or timeout connections, and you don't want to fire up a connection for every client. You do want the proper number of DBMS sessions waiting for inbound connections. mv.NET and FlashCONNECT do this for you, the others do not.
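To make the pooling point concrete, this is roughly what you would have to build yourself with MVSP and similar mechanisms (sketched in C++ for brevity, though the shape is the same in Java or PHP; `D3Connection` is a hypothetical stand-in for the vendor's session object):

```cpp
// Hypothetical connection pool: sessions are logged in once, up front,
// and reused; callers block briefly when all sessions are busy instead
// of paying login cost per request or being rejected.
#include <condition_variable>
#include <cstddef>
#include <memory>
#include <mutex>
#include <queue>

struct D3Connection { /* hypothetical: wraps one logged-in D3 session */ };

class ConnectionPool {
public:
    explicit ConnectionPool(std::size_t size) {
        for (std::size_t i = 0; i < size; ++i)
            idle_.push(std::make_unique<D3Connection>());  // login happens here, once
    }

    std::unique_ptr<D3Connection> acquire() {
        std::unique_lock<std::mutex> lock(mutex_);
        available_.wait(lock, [this] { return !idle_.empty(); });  // block, don't reject
        auto conn = std::move(idle_.front());
        idle_.pop();
        return conn;
    }

    void release(std::unique_ptr<D3Connection> conn) {
        { std::lock_guard<std::mutex> lock(mutex_); idle_.push(std::move(conn)); }
        available_.notify_one();
    }

private:
    std::queue<std::unique_ptr<D3Connection>> idle_;
    std::mutex mutex_;
    std::condition_variable available_;
};
```

The important properties are exactly the ones listed above: sessions log in once, callers wait briefly rather than backing up or timing out, and the pool size caps the number of concurrent DBMS sessions.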
Personally I'd shy away from FlashCONNECT. I was there for its initial development and testing and for years of end-user implementations. It's not as widely used as the other options and is more a tool for those who aren't familiar with other options. If you're talking about Java then you're probably not inclined to use FlashCONNECT. That said, if you have developers who are not familiar with anything outside D3 then FlashCONNECT is a decent server-side tool for them while someone else is focusing on the client-side with other technologies. Everyone should use their best skillset.
Finally, (already?) if someone is not familiar with external technologies, and more intimate with D3, then other options exist like DesignBAIS and Viságe, mostly removing the burden of communications and allowing developers to work on the client-side features and back-end rules in BASIC.
I discuss all of these topics plus mobile and telephony on my blog.
HTH
Does anyone know of any really good C++ Libraries for implementing a web services api over top of existing legacy code?
I've got two portions that are in need of it:
An old-school client/server API (no, not web-based, that's the problem)
An old CGI application that integrates with the client and the server.
Let me know if you've had any luck in the past implementing something like this using the library.
Microsoft has put out a native-code web services API (WWSAPI) that looks pretty decent. I haven't had a chance to use it yet. We had originally ignored it, since it required Windows 7 or Server 2008, but they've finally released a runtime library for older OSes.
I would advise staying away from Microsoft's old SOAP SDK. For one, it's been deprecated; two, it's not terribly easy to distribute; and three, it's terrible to code for compared to the .NET offerings.
What we've done is write a bit of C++/CLI to interface our existing C++ codebase with .NET's web service framework. This turned out to be remarkably easy: .NET will generate all the classes and boilerplate code you need based off of a WSDL file. Then you just write some C++/CLI code to handle the incoming data as managed classes and fill in some managed classes as responses.
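The shim ends up looking roughly like this (a hypothetical sketch: `OrderRequest`/`OrderResponse` stand in for the WSDL-generated managed classes, and `legacy::find_order` for the existing native code):

```cpp
// Hypothetical C++/CLI shim between WSDL-generated managed types and
// a native C++ codebase. All type and function names are illustrative.
#include <msclr/marshal_cppstd.h>
#include <string>
#include "legacy_engine.h"   // existing native code (hypothetical header)

public ref class OrderService {
public:
    // Receive a generated managed request, call native code, and fill
    // in a generated managed response.
    OrderResponse^ GetOrder(OrderRequest^ request) {
        std::string id =
            msclr::interop::marshal_as<std::string>(request->OrderId);

        legacy::Order order = legacy::find_order(id);   // native call

        OrderResponse^ resp = gcnew OrderResponse();
        resp->Total  = order.total;
        resp->Status = gcnew System::String(order.status.c_str());
        return resp;
    }
};
```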
You can use the Apache AXIS/C interface to build a web services interface. It has plugins for Apache and IIS (and I think FastCGI), and lets you talk web services to your legacy code.
I used gSOAP in a project and it was quite straightforward. Compared to Axis/C, I found it easier to learn and use. I never used POCO, so I can't give you an opinion on it, but it's been gaining popularity recently. This is the link for gSOAP:
http://www.cs.fsu.edu/~engelen/soap.html
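For reference, a gSOAP client ends up quite compact. The sketch below mirrors gSOAP's calculator example and assumes a WSDL exposing an `add` operation; the generated file and class names depend on your WSDL:

```cpp
// Hypothetical gSOAP client. Typical generation steps:
//   wsdl2h -o calc.h http://example.com/calc.wsdl   (WSDL -> annotated header)
//   soapcpp2 -j calc.h                              (header -> C++ proxy classes)
#include "soapcalcProxy.h"   // generated proxy class (name depends on the WSDL)
#include "calc.nsmap"        // generated namespace table
#include <iostream>

int main() {
    calcProxy service;   // generated proxy
    double result = 0.0;
    if (service.add(1.0, 2.0, result) == SOAP_OK)
        std::cout << "sum = " << result << "\n";
    else
        service.soap_stream_fault(std::cerr);   // print SOAP fault details
    return 0;
}
```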
I know that some big players have embraced it and are actually exposing some of their services in APP compliant way, already. However, I haven't found many other (smaller) players in this field. Do you know any web application/service that uses APP as its public API protocol? What is your own take on AtomPub? Do you have any practical experiences using it? What are its limitations and drawbacks? Do you prefer AtomPub as your REST style or do you have some other favourite one? And why?
I know, these are many questions, not just one. What I'm interested in here is simple, though: how did the APP standard hit the market, and in particular, how is its adoption among web developers going?
The company that I work for is developing a lot of RESTful services. However, none of them expose public APIs (in the sense that all services are consumed internally by our own clients). The reason we went for the REST architectural style was that we wanted our services to be easily consumable and, more importantly, to scale well.
From my own practical experience I have come to the conclusion that HTTP + the Atom syndication format is a good idea, provided you want to keep things flexible (in terms of different content models, attaching and extending metadata associated with payloads, uniform parsing, etc.). Atom ensures that everybody interprets the payload in a uniform manner without any scope for ambiguity.
However, if one does not have any such complex requirements, or does not foresee them, then the Atom format can be a bit of an overhead. (For instance, elements like author and title make more sense in the blogging/RSS world and may not make sense in your particular problem domain.)
Also, if the goal is just to serialize data structures at one end and reconstruct them at the other end, then most web frameworks (like WCF) have custom formats which are more appealing.
So in my opinion AtomPub is good if you need flexibility in terms of data representation and if the playing field is huge, with many different kinds of clients.
However if you have a good knowledge of potential clients and server/client usage patterns then custom formats might be a good idea.
If the client is browser-based, then formats like JSON are very appealing.
Hope this answers your question.
My own research so far:
WordPress has supported AtomPub as its API protocol since version 2.3
GData is probably the biggest shot in the AtomPub field so far
Habari - a promising new blogging system that promotes APP as one of its main features
BlogSvc.net - an AtomPub server and blog engine for the .NET platform, written in C#
Jangle - an open source project designed to facilitate API access to library systems
There's also mod_atom - an Apache module that stores entries in the filesystem.
Last time I checked (2007 or so) Atompub was fairly complex to implement. While you can whip together something that emits valid Atom feeds during the lunch break, implementing AtomPub was a fairly big undertaking.
That might have changed due to better libraries and tools, but it still might be too complex to be implemented by smaller sites just because it's cool.
And the lack of killer AtomPub client applications puts little or no pressure on server operators to offer an AtomPub interface.
The situation:
We have a library project that houses much of our code for the various integrations we work on. Many of the integrations consume web service APIs, and my supervisor doesn't want 5 gazillion web service references added to the project.
What we generally do, then, is add a reference in a new project, copy the References.vb into our solution, and just call the generated code. Not terribly convenient if changes are made to the service, but it works.
Recently, I ran into a problem where we have to use 3 web services for the same integration. 2 of them contain the same class definitions; however, they're in different namespaces because they belong to different services. This became a problem for me because one of the services searches for a user based on user ID, and the other pulls back blocks of users. Both return an object (or list of objects) that is semantically identical, and I need to process the data the same way whether it came from one service or the other.
My solution was to strip out the duplicated classes in the services and replace them with classes inherited from common base classes. This allowed me to work with both objects as if they were the same; however, it required modifying the generated web service proxy, so the change will need to be made every time I regenerate the proxy.
I'm curious what you all might think a better solution to this would be.
You're going to regret playing games with copying Reference.vb and editing generated files.
Switch to WCF and you'll be able to tell it you want to reuse the types (the "Reuse types in referenced assemblies" option, or svcutil.exe's /reference switch), instead of having multiple types that are more or less the same.
BTW, they would be "less" the same if not all of the web references are updated at the same time after a server change.
The other option would be to build an abstraction layer on top of the pre-generated web service proxies, such that when you make calls to the abstraction layer you can always use the same objects, and they are squeezed into (and out of) the web service proxies inside the abstraction layer. This would also allow for unit testing :)
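The pattern looks roughly like this (shown in C++ for brevity; the two generated proxy types and their field names are hypothetical stand-ins for the duplicated classes the question describes):

```cpp
// Sketch of the abstraction-layer idea. Application code only ever
// sees CommonUser; each translate() overload "squeezes" a
// service-specific user into it.
#include <string>
#include <vector>

struct CommonUser {                 // the one type application code sees
    std::string id;
    std::string name;
};

// Stand-ins for the two generated proxy namespaces:
namespace service_a { struct User { std::string userId, fullName; }; }
namespace service_b { struct User { std::string id,     name;     }; }

CommonUser translate(const service_a::User& u) { return {u.userId, u.fullName}; }
CommonUser translate(const service_b::User& u) { return {u.id, u.name}; }

// The layer exposes uniform operations and hides which proxy was
// called underneath; it is also the seam for unit testing.
class UserGateway {
public:
    virtual ~UserGateway() = default;
    virtual CommonUser findById(const std::string& id) = 0;
    virtual std::vector<CommonUser> findAll() = 0;
};
```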
I think you really should be looking at WCF for 3.5+, but for .NET 2.0 look at something like WSCF (Web Services Contract First), which defines the contracts in XML and generates a set of libraries reusable across services. E.g. you define a MyCompany.WS.Common namespace and use that namespace in multiple projects. The code generation then builds a shared library of types which get used across all the web services. We use this extensively in our .NET 2 solutions and it's great. We had to do some additional work around the code generation to get it to fit into our build process, but once that was done we never looked back.
We're migrating to .NET 3.5 over time, so WSCF will become obsolete for us.
Here's the link to the thinktecture site for WSCF.
wsdl.exe with the /sharetypes switch allows the same types to be used across multiple service definitions, provided the wire signatures are identical. I was unable to use it in my situation, though, because the various WSDL contracts were carelessly namespaced.