I have a Spring MVC application that provides web services. It has authentication, ACLs, caching, etc.
Key question: calling services from within other services.
I am required to implement a setup which will require a full cycle for such calls, including Access Control and Caching support.
Is it possible to implement this? If so, please guide me through it, because I am stuck on finding a solution.
I am required to implement a setup which will require a full cycle for
such calls, including Access Control and Caching support
Both of these sound like cross-cutting concerns, which you can handle using Spring AOP. For example, Spring 3.1 provides a cache abstraction which allows you to annotate a service method with @Cacheable; Spring then takes care of looking up the result in your configured cache provider.
In addition, Spring Security provides the @Secured annotation, which can be used to limit who can call service methods.
If you use these aspects, you can avoid creating a service layer filled with code that does the same thing in lots of different places. One caveat: Spring applies this advice through proxies, so a call from one bean to another gets the full cycle, but a method calling another method on the same bean bypasses the proxy (and thus the advice).
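Spring does this wrapping with proxies under the hood. As a rough, framework-free sketch of the idea (the service interface, role check, and cache-key scheme below are all invented for illustration), a JDK dynamic proxy can layer both concerns around a service:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CrossCuttingDemo {
    // Hypothetical service interface -- stands in for any service bean.
    interface PriceService {
        int quote(String product);
    }

    static class PriceServiceImpl implements PriceService {
        int calls = 0;
        public int quote(String product) {
            calls++;                      // count real invocations to show caching
            return product.length() * 10; // dummy business logic
        }
    }

    // Wraps every call with an access check and a cache lookup, the same way
    // Spring's @Secured/@Cacheable proxies wrap your beans.
    static PriceService secureCachingProxy(PriceServiceImpl target, String role) {
        Map<String, Object> cache = new ConcurrentHashMap<>();
        InvocationHandler handler = (Object p, Method m, Object[] args) -> {
            if (!"ADMIN".equals(role)) {              // access-control aspect
                throw new SecurityException("access denied");
            }
            String key = m.getName() + ":" + args[0]; // caching aspect
            return cache.computeIfAbsent(key, k -> {
                try {
                    return m.invoke(target, args);
                } catch (Exception e) {
                    throw new RuntimeException(e);
                }
            });
        };
        return (PriceService) Proxy.newProxyInstance(
                PriceService.class.getClassLoader(),
                new Class<?>[] { PriceService.class },
                handler);
    }

    public static void main(String[] args) {
        PriceServiceImpl impl = new PriceServiceImpl();
        PriceService svc = secureCachingProxy(impl, "ADMIN");
        svc.quote("widget");
        svc.quote("widget");              // served from cache
        System.out.println(impl.calls);   // prints 1
    }
}
```

Note how the business logic never mentions security or caching; that separation is exactly what the Spring annotations buy you.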
For more information check out the docs here and here
We have more than 2000 business methods which we want to expose as SOAP web services. We decided to use CXF with Apache Camel for this. We want to publish all these services from the same URL, since we think managing them would be easier (especially for customers who call many web services). However, we also have some requirements:
method-based log enabling
method-based timeout setting
method-based MTOM/Base64 setting, etc.
My question is whether it is possible to publish all the services from the same URL (same SEI) while also meeting all these requirements. And if we manage to do this, will it be a good and scalable solution?
method-based log enabling
If you have 2000 methods in your business logic, I assume you also have logging. You can define multiple loggers for your web service and use them in each method as you see fit.
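For example, with plain `java.util.logging` you can give each operation its own logger and tune the levels independently (the logger names here are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class MethodLogging {
    // One logger per exposed operation; names are made up for illustration.
    static final Logger CREATE_ORDER_LOG = Logger.getLogger("ws.createOrder");
    static final Logger GET_ORDER_LOG   = Logger.getLogger("ws.getOrder");

    public static void main(String[] args) {
        // Enable verbose logging for one method only; the other stays quiet.
        CREATE_ORDER_LOG.setLevel(Level.FINE);
        GET_ORDER_LOG.setLevel(Level.WARNING);

        System.out.println(CREATE_ORDER_LOG.isLoggable(Level.FINE)); // true
        System.out.println(GET_ORDER_LOG.isLoggable(Level.FINE));    // false
    }
}
```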
method-based timeout setting
CXF allows configuring ReceiveTimeout in the server endpoint configuration, so if you use one endpoint, the timeout will be the same for all your methods.
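For reference, a server-side timeout set via CXF's `http-conf` namespace looks roughly like this (a sketch; verify the element and attribute names against your CXF version):

```xml
<!-- Applies to every endpoint matching the wildcard destination name -->
<http-conf:destination name="*.http-destination">
    <http-conf:server ReceiveTimeout="30000"/>
</http-conf:destination>
```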
method-based MTOM/Base64 setting, etc.
MTOM is enabled or disabled on the JAX-WS server, and the methods must use a specific DataHandler to support it. One endpoint means one MTOM configuration.
In this link you can see the HTTP transport variables set by the server. Other utilities such as interceptors, the bus, and fault handlers are also configured on the JAX-WS server. Check here if any of them are of interest.
will it be a good and scalable solution?
As @kolossus stated, 2,000 methods is a weird solution. I do not think you will have performance problems, but it will be difficult to develop and maintain. Consider also providing a prebuilt client, instead of only the WSDL, that encapsulates the several endpoints.
I have a requirement to build a service endpoint to provide specific Sitecore 8.0 items (containing a given field value in a given branch of the content tree) to requesting mobile app clients. Encapsulating this logic (and perhaps some other calculations, etc) means the out-of-the-box API is not suitable.
I'd like to mimic an existing SOAP service exposed by another CMS, however I'm not above using a modified version of the RESTful itemWebApi if it confers greater code reusability or upgrade-safety.
Based on my research thus far, it would appear my options are to build a custom handler, a completely separate .asmx service (à la this approach), or a custom controller (similar to this custom Web API controller method).
Overriding or replacing the default pipeline processors for the itemWebApi does not seem viable, as I don't want to replace/modify the OOB API if I can avoid it.
Has anyone with the same type of requirement for Sitecore 8 found a better approach?
The approach I chose was to create a separate service "router" developed using the adapter pattern to be consumed by our mobile app clients. The router in turn calls the Sitecore ItemWebApi.
This fit my needs the best as it is completely decoupled from the Sitecore application and the client can be modified if necessary without impacting the endpoint.
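The router/adapter idea can be sketched like this (the payload fields, names, and return shapes are invented; the real ItemWebApi returns a much richer JSON structure over HTTP):

```java
import java.util.Map;

public class ItemRouterDemo {
    // Hypothetical shape of what the mobile clients consume.
    record MobileItem(String id, String title) {}

    // Adapter target: the contract the router exposes to clients.
    interface ItemService {
        MobileItem getItem(String id);
    }

    // Stand-in for the Sitecore ItemWebApi; in reality this is an HTTP call.
    static class ItemWebApiClient {
        Map<String, String> fetchRaw(String id) {
            return Map.of("ItemID", id, "DisplayName", "Home");
        }
    }

    // The "router": adapts the raw ItemWebApi payload to the client-facing
    // model, so the mobile contract stays stable even if the backing API changes.
    static class ItemRouter implements ItemService {
        private final ItemWebApiClient backend = new ItemWebApiClient();
        public MobileItem getItem(String id) {
            Map<String, String> raw = backend.fetchRaw(id);
            return new MobileItem(raw.get("ItemID"), raw.get("DisplayName"));
        }
    }
}
```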
It would also be worth looking at EntityService, part of Sitecore.Services.Client in Sitecore 8. It's a Web API-based service, but it has more flexibility than the standard Sitecore Item Web API because you define the model and the business logic yourself.
I can see you have mentioned my other blog post on adding a custom Web API controller. EntityService is different: it's a framework by Sitecore that provides a standard way of creating custom web services for Sitecore.
I have written a blog post on EntityService. It has both a JavaScript API and a standard REST-based API for communicating with the service.
http://mikerobbins.co.uk/2015/01/06/entityservice-sitecore-service-client/
Example Application here: https://github.com/sobek1985/EntityServiceDemo
And a few more posts on advanced features in Entity Service: http://mikerobbins.co.uk/category/sitecore/sitecore-service-client/
I am proxying a huge number of web services using JBoss Fuse ESB.
I am using a content-based router to decide which real web service to call. But if a new service is deployed in the backend, I am forced to change the proxy details (WSDL) and expose the interface,
which forces clients to regenerate their client code.
Is there any other solution that would let me address this problem at the design level?
Some general thoughts on this; I would need more detail to give solid advice.
You are proxying the services, so you are not abstracting them away; you are exposing them rather directly to the outside via the service on Fuse ESB.
Typically you would use an ESB to abstract providers and consumers away from each other. This means that you won't expose/proxy the services directly. For example, you would create generic operations and data structures, which would then let you map the generic interface onto the web service implementations you are providing.
Another approach would be to version the different WSDLs and thus have different versions of the services out there. This would allow you to have clients consume the older WSDLs and then migrate them over bit by bit.
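The generic-contract idea above can be sketched with a single `invoke(operation, payload)` entry point (all names invented for illustration): new backend services plug in as handlers, and the client-facing contract never changes.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class GenericFacadeDemo {
    // One generic contract exposed to clients: operation name + payload.
    // Clients generate their stubs once; backends come and go behind it.
    static class GenericFacade {
        private final Map<String, Function<String, String>> handlers = new HashMap<>();

        void register(String operation, Function<String, String> handler) {
            handlers.put(operation, handler);
        }

        String invoke(String operation, String payload) {
            Function<String, String> h = handlers.get(operation);
            if (h == null) {
                throw new IllegalArgumentException("unknown operation: " + operation);
            }
            return h.apply(payload);
        }
    }

    public static void main(String[] args) {
        GenericFacade esb = new GenericFacade();
        esb.register("echo", p -> p);
        // Deploying a new backend service is just another register() call --
        // the exposed interface (and WSDL) stays the same.
        esb.register("upper", String::toUpperCase);
        System.out.println(esb.invoke("upper", "hello")); // prints HELLO
    }
}
```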
We're setting up a website that schedules video-conferencing sessions for end-users (using our own technology). We're interested in providing access to this functionality to "corporate clients" to use through their own site.
Initially, we were thinking of having an API key given to each corporate client, and modules could be built in any language to fetch the data from our site. However, our requirements are changing and we're exploring how the data should still be visible to the user of the 'corporate client' even if a network disconnection takes place between their server and ours.
What are the mechanisms by which a website can provide access to its data / functions to other websites?
I would suggest REST. REST is a lightweight architectural style designed to facilitate access to resources over HTTP/HTTPS.
REST's constraints call for, among other things, separation of components and language-agnostic interfaces, so your clients won't have to worry about using Java just because you're using Java, for example. Additionally, REST web services are supposed to be cacheable, which may help with your desire to tolerate network issues.
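As a minimal sketch of a cacheable REST resource using only the JDK's built-in HTTP server (the path and payload are invented; the `Cache-Control` header is what lets clients and intermediaries keep serving the data during brief outages):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class RestSketch {
    // Exposes one read-only resource with an explicit caching policy.
    static HttpServer start() throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/sessions/42", exchange -> {
            byte[] body = "{\"id\":42,\"status\":\"scheduled\"}"
                    .getBytes(StandardCharsets.UTF_8);
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            // Allow clients/proxies to cache this representation for 60 seconds.
            exchange.getResponseHeaders().add("Cache-Control", "max-age=60");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws IOException {
        HttpServer server = start();
        URL url = new URL("http://localhost:" + server.getAddress().getPort()
                + "/sessions/42");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            System.out.println(conn.getHeaderField("Cache-Control")); // max-age=60
            System.out.println(new String(in.readAllBytes(), StandardCharsets.UTF_8));
        }
        server.stop(0);
    }
}
```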
You can learn more about REST here:
http://en.wikipedia.org/wiki/Representational_State_Transfer
I need to develop/design a ColdFusion web service which uses a few object calls and functions.
What is a good source of samples to develop from, in terms of OOP?
What is the best way to secure the web service?
How do I authenticate external/internal users? Any samples?
FYI, this web service is going to be used by multiple departments.
thanks
A
OOP examples are all over the web. I don't have any handy, so I'll skip that part, and go straight to authentication and security.
First, authentication. There are several possible answers depending on what kind of users you are authenticating. For example, if you are authenticating users connecting via a 3rd-party tool -- like a desktop or phone app posting to Twitter -- I would say that OAuth is a good solution. There is a good library for both publishing and consuming OAuth integrations at oauth.riaforge.com. If you are looking for something lighter-weight, we used a simple token-creation scheme for a web service that was only consumed by partner services. Basically, the partner service sends what amounts to a username and password pair, a token is created with a "last used" timestamp, and every time the web service interacts after that, we check against the token store.
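That token scheme can be sketched like this (the credentials, TTL, and names are placeholders; a real version would hash credentials and use a persistent token store):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class TokenStore {
    // token -> "last used" timestamp in millis
    private final Map<String, Long> tokens = new HashMap<>();
    private final long ttlMillis;

    TokenStore(long ttlMillis) { this.ttlMillis = ttlMillis; }

    // Issue a token once the partner service authenticates with its credentials.
    String issue(String user, String password) {
        if (!"partner".equals(user) || !"secret".equals(password)) { // stand-in check
            throw new SecurityException("bad credentials");
        }
        String token = UUID.randomUUID().toString();
        tokens.put(token, System.currentTimeMillis());
        return token;
    }

    // Every later call validates the token and refreshes its "last used" time.
    boolean check(String token) {
        Long lastUsed = tokens.get(token);
        if (lastUsed == null || System.currentTimeMillis() - lastUsed > ttlMillis) {
            tokens.remove(token);
            return false;
        }
        tokens.put(token, System.currentTimeMillis());
        return true;
    }
}
```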
Security is, similarly, very dependent on your end goals. However, there are a few basic principles I've always tried to follow. First, build your basic CFCs as you normally would for constructing your objects: entry points public, helper functions private, etc. This includes building an object to handle whatever authentication model you choose. On top of that, build your public API. These should all simply be access functions: they are called by outside applications, call the security object, then call the appropriate objects and methods to achieve the goal of the call. This way, you never have to bake the security layer into your base functionality, but you still have an easy way to include security. Remember, a single API call does not have to reflect a single base call -- you can build more complex routines if needed.
So, to recap.
Authentication
OAuth
Temporary Token Generation
Security
private/public (not remote) base layer
private/public (not remote) authentication layer
remote API layer
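The layering in the recap can be sketched like this (all names invented; in ColdFusion the base and authentication CFCs would use public/private access, with only the API layer marked remote):

```java
public class ServiceLayers {
    // Base layer: plain business logic, never exposed directly ("not remote").
    static class OrderBase {
        String fetchOrder(String id) { return "order-" + id; }
    }

    // Authentication layer: also internal-only.
    static class Auth {
        boolean isValid(String token) { return "good-token".equals(token); }
    }

    // Remote API layer: the only entry point callers see. It consults the
    // security object first, then delegates to the base call -- security is
    // layered on top instead of baked into every business method.
    static class OrderApi {
        private final Auth auth = new Auth();
        private final OrderBase base = new OrderBase();

        String getOrder(String token, String id) {
            if (!auth.isValid(token)) throw new SecurityException("not authorized");
            return base.fetchOrder(id);
        }
    }
}
```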