I am using the Staff Library built on top of the Apache Axis2c library to implement a service, which I intend to be stateless.
As I understand it, whether a service is stateful or not (i.e. "whether the interaction of client and server involves the server keeping track of the interaction-specific data as each subsequent interaction may depend upon the outcome of the previous interaction") depends on how the service architecture is implemented.
As I understand it, it is perfectly possible for me to use Staff to create a stateless service. Why then does the 'features' page of Staff explicitly mention "stateful Web services implementation"? Does it even make sense for the Staff library to do that?
Maybe it's not truly stateful.
The reason it's called "stateful" is that the server stores service instances per session. Your service can store local data as class properties, and you will see the same data while working within the same session.
When you log in and use the sessionId provided by the Login service, the server will lazily create service instances bound to that sessionId. So for two different sessions you will have two service instances with different property values. This can be very useful if you are working with large objects which can't be initialized and destroyed on each call, for example in geographic information systems.
Stateless Web service frameworks usually create a service instance per client call, and nothing can be stored locally, unless you use some global mechanism like shared memory or a DBMS and pass some id to distinguish one client from another.
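To make that concrete, here is a minimal sketch (in Java, with invented names; this is not Staff's actual API) of what "one service instance per session" means in practice:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of per-session instance management; this is not
// Staff's real API, just an illustration of the "stateful" idea.
class CalculatorService {
    // Session-local state: survives across calls within the same session.
    private double lastResult = 0.0;

    double add(double x) {
        lastResult += x;   // depends on the outcome of previous calls
        return lastResult;
    }
}

class SessionRegistry {
    private final Map<String, CalculatorService> instances = new ConcurrentHashMap<>();

    // Lazily create one service instance per sessionId, exactly as described
    // above: two sessions get two instances with separate state.
    CalculatorService forSession(String sessionId) {
        return instances.computeIfAbsent(sessionId, id -> new CalculatorService());
    }
}

public class Demo {
    public static void main(String[] args) {
        SessionRegistry registry = new SessionRegistry();
        System.out.println(registry.forSession("alice").add(2)); // 2.0
        System.out.println(registry.forSession("alice").add(3)); // 5.0 (same instance)
        System.out.println(registry.forSession("bob").add(7));   // 7.0 (fresh instance)
    }
}
```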
Is there a common way to establish a network connection from a CloudFoundry service to a CloudFoundry app which the service is bound to?
Typically, apps receive their binding credentials and establish network connections to the provisioned services, for example databases.
It would be very handy to establish a connection from a service to an app, so the service could scrape endpoints that are provided by the app.
Any thoughts on this? Why is it (or isn't it) possible, and why could it be a bad idea?
Normally, you have your service and the application receives credentials from the service through the service binding (i.e. VCAP_SERVICES).
You want to reverse this arrangement, which is fine, but the service will need to have some way to know how to reach the applications. The way to do this would be through routes bound to your application.
I have seen something like this done before, this is roughly the process. I'm sure you can adapt it to your requirements.
Create a service broker. The broker is responsible for managing service instances and service credentials. The broker is notified when an instance is created and when a binding occurs. Your broker will need to handle these requests.
In addition to its normal responsibilities, the broker is going to need to maintain state indicating which applications have instances and bindings. The broker is also going to need to take the org/space/app GUIDs it is given through the service broker API and talk to the CloudFoundry API to fetch the routes for the applications that are bound to it. You don't usually get these through the service broker API, but since you want to talk to the applications from the service, you need this information. It gives the service a way to communicate with the application.
Your broker may also provide the service in question (i.e. talking to applications), or it can delegate to some other process/container/VM to provide the service. If your service does the latter, then you need a way to a.) create the process/container/VM and b.) pass along the information it requires to talk to your application.
Obviously, you need to code the logic that will take the routes for applications that have created instances and bindings and communicate with them.
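To make the broker piece concrete, here is a very rough sketch using only the JDK's built-in HTTP server. The PUT path mimics the v2 service broker API, but the port, response bodies, and everything else are assumptions, not a working broker:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal service-broker sketch. The PUT paths mimic the v2 service broker
// API (/v2/service_instances/:id for provision, .../service_bindings/:id
// for bind), but request parsing, auth, and the catalog endpoint are omitted.
public class MiniBroker {
    // Broker-side state: which instances/bindings exist. As noted above,
    // the broker must remember which applications have instances and bindings.
    static final Map<String, String> instances = new ConcurrentHashMap<>();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/v2/service_instances", exchange -> {
            if ("PUT".equals(exchange.getRequestMethod())) {
                // A real broker would parse the JSON body here for the
                // org/space/app GUIDs and store them, so it can later ask
                // the CloudFoundry API for the app's routes.
                instances.put(exchange.getRequestURI().getPath(), "provisioned");
                byte[] body = "{}".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(201, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            } else {
                exchange.sendResponseHeaders(405, -1);
            }
        });
        server.start();
        System.out.println("Broker listening on :8080");
    }
}
```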
There can be some limitations with using the routes. First, not all routes are public. For internal routes, it would be kind of complicated to allow the broker/service to talk to the app. The broker/service would need to be an application on CF and you would need to specifically allow that communication (would require more API calls). Second, some apps just don't have routes. Perhaps this won't happen in your case, but it's worth considering. Lastly, not all routes are HTTP, some can be TCP as well. Your broker/service would need to handle both of those.
A variation on the above process, instead of using routes or talking to the API, you could have your broker/service provide some mechanism through the credentials to the application such that it registers itself with the broker/service. Thus when your applications start, they'll read the service info, register with the service and then go about their business. In this way, the application would have some additional flexibility about what information it provides when it registers with the broker/service. The downside is that the app has to do some work to be compatible.
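A hedged sketch of that registration variant, on the application side, might look like this; the registration endpoint, environment variable names, and JSON body are all invented for illustration:

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Hypothetical self-registration at app startup: the app reads the service
// credentials (in reality parsed out of the VCAP_SERVICES JSON) and
// announces its own URL to the broker/service. The "/register" endpoint
// is an assumption, not a CF standard.
public class RegisterOnStartup {
    public static void main(String[] args) throws Exception {
        // A real app would JSON-parse System.getenv("VCAP_SERVICES") to
        // find the broker's registration URL and a shared secret.
        String registrationUrl = System.getenv().getOrDefault(
                "BROKER_REGISTRATION_URL", "http://broker.example.com/register");
        String myUrl = System.getenv().getOrDefault(
                "MY_ROUTE", "https://my-app.example.com");

        HttpURLConnection conn =
                (HttpURLConnection) new URL(registrationUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        String body = "{\"appUrl\":\"" + myUrl + "\"}";
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Registered with broker: HTTP " + conn.getResponseCode());
    }
}
```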
Following is a point mentioned in a presentation slide related to SOA, and it confuses me about the concepts of service orchestration and service choreography. To enable service choreography, shouldn't a web service be able to call another web service?
SOA builds applications out of software services. Services comprise intrinsically unassociated, loosely coupled units of functionality that have no calls to each other embedded in them.
In theory, a service can do anything it needs to do to accomplish its job. So there doesn't seem to be a good reason to forbid using a second service to do your work. Why reinvent the wheel?
In practice, the issue is more complicated. If you start calling other services on your own web server, then you'll eventually starve it of resources. At best, "real" clients will have to wait a bit longer for their answers while your web service server plays with itself.
Another issue is recursive loops: Service A calls B calls C calls A calls B ... you get the idea. A small change in one service can introduce such a loop without anyone noticing and it can sit there for a long time until it suddenly kills your server.
That is why you should build micro services in a hierarchy inside the server (i.e. below the web service layer - this is not exposed to clients). Those micro services can use each other in a top-down manner (to avoid the loops). Unit tests then make sure they behave properly.
Lastly, such reuse is very slow. Each HTTP request takes a lot of resources to create, send, parse and process. Calling an internal method directly can be 10 - 10000 times faster.
These are the main reasons why the services exposed by a single server shouldn't reuse each other via the "public client API".
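As a small illustration of that hierarchy (all class names invented), the exposed web service layer calls internal services directly, top-down, instead of going back through HTTP:

```java
// Sketch of the recommended hierarchy: the web service layer is the only
// thing exposed to clients; internal "micro services" call each other
// directly and top-down (so no recursive loops) instead of looping back
// through HTTP. All names here are illustrative.
class PriceService {                  // lowest layer: depends on nothing
    double basePrice(String sku) { return 9.99; }
}

class TaxService {                    // middle layer: may use lower layers only
    private final PriceService prices;
    TaxService(PriceService prices) { this.prices = prices; }
    double priceWithTax(String sku) { return prices.basePrice(sku) * 1.19; }
}

public class QuoteEndpoint {          // web service layer: exposed to clients
    private final TaxService tax = new TaxService(new PriceService());

    // A direct in-process call; an HTTP round trip back into our own server
    // for the same result would cost orders of magnitude more.
    public String quote(String sku) {
        return "{\"sku\":\"" + sku + "\",\"price\":" + tax.priceWithTax(sku) + "}";
    }

    public static void main(String[] args) {
        System.out.println(new QuoteEndpoint().quote("ABC-123"));
    }
}
```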
Note: There are web services which build new services by using existing ones. IFTTT - "If This Then That" is one such beast.
You can adapt either concept to your needs. In my current project we have a separate module that is responsible for orchestration. This is required since, in real-life usage, scenarios can be very complicated, so in order to stay close to the actual management of your system, you need such a module.
Another advantage of this approach is that separation of concerns is preserved. It also aligns business requests with the applications, data, and infrastructure that you have, and it defines policies and service levels through automated workflows, provisioning, etc.
Orchestration is critical in the delivery of cloud services too, as they are networked to allow sharing of data-processing tasks, centralized data storage, and online access to services or resources.
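As a sketch of what such an orchestration module might look like (all service names invented), the workflow logic lives in one place while the services themselves stay unaware of each other:

```java
// Hypothetical orchestration module: it owns the workflow (order of calls,
// policies), while the individual services have no calls to each other
// embedded in them. All names are invented.
interface InventoryService { boolean reserve(String sku); }
interface PaymentService  { boolean charge(String customer, double amount); }
interface ShippingService { void ship(String sku, String customer); }

public class OrderOrchestrator {
    private final InventoryService inventory;
    private final PaymentService payment;
    private final ShippingService shipping;

    OrderOrchestrator(InventoryService i, PaymentService p, ShippingService s) {
        inventory = i; payment = p; shipping = s;
    }

    // The orchestrator is the only place that knows the business workflow;
    // separation of concerns is preserved because each service stays simple.
    boolean placeOrder(String customer, String sku, double amount) {
        if (!inventory.reserve(sku)) return false;
        if (!payment.charge(customer, amount)) return false; // a real module would release the reservation here
        shipping.ship(sku, customer);
        return true;
    }

    public static void main(String[] args) {
        OrderOrchestrator o = new OrderOrchestrator(
                sku -> true,
                (c, amt) -> true,
                (sku, c) -> System.out.println("shipped " + sku));
        System.out.println(o.placeOrder("alice", "ABC-123", 19.99));
    }
}
```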
We have internal services in our application, which are basically developed as Thrift RPC services. Now, I need to expose these services to the client applications, which are outside of the core system.
Now, the question is:
Should I expose these Thrift services directly to the client? The advantage of doing so would be the least amount of work required. The disadvantage would be that the clients need to connect to these Thrift APIs as well as to another interface which already exists, so the client applications would actually need to open more than one socket to connect to the core system.
An alternative option would be to wrap these Thrift services in another layer, which would ultimately be delivered to the end clients. The disadvantage of doing this: marshalling/unmarshalling the data twice, once with Thrift and again with the other interface.
What should be the preferred way of handling this situation?
We would not expose these services directly to outside clients. We would build or use an application to configure a proxy that the external clients could connect to.
The advantages to this are:
No need to punch a hole in your firewall
Possibility to do an extra security check
Possibility to throttle access to the internal service
Less chance of a hacker being able to exploit the service
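For illustration, a minimal pass-through proxy along those lines could look like the following sketch; the host names and ports are placeholders, and a real deployment would add TLS, authentication checks, and throttling (the advantages listed above):

```java
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

// Minimal TCP pass-through proxy sketch: external clients connect to the
// proxy, which forwards bytes to the internal Thrift service. Host/port
// values are placeholders for illustration only.
public class ThriftProxy {
    public static void main(String[] args) throws Exception {
        try (ServerSocket listener = new ServerSocket(9090)) {
            while (true) {
                Socket client = listener.accept();
                Socket backend = new Socket("internal-thrift-host", 9091);
                pump(client, backend);  // client -> service
                pump(backend, client);  // service -> client
            }
        }
    }

    // Copy bytes one way on a background thread until the stream closes.
    static void pump(Socket from, Socket to) {
        new Thread(() -> {
            try (InputStream in = from.getInputStream();
                 OutputStream out = to.getOutputStream()) {
                in.transferTo(out);
            } catch (Exception ignored) {
            }
        }).start();
    }
}
```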
Wondering what others do / best practice for communicating between layers. This question relates to communication between layers 2-3 and 3-4.
Our Basic Architecture (in order) as follows:
UI
Front End Business Classes
Web Services
Back End Business Classes
DAL
The web services are just a façade that includes logging and authentication in front of the back-end class libraries.
As such, the web service is passed a request object that includes the parameters required by the web method, along with the user credentials (the credentials, for example, are stored in a base class, as we will always need to pass them to the web service). It responds with response objects (which carry things such as a status and, if the call failed, a message, along with the object required). Both request and response use a custom generic class/interface where only one result is returned; otherwise a dedicated class needs to be created.
Sometimes it makes sense to do this for the response object at layer 4 (though we don't use a request object unless a lot of parameters need to be passed), in which case we just have an adapter class in layer 3 which returns this to the client. For consistency I have considered doing this all the time, though I think it may be overkill.
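For what it's worth, a sketch of the generic request/response envelope described above might look like this (in Java rather than .NET, and with invented names):

```java
// Sketch of the request/response envelope pattern described above. Names
// are invented, and the thread's actual stack is .NET; this is Java purely
// for illustration.
class Request<T> {
    final String username;      // credential carried on every request,
    final String passwordHash;  // as kept in the base class mentioned above
    final T payload;            // the web-method parameters

    Request(String username, String passwordHash, T payload) {
        this.username = username;
        this.passwordHash = passwordHash;
        this.payload = payload;
    }
}

class Response<T> {
    final boolean success;
    final String message;       // failure details, if any
    final T result;             // the single result object

    Response(boolean success, String message, T result) {
        this.success = success;
        this.message = message;
        this.result = result;
    }

    static <T> Response<T> ok(T result) { return new Response<>(true, "", result); }
    static <T> Response<T> fail(String msg) { return new Response<>(false, msg, null); }
}

public class EnvelopeDemo {
    public static void main(String[] args) {
        Request<String> req = new Request<>("jdoe", "s3cret-hash", "customer-42");
        Response<String> res = Response.ok("record for " + req.payload);
        System.out.println(res.success + " / " + res.result);
    }
}
```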
So to reiterate the question: what are the best practices for communicating between layers? Do people use the method outlined above (it works well for us), and should layers 3-4 implement a similar method to layers 2-3?
Possible considerations:
currently everything is coded in house by a team of developers, some client code may be outsourced in the future
future web services will be WCF based (not sure if that affects the design, other than coding to interfaces, which I would prefer anyway)
We use .NET
For the sake of completeness:
It seems a good idea to have the request/response classes in the class library; that way, if you want to change the web service to WCF, there is less work to do.
What kind of service do you people see in real projects?
1) Web services MUST be stateless: basically, you must send username/password with every request, every request must use HTTPS, and I will authenticate and load the User object every time, if needed.
2) A session for web services: like in a web container, so I can at least save the authenticated User object and have something similar to a session ID, so that I don't need to authenticate, load and check the User on every request.
3) Sticky Service (persistent service across requests): https://jax-ws.dev.java.net/nonav/2.1/docs/statefulWebservice.html
I understand the scalability problems of stateful services (and of web application sessions), but sometimes you must have some kind of state, for example a shopping cart. You can also put this state in the database (using the back end as a kind of session, argh) or pass the entire state to the client (making the client responsible for resending the entire shopping cart).
The truth is, at least for web applications, the session helps a lot in many situations. Scalability issues can be ignored if your system accepts that "the user must start over whatever he is doing if his web server happens to go down", or you can try a session cluster if that's unacceptable.
How is it for web services? I am inclined to conclude that web services are very different from web applications and to accept option 1) (always stateless), but it would be nice to hear other opinions based on real project experience.
While it's only a small difference, it should still be mentioned:
It's not state in web services that kills scalability; rather, it's state on the App Server that's hosting the web services that will kill scalability. The moment you say that this user needs to access this particular server (as done in sticky sessions), you are effectively limiting your scalability options. The point you want to get to is that any of your free, load-balanced App Servers can handle a given web service request, so that adding one more App Server lets you handle proportionally more users.
If you want to maintain state, it's totally fine (and personally recommended) to pass in an authentication token and, on each request, have the service retrieve your 'state' from a data store (preferably a redundant and partitioned one, e.g. a distributed, replicated key/value data store). That's how Amazon does it with SimpleDB and Google with BigTable.
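A minimal sketch of that pattern follows; the in-memory map merely stands in for an external store such as Redis so the example runs standalone:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the pattern described above: the app server keeps no per-user
// state in memory; every request carries a token, and "state" is fetched
// from a store keyed by that token. The ConcurrentHashMap stands in for a
// redundant, partitioned key/value store (e.g. Redis) purely so this
// example runs standalone.
public class StatelessCartService {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    // Any load-balanced app server can handle this call, because the
    // state lives in the shared store, not in server memory.
    String addToCart(String authToken, String item) {
        String cart = store.getOrDefault(authToken, "");
        cart = cart.isEmpty() ? item : cart + "," + item;
        store.put(authToken, cart);
        return cart;
    }

    public static void main(String[] args) {
        StatelessCartService svc = new StatelessCartService();
        svc.addToCart("token-abc", "book");
        System.out.println(svc.addToCart("token-abc", "pen")); // book,pen
    }
}
```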
eBay takes a slightly different approach and stores most of the client's state in a cookie, so it gets passed in with every request. Although this generates a lot more traffic, it is still scalable, as any of their servers can handle the request.
If you want a scalable data store, I would recommend looking at Redis; it has speed and features that are hard to beat in a key/value data store.
You should also check out highscalability.com if you want access to good material on how to build fast and scalable services.
Ideally webservices (and web sites) should be stateless.
Unfortunately, this requires a very well-thought-out problem domain and a clear separation of concerns.
I've found that in practice most real-world web sites depend on state even though this limits their scalability.
I've also found that many real-world web-services also rely on state.
Ultimately the 'right' decision is the one that works for the specific problem, so it's probably okay to write a webservice that relies on state, and refactor it later if scalability becomes an issue.
It depends heavily on whether the service is single-transaction oriented (say, getting stock quotes) or whether its output depends on data provided by a particular client across multiple transactions (in which case state must be maintained).
As far as scalability issues go, storing state in a database isn't actually a bad way to go (in fact, it's probably the only way to go if you're load balancing your service across a server farm).
I think with Flex clients the state is moved out of the service and into the client tier. Keep the services stateless and let the clients maintain the state needed. The services stay simple, and the clients are free to mash them together as they wish.
You seem to be equating state and authentication. Perhaps you're accustomed to storing username and password in session state?
This is not necessary, even with old ASMX web services. Simply pass whatever information you need to your "Login" operation. This operation will be defined to return an "Authentication Ticket" header.
All other operations that require authentication will require this "Authentication Ticket" header. They will each check the header to see if it represents a valid, authenticated user. If so, then they will perform their task. If not, then they will return a SOAP Fault indicating that authentication is required.
No state is required. Simply make sure that the authentication ticket can be validated on any server your service runs on (for instance, in a web farm), and you'll be fine.
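As a sketch of such a self-validating ticket (the HMAC scheme and key handling here are illustrative assumptions; a real ticket would also carry an expiry):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of a self-validating authentication ticket, as suggested above:
// any server that shares the secret key can verify the ticket, so no
// session state is needed on any particular machine.
public class AuthTicket {
    private static final byte[] KEY =
            "shared-secret-known-to-all-servers".getBytes(StandardCharsets.UTF_8);

    // Issued by the "Login" operation after checking the user's credentials.
    static String issue(String username) throws Exception {
        return username + ":" + sign(username);
    }

    // Every other operation checks the header like this before doing work.
    static boolean isValid(String ticket) throws Exception {
        int sep = ticket.lastIndexOf(':');
        if (sep < 0) return false;
        String username = ticket.substring(0, sep);
        return ticket.substring(sep + 1).equals(sign(username));
    }

    private static String sign(String data) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(KEY, "HmacSHA256"));
        return Base64.getEncoder()
                .encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String ticket = issue("jdoe");                  // returned by Login
        System.out.println(isValid(ticket));            // true
        System.out.println(isValid("jdoe:forged=="));   // false
    }
}
```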