I know that a Stateful Session Bean (SFSB) may receive concurrent requests from a particular client, but the container serializes those requests and executes them one after another.
The question is: can you configure the EJB container to allow truly concurrent access to an SFSB?
I know that there is @AccessTimeout, which lets me configure what happens when the SFSB is accessed more than once at the same time by a particular client. However, it only lets me choose between rejecting concurrent access outright and having the container serialize the requests.
Does the EJB specification forbid such a thing? I know I can achieve concurrent access with a Singleton EJB using @ConcurrencyManagement, but I'm just curious whether it's possible to set some vendor-specific configuration property to allow this behavior for SFSBs.
Thanks in advance!
Just last month a JIRA issue was filed that proposes exactly this: http://java.net/jira/browse/EJB_SPEC-24
The EJB specification does not forbid vendor extensions, so in theory a vendor could implement an extension that allows stateful session beans to be accessed concurrently. In practice, I'm not aware of any that do.
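For reference, here is a minimal sketch of what the spec does allow today; the bean names and the timeout value are illustrative, not taken from the question:

    import java.util.concurrent.TimeUnit;
    import javax.ejb.AccessTimeout;
    import javax.ejb.ConcurrencyManagement;
    import javax.ejb.ConcurrencyManagementType;
    import javax.ejb.Singleton;
    import javax.ejb.Stateful;

    // On a SFSB, @AccessTimeout only governs what happens while the container
    // serializes concurrent calls: 0 rejects a concurrent call immediately,
    // a positive value makes the caller wait up to that long.
    @Stateful
    @AccessTimeout(value = 30, unit = TimeUnit.SECONDS)
    class ShoppingCartBean {
        // conversational state lives here
    }

    // Truly concurrent access is only specified for singletons; bean-managed
    // concurrency hands synchronization over to your own code.
    @Singleton
    @ConcurrencyManagement(ConcurrencyManagementType.BEAN)
    class CounterBean {
        // must be made thread-safe by the bean itself
    }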
I understand the basic differences between the two, but specifically I was wondering: can you pass arguments to both, or just to the stateless server? Also, can you use caching to improve the performance of a stateless server?
Yes, you can pass arguments to both stateless and stateful web services (stateful services would be rather pointless without them). For example, you can use stateful services to implement a shopping cart where one method just adds a single item to the cart, the server already knowing what other items you have. You can use the HTTP session to implement this.
An example for JAX-WS on WebLogic is described here.
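As a rough, container-agnostic sketch of that idea (the service and attribute names are made up), a JAX-WS endpoint can keep the cart in the HTTP session, so each call only carries the new item:

    import java.util.ArrayList;
    import java.util.List;
    import javax.annotation.Resource;
    import javax.jws.WebMethod;
    import javax.jws.WebService;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpSession;
    import javax.xml.ws.WebServiceContext;
    import javax.xml.ws.handler.MessageContext;

    // Hypothetical stateful cart: the client sends one item per call and the
    // server remembers the rest in the HTTP session.
    @WebService
    public class CartService {

        @Resource
        private WebServiceContext context;

        @WebMethod
        public List<String> addItem(String item) {
            HttpServletRequest request = (HttpServletRequest)
                    context.getMessageContext().get(MessageContext.SERVLET_REQUEST);
            HttpSession session = request.getSession(true);

            @SuppressWarnings("unchecked")
            List<String> cart = (List<String>) session.getAttribute("cart");
            if (cart == null) {
                cart = new ArrayList<String>();
                session.setAttribute("cart", cart);
            }
            cart.add(item);
            return cart; // the full cart, assembled from earlier calls
        }
    }

Note that the client has to keep the session cookie (for example via BindingProvider.SESSION_MAINTAIN_PROPERTY in JAX-WS) for this to work.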
Caching is a wholly different issue, and it depends a lot on what you would like to cache. But in general, yes, it's possible to cache stateless services.
To what degree should web service providers limit implementation changes without creating a new service version? One view is that, as long as the contract is upheld, the service owner should be free to update the implementation as needed. Schemas are not always airtight, and it is foreseeable that changes within the service implementation affect the service output while still upholding the contract.
To what degree should consumers be notified of implementation changes? It's one thing to notify consumers of updates to your own web service implementation, but how feasible is it to track implementation changes to all downstream dependencies? Should service owners create a new version when they know that a change may affect consumers, and try to be good citizens and notify consumers of all other changes?
Lots of questions, and I doubt there is a one-size-fits-all answer. It could just depend on the situation. Maybe this is what SLAs are for.
Good questions, and I think you've already answered them. Yes, these details would be in an SLA, and I think that if the contract/WSDL stays the same, why would the service need to notify its consumers? Unless, of course, changes to the service impact response times and performance. Maybe the service would notify consumers when another contract is introduced (in addition to the original); consumers would then become aware of any new capabilities and could adjust their clients accordingly if desired.
I'm in an environment where SLAs don't exist for internal clients, so absent an SLA, the following are some common-sense guidelines:
Attempt to limit the number of modifications to services
Communicate service implementation releases so consumers can plan test cycles
Provide consumers with the list of direct downstream dependencies and a location to find their schedules and release notes
Consider a new version if an implementation change will semantically affect consumers
A lot depends on your specific circumstances. Speaking generally, here are a few top considerations.
The service contract and schema are all that a service and client share in common. A service implementation change that does not change the contract or schema (e.g., fixing a bug in the implementation logic) should not necessitate notifying the clients, nor should it be considered a new version.
OTOH, if you have a poorly constructed, overly loose contract, such as passing all of the data as one big string that the client has to interpret extensively to consume the service, and now you're looking to exploit that overly loose contract in a way that would likely break the client, you owe it to all parties to change the contract (and improve it!) and publish that as a new version of the service.
Since services are often used to enable loose coupling between systems, it is sometimes not practical or even possible to identify all of the clients of a service. Producing a new version of a service in these situations often entails maintaining multiple versions for some period of time, often as directed by some governance body.
Providing details about service implementations, implementation dependencies, etc., encourages creating tight coupling by disclosing non-contract related details that the client may then take a dependency on. That can limit the ability of the service to change independently of the client.
The book Web Service Contract Design and Versioning for SOA by Thomas Erl is a good resource on the topic and details several common scenarios.
Interoperability comes to mind (MS/Java).
Also, with EJB you need to distribute the EJB interfaces, while with WS you have the WSDL (I know there's an EJB extension for WSDL, but I'm not sure it's used).
Anything else?
EJB is mostly about a programming model for how you implement callable business logic. Your code runs in a container which looks after management, clustering, transactions, and security. Your component can be called by any number of different mechanisms, including local Java calls, RMI/IIOP for remote invocation, and also web services, so yes, your EJB can indeed have a WSDL and be callable from other, non-Java environments.
If you start instead from the point of view of having a WSDL, which will probably specify SOAP/HTTP, then you are free to implement it in many different technologies and, of course, invoke it via that specified protocol, which very many different clients can use. The big question is how easily you can deal with those quality-of-implementation issues: your chosen implementation environment may give a lot of help or leave a lot to you.
Summary: you're not really comparing like with like. Web Services are very much about the interface; EJB is very much about the implementation.
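For example, exposing a stateless session bean as a SOAP endpoint can be as small as this sketch (the bean name and method are invented for illustration); the container then generates and publishes the WSDL:

    import javax.ejb.Stateless;
    import javax.jws.WebMethod;
    import javax.jws.WebService;

    // Callable as a local/remote EJB and, via the generated WSDL,
    // as a web service from non-Java clients as well.
    @Stateless
    @WebService
    public class QuoteBean {

        @WebMethod
        public double quote(String symbol) {
            // real business logic would live here; a fixed value keeps the sketch short
            return 42.0;
        }
    }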
I’ve been trying to wrap my head around how to expose my domain objects to the client. Whether I’m using a rich client or I’m using the web, I want to use the MVP and repository patterns.
What I’m trying to wrap my head around is how I expose my repository and model, which will be on the server. Is it even possible to expose complex business objects that have state via a web service, or will I have to use a proprietary technology that is not language/platform agnostic, like .Net remoting, EJB, COM+, DCOM, etc?
Some other constraints are that I don't want to have to keep loading the complex domain object from the database or passing it all over the wire every time I want to do an operation. Some complex logic might be that certain areas of the screen are disabled or invisible based on the user's permissions in combination with the state of the object. Validation and error message information will also need to be displayed to the user. I want to be able to logically call a lot of my domain object operations as if it were running on the same machine.
With the web, you have free rein. You don't have to expose your objects across service boundaries, so you can make them as rich as you would like. I'm trying to create an N-tier architecture that is rich and works when the client calling the model is on a different machine.
You can expose your domain objects, like any other objects, through REST or web services. I think the key is to understand that you will have to expose services that provide business value in a single call, and these do not necessarily map 1:1 to your repositories. So while a single service call on the server may use multiple repositories and perform various aggregations, the things you expose over any kind of web service should be more or less complete results. The operations you expose on the service should not expose individual repositories but rather focus on meaningful operations that provide a given business value.
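To make that concrete, here is a small sketch of a coarse-grained service facade (the repository interfaces and names are assumptions, not from the question): one call aggregates work across repositories and returns a complete result.

    import java.util.List;

    // Hypothetical coarse-grained facade: the web service exposes placeOrder(),
    // not the individual repositories it uses internally.
    public class OrderService {

        // Minimal assumed repository abstractions, defined inline for the sketch.
        interface CustomerRepository { String findName(String customerId); }
        interface PriceRepository   { double priceOf(String productId); }

        private final CustomerRepository customers;
        private final PriceRepository prices;

        public OrderService(CustomerRepository customers, PriceRepository prices) {
            this.customers = customers;
            this.prices = prices;
        }

        // One meaningful business operation; aggregation happens server-side.
        public String placeOrder(String customerId, List<String> productIds) {
            double total = productIds.stream().mapToDouble(prices::priceOf).sum();
            return "Order placed for " + customers.findName(customerId)
                    + ", total " + total;
        }
    }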
I hope this helps somewhat.
You can use a SOAP formatter for .NET Remoting, but the resulting service will probably be hard to consume as a service, and it will surely be very chatty.
If you want your domain model to be consumed as a service, it should be designed as a service.
As stated in domain-driven design, a service is stateless, so it won't expose your objects directly. Your service should expose methods that provide meaningful business operations that will be executed as a single unit.
Usually, consider the model in your client to be in a different bounded context, because its concerns will be a bit different from those on the server.
What I'm trying to wrap my head around is how I expose my repository and model, which will be on the server. Is it even possible to expose complex business objects that have state via a web service, or will I have to use a proprietary technology that is not language/platform agnostic, like .Net remoting, EJB, COM+, DCOM, etc?
A good domain model is going to be highly behavioral and designed around the problem domain (and your discussions with domain experts), I'd thus argue against designing it to be exposed to remote consumers (in the same way that designing it from the database or GUI first is a bad idea).
Instead I'd look at using a style like REST or messaging and decide on the interface you want to expose, and then map to/from the domain. So if you went with REST, you'd design your resources and API (URLs, representations, etc.) and then you'd need to fulfill it from the domain model.
If this becomes unnatural, then you can always have multiple models; for example, mapping a separate, read-only, presentation-specific model to the same data source (or one which wraps the complex behavioral domain model) is an approach I've used several times.
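A rough JAX-RS sketch of that mapping idea, with the resource and representation names invented for illustration: the representation is designed for the client and filled from the domain (or a read model), rather than serializing the domain object itself.

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    @Path("/orders")
    public class OrderResource {

        // Presentation-specific representation, deliberately not the domain object.
        public static class OrderRepresentation {
            public String id;
            public String status;
        }

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public OrderRepresentation getOrder(@PathParam("id") String id) {
            // In a real system this would be mapped from the domain or a read model.
            OrderRepresentation rep = new OrderRepresentation();
            rep.id = id;
            rep.status = "SHIPPED"; // placeholder value for the sketch
            return rep;
        }
    }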
Some other constraints are that I don't want to have to keep loading the complex domain object from the database or passing it all over the wire every time I want to do an operation.
Look at caching in HTTP and at supporting multiple representations for a resource; also look at caching within your data-access solution.
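As one sketch of the HTTP side of that (the path and max-age are arbitrary), a JAX-RS resource can attach cache headers so clients and intermediaries can reuse a response instead of reloading the object every time:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.CacheControl;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    @Path("/products/summary")
    public class ProductSummaryResource {

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public Response summary() {
            CacheControl cc = new CacheControl();
            cc.setMaxAge(300); // let clients/proxies cache this for five minutes

            // Placeholder body; a real resource would build it from the data layer.
            return Response.ok("{\"items\": 3}").cacheControl(cc).build();
        }
    }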
Validation and error message information will also need to be displayed to the user. I want to be able to logically call a lot of my domain object operations as if it were running on the same machine.
You can either represent this as a resource or, more likely, look at HTTP status codes and the response bodies you'd want to use in those situations.
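For instance (a sketch only, with an invented endpoint), validation failures can come back as a 400 with a body the client can show to the user:

    import javax.ws.rs.Consumes;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;

    @Path("/registrations")
    public class RegistrationResource {

        @POST
        @Consumes(MediaType.TEXT_PLAIN)
        @Produces(MediaType.APPLICATION_JSON)
        public Response register(String email) {
            if (email == null || !email.contains("@")) {
                // 400 Bad Request plus a message the UI can display
                return Response.status(Response.Status.BAD_REQUEST)
                               .entity("{\"error\": \"A valid email address is required\"}")
                               .build();
            }
            return Response.status(Response.Status.CREATED).build();
        }
    }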
a. What are the things I must consider?
b. I have several Stored Procedures being executed by the current application. If I create equivalent methods to execute these procedures, what would be the risk or the challenge?
Architecturally, one thing you must consider in transforming a web app into a web service is that local access to methods and data is not the same as remote access. Remote access should be designed so that invocations are more coarse-grained and exchange more information at once.
Another thing you would need to think about is which serialization protocol you will use, for example SOAP versus a REST-based protocol.
Also, think about security - the security considerations are different between a web application and a web service.
Finally, think about how others will know about your web service (or if they will at all).
One risk is keeping your code consistent.
What I mean by this is that there is a distinct possibility of code duplication in this situation, which means you may inadvertently forget to modify one of the places where the stored procedure is used (say, if you add a new parameter to the stored procedure call).
Then you also must consider security. For example, exposing a web service call that provides a list of users to the wild is probably not that good an idea. You need to plan for how you're going to pass and receive authentication and authorization information.
Managing your code base, as Stephen said, is going to be a big challenge if you create equivalent methods. You're much better off extracting the methods into a new library that both the web application and the web service will use. Your web apps shouldn't have any data-access code in them.
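A minimal sketch of what that shared library might look like, shown here in Java/JDBC purely to illustrate the shape (the DAO and stored procedure names are hypothetical; the same idea applies with ADO.NET): both the web app and the web service call this one method, so a new parameter only has to be added in one place.

    import java.sql.CallableStatement;
    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Shared data-access class used by both front ends.
    public class CustomerDao {

        private final DataSource dataSource;

        public CustomerDao(DataSource dataSource) {
            this.dataSource = dataSource;
        }

        public String findCustomerName(int customerId) throws SQLException {
            try (Connection con = dataSource.getConnection();
                 CallableStatement call = con.prepareCall("{call get_customer_name(?)}")) {
                call.setInt(1, customerId);
                try (ResultSet rs = call.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }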
With a web service you need to consider your clients: who is going to access your data, and from where? If, for example, it's a .NET Windows client on the same network or machine, a TCP binding might be best. If you need to support older .NET Framework clients or even Java clients, you need to be careful about which technology you use.
You will also want to choose between WCF and ASMX, which the previous paragraph should help answer.
It seems to me that the greatest challenge will be that you are obviously tempted to do this. I think you're making a mistake.
Your web application, and the web service you propose, have different requirements. By "transforming" the application into the service, you will burden the service with the requirements of the application.
Here's a "thought experiment": what if you were to write the service from scratch, ignoring the application? How similar would the service and the application be? If they would wind up alike, then transformation would make sense. Otherwise, not so much.