Save data through a web service using NHibernate? - web-services

We currently have an application that retrieves data from the server through a web service and populates a DataSet. Users of the API then manipulate the data through objects, which in turn modify the DataSet. The changes are then serialized, compressed, and sent back to the server to be applied.
However, I have begun using NHibernate in other projects and I really like the disconnected nature of POCO objects. The problem we have now is that our objects are so tied to the internal DataSet that they cannot be used in many situations, so we end up creating duplicate POCO objects to pass back and forth.
Batch.GetBatch() -> calls to web server and populates an internal dataset
Batch.SaveBatch() -> send changes to web server from dataset
Is there a way to achieve a model similar to the one we are using, in which all database access occurs through a web service, but using NHibernate?
Edit 1
I have a partial solution that is working and persisting through a web service but it has two problems.
I have to serialize and send my whole collection, not just the changed items.
If I try to repopulate the collection when the objects return, any references I held are lost.
Here is my example solution.
Client Side
public IList<Job> GetAll()
{
    return coreWebService
        .GetJobs()
        .BinaryDeserialize<IList<Job>>();
}

public IList<Job> Save(IList<Job> Jobs)
{
    return coreWebService
        .Save(Jobs.BinarySerialize())
        .BinaryDeserialize<IList<Job>>();
}
Server Side
[WebMethod]
public byte[] GetJobs()
{
    using (ISession session = NHibernateHelper.OpenSession())
    {
        return (from j in session.Linq<Job>()
                select j).ToList().BinarySerialize();
    }
}

[WebMethod]
public byte[] Save(byte[] JobBytes)
{
    var Jobs = JobBytes.BinaryDeserialize<IList<Job>>();
    using (ISession session = NHibernateHelper.OpenSession())
    using (ITransaction transaction = session.BeginTransaction())
    {
        foreach (var job in Jobs)
        {
            session.SaveOrUpdate(job);
        }
        transaction.Commit();
    }
    return Jobs.BinarySerialize();
}
As you can see, I am sending the whole collection to the server each time and then returning the whole collection. But I'm getting a replaced collection instead of a merged/updated collection, not to mention that it seems highly inefficient to send all the data back and forth when only part of it may have changed.
Edit 2
I have seen several references on the web to an almost transparent persistence mechanism. I'm not sure whether these will work, and most of them look highly experimental.
ADO.NET Data Services w/NHibernate (Ayende)
ADO.NET Data Services w/NHibernate (Wildermuth)
Custom Lazy-loadable Business Collections with NHibernate
NHibernate and WCF is Not a Perfect Match
Spring.NET, NHibernate, WCF Services and Lazy Initialization
How to use NHibernate Lazy Initializing Proxies with Web Services or WCF
I'm having a hard time finding a replacement for the DataSet model we are using today. The reason I want to get away from that model is that it takes a lot of work to tie every property of every class to a row/cell of a DataSet, and it tightly couples all of my classes together.

I've only taken a cursory look at your question, so forgive me if my response is shortsighted, but here goes:
I don't think you can logically get away from doing a mapping from domain object to DTO.
By using domain objects over the wire you tightly couple your client and service; part of the reason to have a service in the first place is to promote loose coupling. So that's an immediate issue.
On top of that, you're going to end up with a brittle domain-logic interface, where you can't make changes on the service side without breaking your clients.
I suspect your best bet would be to implement a loosely coupled service that exposes a REST (or other loosely coupled) interface. You could use a product such as AutoMapper to make the conversions simpler and to flatten data structures as necessary.
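For instance, here's a minimal sketch of that conversion with AutoMapper; the Job entity and the flattened JobDto are hypothetical names I've made up, only the mapping calls are AutoMapper's actual API:

using AutoMapper;

// Hypothetical domain entity and its flattened DTO.
public class Customer { public string Name { get; set; } }

public class Job
{
    public int Id { get; set; }
    public string Title { get; set; }
    public Customer Customer { get; set; }
}

public class JobDto
{
    public int Id { get; set; }
    public string Title { get; set; }
    public string CustomerName { get; set; } // flattened from Job.Customer.Name by convention
}

public static class JobMappings
{
    // Configure once, reuse everywhere.
    private static readonly IMapper Mapper =
        new MapperConfiguration(cfg => cfg.CreateMap<Job, JobDto>()).CreateMapper();

    public static JobDto ToDto(Job job) => Mapper.Map<JobDto>(job);
}

AutoMapper resolves CustomerName from Customer.Name by naming convention, which is the flattening mentioned above.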
At this point I don't know of any way to really cut down the verbosity involved in the interface layers, but having worked on large projects that didn't make the effort, I can honestly tell you the savings weren't worth it.

I think your problem revolves around this issue:
http://thatextramile.be/blog/2010/05/why-you-shouldnt-expose-your-entities-through-your-services/
Are you or are you not going to send ORM-Entities over the wire?
Since you have a service-oriented architecture, I (like the author) do not recommend this practice.
I use NHibernate. I call those ORM-Entities. They are THE POCO model. But they have "virtual" properties that allow for lazy-loading.
However, I also have some DTO objects. These are also POCOs, but they do not have lazy-loading-friendly properties.
So I do a lot of "converting". I hydrate ORM-Entities (with NHibernate)... and then I end up converting them to Domain-DTO-Objects. Yes, it stinks in the beginning.
The server sends out the Domain-DTO-Objects. There is NO lazy loading; I have to populate them with the "Goldilocks" ("just right") model. That is, if I need parent(s) with one level of children, I have to know that up front and send the Domain-DTO-Objects over that way, with just the right amount of hydration.
When I send Domain-DTO-Objects back (from client to server), I have to reverse the process: I convert the Domain-DTO-Objects into ORM-Entities and let NHibernate work with those entities.
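To make "reverse the process" concrete, here's a rough sketch of the kind of hand-written converter I mean (ParkingAreaDto is a made-up DTO shape; ParkingAreaNHEntity is the NHibernate entity used below):

// Hypothetical shapes; a real converter copies every property and child collection.
public class ParkingAreaDto
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class ParkingAreaConverter
{
    public static ParkingAreaNHEntity ToEntity(ParkingAreaDto dto)
    {
        return new ParkingAreaNHEntity
        {
            Id = dto.Id,
            Name = dto.Name
            // ... copy the remaining scalar properties and convert child collections
        };
    }

    public static ParkingAreaDto ToDto(ParkingAreaNHEntity entity)
    {
        return new ParkingAreaDto { Id = entity.Id, Name = entity.Name };
    }
}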
Because the architecture is "disconnected", I do a lot of (NHibernate) .Merge() calls.
// ormItem is any NHibernate POCO
using (ISession session = ISessionCreator.OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    ParkingAreaNHEntity mergedItem = session.Merge(ormItem);
    transaction.Commit();
}
.Merge is a wonderful thing. Entity Framework does not have it. Boo.
Is this a lot of setup? Yes.
Do I think it is perfect? No.
However, because I send very basic DTOs (POCOs) that are not "flavored" to the ORM, I have the ability to switch ORMs without killing my contracts to the outside world.
My data layer can be ADO.NET, EF, NHibernate, or anything. If I switch, I have to write the "converters" and the ORM code, but everything else is isolated.
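Sketched out, that isolation looks something like this (the names are hypothetical, and the NHibernate calls are the same ones used above; the converter is the sketch from earlier):

public interface IParkingAreaRepository
{
    ParkingAreaDto GetById(int id);
    void Save(ParkingAreaDto dto);
}

public class NHibernateParkingAreaRepository : IParkingAreaRepository
{
    public ParkingAreaDto GetById(int id)
    {
        using (ISession session = ISessionCreator.OpenSession())
        {
            var entity = session.Get<ParkingAreaNHEntity>(id);
            return ParkingAreaConverter.ToDto(entity); // the ORM entity never leaves this layer
        }
    }

    public void Save(ParkingAreaDto dto)
    {
        var entity = ParkingAreaConverter.ToEntity(dto);
        using (ISession session = ISessionCreator.OpenSession())
        using (ITransaction transaction = session.BeginTransaction())
        {
            session.Merge(entity);
            transaction.Commit();
        }
    }
}

Swapping to EF or ADO.NET then means writing a new implementation of the interface plus its converters; the contracts stay put.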
Many people argue with me; they say I'm doing too much and that the ORM-Entities are fine.
Again, I like to not allow any lazy-loading "appearances", and I prefer to have my data layer isolated. My clients should not know or care about my data layer/ORM of choice.
There are just enough subtle differences (or some not so subtle ones) between EF and NHibernate to screwball the game plan.
Do my Domain-DTO-Objects look 95% like my ORM-Entities? Yep. But it's the 5% that can screwball you.
Moving away from DataSets, especially if they are populated from stored procedures with a lot of business logic in the T-SQL, isn't trivial. But now that I use an object model, and NEVER write a stored procedure that isn't a simple CRUD function, I'd never go back.
And I hate maintenance projects with voodoo T-SQL in the stored procedures. It ain't 1999 anymore. Well, most places.
Good luck.
P.S. Without .Merge (in EF), here is what you have to do in a disconnected world (boo, Microsoft):
http://www.entityframeworktutorial.net/EntityFramework4.3/update-many-to-many-entity-using-dbcontext.aspx
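For simple scalar updates, the disconnected dance with EF's DbContext API looks roughly like this (a sketch; MyDbContext and Person are placeholder names, and as the linked article shows, many-to-many relationships need considerably more manual fix-up):

// EntityState lives in System.Data.Entity (EF 5/6) or System.Data (EF 4.x).
using (var context = new MyDbContext())
{
    // person is a detached object that came back from the client
    context.People.Attach(person);
    context.Entry(person).State = EntityState.Modified; // marks all scalar properties as changed
    context.SaveChanges();
}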

Related

Please explain the Repository-, Mapping-, and Business-Layer relations and responsibilities

I've been reading a lot about this stuff and I am currently in the middle of the development of a larger web-application and its corresponding back-end.
I've started with a design where I ask a repository to fetch data from the database and map it into a DTO. Why a DTO? Simply because until now basically everything was simple and no more complexity was necessary. When things got a bit more complex, I started to map e.g. 1-to-n relations directly in the service layer. Something like:
// This is the service layer
public List<CarDTO> getCarsFromOwner(Long carOwnerId) {
    // Entering the repository layer
    List<CarDTO> cars = this.carRepository.getCars(carOwnerId);
    Map<Long, List<WheelDTO>> wheelMap = this.wheelRepository.getWheels(carOwnerId);
    for (CarDTO car : cars) {
        List<WheelDTO> wheels = wheelMap.get(car.getId());
        car.setWheels(wheels);
    }
    return cars;
}
This works, of course, but it turns out that sometimes things get more complex than this, and I'm starting to realize that the code might look quite ugly if I don't do anything about it.
Of course, I could load wheelMap in the CarRepository, do the wheel mapping there, and only return complete objects, but since the SQL queries can sometimes get quite complex, I don't want to fetch all cars and their wheels and take care of the mapping in getCars(Long ownerId).
I'm clearly missing a Business-Layer, right? But I'm simply not able to get my head around its best practice.
Let's assume I have Car and Owner business objects. Would my code look something like this:
// This is the service layer
public List<CarDTO> getCarsFromOwner(Long carOwnerId) {
    // The new business layer
    CarOwner carOwner = new CarOwner(carOwnerId);
    List<Car> cars = carOwner.getAllCars();
    return carMapper.toDTOs(cars); // map the business objects to DTOs (a mapper is assumed here)
}
which looks as simple as it can be, but what would happen on the inside? The question aims especially at CarOwner#getAllCars().
I imagine that this function would use mappers and repositories to load the data, with the relational-mapping part in particular taken care of:
List<CarDTO> cars = this.carRepository.getCars(carOwnerId);
Map<Long, List<WheelDTO>> wheelMap = this.wheelRepository.getWheels(carOwnerId);
for (CarDTO car : cars) {
    List<WheelDTO> wheels = wheelMap.get(car.getId());
    car.setWheels(wheels);
}
But how? Does the CarMapper provide functions like getAllCarsWithWheels() and getAllCarsWithoutWheels()? This would also move the CarRepository and the WheelRepository into the CarMapper, but is that the right place for a repository?
I'd be happy if somebody could show me a good practical example for the code above.
Additional Information
I'm not using an ORM; instead I'm going with jOOQ, which is essentially a type-safe way to write SQL (and it's quite fun to use, btw).
Here is an example of what that looks like:
public List<CompanyDTO> getCompanies(Long adminId) {
    LOGGER.debug("Loading companies for user ..");
    Table<?> companyEmployee = this.ctx.select(COMPANY_EMPLOYEE.COMPANY_ID)
        .from(COMPANY_EMPLOYEE)
        .where(COMPANY_EMPLOYEE.ADMIN_ID.eq(adminId))
        .asTable("companyEmployee");
    List<CompanyDTO> fetchInto = this.ctx.select(COMPANY.ID, COMPANY.NAME)
        .from(COMPANY)
        .join(companyEmployee)
        .on(companyEmployee.field(COMPANY_EMPLOYEE.COMPANY_ID).eq(COMPANY.ID))
        .fetchInto(CompanyDTO.class);
    return fetchInto;
}
The Repository pattern belongs to the group of data-access patterns and usually means an abstraction over storage for objects of the same type. Think of a Java collection that you can use to store your objects: which methods does it have? How does it operate?
By this definition, a Repository cannot work with DTOs; it's a store of domain entities. If you have only DTOs, then you need a more generic DAO or perhaps the CQRS pattern. It is common to have a separate interface and implementation for a Repository, as is done, for example, in Spring Data (which generates the implementation automatically, so you only have to specify the interface, typically inheriting basic CRUD operations from the common superinterface CrudRepository).
Example:
class Car {
    private long ownerId;
    private List<Wheel> wheels;
}

@Repository
interface CarRepository extends CrudRepository<Car, Long> {
    List<Car> findByOwnerId(long id);
}
Things get complicated when your domain model is a tree of objects and you store it in a relational database. By the definition of this problem, you need an ORM: every piece of code that loads relational content into an object model is an ORM, so your repository will have an ORM as its implementation. Typically, JPA ORMs do the wiring of objects behind the scenes; simpler solutions like custom mappers based on jOOQ or plain JDBC have to do it manually. There's no silver bullet that solves all ORM problems efficiently and correctly: if you have chosen to write custom mappings, it's still better to keep the wiring inside the repository, so the business layer (services) can operate on true object models.
In your example, CarRepository knows about Cars. Car knows about Wheels, so CarRepository already has a transitive dependency on Wheels. In the CarRepository#findByOwnerId() method, you can either fetch the Wheels for a Car directly in the same query by adding a join, or delegate that task to the WheelRepository and then only do the wiring. The caller of this method receives a fully initialized object tree. Example:
class CarRepositoryImpl implements CarRepository {
    public List<Car> findByOwnerId(long id) {
        // pseudocode for some database query interface
        String sql = select(CARS).join(WHEELS);
        final Map<Long, Car> carIndex = new HashMap<>();
        execute(sql, record -> {
            long carId = record.get(CAR_ID);
            Car car = carIndex.computeIfAbsent(carId, k -> new Car());
            ... // map the car if necessary
            Wheel wheel = ...; // map the wheel
            car.addWheel(wheel);
        });
        return new ArrayList<>(carIndex.values());
    }
}
What's the role of the business layer (sometimes also called the service layer)? The business layer performs business-specific operations on objects and, if these operations are required to be atomic, manages transactions. Basically, it knows when to signal transaction start, commit, and rollback, but it has no knowledge of what those signals actually trigger in the transaction-manager implementation. From the business layer's perspective, there are only operations on objects and the boundaries and isolation of transactions, nothing else. It does not have to be aware of mappers or whatever sits behind the Repository interface.
In my opinion, there is no correct answer.
This really depends upon the chosen design, which itself depends upon your stack and your team's comfort.
Case 1:
I disagree with parts of this statement from your question:
"Of course, I could load wheelMap in the CarRepository, do the wheel-mapping there and only return complete objects, but since SQL queries can sometimes look quite complex I don't want to fetch all cars and their wheels plus taking care of the mapping in getCars(Long ownerId)"
The SQL join for the above will be simple. Additionally, it might be faster, as databases are optimized for joins and for fetching data.
I call this approach Case 1 because you could follow it if you decide to pull data via your repository using SQL joins, instead of using SQL only for simple CRUD and then manipulating objects in Java (below).
Case 2: the repository is used only to fetch data for "each" domain object, which corresponds to "one" table.
In this case what you are doing is already correct.
If you will never use WheelDTO separately, then there is no need to make a separate service for it. You can prepare everything in the Car service.
But if you do need WheelDTO separately, then make a separate service for each. In that case, there can be a helper layer on top of the service layer to perform object creation. I do not suggest implementing an ORM from scratch by making repositories and loading all the joins for every repo (use Hibernate or MyBatis directly instead).
Again, IMHO, whichever of the above approaches you take, the service and business layers just complement it. So there is no hard-and-fast rule; try to be flexible according to your requirements. If you decide to use an ORM, some of the above will change again.

Ember Data Sync - LocalStorage+REST+RealTime+Online/Offline

We have a combination of requirements in terms of data access.
Pre-load some reference data.
We need reference data to survive browser restarts instead of just living in memory to avoid loading it all the time. I'm currently using the LocalStorageAdapter for that.
Once we have it, we would like to sync changes (polling, or using Socket.IO in the background and updating the LocalStorage, could do the trick).
There are other models that are more transactional, where we would need to go directly to the server to get/save them. It would be nice to use something like the RESTAdapter for that.
Lastly, there are some operations that should work offline, with the changes synced later.
To make it more concrete:
We pre-load vendor and "favorite products" into Local Storage. We work offline with those.
We need to sync server changes to vendor and product information.
If they search the full catalog, that requires them to be online.
When offline, we need to allow users to add something to their cart or even submit an order. We would like to queue this action and submit it when they have an Internet connection.
So a few questions are derived from this:
Is there a way to use the RESTAdapter in combination with LocalStorage?
Is there some Socket.IO support? (Happy to do this part manually)
Is there Queueing support? Ideally at the Ember-Data level.
I know we will have to do a lot of this manually and pull together the different lego pieces, but I wanted to ask for some perspective from experienced Ember devs.
You definitely can do this. Like you said, you're going to need to put a lot of lego pieces together.
You'll need to take the RESTAdapter and LSAdapter and create a hybrid. We've done something a little similar at my work, but it only goes one way (from server to client, not the reverse).
That being said, I'd just like to pose a few questions:
How much do you plan on storing in localStorage, and do you have an eviction plan in place? localStorage is generally small in most browsers, though its API is consistent across them (it wasn't implemented until IE8). IndexedDB gives you a much larger chunk of space, though it isn't available until later versions of IE.
Depending on performance needs, I'd recommend storing to localStorage first, then attempting to persist to the server; if that works, pop the record from localStorage, and if it doesn't, leave it there for your adapter to retry at a later date. (I'd look into using Ember's schedule or scheduleOnce or a slew of other convenient helpers that work within the run loop: http://emberjs.com/api/classes/Ember.run.html#method_schedule).
// Called by the store when a newly created record is `save`d.
// It serializes the record and `POST`s it to a URL generated by `buildURL`.
// See `serialize` for information on how to customize the serialized form of a record.
createRecord: function(store, type, record) {
    var data = {};
    var serializer = store.serializerFor(type.typeKey);
    serializer.serializeIntoHash(data, type, record, { includeId: true });
    // build up a model that knows the url, the method, and the data to post
    // store it to local storage in some queue to save
    // schedule it to save to the server later; keep track of the record since you'll
    // need to update it with new information that could come down from the server
    return this.ajax(this.buildURL(type.typeKey), "POST", { data: data });
},
Honestly, I think the most difficult thing you'll run into will be handling IDs when you don't actually save the record to the server right away. Good luck!

Return record from a database table using Apache CXF

I am using Apache CXF (apache-cxf-2.5.0) to create web services using a bottom-up (Java-first) approach. I want to return some data/records (for example, username and email) from a database table. I can write a Java class which returns a simple response, but I am not able to find a way to return a response containing data/records extracted from a database table. How do I do that?
You don't mention how you are accessing the database, but the basic idea is that you ensure that the classes that you return have JAXB annotations (notably @XmlRootElement or @XmlType) on them, which allows CXF to convert instances of those classes into XML document fragments. The classes which you annotate this way probably should not have lots of functionality in them; they should exist just to hold data. (I find anything else too confusing given the complex lifecycle they'll have.) Once the annotations are in place, just return the relevant objects and all the conversions will happen automatically.
I'm talking a simple class like this:
@XmlRootElement // <---- THIS LINE HERE!
public class UserInfo {
    public String username;
    public String email;
}
You can use this in conjunction with other annotations (e.g., for your ORM) as necessary. Of course, if you're talking straight JDBC to the DB to get the information out, you won't need to worry about that.
The one tricky bit is that the objects being returned will have a lifespan that goes beyond that of the database transaction you're using; you may need to detach the objects extracted from the DB (i.e., do some copying, though the ORM layer might provide assistance) for that to work. This won't be much of a concern in this case, as the DB you're describing is very simple (one table, no inter-row relations), but it could become an issue if you make things more complex.

EntityFramework, Unit of Work - Tracking changes of custom data and sending it via WebService

We have the Unit of Work pattern implemented in Entity Framework, so when we use an ObjectContext and make changes to an entity, they are tracked and then, on SaveChanges, reflected in the underlying database.
But what if I want to track changes to my own custom class, so that every modification is tracked and can be sent through a web service call?
I have a web service which provides me some data; that data is displayed in a datagrid and may then be modified. I want to track all the changes and be able to send only the data that has been modified back through the web service. Is there any solution for that, like Entity Framework or POCOs or whatever? Or do I have to implement my own Unit of Work pattern for it?
Change tracking works only when the entity is attached to the context. There is a special type of entity called self-tracking entities which can track changes on the client side when exposed through a web service, but these classes are still your primary entities (not custom objects) and they apply their tracked state directly to the context.
What you describe has nothing to do with the unit-of-work pattern. You are looking for the change set pattern, which can pass only the differences back to the service. Implementing such classes for your own objects is completely up to you; .NET doesn't provide them for custom classes, but it does offer two implementations of the change set pattern:
the aforementioned self-tracking entities for EF
DataSet and related classes
Both of these implementations transfer all data by default (moreover, DataSets carry both the old and the new state in the message). Both DataSets and STEs also share the same limitation: they are very poorly interoperable.
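If you do roll your own, the core of a change set can be quite small. A minimal sketch (all names are made up):

using System.Collections.Generic;
using System.Linq;

public enum ChangeState { Unchanged, Added, Modified, Deleted }

// Each item travels with its state so the service can replay only the differences.
public class ChangeSetItem<T>
{
    public T Item { get; set; }
    public ChangeState State { get; set; }
}

public class ChangeSet<T>
{
    public List<ChangeSetItem<T>> Items { get; set; } = new List<ChangeSetItem<T>>();

    // Only the differences need to go over the wire.
    public List<ChangeSetItem<T>> GetChanges()
    {
        return Items.Where(i => i.State != ChangeState.Unchanged).ToList();
    }
}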
Change tracking at the property level should not be left to the client of a WCF call, for a variety of reasons. If you use the DTO (Data Transfer Object) pattern, you should be able to keep your individual objects small enough to avoid any significant overhead from sending the entire changed object across the wire. Then, on the server side, you load the current version of the object from your database, set the values provided by the DTO, and let Entity Framework track the changed properties.
public void SavePerson(Person person)
{
    using (var context = _contextFactory.Get())
    {
        var persistentPerson = context.People.Single(p => p.PersonId == person.PersonId);
        persistentPerson.FirstName = person.FirstName;
        // etc. (This could be done with a tool like AutoMapper.)
        context.SaveChanges();
    }
}
If you're changing multiple objects on the client side and you want to keep track of which ones the user has changed, you could make the client responsible for tracking the objects that get changed and send only those objects to the web service in bulk. There, you can apply the same pattern and wait to call SaveChanges until all of the objects have been updated.
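A bare-bones sketch of that client-side bookkeeping (hypothetical; how you detect edits, e.g. via property setters or grid events, is up to you):

using System;
using System.Collections.Generic;
using System.Linq;

public class DirtyTracker<T>
{
    private readonly HashSet<T> _dirty = new HashSet<T>();

    // Call this from property setters or the grid's edit event.
    public void MarkDirty(T item) => _dirty.Add(item);

    // Send only the changed objects in one web service call, then reset.
    public void Flush(Action<IList<T>> saveBatch)
    {
        if (_dirty.Count == 0) return;
        saveBatch(_dirty.ToList());
        _dirty.Clear();
    }
}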
Hopefully this helps.

(Java) How can I pass Schema-validated XML documents as parameters between distributed components (e.g. web services or sockets)?

Here is a description of the scenario, and I would appreciate any comments on the approach used.
The core of my application is a set of web services backed by a P2P database. One service accepts a simple XML-based record (I have designed a generic schema for it). The service processes this data (mainly creating keys based on certain criteria) and passes the original data, along with the created keys, to a listening SocketServer in one of the P2P nodes. This key/data pair is routed to the proper node, which stores the data (with the key as an ID) in an XML database.
A second service accepts a query document structured according to the same schema, but with optional values that are used for searching and matching against the previously stored records. The second service passes this query (with the proper keys) to the P2P part, gets back the results, and passes them back to the service client.
E.g. if the original record submitted to the first service was <attr1>value1</attr1> <attr2>value2</attr2> (an attribute list along with some other metadata mandated by the schema), then the second service should retrieve that record if the query received was <attr2>value2</attr2>.
(I could later think about using more complex XPath or XQuery queries, as the underlying XML database allows, instead of exact value matches, but that is not important at this stage. There is also a third service I am working on, but it depends on getting the first two into proper shape first.)
So my questions are:
1) What data type should I use as the parameters of the web services? How do I utilize my schema for this? I was considering various XML binding frameworks (especially JAXB and SDO) but didn't know how to proceed.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The services would still accept documents of the main schema type, but with the inner attribute list based on a template: say template1 requires attribute values that are ints, while template2 requires floats and strings. The current JSP-based prototype creates these templates manually, as XML documents assembled by hand (<> tags dispersed in text), with no type checking whatsoever, so I thought I could do better!
3) Is it possible to generate a quick web-app prototype for simple access to this system (again using the schema (and templates) to edit the appropriate XML message structures)? What I am looking for is for the (human) user to choose a template, just "fill in the blanks", and submit; no need for any fancy look and feel.
4) Can I, or how can I, also use this XML message type for communicating across sockets?
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
I currently have a rudimentary implementation (from previous developers) that was meant for a subset of my current requirements (I am improving the services and adding new derived ones), but there was no schema or validation, and the data is passed along as basic strings, which means weak typing and manual parsing that is difficult to update. The reason I want to move to stronger typing is so that changes in the data schema can propagate through the whole system easily. Basically, I want the system to be as loosely coupled to the data format/schema as possible; the current prototype is so coupled to the data that I am finding it extremely difficult to change the data without breaking the system.
My initial investigation led me to consider JAXB, but it supports only static typing (it cannot create schemas/types dynamically at runtime and persist them for later usage). So I came across SDO, which has both dynamic and static typing. The problem is that there is not enough community and/or example material for this approach, so it seems risky: the examples for the Apache Tuscany and EclipseLink implementations are very scarce, I could not find complete examples that are not 5+ years old (like this: http://www.ibm.com/developerworks/java/library/j-sdo/), and few tackle the XML use case of SDO (most seem to focus on its relational usage).
This is my first time asking for programming help (here and elsewhere), so please bear with me. I searched the net a lot, but I could not find anything useful beyond pieces here and there that did not add up.
Any comment or hint is really appreciated.
EDIT
I forgot one thing: how would the search service get the results back? Since it opens a client socket connection, there is no way to get results back synchronously. The current implementation tackles this by having the service client open a listening socket on a random port and putting that contact info in the query document. After the search web service sends the query to the P2P part, it finishes. The P2P part then sends the results as a WS call to another service, which sends them back to the service client's socket. I don't like this approach much; is there a more elegant solution?
I lead the EclipseLink JAXB & SDO implementations and represent Oracle on those specifications, so hopefully I can help you out. This question is very similar to a talk I'm giving at JavaOne in September.
1) What data type should I use as the parameters of the web services? How do I utilize my schema for this? I was considering various XML binding frameworks (especially JAXB and SDO) but didn't know how to proceed.
This depends on what web service framework you are using. JAXB is much easier to use with JAX-WS; with JAX-RS, JAXB is still easier to use, but SDO is a possible alternative.
2) How can I enhance the two services (call them store and search) to use dynamically created templates based on the original generic schema? The services would still accept documents of the main schema type, but with the inner attribute list based on a template: say template1 requires attribute values that are ints, while template2 requires floats and strings. The current JSP-based prototype creates these templates manually, as XML documents assembled by hand (<> tags dispersed in text), with no type checking whatsoever, so I thought I could do better!
I'm not 100% sure what you mean here, but the following may be helpful:
Using @XmlAnyElement to Build a Generic Message
3) Is it possible to generate a quick web-app prototype for simple access to this system (again using the schema (and templates) to edit the appropriate XML message structures)? What I am looking for is for the (human) user to choose a template, just "fill in the blanks", and submit; no need for any fancy look and feel.
JAX-RS is a nice framework for creating quick prototypes. Below is an example I created:
Part 1 - The Database
Part 2 - Mapping the Database to Objects
Part 3 - Mapping the Objects to XML
Part 4 - The RESTful Service
Part 5 - The Client
4) Can I, or how can I, also use this XML message type for communicating across sockets?
I prefer frameworks like JAX-RS that communicate over the HTTP protocol.
5) Does it matter if I deploy the services as stateless EJBs or not? Do I need them to be EJBs, or would servlets be more than enough?
My preference is to use an EJB session bean for the service. If you are interacting with a database then you can leverage the Java Transaction API (JTA) to manage your database transactions.
Part 4 - The RESTful Service
SDO
EclipseLink is the SDO 2.1.1 (JSR-235) reference implementation. We have some examples posted below. If you are looking for how to do something specific, I will try to post a relevant example.
http://wiki.eclipse.org/EclipseLink/Examples/SDO
JAXB
JAXB is static. It is also more popular than SDO. Recognizing this, we have implemented a dynamic JAXB feature in EclipseLink. It gives you the dynamic aspect of SDO with a JAXB slant.
http://wiki.eclipse.org/EclipseLink/Examples/MOXy/Dynamic
EDIT #1
Since you are dealing with JAX-WS and your model is almost entirely dynamic, I think you should skip the JAXB binding altogether. At the following link, see the section "Switching Off Data Binding":
http://java.sun.com/developer/technicalArticles/xml/jaxrpcpatterns3/
This will give us the body of the message as a javax.xml.transform.Source object. We will need to process the XML based on the dynamic templates; SDO would be a good choice here. You can keep adding new types to the HelperContext using XML schemas:
helperContext.getXSDHelper().define(schema1, null);
helperContext.getXSDHelper().define(schema2, null);
You will be able to unmarshal the Source from the web service as follows:
XMLDocument doc = helperContext.getXMLHelper().load(source, null, null);
DataObject rootDataObject = doc.getRootObject();
String someValue = rootDataObject.getString("attr3/childAttr/anotherChildAttr");
You will also be able to use the XMLHelper to marshal your objects to XML when calling another service.