Using batoo without declaring classes in persistence.xml - jpa-2.0

I want to experiment with the new Batoo JPA API.
But I wonder whether Batoo also works without declaring the entity classes in persistence.xml. The test case org.batoo.jpa.community.test.t1.T1 fails if you delete the <class> elements (like <class>org.batoo.jpa.community.test.t1.Service</class>) from persistence.xml, even though this Service class is correctly annotated with @Entity!
I think the latter should be enough according to the JPA spec.

No, annotating a class with @Entity is not guaranteed to be enough in a Java SE environment. The JPA 2.0 specification puts it this way:
A list of all named managed persistence classes must be specified in
Java SE environments to insure portability. Portable Java SE
applications should not rely on the other mechanisms described here to
specify the managed persistence classes of a persistence unit.
Persistence providers may require that the set of entity classes and
classes that are to be managed must be fully enumerated in each of the
persistence.xml files in Java SE environments.
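For reference, a persistence.xml that fully enumerates its managed classes might look like this (the unit name is illustrative, and the provider element is omitted since the exact Batoo provider class should be taken from your Batoo distribution):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="2.0">
  <persistence-unit name="test-unit" transaction-type="RESOURCE_LOCAL">
    <!-- Explicitly list every managed class for Java SE portability -->
    <class>org.batoo.jpa.community.test.t1.Service</class>
    <!-- Some providers also honor this flag, but the spec does not require it in SE -->
    <exclude-unlisted-classes>false</exclude-unlisted-classes>
  </persistence-unit>
</persistence>
```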

Related

Why start from XSD (rather than WSDL) in contract-first SOAP-service?

In this Spring reference (Chapter 3. Writing Contract-First Web Services)
http://docs.spring.io/spring-ws/site/reference/html/tutorial.html
it says
"A service contract is generally expressed as a WSDL file. Note that in Spring-WS, writing the WSDL by hand is not required. Based on the XSD and some conventions, Spring-WS can create the WSDL for you..."
That is also the approach implemented in the sample app:
https://github.com/spring-guides/gs-soap-service.git
Just wondering... if you do "contract first", WHY would you ever want to start from the XSD and let the framework generate the WSDL?
I thought the idea behind "contract first" as best-practice is to give you maximum control over the interface,
to ensure maximum compatibility between different SOAP-service frameworks, tools, languages, etc.
While the XSD contains datatypes and request/response object types, it does not define the actual service-operations (and maybe some other stuff?)...
Isn't there a risk that you will encounter incompatibilities between different tools in the stuff that is NOT defined in the XSD?
Would appreciate some clarifications on this...
Please see this comparison:
https://dzone.com/articles/apache-cxf-vs-apache-axis-vs
"Isn't there a risk that you will encounter incompatibilities between different tools in the stuff that is NOT defined in the XSD?"
Basically, what Spring does is let you define the service through code and use the domain objects that have been generated from the XSD. You won't have an issue on that side, I guess.
But, from my experience, because Spring-WS isn't fully compatible with JAX-WS, you could run into implementation problems, especially when you're working with third-party teams. For example, spring-ws doesn't support identically named methods and property objects.
Other than that, it's pretty easy to set up and use.
The main reason, in my experience with spring-ws, is that the WSDL can be generated dynamically by Spring. This is very advantageous, considering it contains the endpoint (which differs between landscapes). It is common to use base XSDs to define commonly used data elements. Also, after you define the XSDs you can use JAXB to generate the necessary class files for your source code.
As you can see, by defining the XSDs and generating the class files from them, your code base is well on its way to becoming a legitimate web service.
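To make the convention concrete, here is a minimal sketch of an XSD (names and namespace are invented for illustration) from which Spring-WS can derive a WSDL; by convention it pairs elements whose names end in Request/Response into WSDL operations:

```xml
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           xmlns:tns="http://example.com/orders"
           targetNamespace="http://example.com/orders"
           elementFormDefault="qualified">
  <!-- Spring-WS pairs getOrderRequest/getOrderResponse into one WSDL operation -->
  <xs:element name="getOrderRequest">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="id" type="xs:long"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
  <xs:element name="getOrderResponse">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="status" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

The operation names, port type, and binding are then filled in by the framework's conventions, which is exactly the part the question worries about being under-specified.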

Interfaces for Rich Domain Models

Should Rich Domain Models have interfaces to assist with isolation during unit testing (e.g. when testing a service that uses the model)?
Or should the Rich Domain Model behaviour be involved in any related unit tests?
Edit:
By Rich Domain Model I'm specifically referring to domain entities that contain logic (i.e. non-anaemic).
Usually, the Domain Model is the part that you should keep isolated from everything else. The Domain Model may use interfaces so that it's isolated from external systems, etc.
However, in the most common scenarios, the Domain Model is what you're trying to protect from the deteriorating influences of external systems, UI logic, etc. - not the other way around.
Thus, there's little reason to put interfaces on the Domain Model.
You should definitely involve domain model behaviour in your unit tests. There's absolutely no point in mocking this part. You should really only be mocking external systems.
Should Rich Domain Models have interfaces
I'd say no, because
Domain behavior happens inside a well-delimited bubble. A given command triggers changes in a single aggregate. Conceptually, you don't have to isolate it since it basically just talks to itself. It might also issue messages (events) to the outside world, but testing that doesn't require the domain entities themselves to be mockable.
Concretely, domain entity (or value object) behavior is fluid: it all happens in memory and is not supposed to directly call lower-level, I/O-bound operations. There will be no performance penalty from not mocking things in your tests, as long as the system under test is a small prepared cluster of objects (the Aggregate and maybe the thing calling it).
Domain model is a realm of concrete concepts. Ubiquitous language terms reflected in your entity or value object class/method names are usually prosaic and unequivocal. There's not much need for abstraction and polymorphism there -- which is what interfaces or abstract classes are for. The domain is less about generic contracts or interfaces providing services, and more about real world tasks that take place in the problem domain.
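A sketch of what testing rich domain behaviour directly, with no mocks, can look like; the Order entity and its rules are invented for illustration:

```java
// Hypothetical rich domain entity: all behaviour is in-memory, so tests need no mocks.
import java.util.ArrayList;
import java.util.List;

class Order {
    private final List<Integer> linePrices = new ArrayList<>();
    private boolean placed = false;

    void addLine(int priceCents) {
        if (placed) throw new IllegalStateException("cannot modify a placed order");
        linePrices.add(priceCents);
    }

    int totalCents() {
        return linePrices.stream().mapToInt(Integer::intValue).sum();
    }

    void place() {
        if (linePrices.isEmpty()) throw new IllegalStateException("empty order");
        placed = true;
    }
}

public class OrderTest {
    public static void main(String[] args) {
        Order order = new Order();
        order.addLine(500);
        order.addLine(250);
        order.place();
        // Plain state assertions; nothing external to stub out, no interfaces required.
        if (order.totalCents() != 750) throw new AssertionError("wrong total");
        System.out.println("total=" + order.totalCents());
    }
}
```

Because the entity never touches I/O, the test exercises the real behaviour directly, which is the point being made above.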

What is the value of separating interface from implementation in internet-based service-oriented computing?

Are the reasons the same as in normal multi-module application programming, i.e. so a client can just use the interface without having to worry about implementation details?
Note that I am talking about WSDL/UDDI/SOAP, not normal application interfaces.
A WSDL has an abstract part and a concrete part, and they are kept separate so as to allow the reuse of these definitions. The same contract can be bound to many concrete network protocols and message formats.
This reuse of definitions, in the context of UDDI, means one interface, multiple implementations.
One of the ideas behind UDDI was that needed web services could be discovered at runtime, and you could go into the registry and look for implementations of a certain WSDL contract:
Beyond the Cookbook: Interfaces and Implementations
[...]
If three different companies have implemented the same WSDL file and a piece of client software has created the proxy/stub code for that WSDL interface, then the client software can communicate with all three of those implementations with the same codebase
[...]
http://www2.sys-con.com/itsg/virtualcd/webservices/archives/0103/januszewski/index.html
At least that was the theory. In practice it turned out another way.
The short answer is: none. When you publish a Web service via a WSDL, it doesn't matter how you have implemented it. The client application consuming your service will generate the appropriate code from the WSDL, whether you have defined an interface for your backend Web service or not.
That said, adding an interface in front of a Web service is rather a waste of time.
The pointy-haired boss decides he'd like the application to work a different way, in a different sequence of screens, because:
His wife's friend at the tennis club thinks it would work better that way.
Rigorous user testing indicates a higher customer conversion rate based on a different application flow or sequence of usage steps.
You want to provide white-label versions of your website (similar to a franchise).
In the above cases, one would only need to rewrite the graphical elements, the person doing so would not need to know anything about databases, or complex back-end data-processing.
Separating interface and implementation helps you keep your design loosely coupled. You can change the implementation independently from the interface as the requirements change.
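In code terms this is the ordinary object-oriented idea of one contract with interchangeable implementations; a minimal sketch with invented names:

```java
// One contract, several implementations: callers depend only on the interface,
// so the concrete binding can change without touching client code.
interface GreetingService {
    String greet(String name);
}

class PlainGreetingService implements GreetingService {
    public String greet(String name) { return "Hello, " + name; }
}

class ShoutingGreetingService implements GreetingService {
    public String greet(String name) { return ("Hello, " + name).toUpperCase(); }
}

public class Client {
    static String useService(GreetingService service) {
        // Client code is written once, against the contract only.
        return service.greet("world");
    }

    public static void main(String[] args) {
        System.out.println(useService(new PlainGreetingService()));
        System.out.println(useService(new ShoutingGreetingService()));
    }
}
```

The WSDL case discussed above is the same pattern one level up: the WSDL is the contract, and the generated client stub plays the role of useService.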

Few things about Repository Pattern that I simply don't understand

I've read quite a few topic on what Repository is, but there are still few things bugging me.
To my understanding, the only difference between a Repository and a traditional data access layer is the Repository's query construction capabilities (i.e. the Query Object pattern). But reading the following definitions of the Repository pattern, it seems we can still have a Repository even if we don't implement the Query Object pattern:
a)
From:
Repositories are the single point where we hand off and fetch objects.
It is also the boundary where communication with the storage starts
and ends.
I think the above quote suggests that the Repository is an entry point into the DAL. In other words, according to the quote, the DAL consumer (say, the service layer) communicates with the DAL via the Repository. But shouldn't the data context instead represent the entry point into the DAL (and thus the Repository reside within the data context)?
b)
From:
The primary thing that differentiates a Repository from a traditional
data access layer is that it is to all intents and purposes a
Collection semantic – just like IList in .Net
Don't most traditional DALs also have methods that return a collection (for example List<Customer> GetAllCustomers())? So how exactly is the collection-like semantic of a Repository any different from the collection-like semantic of a traditional DAL?
c)
From:
In a nutshell, the Repository pattern means abstracting the
persistence layer, masking it as a collection. This way the
application doesn't care about databases and other persistence
details, it only deals with the abstraction (which usually is coded as
an interface).
As far as I know, the above definition isn't any different from the definition of a traditional DAL.
Thus, if Repository implementation only performed two functions – having the collection-like semantics and isolating the domain objects from details of the database access code – how would it be any different from a traditional DAL? In other words, would/should it still be called Repository?
d)
What makes the following interface a Repository interface instead of just a regular DAL interface?
From:
public interface IPostsRepository
{
    void Save(Post mypost);
    Post Get(int id);
    PaginatedResult<Post> List(int skip, int pageSize);
    PaginatedResult<Post> SearchByTitle(string title, int skip, int pageSize);
}
Thank you
FYI I asked a very similar question over here and got some excellent answers.
The bottom line is that it appears to depend on the complexity of your architecture. The repository pattern is most useful for creating a layer of abstraction when you need to access different types of data stores, i.e. some data is in Entity Framework, some is on the file system, etc. In simpler web apps with a (probably unchanging) single data store (i.e. all data in SQL Server, or Oracle, etc.) it is less important. At that point something like the Entity Framework context object functions as a repository for your entity objects.
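To make the collection-like semantics concrete, here is a minimal in-memory repository sketched in Java (the Post type and method names are invented; a real implementation would delegate to EF, JPA, or whatever store sits behind it):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Hypothetical domain entity.
class Post {
    final int id;
    final String title;
    Post(int id, String title) { this.id = id; this.title = title; }
}

// Collection semantics: callers put objects in and take them out as if from an
// in-memory collection; persistence details stay behind this interface.
interface PostRepository {
    void save(Post post);
    Optional<Post> get(int id);
}

class InMemoryPostRepository implements PostRepository {
    private final Map<Integer, Post> store = new HashMap<>();
    public void save(Post post) { store.put(post.id, post); }
    public Optional<Post> get(int id) { return Optional.ofNullable(store.get(id)); }
}

public class RepoDemo {
    public static void main(String[] args) {
        PostRepository repo = new InMemoryPostRepository();
        repo.save(new Post(1, "Hello"));
        System.out.println(repo.get(1).map(p -> p.title).orElse("missing"));
    }
}
```

A trivial in-memory substitute like this is also what makes the abstraction useful in tests: the consuming code cannot tell it apart from a database-backed one.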

Is it okay to expose LinqToSQL generated classes in web service

I'm making an ASMX web service and I use LinqToSQL to deal with the database.
It seems easy to use the LinqToSQL-generated classes as arguments or return values in web methods.
Like this:
[WebMethod]
public OperationResult Meet_ListMeets(string clientToken, out meet[] meets)
{
    ServiceMeet s = new ServiceMeet(sqlCon, clientToken);
    return s.ListMeets(out meets);
}
Where "meet" is a LinqToSQL class.
I found that the meet class is exposed as a WSDL complex type including all its dependencies (such as other classes that reference meet via foreign keys in the database).
The main question is: is it good practice to use classes that way? And what about security?
Should I use wrapper-classes to hide my entity structure?
Not a good practice, and you'll most likely run into problems at some point. Not to mention the overhead of all that extra cruft going down the wire.
What I've done in the past is make an "in-between" model with just the fields I need to actually send across the wire, and then map them back to the real objects when they come in. You can do the mapping manually or with one of the many mapping toolkits for .NET (look on NuGet).
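The "in-between model" idea, sketched here in Java purely for illustration (the C#/.NET version is analogous, and all names are invented):

```java
// Hypothetical persistence entity with fields you may not want on the wire.
class MeetEntity {
    int id;
    String title;
    String internalNotes;   // internal detail that should not leave the service
}

// Wire DTO: only the fields the client actually needs.
class MeetDto {
    int id;
    String title;
}

public class MeetMapper {
    // Manual mapping; a mapping toolkit could generate this boilerplate.
    static MeetDto toDto(MeetEntity e) {
        MeetDto dto = new MeetDto();
        dto.id = e.id;
        dto.title = e.title;
        return dto;
    }

    public static void main(String[] args) {
        MeetEntity e = new MeetEntity();
        e.id = 7;
        e.title = "Standup";
        e.internalNotes = "not serialized";
        MeetDto dto = toDto(e);
        System.out.println(dto.id + ":" + dto.title);
    }
}
```

The DTO deliberately has no reference to related entities, so the generated WSDL contract stays small and the internal schema never leaks.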