I have a JSF2 application that is using JPA2/Hibernate with Spring @Transactional. There are no @Transactional annotations in the UI (backing beans), only in the service layer. (I am using @Transactional(propagation=Propagation.MANDATORY) in the DAOs to ensure every call occurs in a transaction.) It all works very nicely, except...
When I open and update entities through the transactional service methods, the retrieved entities are sometimes old. It doesn't matter that it's the same user in the same session: occasionally the JPA "read" methods return stale entities that have (or should have) already been replaced. This stumped me for quite a while, but it turns out to be caused by caching in the EntityManager. The DAOs are annotated with @Repository, so the injected EntityManager is being reused. I had expected that when the transaction completed, the entity manager would automatically be cleared, but that is not the case. Usually the EntityManager returns the correct value, but often it reaches back and returns a stale one from an earlier transaction instead.
As a workaround, I have sprinkled strategic entityManager.clear() calls in the DAO read methods, but that is ugly. The entity manager should be cleared after each transaction.
Has anyone experienced this? Is there a proper solution? Can the entity manager be cleared after each transaction?
Thanks very much.
I am using: org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean and org.springframework.orm.jpa.JpaTransactionManager
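For reference, the wiring looks roughly like this (sketched here as Java config; the package to scan is a placeholder for my real one):

import javax.persistence.EntityManagerFactory;
import javax.sql.DataSource;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.orm.jpa.JpaTransactionManager;
import org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean;
import org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class PersistenceConfig {

    @Bean
    public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource) {
        LocalContainerEntityManagerFactoryBean emf = new LocalContainerEntityManagerFactoryBean();
        emf.setDataSource(dataSource);
        emf.setPackagesToScan("com.example.model"); // placeholder package
        emf.setJpaVendorAdapter(new HibernateJpaVendorAdapter());
        return emf;
    }

    @Bean
    public JpaTransactionManager transactionManager(EntityManagerFactory emf) {
        return new JpaTransactionManager(emf);
    }
}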
The @Transactional annotation belongs in the service layer. Service methods marked with @Transactional will adhere to the ACID properties no matter how many DAO calls are made from within them.
This means that you do not need to annotate the DAO methods with @Transactional.
I am working on something similar and this is how I have done it and my data is consistent.
Try this and see if you are still getting inconsistent data.
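For reference, the shape I use is roughly this (a sketch; UserService, UserDao, and the entity are stand-ins for my real classes):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    private final UserDao userDao; // a plain @Repository bean, no @Transactional of its own

    @Autowired
    public UserService(UserDao userDao) {
        this.userDao = userDao;
    }

    @Transactional // the single transaction boundary; every DAO call below joins it
    public void renameUser(Long id, String newName) {
        User user = userDao.findById(id);
        user.setName(newName); // dirty checking flushes this change when the transaction commits
    }
}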
Do you use the @PersistenceContext annotation (above the EntityManager field in the DAO) combined with the PersistenceAnnotationBeanPostProcessor bean (you don't have to define the PersistenceAnnotationBeanPostProcessor bean if you are using the <context:annotation-config/> and <context:component-scan/> XML tags)? If not, I guess this is the reason for your problems.
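With that in place, the DAO would look roughly like this (a sketch; the entity and finder method are placeholders):

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Repository;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Repository
public class UserDao {

    // Spring injects a shared, transaction-aware proxy here rather than a raw EntityManager,
    // so each transaction gets its own persistence context instead of reusing a stale one.
    @PersistenceContext
    private EntityManager entityManager;

    @Transactional(propagation = Propagation.MANDATORY)
    public User findById(Long id) {
        return entityManager.find(User.class, id);
    }
}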
I am using Apache CXF (apache-cxf-2.5.0) to create web services using a bottom-up (Java-first) approach. I want to return some data/records (for example, username and email) from a database table. I can write a Java class which returns a simple response, but I am not able to find a way to return a response containing data/records extracted from a database table. How do I do that?
You don't mention how you are accessing the database, but the basic idea is that you ensure that the classes that you return have JAXB annotations (notably @XmlRootElement or @XmlType) on them, which allows CXF to convert the instances of those classes into XML document fragments. The classes which you annotate this way probably should not have lots of functionality in them; they should exist just to hold data. (I find anything else too confusing given the complex lifecycle they'll have.) Once the annotations are in place, just return the relevant objects and all the conversions will happen automatically.
I'm talking about a simple class like this:
@XmlRootElement // <---- THIS LINE HERE!
public class UserInfo {
    public String username;
    public String email;
}
You can use this in conjunction with other annotations (e.g., for your ORM) as necessary. Of course, if you're talking straight JDBC to the DB to get the information out, you won't need to worry about that.
The one tricky bit is that the objects being returned will have a lifespan that goes beyond that of the database transaction you're using; you may need to detach (i.e., do some copying, though the ORM layer might provide assistance) the objects extracted from the DB for that to work. This won't be much of a concern in this case as the DB you're describing is very simple (one table, no inter-row relations) but could be an issue if you make things more complex.
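For example, the service class might look roughly like this with plain JDBC (a sketch; the DataSource wiring, table, and column names are placeholders):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

import javax.jws.WebService;
import javax.sql.DataSource;

@WebService
public class UserInfoService {

    private DataSource dataSource; // assumed to be configured/injected elsewhere

    public List<UserInfo> getUsers() throws Exception {
        List<UserInfo> users = new ArrayList<UserInfo>();
        try (Connection con = dataSource.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT username, email FROM users")) {
            while (rs.next()) {
                UserInfo u = new UserInfo();
                u.username = rs.getString("username");
                u.email = rs.getString("email");
                users.add(u); // CXF/JAXB turns each UserInfo into an XML fragment automatically
            }
        }
        return users;
    }
}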
We have the Unit of Work pattern implemented in Entity Framework: when we use an ObjectContext and make changes to an entity, the changes are tracked and then, on SaveChanges, reflected in the underlying database.
But what if I want to track changes for my own custom class, so that every modification is tracked and sent through a web service call?
I have a web service which provides some data; that data is displayed in a data grid and may then be modified. I want to track all the changes and then be able to send back through the web service only the data that has been modified. Is there any solution for that, like Entity Framework or POCO or whatever? Or do I have to implement my own Unit of Work pattern for it?
Change tracking works only while an entity is attached to the context. There is a special type of entity, called self-tracking entities, that can track changes on the client side when exposed through a web service, but these classes are still your primary entities (not custom objects) and they apply their tracked state directly to the context.
What you describe has nothing to do with the unit-of-work pattern. You are looking for the change set pattern, which passes only the differences back to the service. Implementing such classes for your own objects is completely up to you; .NET doesn't provide a general-purpose one. It does offer two implementations of the change set pattern:
the self-tracking entities for EF mentioned above
DataSet and related classes
Both of these implementations transfer all data by default (moreover, DataSets by default carry both the old and the new state in the message). DataSets and STEs also share the same limitation: they are very poorly interoperable.
Change tracking at the property level should not be left to the client of a WCF call, for a variety of reasons. If you use a DTO (Data-Transfer Object) pattern, you should be able to keep your individual objects small enough to avoid having any significant overhead from sending the entire changed object across the wire. Then, on the server side, you load the current version of the object out of your database, set the values provided by the DTO, and let Entity Framework track the changed properties.
public void SavePerson(Person person)
{
    using (var context = _contextFactory.Get())
    {
        var persistentPerson = context.People.Single(p => p.PersonId == person.PersonId);
        persistentPerson.FirstName = person.FirstName;
        // etc. (This could be done with a tool like AutoMapper.)
        context.SaveChanges();
    }
}
If you're changing multiple objects on the client side, and you want to keep track of which ones the user has changed, you could have the client be responsible for keeping track of the objects that get changed and send only those objects to the web service in bulk. There, you can apply the same pattern and wait to SaveChanges until all of the objects have been updated.
Hopefully this helps.
I just recently read about "mocking objects" for unit testing, and I'm currently having difficulties implementing this approach in my application. Please let me explain my problem.
I have a User model class which depends on two data sources (a database and the Facebook web service). The controller class simply uses this User model as an interface to access data; it doesn't care where the data comes from.
So far I have never unit tested this User model because it depends on an external web service. A while ago I read about object mocking, and now I know that it is a common approach for unit testing a class that depends on external resources (as in my case).
Now I want to create a unit test for the User model, but I have run into a design issue:
In order for the User model to use a mocked Facebook SDK, I have to inject the mocked Facebook SDK into the User object (probably using a setter). Therefore I can't construct the Facebook SDK inside the User object; I have to construct it outside the User object and inject it.
The real client of my User model is the application's controller. Therefore I would have to construct the Facebook SDK inside the controller and inject it into the User object. This is a problem, because I want my controller to be as clean as possible: I want it to be ignorant of the application's data sources.
I'm not good at explaining things systematically, so you'll probably be asleep before you reach this last paragraph. But anyway, I want to ask: has anyone here encountered the same problem? How did you solve it?
Regards,
Andree
P.S: I'm using Zend framework, PHP 5.3.
One way to solve this problem is to create two constructors in User: the default one instantiates the real-life data sources, while the other receives them as parameters. This way your controller can use the default constructor, while your tests use the parameterized one to pass in mock data sources.
Since you haven't specified your language, here is an example in Java; hopefully it helps get the idea across:
class User {
    private DataBase database;
    private WebService webService;

    // default constructor
    public User() {
        database = new OracleDataBase();
        webService = new FacebookWebService();
    }

    // constructor for unit testing
    public User(DataBase database, WebService webService) {
        this.database = database;
        this.webService = webService;
    }
}
This isn't really a question about mocking but about dependencies, and trying to unit test has forced the issue. It sounds like you currently create your User object within your Controller.
If the User and Controller have the same lifetime (they're created at the same time), then you can pass the User into the Controller's constructor, which is where you can make the substitution.
If there is a User object per call, then perhaps the User object should be passed in by the environment, or returned from some Context object.
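As a sketch of the first option (class and method names invented for illustration), the Controller only receives an already-built User and never knows how its data sources were wired:

public class ProfileController {

    private final User user;

    // Production code passes in a User wired to the real database and Facebook SDK;
    // a test passes in one built with mocked data sources.
    public ProfileController(User user) {
        this.user = user;
    }

    public String showProfile() {
        return "Hello, " + user.getDisplayName(); // getDisplayName() is illustrative
    }
}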
If you make the Facebook SDK object(s) publicly accessible, your User object can still create them when it sets itself up, but you can replace them with a mock before you actually do anything with the User object.
[Test]
public void Test()
{
    User u = new User(); // let's say that the object on User gets created in the ctor
    u.FacebookObj = new DynamicMock(typeof(FacebookSDK)).MockInstance;
    Assert.That(u.Method(), Does.Stuff, "u.Method didn't do stuff");
}
You don't say what language you are using, but I use Ruby and Mocha for mocking objects, and it's quite easy.
See http://mocha.rubyforge.org/.
Here, the fourth example shows that any call to Product.name from the unit under test is intercepted and the value 'stubbed_name' is returned.
I guess there are similar mechanisms in Java, etc.
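In Java, for instance, Mockito gives you much the same thing (a minimal sketch; the Product class and its getName() accessor are just stand-ins mirroring the Ruby example):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ProductTest {

    @Test
    public void nameIsStubbed() {
        Product product = mock(Product.class);              // create the mock
        when(product.getName()).thenReturn("stubbed_name"); // intercept calls to getName()

        assertEquals("stubbed_name", product.getName());
    }
}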
Since you will be unit testing, your test class will play the role of the Controller, so the controller itself is not touched and can remain clean.
You are well underway to reinventing Dependency Injection, so it may be worthwhile to look at Spring or Guice to help you with the plumbing (if you are in Java land).
Your test can indeed inject the mocked Facebook SDK and your database service using setters or constructor arguments.
Your tests will then do what the controllers (and maybe views) would do, and verify that the proper routines are called on the SDKs and that resources are properly obtained and disposed of.
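As a sketch of that plumbing with Guice, reusing the interfaces from the earlier Java example (it assumes User's parameterized constructor is annotated with @Inject):

import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

public class AppModule extends AbstractModule {
    @Override
    protected void configure() {
        // Production bindings; a test would instead construct User directly with mocks.
        bind(DataBase.class).to(OracleDataBase.class);
        bind(WebService.class).to(FacebookWebService.class);
    }
}

// At application startup:
// Injector injector = Guice.createInjector(new AppModule());
// User user = injector.getInstance(User.class);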
We currently have an application that retrieves data from the server through a web service and populates a DataSet. Users of the API then manipulate it through objects, which in turn change the DataSet. The changes are then serialized, compressed, and sent back to the server to be applied.
However, I have begun using NHibernate within projects and I really like the disconnected nature of its POCO objects. The problem we have now is that our objects are so tied to the internal DataSet that they cannot be used in many situations, and we end up creating duplicate POCO objects to pass back and forth.
Batch.GetBatch() -> calls the web server and populates an internal DataSet
Batch.SaveBatch() -> sends changes from the DataSet to the web server
Is there a way to achieve a model similar to the one we are using, where all database access occurs through a web service, but using NHibernate?
Edit 1
I have a partial solution that works and persists through a web service, but it has two problems:
I have to serialize and send my whole collection, not just the changed items.
If I try to repopulate the collection from the returned objects, any references I previously held are lost.
Here is my example solution.
Client Side
public IList<Job> GetAll()
{
    return coreWebService
        .GetJobs()
        .BinaryDeserialize<IList<Job>>();
}

public IList<Job> Save(IList<Job> Jobs)
{
    return coreWebService
        .Save(Jobs.BinarySerialize())
        .BinaryDeserialize<IList<Job>>();
}
Server Side
[WebMethod]
public byte[] GetJobs()
{
    using (ISession session = NHibernateHelper.OpenSession())
    {
        return (from j in session.Linq<Job>()
                select j).ToList().BinarySerialize();
    }
}

[WebMethod]
public byte[] Save(byte[] JobBytes)
{
    var Jobs = JobBytes.BinaryDeserialize<IList<Job>>();

    using (ISession session = NHibernateHelper.OpenSession())
    using (ITransaction transaction = session.BeginTransaction())
    {
        foreach (var job in Jobs)
        {
            session.SaveOrUpdate(job);
        }
        transaction.Commit();
    }

    return Jobs.BinarySerialize();
}
As you can see I am sending the whole collection to the server each time and then returning the whole collection. But I'm getting a replaced collection instead of a merged/updated collection. Not to mention the fact that it seems highly inefficient to send all the data back and forth when only part of it could be changed.
Edit 2
I have seen several references on the web to an almost-transparent persistence mechanism. I'm not exactly sure whether these will work, and most of them look highly experimental.
ADO.NET Data Services w/NHibernate (Ayende)
ADO.NET Data Services w/NHibernate (Wildermuth)
Custom Lazy-loadable Business Collections with NHibernate
NHibernate and WCF is Not a Perfect Match
Spring.NET, NHibernate, WCF Services and Lazy Initialization
How to use NHibernate Lazy Initializing Proxies with Web Services or WCF
I'm having a hard time finding a replacement for the DataSet model we are using today. The reason I want to get away from that model is that it takes a lot of work to tie every property of every class to a row/cell of a DataSet, and it also tightly couples all of my classes together.
I've only taken a cursory look at your question, so forgive me if my response is shortsighted but here goes:
I don't think you can logically get away from doing a mapping from domain object to DTO.
By using the domain objects over the wire you are tightly coupling your client and service, while part of the reason to have a service in the first place is to promote loose coupling. So that's an immediate issue.
On top of that you're going to end up with a brittle domain logic interface where you can't make changes on the service side without breaking your client.
I suspect your best bet would be to implement a loosely coupled service that exposes a REST (or some other loosely coupled) interface. You could use a product such as AutoMapper to make the conversions simpler and easier, and to flatten data structures as necessary.
At this point I don't know of any way to really cut down the verbosity involved in building the interface layers, but having worked on large projects that didn't make the effort, I can honestly tell you the savings weren't worth it.
I think your issue revolves around the question discussed here:
http://thatextramile.be/blog/2010/05/why-you-shouldnt-expose-your-entities-through-your-services/
Are you or are you not going to send ORM-Entities over the wire?
Since you have a service-oriented architecture, I (like the author) do not recommend this practice.
I use NHibernate. I call those ORM-Entities. They are THE POCO model. But they have "virtual" properties that allow for lazy-loading.
However, I also have some DTO objects. These are also POCOs, but they do not have lazy-loading-friendly properties.
So I do a lot of "converting". I hydrate ORM-Entities (with NHibernate) and then I end up converting them to Domain-DTO-Objects. Yes, it stinks in the beginning.
The server sends out the Domain-DTO-Objects. There is NO lazy loading. I have to populate them with the "Goldilocks" ("just right") model. That is, if I need Parent(s) with one level of children, I have to know that up front and send the Domain-DTO-Objects over that way, with just the right amount of hydration.
When I send Domain-DTO-Objects back (from client to server), I have to reverse the process: I convert the Domain-DTO-Objects into ORM-Entities and let NHibernate work with the ORM-Entities.
Because the architecture is "disconnected", I do a lot of (NHibernate) .Merge() calls.
// ormItem is any NHibernate poco
using (ISession session = ISessionCreator.OpenSession())
{
    using (ITransaction transaction = session.BeginTransaction())
    {
        ParkingAreaNHEntity mergedItem = session.Merge(ormItem);
        transaction.Commit();
    }
}
.Merge is a wonderful thing. Entity Framework does not have it. Boo.
Is this alot of setup? Yes.
Do I think it is perfect? No.
However, because I send very basic DTOs (POCOs) that are not "flavored" to the ORM, I have the ability to switch ORMs without breaking my contracts with the outside world.
My data layer can be ADO.NET, EF, NHibernate, or anything else. I have to write the "converters" if I switch, and the ORM code, but everything else is isolated.
Many people argue with me. They say I'm doing too much and that the ORM-Entities are fine.
Again, I like to not allow any "lazy loading" appearances, and I prefer to have my data layer isolated. My clients should not know or care about my data layer/ORM of choice.
There are just enough subtle differences (or some not so subtle ones) between EF and NHibernate to screwball the game plan.
Do my Domain-DTO-Objects look 95% like my ORM-Entities? Yep. But it's the 5% that can screwball you.
Moving from DataSets, especially if they are populated from stored procedures with a lot of business logic in the T-SQL, isn't trivial. But now that I use an object model, and NEVER write a stored procedure that isn't a simple CRUD function, I'd never go back.
And I hate maintenance projects with voodoo T-SQL in the stored procedures. It ain't 1999 anymore. Well, in most places.
Good luck.
PS: Since EF has no .Merge(), here is what you have to do in a disconnected world (boo, Microsoft):
http://www.entityframeworktutorial.net/EntityFramework4.3/update-many-to-many-entity-using-dbcontext.aspx
I am using the Repository Pattern for my application. I have a class User. User is identified by Email. The UserRepository contains a method CreateUser(User user). There is a business rule saying that users should have a unique Email.
I want to implement a transaction which first checks whether an email is in use and if not, the user is created. Where should I put this code which is responsible for checking the uniqueness of the Email?
This is definitely a business rule; it is business logic. I think it is not correct to put this check in my UserRepository implementation.
This sort of thing typically goes in either (1) a service or (2) directly into the schema as a database constraint (and frequently both).
Using a service, you don't access the Repository directly from client code; you call a service which does the useful operations for you.
For example, something like:
public class UserService : ... {

    private Repository<User> _userRepository;

    public void CreateUser(User u) {
        // Verify that the user's email is unique.
        if ( ... ) {
            _userRepository.Create(u);
        }
    }
}
If you're building an application large enough to warrant a repository pattern, then you'll want to put this validation as close to the data as possible, probably as a database constraint such as a unique index/key. This prevents bugs from leaking into the code later due to corrupt data.
Assuming you're using a database for storage, you should definitely add a unique constraint on the e-mail column in the database.
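For example, if the persistence layer happens to be JPA/Hibernate, the constraint can be declared right on the mapping so that generated schemas include it (a sketch; with hand-written DDL you would add the same UNIQUE constraint yourself):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class User {

    @Id
    @GeneratedValue
    private Long id;

    // The database itself now enforces uniqueness, not just the application code.
    @Column(unique = true, nullable = false)
    private String email;

    // getters/setters omitted
}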
Check out this excellent article on Simple Talk:
Five Simple Database Design Errors You Should Avoid
See in Section 4:
Enforcing Integrity via applications
Proponents of application based integrity usually argues that constraints negatively impact data access. They also assume selectively applying rules based on the needs of the application is the best route to take. .....
The solution is simple. Rely on nothing else to provide completeness and correctness except the database itself. By nothing, I mean neither users nor applications external to the database.
So in your case - a unique constraint on your e-mail column should really be modelled in the database. That's the best place to put that piece of business logic, and will save you from a lot of grief in the long run.
Marc