How should I manage the Generic Repository pattern when the responsibilities of different entities are quite different?

I am new to the Repository pattern.
Managing the CRUD operations of several entities (customers, orders, etc.) is fine: I define an interface and a generic repository, and that serves my purpose because the CRUD operations are common to all of them.
My question is:
When the responsibilities of several entities are totally different and there is no common method between them, what should I do? Should I keep adding interfaces and repositories for those specific purposes, or is there a better solution in terms of best practices?

How should I manage the Generic Repository pattern when the responsibilities of different entities are quite different?
This is the core problem with the Generic Repository pattern, and it is why it is considered an anti-pattern.
I read this here:
No matter what clever mechanism I tried, I always ended up at the same problem: a repository is a part of the domain being modeled, and that domain is not generic. Not every entity can be deleted, not every entity can be added, not every entity has a repository. Queries vary wildly; the repository API becomes as unique as the entity itself.
Why is a generic repository an anti-pattern?
A repository is a part of the domain being modeled, and that domain is not generic.
Not every entity can be deleted.
Not every entity can be added.
Not every entity has a repository.
Queries vary wildly; the repository API becomes as unique as the entity itself.
For GetById(), identifier types may differ between entities.
Updating only specific fields (partial DML) is not possible.
A generic query mechanism is the responsibility of the ORM.
Most ORMs already expose an implementation that closely resembles a Generic Repository.
Repositories should implement the SPECIFIC queries for entities by using the generic query mechanism exposed by the ORM.
Working with composite keys is not possible.
It leaks DAL logic into Services anyway.
If you accept predicate criteria as a parameter, they must be supplied by the Service layer; if that predicate is an ORM-specific class, it leaks the ORM into the Services.
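To make those objections concrete, here is a sketch of the kind of interface they are aimed at; this is a hypothetical but typical generic repository, not code from the question:

    using System;
    using System.Collections.Generic;
    using System.Linq.Expressions;

    // A typical generic repository interface. Each assumption it bakes in
    // (int keys, universal Add/Delete, a predicate-based query method)
    // fails for some entity, which is the source of the objections above.
    public interface IRepository<T> where T : class
    {
        T GetById(int id);      // assumes every entity has a single int key
        void Add(T entity);     // not every entity can be added
        void Delete(T entity);  // not every entity can be deleted
        IEnumerable<T> Find(Expression<Func<T, bool>> predicate); // leaks query predicates into Services
    }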
I suggest you read these (1, 2, 3, 4, 5) articles explaining why the generic repository is an anti-pattern.
A better approach is one of the following:
Skip the Generic Repository and implement concrete repositories.
Use the Generic Repository as an abstract base repository and derive all concrete repositories from it, as sketched below.
In either case, do not expose the generic repository to calling code, and do not expose IQueryable from the concrete repositories.
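A minimal sketch of the second option, assuming an Entity Framework Core style DbContext; the Order entity and the query names are illustrative:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.EntityFrameworkCore;

    public class Order
    {
        public Guid Id { get; set; }
        public DateTime DueDate { get; set; }
        public bool IsPaid { get; set; }
    }

    // Abstract base: the generic plumbing stays protected, so calling code
    // never sees a generic repository or an IQueryable.
    public abstract class RepositoryBase<T> where T : class
    {
        protected readonly DbContext Context;

        protected RepositoryBase(DbContext context) => Context = context;

        // Protected, not public: only concrete repositories may build queries.
        protected IQueryable<T> Query() => Context.Set<T>();
    }

    // The concrete repository exposes only the specific queries that the
    // Order entity actually needs, and returns materialized results.
    public class OrderRepository : RepositoryBase<Order>
    {
        public OrderRepository(DbContext context) : base(context) { }

        public Order GetById(Guid id) =>
            Query().Single(o => o.Id == id);

        public IReadOnlyList<Order> GetOverdue(DateTime asOf) =>
            Query().Where(o => o.DueDate < asOf && !o.IsPaid).ToList();
    }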

Related

Handling non-deleting entities in an N-Tier Architecture

What is the best-practice approach to handling non-deleting entities in an N-Tier Architecture? The architecture in question has a service layer and a repository layer. The repository is the only layer that has direct access to the database (well, through an ORM). Currently, the repository layer deals mostly with CRUD operations. Should this layer handle the retrieval of entities based on a given status?
Let me explain the use of status in our system. We want to use status to delete entities. So instead of deleting a User entity, we would set its status to Deleted. Now, the User Repository exposes a Get method. Calling Get without any parameters should return all Users in the system, regardless of their Status, but if we wanted to get only Active Users, would it be best to deal with that in the Service layer or the Repository layer? If we did it in the Service layer, we would need to apply a filter to the response of the Repository's Get method. If we did it in the Repository layer, we would have Get take a Status enum, so you could call Get(Status.Active). What would be the best way to handle something like this?
I would suggest limiting Get(id) to retrieving the details of a specific entity, and then implementing some type of Find/Search functionality that accepts a SearchCriteria object to define your search parameters (such as Status). To answer your question about where to do the filtering: I would suggest the database, since it is optimized for query execution.
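A hedged sketch of that suggestion; the UserSearchCriteria shape and the Status enum are illustrative, and the IQueryable is assumed to come from an ORM provider that translates the Where clauses into SQL, so the filtering happens in the database:

    using System.Collections.Generic;
    using System.Linq;

    public enum Status { Active, Inactive, Deleted }

    public class User
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public Status Status { get; set; }
    }

    // The criteria object keeps the repository's search surface explicit
    // and lets new filters be added without changing method signatures.
    public class UserSearchCriteria
    {
        public Status? Status { get; set; }
        public string NameContains { get; set; }
    }

    public class UserRepository
    {
        private readonly IQueryable<User> _users; // supplied by the ORM

        public UserRepository(IQueryable<User> users) => _users = users;

        // Get(id) stays limited to retrieving one specific entity.
        public User Get(int id) => _users.Single(u => u.Id == id);

        // Find applies each supplied criterion; unset criteria are ignored.
        public IReadOnlyList<User> Find(UserSearchCriteria criteria)
        {
            var query = _users;
            if (criteria.Status.HasValue)
                query = query.Where(u => u.Status == criteria.Status.Value);
            if (!string.IsNullOrEmpty(criteria.NameContains))
                query = query.Where(u => u.Name.Contains(criteria.NameContains));
            return query.ToList();
        }
    }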

How does SOA service composition work at the code level?

Service composition is a fundamental part of SOA. All of the capabilities offered by a service inventory are expected to utilize other services within that inventory, where necessary, in order to build the task that they are accomplishing from smaller component parts. This is all at the architectural level, though: how does this composition happen at the implementation level?
Consider this situation. I have a "task service" which involves posting a /process-request resource that will manage additional resources involved in asynchronously processing a large file to generate the "real" resource. Let's call the eventual "real" resource a PartsInventory. So, you provide a URL and a name for the PartsInventory to the /process-request service, which then creates a PartsInventory "placeholder" and kicks off the asynchronous ETL involved in converting the massive csv file at your URL into a real PartsInventory. If this asynchronous job fails, it will remove the placeholder and queries to the /process-request service will reflect the failure. If it succeeds, it will remove the placeholder status and queries to the /process-request service will instead redirect you to the PartsInventory resource.
Now, how does the implementation of the interaction between /process-request and the PartsInventory look, from a code standpoint? Am I POSTing requests to a published /parts-inventory service, or am I calling ORM objects to create the placeholder? If the former, I am abiding by the published contract and am behaving as a consumer of my own services, which seems to fit the composability principle, but it feels really awkward to interact this way from within the same codebase. On the other hand, the latter presumes that the /process-request handler is going to know how to create a PartsInventory placeholder on its own, which feels awkward in itself. The third option would be to create a specialized static factory method on the PartsInventory object, called something along the lines of PartsInventory.create_placeholder(), so that the /process-request code is at least ignorant of the constructor dependencies of a PartsInventory object. This does still separate creation into two locations, though.
Have you encountered this? Is there a canonical "right answer" to this question?
You should differentiate between service endpoints (in your case, the URLs exposed by the various services) and services. Services are basically components with what should be well-defined boundaries; they expose one or more endpoints through which they deliver contracts (made up of messages).
Calls made within the service don't have to go through the service interface; calls that cross service boundaries do have to go through the service interface.
The question in your case is whether /process-request and PartsInventory are different aspects of the same service or not. I can't tell from the details in your question whether they are, but if they really make sense as different services, you should go through the service interface.
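As a hypothetical sketch of how that difference shows up in code (all type names below are mine, not from the question): inside one service boundary the handler can call an internal component directly, while across boundaries the call should go through the published contract, for example an HTTP POST to /parts-inventory.

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    // Internal component shared within one service boundary (stubbed here).
    public class PartsInventoryStore
    {
        public Task CreatePlaceholderAsync(string name) => Task.CompletedTask;
    }

    // Same service: the /process-request handler calls the component
    // in-process, with no HTTP hop and no published contract involved.
    public class ProcessRequestHandler
    {
        private readonly PartsInventoryStore _store;

        public ProcessRequestHandler(PartsInventoryStore store) => _store = store;

        public Task HandleAsync(string name, string sourceUrl) =>
            _store.CreatePlaceholderAsync(name);
    }

    // Different services: the call crosses a boundary, so it goes through
    // the published /parts-inventory endpoint like any other consumer would.
    public class PartsInventoryClient
    {
        private readonly HttpClient _http;

        public PartsInventoryClient(HttpClient http) => _http = http;

        public Task<HttpResponseMessage> CreatePlaceholderAsync(string name) =>
            _http.PostAsync("/parts-inventory",
                new StringContent("{\"name\":\"" + name + "\",\"status\":\"placeholder\"}",
                                  Encoding.UTF8, "application/json"));
    }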

Advice on using separate controllers for a REST API or not?

We are planning a REST API for our application and are trying to decide whether or not we should implement separate controllers for the REST functionality.
We could use withFormat{} in our current controllers, but separating the REST functionality into different controllers feels somewhat cleaner.
That way we can build our API separately from the current controllers, and we could even move the REST controllers into another application, etc.
Any thoughts on this subject? Any real-world experience of what the best practice would be?
We recently faced the same decision and we decided to go for separate controllers for the REST API.
Advantages of separate controllers include cleaner/clearer controller actions and the possibility of supporting different versions of the REST API later on.
We also wanted to keep open the option of hosting the REST API on separate server instances. These servers would use exactly the same .war, but with different configuration settings for the feature toggles.
A disadvantage of separate controllers might be the DRYness of the controller code, although this should be limited, since IMHO you should keep the controllers as thin as possible and extract shared logic into Grails services or helper classes.
I will be working with Grails soon, but so far I have little experience with it. In the web apps I have worked on, though, we always kept web services separate from the controller code. We also separated REST from SOAP; common methods for both lived in the service layer. It did indeed feel cleaner, and we didn't have to insert a lot of ifs into the methods.
I would, for a given resource, use one controller that interfaces with a service layer based on context (the media type received or requested: SOAP, JSON, XML, etc.). This way you have a truly uniform resource identifier that can accept and return various media types, and the controller won't need to know anything but what method the user wants to perform on what resource and what media type is involved.
For instance, maybe the service layer returns objects that have methods such as 'toXml', 'toSoap', or 'toJson'. Then you can just ask the service layer to do the work and use a switch statement on the requested media type to either return the requested representation or, by default, respond with a 406 Not Acceptable status code. For unsafe or idempotent transactions, the object may have constructor or factory methods for a given media type, and then you just ask the service layer to do the work with that object.
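A rough sketch of that dispatch, assuming (as the answer does) that the service layer hands back objects that can render themselves per media type; all names here are illustrative:

    // Illustrative service-layer result that can render itself in
    // several media types.
    public class PartsReport
    {
        public string ToXml() => "<report/>";
        public string ToJson() => "{}";
    }

    public static class ReportController
    {
        // One controller action serves one resource in several
        // representations; unsupported media types get 406 Not Acceptable.
        public static (int StatusCode, string Body) Get(PartsReport report, string acceptHeader)
        {
            switch (acceptHeader)
            {
                case "application/xml":
                    return (200, report.ToXml());
                case "application/json":
                    return (200, report.ToJson());
                default:
                    return (406, string.Empty);
            }
        }
    }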

Create, edit, delete exposed via Web Service API - best design pattern?

I have a web service which exposes functionality to create, edit and delete user settings. Right now I have a UserSetting entity which is instantiated on each call to the web service. This entity has Create, Edit and Delete methods and the other required properties.
The intention is to serialise this class as an XML file and post it to a folder, where it will be picked up by a scheduled console application, deserialised, and the final stage of the work completed. The fact that an XML file is being used is not important; I understand there are other messaging techniques that can be used.
My knowledge of design patterns is limited, but I want to adopt best practice. I have a few ideas in my head: there should be an IUserSettingTask interface, and classes which implement it (UserSettingCreator, UserSettingDeleter) with separate methods that are executed at web-service time and at console time.
The solution needs to be extensible, because there will be a need to create settings for departments and devices, which I envisage will implement the same interface.
Any help with this will be great. Thanks.
My advice would be to separate the Create/Edit/Delete operations from the UserSetting entity.
Keep UserSetting as a pure DataContract of the service.
Let the service operate on the entity, rather than having the entity perform those operations on itself.
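A minimal sketch of that separation (type and member names are illustrative): the entity stays a pure data contract, and a separate service builds the operation message that the web service serialises and the console application later applies.

    using System;

    // Pure data contract: properties only, safe to serialise to XML.
    public class UserSetting
    {
        public Guid Id { get; set; }
        public string Key { get; set; }
        public string Value { get; set; }
    }

    public enum SettingOperation { Create, Edit, Delete }

    // The message that travels from the web service to the console app.
    public class UserSettingMessage
    {
        public SettingOperation Operation { get; set; }
        public UserSetting Setting { get; set; }
    }

    // The service owns the operations; the entity never operates on itself.
    public class UserSettingService
    {
        public UserSettingMessage Create(UserSetting setting) =>
            new UserSettingMessage { Operation = SettingOperation.Create, Setting = setting };

        public UserSettingMessage Edit(UserSetting setting) =>
            new UserSettingMessage { Operation = SettingOperation.Edit, Setting = setting };

        public UserSettingMessage Delete(UserSetting setting) =>
            new UserSettingMessage { Operation = SettingOperation.Delete, Setting = setting };
    }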

Exposing Rich Domain Objects as a service

I’ve been trying to wrap my head around how to expose my domain objects to the client. Whether I’m using a rich client or I’m using the web, I want to use the MVP and repository patterns.
What I’m trying to wrap my head around is how I expose my repository and model, which will be on the server. Is it even possible to expose complex business objects that have state via a web service, or will I have to use a proprietary technology that is not language/platform agnostic, like .Net remoting, EJB, COM+, DCOM, etc?
Some other constraints are that I don't want to have to keep loading the complex domain object from the database, or passing it all over the wire, every time I want to do an operation. Some complex logic might be that certain areas of the screen are disabled or invisible based on the user's permissions in combination with the state of the object. Validation and error message information will also need to be displayed to the user. I want to be able to logically call a lot of my domain object operations as if they were running on the same machine.
With the web, you have free rein: you don't have to expose your objects across service boundaries, so you can make them as rich as you would like. I'm trying to create an N-tier architecture that is rich and works when the client calling the model is on a different machine.
You can expose your domain objects like any other object through REST or web services. I think the key is to understand that you will have to expose services that provide business value in a single call, and these do not necessarily map 1:1 to your repositories. So while, on the server, a single service call may use multiple repositories and perform various aggregations, the things you expose over any kind of web service should be more or less complete results. The operations you expose on the service should not expose individual repositories, but rather focus on meaningful operations that provide a given business value.
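A sketch of what such a coarse-grained operation might look like (every name below is illustrative): one business-meaningful call on the service, several repositories used internally, none of them exposed.

    using System;

    public class Customer { public Guid Id { get; set; } }
    public class Order { public Guid Id { get; set; } }
    public class OrderConfirmation { public Guid OrderId { get; set; } }

    public interface ICustomerRepository { Customer GetById(Guid id); }
    public interface IInventoryRepository { void Reserve(Guid productId, int quantity); }
    public interface IOrderRepository { Order Create(Customer customer, Guid productId, int quantity); }

    // The service exposes one operation with complete business value;
    // consumers never see the repositories behind it.
    public class OrderPlacementService
    {
        private readonly ICustomerRepository _customers;
        private readonly IInventoryRepository _inventory;
        private readonly IOrderRepository _orders;

        public OrderPlacementService(ICustomerRepository customers,
                                     IInventoryRepository inventory,
                                     IOrderRepository orders)
        {
            _customers = customers;
            _inventory = inventory;
            _orders = orders;
        }

        public OrderConfirmation PlaceOrder(Guid customerId, Guid productId, int quantity)
        {
            var customer = _customers.GetById(customerId);              // repository 1
            _inventory.Reserve(productId, quantity);                    // repository 2
            var order = _orders.Create(customer, productId, quantity);  // repository 3
            return new OrderConfirmation { OrderId = order.Id };
        }
    }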
I hope this helps somewhat.
You can use a SOAP formatter for .NET Remoting, but the resulting service will probably be hard to consume as a service, and it will surely be very chatty.
If you want your domain model to be consumed as a service, it should be designed as a service. As stated in domain-driven design, a service is stateless, so it won't expose your objects directly. Your service should expose methods that provide meaningful business operations, each executed as a single unit.
You should usually consider the model in your client to be in a different bounded context, because its concerns will be somewhat different from those on the server.
What I'm trying to wrap my head around is how I expose my repository and model, which will be on the server. Is it even possible to expose complex business objects that have state via a web service, or will I have to use a proprietary technology that is not language/platform agnostic, like .Net remoting, EJB, COM+, DCOM, etc?
A good domain model is going to be highly behavioral and designed around the problem domain (and your discussions with domain experts); I'd thus argue against designing it to be exposed to remote consumers (in the same way that designing it from the database or the GUI first is a bad idea).
Instead, I'd look at using a style like REST or messaging, decide on the interface you want to expose, and then map to/from the domain. So if you went with REST, you'd design your resources and API (URLs, representations, etc.) and then fulfill them from the domain model.
If this becomes unnatural, you can always have multiple models; for example, mapping a separate read-only, presentation-specific model to the same data source (or one which wraps the complex behavioral domain model) is an approach I've used several times.
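A brief sketch of that read-model idea, with hypothetical names: the behavioral domain object stays rich, and a flat, read-only resource is mapped from it for the wire.

    using System;

    // Behavioral domain object, designed around the problem domain.
    public class Shipment
    {
        public Guid Id { get; }
        public DateTime? DispatchedOn { get; private set; }

        public Shipment(Guid id) => Id = id;

        public void Dispatch(DateTime when) => DispatchedOn = when;
        public bool IsDispatched => DispatchedOn.HasValue;
    }

    // Read-only, presentation-specific representation exposed over REST.
    public class ShipmentResource
    {
        public Guid Id { get; set; }
        public string Status { get; set; }

        public static ShipmentResource From(Shipment shipment) => new ShipmentResource
        {
            Id = shipment.Id,
            Status = shipment.IsDispatched ? "dispatched" : "pending",
        };
    }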
Some other constraints are that I don't want to have to keep loading the complex domain object from the database, or passing it all over the wire, every time I want to do an operation.
Look at caching in HTTP and at supporting multiple representations for a resource; also look at caching within your data-access solution.
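As a small, framework-agnostic illustration of the HTTP side of that (the method shape is mine, not from any particular framework): compare the client's If-None-Match header against the resource's current ETag and skip the body when the cached copy is still valid.

    using System;

    public static class CacheNegotiation
    {
        // Returns 304 Not Modified with no body when the client's cached
        // copy is still current; otherwise 200 with a fresh representation.
        public static (int StatusCode, string ETag, string Body) Get(
            string ifNoneMatch, string currentETag, Func<string> render)
        {
            if (ifNoneMatch == currentETag)
                return (304, currentETag, string.Empty);
            return (200, currentETag, render());
        }
    }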
Validation and error message information will also need to be displayed to the user. I want to be able to logically call a lot of my domain object operations as if they were running on the same machine.
You can either represent this as a resource or, more likely, look at HTTP status codes and the response bodies you'd want to use in those situations.
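For example, a sketch of the status-code route (all names are illustrative): a failed validation maps to a 4xx response whose body carries the messages the user needs to see.

    using System.Collections.Generic;

    public class ValidationResult
    {
        public List<string> Errors { get; } = new List<string>();
        public bool IsValid => Errors.Count == 0;
    }

    public static class ResponseMapper
    {
        // 200 with the resource on success; 422 Unprocessable Entity with
        // the validation messages when the request was well-formed but invalid.
        public static (int StatusCode, object Body) From(ValidationResult result, object resource) =>
            result.IsValid ? (200, resource) : (422, (object)result.Errors);
    }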