I'm working on a client/server product. Basically, the server transfers a document to the client side for editing. The client side has a full MVC architecture, and the document is the model.
Now the problems are:
There are some calculations in the model that need resources from the server.
For performance reasons, some parts of the model should be lazy loaded.
One example is an image in a document. The image is not loaded when the document is opened; instead, something loads it later, notifies the document once it has loaded, and the document then recalculates its layout.
My question is whether the communication code is part of the Model or the Controller. Or does it belong to some Context that is neither Model nor Controller? Or does that Context belong to the Model?
The model layer should be the one interacting with the data source. In the case of a client-server setup, where you have two separate and independent triads, the data source for the client's model layer would be the server's presentation layer.
Basically, your client-side model layer becomes a consumer of the server side.
It would help if you could provide a concrete calculation example or your document object model.
Let's break down the requirements:
There are some calculations in the model that need resources from the server.
This kind of calculation is better placed in the Model, because it needs resources from the server. If you put the logic in the Controller, then (see the sketch after this list):
The Controller needs access to the server (database), which breaks the MVC rules. Moreover, the Controller now knows about the connection (whether a connection string or physical file storage). If you later add another adapter/bridge, that is additional effort.
The calculation cannot be reused by other UI implementations. Say, in .NET, you put it in ASP.NET MVC and add the calculation to the Controller. If you later need to support a desktop UI, the calculation cannot be used as is (because it is already tainted with controller actions, carries a useless web dependency, etc.).
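To illustrate, here is a minimal Python sketch (all names are hypothetical) of keeping such a calculation in the Model behind a narrow gateway interface, so the Controller never sees the connection:

    from abc import ABC, abstractmethod

    class ResourceGateway(ABC):
        """Hides the server connection from the model."""
        @abstractmethod
        def fetch_rate(self, key: str) -> float:
            ...

    class Document:
        """The model: owns the calculation, not the connection details."""
        def __init__(self, gateway: ResourceGateway):
            self._gateway = gateway

        def calculate_total(self, amount: float) -> float:
            # Needs a server-side resource, but only through the gateway,
            # so any controller (web or desktop) can reuse this as is.
            return amount * self._gateway.fetch_rate("tax")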
For performance reasons, some parts of the model should be lazy loaded.
I'm not sure about your objective here, but let's go through it. I'm assuming that you have a Header model with a list of Details that needs to be lazy loaded. This can be achieved with two approaches.
The first approach is to implement lazy loading in the Details property; the second is to retrieve the list of Details for a specific Header (or its id) from the repository. Both produce the same result. IMHO I like the second better, because with it you can reuse the repository in other modules, and it enables you to select Details without a specific Header. As for placement, I believe it should be in the Model. Both approaches are sketched below.
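Here is a minimal Python sketch of both approaches, assuming a hypothetical data_source with a query method:

    class DetailRepository:
        """Second approach: fetch the Details for a given Header id on demand."""
        def __init__(self, data_source):
            self._data_source = data_source  # e.g. an ORM session or DAO

        def get_by_header_id(self, header_id):
            return self._data_source.query("details", header_id=header_id)

    class Header:
        """First approach: the details property lazy-loads on first access."""
        def __init__(self, header_id, detail_repository):
            self.id = header_id
            self._repository = detail_repository
            self._details = None

        @property
        def details(self):
            if self._details is None:  # loaded only when first accessed
                self._details = self._repository.get_by_header_id(self.id)
            return self._details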
I may have misunderstood the requirement, though.
I'm still new to Django and am trying to understand and implement the "fat model, skinny view" pattern. It makes sense to me that a model should be self-contained for reusability, though I don't currently see the use case for this particular model.
The model is a virtual machine for one of many cloud vendors. I have a polymorphic base model, VirtualMachine, which defines all the fields. I also have a specific model, VirtualMachineVendor, which implements the vendor-specific control functions for VirtualMachine. Examples would be vm_create() or vm_delete(), which handle the model's creation or deletion as well as the management of the cloud resource.
The view mainly processes the request, dispatches it to the correct model method, and prepares data for the template. I want to add functionality for creating a domain record using some independent Python code that communicates with the cloud provider.
Question: should the VirtualMachine model call this domain-creation method, or should this be something the view calls? In general, should a model call other model methods within the same or a different app, or should the model return control to the view after a call?
I've also been trying to make sense of these SO Q&As that mention a service layer for these types of methods:
Proper way to consume data from RESTFUL API in django
Separation of business logic and data access in django
Related question: is it fair, then, to say that "fat models" refers to the methods associated directly with manipulating the model's data?
This is really pretty arbitrary. I personally wouldn't put any code that calls an external API into the model itself; apart from anything else, that would complicate testing. More generally, I would treat model methods as having the database as their only dependency.
If you like, this could go in a utils module.
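For instance, here is a minimal sketch of such a utils module; ProviderClient and its create_record method are hypothetical stand-ins for the independent provider code:

    # utils.py
    class ProviderClient:
        """Hypothetical stand-in for the cloud provider's API client."""
        def create_record(self, name: str, address: str) -> dict:
            # Real code would issue the HTTP call to the provider here.
            return {"name": name, "address": address}

    def create_domain_record(client: ProviderClient, hostname: str, ip: str) -> dict:
        """External-API call kept out of the model: no database dependency."""
        return client.create_record(name=hostname, address=ip)

    # In the view, after the model method has done its database-only work:
    #     vm = VirtualMachine.vm_create(form_data)
    #     create_domain_record(ProviderClient(), vm.hostname, vm.ip_address)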
We have an internal application. As time went on, new applications that exchange data with each other were requested, and the interaction became bound to the database schema, meaning changes in the database require changes everywhere else. As we plan to build even more applications that will depend on the same data, this will quickly become an unmanageable mess.
Now I'm looking to abstract that interaction behind an API, but I'm having trouble choosing the right tool.
The interaction can at times be complex, meaning data is posted to one service, and once the action has completed, the sender should be notified of that.
Another example would be that some data has no context without data from other services. Let's say there is one service for [Schools] and one for [Students]. If a [School] gets deleted or changed, the [Student] needs to be informed about it immediately, and not when he comes to [School].
Advice? Suggestions? SOAP/REST/?
I don't think you need an API. In my opinion you need an architecture which decouples your database from the domain logic and the other parts of the application. Examples of such architectures are clean architecture, onion architecture, and hexagonal architecture (ports & adapters by its newer name). They share the same concepts: you have domain logic which does not depend on any framework, external lib, delivery method, data storage solution, etc. This domain logic communicates with the outside world through adapters that have well-defined interfaces. If you first design the inside of your domain logic and the interfaces of the adapters, and only after that the outside components, then it is called domain-driven design (DDD).
So, for example, if you want to move from MySQL to MongoDB, you already have a DataStorageInterface, and the only thing you need is to write a MongoDBAdapter which implements this interface, and of course migrate the data...
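A minimal Python sketch of that idea; DataStorageInterface comes from the example above, while the method names and the pymongo-style collection are assumptions:

    from abc import ABC, abstractmethod

    class DataStorageInterface(ABC):
        """Port: the domain logic depends only on this interface."""
        @abstractmethod
        def save_user(self, user: dict) -> None:
            ...

        @abstractmethod
        def find_user(self, user_id: str) -> dict:
            ...

    class MongoDBAdapter(DataStorageInterface):
        """Adapter: the only place that knows about MongoDB."""
        def __init__(self, collection):
            self._collection = collection  # e.g. a pymongo collection

        def save_user(self, user: dict) -> None:
            self._collection.insert_one(user)

        def find_user(self, user_id: str) -> dict:
            return self._collection.find_one({"_id": user_id})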
To design the adapters you can use two additional concepts: command query responsibility segregation (CQRS) and event sourcing (ES).

CQRS is for connecting delivery methods like REST, SOAP, web applications, etc. to the domain logic. For example, you can raise a CreateUserCommand from your REST API. After that, the proper listener in the domain logic processes that command, and on success it raises a domain event, like UserCreatedEvent. Your REST API can listen to that event and respond with a success message to the REST client. The UserCreatedEvent can be listened to by one or more storage adapters as well, so they can process the event and persist the new user. You don't necessarily have to use only a single database: if a relational database is faster for a specific type of query, you can use that, but if a NoSQL database suits the job better, you can use that instead. So you can use as many databases as you want for your queries; the only thing you need is to write a storage adapter for each of them. For example, if your REST client wants to retrieve the profile of a specific user, it can raise a GetUserProfileByIdQuery, and the domain logic can ask the adapter of a database which can serve that query. The adapter can then send, for example, an SQL query to a MySQL database and return the response.

With ES you add an EventStorage to your system, which stores the raised domain events. It can be very useful if you want to migrate your data from one query database to another: you create a new storage adapter for the new database and replay all of the domain events from the EventStorage in historical order to that adapter, so it can fill the new database with the relevant data. That's all; you don't have to write complicated migration scripts...
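Here is a toy Python sketch of that command/event flow; CreateUserCommand and UserCreatedEvent come from the description above, while the EventBus is a made-up, minimal stand-in for real infrastructure:

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class CreateUserCommand:
        user_id: str
        name: str

    @dataclass
    class UserCreatedEvent:
        user_id: str
        name: str

    class EventBus:
        """Routes raised domain events to all registered listeners."""
        def __init__(self):
            self._listeners = defaultdict(list)

        def listen(self, event_type, listener):
            self._listeners[event_type].append(listener)

        def raise_event(self, event):
            for listener in self._listeners[type(event)]:
                listener(event)

    def handle_create_user(command: CreateUserCommand, bus: EventBus):
        # The domain logic processes the command and, on success,
        # raises the domain event for all interested adapters.
        bus.raise_event(UserCreatedEvent(command.user_id, command.name))

    bus = EventBus()
    bus.listen(UserCreatedEvent, lambda e: print("storage adapter persists", e))
    bus.listen(UserCreatedEvent, lambda e: print("REST API acknowledges", e))
    handle_create_user(CreateUserCommand("42", "Alice"), bus)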
In your case I think you should at least create domain events and use event sourcing. That will totally decouple your database from the other parts of your application. Adding a REST or SOAP API can have a similar effect, but building HTTP connections just to access your database can slow down your application.
What is the best-practice approach to handling soft-deleted entities in an N-tier architecture? The architecture in question has a service layer and a repository layer. The repository is the only layer that has direct access to the database (well, through an ORM). Currently, the repository layer deals mostly with CRUD operations. Should this layer handle the retrieval of entities based on a given status?
Let me explain the use of status in our system. We want to use status to delete entities: instead of deleting a User entity, we would set its status to Deleted. Now, the User repository exposes a Get method. Calling Get without any parameters should return all Users in the system, regardless of their Status; but if we wanted to get only Active Users, would it be best to deal with that in the service layer or the repository layer? If we did it in the service layer, we would need to filter the response of the repository's Get method. If we did it in the repository layer, we would have Get take a Status enum, so you could call Get(Status.Active). What would be the best way to handle something like this?
I would suggest limiting Get(id) to retrieving the details of a specific entity and then implementing some type of Find/Search functionality that accepts a SearchCriteria object to define your search parameters (such as Status). To answer your question about where to do the filtering: I would suggest the database, since it is optimized for query execution.
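For example, a Python sketch of that shape; the db object and its fetch methods are hypothetical:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Optional

    class Status(Enum):
        ACTIVE = "active"
        DELETED = "deleted"

    @dataclass
    class SearchCriteria:
        status: Optional[Status] = None  # extend with more filters as needed

    class UserRepository:
        def __init__(self, db):
            self._db = db  # hypothetical data-access object

        def get(self, user_id):
            """Get(id): the details of one specific entity only."""
            return self._db.fetch_one("users", id=user_id)

        def find(self, criteria: SearchCriteria):
            """Find(criteria): the filtering happens in the database."""
            filters = {}
            if criteria.status is not None:
                filters["status"] = criteria.status.value
            return self._db.fetch_all("users", **filters)

    # usage: repository.find(SearchCriteria(status=Status.ACTIVE))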
We are planning a REST API for our application and are trying to decide whether or not we should implement separate controllers for the REST functionality.
We could use withFormat{} in our current controllers, but separating the REST functionality into different controllers feels somewhat cleaner.
That way we can build our API separately from the current controllers, and we could even move the REST controllers into another application, etc.
Any thoughts on this subject? Any real-world experience of what the best practice would be?
We recently faced the same decision and we decided to go for separate controllers for the REST API.
Advantages of separate controllers include cleaner/clearer controller actions and the possibility to support different versions of the REST API later on.
We also wanted to keep open the option of hosting the REST API on separate server instances. These servers would use exactly the same .war, but with different configuration settings for the feature toggles.
A disadvantage of separate controllers might be less DRY controller code. The duplication should be limited, though, since IMHO you should keep the controllers as thin as possible and extract shared logic into Grails services or helper classes.
I will be working with Grails soon, but so far I have little experience with it. In the web apps I have worked on, though, we always kept web services separate from the controller code. We also separated REST from SOAP; methods common to both lived in the service layer. It did indeed feel cleaner, and we didn't have to insert a lot of ifs into the methods.
I would, for a given resource, use one controller that interfaces with a service layer based on context (the media type received or requested: SOAP, JSON, XML, etc.). This way, you have a truly uniform resource identifier that can accept and return various media types, and the controller won't need to know anything but what method the user wants to perform, on what resource, and with what media type.
For instance, maybe the service layer returns objects that have methods such as toXml(), toSoap(), or toJson(). Then you can just ask the service layer to do its work and use a switch statement on the requested media type to either return the requested representation or, by default, respond with a 406 Not Acceptable status code. For unsafe or idempotent transactions, the object may have constructor or factory methods for a given media type, and then you just ask the service layer to do its work with that object. A sketch follows.
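A rough Python sketch of that dispatch, with made-up names and a match statement standing in for the switch:

    import json

    class ResourceResult:
        """Stand-in for an object returned by the service layer."""
        def __init__(self, data: dict):
            self.data = data

        def to_json(self) -> str:
            return json.dumps(self.data)

        def to_xml(self) -> str:
            return "<resource>%s</resource>" % self.data

    def show(result: ResourceResult, accept: str):
        # One controller action; only the representation varies.
        match accept:
            case "application/json":
                return 200, result.to_json()
            case "application/xml":
                return 200, result.to_xml()
            case _:
                return 406, "Not Acceptable"  # unsupported media type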
I've been trying to wrap my head around how to expose my domain objects to the client. Whether I'm using a rich client or the web, I want to use the MVP and repository patterns.
What I’m trying to wrap my head around is how I expose my repository and model, which will be on the server. Is it even possible to expose complex business objects that have state via a web service, or will I have to use a proprietary technology that is not language/platform agnostic, like .Net remoting, EJB, COM+, DCOM, etc?
Some other constraints are that I don't want to have to keep loading the complex domain object from the database, or passing it all over the wire, every time I want to do an operation. Some complex logic might be that certain areas of the screen are disabled or invisible based on the user's permissions in combination with the state of the object. Validation and error-message information will also need to be displayed to the user. I want to be able to logically call a lot of my domain object's operations as if they were running on the same machine.
With the web, you have free rein: you don't have to expose your objects across service boundaries, so you can make them as rich as you would like. I'm trying to create an N-tier architecture that is rich and works when the client calling the model is on a different machine.
You can expose your domain objects like any other object through REST or web services. I think the key is to understand that you will have to expose services that provide business value in a single call, and these do not necessarily map 1:1 to your repositories. So while a single service call on the server may use multiple repositories and perform various aggregations, the things you expose over any kind of web service should be more or less complete results. The operations you expose on the service should not expose individual repositories, but rather focus on meaningful operations that provide a given business value.
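For example, here is a small Python sketch of a coarse-grained service operation; all names are illustrative:

    from dataclasses import dataclass

    @dataclass
    class OrderSummary:
        """A complete result for the client, not a domain object."""
        order_id: str
        customer_name: str
        total: float

    class OrderService:
        def __init__(self, order_repository, customer_repository):
            self._orders = order_repository
            self._customers = customer_repository

        def get_order_summary(self, order_id: str) -> OrderSummary:
            # One meaningful operation, several repositories behind it;
            # the client receives a single finished result.
            order = self._orders.get(order_id)
            customer = self._customers.get(order.customer_id)
            return OrderSummary(order.id, customer.name, order.total)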
I hope this helps somewhat.
You can use a SOAP formatter for .NET Remoting, but the resulting service will probably be hard to consume as a service, and it will surely be very chatty.
If you want your domain model to be consumed as a service, it should be designed as a service.
As stated in domain-driven design, a service is stateless, so it won't expose your objects directly. Your service should expose methods that provide meaningful business operations that will be executed as a single unit.
Also, consider that the model in your client is usually in a different bounded context, because its concerns will be a bit different from those of the model on the server.
What I'm trying to wrap my head around is how I expose my repository and model, which will be on the server. Is it even possible to expose complex business objects that have state via a web service, or will I have to use a proprietary technology that is not language/platform agnostic, like .Net remoting, EJB, COM+, DCOM, etc?
A good domain model is going to be highly behavioral and designed around the problem domain (and your discussions with domain experts); I'd thus argue against designing it to be exposed to remote consumers (in the same way that designing it from the database or the GUI first is a bad idea).
Instead, I'd look at using a style like REST or messaging, decide on the interface you want to expose, and then map to/from the domain. So if you went with REST, you'd design your resources and API (URLs, representations, etc.) and then fulfill it from the domain model.
If this becomes unnatural, you can always have multiple models. For example, mapping a separate read-only, presentation-specific model to the same data source (or having it wrap the complex behavioral domain model) is an approach I've used several times.
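A small Python sketch of such a separate read-only presentation model, mapped from the behavioral domain object (all names are illustrative):

    from dataclasses import dataclass

    @dataclass(frozen=True)  # read-only by construction
    class AccountResource:
        id: str
        display_name: str
        can_withdraw: bool  # precomputed for the UI

    def to_resource(account) -> AccountResource:
        # Map the rich domain object to the flat representation exposed
        # over REST; the domain model itself is never sent over the wire.
        return AccountResource(
            id=account.id,
            display_name=account.owner.full_name,
            can_withdraw=account.balance > 0,
        )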
Some other constraints are that I don't want to have to keep loading the complex domain object from the database or passing it all over the wire every time I want to do an operation
Look at caching in HTTP and at supporting multiple representations for a resource; also look at caching within your data-access solution.
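As a sketch of the HTTP side, a conditional GET with a validator avoids re-sending the object when nothing has changed (the handler shape here is illustrative, not any particular framework's API):

    import hashlib

    def get_resource(body: str, if_none_match):
        etag = hashlib.sha1(body.encode()).hexdigest()
        if if_none_match == etag:
            # The client's cached copy is still valid: no body over the wire.
            return 304, {"ETag": etag}, ""
        return 200, {"ETag": etag, "Cache-Control": "max-age=60"}, body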
Validation and error message information will also need to be displayed to the user. I want to be able to logically call a lot of my domain object operations as if it were running on the same machine.
You can either represent this as a resource or, more likely, look at the HTTP status codes and the response bodies you'd want to use in those situations.
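For instance, a sketch of reporting validation failures with a status code plus a machine-readable body (the error shape is made up, not a standard):

    import json

    def update_user(payload: dict):
        errors = {}
        if not payload.get("email"):
            errors["email"] = "Email is required."
        if errors:
            # The request was well-formed but failed domain validation.
            return 422, json.dumps({"errors": errors})
        return 200, json.dumps({"status": "updated"})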