Admin interface to manage two related data sources - web services

In the project there are two data sources: one is the project's own database, the other is a (semi-)legacy web service. The problem is that the admin part has to keep them in sync and manage both so that the user doesn't have to know they're separate (or knows, but doesn't care).
Here's an example: there's a list of languages. Both apps - the project and the legacy one - need to use it. However, each adds its own meaning: the project may need an active/inactive flag, while the legacy app needs a language code.
But the admin part has to manage everything - language name, active/inactive flag, language code. When loading, data from both systems has to be merged and presented; when saving, data has to be updated in both systems.
So what's the best way to represent this separated data (to be used in the admin page)? Note that I use ASP.NET MVC / NHibernate.
How do I manage legacy data?
Do I connect the admin part to the legacy web service's external interface - which currently only has GetXXX() methods - and add the missing C[R]UD methods?
Or do I connect directly to the legacy database - which is possible, since I control it?
Where do I split/merge the data - in the controller/service layer, or in the repository/data layer?
In the controller layer I'll do "var viewmodel = new ViewModel { MyData = ..., LegacyData = ... };". The problem: the controller code gets cluttered with legacy concerns.
In the data layer, I'll do "var model = repository.Get(id)" and the model will contain data from both worlds; when I do "repository.Save(entity)" it will update both data sources - in the local DB only project-specific fields will be stored. The problems: a) a possibly leaky abstraction; b) data is always fetched from the web service even though it is only needed sometimes, and usually only for the admin part.
A modification: use ICombinedRepository<Language>, which provides the additional split/merge. Problem: I still need either a new model or IWithLegacy<Language, LegacyLanguage>...
Or have a single "sync" method: it would remove legacy items not present in the project item list, update those that are present, create legacy items that are missing, etc. - roughly like the sketch below.
Well, to summarize the main issues:
do I develop a CRUD interface on the web service, or connect directly to its database (which is under my complete control, so I may even later decide to move that web service's functionality into the main app or make it use the main DB)?
do I have separate classes for the project's and the legacy entities, managed separately, or do the project's entities carry all the legacy fields, managed transparently on save/load?
Anyway, are there any useful tips on managing mostly-duplicated data from different sources? What are the best practices?
In the non-admin part, I'd like to completely hide the notion of legacy data, which is what I do now behind the repository interfaces. But for the admin part it's not that clear or easy...

What you are describing here seems to warrant an Anti-Corruption Layer. You can find solutions related to this topic here: DDD, Anti Corruption layer, how-to?
When you have two conceptual Bounded Contexts but you're only using DDD for one of them, the Anti-Corruption Layer comes into play. When reading from your data source (a get operation [R]), the anti-corruption layer translates your legacy data into usable objects for your project. When writing to your data source (a set operation [CUD]), the anti-corruption layer translates your DDD objects into objects understood by your legacy code.
Whether or not to use the existing Web Service depends on whether or not you're willing to change existing code. Sticking with DRY practices, you don't want to duplicate what you already have. If you want to keep the Web Service, you can add CUD methods inside the anti-corruption layer without impacting your legacy application.
In the anti-corruption layer, you will want to make use of adapters and facades to bring together separate classes for your DDD project and the legacy application.
The anti-corruption layer is exactly where you handle splitting and merging.
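As a rough sketch of what that layer does (written in Python only for brevity - the class, repository and service method names here are hypothetical, not taken from your code), it merges on read and splits on write:
from dataclasses import dataclass

@dataclass
class AdminLanguage:
    # Merged view used only by the admin UI.
    name: str
    is_active: bool
    code: str

class LanguageAntiCorruptionLayer:
    def __init__(self, local_repository, legacy_service):
        self.local_repository = local_repository
        self.legacy_service = legacy_service

    def get(self, language_id):
        local = self.local_repository.get(language_id)          # name, is_active
        legacy = self.legacy_service.get_language(local.name)   # language code
        return AdminLanguage(local.name, local.is_active, legacy.code)

    def save(self, admin_language):
        # Split the merged object back into its two halves.
        self.local_repository.save(admin_language.name, admin_language.is_active)
        self.legacy_service.update_language(admin_language.name, admin_language.code)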
Let me know if you have any questions on this, as it can be a somewhat advanced topic. I'll try to answer as best I can.
Good luck!

Related

Should I use an internal API in a django project to communicate between apps?

I'm building/managing a Django project with multiple apps inside it. One stores survey data, and another stores classifiers that are used to add features to the survey data. For example: is this survey answer sad? 0/1. This feature gets stored along with the survey data.
We're trying to decide how and where in the app to actually perform this featurization, and I'm being recommended a number of approaches that don't make ANY sense to me, but I'm also not very familiar with django, or more-than-hobby-scale web development, so I wanted to get another opinion.
The data app obviously needs access to the classifiers app, to be able to run the classifiers on the data, and then reinsert the featurized data, but how to get access to the classifiers has become contentious. The obvious approach, to me, is to just import them directly, a la
# from inside the Survey app
from ClassifierModels.models import Classifier
cls = Classifier.objects.filter(name='Sad').first()  # or whatever, I'm used to Flask
data = Survey.objects.filter(question='How do you feel?').first()
labels = cls(data.responses)
# etc.
However, one of my engineers is saying that this is bad practice, because apps should not import one another's models. And that instead, these two should only communicate via internal APIs, i.e. posting all the data to
http://our_website.com/classifiers/sad
and getting it back that way.
So, what feels to me like the most pressing question: Why in god's name would anybody do it this way? It seems to me like strictly more code (building and handling requests), strictly less intuitive code, that's more to build, harder to work with, and bafflingly indirect, like mailing a letter to your own house rather than talking to the person who lives there, with you.
But perhaps in easier to answer chunks,
1) Is there REALLY anything the matter with the first, direct, import-other-apps'-models approach? (The only answers I've found say 'No!', but again, this is being pushed by my dev, who does have more industrial experience, so I want to be certain.)
2) What is the actual benefit of doing it via internal APIs? (I've asked, of course, but only get what feel like theoretical answers that don't address the concrete concern: more, and more complicated, code for no obvious benefit.)
3) How much do the size of our app, and team, factor into which decision is best? We have about 1.75 developers, and only, even if we're VERY ambitious, FOUR users. (This app is being used internally, to support a consulting business.) So to me, any questions of Best Practices etc. have to factor in that we have tiny teams on both sides, and need something stable, functional, and lean, not something that handles big loads, or is externally secure, or fast, or easily worked on by big teams, etc.
4) What IS the best approach, if NEITHER of these is right?
It's simply not true that apps should not import other apps' models. For a trivial refutation, think about the apps in django.contrib which contain models such as User and ContentType, which are meant to be imported and used by other apps.
That's not to say there aren't good use cases for an internal API. I'm in the planning process of building one myself. But they're really only appropriate if you intend to split the apps up some day into separate services. An internal API on its own doesn't make much sense if you're not in a service-based architecture.
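To illustrate the first point, this sort of cross-app import is exactly what those contrib models are there for (the SurveyResponse model here is hypothetical):
# survey/models.py - referencing another app's model is normal Django
from django.contrib.auth.models import User
from django.db import models

class SurveyResponse(models.Model):
    respondent = models.ForeignKey(User, on_delete=models.CASCADE)
    answer = models.TextField()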
I can't see any reason why you should not import an app's model from another one. Django itself uses several applications and their models internally (like auth and admin). Reading the applications section of the documentation, we can see that the framework has all the tools to manage multiple applications and their models inside a project.
However, it seems quite obvious to me that sending requests to your own application's API would make your code really messy and slow.
Without context it's hard to understand why your engineer considers this bad practice. He was maybe referring to database isolation (see "Working with multiple databases" in the documentation) or proper code isolation for testing.
It is right to think about decoupling your apps. But I do not think that an internal REST API is a good way to do it.
Neither is directly importing models and running queries and updates against another app a good approach. Every time you use a model from another app, you should be careful. I suggest you separate communication between apps into a simple service layer. Then your Survey app does not have to know the model structure of the Classifier app:
# from inside the Survey app
from ClassifierModels.services import get_classifier_cls
cls = get_classifier_cls('Sad')
data = Survey.objects.filter(question='How do you feel?').first()
labels = cls(data.responses)
# etc.
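The services module itself can stay tiny - something like this sketch (the classify method is made up; return whatever callable fits your classifiers):
# ClassifierModels/services.py
from .models import Classifier

def get_classifier_cls(name):
    # Other apps call this instead of touching the Classifier model directly.
    classifier = Classifier.objects.get(name=name)
    return classifier.classify  # hypothetical method that labels a list of responses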
For more information, you should read this thread: Separation of business logic and data access in django
More generally, you should create smaller, testable components. Nowadays I am interested in the "functional core and imperative shell" paradigm. Try Gary Bernhardt's lectures: https://gist.github.com/kbilsted/abdc017858cad68c3e7926b03646554e

Create a separate app for my REST API or place it inside my working app?

I'm building a simple GIS system on GeoDjango.
The app displays a set of maps, and I'm also attempting to provide a RESTful API for these maps.
I'm facing a decision whether to create a separate app for the API or to work inside my existing app.
The two apps are logically separate but they share the same models.
So what is considered better?
Although a case can be made for either approach, I think keeping the APIs inside their associated apps is the better one. Since the code in the APIs is going to depend on the models or other utility methods anyway, keeping the APIs in the same app leads to more cohesive code. Besides, the very ideology behind Django apps is that they can be isolated and reused.
There used to be a similar case with storing templates. In the initial days of Django, people preferred to store all the templates together in one global folder (with subdirectories named after the apps); however, in recent times even Django has started discouraging that approach in favour of storing templates in the respective app itself.
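In practice that just means the API code lives next to the models it exposes; for example, a minimal sketch with plain Django views and a hypothetical Map model (substitute whatever REST framework you use):
# maps/api.py - lives inside the existing app, next to maps/models.py
from django.http import JsonResponse
from .models import Map

def map_detail(request, map_id):
    m = Map.objects.get(pk=map_id)
    return JsonResponse({'id': m.pk, 'name': m.name})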
hspandher's answer is very solid and will allow for most of your needs to be implemented.
There is though another approach which may be a bit more complicated to achieve but gives you all the space you may need for experimentation and reusability potential:
Separate everything:
Backend:
Isolate your API from its visualization (see frontend below) and make it completely autonomous and self-contained.
That can be achieved by separating your apps inside your Django project and exposing the corresponding APIs, which must be the only way for an external party (e.g. a client, another app, etc.) to "talk" to any one of your apps.
Frontend:
Assuming that you have your APIs exposed, you effectively separated the visualization from the logic and therefore you have many options on how to visualize your maps.
For example, you can now build a React app which can make requests to your API and visualize the responses by using any of those tools: leaflet.js, D3.js, or anything that you like really.
Summary:
The benefits of this separation are:
Separation of logic and implementation.
Better maintainability.
Many tool and technology options to use.
Reusability.
As a side note, you can read about the 12-factor method and think about using it in your implementation.

How independent should Django apps be from one another?

I'm having trouble determining how I should split up the functions of my project into different apps.
Simple example: we have members, and members can have one or more services. Services can be upgraded, downgraded, have other services added on, and can also be cancelled. (This is extremely simplified; were it that simple in reality, I'd use a pre-made solution.)
My first thought was to make this into a 'member' application, and then a 'services' app that takes care of renewals, up/downgrades and cancellations.
I then thought I should probably make a renewal app, an up/downgrade app, and a cancellation app. But these apps would all depend on the same table(s) in the DB (members and services). I thought applications were supposed to be independent of one another. Is it OK to make applications that are heavily dependent on other apps' models?
Along the same lines, which application should I use to store the models to create the services table if so many apps use it?
I think your first thought was right: you don't get many benefits from splitting everything into multiple apps; on the contrary, it could become messy and hard to maintain.
The Django way of doing things depends a lot on the models. Each object is mapped to an entity in the data model, and your apps are mostly organised around that data model. So if you have an entity (service) that has different pieces, it is better to understand those pieces as parts of the same thing. The other entity (member) should be a separate app, since it is a different thing.
There is no penalty for importing models from different apps. The most important thing, anyway, is to build a consistent data model.
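For example, the services app can reference the members app's model directly (the field names are made up for illustration):
# services/models.py
from django.db import models
from members.models import Member  # importing another app's model is fine

class Service(models.Model):
    member = models.ForeignKey(Member, on_delete=models.CASCADE)
    name = models.CharField(max_length=100)
    status = models.CharField(max_length=20)  # e.g. active, upgraded, cancelled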
The point of apps is to allow code which is intended to be reused as an add-on by third parties. You probably won't want to split your project up much, if at all, into apps.

Files in domain model

What are the best practices for dealing with binaries in a domain model? I frequently have to associate images and other files with business objects, and a simple byte[] is not adequate even for the simplest of cases.
The files:
Do not have a fixed size and can be quite large, thus:
Have to be streamed or buffered, preferably in an asynchronous manner;
Must be cached both on the server and the client to avoid redundant transfer;
On unreliable connections the data transfer can easily be interrupted and has to be resumed - therefore the transfer could start not from the beginning of the file but from an arbitrary position.
Are handled differently than the rest of the data:
In web applications, they are not part of the page content but are downloaded by the browser separately;
Might be a black box that is handled by third-party software;
For performance reasons might not even be stored in the database.
How do we go about expressing such files in the domain model (or, more specifically, in model classes)? If the rest of the model is transferred via DTOs and WCF web services and persisted with NHibernate in the database, but the files are not necessarily so, how do we make file handling transparent and part of the overall transaction where applicable, yet support everything necessary for the files to be consumed not only in web applications but also in ordinary desktop applications?
For WPF and ASP.NET the file object must expose some form of Url property that can be data-bound to WPF controls or used in IMG or HTML tags. Uploading a file is a lot more complicated. Preferably, proper presentation and content practices such as MVVM must be maintained there.
I am really lost here as I am not satisfied with any of my previous solutions. What would you advice?
You have to be careful not to shoehorn too much functionality into a single class here. Your wording sounds a bit like you want a single "File" object that will do everything; this is not a good idea.
You will need a concept of a File representation that can be passed around everywhere, as you have identified - but this needs to be little more than an identifier and possibly a name. It is then up to individual components to decide how they treat it: for example, the HTML page may use a File JSON object and infer that jsFile.Id needs to be retrieved from ftp://xxx/uploads/{id} or something, while in order to display additional associated information a WCF service might receive the file id and look up info in a database.
It probably makes sense to have a FileAttributesDTO class or some such, just to distinguish it from when you are dealing with the physical file. You need to consider separation of concerns and nail down as many use cases as you can before you proceed. For example, will you really need additional information, or would a simple wrapper around an FTP service get you all you need?
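To make that concrete, the attributes class really can stay this small - sketched here in Python rather than your stack, with made-up fields, just to show the shape:
from dataclasses import dataclass

@dataclass
class FileAttributesDTO:
    # A lightweight handle passed around the system; the bytes live elsewhere.
    id: str        # key used to locate the physical file (DB row, FTP path, ...)
    name: str      # display name for UIs
    url: str = ""  # optional, for clients that bind directly to an address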

Handling the Same Class Definition From Multiple Web Services

The situation:
We have a library project that houses much of our code for the various integrations we work on. Many of the integrations consume web service apis, and my supervisor doesn't want 5 gazillion web service references added to the project.
What we generally do, then, is add a reference to a new project and copy the References.vb to the solution and just call the generated code. Not terribly convenient if changes are made to the service, but it works.
Recently, I ran into a problem where we have to use 3 web services for the same integration. 2 of these contain the same class definitions; however, they're in different namespaces because they belong to different services. This became a problem for me because one of the services searches for a user based on user ID, and the other pulls back blocks of users. Both return an object, or a list of objects, that is exactly the same semantically. And I need to process the data the same way, whether it came from one service or the other.
My solution, was to strip out the duplicated classes in the service and replace them with classes inherited from common base classes. This allowed me to work with both objects as if they were the same, however, it required modifying the generated web service proxy. Therefore this change will need to be made every time I need to regenerate the proxy.
I'm curious what you all might think a better solution to this would be.
You're going to regret playing games with copying Reference.vb and editing generated files.
Switch to WCF and you'll be able to tell it you want to reuse the types, instead of having multiple types that are more or less the same.
BTW, they would be "less" the same if not all of the web references are updated at the same time after a server change.
The other option would be to build an abstraction layer on top of the pre-generated web service proxies, such that when you make the calls to the abstraction layer you can always use the same objects, as they are squeezed into (and out of) the web service proxies inside the abstraction layer. This would also allow for unit testing :)
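As a language-agnostic sketch of that abstraction layer (Python only for brevity; the proxy property names are invented), each generated proxy type is adapted into one common type at the boundary:
from dataclasses import dataclass

@dataclass
class CommonUser:
    # The one user type the rest of the integration code works with.
    user_id: str
    name: str

def from_service_a(proxy_user):
    # Adapter for the proxy generated from service A.
    return CommonUser(proxy_user.Id, proxy_user.FullName)

def from_service_b(proxy_user):
    # Adapter for the proxy generated from service B.
    return CommonUser(proxy_user.UserId, proxy_user.Name)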
I think you really should be looking at WCF for .NET 3.5+, but for .NET 2.0 look at something like WSCF (Web Services Contract First), which defines the contracts in XML and generates a set of libraries reusable across services. E.g. you define a MyCompany.WS.Common namespace and use that namespace in multiple projects. The code generation then builds a shared library of types which get used across all the web services. We use this extensively in our .NET 2 solutions and it's great. We had to do some additional work around the code generation to get it to fit into our build process, but once that was done we never looked back.
We're migrating to .NET 3.5 over time, so WSCF will become obsolete.
Here's the link to the thinktecture site for WSCF.
wsdl.exe with the /sharetypes switch allows the same types to be used across multiple service definitions, provided the wire signatures are identical. I was unable to use it in my situation, though, because the various wsdl contracts were carelessly namespaced.