I have just started playing around with Django. I can foresee one app generating data that another could use (a bad example: a geomatics app crunching complex data to generate simple location data, which it passes to another app that uses it to drive some business logic). Having never done any web programming with frameworks, my first thought was ....globals... But that's obviously not a "good thing"!
Return a key to the client (either via a cookie or in the session) that can be used to retrieve the information from a datastore when needed.
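If it helps, here is a minimal Django sketch of that idea; the view names and the payload are hypothetical, not from the question:

```python
# views.py -- a minimal sketch; view names and payload are hypothetical.
from django.http import JsonResponse

def crunch_view(request):
    # App A: compute the result and stash it in the session under a key.
    # The session backend persists it server-side; the client only holds
    # the session cookie, which acts as the key.
    location = {"lat": 50.85, "lon": 4.35}  # stand-in for the real result
    request.session["location_data"] = location
    return JsonResponse({"status": "stored"})

def decide_view(request):
    # App B: fetch the result by key when it is needed.
    location = request.session.get("location_data")
    if location is None:
        return JsonResponse({"error": "no data yet"}, status=404)
    return JsonResponse({"decision": "deliver", "based_on": location})
```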
My question is regarding clean architecture. I am quite new to .NET Core, and my assignment is to integrate a third-party API into a console app so that the data is stored in a SQL database. I have been researching for the past two days, and I found many examples of how to create a repository service querying a local database, but I cannot find any decent example of how to integrate data from external APIs into a local data storage solution, a SQL database in my case.
Any information on how to structure services in a clean manner would help me a lot!
It basically boils down to the question of whether you need to store data from a third-party API locally. You are not the owner of that data.
In many cases it's better to query the external API each time you need some data. You don't know if the data will change or when, e.g. weather data or a currency conversion rate: the external system knows how to produce such results, so ask that system each time. It's often OK to cache data from third-party APIs, depending on the needs of your application (does your user need weather updates with each refresh, or is once a day enough, e.g. on the web site of a skiing area?).
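As an illustration, a minimal sketch of that cache-or-ask-each-time trade-off; the endpoint URL is a made-up placeholder:

```python
import time
import requests  # third-party HTTP client, assumed installed

_cache = {}  # (base, quote) -> (expires_at, rate)

def get_conversion_rate(base, quote, ttl=3600):
    # Serve from the cache while it is fresh; otherwise ask the
    # external system again -- it is the owner of the data.
    hit = _cache.get((base, quote))
    if hit and hit[0] > time.time():
        return hit[1]
    resp = requests.get(
        "https://api.example.com/rates",  # hypothetical endpoint
        params={"base": base, "quote": quote},
    )
    resp.raise_for_status()
    rate = resp.json()["rate"]
    _cache[(base, quote)] = (time.time() + ttl, rate)
    return rate
```

Setting ttl to zero degenerates to querying the API on every call; a day-long ttl would fit the skiing-area example.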
Now there can be cases where a third-party API is loosely coupled with your domain. E.g. you use a separate API to manage devices in a smart-home solution. In this case we are talking about a microservice architecture (Microservices by Fowler is a good start). It's OK in this case to build a local copy of the data from the third-party API. Imagine your service is only interested in how often devices get replaced. Then you may query the device-management API for broken devices and store this information locally in a form that is better for your application. You get the data, reorganize it so it fits you better, store it and use it.
So basically, as always with such broad questions: it depends. On what you are building, on what the external API provides, and on what you want to do with the data you query.
Getting technical on your question: you query data, e.g. via REST, from a third party. This gets you TheirObjectDto. You process this data. Their object may change; you are not the owner of the contract. So you create MyObject, which contains only the data that you need, save it to your DB (via a repository, if you want), and build an entity and a DB table for it. When the third-party API changes and returns TheirObjectDto2, you still work with MyObject; the minimal change for you is to convert TheirObjectDto2 to MyObject, and your app continues working. In a second step, if you want some new piece of information from TheirObjectDto2, you modify your internal data structure.
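The question is about .NET Core, but the pattern itself is language-agnostic; here is a compact Python sketch of it, with field names invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class TheirObjectDto:
    # Shape dictated by the third party; it may change without notice.
    device_id: str
    status_code: int

@dataclass
class MyObject:
    # Your internal model: only the fields your domain actually needs.
    device_id: str
    is_broken: bool

def to_my_object(dto: TheirObjectDto) -> MyObject:
    # The only place that knows their contract. When they start
    # returning TheirObjectDto2, you rewrite this mapper and the rest
    # of your app keeps working with MyObject.
    return MyObject(device_id=dto.device_id, is_broken=dto.status_code != 0)

# repository.save(to_my_object(dto))  # persist MyObject, never the raw DTO
```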
I am currently developing a computer-based test web app with Django, and I am trying to figure out the best way to persist the choices users make during the course of the test.
I want to persist the choices because they might leave the page due to a crash or something else and I want them to be able to resume from exactly where they stopped.
To implement this I chose Django sessions with the db backend, which in turn saves to the database. This resolves to a very bad design, because I don't want about 2000 users hitting my db every few seconds.
So my question is: are there any other ways I can go about implementing this feature that I don't know of? Thanks.
If your application is run in a browser, or to be specific, if you are designing a progressive web application, you can make use of browser storage systems such as localStorage, IndexedDB, cookies, etc.
This way, you wouldn't need to send the user's updated state back and forth to your backend; you could sync the state based on a specific condition or every n seconds/minutes.
We have an internal application. As time went on and new applications that exchange data with each other were requested, the interaction became bound to the database schema, meaning changes in the database require changes everywhere else. As we plan to build even more applications that will depend on the same data, this will quickly become an unmanageable mess.
Now I'm looking to abstract that interaction behind an API. Currently I have trouble choosing the right tool.
Interaction can at times be complex, meaning data is posted to one service, and once the action has been completed, the service should notify the sender of that.
Another example would be that some data has no context without the data from other services. Let's say there is one service for [Schools] and one for [Students]. If a [School] gets deleted or changed, the [Student] needs to be informed about it immediately, and not only when he comes to [School].
Advice? Suggestions? SOAP/REST/?
I don't think you need an API. In my opinion you need an architecture which decouples your database from the domain logic and the other parts of the application. Such architectures are, for example, clean architecture, onion architecture and hexagonal architecture (also known as ports & adapters). They share the same concepts: you have domain logic which does not depend on any framework, external lib, delivery method, data storage solution, etc. This domain logic communicates with the outside world through adapters that have well-defined interfaces. If you design the inside of your domain logic and the interfaces of the adapters first, and only afterwards the outside components, then it is called domain-driven design (DDD).
So for example if you want to move from MySQL to MongoDB, you already have a DataStorageInterface, and the only thing you need is to write a MongoDBAdapter which implements this interface, and of course migrate the data...
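Sketched in Python (the interface and method names are hypothetical):

```python
from abc import ABC, abstractmethod

class DataStorageInterface(ABC):
    # The port: the domain logic only ever talks to this interface.
    @abstractmethod
    def save_user(self, user: dict) -> None: ...

    @abstractmethod
    def find_user(self, user_id: str) -> dict | None: ...

class MySQLAdapter(DataStorageInterface):
    def save_user(self, user: dict) -> None:
        ...  # issue the SQL INSERT/UPDATE here

    def find_user(self, user_id: str) -> dict | None:
        ...  # SELECT the row and map it to a dict

class MongoDBAdapter(DataStorageInterface):
    # The new adapter: implement the same port, migrate the data,
    # and the domain logic never notices the switch.
    def save_user(self, user: dict) -> None:
        ...  # e.g. collection.insert_one(user)

    def find_user(self, user_id: str) -> dict | None:
        ...  # e.g. collection.find_one({"_id": user_id})
```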
To design the adapters you can use two additional concepts: command–query responsibility segregation (CQRS) and event sourcing (ES).

CQRS is for connecting delivery methods like REST, SOAP, web applications, etc. to the domain logic. For example, you can raise a CreateUserCommand from your REST API. After that, the proper listener in the domain logic processes that command, and on success it raises a domain event, like UserCreatedEvent. Your REST API can listen to that event and respond with a success message to the REST client. The UserCreatedEvent can be listened to by one or more storage adapters too, so they can process the event and persist the new user. You don't necessarily have to use only a single database: if a relational database is faster for a specific type of query, you can use that, but if a NoSQL database suits the job better, you can use that too. You can use as many databases as you want for your queries; the only thing you need is to write a storage adapter for each of them. For example, if your REST client wants to retrieve the profile of a specific user, it can raise a GetUserProfileByIdQuery, and the domain logic can ask the adapter of whichever database can serve that query. The adapter can then send, for example, an SQL query to a MySQL database and return the response.

With ES you add an EventStorage to your system, which stores the raised domain events. It can be very useful if you want to migrate your data from one query database to another: you create a new storage adapter for your new database and replay all of the domain events from the EventStorage in historical order against that adapter, so it can fill the new database with the relevant data. That's all; you don't have to write complicated migration scripts...
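A toy, framework-free sketch of that command → event → adapters flow, with all names purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class CreateUserCommand:
    user_id: str
    name: str

@dataclass
class UserCreatedEvent:
    user_id: str
    name: str

event_storage = []  # the ES part: every domain event is appended here
listeners = []      # storage adapters, API responders, ...

def handle(cmd: CreateUserCommand) -> None:
    # The domain logic processes the command; on success it raises an event.
    event = UserCreatedEvent(cmd.user_id, cmd.name)
    event_storage.append(event)  # persisted so it can be replayed later
    for listener in listeners:
        listener(event)          # fan out to every subscribed adapter

# A relational adapter and a document adapter can both subscribe:
listeners.append(lambda e: print(f"MySQL adapter: INSERT user {e.user_id}"))
listeners.append(lambda e: print(f"Mongo adapter: save document for {e.name}"))

handle(CreateUserCommand("42", "Ada"))
# Migrating to a new database then amounts to registering a new adapter
# and replaying event_storage against it in historical order.
```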
In your case I think you should at least create domain events, and use event sourcing. That will totally decouple your database from the other parts of your application. Adding a REST or SOAP API can have a similar effect, but making HTTP connections to access your database can slow down your application.
I have a Sencha Touch client application and a RESTful web service with the OAuth2 authorization protocol. I want to know how I can hold the access_token in my client application for further use. Right now I use a global variable to hold the token; is that the best way to do it?
Well, you can use a Store to save the data and keep it in localStorage or sessionStorage.
A Store is a good tool and keeps the code clean.
Update
Best practices are about maintainability (legible code), efficiency, dependability and usability. So if you use a framework, the best practice is to use its tools. That way, any programmer who knows the framework will understand the code faster.
With a Store you can keep data in localStorage or sessionStorage, so you have full control over how long you want to save the data.
Another advantage: with a Store you can keep data for multiple users, or group multiple pieces of data, without having to do a lot of work (like a profile or other data you need to save).
Sure, you can use your global variable without problems. But in my personal opinion, if you use a framework, use its tools.
This'll be my first question on this platform. I've done lots of development using Flex, WebORB and ASP.NET. We have solved concurrency problems with messaging (pessimistic concurrency control). This works pretty well, but it also makes the whole application dependent on the messaging: no messaging, no concurrency control.
I know that ASP.NET has version control in DataSets, but how would you go about using that in an RIA? It seems hard to store each DataSet in the client's session... If the client needed all products, I would have to store the DataSet in the client's session; when the client changed something on a product and saved it, I could then update the DataSet (stored in the session) and try to save it...
That seems like a lot of work and a lot of memory (because those products will be kept in the client's memory, the DataSet also needs to be kept in the server-side session).
I think the easiest way would be to give all DTOs a version number. If the client tried to save a DTO, I could compare its version number with the one in the database.
Lieven Cardoen
This is something I've done before: as the original data was coming from a SQL Server database, we just used a rowversion-typed column in each DTO to determine whether it had changed while the user was working on it.
At this point you can either barf on the error or try and figure out a way to merge the changes, but at least you can tell that it's changed underneath you :)
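For what it's worth, a minimal sketch of that check using an integer version column and DB-API "?" placeholders (as in pyodbc or sqlite3); the table and column names are made up. With SQL Server's rowversion, the database bumps the value itself on every update, so you would compare it rather than increment it:

```python
class ConcurrencyError(Exception):
    """Raised when the row changed between the client's read and write."""

def save_product(conn, dto):
    cur = conn.cursor()
    cur.execute(
        "UPDATE products SET name = ?, price = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",  # succeeds only if unchanged
        (dto.name, dto.price, dto.id, dto.version),
    )
    if cur.rowcount == 0:
        # Someone saved in the meantime: barf, or fetch the current
        # row and try to merge the changes.
        raise ConcurrencyError(f"product {dto.id} changed underneath you")
    conn.commit()
```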