I would like to build versioning for my Django models. I have already tried django-reversion, but I don't think it is a good fit for my use case, and I ran into problems with many-to-many through models and multi-table inheritance.
My use case:
I have a multi-tenant web app. Every tenant has its own pool of resources. Tenants can create documents that reference those resources.
Here is a simple diagram:
Now, on every update of the document, or of a resource that the document references, I want to create a version of the document.
So a version should show all changes to the document and to the referenced resources.
But on reverting to a version, only the direct values of the document should be restored, not the resources.
For example, a document:
Now I edit the document: I delete the referenced resource with id 1, and I also change the name of the resource with id 2.
When I revert this document to the first version, it should look like this:
But how can I achieve this?
I think I could use MongoDB to store a complete version of a document as serialized JSON data on every update, and create a signal for the resources so that when one changes, the corresponding documents also get a new version.
Then I would create a new MongoDB collection for each tenant.
The resources themselves should also be versioned, as should other models in my web app, but that is not important for this example.
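To make that plan concrete, here is a minimal sketch of the snapshot-and-revert idea using Django signals and pymongo. The Document and Resource models, their fields, the tenant_id attribute and the collection naming are all hypothetical placeholders, not a finished design:

```python
# versioning.py -- sketch only; Document, Resource, tenant_id and the Mongo URL are assumptions
from django.core import serializers
from django.db.models.signals import post_save
from django.dispatch import receiver
from pymongo import MongoClient

from myapp.models import Document, Resource  # hypothetical models

client = MongoClient("mongodb://localhost:27017")

def save_version(document):
    """Snapshot the document and its referenced resources into the tenant's collection."""
    collection = client["versions"][f"tenant_{document.tenant_id}"]
    collection.insert_one({
        "document_id": document.pk,
        # direct values of the document -- these are what a revert restores
        "document": serializers.serialize("json", [document]),
        # referenced resources -- stored so the version can display them, never reverted
        "resources": serializers.serialize("json", document.resources.all()),
    })

@receiver(post_save, sender=Document)
def document_changed(sender, instance, **kwargs):
    save_version(instance)

@receiver(post_save, sender=Resource)
def resource_changed(sender, instance, **kwargs):
    # every document that references the changed resource gets a new version too
    for document in instance.document_set.all():
        save_version(document)

def revert(version):
    """Restore only the document's direct values; the resources stay as they are now."""
    for obj in serializers.deserialize("json", version["document"]):
        obj.save()
```

One caveat: post_save does not fire when only the many-to-many set changes, so a real implementation would also hook m2m_changed on the through model.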
Related
I have the following question.
In my app I have a table called Networkoperator.
This table should be filled on creation. I have a JSON file with the data that needs to be inserted.
How is this achievable?
I was thinking of using the CDK to trigger a Lambda only on creation.
The Amplify documentation has a plugins page that lists a bunch of 3rd party plugins. There are two plugins that help seed DB data, but they both appear a bit out of date.
I'd be inclined to make a custom category (amplify add custom) and make it a dependency of the database table (ideally created and managed by Amplify). Then you can do anything you like in the 'build' step of the custom category's package.json. The aws-cdk-dynamodb-seeder package looks like it would do the trick (but was also published 2 years ago). Within the custom category, you can pull in the DB table name, etc., to avoid hardcoding.
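If the seeder package turns out to be too stale, the seeding itself is small enough to do in a script invoked from that build step. A sketch with boto3; the table name and the JSON file path are placeholder assumptions (in practice you would read the real table name from Amplify's generated outputs rather than hardcoding it):

```python
# seed_networkoperator.py -- illustrative only; table name and file path are assumptions
import json
import boto3

TABLE_NAME = "Networkoperator-dev"  # placeholder: resolve from Amplify's outputs instead

def seed():
    table = boto3.resource("dynamodb").Table(TABLE_NAME)
    with open("networkoperators.json") as fh:
        items = json.load(fh)
    # batch_writer batches the put requests and retries unprocessed items
    with table.batch_writer() as batch:
        for item in items:
            batch.put_item(Item=item)

if __name__ == "__main__":
    seed()
```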
I wish to make a custom model which has a foreign key relationship to the django_migrations table. Can this be done? If so, exactly how do I import the Django migrations model?
Edit: here is my plan.
1: I have one database in which many tables on the 'public' schema are used for application-wide data. I suspect this system will get very large, so for scalability I would like each client to have their own schema. These schemas will be dynamically generated upon creation of a client.
2: The client schemas are controlled by a separate Python/Django app which has its own set of tables/migrations.
3: I want the system to be flexible so that in the future, when new features are created, I can deploy those features to a few specific clients without having to deploy to everyone. Each client schema may therefore be at a different migration level, hence the need to track which migration level each client is at.
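On the import question itself: Django exposes the model behind django_migrations through MigrationRecorder. This is an internal, undocumented API, so treat the following as a use-at-your-own-risk sketch (ClientMigrationState is an invented tracking model):

```python
# models.py -- relies on MigrationRecorder.Migration, a Django internal, not a public API
from django.db import models
from django.db.migrations.recorder import MigrationRecorder

class ClientMigrationState(models.Model):  # hypothetical tracking model
    client_name = models.CharField(max_length=100)
    # references a row in the django_migrations table
    migration = models.ForeignKey(MigrationRecorder.Migration, on_delete=models.CASCADE)
```

Because rows in django_migrations are deleted when a migration is unapplied, and this model is not covered by Django's backwards-compatibility policy, it may be more robust to record the app label and migration name as plain CharFields instead of a real foreign key.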
By default, the AWS Amplify transformers create a table for each GraphQL type.
But according to the DynamoDB documentation, it is best practice to:
keep the number of tables as small as possible;
keep entries that are often queried together within the same table.
My impression is that the Amplify way of doing things contradicts the guidance above.
I am new to both NoSQL and Amplify.
Can someone suggest ways to address these issues?
I think we're in a bit of a transition or gray area here. I'm very new to Amplify and have been investigating moving to a single-table design; there are sources (below) indicating that it has always been possible, but you'd have to write everything in VTL templates. Then in 2020 they released direct Lambda resolver support: https://youtu.be/EOQqi6Yun7g?t=960 (clip)
However, it seems like you lose access to the #auth directive (and probably others because you're no longer going to use #model) along with a lot of the nice out-of-the-box functionality that's available with Amplify's multi-table approach.
At this point, being that I'm developing a new app, I'm going to stick with the default multi-table design to hasten the process of getting the app functional.
Trying to implement the single-table design seems to go against what the Amplify team recommends, and it requires more manual work. You'd have to manually create custom Lambda functions (AppSync) and code queries to DynamoDB for each data access element, and manage authorization through some other means which I'm not aware of at this time. Maybe someone can chime in here...
Single table vs multi table info
Using Amplify with single table:
https://youtu.be/EOQqi6Yun7g
Single vs Multi Clip:
https://youtu.be/1WF_wped808?t=1251 (clip)
https://www.alexdebrie.com/posts/dynamodb-single-table/ (towards bottom)
https://youtu.be/EOQqi6Yun7g?t=1288 (clip)
Example single table design by Alex Debrie:
https://gist.github.com/dabit3/96dc51e688b18a7d40fc534331758c56
More Discussion:
https://stackoverflow.com/a/56438716/1956540
Basic Setup steps
I set up a single table by following the instructions below. Again, you don't use #model for this. Also, I think you have to include a type Query {} in your schema for it to compile, but I could be wrong here.
So the basic steps are:
Create a single table (amplify add storage)
amplify push
Create your schema in the schema.graphql file.
Create supporting lambda function (amplify add function)
Note: if you look at the example here, I believe you can create one entry point that routes to all the other methods: https://gist.github.com/dabit3/96dc51e688b18a7d40fc534331758c56#lambda
Add the DynamoDB query code in the function (see the sketch after these steps).
amplify push
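As a rough illustration of that query-code step, here is what a direct Lambda resolver over a single table might look like. The table name, the PK/SK key schema and the getCustomerOrders field are invented for the example; the event shape is the one AppSync passes to direct Lambda resolvers:

```python
# handler.py -- sketch of an AppSync direct Lambda resolver over a single table
import os
import boto3
from boto3.dynamodb.conditions import Key

table = boto3.resource("dynamodb").Table(os.environ.get("TABLE_NAME", "AppTable"))

def get_customer_orders(args):
    # single-table pattern: a customer and its orders share one partition key
    resp = table.query(
        KeyConditionExpression=Key("PK").eq(f"CUSTOMER#{args['customerId']}")
        & Key("SK").begins_with("ORDER#")
    )
    return resp["Items"]

# one entry point that routes to the other methods, as in the gist linked above
RESOLVERS = {
    "getCustomerOrders": get_customer_orders,
}

def handler(event, context):
    # direct Lambda resolvers receive the GraphQL field name and arguments in the event
    return RESOLVERS[event["info"]["fieldName"]](event["arguments"])
```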
Complete steps for Setting up a single Table:
https://catalog.us-east-1.prod.workshops.aws/workshops/53b10bf8-2271-4ab4-bfd2-39e878a90dc8/en-US/lab2/1-vtl (both "Connecting to an existing DynamoDB table" and "Direct Lambda Resolver" steps)
I'm not trying to be negative about Amplify; it is awesome, and I love what they are doing with this product. I just think it's very new to everyone, and I'm hoping this post is no longer valid next year and we continue to see great progress from the team.
I am doing a POC in Django, and I was trying to create an admin console module for inserting, updating and deleting records through the Django admin console via models, which was working fine.
I have 2 questions.
1. I need to have model objects for existing tables which are present in a particular schema, say schema1.table1.
As of now I have been doing the POC on the public schema.
So can this be done against a fixed, defined schema, and if yes, how? Any reference would be very helpful.
2. I also want to update a few columns in the table through the console, while the rest of the columns are filled in automatically, like current timestamp and created date. Is this possible through the default Django console, and if yes, kindly share any reference.
Steps for 1
What I have done so far is create a class in models.py with the attributes author, title, body and timeofpost.
Then I used sqlmigrate after makemigrations app to create the table, and after migrating I have been using the Django admin console to insert and update records in the created table. But this is for the POC only.
Now I need to do the same for existing tables, so that I can insert or update records in those existing tables through the admin console.
Also, the tables are getting created in the public schema by default. But I am using Postgres, and the existing tables are present in different schemas; I want to insert, update and delete rows in these existing tables.
I am stuck here, as I don't know how to configure a model against existing tables in a database schema other than public so that I can interact with them through the Django console.
Steps for 2:
I want the user to give input for only a few columns; for example, in this case the time of creation should not be entered by the user, but should instead be taken care of automatically when the row is created or updated.
Thanks
In order for Django to "interact" with an existing database you need to create a model for it which can be done automatically as shown here. This assumes that your "external" database isn't going to be changed often because you'll have to keep your models in sync which is tricky - there are other approaches if you need that.
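The automatic route is the inspectdb management command, which generates unmanaged model stubs from existing tables. A sketch of what the output might look like for the schema1.table1 case (field names are invented; the quoted db_table value is a commonly used Postgres workaround for addressing a non-default schema, since Django has no first-class schema setting):

```python
# Generated with: python manage.py inspectdb  (then hand-edited)
from django.db import models

class Table1(models.Model):  # illustrative name for an existing table
    author = models.CharField(max_length=200)
    title = models.CharField(max_length=200)

    class Meta:
        managed = False                 # Django will not create or migrate this table
        db_table = 'schema1"."table1'   # quoting trick so Postgres sees "schema1"."table1"
```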
As for working with multiple database schemas - is there a reason you can't put your POC table in the same database as the others? Django supports multiple databases, but it will be harder to setup. See here.
Finally, it sounds like you are interested in setting the Django default field attribute. For an example of current time see here.
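For the second question, a minimal sketch of both options: the default attribute mentioned above, and the auto_now_add/auto_now shortcuts (the Post model is just an example):

```python
from django.db import models
from django.utils import timezone

class Post(models.Model):  # illustrative model
    title = models.CharField(max_length=200)
    # the "default" approach: a callable evaluated when the row is created
    timeofpost = models.DateTimeField(default=timezone.now)
    # alternatively, auto_now_add sets the value once at creation and
    # auto_now refreshes it on every save(); both are hidden from admin forms
    # created = models.DateTimeField(auto_now_add=True)
    # updated = models.DateTimeField(auto_now=True)
```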
How do I bind a DropLink using a custom dataprovider?
More info:
I am trying to build a product catalogue site using Sitecore. Each product in the Sitecore content tree can have a star rating and a short text review attached to it (which will be linked to a user extended with a profile provider, but that is another question).
I am planning to store the review information in an external database and reference it using a custom dataprovider. I have downloaded the NorthwindDataProvider from the Shared Source (here) and have altered it to use a table which contains the rating, the text and a uniqueidentifier field storing the ID of the Sitecore product the review is attached to.
The template field is a droplink and the datasource is set to the products in the catalogue.
When I edit a review in the custom dataprovider using the Sitecore content editor, the droplink states 'Value not in selection list', even if I select one of the populated products and save using Sitecore.
It is saving the ID in the database, but if I look at the raw value, it displays the ID without the curly brackets. Working droplink fields' raw values appear to contain the brackets.
To create a review, I am using a jquery post to a webservice which writes to the database using an external datacontext. Should I be using some Sitecore API to use the custom dataprovider instead?
Any information on using custom dataproviders would be helpful. The documentation I've been able to find states what can be done, but I'm struggling to find actual implementations.
So the first thing is that you have a template field, and you're using droplink, which is going to store the GUID of the selected item. I'm not quite clear on whether you're pointing the datasource to a Sitecore item, but that's essential if you're using droplink. Here's what I would suggest instead as the most straightforward way to do this:
Create a template with fields that handle the logic dealing with your catalog items. How you do that is your choice, and Sitecore doesn't care, since it is only going to deal with the item and all it cares about is finding an item... you write business logic to manipulate the external data.
Once you have a folder that stores your catalog items, you could easily write a script, triggered by the rules engine in Sitecore or by a Sitecore task that runs regularly, to add, update or remove the corresponding Sitecore items for your catalog entries.
Another option, more complex to implement but a valid approach if you have multiple data sources on your site, is to use an object framework (like Entity Framework) as a data object layer that allows you to create and populate common objects from any data source.
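One more note on the 'Value not in selection list' symptom you mentioned: Sitecore's raw values for item references are GUIDs wrapped in curly brackets (conventionally upper-case), so a bare GUID written by your webservice won't match any entry in the selection list. Normalizing the value before writing it should make the field resolve; shown in Python purely to illustrate the target format (your actual service can do the same in its own language):

```python
# illustrates the raw-value format Sitecore expects for an item reference
import uuid

def to_sitecore_id(value: str) -> str:
    # "04dad0fd-db66-4070-881f-17264ca257e1" -> "{04DAD0FD-DB66-4070-881F-17264CA257E1}"
    return "{%s}" % str(uuid.UUID(value)).upper()
```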
Hope this is helpful!