I have a scenario and want to know the best possible way to handle it.
I have a user who has n number of addresses.
For example, the addressList (of ids) that I get at the frontend is -
addresses=[1,2,3,4]
Once all the addresses are fetched to the frontend, the user can delete one or more of them.
Note - the delete only removes the address object from the list (at the frontend); it does not permanently delete it from the database.
So here, the addressList can now be -
addresses=[1,4]
The user can also add any number of new addresses, so the new addressList may be -
addresses=[1,4, {newAddressDetail}, {newAddressDetail}]
Now this updated data (addresses) is sent over to the backend for the update process.
I would like to know how best to handle this scenario at the backend.
The requirements are -
Delete all previously saved addresses that are not in the current payload.
Do not alter previously saved addresses that are in the current payload.
Create the new addresses that are not yet in the database.
It is not advised to delete an address from the database when the user removes it from the frontend. Addresses are not editable at the frontend, and the data shown here is only for explanation purposes and is not technically accurate.
I can provide more details if required.
I think your best bet is to use sets in Python.
So in your backend, something like:
# ids the frontend kept vs. dicts describing brand-new addresses
kept_ids = {a for a in addresses_from_frontend if isinstance(a, int)}
new_items = [a for a in addresses_from_frontend if isinstance(a, dict)]
existing_ids = set(user.addresses.values_list("id", flat=True))
# create what the database has never seen, delete what the frontend no longer sends
Address.objects.bulk_create([Address(user=user, **item) for item in new_items])  # assumes a user FK on Address
user.addresses.filter(id__in=existing_ids - kept_ids).delete()
It's not going to look exactly like this; you'll probably have to adapt it to however your payload distinguishes existing ids from new address objects. But the main point is that sets are an efficient way to compute the differences, and bulk_create() plus delete() on a filtered queryset keep your database query count down.
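One extra note: since creates and deletes happen in the same request, it may be worth wrapping the whole sync in a database transaction so a failure partway through can't leave half-updated data. A minimal sketch, where sync_addresses is just a hypothetical name for the set logic above:

from django.db import transaction

with transaction.atomic():
    sync_addresses(user, addresses_from_frontend)  # hypothetical wrapper around the set logic above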
I have a web application that uses DynamoDB to store large JSON objects and performs simple CRUD operations on them via a web API. I would like to add a new table that acts as a categorization of these values. The user should be able to select, from a selection box, which category an object belongs to. If a desirable category does not exist, the user should be able to create a new category by specifying a name, which will then be available to other objects in the future.
It is critical to the application that each of these categories be given an integer ID that increments, starting at 1. These auto-generated numbers will become reproducible serial numbers for backend reports, which will not use the user-visible text name.
So I would like a simple API available from the web frontend that allows me to:
A) GET /category : produces { int : string, ... } of all categories mapped to an ID
B) PUSH /category : accepts a string and stores it under the next integer ID
Here are some ideas for how to handle this kind of project.
Store it in DynamoDB with integer indexes. This has some benefits but leaves a lot to be desired. Firstly, there's no auto-incrementing ID in DynamoDB, but I could certainly read the current state of the table, create a new ID, and store the result. That might have issues with consistency and race conditions, though there's probably a way to do it safely. It might, however, be a big anti-pattern to use DynamoDB this way.
Store it in DynamoDB as one object in a table with some random index, i.e. just store the whole mapping as a single JSON object. This really abandons the notion of tables in DynamoDB and uses it as a simple file store. It might also run into race conditions.
Use AWS ElastiCache to get a Redis key-value store. This might be "the right" decision, but the downside is that ElastiCache is an always-on offering where you pay per hour. For a low-traffic web site like mine I'd be paying a minimum of about $12/mo, and I would really prefer pay-per-access/update pricing given the low volume. I'm not sure Redis has an auto-increment feature built in exactly the way I'd need it, but it's trivial to write a transaction that gets the length of the table, adds one, and stores a new value. Race conditions are easily avoided with this solution.
Use a SQL database like AWS Aurora or MySQL. This has the same upsides as Redis, but it's even more overkill, costs more, and is still always on.
Run my own in-memory web service, MongoDB, etc. Again, you're paying for containers running constantly. Writing my own thing is obviously silly, but I'm sure there are services that match this need perfectly; they would all require an always-running container, though.
Is there a good way to store a simple list or integer mapping like this without a constant monthly cost? Is there a better way to do this with DynamoDB?
Store the maxCounterValue as an item in DynamoDB.
For the PUSH /category, perform the following:
Get the current maxCounterValue.
TransactWrite:
Put the category name and id into a new item with id = maxCounterValue + 1.
Update maxCounterValue to maxCounterValue + 1, adding a ConditionExpression to check that maxCounterValue = :valueFromGetOperation.
If the TransactWrite fails, start again at step 1 and retry up to X more times (a rough boto3 sketch of this follows).
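A minimal boto3 sketch of those steps, assuming a single "categories" table whose item with id 0 holds maxCounterValue (the table name, key shape, and attribute names are my assumptions, not part of the question):

import boto3

dynamodb = boto3.client("dynamodb")
TABLE = "categories"                 # assumed table name
COUNTER_KEY = {"id": {"N": "0"}}     # assumed: item 0 stores maxCounterValue

def push_category(name, retries=5):
    for _ in range(retries):
        # 1. Get the current maxCounterValue
        counter = dynamodb.get_item(TableName=TABLE, Key=COUNTER_KEY)
        current = int(counter["Item"]["maxCounterValue"]["N"])
        new_id = current + 1
        try:
            # 2. TransactWrite: put the new category and bump the counter atomically
            dynamodb.transact_write_items(TransactItems=[
                {"Put": {
                    "TableName": TABLE,
                    "Item": {"id": {"N": str(new_id)}, "name": {"S": name}},
                }},
                {"Update": {
                    "TableName": TABLE,
                    "Key": COUNTER_KEY,
                    "UpdateExpression": "SET maxCounterValue = :new",
                    # fail the whole transaction if another writer incremented first
                    "ConditionExpression": "maxCounterValue = :old",
                    "ExpressionAttributeValues": {
                        ":new": {"N": str(new_id)},
                        ":old": {"N": str(current)},
                    },
                }},
            ])
            return new_id
        except dynamodb.exceptions.TransactionCanceledException:
            continue  # lost the race; re-read the counter and try again
    raise RuntimeError("could not allocate a category id after retries")

GET /category can then be a plain Scan over the same table, skipping the counter item.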
I'm building a product with Zend 2 and Doctrine 2, and it requires a separate table for each user to hold data unique to them. I've made an entity that defines what that table looks like, but how do I change, at run time, the name of the table the data is persisted to or retrieved from?
Alternatively am I going to be better off giving each user their own database, and just changing which DB I am connecting to?
I'd question the design choice first. What happens if you create a new user at runtime? The table would have to be created first. Furthermore, what kind of data are you storing? To me this sounds like a pretty common multi-client setup, like:
tbl_clients
- id
- name
tbl_clientdata
- client_id
- data_1_value
- data_2_value
- data_n_value
If you really want to silo users' data, you'd have to go the separate-databases route. But that only works if each "user" is really independent of the others. Think very hard about that.
If you're building some kind of software-as-a-service, and user A and user B are just two different customers of yours with no relationship to each other, then an N+1 database setup might be appropriate: one database for each of your N users, plus one "meta" database which just holds user accounts (and maybe billing-related stuff).
I've implemented something like this in ZF2/Doctrine2, and it's not terribly bad. You just create a factory for EntityManager that looks up the database information for whatever user is active, and configures the EM to connect to it. The only place it gets a bit tricky is when you find yourself writing some kind of shared job queue, where long-running workers need to switch database connections with some regularity -- but that's doable too.
I'm considering an app which will store customer data. Given the way buckets work in Couchbase, all customer data will be in one bucket. It appears that I have two choices:
Implement multi-tenancy in views, by assigning a field to each record that indicates the customer it belongs to.
Implement it by making the customer ID part of every key.
It seems, though, that since I will be using views, I'll really want to do both. In option 2, I need the data in the record so that it can be indexed (or maybe I can pull part of the key out in the map phase and index on the customer), and in option 1, I'd also want the customer in the key as a check when retrieving data, to make sure I don't send the wrong customer's data down the line.
The problem is, this is a service where multiple customers will interact, and sometimes one customer will create some data and another will view it, at the first customer's request. But putting an ACL on each record that lists everyone who is authorized to view it would be problematic, to say the least.
I bet there is a common methodology or design pattern to answer this question, and would appreciate some pointers to best practices.
I'm also concerned about performance if the indexes cover both the relevant piece of data and the customer ID; a large number of different customers would presumably make the indexes much less efficient (but maybe not).
Here are my thoughts on your questions:
[Concerning items #1 and 2] - It seems, though, that since I will be using views, I'll really want to do both.
This doesn't seem to make sense to me. In Couchbase, the map phase can include content from both the key and the value. It makes little sense to store the data in both the key and the value, as you are guaranteed to have 1:1 duplication there. Store it wherever it makes the most sense to store it; in this case, probably the value.
The problem is, this is a service where multiple customers will interact, and sometimes one customer will create some data and another will view it, at the first customer's request. But putting an ACL on each record that lists everyone who is authorized to view it would be problematic, to say the least.
My site also has multi-tenant data stored in a single database. In my case, I use object unique identifiers as my keys. By default, customers can access all objects that belong to them (I have a user object, and each user is associated with a customer account). Users may also have additional permissions assigned to them, whereby a single object from another customer can be added to their user account, granting them access to view that object.
The alternative is "security through obscurity": use GUIDs as random identifiers and give customers access to view any object they have the GUID for.
I would not, however, try to store the permissions on the objects themselves; that would quickly become unwieldy. Think about your specific use case, decide what simple approach works for the majority of cases, and just don't support the other 1-2% of cases. A rough sketch of the default-plus-grants check is below.
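For illustration only, a framework-agnostic sketch of that check; the user/object shapes here are invented for the example and are not Couchbase-specific:

def can_view(user, obj):
    # default rule: a user may see every object owned by their own customer
    if obj["customer_id"] == user["customer_id"]:
        return True
    # explicit per-user grants cover the cross-customer sharing case
    return obj["id"] in user.get("shared_object_ids", set())

# usage: alice (customer 1) was granted access to one of customer 2's objects
alice = {"customer_id": "cust-1", "shared_object_ids": {"obj-42"}}
doc = {"id": "obj-42", "customer_id": "cust-2"}
assert can_view(alice, doc)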
On my website I'm going to award points for certain activities, similarly to Stack Overflow. I would like to calculate the value based on many factors, so each computation for each user might take, for instance, 10 SQL queries.
I was thinking about caching it:
in memcache,
in the user's row in the database (so that wherever I fetch the user from the database, I can easily show the points).
Storing it in the database seems easy, but on the other hand it's redundant information, so I decided to ask in case there is an easier and cleaner solution that I've missed.
I'd highly recommend this app for storing the calculated values in the model: https://github.com/initcrash/django-denorm
Memcache is faster than the db... but if you already have to retrieve the record from the db anyway, having the calculated values cached in the rows you're retrieving (as a 'denormalised' field) is even faster, plus it's persistent.
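If you'd rather hand-roll it than pull in django-denorm, a minimal sketch of the denormalised column could look like this (the Profile model, the related names answers/comments, and the point weights are assumptions for the example):

from django.db import models

class Profile(models.Model):
    user = models.OneToOneField("auth.User", on_delete=models.CASCADE)
    points = models.IntegerField(default=0)  # cached, denormalised value

    def recalculate_points(self):
        # run the expensive multi-query calculation once...
        total = (self.user.answers.count() * 10       # assumed related name / weight
                 + self.user.comments.count() * 2)    # assumed related name / weight
        # ...and persist the result on the row itself
        Profile.objects.filter(pk=self.pk).update(points=total)
        self.points = total

Call recalculate_points() from whatever signals or views change the underlying activity; reads then stay a plain column lookup wherever you already load the user.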
I have a Django model in use in a production application, and I need to change the name and data type of a field with zero downtime for the site. So here is what I was planning:
1) Create the new field in the database that will replace the original field
2) Every time an instance of the model is loaded, convert the data from the original field, store it in the new field, and save the object (only save the object if the new field is empty)
3) Over time, the original field can be removed once every object has a non-blank new field
What method can I attach to for the second step?
Won't you have to change your business logic (and perhaps templates) first to accommodate the new field name?
Unless stuff gets assigned to the field in question in dozens of places in your code, you could (after creating the field in the database):
1) adapt the code to read from the old field and write to the new field (recognizing both names),
2) change the data in the database from the old field to the new one via locking / an .update() call, etc. (see the batched sketch after this answer),
3) remove the old field name from the model/views/templates completely.
Without downtime, I don't see how users of your site can avoid getting "old" values for a few seconds (depending on how many rows are in the table, how costly the conversion to the new data type is, etc.).
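A rough sketch of step 2 done in batches from a shell or management command; MyModel, old_field, new_field, and the str() conversion are placeholders for your actual model and data-type conversion:

from myapp.models import MyModel  # hypothetical model with old_field / new_field

for obj in MyModel.objects.filter(new_field__isnull=True).iterator():
    # the inner isnull filter ensures we never overwrite a row that the
    # read-old/write-new application code has already migrated
    MyModel.objects.filter(pk=obj.pk, new_field__isnull=True).update(
        new_field=str(obj.old_field)  # replace with the real old -> new conversion
    )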
Sounds complex, and it affects a lot of production code.
Are you trying to avoid doing this in bulk because of downtime? What volume of data are you working with?
Have you looked at any of the Django migration tools that are out there? South is a very popular one:
http://south.aeracode.org/
As you seemingly can't afford ANY downtime whatsoever (I wouldn't want your job!!!), you probably don't want to risk overriding the model's constructor. What you could try instead is catching the post_init signal...
https://docs.djangoproject.com/en/1.0/ref/signals/#django.db.models.signals.post_init
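A minimal sketch of that signal approach; MyModel, old_field, new_field, and the str() conversion are placeholders, and you'd want to verify how it behaves on large querysets before relying on it:

from django.db.models.signals import post_init
from django.dispatch import receiver

from myapp.models import MyModel  # hypothetical model with old_field / new_field

@receiver(post_init, sender=MyModel)
def backfill_new_field(sender, instance, **kwargs):
    # only touch rows already in the db whose new field is still empty
    if instance.pk and instance.new_field is None and instance.old_field is not None:
        instance.new_field = str(instance.old_field)  # replace with the real conversion
        instance.save()  # per the plan: save only when the new field was empty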