I'm facing a vexing issue in which two adjacent lines of code seem to be committed separately to the database, with one of them hanging out for a full day or more before getting committed.
if form.is_valid():
    instance = form.save(commit=False)
    if form.cleaned_data['assigned_to_mtm'].count() <= 0:
        instance.status = "Unassigned"
    elif instance.status == 'Unassigned':
        instance.status = "Assigned"
    instance.save()
    form.save_m2m()
With the above, I am finding the assigned_to_mtm reflected right away in the database. The status, however, isn't. Instead, when a separate save (even one happening the next day) updates the status to something else, this saved status comes in right afterward with the same timestamp, reverting the value to what it should have been at the time the mtm was saved.
I'm astonished that I can't get these two to happen in a single transaction (I tried the @transaction.non_atomic_requests decorator and doing it myself; that didn't help), let alone that the instance.save() hangs out uncommitted for so long and only gets committed when the status is updated again separately.
I might not even care, if the order of saves were at least preserved, but confoundingly, my earlier save is always committed after the next save.
What am I doing wrong here? Can someone provide insight into why this happens?
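For reference, the single-transaction version I tried by hand looks roughly like this (just a sketch, assuming a Django version that has transaction.atomic; form and field names as above):

from django.db import transaction

if form.is_valid():
    with transaction.atomic():  # both the status update and the m2m write should share one transaction
        instance = form.save(commit=False)
        if form.cleaned_data['assigned_to_mtm'].count() <= 0:
            instance.status = "Unassigned"
        elif instance.status == 'Unassigned':
            instance.status = "Assigned"
        instance.save()
        form.save_m2m()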
@action(detail=False, methods=["get"])
def home_list(self, request):
    data = extra_models.objects.order_by("?")
    print(data)
    paginator = self.paginator
    results = paginator.paginate_queryset(data, request)
    serializer = self.get_serializer(results, many=True)
    return self.get_paginated_response(serializer.data)
What I want is for the extra_models objects to come back in a random order, without duplication, every time the home_list API is called.
However, I want the random results paginated in chunks of 10 (the pagination option from settings.py is applied).
The current problem is that the first 10 appear randomly, but when the next 10 appear, some of the first ones are mixed in.
In other words, data is being duplicated.
Duplicates do not occur within the same page.
If you move to the next page, data from the previous page is mixed in.
Even if I print(data) or print(serializer.data) in the view, no duplicate data shows up.
However, data duplication occurs from /home_list?page=2 when calling the actual API.
Which part should I check?
You should expect this behaviour when you're dealing with .order_by("?").
Every time a request hits the server, Django reshuffles the objects; it does not remember the previous request or page.
You are doing nothing wrong here. The only reason this happens is order_by("?"). The API is stateless, which means that on the second call, for page=2, it does not know which rows were sent for page=1 and simply returns another random sample.
The simplest solution is to order your data deterministically (ascending or descending).
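If you really need a random order that stays stable across pages, one workaround (not part of the original answer, and only reasonable for modest table sizes) is to seed a shuffle with a value stored in the session, for example:

import random

from rest_framework.decorators import action

@action(detail=False, methods=["get"])
def home_list(self, request):
    # Reuse one seed per session so every page sees the same shuffled order.
    seed = request.session.setdefault("home_list_seed", random.randint(0, 2**31 - 1))
    pks = list(extra_models.objects.values_list("pk", flat=True))
    random.Random(seed).shuffle(pks)
    # Load the rows and put them back into the seeded order (loads the whole table).
    objs = {obj.pk: obj for obj in extra_models.objects.all()}
    data = [objs[pk] for pk in pks]
    results = self.paginator.paginate_queryset(data, request)
    serializer = self.get_serializer(results, many=True)
    return self.get_paginated_response(serializer.data)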
I'm working with a survey app, so I need to save all the answers a user gives in the database. The way I'm doing it is this:
for key, value in request.POST.items():
    if key != 'csrfmiddlewaretoken':  # I don't want to save the token info
        item = Item.objects.get(pk=key)  # I get the question (item) I want to save
        if item == None:
            return render(request, "survey/error.html")
        Answer.objects.create(item=item, answer=value, user=request.user)
Taking into account that Django by default closes database connections after each request (i.e. it does not use persistent connections), my questions are:
In case the dictionary has, for example, the answers to 60 questions (so it will iterate 60 times), would it open and close the connection 60 times, or does it only do it once?
Is there a better way to save POST information manually? (Without using Django forms, since for various reasons I currently need to do it manually.)
This is definitely not a good way to store Answers in bulk, since:
you fetch the Item object separately for every single question;
your code does not handle a missing item correctly: in that case .get() will raise an exception, and the Django middleware will (likely) render a 500 page; and
it makes a separate query to create each of these objects.
We can create the objects in bulk to reduce the number of queries. Typically we will create all elements with a single query, although depending on the database and the amount of data, it might take a limited number of queries.
We furthermore do not need to fetch the related Item objects at all; we can just set the item_id field instead, the "twin" of the item ForeignKey field, like:
from django.db import IntegrityError

try:
    answers = [
        Answer(item_id=key, answer=value, user=request.user)
        for key, value in request.POST.items()
        if key != 'csrfmiddlewaretoken'
    ]
    Answer.objects.bulk_create(answers)
except IntegrityError:
    return render(request, 'survey/error.html')
The bulk_create will thus insert all the objects in a small number of queries and significantly reduce the time of the request.
Note however that bulk_create has some limitations (listed on the documentation page). It might be useful to read those carefully and take them into account. Although I think in the given case, these are not relevant, it is always better to know the limitations of the tools you are using.
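For example, if the answer list is very large and you hit backend limits on a single insert, bulk_create accepts a batch_size argument (the value below is only an illustration):

# Split the insert into chunks of at most 500 rows per query.
Answer.objects.bulk_create(answers, batch_size=500)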
I have multiple context processors, and in each one I have to get the user from the request. Each of them looks like this:
def UploadForm(request):
    user = request.user
    Uplo = UploadForm(request.POST or None, initial={user})
    return {'Uplo': Uplo}
I saw that this is not efficient, since I'm requesting the user multiple times, so I thought about writing one big context processor where I define all the forms at once.
def AllForms(request):
    user = request.user
    Uplo = UploadForm(request.POST or None, initial={user...})
    SetForm = SetForm(request.POST or None, initial={user...})
    ...
    return {'Uplo': Uplo, 'SetForm': SetForm}
Can anybody tell me whether I gain anything here? What is the common standard for context processors? I could not find anything on SO.
Getting the user from the request is not a big deal; it is an O(1) operation.
However, if the multiple context processors are not doing different things and can be done in one go, it is better to create one big context processor as you say, the reason being that otherwise you have to enter and exit a function multiple times in the same request.
Anyway, if you want a definitive difference, you can just print timings in the separate and the combined context processors.
And yes, if you are hitting the database every time, you should combine them and optimise the number of times you have to hit the db.
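For the timing comparison suggested above, a rough sketch (the form names come from the question; the 'user' initial key is illustrative):

import time

def all_forms(request):
    start = time.perf_counter()
    user = request.user
    data = request.POST or None
    context = {
        'Uplo': UploadForm(data, initial={'user': user}),
        'SetForm': SetForm(data, initial={'user': user}),
    }
    # Print how long building all the forms took for this request.
    print('all_forms took %.4f seconds' % (time.perf_counter() - start))
    return context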
There are a lot of topics on Django concurrency, but after checking many of them, I don't feel I have found my answer when it comes to transactions.
Django version 1.3.1. PostgreSQL version 8.4.7.
A very simple version of my models could look like this:
class Member(Model):
    money = PositiveIntegerField(default=0)
    user = OneToOneField(User, related_name='member', primary_key=True)

class Bet(Model):
    total_money = PositiveIntegerField(default=0)
I also have a table Money, which is a relation between Member and Bet. It's not directly linked to my problem, but it helps me monitor it, because it can't be impacted by any concurrency issue: I just have to count the rows of Money to test whether the money field of Member and the total_money field of Bet are correct.
I can't rely only on the table Money though; I need my fields to be correct, because I filter a lot using them.
My first try for the bid() function was something like this (just with a lot more modifications to a lot more tables):
def bid(user_pk, bet_pk, value):
    # create Money object
    member = User.objects.get(pk=user_pk).member
    member.money = F('money') - value
    member.save()
    bet = Bet.objects.get(pk=bet_pk)
    bet.total_money = F('total_money') + value
    bet.save()
This version was working just fine until I got my first crash in the middle of one transaction.
I also had to copy-paste all the tests from my clean() functions into bid(), because I can't really use clean() or full_clean() in this case (especially if bet raises after member has been saved).
So I decided to give Django transactions a try.
@transaction.commit_manually
def bid(user_pk, bet_pk, value):
    try:
        # create Money object
        member = User.objects.get(pk=user_pk).member
        member.money -= value
        member.clean()
        member.save()
        bet = Bet.objects.get(pk=bet_pk)
        bet.total_money += value
        bet.clean()
        bet.save()
    except:
        transaction.rollback()
        raise
    else:
        transaction.commit()
But without the possibility of using F() objects inside a manual transaction (which makes sense), I ended up with a lot of concurrency issues.
I see only two solutions:
Only create Money objects during the bid()/transaction, then have an asynchronous worker (Celery ?) that updates the related fields in Member and Bet.
Create a list of bid()/transaction (Redis ?), and make all transactions that modify money related fields synchronous.
Am I missing an obvious and easier solution ?
If not, what solution would you recommend, and with which technology?
Would this work?
@transaction.commit_on_success
def bid(user_pk, bet_pk, value):
    Member.objects.filter(user__pk=user_pk).update(money=F('money') - value)
    Bet.objects.filter(pk=bet_pk).update(total_money=F('total_money') + value)
I have a page that displays multiple formsets, each of which has a prefix. The formsets are created using formset_factory with the default options, including extra=1. Rows can be added or deleted with JavaScript.
If the user is adding new data, one blank row shows up. Perfect.
If the user has added data but form validation failed, in which case the formset is populated with POST data using MyFormset(data, prefix='o1-formsetname') etc., only the data that they have entered shows up. Again, perfect. (the o1 etc. are dynamically generated, each o corresponds to an "option", and each "option" may have multiple formsets).
However if the user is editing existing data, in which case the view populates the formset using MyFormset(initial=somedata, prefix='o1-formsetname') where somedata is a list of dicts of data that came from a model in the database, an extra blank row is inserted after this data. I don't want a blank row to appear unless the user explicitly adds one using the JavaScript.
Is there any simple way to prevent the formset from showing an extra row if the initial data is set? The reason I'm using initial in the third example is that if I just passed the data in using MyFormset(somedata, prefix='o1-formsetname') I'd have to do an extra step of reformatting all the data into a POSTdata style dict including prefixes for each field, for example o1-formsetname-1-price: x etc., as well as calculating the management form data, which adds a whole load of complication.
One solution could be to intercept the formset before it's sent to the template and manually remove the row, but the extra_forms attribute doesn't seem to be writeable and setting extra to 0 doesn't make any difference. I could also have the JavaScript detect this case and remove the row. However I can't help but think I'm missing something obvious since the behaviour I want is what would seem to be sensible expected behaviour to me.
Thanks.
Use the max_num keyword argument to formset_factory:
MyFormset = formset_factory([...], extra=1, max_num=1)
For more details, check out limiting the maximum number of forms.
One hitch: presumably you want to be able to process more than one blank form. This isn't too hard; just make sure that you don't use the max_num keyword argument in the POST processing side.
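A rough sketch of that split (MyForm and somedata stand in for your form class and initial data from the question):

if request.method == 'POST':
    # No max_num here, so any number of submitted forms is accepted.
    PostFormset = formset_factory(MyForm, extra=1)
    formset = PostFormset(request.POST, prefix='o1-formsetname')
else:
    # max_num=1 keeps the extra blank row from appearing alongside initial data.
    EditFormset = formset_factory(MyForm, extra=1, max_num=1)
    formset = EditFormset(initial=somedata, prefix='o1-formsetname')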
I've come up with a solution that works with Django 1.1. I created a subclass of BaseFormSet that overrides the total_form_count method such that, if initial forms exist, the total does not include extra forms. Bit of a hack perhaps, and maybe there's a better solution that I couldn't find, but it works.
from django.forms.formsets import BaseFormSet, TOTAL_FORM_COUNT

class SensibleFormset(BaseFormSet):
    def total_form_count(self):
        """Returns the total number of forms in this FormSet."""
        if self.data or self.files:
            return self.management_form.cleaned_data[TOTAL_FORM_COUNT]
        else:
            if self.initial_form_count() > 0:
                total_forms = self.initial_form_count()
            else:
                total_forms = self.initial_form_count() + self.extra
            if total_forms > self.max_num > 0:
                total_forms = self.max_num
            return total_forms
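Hypothetical usage, wiring the subclass in through formset_factory's formset argument (MyForm and somedata are stand-ins for your form class and initial data):

MyFormset = formset_factory(MyForm, formset=SensibleFormset, extra=1)
formset = MyFormset(initial=somedata, prefix='o1-formsetname')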