In Knockout.js there is a function called destroy() (see the bottom of this page).
It says that it is useful for Rails developers, as it adds a _destroy attribute to an object in an observable array.
I'm using Django and trying to use the same function to know which objects to delete from my database, and as far as I understand, a Django deserialized object only contains the pk and whatever is in the fields object.
This is what the JSON looks like:
{"pk": 1,
"model": "eventmanager.datetimelocgroup",
"fields": {"event": 10},
"_destroy": "true"
}
As of now I have very ugly but working code. I was wondering if there is any shorter way to detect whether a deserialized object has a destroy flag.
My current code looks like this:
ra = []
removejson = json.loads(eventslist)
for i, a in enumerate(removejson):
    if '_destroy' in a:
        ra.append(i)
for index, event in enumerate(serializers.deserialize("json", eventslist)):
    if index in ra:
        try:
            e = Event.objects.get(id=event.object.pk)
            e.delete()
        except ObjectDoesNotExist:
            pass
    else:
        event.save()
I was wondering if there is a better way than going through the JSON multiple times.
This one-liner should work (please understand it before trying it out):
Event.objects.filter(
    id__in=[
        x['fields']['event'] for x in json.loads(eventslist) if '_destroy' in x
    ]
).delete()
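For completeness, the delete-or-save split from the question can also be done in a single pass over the deserialized stream. This is only a sketch; it keeps the question's convention of deleting the Event whose id matches the deserialized object's pk, so adjust the lookup if the flag is meant to refer to fields.event instead:

import json
from django.core import serializers

payload = json.loads(eventslist)
# primary keys of the serialized objects that carry a _destroy flag
destroy_pks = {x['pk'] for x in payload if '_destroy' in x}

for wrapper in serializers.deserialize("json", eventslist):
    if wrapper.object.pk in destroy_pks:
        # filter().delete() is a no-op when the row is already gone,
        # so the try/except ObjectDoesNotExist is not needed
        Event.objects.filter(id=wrapper.object.pk).delete()
    else:
        wrapper.save()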
I have a list of objects with this kind of structure returned in my API:
SomeCustomModel => {
    itemId: "id",
    relatedItem: "id",
    data: {},
    created_at: "date string"
}
I want to return a list that contains only unique relatedItem ids, keeping for each one the entry that was created most recently.
I have written this and it seems to work:
id_tracker = {}
query_set = SomeCustomModel.objects.all()
for item in query_set:
    if item.relatedItem.id not in id_tracker:
        id_tracker[item.relatedItem.id] = 1
    else:
        query_set = query_set.exclude(id=item.id)
return query_set
This works, but I am wondering if there is a cleaner way of writing this using only Django aggregations.
I am using MySQL, so the distinct("relatedItem") aggregation is not supported.
You should try to do this within SQL. You can use Subquery to accomplish this. Here's the example from the Django docs:
from django.db.models import OuterRef, Subquery
newest = Comment.objects.filter(post=OuterRef('pk')).order_by('-created_at')
Post.objects.annotate(newest_commenter_email=Subquery(newest.values('email')[:1]))
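Applied to your model, the same pattern can keep only the newest row per relatedItem. This is a sketch assuming the field names from your example (relatedItem, created_at):

from django.db.models import OuterRef, Subquery

# for each row, find the newest row sharing its relatedItem...
newest = SomeCustomModel.objects.filter(
    relatedItem=OuterRef('relatedItem')
).order_by('-created_at')

# ...and keep only the rows that are that newest entry
query_set = SomeCustomModel.objects.filter(
    pk=Subquery(newest.values('pk')[:1])
)

This stays a queryset, so it can still be filtered, ordered, or sliced afterwards, and the correlated subquery works on MySQL.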
Unfortunately, I haven't found anything that can replace distinct() in a Django-esque manner. However, you could do something along the lines of:
list(set(map(lambda x: x['relatedItem_id'], query_set.order_by('created_at').values('relatedItem_id'))))
or
list(set(map(lambda x: x.relatedItem_id, query_set.order_by('created_at'))))
which are a bit more Pythonic.
However, you are saying that you want to return a list yet your function returns a queryset. Which is the valid one?
I have a table that contains values saved as a dictionary.
FIELD_NAME: extra_data
VALUE:
{"code": null, "user_id": "103713616419757182414", "access_token": "ya29.IwBloLKFALsddhsAAADlliOoDeE-PD_--yz1i_BZvujw8ixGPh4zH-teMNgkIA", "expires": 3599}
I need to retrieve only the user_id value from the "extra_data" field, not the whole dictionary, like below:
event_list = Event.objects.filter(season_id=season_id, event_status_id=2).values('extra_data')
If you are storing a dictionary as text, you can easily convert it to a Python dictionary using eval, although I don't know why you'd want to, as it opens you up to all sorts of potential malicious code injection.
event_list = eval(Event.objects.filter(season_id=season_id, event_status_id=2).values('extra_data')[0]['extra_data'])
user_id = event_list['user_id']
print user_id
Would give:
"103713616419757182414"
Edit:
On deeper inspection, that's not a Python dictionary; you could use a JSON library to parse it, or declare what null is, like so:
null = None
event_list = eval(Event.objects.filter(season_id=season_id, event_status_id=2).values('extra_data')[0]['extra_data'])
user_id = event_list['user_id']
Either way, the idea of storing structured data in a Django TextField is fraught with danger that will come back to bite you. The best solution is to rethink your data structures.
This method worked for me. However, it requires a JSON-compliant string:
import json
json_obj = json.loads(event_list)
dict1 = dict(json_obj)
print dict1['user_id']
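Applied to the queryset from the question, a minimal sketch (assuming each extra_data row holds one JSON object) could look like this:

import json

rows = Event.objects.filter(season_id=season_id,
                            event_status_id=2).values_list('extra_data', flat=True)
for raw in rows:
    data = json.loads(raw)  # the stored text is JSON (note the "null"), not a Python literal
    print(data['user_id'])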
Sorry about the confusing title, but I don't know how to describe it better.
I need to run a model method on the object I am editing via PUT in Django REST Framework; the method uses some of the new data from the PUT to calculate new values that should be saved on the same model.
Example:
An item with {'amount': 2, 'price': 0, 'total': 0} is already stored in the database.
I then update price to 1 using a normal PUT request through django-rest-framework.
The model has a helper function called update_total() which I need to call to update the total field in the database (in this case to 2, i.e. 2*1).
The item is updated in the database, but the response returned from django-rest-framework still shows total=0. After fetching the object anew, total is 2 as expected.
I need total to be 2 in the response from the PUT itself, not only after a re-grab of the object. But how?
I have tried several things (none of which work):
Updating attrs in a validator to the new value.
Using post_save() in ListCreateAPIView to update the data.
Using pre_save() in ListCreateAPIView
Updating instance in restore_object() (even though it isn't for this purpose)
Does this look like a bug? Or is there another trick?
I kinda found a solution, but it feels somewhat dirty.
In my serializer's restore_object() I put code like this:
new_values = instance.update_counters()
for k, v in new_values.items():
    self.data[k] = v
and in my model's update_counters() function, I return a dict of what I changed.
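For illustration only, the model side of that could look roughly like this; the field names and update_counters() are placeholders built from the amount/price/total example above:

from django.db import models

class Item(models.Model):
    amount = models.IntegerField(default=0)
    price = models.DecimalField(max_digits=10, decimal_places=2, default=0)
    total = models.DecimalField(max_digits=10, decimal_places=2, default=0)

    def update_counters(self):
        # recalculate the derived field from the freshly saved values,
        # persist it, and return the changes so the serializer can merge
        # them into self.data for the PUT response
        self.total = self.amount * self.price
        self.save(update_fields=['total'])
        return {'total': self.total}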
For a mock web service I wrote a little Django app that serves as a web API, which my Android application queries. When I make requests to the API, I can also hand over an offset and limit so that only the necessary data is transmitted. Anyway, I ran into the problem that Django gives me different results for the same query to the API. It seems as if the results are returned round-robin.
This is the Django code that will be run:
def getMetaForCategory(request, offset, limit):
    if request.method == "GET":
        result = {"meta_information": []}
        categoryIDs = request.GET.getlist("category_ids[]")
        categorySet = set(toInt(categoryIDs))
        categories = Category.objects.filter(id__in=categoryIDs)
        metaSet = set([])
        for category in categories:
            metaSet = metaSet | set(category.meta_information.all())
        metaList = list(metaSet)
        metaList.sort()
        for meta in metaList[int(offset):int(limit)]:
            relatedCategoryIDs = getIDs(meta.category_set.all())
            item = {
                "_id": meta.id,
                "name": meta.name,
                "type": meta.type,
                "categories": list(categorySet & set(relatedCategoryIDs))
            }
            result['meta_information'].append(item)
        return HttpResponse(content=simplejson.dumps(result), mimetype="application/json")
    else:
        return HttpResponse(status=403)
What happens is the following: if all MetaInformation objects were Foo, Bar, Baz and Blib and I set the limit to 0:2, then I would get [Foo, Bar] with the first request, and the exact same request would return [Baz, Blib] when I run it a second time.
Does anyone see what I am doing wrong here? Or is it the Django cache that somehow gets into my way?
I think the difficulty is that you are using a set to store your objects, and slicing that - and sets have no ordering (they are like dictionaries in that way). So, the results from your query are in fact indeterminate.
There are various implementations of ordered sets around - you could look into using one of them. However, I must say that I think you are doing a lot of unnecessary and expensive unique-ifying and sorting in Python, when most of this could be done directly by the database. For instance, you seem to be trying to get the unique list of Metas that are related to the categories you pass. Well, this could be done in a single ORM query:
meta_list = MetaInformation.objects.filter(category__id__in=categoryIDs)
and you could then drop the set, looping and sorting commands.
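If the offset/limit slice should stay in the database as well, something along these lines might work; distinct() and the ordering field are assumptions here, so substitute whatever field the sort was actually meant to use:

meta_list = (MetaInformation.objects
             .filter(category__id__in=categoryIDs)
             .distinct()           # collapse duplicates from the join
             .order_by('name')     # a stable ordering makes the slice deterministic
             [int(offset):int(limit)])

Slicing the queryset translates to LIMIT/OFFSET in SQL, so only the requested window is fetched.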
Is there a way that I can save the model by using a dictionary?
For example, this works fine:
p1 = Poll.objects.get(pk=1)
p1.name = 'poll2'
p1.description = 'poll2 description'
p1.save()
But what if I have a dictionary like {'name': 'poll2', 'description': 'poll2 description'}?
Is there a simple way to save such a dictionary directly to Poll?
drmegahertz's solution works if you're creating a new object from scratch. In your example, though, you seem to want to update an existing object. You do this by accessing the __dict__ attribute that every Python object has:
p1.__dict__.update(mydatadict)
p1.save()
You could unwrap the dictionary, making its keys and values act like named arguments:
data_dict = {'name': 'foo', 'description': 'bar'}
# This becomes Poll(name='foo', description='bar')
p = Poll(**data_dict)
...
p.save()
I found that only this variant worked cleanly for me.
Also, in this case all signals will be triggered properly:
p1 = Poll.objects.get(pk=1)
values = {'name': 'poll2', 'description': 'poll2 description'}
for field, value in values.items():
    if hasattr(p1, field):
        setattr(p1, field, value)
p1.save()
You could achieve this by using update() on a filtered queryset:
e.g.:
data = {'name': 'poll2', 'description': 'poll2 description'}
p1 = Poll.objects.filter(pk=1)
p1.update(**data)
Notes:
be aware that .update does not trigger signals
You may want to put a check in there to make sure that only 1 result is returned before updating (just to be on the safe side). e.g.: if p1.count() == 1: ...
This may be a preferable option to using double-underscore attributes such as __dict__.
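To illustrate the count check from the notes above, a small sketch (purely optional) could be:

data = {'name': 'poll2', 'description': 'poll2 description'}
qs = Poll.objects.filter(pk=1)
if qs.count() == 1:
    # update() runs a single UPDATE statement; save() and signals are bypassed
    qs.update(**data)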