I'm making a Django website and, due to the load on the database, I'd like to filter the queryset after retrieving it.
files = Files.objects.get(folder_id=folder_id)
first_file = files.objects.filter(sequence = 0)
This example throws an error, and the same happens if I try a for loop. So is it possible to filter the retrieved queryset without hitting the database again?
When you call get(), that executes the query immediately and returns a single Files object: https://docs.djangoproject.com/en/4.1/ref/models/querysets/#get
You would want to change the first line to filter() (which won't actually execute the query yet):
https://docs.djangoproject.com/en/4.1/ref/models/querysets/#filter
and the second line to get():
files = Files.objects.filter(folder_id=folder_id)
first_file = files.get(sequence=0)
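If the goal really is to avoid touching the database again after the first query, a minimal sketch (assuming `sequence` is a plain field on `Files`, as in the question) is to evaluate the queryset once and then filter the resulting list in Python:
files = list(Files.objects.filter(folder_id=folder_id))  # one DB query, evaluated here
first_file = next((f for f in files if f.sequence == 0), None)  # pure Python, no further DB access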
I am trying to save a preview image generated by the "preview_generator" Python package.
But I am getting IntegrityError: duplicate key value violates unique constraint "users_material_pkey". I've tried many things but nothing seems to work.
If I call super at the end of save() I don't get the material_file URL or path.
Remove the line material = self from your code and use the approach below:
obj = super(Material, self).save(force_update=False, using=None, update_fields=None)
material = obj
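For context, a minimal sketch of what the overridden save() might look like: the field names material_file and preview are assumptions based on the question, and generate_preview() is a stand-in helper, not the preview_generator API.
from django.db import models

def generate_preview(source_name):
    # placeholder: call preview_generator here and return a path relative to MEDIA_ROOT
    return source_name + '.preview.jpg'

class Material(models.Model):
    material_file = models.FileField(upload_to='materials/')
    preview = models.FileField(upload_to='previews/', blank=True)

    def save(self, *args, **kwargs):
        # save the row first so material_file has a real name/path on disk
        super(Material, self).save(*args, **kwargs)
        if self.material_file and not self.preview:
            self.preview.name = generate_preview(self.material_file.name)
            # update only the preview column instead of re-inserting the row
            super(Material, self).save(update_fields=['preview'])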
I am creating an application using Google App Engine, in which I fetch data from a website and store it in my database (the Datastore). Now, whenever a user hits my application URL as "application_url\name=xyz&city=abc", I fetch the data from the DB and want to show it as JSON. Right now I am using a filter to fetch data based on the name and city, but I get [] as output and I don't know how to get the data out of it. My code looks like this:
class MainHandler(webapp2.RequestHandler):
    def get(self):
        commodityname = self.request.get('veg', "Not supplied")
        market = self.request.get('market', "No market found with this name")
        self.response.write(commodityname)
        self.response.write(market)
        query = commoditydata.all()
        logging.info(commodityname)
        query.filter('commodity = ', commodityname)
        result = query.fetch(limit=1)
        logging.info(result)
and the DB structure for the "commoditydata" model is
class commoditydata(db.Model):
    commodity = db.StringProperty()
    market = db.StringProperty()
    arrival = db.StringProperty()
    variety = db.StringProperty()
    minprice = db.StringProperty()
    maxprice = db.StringProperty()
    modalprice = db.StringProperty()
    reporteddate = db.DateTimeProperty(auto_now_add=True)
Can anyone tell me how to get data from the DB using name and market and convert it to JSON? Getting the data from the DB is the higher priority. Any suggestions will be of great use.
If you are starting with a new app, I would suggest using the NDB API rather than the old DB API. Your code would look almost the same, though; see the sketch below.
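For illustration only, a rough sketch of the same model and query with NDB (property list shortened, names copied from the question; the handler code around it stays the same):
from google.appengine.ext import ndb

class commoditydata(ndb.Model):
    commodity = ndb.StringProperty()
    market = ndb.StringProperty()
    reporteddate = ndb.DateTimeProperty(auto_now_add=True)

# NDB equivalent of query.filter('commodity = ', commodityname).fetch(limit=1)
result = commoditydata.query(commoditydata.commodity == commodityname).fetch(1)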
As far as I can tell from your code sample, the query should give you results as long as the HTTP query parameters from the request match entity objects in the datastore.
I can think of some possible reasons for the empty result:
you only think the output is empty because you call write() too early; App Engine doesn't support streaming responses, so you must write everything in one go, and you should do this after you have queried the datastore
the properties you are filtering on are not indexed (yet) in the datastore, at least not for the entities you were looking for
the filters are just not matching anything (check the log for the values you got from the request)
your query uses a namespace different from the one the data was stored in (but this is unlikely if you haven't explicitly set namespaces anywhere)
In the Cloud Developer Console you can query your datastore and even apply filters, so you can see the results without writing actual code.
Go to https://console.developers.google.com
On the left side, select Storage > Cloud Datastore > Query
Select the namespace (default should be fine)
Select the kind "commoditydata"
Add filters with example values you expect from the request and see how many results you get
Also look into Monitoring > Log, which, together with your logging.info() calls, is really helpful for understanding what is going on during a request.
The conversion to JSON is rather easy once you have your data. In your request handler, create an empty list of dictionaries. For each object you get from the query result, set the properties you want to send: define a key in the dict and set the value to the value you got from the datastore. At the end, dump the list as a JSON string.
import json
import logging
import webapp2

class MainHandler(webapp2.RequestHandler):
    def get(self):
        commodityname = self.request.get('veg')
        market = self.request.get('market')
        # webapp2 returns '' (not None) for missing parameters, so check for falsy values
        if not commodityname and not market:
            self.response.out.write("Please supply filters!")
            # the request is complete after this:
            return
        # everything ok, try the query:
        query = commoditydata.all()
        logging.info(commodityname)
        query.filter('commodity = ', commodityname)
        result = query.fetch(limit=1)
        logging.info(result)
        # now build the JSON payload for the response
        dicts = []
        for match in result:
            # DateTimeProperty values are not JSON serializable, so convert to a string
            dicts.append({'market': match.market,
                          'reporteddate': match.reporteddate.isoformat()})
        # set the appropriate header of the response:
        self.response.headers['Content-Type'] = 'application/json; charset=utf-8'
        # convert everything into a JSON string
        jsonString = json.dumps(dicts)
        self.response.out.write(jsonString)
I have a huge list that is generated dynamically from a CSV file in a Django view, but I need to use that list in the next view, so I thought I'd give Django sessions a try.
def import_products(request):
    if request.method == 'POST':
        if request.FILES:
            csv_file_data = ...........
            total_records = [row for row in csv_file_data]
            request.session['list_data'] = total_records
            # total_records is a `list of lists` with more than 150 rows
    # do some processing with the list and render the page
    return render_to_response('product/import_products.html')

def do_something_with_csv_data_from_above_view(request):
    data = request.session['list_data']
    # do some operations with the list and render the page
    return render_to_response('website/product/success.html')
So, as mentioned above, I need to use the huge total_records list in the do_something_with_csv_data_from_above_view view and then delete the session entry after copying it into another variable.
So how exactly do I implement/use the sessions concept? (I have read the Django docs but couldn't work out the exact approach.)
In my case:
Every time a user uploads the CSV file, I read the data and store it as a list in the session.
==> Is this the right way to do so? I also want to store the session variable using the database-backed session backend.
Because the list is huge, I need to make sure I delete it in the next view once I have copied it into another variable.
Am I missing anything? Can anyone please show the exact code for my scenario above?
You have two options:
Client-side RESTful - This is a RESTful solution but may require a bit more work. You return the data to the client in the first request, and after selecting, the client sends the selected rows back to the server for processing, CSV, etc.
Caching: In the first request you can cache your data on the server using Django's file-based or memcached cache backend. In the second request, use the cache key (which could be the user's session key + some timestamp + whatever else) to fetch the data and store it in the DB; a sketch follows below.
If it's a lot of data, option 2 may be better.
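As a rough sketch of option 2, assuming the rows parsed from the CSV are plain Python lists; the cache key scheme and the 15-minute timeout are arbitrary choices here, and the CSV parsing itself is elided as in the question:
from django.core.cache import cache

def import_products(request):
    if request.method == 'POST' and request.FILES:
        csv_file_data = ...  # parse the uploaded file as before
        total_records = [row for row in csv_file_data]
        # cache the parsed rows, keyed on this user's session (assumes a session exists)
        cache_key = 'csv_rows_%s' % request.session.session_key
        cache.set(cache_key, total_records, 60 * 15)
    return render_to_response('product/import_products.html')

def do_something_with_csv_data_from_above_view(request):
    cache_key = 'csv_rows_%s' % request.session.session_key
    data = cache.get(cache_key) or []
    # ... process the rows, e.g. save them to the database ...
    cache.delete(cache_key)  # free the cache entry once it has been consumed
    return render_to_response('website/product/success.html')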
What is wrong with these lines:
for i in message_list:
    message_stream = Messages.objects.filter(OrderID=i.OrderID).order_by('-MessageLocalID')
    if message_stream[0].MessageTypeName != 'MessageAck':
        message_stream[0].status = message_stream[0].MessageTypeName
        message_stream[0].save()
The status field doesn't get saved to the DB here. What am I misunderstanding?
The problem was in the DB itself: the status field that should receive the new values couldn't hold values longer than 2 characters. I used a Django DB migration to extend the status field, which solved the problem.
This command worked like a charm with no problems:
NpMessages.objects.filter(NPOrderID=i.NPOrderID, MessageTypeName='Request').update(status=message_stream[1].MessageTypeName)
and I got rid of the save statement, as it didn't work either way.
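As a side note, independent of the column length: each message_stream[0] on an unevaluated queryset runs a fresh LIMIT 1 query and returns a new instance, so the attribute assignment and the save() can act on different objects. A small sketch of pinning the row to a local variable first, keeping the field names from the question:
for i in message_list:
    message_stream = Messages.objects.filter(OrderID=i.OrderID).order_by('-MessageLocalID')
    first = message_stream[0]  # evaluate once and reuse the same instance
    if first.MessageTypeName != 'MessageAck':
        first.status = first.MessageTypeName
        first.save()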
For a mock web service I wrote a little Django app that serves as a web API, which my Android application queries. When I make requests to the API, I can also hand over an offset and a limit so that only the really necessary data is transmitted. Anyway, I ran into the problem that Django gives me different results for the same query to the API. It seems as if the results are returned round robin.
This is the Django code that will be run:
def getMetaForCategory(request, offset, limit):
    if request.method == "GET":
        result = {"meta_information": []}
        categoryIDs = request.GET.getlist("category_ids[]")
        categorySet = set(toInt(categoryIDs))
        categories = Category.objects.filter(id__in=categoryIDs)

        metaSet = set([])
        for category in categories:
            metaSet = metaSet | set(category.meta_information.all())
        metaList = list(metaSet)
        metaList.sort()

        for meta in metaList[int(offset):int(limit)]:
            relatedCategoryIDs = getIDs(meta.category_set.all())
            item = {
                "_id": meta.id,
                "name": meta.name,
                "type": meta.type,
                "categories": list(categorySet & set(relatedCategoryIDs))
            }
            result['meta_information'].append(item)

        return HttpResponse(content=simplejson.dumps(result), mimetype="application/json")
    else:
        return HttpResponse(status=403)
What happens is the following: if all MetaInformation objects were Foo, Bar, Baz, and Blib and I set the limit to 0:2, then I would get [Foo, Bar] with the first request, and with the exact same request the method would return [Baz, Blib] the second time I ran it.
Does anyone see what I am doing wrong here? Or is it the Django cache that somehow gets in my way?
I think the difficulty is that you are using a set to store your objects, and slicing that - and sets have no ordering (they are like dictionaries in that way). So, the results from your query are in fact indeterminate.
There are various implementations of ordered sets around - you could look into using one of them. However, I must say that I think you are doing a lot of unnecessary and expensive unique-ifying and sorting in Python, when most of this could be done directly by the database. For instance, you seem to be trying to get the unique list of Metas that are related to the categories you pass. Well, this could be done in a single ORM query:
meta_list = MetaInformation.objects.filter(category__id__in=categoryIDs)
and you could then drop the set, looping and sorting commands.
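For illustration, a slightly fuller sketch of how the view body might look with the de-duplication, ordering and slicing pushed to the database, reusing categorySet, offset and limit from the view above; ordering by 'name' is an assumption, and .distinct() guards against duplicate rows when a meta belongs to several of the requested categories:
meta_list = (MetaInformation.objects
             .filter(category__id__in=categoryIDs)
             .distinct()
             .order_by('name')[int(offset):int(limit)])

result = {"meta_information": []}
for meta in meta_list:
    related_ids = meta.category_set.values_list('id', flat=True)
    result['meta_information'].append({
        "_id": meta.id,
        "name": meta.name,
        "type": meta.type,
        "categories": list(categorySet & set(related_ids)),
    })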