Django ORM call to obtain multiple FK values?

models.py (derived from existing db)
class IceCreamComponent(models.Model):
    name = models.CharField(max_length=100)  # can be 'flavor', 'nut', etc.

class IceCream(models.Model):
    name = models.CharField(max_length=100)
    component = models.ForeignKey(IceCreamComponent)
    value = models.CharField(max_length=100)  # will correspond to the component
The context behind this database is that 'IceCream' reports will come in from someone whose only purpose is to report back on a certain component (e.g. my 'extras' reporter will report the name of the ice cream and the extra it contained). It is assumed that all needed reports are in the db when queried, so that something like:
IceCreams = IceCream.objects.values('name', 'component__name', 'value')
will return something akin to:
[
    {'name': 'Rocky road', 'component__name': 'ice cream flavor', 'value': 'chocolate'},
    {'name': 'Rocky road', 'component__name': 'nut', 'value': 'almond'},
    {'name': 'Rocky road', 'component__name': 'extra', 'value': 'marshmallow'},
    {'name': 'Vanilla Bean', 'component__name': 'ice cream flavor', 'value': 'vanilla'},
    {'name': 'Vanilla Bean', 'component__name': 'extra', 'value': 'ground vanilla bean'},
]
However, as you can imagine, something like:
[
    {'name': 'Rocky Road', 'ice cream flavor': 'chocolate', 'nut': 'almond', 'extra': 'marshmallow'},
    {'name': 'Vanilla Bean', 'ice cream flavor': 'vanilla', 'extra': 'ground vanilla bean'}
]
is much more usable (especially considering I'd like to use this in a ListView).
Is there a better way to query the data or will I need to loop through the ValuesQuerySet to achieve the desired output?

Can't you reconstruct the list from the original result?
results = {}
for row in vqueryset:
    # Group rows that share the same name into one dict
    converted_row = results.setdefault(row['name'], {'name': row['name']})
    converted_row[row['component__name']] = row['value']
results = list(results.values())
Of course you would want to paginate the original result before evaluating it (turning it into a list).
Oh, you asked if there's a better way. I'm doing it this way because I couldn't find a better way anyway.

Here is the solution I came up with.
processing = None
output = []
base_dict = {}
for item in IceCreams:
    # Detect a change of name in the ordered list
    if item['name'] != processing:
        processing = item['name']
        # If base_dict is not empty, add it to our output (skipped on the first name)
        # TODO see if there's a better way to do first one
        if base_dict:
            output.append(base_dict)
        base_dict = {}
        base_dict['name'] = item['name']
    base_dict[item['component__name']] = item['value']
# Append the final group once the loop is done
if base_dict:
    output.append(base_dict)
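For what it's worth, grouping with itertools.groupby sidesteps the first/last bookkeeping in the loop above; a minimal sketch, assuming the values() queryset from the question ordered by name:
from itertools import groupby
from operator import itemgetter

# Assumes the queryset is ordered by name, e.g.
# IceCreams = IceCream.objects.values('name', 'component__name', 'value').order_by('name')
output = []
for name, items in groupby(IceCreams, key=itemgetter('name')):
    row = {'name': name}
    for item in items:
        row[item['component__name']] = item['value']
    output.append(row)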

Related

How to extract multiple rows of data relative to single row in scrapy

I am trying to scrape the webpage given at this link:
http://new-york.eat24hours.com/picasso-pizza/19053
Here I am trying to get all the possible details, like address, phone, etc.
So far I have extracted the name, phone, address, reviews, and rating.
But I also want to extract the full menu of the restaurant (the name of each item with its price).
So far I have no idea how to fit this data into the CSV output.
The rest of the data for a single URL is a single record, but the number of menu items will always vary.
Here below is my code so far:
import scrapy
from urls import start_urls

class eat24Spider(scrapy.Spider):
    AUTOTHROTTLE_ENABLED = True
    name = 'eat24'

    def start_requests(self):
        for x in start_urls:
            yield scrapy.Request(x, self.parse)

    def parse(self, response):
        brickset = response
        NAME_SELECTOR = 'normalize-space(.//h1[@id="restaurant_name"]/a/text())'
        ADDRESS_SELECTION = 'normalize-space(.//span[@itemprop="streetAddress"]/text())'
        LOCALITY = 'normalize-space(.//span[@itemprop="addressLocality"]/text())'
        REGION = 'normalize-space(.//span[@itemprop="addressRegion"]/text())'
        ZIP = 'normalize-space(.//span[@itemprop="postalCode"]/text())'
        PHONE_SELECTOR = 'normalize-space(.//span[@itemprop="telephone"]/text())'
        RATING = './/meta[@itemprop="ratingValue"]/@content'
        NO_OF_REVIEWS = './/meta[@itemprop="reviewCount"]/@content'
        OPENING_HOURS = './/div[@class="hours_info"]//nobr/text()'
        EMAIL_SELECTOR = './/div[@class="company-info__block"]/div[@class="business-buttons"]/a[span]/@href[substring-after(.,"mailto:")]'
        yield {
            'name': brickset.xpath(NAME_SELECTOR).extract_first().encode('utf8'),
            'pagelink': response.url,
            'address': str(brickset.xpath(ADDRESS_SELECTION).extract_first().encode('utf8') + ', ' +
                           brickset.xpath(LOCALITY).extract_first().encode('utf8') + ', ' +
                           brickset.xpath(REGION).extract_first().encode('utf8') + ', ' +
                           brickset.xpath(ZIP).extract_first().encode('utf8')),
            'phone': str(brickset.xpath(PHONE_SELECTOR).extract_first()),
            'reviews': str(brickset.xpath(NO_OF_REVIEWS).extract_first()),
            'rating': str(brickset.xpath(RATING).extract_first()),
            'opening_hours': str(brickset.xpath(OPENING_HOURS).extract_first()),
        }
I am sorry if I am making this confusing but any kind of help will be appreciated.
Thank you in advance!!
If you want to extract the full restaurant menu, first of all you need to locate the element that contains both the name and the price:
menu_items = response.xpath('//tr[@itemscope]')
After that, you can simply make a for loop and iterate over the menu items, appending each name and price to a list:
menu = []
for item in menu_items:
    menu.append({
        'name': item.xpath('.//a[@class="cpa"]/text()').extract_first(),
        'price': item.xpath('.//span[@itemprop="price"]/text()').extract_first(),
    })
Finally, you can add a new 'menu' key to your dict:
yield {'menu': menu}
Also, I suggest you use scrapy Items for storing scraped data:
https://doc.scrapy.org/en/latest/topics/items.html
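A minimal sketch of what such an Item could look like for this spider; the class name and fields are just illustrative, mirroring the keys yielded above:
import scrapy

class RestaurantItem(scrapy.Item):
    # Field names mirror the dict keys yielded by the spider above
    name = scrapy.Field()
    pagelink = scrapy.Field()
    address = scrapy.Field()
    phone = scrapy.Field()
    reviews = scrapy.Field()
    rating = scrapy.Field()
    opening_hours = scrapy.Field()
    menu = scrapy.Field()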
For outputting the data to a CSV file, use scrapy Feed exports; type in the console:
scrapy crawl yourspidername -o restaurants.csv

Create multiple (two) sources with chartit in Django

I'm using chartit in a Django project.
I have a model (ReadingSensor) with the following attributes:
id_sensor
date_time
value
I want to create a line chart with several lines for different id_sensors, for example:
ReadingSensor.objects.filter(id_sensor=2)
ReadingSensor.objects.filter(id_sensor=1)
For a single model we have:
ds = DataPool(
    series=[
        {'options': {
            'source': MonthlyWeatherByCity.objects.all()},
         'terms': [
            'month',
            'houston_temp',
            'boston_temp']}
    ])

cht = Chart(
    datasource=ds,
    series_options=[
        {'options': {
            'type': 'line',
            'stacking': False},
         'terms': {
            'month': [
                'boston_temp',
                'houston_temp']
         }}],
    chart_options={
        'title': {
            'text': 'Weather Data of Boston and Houston'},
        'xAxis': {
            'title': {
                'text': 'Month number'}}})
Documentation: http://chartit.shutupandship.com/docs/
I consulted the documentation but did not find an example that helped me.
Can someone help me?
Actually, the example is on the site you provided; please check this link: http://chartit.shutupandship.com/demo/chart/multi-table-same-x/
The idea is just to add more items with options and terms to the series list when constructing the DataPool object, and to adjust the terms in series_options accordingly when constructing the Chart object.
You may then find it helpful to alias the field names for the case where the two data sources have fields with the same name; the detailed documentation on this is here: http://chartit.shutupandship.com/docs/apireference.html#datapool
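A minimal sketch of that approach for ReadingSensor, assuming chartit's term-aliasing syntax from the DataPool API reference above; the aliases such as 'sensor1_value' are only illustrative:
from chartit import DataPool, Chart

ds = DataPool(
    series=[
        {'options': {
            'source': ReadingSensor.objects.filter(id_sensor=1)},
         'terms': [
            'date_time',
            {'sensor1_value': 'value'}]},      # alias so the two 'value' fields don't clash
        {'options': {
            'source': ReadingSensor.objects.filter(id_sensor=2)},
         'terms': [
            {'sensor2_date_time': 'date_time'},
            {'sensor2_value': 'value'}]},
    ])

cht = Chart(
    datasource=ds,
    series_options=[
        {'options': {
            'type': 'line',
            'stacking': False},
         'terms': {
            'date_time': ['sensor1_value'],
            'sensor2_date_time': ['sensor2_value']}}],
    chart_options={
        'title': {'text': 'Sensor readings'},
        'xAxis': {'title': {'text': 'Date/time'}}})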

Plone: The number of fields in the new order differs from the number of fields in the schema

I have a batch.py file which has many fields in it. So I am trying to add a new ReferenceField in batch.py as per my requirement.
ExtReferenceField('Asfield',
    required=0,
    multiValued=1,
    allowed_types=('AnalysisService',),
    referenceClass=HoldingReference,
    relationship='BatchAsfield',
    widget=SearchAnalysisWidget(
        label=_("AS Search"),
        description="",
        render_own_label=False,
        visible={'edit': 'visible', 'view': 'visible'},
        #visible=True,
        base_query={'inactive_state': 'active'},
        catalog_name='portal_catalog',
        showOn=True,
        colModel=[
            {'columnName': 'AsCode', 'width': '20', 'label': _('Code')},
            {'columnName': 'AsName', 'width': '80', 'label': _('Name')},
            {'columnName': 'AsDate', 'width': '80', 'label': _('Date')},
            {'columnName': 'AsTat', 'width': '80', 'label': _('TAT')},
            {'columnName': 'AsLocation', 'width': '80', 'label': _('Location')},
        ],
    ),
),
It gives me an error like:
ValueError: The number of fields in the new order differs from the number of fields in the schema.
I found the solution for this error. The error occurs because I had not added my Asfield to the getOrder method of the batch.py file.
Our widgets need a particular order to show in the browser, so by using the getOrder method we can maintain the order of all widgets (fields) in the browser. This getOrder is defined in the same batch.py file.
def getOrder(self, schematas):
    schematas['default'] = ['id',
                            'title',
                            'description',
                            'BatchID',
                            'ClientPatientID',
                            'Patient',
                            'Client',
                            'Doctor',
                            'Asfield',
                            'ClientBatchID',
                            'ReceiptNo',
                            'AmountPaid',
                            'BatchDate',
                            'OnsetDate',
                            'PatientAgeAtCaseOnsetDate',
                            ]
    return schematas
I just added Asfield (the ReferenceField) after Doctor (in the browser this reference widget will show just after Doctor). You can add your widget wherever you want it.
That is the simple solution for this ValueError.

Making a separate array from existing JSON in Django

I have this array of JSON:
[{'pk': 4L, 'model': u'logic.member', 'fields': {'profile': 3L, 'name': u'1', 'title': u'Mr', 'dob': datetime.date(1983, 1, 1), 'lastname': u'jkjk', 'redressno': u'jdsfsfkj', 'gender': u'm'}}, {'pk': 5L, 'model': u'logic.member', 'fields': {'profile': 3L, 'name': u'2', 'title': u'Mr', 'dob': datetime.date(1983, 1, 1), 'lastname': u'jkjk', 'redressno': u'jdsfsfkj', 'gender': u'm'}}]
I want to make a separate array containing only the fields property of each element.
What I tried is:
memarr = []
for index, a in data1:
    print index
    print a
    memarr[index] = a[index].fields
And it is giving an error of:
too many values to unpack
Please correct.
First of all, data1 is a list, so you can't unpack it into 2 variables.
If you want the index, you have to use something like enumerate.
Second, you can't assign to a list via indexing if the index doesn't exist yet. You have to append, or use another valid list insertion method.
Third, a[index].fields doesn't really make sense: there is no key in a that would be associated with an integer index, and fields is a key, not an attribute.
You're probably looking for something like this:
memarr = []
for index, a in enumerate(data1):
    memarr.append(a['fields'])
So many things wrong with that snippet...
Anyway:
memarr = [a['fields'] for a in data1]

How to save a Django object using a dictionary?

Is there a way that I can save the model by using a dictionary?
For example, this works fine:
p1 = Poll.objects.get(pk=1)
p1.name = 'poll2'
p1.description = 'poll2 description'
p1.save()
But what if I have a dictionary like {'name': 'poll2', 'description': 'poll2 description'}?
Is there a simple way to save such a dictionary directly to Poll?
drmegahertz's solution works if you're creating a new object from scratch. In your example, though, you seem to want to update an existing object. You do this by accessing the __dict__ attribute that every Python object has:
p1.__dict__.update(mydatadict)
p1.save()
You could unwrap the dictionary, making its keys and values act like named arguments:
data_dict = {'name': 'foo', 'description': 'bar'}
# This becomes Poll(name='foo', description='bar')
p = Poll(**data_dict)
...
p.save()
I found this to be the only variant that worked cleanly for me. Also, in this case all signals will be triggered properly:
p1 = Poll.objects.get(pk=1)
values = { 'name': 'poll2', 'description': 'poll2 description' }
for field, value in values.items():
    if hasattr(p1, field):
        setattr(p1, field, value)
p1.save()
You could achieve this by using update on a queryset, e.g.:
data = {'name': 'poll2', 'description': 'poll2 description'}
p1 = Poll.objects.filter(pk=1)
p1.update(**data)
Notes:
be aware that .update() does not trigger signals
You may want to put a check in there to make sure that only one result is returned before updating (just to be on the safe side), e.g.: if p1.count() == 1: ...
This may be preferable to using double-underscore attributes such as __dict__.
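Putting those notes together, a minimal sketch of the guarded queryset update (the count() check is just the safety check suggested above):
data = {'name': 'poll2', 'description': 'poll2 description'}

qs = Poll.objects.filter(pk=1)
# .update() issues a single SQL UPDATE; it does not call save() or send signals
if qs.count() == 1:
    qs.update(**data)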