How do I schedule notifications for this program? I am using channels to create notifications and "crontab" for scheduling, but it doesn't work.
from datetime import date, timedelta

def my_schedule_job():
    vehicle_objs = Vehicle.objects.all()
    for vehicle_obj in vehicle_objs:
        insurance_expiry = vehicle_obj.insurance_expiry
        insurance_expiry_date = insurance_expiry - timedelta(days=5)
        today = date.today()
        print('insurance_expiry_date', insurance_expiry_date)
        print('today', today)
        if insurance_expiry_date == today:
            notification_obj = Notification(user_id=vehicle_obj.user_id, notification="Your insurance for {} will expire on {}".format(vehicle_obj.vehicle, insurance_expiry), is_seen=False)
            notification_obj.save()
        elif insurance_expiry_date <= today:
            notification_obj = Notification(user_id=vehicle_obj.user_id, notification=vehicle_obj.vehicle + " insurance is going to expire on " + str(insurance_expiry), is_seen=False)
            notification_obj.save()
You'll want to create a custom manage.py command to solve your problem. To do so, create an <appname>/management/commands directory structure and put your code in a file in that directory, naming the file after the command you want to run, for example emails.py.
After that, Django expects certain functions to be present in your custom management command; you'll want to place your code in those functions.
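As a minimal sketch, <appname>/management/commands/emails.py would contain a Command class whose handle() runs your job (the app name and import path below are placeholders for your project):

# <appname>/management/commands/emails.py
# Note: management/ and commands/ each need an empty __init__.py so Django discovers the command.
from django.core.management.base import BaseCommand
from myapp.jobs import my_schedule_job  # placeholder path to the question's function

class Command(BaseCommand):
    help = 'Create insurance-expiry notifications'

    def handle(self, *args, **options):
        # Run the notification job from the question
        my_schedule_job()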
Review this URL and let me know if you run into issues.
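Since you mentioned crontab: once the command exists you can run it manually with python manage.py emails, and a crontab entry along these lines (the paths are placeholders) will schedule it:

# run the notification job every day at 07:00
0 7 * * * /path/to/venv/bin/python /path/to/project/manage.py emails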
Good luck.
We have an application which can book appointments in Exchange via EWS (Exchange Web Services); we use the following code to book:
CheckAppointmentsIsBookedInSameTime();
var appointment = new Microsoft.Exchange.WebServices.Data.Appointment(ExchangeService);
...
appointment.Save(SendInvitationsMode.SendToAllAndSaveCopy);
CheckAppointmentsIsBookedInSameTime retrieves all booked appointments and checks whether an appointment was already booked in the same time range; it throws an exception if one was.
Currently this has a concurrency problem: two users can book the same time range if they perform the operation at the same time, and the result is that one booking is accepted and the other is declined.
My question is: while one booking is in progress (started but not finished), how do we detect that in-progress appointment when another user tries to book the same time range?
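For reference, the check the question describes boils down to a standard interval-overlap test; here is a small language-agnostic sketch in Python (names are illustrative, not from the application above):

def overlaps(start_a, end_a, start_b, end_b):
    # Two intervals overlap iff each one starts before the other ends
    return start_a < end_b and start_b < end_a

def check_appointments_is_booked_in_same_time(new_start, new_end, booked):
    # 'booked' is an iterable of (start, end) pairs fetched from the calendar
    if any(overlaps(new_start, new_end, s, e) for s, e in booked):
        raise ValueError('An appointment is already booked in this time range')

The race the question asks about exists because this check and the subsequent Save are two separate steps with nothing serializing them.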
I've followed the guide in the queryset documentation (https://docs.djangoproject.com/en/1.10/ref/models/querysets/#update-or-create), but I think I'm getting something wrong.
My script checks an inbox for maintenance emails from our ISP, sends us a calendar invite if you are subscribed, and adds the maintenance to the database.
Sometimes we get updates on already-planned maintenance, in which case I need to update the database record with the new date and time, so I'm trying to use update_or_create for the queryset, using the reference number from the email as the key to update or create the record:
# Maintenance
if sender.lower() == 'maintenance@isp.com':
    print 'Found maintenance in mail: {0}'.format(subject)
    content = Message.getBody(mail)
    postcodes = re.findall(r"[A-Z]{1,2}[0-9R][0-9A-Z]? [0-9][A-Z]{2}", content)
    if postcodes:
        print 'Found Postcodes'
    else:
        error_body = """
        Email titled: {0}
        With content: {1}
        Failed processing, could not find any postcodes in the email
        """.format(subject, content)
        SendMail(authentication, site_admins, 'Unprocessed Email', error_body)
        Message.markAsRead(mail)
        continue
    times = re.findall("\d{2}/\d{2}/\d{4} \d{2}:\d{2}", content)
    if times:
        print 'Found event Times'
        e_start_time = datetime.strftime(datetime.strptime(times[0], "%d/%m/%Y %H:%M"), "%Y-%m-%dT%H:%M:%SZ")
        e_end_time = datetime.strftime(datetime.strptime(times[1], "%d/%m/%Y %H:%M"), "%Y-%m-%dT%H:%M:%SZ")
    subscribers = []
    clauses = (Q(site_data__address__icontains=p) for p in postcodes)
    query = reduce(operator.or_, clauses)
    sites = Circuits.objects.filter(query).filter(circuit_type='MPLS', provider='KCOM')
    subject_text = "Maintenance: "
    m_ref = re.search('\[(.*?)\]', subject).group(1)
    if not len(sites):
        # try using the first part of the postcode
        h_pcode = postcodes[0].split(' ')
        sites = Circuits.objects.filter(site_data__postcode__startswith=h_pcode[0]).filter(circuit_type='MPLS', provider='KCOM')
        if not len(sites):
            # still can't find a site, send error
            error_body = """
            Email titled: {0}
            With content: {1}
            I have found a postcode, but could not find any matching sites to assign this maintenance to, therefore no meeting has been sent
            """.format(subject, content)
            SendMail(authentication, site_admins, 'Unprocessed Email', error_body)
            Message.markAsRead(mail)
            continue
    else:
        # have site(s): send an invite and create a record
        for s in sites:
            # create record in circuit maintenance
            maint = CircuitMaintenance(
                circuit=s,
                ref=m_ref,
                start_time=e_start_time,
                end_time=e_end_time,
                notes=content
            )
            maint, CircuitMaintenance.objects.update_or_create(ref=m_ref)
            # create subscribers for maintenance
m_ref is the unique field that should match for the update, but every time I run this in tests I get
sites_circuitmaintenance.start_time may not be NULL
but I've set it?
If you want to update certain fields provided that a record with certain values exists, you need to explicitly provide the defaults as well as the lookup field names.
Your code should look like this:
CircuitMaintenance.objects.update_or_create(
    defaults={'circuit': s, 'start_time': e_start_time, 'end_time': e_end_time, 'notes': content},
    ref=m_ref)
The particular error you are seeing occurs because update_or_create is creating an object, since one with ref=m_ref does not exist, but you are not passing in values for all the NOT NULL fields. The code above will fix that.
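Note also that update_or_create returns an (object, created) tuple, so the assignment in the loop above would normally be written as:

maint, created = CircuitMaintenance.objects.update_or_create(
    ref=m_ref,
    defaults={'circuit': s, 'start_time': e_start_time,
              'end_time': e_end_time, 'notes': content},
)

This also replaces building the unsaved CircuitMaintenance instance by hand, since update_or_create handles both the insert and the update path.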
I've got a model that looks something like:
class Deal(models.Model):
    # related_name is needed because both ForeignKeys point at the same model
    sender = models.ForeignKey(settings.AUTH_USER_MODEL, related_name='sent_deals')
    created_on = models.DateField(auto_now_add=True)
    recipient = models.ForeignKey(settings.AUTH_USER_MODEL, related_name='received_deals')
    ...
I'd like to get the daily count of deals sent or received by a particular user over the three-month window ending today. To further complicate things, I want that dataset serialized as JSON so that I have output like this:
{
"timestamp_1": count_1,
"timestamp_2": count_2,
...
"timestamp_n": count_n
}
So far I have arrived at a query that'll look something like this:
# Assume we have a user object called "user"
Deal.objects.filter(Q(sender=user) | Q(recipient=user)).extra({'date_created' : "date(created_on)"}).values('date_created').annotate(created_count=Count('id'))
Still combing through the documentation, but does anyone have a better way to tackle this?
I don't think Django can handle this without raw SQL. There's a great Django third-party package, django-qsstats-magic, which is designed to make repetitive tasks such as generating aggregate statistics of querysets over time easier. It supports PostgreSQL, MySQL, and SQLite.
The code below solves your problem.
import datetime
import qsstats
from django.db.models import Count, Q

qs = Deal.objects.filter(Q(sender=user) | Q(recipient=user))
qss = qsstats.QuerySetStats(qs, date_field='created_on', aggregate=Count('id'))
# datetime.timedelta has no 'months' argument, so approximate three months as 90 days
three_months_ago = datetime.datetime.now() - datetime.timedelta(days=90)
qss.time_series(start=three_months_ago, interval='days')
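time_series returns a list of (datetime, count) pairs, so producing the JSON object from the question is one more short step; a sketch, reusing qss and three_months_ago from above:

import json

series = qss.time_series(start=three_months_ago, interval='days')
# key the JSON object by ISO timestamp, matching the desired output shape
payload = json.dumps(dict((dt.isoformat(), count) for dt, count in series))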
Do any of the AWS APIs/Services provide access to the product reviews for items sold by Amazon? I'm interested in looking up reviews by (ASIN, user_id) tuple. I can see that the Product Advertising API returns a URL to a page (for embedding in an IFRAME) containing the URLs, but I am interested in a machine-readable format of the review data, if possible.
Update 2:
Please see @jpillora's comment; it's probably the most relevant regarding Update 1.
I just tried out the Product Advertising API (as of 2014-09-17); it seems that this API only returns a URL pointing to an iframe containing just the reviews. I guess you'd have to screen scrape, though I imagine that would break Amazon's TOS.
Update 1:
Maybe. I wrote the original answer below earlier. I don't have time to look into this right now because I'm no longer on a project concerned with Amazon reviews, but their webpage at Product Advertising API states "The Product Advertising API helps you advertise Amazon products using product search and look up capability, product information and features such as Customer Reviews..." as of 2011-12-08. So I hope someone looks into it and posts back here; feel free to edit this answer.
Original:
Nope.
Here is an interesting forum discussion about the fact, including theories as to why: http://forums.digitalpoint.com/showthread.php?t=1932326
If I'm wrong, please post what you find. I'm interested in getting the review content, as well as submitting reviews to Amazon, if possible.
You might want to check this link: http://reviewazon.com/. I just stumbled across it and haven't looked into it, but I'm surprised I don't see any mention on their site of the update concerning the removal of Reviews from the Amazon Product Advertising API posted at: https://affiliate-program.amazon.com/gp/advertising/api/detail/main.html
Here's my quick take on it: you can easily retrieve the reviews yourself with a bit more work:
import urllib2

countries = ['com', 'co.uk', 'ca', 'de']
books = [
    '''http://www.amazon.%s/Glass-House-Climate-Millennium-ebook/dp/B005U3U69C''',
    '''http://www.amazon.%s/The-Japanese-Observer-ebook/dp/B0078FMYD6''',
    '''http://www.amazon.%s/Falling-Through-Water-ebook/dp/B009VJ1622''',
]

for book in books:
    print '-' * 40
    print book.split('%s/')[1]
    for country in countries:
        asin = book.split('/')[-1]
        title = book.split('/')[3]
        url = '''http://www.amazon.%s/product-reviews/%s''' % (country, asin)
        try:
            f = urllib2.urlopen(url)
            page = f.read().lower()
        except:
            # treat unreachable pages as having no reviews
            page = ""
        print '%s=%s' % (country, page.count('member-review'))
print '-' * 40
According to the Amazon Product Advertising API License Agreement (https://affiliate-program.amazon.com/gp/advertising/api/detail/agreement.html), and specifically its point 4.b.iii:
You will use Product Advertising Content only ... to send end users to and drive sales on the Amazon Site.
which means it is prohibited to show Amazon product reviews obtained through their API in order to sell products on your own site. You are only allowed to redirect your site visitors to Amazon and collect the affiliate commissions.
I would use something like the answer of @mfs above. Unfortunately, that answer only works for up to 10 reviews, since 10 is the maximum that can be displayed on one page.
You may consider the following code:
import re
import requests

nreviews_re = {'com':   re.compile('\d[\d,]+(?= customer review)'),
               'co.uk': re.compile('\d[\d,]+(?= customer review)'),
               'de':    re.compile('\d[\d\.]+(?= Kundenrezens\w\w)')}
no_reviews_re = {'com':   re.compile('no customer reviews'),
                 'co.uk': re.compile('no customer reviews'),
                 'de':    re.compile('Noch keine Kundenrezensionen')}

def get_number_of_reviews(asin, country='com'):
    url = 'http://www.amazon.{country}/product-reviews/{asin}'.format(country=country, asin=asin)
    html = requests.get(url).text
    try:
        # strip everything non-numeric from the matched count (e.g. thousands separators)
        return int(re.compile('\D').sub('', nreviews_re[country].search(html).group(0)))
    except:
        if no_reviews_re[country].search(html):
            return 0
        else:
            return None  # to distinguish from 0, and handle more cases if necessary
Running this with 1433524767 (which has significantly different numbers of reviews across the three countries of interest) I get:
>> print get_number_of_reviews('1433524767', 'com')
3185
>> print get_number_of_reviews('1433524767', 'co.uk')
378
>> print get_number_of_reviews('1433524767', 'de')
16
Hope it helps.
As others have said above, Amazon has discontinued providing reviews through its API. However, I found this nice tutorial that does the same thing with Python. Here is the code it gives, which works for me! It uses Python 2.7.
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Written as part of https://www.scrapehero.com/how-to-scrape-amazon-product-reviews-using-python/
from lxml import html
import json
import re
import requests
from dateutil import parser as dateparser
from time import sleep

def ParseReviews(asin):
    # This script has only been tested with Amazon.com
    amazon_url = 'http://www.amazon.com/dp/' + asin
    # Add some recent user agent to prevent amazon from blocking the request
    # Find some chrome user agent strings here https://udger.com/resources/ua-list/browser-detail?browser=Chrome
    headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.90 Safari/537.36'}
    page = requests.get(amazon_url, headers=headers).text

    parser = html.fromstring(page)
    XPATH_AGGREGATE = '//span[@id="acrCustomerReviewText"]'
    XPATH_REVIEW_SECTION = '//div[@id="revMHRL"]/div'
    XPATH_AGGREGATE_RATING = '//table[@id="histogramTable"]//tr'
    XPATH_PRODUCT_NAME = '//h1//span[@id="productTitle"]//text()'
    XPATH_PRODUCT_PRICE = '//span[@id="priceblock_ourprice"]/text()'

    raw_product_price = parser.xpath(XPATH_PRODUCT_PRICE)
    product_price = ''.join(raw_product_price).replace(',', '')
    raw_product_name = parser.xpath(XPATH_PRODUCT_NAME)
    product_name = ''.join(raw_product_name).strip()
    total_ratings = parser.xpath(XPATH_AGGREGATE_RATING)
    reviews = parser.xpath(XPATH_REVIEW_SECTION)

    ratings_dict = {}
    reviews_list = []

    # grabbing the rating section in product page
    for ratings in total_ratings:
        extracted_rating = ratings.xpath('./td//a//text()')
        if extracted_rating:
            rating_key = extracted_rating[0]
            raw_raing_value = extracted_rating[1]
            rating_value = raw_raing_value
            if rating_key:
                ratings_dict.update({rating_key: rating_value})

    # Parsing individual reviews
    for review in reviews:
        XPATH_RATING = './div//div//i//text()'
        XPATH_REVIEW_HEADER = './div//div//span[contains(@class,"text-bold")]//text()'
        XPATH_REVIEW_POSTED_DATE = './/a[contains(@href,"/profile/")]/parent::span/following-sibling::span/text()'
        XPATH_REVIEW_TEXT_1 = './/div//span[@class="MHRHead"]//text()'
        XPATH_REVIEW_TEXT_2 = './/div//span[@data-action="columnbalancing-showfullreview"]/@data-columnbalancing-showfullreview'
        XPATH_REVIEW_COMMENTS = './/a[contains(@class,"commentStripe")]/text()'
        XPATH_AUTHOR = './/a[contains(@href,"/profile/")]/parent::span//text()'
        XPATH_REVIEW_TEXT_3 = './/div[contains(@id,"dpReviews")]/div/text()'

        raw_review_author = review.xpath(XPATH_AUTHOR)
        raw_review_rating = review.xpath(XPATH_RATING)
        raw_review_header = review.xpath(XPATH_REVIEW_HEADER)
        raw_review_posted_date = review.xpath(XPATH_REVIEW_POSTED_DATE)
        raw_review_text1 = review.xpath(XPATH_REVIEW_TEXT_1)
        raw_review_text2 = review.xpath(XPATH_REVIEW_TEXT_2)
        raw_review_text3 = review.xpath(XPATH_REVIEW_TEXT_3)

        author = ' '.join(' '.join(raw_review_author).split()).strip('By')
        # cleaning data
        review_rating = ''.join(raw_review_rating).replace('out of 5 stars', '')
        review_header = ' '.join(' '.join(raw_review_header).split())
        review_posted_date = dateparser.parse(''.join(raw_review_posted_date)).strftime('%d %b %Y')
        review_text = ' '.join(' '.join(raw_review_text1).split())

        # grabbing hidden comments if present
        if raw_review_text2:
            json_loaded_review_data = json.loads(raw_review_text2[0])
            json_loaded_review_data_text = json_loaded_review_data['rest']
            cleaned_json_loaded_review_data_text = re.sub('<.*?>', '', json_loaded_review_data_text)
            full_review_text = review_text + cleaned_json_loaded_review_data_text
        else:
            full_review_text = review_text
        if not raw_review_text1:
            full_review_text = ' '.join(' '.join(raw_review_text3).split())

        raw_review_comments = review.xpath(XPATH_REVIEW_COMMENTS)
        review_comments = ''.join(raw_review_comments)
        review_comments = re.sub('[A-Za-z]', '', review_comments).strip()
        review_dict = {
            'review_comment_count': review_comments,
            'review_text': full_review_text,
            'review_posted_date': review_posted_date,
            'review_header': review_header,
            'review_rating': review_rating,
            'review_author': author
        }
        reviews_list.append(review_dict)

    data = {
        'ratings': ratings_dict,
        'reviews': reviews_list,
        'url': amazon_url,
        'price': product_price,
        'name': product_name
    }
    return data

def ReadAsin():
    # Add your own ASINs here
    AsinList = ['B01ETPUQ6E', 'B017HW9DEW']
    extracted_data = []
    for asin in AsinList:
        print "Downloading and processing page http://www.amazon.com/dp/" + asin
        extracted_data.append(ParseReviews(asin))
        sleep(5)
    f = open('data.json', 'w')
    json.dump(extracted_data, f, indent=4)

if __name__ == '__main__':
    ReadAsin()
Here is the link to his website: reviews scraping with Python 2.7.
Unfortunately, you can only get an iframe URL with the reviews; the content itself is not accessible.
Source: http://docs.amazonwebservices.com/AWSECommerceService/2011-08-01/DG/CHAP_MotivatingCustomerstoBuy.html#GettingCustomerReviews
Check out RapidAPI: https://rapidapi.com/blog/amazon-product-reviews-api/
Using this API, you can get Amazon product reviews.
You can use the Amazon Product Advertising API. It has a Response Group 'Reviews' which you can use with the 'ItemLookup' operation. You need to know the ASIN, i.e. the unique item ID of the product.
Once you set all the parameters and execute the signed URL, you will receive an XML response which contains a link to the customer reviews under the "IFrameURL" tag.
Fetch that URL, then use pattern searching on the HTML it returns to extract the reviews. Each review in the HTML has a unique review ID, and under it you can get all the data for that particular review.
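As a rough sketch of that flow (bottlenose is a third-party helper that signs Product Advertising API requests and is not part of the answer above; the credentials and ASIN are placeholders):

import re
import urllib2
import bottlenose  # signs the Product Advertising API request for you

amazon = bottlenose.Amazon('ACCESS_KEY', 'SECRET_KEY', 'ASSOCIATE_TAG')
# ItemLookup with the 'Reviews' response group returns XML containing an IFrameURL tag
xml = amazon.ItemLookup(ItemId='B00EXAMPLE', ResponseGroup='Reviews')
match = re.search(r'<IFrameURL>(.*?)</IFrameURL>', xml)
if match:
    iframe_url = match.group(1).replace('&amp;', '&')  # the URL is XML-escaped
    iframe_html = urllib2.urlopen(iframe_url).read()
    # pattern-search iframe_html here for the per-review blocks and their review IDs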