Rails 4 - Couldn't find Subscription with 'id'=

Question: Could someone kindly advise me how to adjust my method to match the requested URL, so that I avoid the error Couldn't find Subscription with 'id'= regardless of which payment option the user selects? I will explain more clearly below.
I have 2 URLs:
[payment option 1] When a user chooses the option to pay to attend an event, they are directed to the payment new page and this URL appears: http://localhost:3000/payments/new?event_id=2&user=4
[payment option 2] When a user chooses the subscription option, they are directed to the payment new page and this URL appears: http://localhost:3000/payments/new?subcription_id=1&user=4 (note that the app's own links spell the parameter subcription_id)
In my payments_controller.rb file I have the below method for the new action:
def new
  @subscription = Subscription.find(params[:subcription_id])
  @subcription_id = params[:subcription_id]
  @event = Event.find(params[:event_id])
  @event_id = params[:event_id]
  @payment = Payment.new
end
When a user chooses [payment option 1] and is directed to the payment new page with the URL http://localhost:3000/payments/new?event_id=2&user=4, I get the below error:
Couldn't find Subscription with 'id'=
Question: How do I adjust my method, depending on which payment option the user selects and the displayed URL, in order to avoid this error?
I tried the below, but no luck:
def new
  if @subscription.present?
    @subscription = Subscription.find(params[:subcription_id])
    @subcription_id = params[:subcription_id]
  elsif @event.present?
    @event = Event.find(params[:event_id])
    @event_id = params[:event_id]
  end
  @payment = Payment.new
end
I also tried this, but no luck:
def new
  if is_path?("/payments/new?subcription_id=1&user=4")
    @subscription = Subscription.find(params[:subcription_id])
    @subcription_id = params[:subcription_id]
  elsif is_path?("/payments/new?event_id=2&user=4")
    @event = Event.find(params[:event_id])
    @event_id = params[:event_id]
  end
  @payment = Payment.new
end

def is_path?(*paths)
  paths.include?(request.path)
end

Just make the finds conditional on whether the parameter is present. (Note: your URLs spell the parameter subcription_id, so the params key you test must use the same spelling.)
def new
  @subscription = Subscription.find(params[:subcription_id]) if params[:subcription_id].present?
  @subcription_id = params[:subcription_id]
  @event = Event.find(params[:event_id]) if params[:event_id].present?
  @event_id = params[:event_id]
  @payment = Payment.new
end
or perhaps more clearly...
def new
  if params[:subcription_id].present?
    @subscription = Subscription.find(params[:subcription_id])
    @subcription_id = params[:subcription_id]
  end
  if params[:event_id].present?
    @event = Event.find(params[:event_id])
    @event_id = params[:event_id]
  end
  @payment = Payment.new
end
(Alternatively, find_by returns nil instead of raising ActiveRecord::RecordNotFound when no record matches, e.g. Subscription.find_by(id: params[:subcription_id]).)

The below worked for me:
def new
  @subcription_id = params[:subcription_id]
  @event_id = params[:event_id]
  @payment = Payment.new
  if @subcription_id.present?
    @subscription = Subscription.find(params[:subcription_id])
  elsif @event_id.present?
    @event = Event.find(params[:event_id])
  end
end

Related

How do I tackle this issue in my Guess The Flag Django web app

I'm completely new to Django and I'm trying to build this guess-the-flag web game with it. On the main page, when someone presses the 'play' button, he's sent to a page where a list of 4 randomly selected countries from the DB is generated, and only one of these 4 countries is the actual answer.
Here's the code from views.py in my app directory:
context = {}
context['submit'] = None
context['countries_list'] = None
score = []
score.clear()
context['score'] = 0

def play(request):
    len_score = len(score)
    countries = Country.objects.all()
    real_choice = None
    if request.POST:
        get_guess = request.POST.get('guess')
        print(request.POST)
        if str(get_guess).casefold() == str(context['submit']).casefold():
            score.append(1)
        else:
            score.clear()
        len_score = len(score)
    choices = random.sample(tuple(countries), 4)
    real_choice = random.choice(choices)
    context['countries_list'] = choices
    context['submit'] = real_choice
    context['score'] = len_score
    return render(request, 'base/play.html', context)
Everything works as expected when there's only one person playing and the site is open in only one tab.
The issue is that once someone else opens the site, or it's opened in more than one tab, the score gets reset and a new list of random countries is generated for all users, so your guess will never be right!
How do I go about solving this? Again, I'm completely new to this, so I'm left clueless here.
The best way to store short-term, per-user information is in session variables. This way you can maintain each user's individual state. So, something like:
def play(request):
    len_score = len(score)
    countries = Country.objects.all()
    real_choice = None
    if request.POST:
        get_guess = request.POST.get('guess')
        print(request.POST)
        # Compare guess to session variable
        # (using .get(), which returns None rather than an error if the variable is not found)
        if str(get_guess).casefold() == str(request.session.get('real_choice')).casefold():
            score.append(1)
        else:
            score.clear()
        len_score = len(score)
    choices = random.sample(tuple(countries), 4)
    real_choice = random.choice(choices)
    # Store answer in session variable (as a string, since session data
    # must be JSON-serializable by default)
    request.session['real_choice'] = str(real_choice)
    context['countries_list'] = choices
    context['submit'] = real_choice
    context['score'] = len_score
    return render(request, 'base/play.html', context)
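The score has the same problem as the answer, since the module-level score list is still shared between users. A minimal sketch that keeps both the answer and the score in the session, so no module-level state is shared (the Country model, template name, and the assumption that str(country) yields the name being guessed all come from the question):
import random

from django.shortcuts import render
from .models import Country  # assumed location of the Country model

def play(request):
    countries = Country.objects.all()
    if request.POST:
        get_guess = request.POST.get('guess')
        # compare against the per-user answer saved on the previous request
        answer = request.session.get('real_choice', '')
        if str(get_guess).casefold() == str(answer).casefold():
            request.session['score'] = request.session.get('score', 0) + 1
        else:
            request.session['score'] = 0
    choices = random.sample(tuple(countries), 4)
    real_choice = random.choice(choices)
    # store only a string: the default session serializer is JSON
    request.session['real_choice'] = str(real_choice)
    context = {
        'countries_list': choices,
        'submit': real_choice,
        'score': request.session.get('score', 0),
    }
    return render(request, 'base/play.html', context)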

Why does the render_template keep on showing the old value of the flask form?

I've been searching for an answer for hours. I apologise if I missed something.
I'm using the same form multiple times in order to add rows to my database.
Each time, I check an Excel file to pre-fill some of the WTForms StringFields with known information that the user may want to change.
The thing is: I change form.whatever.data, and when I print it, it shows the new value. But when I render the template, it keeps showing the old value.
I tried doing form.hours_estimate.data = "" before assigning it a new value, just in case, but it didn't work.
I will attach the route I'm talking about below. The important bit is after # Get form ready for next service. If more info is needed, please let me know.
Thank you very much.
@coordinator_bp.route("/coordinator/generate-order/<string:pev>", methods=['GET', 'POST'])
@login_required
def generate_order_services(pev):
    if not (current_user.is_coordinator or current_user.is_manager):
        return redirect(url_for('public.home'))
    # Get the excel URL
    f = open("./app/database/datafile", 'r')
    filepath = f.read()
    f.close()
    error = None
    if GenerateServicesForm().submit1.data and GenerateServicesForm().validate():
        # First screen submit (validate the data -> first Service introduction)
        form = FillServiceForm()
        next_service_row = get_next_service_row(filepath)
        if next_service_row is None:
            excel_info = excel_get_pev(filepath)
            error = "Excel error. Service code not found. If you get this error please report the exact way you did it."
            return render_template('coordinator/get_pev_form.html', form=GetPevForm(), error=error, info=excel_info)
        service_info = get_service_info(filepath, next_service_row)
        service_code = service_info[0]
        start_date = service_info[1]
        time_estimate = service_info[2]
        objects = AssemblyType.get_all()
        assembly_types = []
        for assembly_type in objects:
            assembly_types.append(assembly_type.type)
        form.service_code.data = service_code
        form.start_date.data = start_date
        form.hours_estimate.data = time_estimate
        return render_template('coordinator/fill_service_form.html', form=form, error=error, assembly_types=assembly_types)
    if FillServiceForm().submit2.data:
        if not FillServiceForm().validate():
            objects = AssemblyType.get_all()
            assembly_types = []
            for assembly_type in objects:
                assembly_types.append(assembly_type.type)
            return render_template('coordinator/fill_service_form.html', form=FillServiceForm(), error=error,
                                   assembly_types=assembly_types)
        # Service screen submits
        # Here we save the data of the last submit and ready the next one or end the generation process
        # Ready the form
        form = FillServiceForm()
        next_service_row = get_next_service_row(filepath)
        if next_service_row is None:
            excel_info = excel_get_pev(filepath)
            error = "Excel error. Service code not found. If you get this error please report the exact way you did it."
            return render_template('coordinator/get_pev_form.html', form=GetPevForm(), error=error, info=excel_info)
        service_info = get_service_info(filepath, next_service_row)
        service_code = service_info[0]
        form.service_code.data = service_code
        # create the service (this deletes the service code from the excel)
        service = create_service(form, filepath)
        if isinstance(service, str):
            return render_template('coordinator/fill_service_form.html', form=form, error=service)
        # Get next service
        next_service_row = get_next_service_row(filepath)
        if next_service_row is None:
            # This means there is no more services pending
            return "ALL DONE"
        else:
            # Get form ready for next service
            service_info = get_service_info(filepath, next_service_row)
            service_code = service_info[0]
            start_date = service_info[1]
            time_estimate = service_info[2]
            print("time_estimate")
            print(time_estimate)  # I get the new value.
            objects = AssemblyType.get_all()
            assembly_types = []
            for assembly_type in objects:
                assembly_types.append(assembly_type.type)
            form.service_code.data = service_code
            form.start_date.data = start_date
            form.hours_estimate.data = time_estimate
            print(form.hours_estimate.data)  # Here I get the new value. Everything should be fine.
            # In the html, the old value keeps on popping.
            return render_template('coordinator/fill_service_form.html', form=form, error=error,
                                   assembly_types=assembly_types)
    number_of_services = excel_get_services(filepath=filepath, selected_pev=pev)
    # Get the number of the first excel row of the selected pev
    first_row = excel_get_row(filepath, pev)
    if first_row is None:
        excel_info = excel_get_pev(filepath)
        error = "Excel error. PEV not found. If you get this error please report the exact way you did it."
        return render_template('coordinator/get_pev_form.html', form=GetPevForm(), error=error, info=excel_info)
    service_code = []
    start_date = []
    time_estimate_code = []
    quantity = []
    # Open the excel
    wb = load_workbook(filepath)
    # grab the active worksheet
    ws = wb.active
    for idx in range(number_of_services):
        # Append the data to the lists
        service_code.append(ws.cell(row=first_row + idx, column=12).value)
        start_date.append(str(ws.cell(row=first_row + idx, column=5).value)[:10])
        time_estimate_code.append(ws.cell(row=first_row + idx, column=7).value)
        quantity.append(ws.cell(row=first_row + idx, column=9).value)
    wb.close()
    return render_template('coordinator/generate_services_form.html',
                           form=GenerateServicesForm(),
                           pev=pev,
                           service_code=service_code,
                           start_date=start_date,
                           time_estimate_code=time_estimate_code,
                           quantity=quantity)
Well, I found a workaround: I send the data outside the form, like this:
return render_template('coordinator/fill_service_form.html', form=form, error=error,
                       assembly_types=assembly_types,
                       service_code=service_code,
                       start_date=start_date,
                       time_estimate=time_estimate)
And replace the Jinja form field with this:
<input class="form-control" placeholder="2021-04-23" name="start_date" type="text" value="{{start_date}}">
I'm still using the form (name = the form's field name) while supplying the value externally.
I hope this helps somebody.
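For what it's worth, a likely explanation (my reading, not confirmed in the thread): Flask-WTF binds the POSTed form data automatically on instantiation, and WTForms text fields render raw_data in preference to .data, so assigning .data after a submit does not change what the template shows. Two hedged sketches around this, reusing the question's field name:
# Option 1: build the next form unbound, so rendering falls back to .data
form = FillServiceForm(formdata=None)
form.hours_estimate.data = time_estimate

# Option 2: keep the bound form, but clear the submitted raw value first
form = FillServiceForm()
form.hours_estimate.raw_data = None
form.hours_estimate.data = time_estimate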

How should I be formatting my yield requests?

My Scrapy spider is very confused, or I am, but one of us is not working as intended. My spider pulls start URLs from a file and is supposed to: start on an Amazon search page, crawl the page and grab the URL of each search result, follow the link to each item's page, crawl the item's page for information on the item, and once all items on the first page have been crawled, follow pagination up to page X, rinse and repeat.
I am using ScraperAPI and scrapy-user-agents to randomize my middlewares. I have formatted my start_requests with a priority based on their index in the file, so they should be crawled in order. I have checked and ensured that I AM receiving a successful 200 HTML response with the actual HTML from the Amazon page. Here is the code for the spider:
class AmazonSpiderSpider(scrapy.Spider):
    name = 'amazon_spider'
    page_number = 2
    current_keyword = 0
    keyword_list = []

    payload = {'api_key': 'mykey', 'url': 'https://httpbin.org/ip'}
    r = requests.get('http://api.scraperapi.com', params=payload)
    print(r.text)

    # /////////////////////////////////////////////////////////////////////
    def start_requests(self):
        with open("keywords.txt") as f:
            for index, line in enumerate(f):
                try:
                    keyword = line.strip()
                    AmazonSpiderSpider.keyword_list.append(keyword)
                    formatted_keyword = keyword.replace(' ', '+')
                    url = "http://api.scraperapi.com/?api_key=mykey&url=https://www.amazon.com/s?k=" + formatted_keyword + "&ref=nb_sb_noss_2"
                    yield scrapy.Request(url, meta={'priority': index})
                except:
                    continue

    # /////////////////////////////////////////////////////////////////////
    def parse(self, response):
        print("========== starting parse ===========")
        for next_page in response.css("h2.a-size-mini a").xpath("@href").extract():
            if next_page is not None:
                if "https://www.amazon.com" not in next_page:
                    next_page = "https://www.amazon.com" + next_page
                yield scrapy.Request('http://api.scraperapi.com/?api_key=mykey&url=' + next_page, callback=self.parse_dir_contents)
        second_page = response.css('li.a-last a').xpath("@href").extract_first()
        if second_page is not None and AmazonSpiderSpider.page_number < 3:
            AmazonSpiderSpider.page_number += 1
            yield scrapy.Request('http://api.scraperapi.com/?api_key=mykey&url=' + second_page, callback=self.parse_pagination)
        else:
            AmazonSpiderSpider.current_keyword = AmazonSpiderSpider.current_keyword + 1

    # /////////////////////////////////////////////////////////////////////
    def parse_pagination(self, response):
        print("========== starting pagination ===========")
        for next_page in response.css("h2.a-size-mini a").xpath("@href").extract():
            if next_page is not None:
                if "https://www.amazon.com" not in next_page:
                    next_page = "https://www.amazon.com" + next_page
                yield scrapy.Request(
                    'http://api.scraperapi.com/?api_key=mykey&url=' + next_page,
                    callback=self.parse_dir_contents)
        second_page = response.css('li.a-last a').xpath("@href").extract_first()
        if second_page is not None and AmazonSpiderSpider.page_number < 3:
            AmazonSpiderSpider.page_number += 1
            yield scrapy.Request(
                'http://api.scraperapi.com/?api_key=mykey&url=' + second_page,
                callback=self.parse_pagination)
        else:
            AmazonSpiderSpider.current_keyword = AmazonSpiderSpider.current_keyword + 1

    # /////////////////////////////////////////////////////////////////////
    def parse_dir_contents(self, response):
        items = ScrapeAmazonItem()
        print("============= parsing page ==============")
        temp = response.css('#productTitle::text').extract()
        product_name = ''.join(temp)
        product_name = product_name.replace('\n', '')
        product_name = product_name.strip()
        temp = response.css('#priceblock_ourprice::text').extract()
        product_price = ''.join(temp)
        product_price = product_price.replace('\n', '')
        product_price = product_price.strip()
        temp = response.css('#SalesRank::text').extract()
        product_score = ''.join(temp)
        product_score = product_score.strip()
        product_score = re.sub(r'\D', '', product_score)
        product_ASIN = response.css('li:nth-child(2) .a-text-bold+ span').css('::text').extract()
        keyword = AmazonSpiderSpider.keyword_list[AmazonSpiderSpider.current_keyword]
        items['product_keyword'] = keyword
        items['product_ASIN'] = product_ASIN
        items['product_name'] = product_name
        items['product_price'] = product_price
        items['product_score'] = product_score
        yield items
For the FIRST start URL, it will crawl three or four items and then jump to the SECOND start URL, skipping the remaining items and pagination pages. For the second URL, it will again crawl three or four items and then skip to the THIRD start URL. It continues this way, grabbing three or four items and then skipping to the next URL, until it reaches the final start URL, where it gathers all information completely. Sometimes the spider COMPLETELY SKIPS the first or second start URL. This happens infrequently, but I have no idea what could cause it.
My code for following result-item URLs works fine, but I never get the print statement for "starting pagination", so it is not correctly following pages. Also, there is something odd with the middlewares: the spider begins parsing before it has assigned a middleware.
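No answer is recorded in this thread, but two details in the code above are worth checking (a hedged sketch based on Scrapy's documented API, keeping the 'mykey' placeholder from the question): scrapy.Request ignores meta={'priority': ...} — priority is a keyword argument of Request itself — and Amazon's pagination href is relative, so it must be joined against amazon.com before being wrapped in the ScraperAPI URL:
def start_requests(self):
    with open("keywords.txt") as f:
        for index, line in enumerate(f):
            keyword = line.strip()
            AmazonSpiderSpider.keyword_list.append(keyword)
            formatted_keyword = keyword.replace(' ', '+')
            url = ("http://api.scraperapi.com/?api_key=mykey&url="
                   "https://www.amazon.com/s?k=" + formatted_keyword + "&ref=nb_sb_noss_2")
            # priority is a Request kwarg; higher values run earlier,
            # so negate the index to preserve file order
            yield scrapy.Request(url, priority=-index)

# inside parse / parse_pagination:
second_page = response.css('li.a-last a::attr(href)').extract_first()
if second_page is not None and AmazonSpiderSpider.page_number < 3:
    AmazonSpiderSpider.page_number += 1
    # the href is relative ("/s?k=..."); join it against amazon.com,
    # not against the ScraperAPI host the response came from
    next_url = "https://www.amazon.com" + second_page
    yield scrapy.Request('http://api.scraperapi.com/?api_key=mykey&url=' + next_url,
                         callback=self.parse_pagination)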

Docusign: Email To Sign Not Being Received

I am developing a Django web app for my company that will allow users (or our sales associates) to enter customer information, from which WeasyPrint will generate a PDF containing all of the information, and then allow the user to sign the document digitally using DocuSign's Python SDK. Everything is working at this point, minus the sending of the email to the recipient to be signed. I am unsure why this is happening, and after seeking help from DocuSign's employees, none of them seem to have any idea as to what is going wrong. Here is the code with which I am creating and sending the envelope:
loa = LOA.objects.filter().order_by('-id')[0]  # Model in my SQLite database being used to retrieve the saved PDF for sending.
localbillingname = loa.billingname.replace(" ", "_")
username = "myusername"
integrator_key = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
base_url = "https://demo.docusign.net/restapi"
oauth_base_url = "account-d.docusign.com"
redirect_uri = "http://MyRedirectUrl.com/"
private_key_filename = "path/to/pKey.txt"
user_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
client_user_id = 'Your Local System ID'  # This is the actual string being used

# Add a recipient to sign the document
signer = docusign.Signer()
signer.email = loa.email
signer.name = loa.ainame
signer.recipient_id = '1'
signer.client_user_id = client_user_id

sign_here = docusign.SignHere()
sign_here.document_id = '1'
sign_here.recipient_id = '1'
sign_here.anchor_case_sensitive = 'true'
sign_here.anchor_horizontal_alignment = 'left'
sign_here.anchor_ignore_if_not_present = 'false'
sign_here.anchor_match_whole_word = 'true'
sign_here.anchor_string = 'Signature of individual authorized to act on behalf of customer:'
sign_here.anchor_units = 'cms'
sign_here.anchor_x_offset = '0'
sign_here.anchor_y_offset = '0'
sign_here.tab_label = 'sign_here'
sign_here.IgnoreIfNotPresent = True

tabs = docusign.Tabs()
tabs.sign_here_tabs = [sign_here]

# Create a signers list, attach tabs to signer, append signer to signers.
# Attach signers to recipients objects
signers = []
signer.tabs = tabs
signers.append(signer)
recipients = docusign.Recipients()
recipients.signers = signers

# Create an envelope to be signed
envelope_definition = docusign.EnvelopeDefinition()
envelope_definition.email_subject = 'Please Sign the Following Document!'
envelope_definition.email_blurb = 'Please sign the following document to complete the service transfer process!'

# Add a document to the envelope_definition
pdfpath = "Path/to/my/pdf.pdf"
with open(pdfpath, 'rb') as signfile:
    file_data = signfile.read()
    doc = docusign.Document()
    base64_doc = base64.b64encode(file_data).decode('utf-8')
    doc.document_base64 = base64_doc
    doc.name = "mypdf.pdf"
    doc.document_id = '1'
    envelope_definition.documents = [doc]
    signfile.close()

envelope_definition.recipients = recipients
envelope_definition.status = 'sent'

api_client = docusign.ApiClient(base_url)
oauth_login_url = api_client.get_jwt_uri(integrator_key, redirect_uri, oauth_base_url)
print("oauth_login_url:", oauth_login_url)
print("oauth_base_url:", oauth_base_url)
api_client.configure_jwt_authorization_flow(private_key_filename, oauth_base_url, integrator_key, user_id, 3600)
docusign.configuration.api_client = api_client
auth_api = AuthenticationApi()
envelopes_api = EnvelopesApi()

try:  # login here via code
    login_info = auth_api.login()
    login_accounts = login_info.login_accounts
    base_url, _ = login_accounts[0].base_url.split('/v2')
    api_client.host = base_url
    docusign.configuration.api_client = api_client
    envelope_summary = envelopes_api.create_envelope(login_accounts[0].account_id, envelope_definition=envelope_definition)
    print(envelope_summary)
except ApiException as e:
    raise Exception("Exception when calling DocuSign API: %s" % e)
except Exception as e:
    print(e)

return HttpResponse({'sent'})
When I check the sandbox page, I see that the envelope is listed as sent, to the correct email, containing the correct document to be signed, with the correct information. However, the email is never received, no matter whom I set as the recipient. Does anyone know why this might be?
Thank you in advance!
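No answer is recorded in this thread, but one detail in the code stands out: the signer is given a client_user_id, and in DocuSign a recipient with client_user_id set is treated as an embedded (captive) signer, for whom no notification email is sent; the host application is expected to generate the signing URL itself. If remote email signing is what's wanted, a minimal sketch of the change would be:
# For remote (email) signing, do not set client_user_id on the signer.
signer = docusign.Signer()
signer.email = loa.email
signer.name = loa.ainame
signer.recipient_id = '1'
# signer.client_user_id = client_user_id  # only for embedded signing; setting it suppresses the email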

How to access the values inside the 'files' field in scrapy

I have downloaded some files using the files pipeline and I want to get the values of the files field. I tried to print item['files'] and it gives me a KeyError. Why is this, and how can I do it?
class testspider2(CrawlSpider):
    name = 'genspider'
    URL = 'flu-card.com'
    URLhttp = 'http://www.flu-card.com'
    allowed_domains = [URL]
    start_urls = [URLhttp]
    rules = (
        [Rule(LxmlLinkExtractor(allow=(), restrict_xpaths=('//a'), unique=True), callback='parse_page', follow=True), ]
    )

    def parse_page(self, response):
        List = response.xpath('//a/@href').extract()
        item = GenericspiderItem()
        date = strftime("%Y-%m-%d %H:%M:%S")  # get date&time dd-mm-yyyy hh:mm:ss
        MD5hash = ''  # store as part of the item; some links crawled are not file links so they do not have values on these fields
        fileSize = ''
        newFilePath = ''
        File = open('c:/users/kevin123/desktop//ext.txt', 'a')
        for links in List:
            if re.search('http://www.flu-card.com', links) is None:
                responseurl = re.sub('\/$', '', response.url)
                url = urljoin(responseurl, links)
            else:
                url = links
            # File.write(url+'\n')
            filename = url.split('/')[-1]
            fileExt = ''.join(re.findall('.{3}$', filename))
            if (fileExt != ''):
                blackList = ['tml', 'pdf', 'com', 'php', 'aspx', 'xml', 'doc']
                for word in blackList:
                    if any(x in fileExt for x in blackList):
                        pass  # url is blacklisted
                    else:
                        item['filename'] = filename
                        item['URL'] = url
                        item['date'] = date
            print item['files']
            File.write(fileExt + '\n')
            yield GenericspiderItem(
                file_urls=[url]
            )
            yield item
It is not possible to access item['files'] in your spider. That is because the files are downloaded by the FilesPipeline, and items only reach pipelines after they leave your spider.
You first yield the item, then it gets to the FilesPipeline, then the files are downloaded, and only then is the files field populated with the info you want. To access it, you have to write a pipeline and schedule it after the FilesPipeline. Inside your pipeline, you can access the files field.
Also note that, in your spider, you are yielding two different kinds of items!
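A minimal sketch of such a pipeline (the project and class names are illustrative; the numbers in ITEM_PIPELINES only need to put the custom pipeline after FilesPipeline):
# settings.py
ITEM_PIPELINES = {
    'scrapy.pipelines.files.FilesPipeline': 1,
    'myproject.pipelines.AfterFilesPipeline': 2,  # runs after FilesPipeline
}
FILES_STORE = 'c:/users/kevin123/desktop/downloads'  # illustrative path

# pipelines.py
class AfterFilesPipeline(object):
    def process_item(self, item, spider):
        # the item class must declare files = scrapy.Field() for the
        # FilesPipeline to fill it; by this point it holds dicts like
        # {'url': ..., 'path': ..., 'checksum': ...}
        for file_info in item.get('files', []):
            spider.logger.info('downloaded %s to %s', file_info['url'], file_info['path'])
        return item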