Ember.js: this in a callback

I have an action in my controller to create a new page (Page is one of my models).
Afterwards, I want to clear out the form and transition to the new page.
Here is my code:
App.ApplicationController = Ember.ObjectController.extend
  newSlug: (->
    # some function
  ).property('newTitle')

  newMenuName: # some property

  actions:
    addNewPage: ->
      # Get the site
      site = @get('model')
      # Get all the pages associated with the site
      pages = site.get('pages')
      # ...
      # snipped a bunch out to save space
      # ...
      # Create a new page, passing in our title, slug and menu_name
      page = pages.createRecord({title: title, slug: slug, menu_name: menu_name, menu_1: menu_1, menu_2: menu_2, menu_1_order: menu_1_order, menu_2_order: menu_2_order})
      page.save().then(onSuccess, onError)

      onSuccess = (page) ->
        page = @store.find('page', slug)
        console.log('Slug: ' + slug + ' ' + page.slug)
        @set('newTitle', '')
        @set('newSlug', '')
        @set('newMenuName', '')
        $('#addPageModal').foundation('reveal', 'close')
        @transitionToRoute('page', page)
But I'm getting an error saying that this has no method 'set'. I have even tried using self = this, but it gives me the same error.
Any thoughts?
BTW, in case you're wondering, inside onSuccess, page is not defined, so I have to look it up again. I thought a promise was supposed to return the data sent back from the API.
EDIT:
I must look like a total imbecile, but once again, right after posting a question here, I found (part of) the answer myself. It turns out that in CoffeeScript (so far I'm not a big fan of it in combination with Ember.js) I needed the fat arrow, so my callback function becomes:
onSuccess = (page) =>
  page = @store.find('page', slug)
  console.log('Slug: ' + slug + ' ' + page.slug)
  @set('newTitle', '')
  @set('newSlug', '')
  @set('newMenuName', '')
  $('#addPageModal').foundation('reveal', 'close')
  @transitionToRoute('page', page)
However, page is still not defined. Not sure what to do about that.
EDIT:
And the correct answer is....
App.ApplicationController = Ember.ObjectController.extend
  newSlug: (->
    # some function
  ).property('newTitle')

  newMenuName: # some property

  actions:
    addNewPage: ->
      # Get the site
      site = @get('model')
      # Get all the pages associated with the site
      pages = site.get('pages')
      # ...
      # snipped a bunch out to save space
      # ...
      # Define the handlers BEFORE save() is called, and use the fat
      # arrow so `this` stays bound to the controller. save() resolves
      # with the saved record, so `page` is defined here.
      onSuccess = (page) =>
        @set('newTitle', '')
        @set('newSlug', '')
        @set('newMenuName', '')
        $('#addPageModal').foundation('reveal', 'close')
        @transitionToRoute('page', page.get('slug'))

      # And just for completeness
      onError = (page) =>
        # Do something

      # Create a new page, passing in our title, slug, menu_name, etc.
      page = pages.createRecord({title: title, slug: slug, menu_name: menu_name, menu_1: menu_1, menu_2: menu_2, menu_1_order: menu_1_order, menu_2_order: menu_2_order})
      page.save().then(onSuccess, onError)

Related

How to add a URL component to the current URL via an HTML button? (Django 2.1)

I have an HTML button that is supposed to sort the search results in alphabetical order.
Button HTML code:
<a href="?a-z=True">A-Z</a>
views.py:
def query_search(request):
    articles = cross_currents.objects.all()
    search_term = ''
    if 'keyword' in request.GET:
        search_term = request.GET['keyword']
        articles = articles.annotate(
            similarity=Greatest(TrigramSimilarity('Title', search_term), TrigramSimilarity('Content', search_term))
        ).filter(similarity__gte=0.03).order_by('-similarity')
    if request.GET.get('a-z') == 'True':
        articles = articles.order_by('Title')
Currently, the URL contains the keywords searched by the user. For example, if the user searches for "cheese," the URL will be search/?keyword=cheese. When I click the sorting button, the URL becomes search/?a-z=True and loses the keyword, which means that the sorting mechanism isn't sorting the search results based on the keyword. I think I need the URL to contain both the keyword and ?a-z=True for the sorting mechanism to work on the search results. How can I make that happen?
This is not specific to Django; you can use JavaScript to do it:
Add a function that reads the current URL and appends the necessary sort param, and call that function via onclick:
<a onclick="sort('a-z=True')">A-Z</a>
Then, in your JS, you can check whether the URL already contains a keyword param; if not, just add the sort param as the first query parameter.
function sort(foo) {
    let url = window.location.href;
    // check if the current url already includes a "keyword" param, else add the sort as the first param
    if (url.includes("?") && url.includes("keyword")) window.location.href = url + "&" + foo;
    else window.location.href = url + "?" + foo;
}
This is just an idea and might not work since I never tried to run this.
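A server-side alternative (my own sketch, not part of the original answer): build the A-Z link in the view from a mutable copy of request.GET, so an existing keyword parameter survives the click. The sort_url name is made up for illustration.

def query_search(request):
    # ... filtering code from the question ...
    sort_params = request.GET.copy()          # request.GET itself is immutable
    sort_params['a-z'] = 'True'
    sort_url = '?' + sort_params.urlencode()  # e.g. '?keyword=cheese&a-z=True'
    # pass sort_url to the template and use it as the href of the A-Z link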

How does one go about partial updates?

How does one go about partial updates (i.e. via PATCH)? rake routes indicates that def update handles both PUT and PATCH. This is how my Rails API is set up:
@user.first_name = user_params[:attributes][:'first-name']
@user.last_name = user_params[:attributes][:'last-name']
In the User model, both first_name and last_name have validates … presence: true. However, the client is trying to hit the endpoint with just attributes[first-name]; note that attributes[last-name] is not being passed in the request. Rails sees that @user.first_name has a value, but @user.last_name is nil, so a validation error is thrown.
One way I thought of going about this was something like:
@user.first_name = user_params[:attributes][:'first-name'].present? ? user_params[:attributes][:'first-name'] : @user.first_name
@user.last_name = user_params[:attributes][:'last-name'].present? ? user_params[:attributes][:'last-name'] : @user.last_name
Is this a viable approach? Or is there something better I can consider?
EDIT: A more involved case is when I need to pre-calculate something before actually saving the object. Take, for example, a product updating its price against a discount value, if present:
def update
  product = Product.find(params[:id])
  product.amount_in_cents = product_params[:attributes][:'amount-in-cents']
  product.discount_in_percentage = product_params[:attributes][:'discount-in-percentage'].present? ? product_params[:attributes][:'discount-in-percentage'].to_f : nil # Can be 0.10
  if product.discount_in_percentage.present?
    product.amount_in_cents = product.amount_in_cents + (product.amount_in_cents * product.discount_in_percentage)
  else
    product.amount_in_cents = product.amount_in_cents
  end
  if product.save
    # ...
  end
end
In Rails, the convention is that model attributes are sent to the app as user[first_name], user[last_name], and so on, and that the controller builds a private method like users_params representing the data to be fed to the User model, like:
# in controller
def update
  user = User.find(params[:id])
  user.update(users_params)
end

private

# This will prepare the whitelisted params data
def users_params
  params.require(:user).permit(:first_name, :last_name, ...)
end
There is no need for this:
@user.first_name = user_params[:attributes][:'first-name'].present? ? user_params[:attributes][:'first-name'] : @user.first_name
@user.last_name = user_params[:attributes][:'last-name'].present? ? user_params[:attributes][:'last-name'] : @user.last_name
In your case, you need to reformat the params keys to first_name instead of first-name, and so forth. That solves the partial-update problem naturally: permit only passes along the keys that are actually present in the request, so update touches just those attributes and the others keep their stored values.
Tip: try to keep it as simple as possible.

Why does scrapy miss some links?

I am scraping the website "www.accell-group.com" using the Scrapy library for Python. The site is scraped completely; in total, 131 pages (text/html) and 2 documents (application/pdf) are identified. Scrapy did not throw any warnings or errors. My algorithm is supposed to scrape every single link. I use CrawlSpider.
However, when I look at the page "http://www.accell-group.com/nl/investor-relations/jaarverslagen/jaarverslagen-van-accell-group.htm", which is reported by Scrapy as scraped/processed, I see that it contains more PDF documents, for example "http://www.accell-group.com/files/4/5/0/1/Jaarverslag2014.pdf". I cannot find any reason for them not to be scraped. There is no dynamic/JavaScript content on this page, and they are not forbidden in "http://www.accell-group.com/robots.txt".
Do you have any idea why this can happen?
Is it maybe because the "files" folder is not in "http://www.accell-group.com/sitemap.xml"?
Thanks in advance!
My code:
class PyscrappSpider(CrawlSpider):
    """This is the Pyscrapp spider"""
    name = "PyscrappSpider"

    def __init__(self, *a, **kw):
        # Get the passed URL
        originalURL = kw.get('originalURL')
        logger.debug('Original url = {}'.format(originalURL))
        # Add a protocol, if needed
        startURL = 'http://{}/'.format(originalURL)
        self.start_urls = [startURL]
        self.in_redirect = {}
        self.allowed_domains = [urlparse(i).hostname.strip() for i in self.start_urls]
        self.pattern = r""
        self.rules = (Rule(LinkExtractor(deny=[r"accessdenied"]), callback="parse_data", follow=True), )
        # Get the WARC writer
        self.warcHandler = kw.get('warcHandler')
        # Initialise the base constructor
        super(PyscrappSpider, self).__init__(*a, **kw)

    def parse_start_url(self, response):
        if response.request.meta.has_key("redirect_urls"):
            original_url = response.request.meta["redirect_urls"][0]
            if (not self.in_redirect.has_key(original_url)) or (not self.in_redirect[original_url]):
                self.in_redirect[original_url] = True
                self.allowed_domains.append(original_url)
        return self.parse_data(response)

    def parse_data(self, response):
        """This function extracts data from the page."""
        self.warcHandler.write_response(response)
        pattern = self.pattern
        # Check if we are interested in the current page
        if (not response.request.headers.get('Referer')
                or re.search(pattern, self.ensure_not_null(response.meta.get('link_text')), re.IGNORECASE)
                or re.search(r"/(" + pattern + r")", self.ensure_not_null(response.url), re.IGNORECASE)):
            logging.debug("This page gets processed = %(url)s", {'url': response.url})
            sel = Selector(response)
            item = PyscrappItem()
            item['url'] = response.url
            return item
        else:
            logging.warning("This page does NOT get processed = %(url)s", {'url': response.url})
            return response.request
Remove, or expand appropriately, your allowed_domains variable and you should be fine. By default, all the URLs the spider follows are restricted by allowed_domains.
EDIT: This case concerns PDFs in particular. PDFs are explicitly excluded as extensions per the default value of deny_extensions (see here), which is IGNORED_EXTENSIONS (see here).
To allow your spider to crawl PDFs, all you have to do is exclude them from IGNORED_EXTENSIONS by setting the value of deny_extensions explicitly:
from scrapy.linkextractors import IGNORED_EXTENSIONS

self.rules = (
    Rule(
        LinkExtractor(deny=[r"accessdenied"], deny_extensions=set(IGNORED_EXTENSIONS) - set(['pdf'])),
        callback="parse_data",
        follow=True,
    ),
)
So, I'm afraid, this is the answer to the question "Why does Scrapy miss some links?". As you will likely see, it just opens the door to further questions, like "How do I handle those PDFs?", but I guess that is the subject of another question.
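To see the filtering in isolation, here is a small sketch (the one-link HTML body is a made-up fixture): the default LinkExtractor drops the PDF link, while one with pdf removed from deny_extensions keeps it.

from scrapy.linkextractors import LinkExtractor, IGNORED_EXTENSIONS
from scrapy.http import HtmlResponse

# A made-up single-link page, just for demonstration
response = HtmlResponse(
    url='http://www.accell-group.com/nl/investor-relations/',
    body=b'<a href="/files/4/5/0/1/Jaarverslag2014.pdf">Jaarverslag 2014</a>',
    encoding='utf-8',
)

print(len(LinkExtractor().extract_links(response)))  # 0 - pdf is in IGNORED_EXTENSIONS
pdf_friendly = LinkExtractor(deny_extensions=set(IGNORED_EXTENSIONS) - set(['pdf']))
print(len(pdf_friendly.extract_links(response)))     # 1 - the PDF link is now extracted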

Return ajax select field on form error

I have a form that triggers an ajax request to a view after the user tabs out of the 'address' field. It retrieves the zip code and then populates pickups with the same zip. The problem is that if there are any form errors, the dropdown with the results I generate from my get_pickups view is lost. How can I keep the results even when there are form errors?
def get_pickups(request):
    if request.is_ajax():
        # Get available pickup dates based upon zip code
        zip = request.POST.get('zip', None)
        routes = Route.objects.filter(zip=zip).values_list('route', flat=True)
        two_days_from_today = date.today() + relativedelta(days=+2)
        submitted_from = request.POST.get('template', None)
        if submitted_from == '/donate/':
            template = 'donate_form.html'
            results = PickupSchedule.objects.filter(route__in=routes, date__gt=two_days_from_today, current_count__lt=F('specials')).order_by('route', 'date')
        else:
            template = 'donate-external.html'
            results = PickupSchedule.objects.filter(route__in=routes, date__gt=two_days_from_today, current_count__lt=F('specials')).order_by('date')
        return render_to_response(template, {'zip': zip, 'results': results}, context_instance=RequestContext(request))
My ajax call via jQuery:
$.ajax({
    type: "POST",
    url: "/get_pickups/",
    data: {
        'zip': zip,
        'template': template
    },
    success: function(data) {
        results = $(data).find('#results').html();
        $("#id_pickup_date").replaceWith("<span>" + results + "</span>");
    },
    error: function() {
        alert("Error");
    }
});
I can think of two approaches:
1. Attach the contents of that dropdown to your POST and return it along with the data in the other fields if there's an error (downside: added complexity in the server code; a sketch follows below).
2. Use JavaScript to check whether that zip field is non-empty on page load/reload. If it's not empty (e.g. after an error return), call your ajax lookup for that dropdown again (downside: duplicate calculations).
I'd use whichever solution you feel most skilled with (Python/Django or JavaScript).
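A rough sketch of the first approach, reusing the names from the question (DonationForm and the 'thanks' redirect are hypothetical; Route, PickupSchedule, F, and relativedelta come from your get_pickups view): when the form is invalid, re-run the same pickup lookup with the zip that was just submitted so the dropdown is rebuilt on re-render.

def donate(request):
    form = DonationForm(request.POST or None)  # hypothetical form class
    results = None
    zip = request.POST.get('zip', None)
    if zip:
        # Same lookup as in get_pickups, so the dropdown survives a re-render
        routes = Route.objects.filter(zip=zip).values_list('route', flat=True)
        two_days_from_today = date.today() + relativedelta(days=+2)
        results = PickupSchedule.objects.filter(route__in=routes, date__gt=two_days_from_today, current_count__lt=F('specials')).order_by('route', 'date')
    if request.method == 'POST' and form.is_valid():
        form.save()
        return redirect('thanks')  # hypothetical success route
    # On validation errors we fall through and re-render with both the form errors and the results
    return render_to_response('donate_form.html', {'form': form, 'zip': zip, 'results': results}, context_instance=RequestContext(request))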

In Django, how do I know if a URL is part of the urlpatterns config?

I am building a dynamic breadcrumb, and some parts of it are not valid URLs (they are not in urlpatterns).
I have this templatetag:
@register.filter
def crumbs(url):
    "Return breadcrumb trail leading to URL for this page"
    l = url.split('/')
    urls = []
    path = ""
    for index, item in enumerate(l):
        if item == "":
            continue
        path += item + "/"
        urls.append({'path': path, 'name': item})
Now, I want to check whether a specific URL is valid, i.e., has an entry in urlpatterns (of course, I will need to change my templatetag).
Something like:
IsInUrlPattern('/') => True
IsInUrlPattern('/blog/2004/') => True
IsInUrlPattern('/blog/thisfail/') => False
You want the resolve() function.
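A minimal sketch (the function is renamed to snake_case; on Django >= 2.0, import from django.urls instead): resolve() raises Resolver404 when the path matches nothing in urlpatterns.

from django.core.urlresolvers import resolve, Resolver404  # django.urls on newer Django

def is_in_url_pattern(path):
    """Return True if path matches an entry in urlpatterns."""
    try:
        resolve(path)
        return True
    except Resolver404:
        return False

is_in_url_pattern('/')                # True
is_in_url_pattern('/blog/2004/')      # True, assuming such a pattern exists
is_in_url_pattern('/blog/thisfail/')  # False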