User geolocation - case study - Django

We have a Django app which stores information about places around the world.
The idea is to use geolocation (any method) to determine a visitor's location and then show nearby places on a Google Map.
I am looking for advice on best practices for achieving this. I am aware that not all browsers support geolocation and that, even when it is supported, users may decline the geolocation permission.
I was thinking about a scenario like this:
1. A new user visits our website.
2. We ask for geolocation permission (if the browser supports it).
3. If the user grants permission, we query the database for nearby places.
4. If the user declines, we use geo-IP data to get a position.
In addition, immediately after the user arrives, I want to center the map on their position and then load nearby places via an AJAX request.
Do you have any suggestions for locating users even on older browsers? Or is my concept wrong?

No, your concept is not wrong. You are heading in the right direction; it just needs some refinement:
1. A new user visits our website.
2. We use geo-IP data to get an approximate position.
3. We show a Google Map centered on the user's geo-IP position.
4. We ask for geolocation permission (if the browser supports it).
5. If the user grants permission, we re-center the map on the more accurate location.
6. We query the database for nearby places, using the browser geolocation when available, or the geo-IP position when permission is declined or the browser lacks support.
You can also add a step 5.5, per Peter Tinkler's suggestion: ask the user for a location if the browser doesn't support geolocation. (Not everyone knows their postal code, so ask for free text such as the nearest intersection, which is especially useful on the go.) This also helps when the user wants to search somewhere other than where they are right now.
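On the server side, a single AJAX endpoint can cover steps 2 and 6. Below is a minimal sketch, assuming GeoDjango and a hypothetical Place model with a PointField named location; the browser would call it with lat/lng from navigator.geolocation when permission is granted, and with no parameters otherwise.

    # views.py - hedged sketch, not a drop-in implementation
    from django.contrib.gis.db.models.functions import Distance
    from django.contrib.gis.geoip2 import GeoIP2
    from django.contrib.gis.geos import Point
    from django.http import JsonResponse

    from myapp.models import Place  # hypothetical model with a "location" PointField

    def nearby_places(request):
        lat = request.GET.get("lat")
        lng = request.GET.get("lng")
        if lat and lng:
            # Browser geolocation was granted; use the precise coordinates.
            user_location = Point(float(lng), float(lat), srid=4326)
        else:
            # Fall back to geo-IP (step 2, or the decline case in step 6).
            # Behind a proxy you would read X-Forwarded-For instead.
            lon_lat = GeoIP2().lon_lat(request.META.get("REMOTE_ADDR"))
            user_location = Point(*lon_lat, srid=4326)

        places = (
            Place.objects
            .annotate(distance=Distance("location", user_location))
            .order_by("distance")[:20]
        )
        return JsonResponse({"places": [
            {"name": p.name, "lat": p.location.y, "lng": p.location.x}
            for p in places
        ]})

The same view serves both the initial map-centering request and the later, more accurate one, so the client-side logic stays a simple "call again with coordinates once permission is granted".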


How to customize mobile device detector?

I have a customization request from my client for the Sitecore mobile module.
I want to stay on the full view site from my mobile once I reach the Career page. I am able to do that, but any link inside the Career page whose item has a mobile layout in its presentation details takes the user back to the mobile device.
Can we make a customization that keeps the device in a cookie, so that the user stays on the full view site after reaching the Career page?
Is there any setting we can apply in a pipeline or the session? I just want to remain on the full view site after reaching the Career page, with no switching back to the mobile layout. Kindly suggest.
Yes, you can do exactly that. You'd need to update the rule that detects the various devices (which I'm guessing you already have). You can then create a custom condition to check whether a cookie has been set for the full site (it does not need to be a cookie, of course, but that would be the easiest way).
You can find the existing Conditions and Actions under the path /sitecore/system/Settings/Rules.
To create custom conditions and actions, please read the following article on SitecoreInsight.com.
After creating your custom condition, go into your device item again and update the Rule there to only switch to that device when that cookie has not been set.
[edit]
Come to think of it, it might be possible to append the querystring sc_device={GUID of device} to your 'go to full site' link. I'm not sure what would take precedence here, the 51degrees rules or Sitecore's cookies... Worth a try though :-)

How To Get List Of Apps A User Has Used

I was hoping to use the Graph API to retrieve a list of apps that the user currently has on their account - those that they are using or have used, and have not yet removed. This would be the same list as found by clicking App Centre and My Apps.
I'm not looking to get info about a specific application (yet; I don't know what app to look at, or how to get its ID).
This article: How to get the List of Facebook Applications for a user using Graph API
seemed to be the right answer (and the OP certainly liked it), but for me, while the call succeeds, it returns an empty array, even though I have about 20 apps (mostly games) listed on my profile, half of which are in favourites. I don't need to know which are in favourites; I simply want to know which apps a user is interested in.
I realise this is considered sensitive information, but since I cannot find what string to use for the api() method, I can't tell what permissions I'll need; I currently ask for read_stream but since the call itself succeeds, I assume I have sufficient privilege.
I see that you can list the app requests using /me/apprequests, which suggests you should be able to retrieve the list of apps (that's where the requests come from, usually, except invites to new games) - but I can't find anything that seems right.
Any ideas?
Thanks.
No, it is not possible to retrieve a full list of the applications that a user has TOS'ed (i.e. authorized).
As described in the comments to the OP, it is possible to deduce some of them from /[user]/scores assuming they have granted the user_games_activity permission.
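As a rough illustration of that deduction, here is a hedged Python sketch using the requests library; it assumes a user access token that has been granted user_games_activity, and the exact response shape may differ between Graph API versions.

    # Hedged sketch: deduce (some of) a user's apps from /me/scores.
    import requests

    ACCESS_TOKEN = "..."  # token granted user_games_activity

    resp = requests.get(
        "https://graph.facebook.com/me/scores",
        params={"access_token": ACCESS_TOKEN},
    )
    resp.raise_for_status()

    # Each score entry names the app that reported it, so a partial set
    # of the user's apps falls out of the "application" field.
    apps = {entry["application"]["name"]
            for entry in resp.json().get("data", [])
            if "application" in entry}
    print(sorted(apps))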

Bypass specific URL from Akamai if certain cookie exists

I would like Akamai not to cache certain URLs if a specified cookie exists (i.e. if the user is logged in on specific pages). Is there any way we can do this with Akamai?
The good news is that I have done exactly this in the past for the Top Gear site (www.topgear.com/uk). The logic is that if a cookie is present (in this case "TGCACHEKEY"), the Akamai cache is bypassed for certain URL paths. This effectively turns off Akamai caching of HTML pages when the user is logged in.
The bad news is that you require an Akamai consultant to make this change for you.
If this isn't an option for you, then Peter's suggestions are all good ones. I considered all of them before implementing the cookie-based approach for Top Gear, but in the end none were feasible.
Remember also that Akamai strips cookies from cached resources by default. That may or may not affect your situation.
The Edge Server doesn't check for a cookie before it makes the request to your origin server, and I have never seen anything like that in any of their menus, configuration screens, or documentation.
However, there are a few ways I can think of that you can get the effect that I think you're looking for.
You can specify, in the configuration settings for the respective digital property, which path(s) or URL(s) you don't want it to cache. If you're talking about a logged-on user, you might have a path that only they would reach, or you could set such a thing up server-side. E.g. for an online course you might have www.course.com/php.html that anybody could get to, whereas www.course.com/student/php-lesson-1.html serves the actual logged-on lesson content. Specifying that /student/* is not to be cached would solve that.
If you are serving the same pages to both logged-on and anonymous users and can't separate the paths that way, you could check server-side whether the user is logged on and, if so, automatically add a cache-breaker to the links they follow. You could also do this client-side, but doing it server-side is more secure and faster. The cache-breaker could be of the form userid-random#; combined with the page, that is unique enough that nobody else would request the same URL and get an earlier 'cache-broken' page (see the sketch at the end of this answer).
If neither of the above is workable, there is one other way I can think of, which is a bit unconventional to say the least, but it would work. Create a symbolically linked directory in your document root with another name so that you can apply the first option and exempt it from caching. Then you check whether the user is logged on and, if so, prepend the extra directory to the links. From Akamai's point of view, www.mysite.com/logged-on/page.html can be exempt from caching while www.mysite.com/content/page.html is cached. On your server, if /logged-on/ symbolically links to /content/, you're all set.
When they log in, you could send them to a subdomain that is set up as a ServerAlias, so on your side it's the same site, but on Akamai it has different cache-handling rules.
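As promised above, a minimal Python sketch of the cache-breaker idea; the function name and the cb parameter are illustrative assumptions, and the user object is assumed to be Django-style with is_authenticated and id.

    # Hedged sketch: append a userid-random cache-breaker for logged-on users.
    import secrets
    from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

    def cache_break(url, user):
        if not user.is_authenticated:
            return url  # anonymous users keep getting the cached page
        parts = urlparse(url)
        query = dict(parse_qsl(parts.query))
        query["cb"] = f"{user.id}-{secrets.token_hex(4)}"  # userid-random#
        return urlunparse(parts._replace(query=urlencode(query)))

You would run every internal link through this when rendering pages for logged-on users, so Akamai sees a URL it has never cached before.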
Following the same approach as @llevera's answer, you can use cookies on CloudFlare without needing engineers to make the change for you.
Using that sort of cookie to bypass caching is a technique that is becoming more popular over time; even big companies like Magento are using it for the Magento 2 platform.
The solutions above are still valid, though. Maybe Akamai already supports this now, it is 2017!

In Django, disable @login_required for search engine spiders

I'm looking for a clean way to let search engine spiders bypass @login_required and view pages that normally require a logged-in user. I could write middleware that would automatically log search engines into a dummy account, but that's not exactly what I'd call clean. Any suggestions for a better solution? Thanks.
Don't do this. This is 'cloaking', and can get you banned from Google's index.
Cloaking refers to the practice of presenting different content or URLs to users and search engines. Serving up different results based on user agent may cause your site to be perceived as deceptive and removed from the Google index.
Cloaking: http://www.google.com/support/webmasters/bin/answer.py?answer=66355
Instead, you need to implement Google's First Click Free solution. In this setup, the first click from a Google search result can see the full content, while subsequent clicks are trapped. This can be done on a referrer basis or a cookie basis. You can read more about First Click Free here:
First Click Free: http://www.google.com/support/webmasters/bin/answer.py?answer=74536
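To illustrate the referrer-based variant, here is a small Django sketch; the decorator name is made up for the example, and a production check would need to be considerably more robust (referrers are easily spoofed).

    # Hedged sketch of a referrer-based First Click Free gate.
    from functools import wraps
    from urllib.parse import urlparse

    from django.contrib.auth.decorators import login_required

    def first_click_free(view):
        protected = login_required(view)

        @wraps(view)
        def wrapper(request, *args, **kwargs):
            referrer = urlparse(request.META.get("HTTP_REFERER", "")).netloc
            if referrer == "google.com" or referrer.endswith(".google.com"):
                # First click from a Google results page: full content.
                return view(request, *args, **kwargs)
            # Everyone else goes through the normal login check.
            return protected(request, *args, **kwargs)

        return wrapper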
Why would you want to do this? If search engines can see the pages, then anyone can see them without being logged in, because the information would surface on the search engine's results page. In any case, the only way to identify a spider or bot is by its user agent string, which is trivial to spoof.
I don't get it. In "@login_required" you have an important word: "required". If it's "required", it's for a good reason: in order to see the page, credentials are mandatory, because the content is private, secret, etc.
If you want your pages to be available via search engines, you have to make them public, and thus login is no longer required. So your view should not be protected by the @login_required decorator.
Maybe your problem lies beyond the availability of your pages. Maybe your content is actually meant to be public, and your views should not be protected by this decorator. Maybe the only thing you need is to load the public part for every user (logged in or anonymous) and then load the private bits only if the user is identified.
Otherwise, leaving a backdoor for spiders is definitely a bad idea, because your private content won't be private anymore.
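To make the "public part plus private bits" idea concrete, a small sketch; Article and notes_for_members are hypothetical stand-ins for your own model.

    # Hedged sketch: public page for everyone, extras for identified users.
    from django.shortcuts import get_object_or_404, render

    from myapp.models import Article  # hypothetical model

    def article_detail(request, pk):
        article = get_object_or_404(Article, pk=pk)
        context = {"article": article}  # public part, visible to spiders too
        if request.user.is_authenticated:
            # Private bits only for logged-in users; no decorator needed.
            context["private_notes"] = article.notes_for_members()
        return render(request, "article_detail.html", context)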

Cross Domain User Tracking

We have several websites on different domains and I'd like to be able to track users' movements on these sites.
Obviously cookies are not feasible, because they don't cross domain borders.
I could look at a combination of IP address and User Agent, but there are some cases where that does not work.
I don't want to use Flash or other plugins.
Any ideas? Or am I doomed to rely on the IP/User_Agent combination?
You can designate one domain or subdomain for tracking and have it serve a 1x1 pixel image which you include in all pages you would like to track. Serve a cookie with the image, look at the tracking domain's server logs, voilà.
This solution requires no JavaScript, and works even if the user disables third-party cookies.
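A minimal sketch of such a pixel endpoint as a Django view on the tracking domain (the cookie name and lifetime are arbitrary choices); the Referer header of each pixel request tells you which page was viewed.

    # Hedged sketch: serve a transparent 1x1 GIF and tag the browser.
    import uuid
    from django.http import HttpResponse

    # Smallest commonly used transparent 1x1 GIF.
    PIXEL_GIF = (
        b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
        b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
        b"\x00\x00\x02\x02D\x01\x00;"
    )

    def pixel(request):
        response = HttpResponse(PIXEL_GIF, content_type="image/gif")
        response["Cache-Control"] = "no-store"  # every page view must hit us
        if "uid" not in request.COOKIES:
            # First visit from this browser: assign a tracking id.
            response.set_cookie("uid", uuid.uuid4().hex,
                                max_age=365 * 24 * 3600)
        return response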
First, let's make sure the user agent is sending cookies:
If getCookie("c") == null then setCookie("c", "anyValue")
Then let the request finish (aka wait for next request)
Let's call our tracker cookie uaid.
If GET http://child.com/any-page and getCookie("c") is not null and getCookie("uaid") is null...
Redirect to http://parent.com/give-me-a-uaid?returnTo=http://child.com/any-page
On http://parent.com/give-me-a-uaid, check for cookie uaid
If not exists, create it and add it to response. If it exists, get its value.
Redirect to http://child.com/any-page?uaid=valueOfParentsUAIDCookie
Child.com sets cookie uaid with valueOfParentsUAIDCookie
Redirect to http://child.com/any-page
And of course, you are validating input, and white-listing your redirect URLs :)
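Step 4 might look roughly like this as a Django view on parent.com; the whitelist contents and the names are assumptions for the sketch, following the validation advice above.

    # Hedged sketch of the parent.com endpoint from step 4.
    import uuid
    from urllib.parse import urlparse

    from django.http import HttpResponseBadRequest, HttpResponseRedirect

    REDIRECT_WHITELIST = {"child.com", "www.child.com"}  # assumed hosts

    def give_me_a_uaid(request):
        return_to = request.GET.get("returnTo", "")
        if urlparse(return_to).netloc not in REDIRECT_WHITELIST:
            return HttpResponseBadRequest("unknown redirect target")

        # Reuse the parent-domain uaid if present, otherwise mint one.
        uaid = request.COOKIES.get("uaid") or uuid.uuid4().hex
        sep = "&" if "?" in return_to else "?"
        response = HttpResponseRedirect(f"{return_to}{sep}uaid={uaid}")
        response.set_cookie("uaid", uaid, max_age=365 * 24 * 3600)
        return response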
This question is closely related to the question Accessing Domain Cookies within an iFrame on Internet Explorer.
For Internet Explorer, I need to take P3P policies into account and set an additional P3P HTTP header to allow images to set cookies across domain borders. Then I can use Simon's suggestion.
You can follow the same concept used by Google Analytics: inject JavaScript into the pages you want to track.
You do not give any context for your situation, just the basic problem, so it is difficult to give an answer that clearly fits. However, here are some techniques/mechanisms for passing information from one page to another, regardless of the domains involved:
include a hyperlink to a 1x1 pixel transparent GIF image (sometimes called a "beacon")
rely on referrer information in HTTP request headers to identify the page a hyperlink is on
include extra parameters in hyperlinks to the other site, assuming you run both sites
buy the services of a company like Akamai to do user tracking for you
possibly use a cross-domain cookie mechanism in the future, if the standard is ever approved
Which technique works really comes down to whether or not you can place software on all of the sites (servers) the user will visit that you are interested in.