I'm trying to distinguish between a web request coming from my app's iframe inside Facebook and a regular web visit, so that I can deliver the correct layout.
I set a session variable when the first iframe request comes into my server (Facebook sends a POST param called signed_request to your default canvas URL), but then if the user visits my website afterwards (outside of Facebook), they get the iframe layout delivered instead of what my site should look like.
I've looked through all the META info that comes in with the iframe request and I don't see anything that would let me distinguish the two.
Any help would be much appreciated.
UPDATE: I'm using AppEngine as my application host
The easiest way is to use a unique URL for access from Facebook. For example, if your website is www.site.com, set up either fb.site.com or www.site.com/fb on your server and point it to the same place as www.site.com (and, of course, set your Facebook app settings to use the alternate URL). Then your server code can simply check which URL was requested to decide whether to format for Facebook or for the standalone website.
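Since the asker is on AppEngine this is not their stack, but here is a hypothetical Node-style sketch of the idea (branch on the requested host or path; the host name, path, and render functions below are all made up):

var http = require("http");

function renderFacebookLayout() { return "<html><!-- iframe layout --></html>"; }
function renderStandardLayout() { return "<html><!-- regular layout --></html>"; }

http.createServer(function (req, res) {
    // Facebook traffic arrives via the alternate URL, e.g. fb.site.com or /fb.
    var fromFacebook = req.headers.host === "fb.site.com" ||
                       req.url.indexOf("/fb") === 0;
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(fromFacebook ? renderFacebookLayout() : renderStandardLayout());
}).listen(8080);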
Another approach is to combine a session variable on the server side with some JavaScript on the client side. Set a session variable when you receive the signed_request parameter, and check it on each page load. As long as the session variable is set, you output the iframe format and add a bit of JavaScript to each page. That JavaScript checks that the page is still in an iframe, using something like if (window.self != window.top) { /* inside iframe */ }. If the page is not inside an iframe, the session variable is stale, so the script jumps to a URL that tells the server to clear it and then re-displays the page in the regular layout.
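A minimal sketch of that client-side check (the /clear-fb-layout URL is just a made-up placeholder for whatever endpoint you use to clear the session flag):

// Emitted only on pages rendered with the iframe layout, i.e. while the
// "came from Facebook" session flag is still set on the server.
(function () {
    if (window.self === window.top) {
        // Not inside an iframe, so the session flag is stale. Ask the server
        // to clear it and re-render this page with the regular layout.
        window.location = "/clear-fb-layout?next=" +
            encodeURIComponent(window.location.pathname);
    }
})();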
I have a website that requires authorization on all of its pages. To achieve that, there is a server that responds based on a cookie: if the cookie is present it sends the Ember app, otherwise the login site. This allows Facebook-like behavior. I am using the ember-simple-auth addon to help with authorization.
Since the user is never inside the app without being successfully authorized, one should be able to save an ember-data record object into the session. This works nicely for one tab but breaks horribly with multiple tabs. Is there a way to keep ember-data objects in the session and still support multiple browser tabs?
Edit:
I may have found a workaround: save the record's index (id) into the session and use peekRecord in a computed property.
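If it helps, here is a rough sketch of that workaround, assuming the id is stashed in the session under a made-up key called currentUserId and the model is called user:

import Ember from 'ember';

export default Ember.Component.extend({
  session: Ember.inject.service(),
  store: Ember.inject.service(),

  // peekRecord only looks in the local store, so the record must already
  // have been loaded (e.g. right after authentication) in each tab.
  currentUser: Ember.computed('session.data.currentUserId', function () {
    var id = this.get('session.data.currentUserId');
    return id ? this.get('store').peekRecord('user', id) : null;
  })
});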
I am currently working on a web app with Ember.JS. I want my customers to log in with their Twitter account using OAuth, but I don't want my app to reload when they do.
So my idea was to have the login button open a popup to the Twitter authentication page, which then redirects back to a page on my site containing some JS that acts on the result, e.g.
window.opener.success(userdata);
and
window.opener.failure(error);
But since the popup first redirects to Twitter, browsers clear the window.opener properties to prevent cross-site scripting, even though the popup does redirect back to my own domain (where the JS code is).
Is there another way to go about this?
Edit: I could use postMessage, but that doesn't work in IE8/IE9 in a popup, only in an iframe.
Yes, you have the same idea as some other programmers at Vestorly: they made a social authentication plugin called Torii. I would recommend it, as they have probably also taken care of all the obvious security concerns.
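A very rough sketch of what using it might look like, assuming Torii is installed so that it is available as the torii service and a Twitter provider is configured under the name 'twitter' (the provider name and the handling of the result are assumptions; check the Torii docs for your setup):

import Ember from 'ember';

export default Ember.Controller.extend({
  torii: Ember.inject.service(),

  actions: {
    signInWithTwitter: function () {
      // Torii opens the popup, handles the redirect dance, and resolves with
      // the provider's authorization data, so the main app never reloads.
      this.get('torii').open('twitter').then(function (authData) {
        console.log('signed in:', authData);
      }, function (error) {
        console.log('sign-in failed:', error);
      });
    }
  }
});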
As the title implies,
I need to fetch data from a certain website that requires a login to use.
The login procedure might require cookies or sessions.
Do I need QtWebkit, or can I get away with just QNetworkAccessManager?
I have no experience with either, and will start learning as I go.
So please save me a bit of time comparing the two ^^
Thank you in advance,
Evan
Edit: Having read some related answers,
I'll add some clarifications:
The website in question does not have an API, so I will need to scrape the data from the page elements myself.
Can I do that with just QNetworkAccessManager?
No, in most cases you don't need a fully simulated web browser; performing the same web requests a real browser would make is usually enough.
Try to record the web requests in your browser, using an add-on like "Live HTTP Headers" or "Firebug" in Firefox; I think Chrome provides a similar tool out of the box. These tools record the GET and POST requests the browser makes when you submit a form on the page.
Another option is to inspect the HTML code of the login page. Find the <form> tag and its fields. Put them together in a GET / POST request in your application to simulate the same form.
Remember that some pages use randomized "tokens" in their forms, and some set those tokens as cookies. In such cases, you need to request the login page itself in your application first (before sending the filled-in form). Both QWebView and QNetworkAccessManager have cookie support.
To sum things up, I think QWebView provides a far more elegant way to simulate user interaction with a web page. The manual way is, however, more "lightweight": you don't need WebKit, and your application might be faster, because only the HTML page is loaded, without any linked resources like images, CSS, or JavaScript files.
QWebView, as the class name states, is a view, so it displays something (in this case, web pages). If you don't need to display the loaded page, then you don't need a view. QNetworkAccessManager can do the job, but you need some knowledge of the HTTP protocol, and also of the target site: how it handles logins, what kind of request you have to send to log in, etc.
I am developing a desktop application that allows the user to capture the contents of a web page loaded in a web browser. I take the URL from the browser, load the contents into my WebView, and then create an image from it.
It works fine with HTTP URLs. The problem comes when I have to capture the contents of HTTPS URLs.
Suppose a login page with an HTTPS URL is displayed in the browser. I get this URL from the browser and try to load it in my web view, and I get the following error:
"The certificate for this server is invalid. You might be connecting to a server that is pretending to be “” which could put your confidential information at risk."
Suppose the user has logged in to a web page and is viewing some content in the Safari browser. Now, if he wants to capture the entire web page, he comes back to my app.
But my app is not able to capture that content. The reason is that once the user logs into a site, cookies are written to his system, and these are browser-specific. Hence, my web view cannot directly get into the page the user is viewing in the browser.
Even though this technically makes sense, users will not accept this behavior in my app.
How do I solve this? Is there any alternative method to capture the entire web page that user is viewing in the browser?
Thanks and Regards,
Deepa
You could investigate making your application a browser plugin for those browsers on your target platform.
I've read all the other q's here regarding the topic but couldn't solve my problem.
On my website I'm setting the user's email in localStorage, and I want to retrieve it in the extension.
localStorage.setItem("user", "andrei.br92#gmail.com" );
But when I try to retrieve it from the Chrome extension, it fails to do so:
value = localStorage.getItem("user");
Which way is easier? Cookies or localStorage? I'm not fussy.
Please see this:
http://code.google.com/chrome/extensions/content_scripts.html#host-page-communication
Content scripts are run in a separate JavaScript world, which means the content script's localStorage is different from the website's localStorage. The only thing the two share is the DOM, so you'll need to use DOM nodes/events to communicate.
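A minimal sketch of that kind of DOM-based hand-off (the FROM_PAGE marker is just a made-up message type):

// In the web page itself: publish the value for the extension to pick up.
window.postMessage({ type: "FROM_PAGE", user: localStorage.getItem("user") }, "*");

// In the content script: listen on the shared window for that message.
window.addEventListener("message", function (event) {
    // Only accept messages coming from the page itself.
    if (event.source !== window) { return; }
    if (event.data && event.data.type === "FROM_PAGE") {
        console.log("user from page:", event.data.user);
    }
}, false);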
Use chrome.storage.local instead of localStorage. Content scripts using chrome.storage see the same data that the extension pages see. More at https://developer.chrome.com/extensions/storage.html
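A small sketch of that, assuming the "storage" permission is declared in the manifest:

// In the content script: copy the value into the extension's storage area.
chrome.storage.local.set({ user: "andrei.br92#gmail.com" });

// In the popup or background page: read it back later.
chrome.storage.local.get("user", function (items) {
    console.log("user:", items.user);
});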
Please see the information on Chrome content scripts. I'm betting you fell into the same initial trap that I did -- trying to read localStorage from a page-injected script, yes?
You do not want to use cookies when localStorage can do the job. That is because:
Cookies can be accessed/modified only through the background page.
Cookies are stored in the context of a URL/domain, not of the extension, so you would have to store a cookie for every domain you wish to operate on.
With every HTTP request, all the cookies associated with the corresponding URL/domain get transmitted to the server, so in effect you would be adding overhead to the user's requests.
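For comparison, the cookie route would look roughly like this, and only from the background page (it needs the "cookies" permission plus a host permission for the site; the URL and cookie name below are placeholders):

// Background page only: look up a cookie that the website has set.
chrome.cookies.get({ url: "https://www.example.com", name: "user" }, function (cookie) {
    if (cookie) {
        console.log("user:", cookie.value);
    }
});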