I'm writing an application in Django that gives users the ability to embed videos from my site. I'm giving the user iFrame code to embed the videos. I've come to discover that this isn't allowed. The console shows the following error when trying to do so:
Load denied by X-Frame-Options: http://blah.com/embed/110/ does not permit cross-origin framing.
After much research, I've discovered what's going on. My question is: does anyone know how services like Youtube and Vimeo get around this?
There is a special HTTP header that allows or disallows showing a page inside an iframe: X-Frame-Options.
It's used to prevent an attack called clickjacking. You can check Django's documentation about it: https://docs.djangoproject.com/en/dev/ref/clickjacking/
Sites that want their content to be shown in an iframe simply don't set this header.
I think this protection is turned on by default in your installation of Django. If you want to allow embedding your content inside iframes, you can either disable the clickjacking protection in your settings for the whole site, or use per-view control with the django.views.decorators.clickjacking decorators:
xframe_options_exempt
xframe_options_deny
xframe_options_sameorigin
Per-view control is the better option.
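For example, a minimal sketch of a per-view exemption (the view name and template are illustrative, not from your code):

from django.views.decorators.clickjacking import xframe_options_exempt
from django.shortcuts import render

@xframe_options_exempt
def embed_video(request, video_id):
    # This response is sent without the X-Frame-Options header,
    # so other sites may embed it in an iframe.
    return render(request, 'embed.html', {'video_id': video_id})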
I'm looking to see some info about my Facebook contacts, and I want the info to be overlaid on the currently open website.
Currently, I'm trying to do this via a bookmarklet.
Is it possible for me to overlay a div over the currently open web page and populate it with a functioning facebook login button (if the user is not logged in)? Are there publicly available working examples of something like this?
It is probably not possible to simply embed Facebook within an iframe, because Facebook blocks people from embedding their pages within frames or iframes by sending the response header X-Frame-Options: DENY. This is most likely to prevent clickjacking and similar security exploits.
To test this, enter any page from Facebook into http://savanttools.com/testframe
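If you'd rather check programmatically than through that page, a quick sketch in Python with the requests library does the same thing:

import requests

# Inspect the response headers to see whether framing is permitted.
resp = requests.get('https://www.facebook.com')
print(resp.headers.get('X-Frame-Options'))  # prints the header value, e.g. 'DENY'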
Facebook has an API which allows you to do many things, but it requires server-side code and cannot be done simply with a bookmarklet.
There is also always the brute-force method, where your server scrapes data from any website you point it at. That data could then be put into a bookmarklet.
Finally, the same thing could be achieved by writing an add-on or a user script without using a bookmarklet at all.
As the title implies,
I need to fetch data from a certain website which requires a login to use.
The login procedure might need cookies, or sessions.
Do I need QtWebkit, or can I get away with just QNetworkAccessManager?
I have no experience with either, and will start learning as I go.
So please save me a bit of time comparing the two ^^
Thank you in advance,
Evan
Edit: Having read some related answers,
I'll add some clarifications:
The website in question does not have an API, so I will need to scrape web elements for the data myself.
Can I do that with just QNetworkAccessManager?
No, you usually don't need a full simulated web browser. In most cases, just performing the same web requests a web browser would make is enough.
Try to record the web requests in your browser, using a plugin like "Live HTTP Headers" or "Firebug" in Firefox. I think Chrome provides a similar tool out of the box. These tools record the GET and POST requests the website makes when you submit a form on the page.
Another option is to inspect the HTML code of the login page. Find the <form> tag and its fields, then put them together in a GET/POST request in your application to simulate the same form submission.
Remember that some pages use randomized "tokens" in their forms, and some set the tokens as cookies. In such cases, you need to request the login page itself in your application first (before sending the filled-in form). Both QWebView and QNetworkAccessManager have cookie support.
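As a language-agnostic sketch of that flow (shown here in Python with the requests library; the URL and field names are made up for illustration), the same sequence of requests applies with QNetworkAccessManager:

import re
import requests

session = requests.Session()  # a session keeps cookies between requests

# 1. Fetch the login page first, so the server can set its session cookie
#    and we can pull any randomized token out of the form.
login_page = session.get('https://example.com/login')
match = re.search(r'name="csrf_token" value="([^"]+)"', login_page.text)
token = match.group(1) if match else ''

# 2. Submit the same fields the browser's <form> would send.
session.post('https://example.com/login', data={
    'username': 'evan',
    'password': 'secret',
    'csrf_token': token,
})

# 3. Requests made with this session are now authenticated.
data_page = session.get('https://example.com/members/data')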
To sum things up, I think QWebView provides a far more elegant way to simulate user interaction with a web page. The manual way is, however, more lightweight, as you don't need WebKit and your application may be faster (because only the HTML page is loaded, without any linked resources like images, CSS, or JavaScript files).
QWebView, as the class name states, is a view, so it displays something (in this case web pages). If you don't need to display the loaded page, then you don't need a view. QNetworkAccessManager can do the work, but you need some knowledge of the HTTP protocol, and also of the target site: how it handles logins, what kind of request you have to send to log in, etc.
I am working on a website that generates traffic for partner sites. When a partner site's logo is clicked on our site, we open the partner site in a page that contains our basic header with the partner site within an iframe. Earlier we simply opened the partner site in a new window. All cool so far.
Most partner sites use Google Analytics to track the traffic that we send them, and soon after we started opening sites within an iframe, our partners reported that Google Analytics no longer tracks their data (or tracks just a fraction of it).
I have done my fair share of homework/research on the googleverse and found the known issue with Google Analytics (and cookies in general) across domains and iframes.
I am trying to resolve this issue and the only solution that has been referenced is the use of P3P headers.
First, where do the P3P headers go: in my site's pages or the partner sites' pages? Since we have many partner sites (big and small), it won't be practical if the solution is to put tags in each of these sites. I can easily have them added to the page that contains the iframe.
Among the various P3P header generators, is there a reliable one that you recommend?
Is there any way around this issue? I really need to open the sites in iframes and obviously the partner sites really need to track the traffic.
Thank you for the help.
Unfortunately, both you and the partner site need to set the headers.
Alternatives:
If you do not want the partner site to set headers, one option is to lower the security level (in IE) or grant access to 3rd party cookies (in FF) in the browser settings. Every client has to do this, so this may not be an attractive solution.
Use localStorage (an HTML5 feature; browsers that support localStorage allow both the site and the iframe's content to access what is stored there). This may not be feasible in the short term, as it requires both you and your partner site to implement saving/reading information to/from localStorage, and not every browser supports it (older IE browsers especially).
To add a basic policy header (ideally you should generate your own policy, which is straightforward; check item #2 below):
in PHP, add this line:
<?php header('P3P: CP="CAO PSA OUR"'); ?>
in ASP.NET:
HttpContext.Current.Response.AddHeader("p3p", "CP=\"CAO PSA OUR\"");
in HTML pages:
<meta http-equiv="P3P" content='CP="CAO PSA OUR"'>
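and, since this thread began with Django, the same header in a Python/Django view might look like this (a sketch; the view and template names are illustrative):

from django.shortcuts import render

def partner_frame(request):
    response = render(request, 'frame.html')
    # Attach the compact policy so the browser will accept the iframe's cookies.
    response['P3P'] = 'CP="CAO PSA OUR"'
    return response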
Regarding your other concerns:
1) P3P headers refer to the HTTP header that delivers something called a compact policy to the browser. Without such a policy in place, IE (most notably) and other browsers will block access to third-party cookies (which is what the iframe's cookies are) to protect users' privacy.
As far as Google Analytics goes, both your site and the partner site still need to configure cross-domain tracking as outlined in their documentation.
2) You can use this basic policy header (which is enough to fix the iframe's cookies):
P3P: CP="CAO PSA OUR"
or generate your own. If you're not sure what those terms mean, look them up in the P3P compact policy specification.
To generate such a policy, you can use online editors such as p3pedit.com or IBM's tool, which present a set of questions and let you supply answers. This makes it easy to quickly generate a policy. You can generate the full policy XML, the compact policy, and more.
3) You can try the two alternatives mentioned above.
Steps to add the policy to your entire site
Generate a compact policy (using one of the tools mentioned earlier) or use the basic policy
In IIS, right-click the desired page, directory, or site, and then click Properties.
On the HTTP Headers tab, click Add.
In the Custom Header Name field, type P3P.
In the Custom Header Value field, enter your Compact P3P Policy (or the basic one from above) and then click OK.
In Apache, a mod_headers line like this will do:
Header append P3P "CP=\"CAO PSA OUR\""
Hope this helps.
The Twitter API returns this value for the Twitter account 'image_url':
http://a1.twimg.com/profile_images/75075164/twitter_bird_profile_bigger.png
In my Twitter client webapp, I am considering hotlinking the HTTPS version of avatars which is hosted on Amazon S3 : https://s3.amazonaws.com/twitter_production/profile_images/75075164/twitter_bird_profile_bigger.png
Any best practices which would discourage me from doing this? Do third-party Twitter client applications typically host their own copies of avatars?
EDIT: To clarify, I need to use HTTPS for images because my webapp will use an HTTPS connection and I don't want my users to get browser security warnings about the page containing content that is not authenticated. For example, Firefox is known to complain about mixed HTTP/HTTPS content.
My problem is to figure out whether or not hotlinking the HTTPS URLs is forbidden by Twitter, since these URLs are not "public" in their API. I got them by analyzing their web client's HTML source while connected to my Twitter account over HTTPS.
Are you thinking of storing the image URL in your application or retrieving it for the user as it is required?
If it's the latter option, then I don't see an issue with hotlinking the images. If you are storing the location of the image URL in your own system, then I can see you having broken links whenever the images change (I'm sure they will change the URLs at some point in the future).
Edit
OK, now I see your dilemma. I've looked through the API docs and there doesn't seem to be much in terms of getting images served over HTTPS or getting the URL of the Amazon S3 image. You could possibly write a handler on your own server that would essentially cache and re-serve the HTTP image as HTTPS, however that's a bit of unnecessary load on your servers. Short of that, I haven't come across a better solution. GL
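For what it's worth, a minimal sketch of such a handler in Python/Django (the query parameter and view name are made up; a real version should add caching, which is the whole point):

import requests
from django.http import HttpResponse

def avatar_proxy(request):
    # Fetch the plain-HTTP avatar server-side and re-serve it over our
    # own HTTPS connection, so the browser sees no mixed content.
    url = request.GET.get('url', '')
    if not url.startswith('http://a1.twimg.com/'):
        return HttpResponse(status=400)  # only proxy Twitter avatar URLs
    upstream = requests.get(url)
    return HttpResponse(upstream.content,
                        content_type=upstream.headers.get('Content-Type', 'image/png'))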
Things seem to have been updated since then.
Please check: https://dev.twitter.com/docs/user-profile-images-and-banners
The SSL-enabled path template for a profile image is indicated in the profile_image_url_https. The table above demonstrates how to apply the same variant selection techniques to SSL-based images.
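In other words, you no longer need to scrape the HTTPS URL from their web client. A sketch of reading it from the 1.1 REST API in Python (the bearer token is a placeholder):

import requests

resp = requests.get(
    'https://api.twitter.com/1.1/users/show.json',
    params={'screen_name': 'twitter'},
    headers={'Authorization': 'Bearer <your-app-token>'},
)
avatar_url = resp.json()['profile_image_url_https']  # safe to hotlink over SSL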
Why would you want to copy the image to your own webspace? That would increase your bandwidth costs and create cache-consistency issues.
Use the URL that the API gives you.
I can see that you may want to cache the URL that the API returns for some time in order to reduce the amount of API calls.
If you are writing something like an iPhone app, it makes sense to cache the image locally (on the phone), in order to avoid web traffic altogether, but replacing one URL with another URL should not make a difference (assuming that the Twitter image server works reliably).
Why do you want HTTPS?
I was just wondering if it is possible, and if so, what the best way is to create a web page that is only accessible from a custom iPhone application? For example, if you tried to access the web page from the iPhone's built-in browser, or any other browser, it would display an error page, but when accessed from a custom-built application it would be fully functional.
One idea that has come up is to change the User-Agent string in the embedded browser inside the application to something custom. I'm not sure if this is viable though.
I hope this makes sense.
Thanks in advance.
-Ben
Any and all request headers can and will be spoofed. Authentication is the only plausible solution.
Changing the User-Agent string is a good method. I haven't tried it personally, but you should be able to use an NSMutableURLRequest to set a custom User-Agent header before the request is made.
You could also use other custom data in the HTTP request to allow/block visits. You could add a query string to the URL or include some unique POST data.
Note this isn't a real security measure, as anyone could fake any part of the HTTP request to gain access. Someone could easily read the HTTP traffic generated by your app and use that to figure out how to access the site with any browser.
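To illustrate the point, a server-side check might look like this in Python/Django (the header name and token are invented, and as said above this is obscurity, not security):

from django.http import HttpResponse, HttpResponseForbidden

APP_TOKEN = 'replace-with-a-shared-secret'  # baked into the app; can still be extracted

def app_only_page(request):
    # Reject any request that doesn't carry the custom header your app sends.
    if request.META.get('HTTP_X_APP_TOKEN') != APP_TOKEN:
        return HttpResponseForbidden('This page is only available from the app.')
    return HttpResponse('Hello from the app-only page.')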