Restrict S3 permissions to just website - amazon-web-services

I have people uploading video content and I'd like to restrict that content to ONLY be streamed from my site. Since the video URLs in the video tag are easily accessible through the HTML source, I want to stop people from copying the direct S3 URL and putting it in another tab.
I was looking over the docs here: http://docs.aws.amazon.com/IAM/latest/UserGuide/AccessPolicyLanguage_ElementDescriptions.html#Condition
But it wasn't immediately obvious to me.
Thanks for your help!

You need to make this bucket private and use signed URLs to give access only to the users on your website. Signed URLs have a short lifetime (with the required policy baked in) when you generate them. This will prevent misuse even if somebody steals the URLs (or sends you faked referrer headers, etc.).
You can create these URLs manually (difficult to manage) or programmatically (some coding work required). In the second case, once your website user contacts your server, generate and serve the auto-expiring URL. Then use this URL on your website.
Overview of Signed URLs - Amazon CloudFront.
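For the programmatic case, here's a minimal sketch with Python and boto3; the bucket and object names are hypothetical placeholders. Once the user has contacted your server (and you've verified their session), hand back an auto-expiring URL:

```python
import boto3

# Assumes AWS credentials are configured in the environment.
s3 = boto3.client("s3")

# Generate a GET URL for an object in a private bucket; it expires
# after 5 minutes. Bucket and key names are hypothetical.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-video-bucket", "Key": "videos/clip.mp4"},
    ExpiresIn=300,
)

# Embed `url` in the <video> tag served to the logged-in user.
print(url)
```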

Related

Any way we can share a specific item publicly from a private S3 bucket?

The question is pretty vague, but here's the entire problem statement. I am using Django REST APIs, and I'm generating invoices for my clients. Using wkhtmltopdf, I'm able to generate a PDF file which gets automatically backed up to S3. Now, we need to retrieve the said invoice once our client clicks on a link.
We're using pre-signed URLs right now, which last for 12 hours, right? Once that link expires, the entire backend fails.
I mean, even if we go for permanent pre-signed links, would there not be a security issue?
I could really use some guidance on this.
Now, we need to retrieve the said invoice once our client clicks on a link.
We're using pre-signed URLs right now [...]
Only generate the pre-signed URL for a given S3 URI when the authenticated client clicks on the link. You can then give it a very short expiry.
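A minimal sketch of that flow, assuming Django with boto3 (the bucket name and view are hypothetical):

```python
import boto3
from django.contrib.auth.decorators import login_required
from django.shortcuts import redirect

INVOICE_BUCKET = "my-invoice-backups"  # hypothetical bucket name

@login_required
def download_invoice(request, invoice_key):
    """Vend a short-lived pre-signed URL only when the link is clicked."""
    s3 = boto3.client("s3")
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": INVOICE_BUCKET, "Key": invoice_key},
        ExpiresIn=60,  # one minute is plenty for an immediate redirect
    )
    return redirect(url)
```

Because a fresh URL is generated per click, expiry stops being a failure mode: the next click simply gets a new URL.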

Restrict all access to S3 static website except from our Elastic Beanstalk website

We have an Elastic Beanstalk instance that serves our PHP website (example.com). We just added an S3 bucket which serves a different static website (static.com).
The EB website (example.com) requires specific credentials which are supplied to the end-user for logging in. We would like the S3 website (static.com) to be viewable only by logged-in users of the EB website (example.com).
Use Cases:
A user is logged into “example.com”. Within the site there would be links to the files on “static.com”. Clicking on these links would take the user to the files on “static.com” and they could navigate around that site.
Someone has a URL to a page on “static.com”. Maybe someone shared that URL with them (that is expected behavior). When they attempt to load that URL in a browser, they are redirected to the login screen of “example.com” to login.
What is the best, and easiest, way to accomplish this? Cookies, CloudFront, Lambda functions? "Signed URLs" sounded like a possible avenue, but the URLs cannot change over time. This is a requirement in case users do share the URLs (which is expected and OK). Example: Hey Johnny, check out the information at "static.com/docs/widget_1.html"
If you have private content, CloudFront signed URLs are the right choice to generate unique URLs for authenticated users of your application for a limited time. Each time a user loads a page, you generate new short-lived URLs.
If you'd like to enable someone to share links, one option is to provide users with a share option in your application that generates a signed URL with a longer, fixed TTL (e.g., 3 days) for sharing. Or enable the user to select how long the shareable link should remain valid, with a maximum allowed period of x hours/days. If the link expires, they can generate a new one in the application.
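A sketch of such a share option, assuming Python with botocore's CloudFrontSigner and the cryptography package; the key pair ID, key file, and domain are hypothetical placeholders:

```python
import datetime

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "APKAEXAMPLE"          # hypothetical CloudFront key pair ID
PRIVATE_KEY_PATH = "cf_private.pem"  # hypothetical private key file

def rsa_signer(message):
    # CloudFront signed URLs use an RSA SHA-1 signature.
    with open(PRIVATE_KEY_PATH, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

def make_share_url(path, days=3):
    """Sign a CloudFront URL with a longer, fixed TTL for sharing."""
    signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
    expires = datetime.datetime.utcnow() + datetime.timedelta(days=days)
    return signer.generate_presigned_url(
        f"https://static.example.com/{path}", date_less_than=expires
    )

share_url = make_share_url("docs/widget_1.html", days=3)
```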

How can I grant access only if resource is accessed through my domain?

I have a bunch of videos and all of them are uploaded on Wistia. On Wistia, I have set up access for my domain, so they will play only when the videos are fetched from my domain.
If someone uses View Source and copies the video URL and pastes it into a separate browser window, they get an "access denied" message.
I'm thinking about moving my videos to Google Cloud Storage. So, my questions are:
Does Google Cloud provide a similar domain restriction feature?
How can I set this up? For now, I've created a temporary bucket, uploaded a video, and granted it public access. Then I copied the public link of the MP4 file and added it to my website, and it obviously plays, but then any paid member can use View Source, copy the MP4 link, and upload it to other streaming services for everyone to see.
EDIT
Is there a way to do this programmatically? My website is in PHP, so something along these lines: keep the bucket as restricted access, and then through PHP pass some key and retrieve the video file. Not sure if something like this is possible.
Thanks
I do not believe that there is an access control mechanism in Google Cloud Storage equivalent to the one you are using in Wistia.
There are several methods to restrict object access (see https://cloud.google.com/storage/docs/access-control) in GCS, but none of them are based upon where the request came from. The only one that kind of addresses your issue is to use Signed URLs. Basically, a user would go to your site, but instead of giving them the "real" URL of the object they are going to be using, your application retrieves a special URL that is time-limited. You can set the length of time it is valid for.
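For illustration, a minimal sketch with the Python client library (the bucket and object names are hypothetical); the PHP client library offers an equivalent signed-URL method:

```python
import datetime

from google.cloud import storage

# Assumes a service account with access to the (hypothetical) bucket.
client = storage.Client.from_service_account_json("service-account.json")
blob = client.bucket("my-video-bucket").blob("videos/lesson1.mp4")

# Time-limited URL for a private object; valid for 15 minutes.
url = blob.generate_signed_url(
    version="v4",
    expiration=datetime.timedelta(minutes=15),
    method="GET",
)
```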
But if what you are worried about is people copying your video, presumably they could still see the URL someplace and copy the data from there if they did it immediately, so I don't think that really solves your problem.
Sorry I can't be more helpful.

Serve private, user-uploaded media from Google Cloud Storage

I'm evaluating GCP for my new project; however, I'm still trying to figure out how to implement the following feature and what kind of costs it will have.
TL;DR
What's the best strategy to serve user-uploaded media from GCP while giving users full control on who will be able to access them?
Feature Description
As a user, I want to upload some kind of media (e.g., images, videos, etc...) in a private and secure way.
The media must be visible to me and to a specific subgroup of users to whom I've granted access.
Nobody else must be able to access the media, even if they obtain the URL.
The media content would then be displayed on the website.
Dilemma
I would like to use Cloud Storage to store all the media, however, I'm struggling to find a suitable solution for the authorization part.
As far as I can tell, features related to "Access Control" are mostly tailored at Project and Organisational level.
The closest feature so far is Signed URLs, but this doesn't satisfy the requirement that the media stay inaccessible even to someone who has the URL; then again, since a signed URL expires soon after it is issued, perhaps it could be a good compromise.
Another problem with this approach is that the media cannot be cached at the browser level, which could save quite some bandwidth in the long run...
Expensive Solution?
One solution that came to my mind is to serve the media through a GCE instance, by putting an app there that validates the user, probably through a JWT, and then streams the content back with the appropriate cache headers.
This should satisfy all requirements, but I'm afraid of egress costs skyrocketing :(
Thank you to whoever will help!
Signed URLs are the solution you want.
Create a service account that represents your application. When a user of your application wants to upload an object, vend them a signed URL for performing the upload. The new object will be readable only by your service account (and other members of your project).
When a user wants to view an object, perform whatever checks you like and then vend them a signed URL for reading the object. Set a short expiration time if you are worried about the URLs being shared.
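A rough sketch of both halves, assuming the Python Cloud Storage client; the bucket name is hypothetical and the authorization check is left to your application:

```python
import datetime

from google.cloud import storage

client = storage.Client.from_service_account_json("service-account.json")
bucket = client.bucket("user-media-bucket")  # hypothetical bucket name

def vend_upload_url(object_name):
    """Signed PUT URL: the user uploads straight to GCS, and the new
    object remains readable only by the service account afterwards."""
    blob = bucket.blob(object_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=10),
        method="PUT",
        content_type="application/octet-stream",
    )

def vend_read_url(object_name):
    """Signed GET URL, vended only after your own access check passes."""
    # ... perform whatever checks you like here ...
    blob = bucket.blob(object_name)
    return blob.generate_signed_url(
        version="v4",
        expiration=datetime.timedelta(minutes=5),
        method="GET",
    )
```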
I would not advise the GCE-based approach unless you get some additional benefit out of it. I don't see how it adds any additional security to serve the data directly instead of via a signed URL.

Hotlinking Twitter avatar images?

The Twitter API returns this value for the Twitter account 'image_url':
http://a1.twimg.com/profile_images/75075164/twitter_bird_profile_bigger.png
In my Twitter client webapp, I am considering hotlinking the HTTPS version of avatars which is hosted on Amazon S3 : https://s3.amazonaws.com/twitter_production/profile_images/75075164/twitter_bird_profile_bigger.png
Are there any best practices which would discourage me from doing this? Do 3rd-party Twitter client applications typically host their own copies of avatars?
EDIT: To clarify, I need to use HTTPS for images because my webapp will use an HTTPS connection and I don't want my users to get security warnings from their browser about the page containing some content which is not authenticated. For example, Firefox is known to complain about mixed HTTP/HTTPS content.
My problem is to figure out whether or not hotlinking the https URLs is forbidden by Twitter, since these URLs are not "public" from their API. I got them by analyzing their web client HTML source when connected to my Twitter account in HTTPS.
Are you thinking of storing the image URL in your application, or retrieving it for the user as it is required?
If it's the latter option, then I don't see an issue with hot-linking the images. If you are storing the image URL in your own system, then I can see you having broken links whenever the images change (I'm sure they will change the URLs at some point in the future).
Edit
OK, now I see your dilemma. I've looked through the API docs and there doesn't seem to be much in terms of getting images served over HTTPS or getting the URL of the Amazon S3 image. You could possibly write a handler on your own server that would essentially cache and re-serve the HTTP image as HTTPS, as sketched below, though that's a bit of unnecessary load on your servers. Short of that, I haven't come across a better solution. Good luck!
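Something like this minimal sketch, assuming Flask and requests (the route and host are illustrative only):

```python
import requests
from flask import Flask, Response, abort

app = Flask(__name__)
AVATAR_HOST = "a1.twimg.com"  # only proxy Twitter avatar images

@app.route("/avatar/<path:image_path>")
def avatar(image_path):
    """Fetch the HTTP avatar and re-serve it from our HTTPS origin."""
    resp = requests.get(f"http://{AVATAR_HOST}/{image_path}", timeout=5)
    if resp.status_code != 200:
        abort(404)
    return Response(
        resp.content,
        content_type=resp.headers.get("Content-Type", "image/png"),
        headers={"Cache-Control": "public, max-age=3600"},
    )
```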
Things seem to have been updated since then.
Please check: https://dev.twitter.com/docs/user-profile-images-and-banners
The SSL-enabled path template for a profile image is indicated in the profile_image_url_https. The table above demonstrates how to apply the same variant selection techniques to SSL-based images.
Why would you want to copy the image to your own webspace? This will increase your bandwidth costs, and you'll get cache consistency issues.
Use the URL that the API gives you.
I can see that you may want to cache the URL that the API returns for some time in order to reduce the amount of API calls.
If you are writing something like an iPhone app, it makes sense to cache the image locally (on the phone), in order to avoid web traffic altogether, but replacing one URL with another URL should not make a difference (assuming that the Twitter image server works reliably).
Why do you want HTTPS?