aws bucket policy to allow Facebook to show open graph images - amazon-web-services

I can't seem to get Facebook to have access to the resources I put in my open graph tags in my bucket policy.
This is my current policy, meant to prevent hotlinking while allowing Facebook (and ideally other social networks) to access my images and other resources:
{
  "Version": "2012-10-17",
  "Id": "idance and social only",
  "Statement": [
    {
      "Sid": "Explicitly allow only from specific referers.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::common-resources-idance-net/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://example.com/*",
            "*facebook*",
            "*twitter*",
            "*google*"
          ],
          "aws:UserAgent": [
            "*facebook*",
            "*twitter*",
            "*google*"
          ]
        }
      }
    }
  ]
}
Here's the Facebook Sharing Debugger link showing that (and presumably why) scraping the resources fails.
The error for the inaccessible image is:
could not be downloaded because it exceeded the maximum allowed size of 8Mb or your server was too slow to respond.
But the image is not large and S3 is not slow. So I imagine that this is a bucket policy issue.
I'm not sure what I'm missing but can anyone shed some light on what I might change to make this work?
Update: I removed the policy entirely, and it seems to have had no effect. Perhaps some special header needs to be sent to Facebook when it scrapes?
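One thing worth checking before blaming size or speed: Facebook's crawler fetches the og:image URL directly, so it typically sends no Referer header at all, and under IAM's StringLike a missing key never matches, meaning the Allow statement simply doesn't apply. A rough sketch of that evaluation in Python, using fnmatch as a stand-in for IAM's `*` wildcard matching (the patterns are the ones from the policy above):

```python
from fnmatch import fnmatch

def string_like(value, patterns):
    """Rough stand-in for IAM's StringLike: true if the request value
    matches any pattern. If the key is absent from the request
    (value is None), the condition is simply not satisfied."""
    return value is not None and any(fnmatch(value, p) for p in patterns)

referer_patterns = ["https://example.com/*", "*facebook*", "*twitter*", "*google*"]

# A normal browser visit embedding the image sends the page as Referer:
print(string_like("https://example.com/some-page", referer_patterns))  # True

# A scraper requesting the og:image URL directly sends no Referer,
# so the Allow never fires and S3 falls back to its implicit deny:
print(string_like(None, referer_patterns))                             # False
```

Note also that when aws:Referer and aws:UserAgent appear in the same Condition block, both keys must match for the Allow to apply, which would additionally lock out ordinary browsers whose user agents don't contain facebook, twitter, or google.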

Related

Cloudfront bucket policy for video on demand

I am creating a video-on-demand platform similar to Netflix. I want users who have purchased a subscription to be able to watch my videos (not download them). I also do not want users to be able to copy the video's source URL and access it in a new tab (this is working for now; it says access denied).
So what I have done for now is this: I have copied the official code from Amazon's documentation, which allegedly only allows the content (in my case the video) to be played on the website that I specify. This is the policy:
{
  "Version": "2008-10-17",
  "Id": "Policy1408118342443",
  "Statement": [
    {
      "Sid": "Stmt1408118336209",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://mywebsite/*",
            "https://mywebsite/*"
          ]
        }
      }
    }
  ]
}
So what happened was: I was not able to play the video on my site, and I was not able to access the video by direct URL. I have tried selecting the video file and allowing "Read object" for public access, but that only made my video directly accessible by URL, which I don't want.
My "Block public access" permissions are all currently off, because if they are On I cannot edit the bucket policy (it says "Access denied" when I hit save).
My question is: how do I protect my video content from bandwidth theft? I don't want a person to buy my membership and then send the direct video link to his friends so everyone can watch. According to Amazon this should be possible, so what seems to be the problem?
Also I am planning to use Cloudfront after I solve this issue, so hopefully that won't interfere.

AWS S3 Bucket Policy throws Access Denied Error

Following https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html, I was trying to create and host a static page on AWS S3. But I'm having trouble granting public access to my bucket using a bucket policy.
So, as soon as I paste
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::mybucket-name.com/*"]
    }
  ]
}
it throws an Access Denied error.
In IAM, I have attached the custom policy below to my user, but I'm still getting the error message.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket-name.com",
        "arn:aws:s3:::mybucket-name.com/*"
      ]
    }
  ]
}
I have also linked this policy to my user name as well as role.
While creating the bucket, my "block public access" looks like this.
Also, in the ACL I have granted public access to "List only".
Can anyone tell me what I'm missing here? I have looked into the different proposals provided here, still no luck. Can anyone give me some direction?
You only assigned yourself permissions to edit content in the bucket. For a list of rights see the S3 docs.
You at least want to add s3:PutBucketPolicy to the list of your user permissions. But s3:PutBucketAcl and s3:PutBucketWebsite might also be useful.
Personally, I would likely just give s3:* to the user setting this up, or you might end up hitting this stumbling block again.
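For illustration, a sketch of what the extra statement might look like, built as a Python dict (the Sid is just a placeholder; note that bucket-level actions like s3:PutBucketPolicy apply to the bucket ARN itself, without the /* suffix):

```python
import json

# Hypothetical extra statement granting the bucket-management actions
# mentioned above; bucket-level actions target the bucket ARN (no "/*").
manage_bucket = {
    "Sid": "ManageBucketConfig",
    "Effect": "Allow",
    "Action": [
        "s3:PutBucketPolicy",
        "s3:PutBucketAcl",
        "s3:PutBucketWebsite",
    ],
    "Resource": "arn:aws:s3:::mybucket-name.com",
}

policy = {"Version": "2012-10-17", "Statement": [manage_bucket]}
print(json.dumps(policy, indent=2))
```

This would be attached to the IAM user (or role) doing the setup, alongside the object-level statement already shown in the question.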

AWS IAM - How to disable users from making changes via the console, but allow API changes via CLI

For Amazon Web Services IAM, is there a way to create a role with policies that allow only Read in the Console, yet allow Read/Write via the API/CLI/Terraform?
The purpose is to force usage of infrastructure-as-code to avoid configuration drift.
Any insights or references to Best practices are very welcome.
It's important to be clear that there is no fool-proof way to do this. No system can ever be sure how a request was made on the client side.
That being said, there should be a way to achieve what you are looking for. You will want to use the IAM condition aws:UserAgent (docs here) to prevent users from using the browser. Here is an example:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringLike": {
          "aws:UserAgent": "console.amazonaws.com"
        }
      }
    }
  ]
}
CloudTrail logs the UserAgents for requests, so you could use that to figure out which UserAgents to block. (docs here)
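As a sanity check of how those two statements interact, here is a toy evaluation in Python. It assumes fnmatch approximates StringLike matching; the key point is that in IAM an explicit Deny always overrides an Allow:

```python
from fnmatch import fnmatch

def evaluate(user_agent):
    """Toy model of the policy above: a blanket Allow plus an explicit
    Deny whose condition matches console-originated requests. An
    explicit Deny always wins, so a console user agent is refused."""
    explicit_deny = fnmatch(user_agent, "console.amazonaws.com")
    return "Deny" if explicit_deny else "Allow"

print(evaluate("aws-cli/2.15.30 Python/3.11"))  # Allow
print(evaluate("console.amazonaws.com"))        # Deny
```

As the answer notes, this is advisory rather than airtight: a determined user can spoof any user agent from the CLI.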

Items in my Amazon S3 bucket are publicly accessible. How do I restrict access so that the Bucket link is only accessible from within my app?

I have an Amazon S3 bucket that contains items. These are accessible by anyone at the moment with a link. The link includes a UUID so the chances of someone actually accessing it are very low. Nonetheless, with GDPR around the corner, I'm anxious to get it tied down.
I'm not really sure what to google to find an answer, and having searched around I'm no closer to one. I wondered if someone else had a solution to this problem? I'd like the resources to be accessible only when clicking the link from within my app.
According to the S3 documentation, you should be able to restrict access to S3 objects to certain HTTP referrers, with an explicit deny to block access to anyone outside of your app:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotLike": {
          "aws:Referer": [
            "http://www.example.com/*",
            "http://example.com/*"
          ]
        }
      }
    }
  ]
}
The prerequisite for this setup would be to build an S3 link-wrapper service and host it at some site for your app.
This is a standard use-case for using a Pre-signed URL.
Basically, when your application generates the HTML page that contains a link, it generates a special URL that includes an expiry time. It then inserts that URL into the HTML link code (e.g. for an image, you would use <img src='[PRE-SIGNED URL]'/>).
The code to generate the pre-signed URL is quite simple (and is provided in most SDKs).
Keep your Amazon S3 bucket as private so that other people cannot access the content. Then, anyone with a valid pre-signed URL will get the content.
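To make the idea concrete without pulling in the AWS SDK, here is a stripped-down sketch of the expiring-signature mechanism a pre-signed URL relies on. This is not AWS's actual Signature Version 4 (the SDK's pre-signed URL helper handles that); it just shows the shape: a server-side secret signs the path plus an expiry time, and the link stops verifying once that time passes:

```python
import hashlib
import hmac
import time
from urllib.parse import urlencode

SECRET = b"server-side-secret"  # hypothetical; kept on the server only

def sign_url(path, ttl_seconds=300, now=None):
    """Return path?Expires=...&Signature=..., valid for ttl_seconds."""
    now = int(now if now is not None else time.time())
    expires = now + ttl_seconds
    sig = hmac.new(SECRET, f"{path}:{expires}".encode(), hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'Expires': expires, 'Signature': sig})}"

def verify(path, expires, signature, now=None):
    """True only if the signature is genuine and not yet expired."""
    now = int(now if now is not None else time.time())
    expected = hmac.new(SECRET, f"{path}:{int(expires)}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature) and now < int(expires)

url = sign_url("/videos/lesson-1.mp4", ttl_seconds=300, now=1_700_000_000)
print(url)  # /videos/lesson-1.mp4?Expires=1700000300&Signature=...
```

Because only the server knows the secret, a recipient cannot mint a fresh URL after the old one expires, which is exactly the property that stops shared direct links from working forever.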

Correct Privileges in Amazon S3 Bucket Policy for AWS PHP SDK use

I have several S3 buckets belonging to different clients. I am using the AWS SDK for PHP in my application to upload photos to the S3 bucket (the AWS SDK for Laravel 4, to be exact, but I don't think the issue is with this specific implementation).
The problem is that unless I give the AWS user my server is using the FullS3Access policy, it will not upload photos to the bucket; it just says Access Denied! I first tried giving full access to only the bucket in question, then realized I should add the ability to list all buckets, because that is probably what the SDK does to confirm the credentials, but still no luck.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::clientbucket"
      ]
    }
  ]
}
It is a big security concern for me that this application has access to all S3 buckets to work.
Jeremy is right, it's permissions-related and not specific to the SDK, so far as I can see here. You should certainly be able to scope your IAM policy down to just what you need here -- we limit access to buckets by varying degrees often, and it's just an issue of getting the policy right.
You may want to try using the AWS Policy Simulator from within your account. (That link will take you to an overview, the simulator itself is here.) The policy generator is also helpful a lot of the time.
As for the specific policy above: I think you can drop the second statement, and the last one (the one scoped to your specific bucket) may benefit from some wildcard actions, since that may be what's causing the issue:
"Action": [
"s3:Delete*",
"s3:Get*",
"s3:List*",
"s3:Put*"
]
That basically gives super powers to this account, but only for the one bucket.
I would also recommend creating an IAM server role if you're using a dedicated instance for this application/client. That will make things even easier in the future.
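One more detail worth double-checking in the policy from the question: object-level actions such as s3:PutObject, s3:GetObject, and s3:DeleteObject operate on object ARNs, so their Resource usually needs the bucket/* form; the bare bucket ARN only covers bucket-level actions like s3:ListBucket. A sketch of the corrected statement as a Python dict (bucket name taken from the question):

```python
import json

# Object actions target "arn:aws:s3:::clientbucket/*"; listing the bare
# bucket ARN as well keeps bucket-level actions (e.g. s3:ListBucket)
# in scope if they are added to this statement later.
statement = {
    "Effect": "Allow",
    "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
    "Resource": [
        "arn:aws:s3:::clientbucket",
        "arn:aws:s3:::clientbucket/*",
    ],
}
print(json.dumps(statement, indent=2))
```

With only "arn:aws:s3:::clientbucket" as the Resource, an upload to a key inside the bucket would be denied even though the action itself is allowed, which matches the Access Denied symptom described.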