AWS S3 web hosting: hiding source files

How to make files inaccessible on AWS S3
First, I want to apologize if this is a duplicate of any question; I have not found any information regarding my question.
I want to have a website hosted on AWS S3. The website uses some libraries that I don't want people to have access to. If I use a standard policy:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {"AWS": "*"},
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::www.my-website.com/*"]
    }
  ]
}
then the bucket is public and everyone can get to those files. If I make all but those source files public, the website won't work. I tried replacing the "Resource": ["arn:aws:s3:::www.my-website.com/*"] with "NotResource": ["arn:aws:s3:::www.my-website.com/library/*"]. That made those files private, but the website stopped working again.
I currently don't have my own registered domain.
Question
So my question is: how do I make the website run without making the source files public? Is it even possible?

You can add another statement that restricts access to source files. Explicit deny supersedes any permission you grant.
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {"AWS": "*"},
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::www.my-website.com/*"]
    },
    {
      "Sid": "Restrict source files",
      "Effect": "Deny",
      "Principal": {"AWS": "*"},
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::www.my-website.com/src/*"]
    }
  ]
}
I'm assuming your source files are in a src directory in the web root.
I use a different strategy for dealing with this: when I deploy static sites to S3, I deploy only the compiled, distributable files. The source files never make it to the bucket.
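For what it's worth, a minimal sketch of that deploy step with boto3 (the bucket name is taken from the question, the build directory is an assumption; aws s3 sync build/ s3://www.my-website.com does the same job from the CLI):

import mimetypes
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "www.my-website.com"  # bucket name from the question
BUILD_DIR = Path("build")      # assumed: only compiled output lives here, never src/

for path in BUILD_DIR.rglob("*"):
    if path.is_file():
        key = path.relative_to(BUILD_DIR).as_posix()
        content_type = mimetypes.guess_type(str(path))[0] or "binary/octet-stream"
        # Upload compiled assets only; the source files never reach the bucket.
        s3.upload_file(str(path), BUCKET, key, ExtraArgs={"ContentType": content_type})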

Related

React app on S3 bucket not working and Access Denied when trying to sync S3 on the command line

I followed an AWS tutorial to create an S3 bucket, place a React app into it, and ran into a ton of problems. Here are two:
I was able to manually upload the files that are in the build folder after running npm run build. However, the app doesn't display correctly when I go to the URL. It gives me some 404 errors in the console.
The second is that I would like to be able to use the sync command to push from the command line. The problem is that I get an access denied error.
aws s3 sync build/ s3://storygraf-react
fatal error: An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
I have confirmed that I am logged in under the right credentials. I came across some other tutorials that claimed that you need some additional policies within the S3 bucket to allow for the sync from the command line. The problem is that entering those policies results in a very unhelpful error.
Here is what I have tried, along with basically every variation on this. Most tutorials are incorrect and leave off the principal or something.
{
  "Version": "2012-10-17",
  "Id": "Policy1626945244527",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::storygraf-react/*"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::storygraf-react/*"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::storygraf-react/*"
    }
  ]
}
I'm pretty frustrated that a basic tutorial leaves me in this position with no real help or explanation. This seems like something that should have been a slam dunk.
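No answer was recorded for this one, but a hint at the likely causes, as a sketch rather than a definitive fix: s3:ListBucket is a bucket-level action, so it must be granted on arn:aws:s3:::storygraf-react itself, not on storygraf-react/*; that mismatch is probably the "unhelpful error" the console gives, and a missing ListBucket grant for the calling IAM user is the usual reason sync fails on ListObjectsV2. Rather than opening the bucket to Principal "*", the permission belongs on the IAM user doing the sync. A boto3 sketch, with made-up user and policy names:

import json

import boto3

iam = boto3.client("iam")

# ListBucket targets the bucket ARN; object actions target bucket/* keys.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::storygraf-react",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::storygraf-react/*",
        },
    ],
}

iam.put_user_policy(
    UserName="deploy-user",             # hypothetical
    PolicyName="storygraf-react-sync",  # hypothetical
    PolicyDocument=json.dumps(policy),
)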

Items in my Amazon S3 bucket are publicly accessible. How do I restrict access so that the Bucket link is only accessible from within my app?

I have an Amazon S3 bucket that contains items. These are accessible by anyone at the moment with a link. The link includes a UUID so the chances of someone actually accessing it are very low. Nonetheless, with GDPR around the corner, I'm anxious to get it tied down.
I'm not really sure what to google to find an answer, and having searched around I'm no closer to one. I wondered if someone else had a solution to this problem? I'd like the resources to be accessible only when clicking the link from within my app.
According to the S3 documentation, you should be able to restrict access to S3 objects to certain HTTP referrers, with an explicit deny to block access to anyone outside of your app:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests referred by www.example.com and example.com.",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
      }
    },
    {
      "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotLike": {"aws:Referer": ["http://www.example.com/*", "http://example.com/*"]}
      }
    }
  ]
}
The prerequisite for using this setup would be to build an S3 link wrapper service and host it at some site for your app.
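To sanity-check a referer policy like the one above, a quick script (the object URL below is a placeholder; note also that aws:Referer is supplied by the client, so this guards against casual hot-linking rather than a determined user):

import requests  # third-party: pip install requests

URL = "https://examplebucket.s3.amazonaws.com/some-object"  # placeholder

# No Referer header: the Deny statement should produce 403 Forbidden.
print(requests.get(URL).status_code)

# Matching Referer: the Allow statement should produce 200 OK.
print(requests.get(URL, headers={"Referer": "http://www.example.com/"}).status_code)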
This is a standard use-case for using a Pre-signed URL.
Basically, when your application generates the HTML page that contains a link, it generates a special URL that includes an expiry time. It then inserts that URL in the HTML link code (e.g. for an image, you would use <img src='[PRE-SIGNED URL]'/>).
The code to generate the pre-signed URL is quite simple (and is provided in most SDKs).
Keep your Amazon S3 bucket private so that other people cannot access the content. Then, anyone with a valid pre-signed URL will get the content.
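With boto3, for instance, generating one is a single call (bucket and key below are placeholders):

import boto3

s3 = boto3.client("s3")

# The returned URL embeds signed credentials and an expiry time.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "examplebucket", "Key": "uploads/some-uuid.jpg"},
    ExpiresIn=3600,  # seconds until the link stops working
)
print(url)  # e.g. drop this into <img src='...'/> in the generated HTML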

AWS Serverless Application Publish via Visual Studio

Using .NET Core, Visual Studio 2017, and the AWS Toolkit for Visual Studio 2017, I created a basic web API. The API works as designed.
However, when it comes to publishing/deploying it, the first time works perfectly when the stack doesn't exist; it creates everything it's supposed to. When I make a change and need to re-deploy/publish, it comes back with the following error.
Error creating CloudFormation change set: Stack [TestStack] already exists and cannot be created again with the changeSet [Lambda-Tools-636366731897711782].
Just above the error message is this:
Found existing stack: False
I'm wondering if there is something not quite right with how it detects whether the stack exists.
I'm also wondering if I'm missing something, or if this is actually by design, because to republish I have to log into the AWS Console, go to the CloudFormation section, and delete the existing stack.
After a bit of digging and general trial and error, I believe this is actually to do with the permissions of the AWS user performing the publish.
I changed an inline policy to
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "cloudformation:*"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Here, cloudformation:* replaces what used to be several lines of individual permissions.
This now successfully publishes over an existing stack; however, Visual Studio doesn't like it and crashes (although the update does go through to AWS).
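A side note on the Found existing stack: False line: a deploy tool can only answer that question if it is allowed to call cloudformation:DescribeStacks, which would explain why broadening the permissions fixed the re-deploy. A rough boto3 sketch of the same existence check (stack name taken from the question):

import boto3
from botocore.exceptions import ClientError

cf = boto3.client("cloudformation")

def stack_exists(name: str) -> bool:
    # Without cloudformation:DescribeStacks this call fails, and a tool
    # may silently treat the failure as "stack not found".
    try:
        cf.describe_stacks(StackName=name)
        return True
    except ClientError:
        return False

# A change set must be CREATE for a new stack and UPDATE for an existing one.
change_set_type = "UPDATE" if stack_exists("TestStack") else "CREATE"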
AWS's Serverless Application Model is still very new, and for lack of any documentation about which IAM permissions one needs to deploy an app with its CLI, I've worked out this policy that seems to work and grants only the least permissions needed for the task.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "lambda:UpdateFunctionCode",
        "s3:PutObject",
        "cloudformation:DescribeStackEvents",
        "cloudformation:UpdateStack",
        "cloudformation:DescribeStackResource",
        "cloudformation:CreateChangeSet",
        "cloudformation:DescribeChangeSet",
        "cloudformation:ExecuteChangeSet",
        "cloudformation:GetTemplateSummary",
        "cloudformation:ListChangeSets",
        "cloudformation:DescribeStacks"
      ],
      "Resource": [
        "arn:aws:lambda:*:123456789012:function:*-SAM-*",
        "arn:aws:cloudformation:*:123456789012:stack/<STACK NAME OR GLOB>/*",
        "arn:aws:cloudformation:<AWS_REGION>:aws:transform/Serverless-2016-10-31",
        "arn:aws:s3:::aws-sam-cli-managed-default-samclisourcebucket-*/*"
      ]
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucketMultipartUploads"
      ],
      "Resource": "arn:aws:s3:::aws-sam-cli-managed-default-samclisourcebucket-*"
    },
    {
      "Sid": "VisualEditor2",
      "Effect": "Allow",
      "Action": "cloudformation:ValidateTemplate",
      "Resource": "*"
    }
  ]
}
Note:
Replace <STACK NAME OR GLOB> with something that best suits your needs, like:
- * if you don't care which CloudFormation stack this grants access to
- *-SAM-* if you name your SAM CloudFormation apps with some consistency
Replace <AWS_REGION> with the region you're operating in.
The arn:aws:s3:::aws-sam-cli-managed-default-samclisourcebucket-* pattern matches the standard bucket naming that the SAM CLI uses when it creates a bucket for deploying CloudFormation templates or change sets. You could alter this to be explicitly the name of the bucket SAM created for you.

Correct Privileges in Amazon S3 Bucket Policy for AWS PHP SDK use

I have several S3 buckets belonging to different clients. I am using the AWS SDK for PHP in my application to upload photos to the S3 bucket. I am using the AWS SDK for Laravel 4, to be exact, but I don't think the issue is with this specific implementation.
The problem is that unless I give the AWS user my server is using full S3 access, it will not upload photos to the bucket; it says Access Denied! I first tried giving full access only to the bucket in question, then I realized I should add the ability to list all buckets, because that is probably what the SDK does to confirm the credentials, but still no luck.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetBucketLocation",
        "s3:ListAllMyBuckets"
      ],
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::clientbucket"
      ]
    }
  ]
}
It is a big security concern for me that this application needs access to all S3 buckets in order to work.
Jeremy is right, it's permissions-related and not specific to the SDK, so far as I can see here. You should certainly be able to scope your IAM policy down to just what you need here -- we limit access to buckets by varying degrees often, and it's just an issue of getting the policy right.
You may want to try using the AWS Policy Simulator from within your account. The policy generator is also helpful a lot of the time.
As for the specific policy above, I think you can drop the second statement, and the last one (the one scoped to your specific bucket) may benefit from some wildcard actions, since that may be what's causing the issue:
"Action": [
"s3:Delete*",
"s3:Get*",
"s3:List*",
"s3:Put*"
]
That basically gives super powers to this account, but only for the one bucket.
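One more thing worth checking before attaching it: object-level actions such as s3:PutObject match object ARNs, so the bucket-scoped statement most likely needs arn:aws:s3:::clientbucket/* alongside the bare bucket ARN; the bare ARN alone would explain the Access Denied. The policy simulator mentioned above can also confirm this from code; a boto3 sketch (the question itself uses PHP, this is just for illustration, and the object key is made up):

import json

import boto3

iam = boto3.client("iam")

# The merged, bucket-scoped statement; note the /* resource for object actions.
policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:Delete*", "s3:Get*", "s3:List*", "s3:Put*"],
        "Resource": ["arn:aws:s3:::clientbucket", "arn:aws:s3:::clientbucket/*"],
    }],
})

# Dry-run the policy before attaching it to the user.
result = iam.simulate_custom_policy(
    PolicyInputList=[policy],
    ActionNames=["s3:PutObject"],
    ResourceArns=["arn:aws:s3:::clientbucket/photos/test.jpg"],  # hypothetical key
)
print(result["EvaluationResults"][0]["EvalDecision"])  # expect "allowed"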
I would also recommend creating an IAM role for the server (an EC2 instance role) if you're using a dedicated instance for this application/client. That will make things even easier in the future.

AWS: Cannot delete folders with current IAM policy

I am using the policy pasted below. This policy does almost everything I intend it to do. The user can get to the specified folder (/full/test/next/) within the specified bucket (BUCKETNAME). They can upload files, delete files, create new folders...etc.
However, they cannot delete folders created within this directory (i.e. cannot delete /full/test/next/examplefolder). I've been searching around and doing some modification but I have not found any answers. Any help would be much appreciated.
I apologize for any lack of clarity or incorrect terminology. I am new to AWS.
Two additional notes:
1. I can delete these folders from the main administrative account.
2. As the user, I do NOT have any rights within these folders (even if the user created the folders).
Pasted Code -
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUserToSeeBucketListInTheConsole",
      "Action": ["s3:ListAllMyBuckets", "s3:GetBucketLocation"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::*"]
    },
    {
      "Sid": "AllowRootAndHomeListingOfProperFolder",
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::BUCKETNAME"],
      "Condition": {
        "StringEquals": {
          "s3:prefix": ["", "full/", "full/test/", "full/test/next/", "full/test/next/*"],
          "s3:delimiter": ["/"]
        }
      }
    },
    {
      "Sid": "AllowAllS3ActionsInUserFolder",
      "Effect": "Allow",
      "Action": ["s3:*"],
      "Resource": ["arn:aws:s3:::BUCKETNAME/full/test/next/*"]
    }
  ]
}
OK, I can confirm that this is an issue with the browser. I had the exact same problem, and after a lot of head-banging I figured out that it was a trivial issue: I changed my browser and it worked. Also, I was able to delete the folder using the AWS CLI as well as the AWS Ruby SDK.
So there is nothing wrong with your policy.
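For anyone who prefers the non-browser route mentioned above, a short boto3 sketch (bucket and prefix taken from the question; an S3 "folder" is just every key that shares a prefix):

import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("BUCKETNAME")  # placeholder bucket name from the question

# Deleting a "folder" means deleting every object under its prefix,
# including the zero-byte placeholder object the console creates.
bucket.objects.filter(Prefix="full/test/next/examplefolder/").delete()

The AWS CLI equivalent is aws s3 rm s3://BUCKETNAME/full/test/next/examplefolder --recursive.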