Static Site Deployment in AWS S3 with CloudFront

I am trying to set up a static website that has been configured to use index.html default documents. I have the following bucket policy set up in S3:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "Allow Public Access to All Objects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.risendeadmc.com/*",
      "Condition": {}
    }
  ]
}
The second step: I created a CloudFront distribution to serve this S3 content.
Issue 1: Subfolders are still accessible via the CDN domain name, but index.html no longer loads as the default document, and hitting a folder causes a content download.
I then set up an A record alias to the CloudFront distribution in Route 53, and now nothing resolves; I get a 403 Forbidden error no matter what I use.
Any configuration advice to resolve this would be greatly appreciated.
What I am looking for is the ability to use my domain, set up in Route 53, to point to the CloudFront distribution and serve content with index.html as the default document.
I would like to keep root and subfolder access points as endpoints without a file suffix:
http://mydomain.com/root
or
http://mydomain.com/root/sub/subroot
rather than addressing index.html explicitly.

You will want to make sure that you set the Default Root Object on your CloudFront distribution and that your origin is the S3 website endpoint, not the S3 REST endpoint. The Default Root Object only applies at the root of the distribution; it is the website endpoint that serves index.html for subfolder requests, which is why pointing CloudFront at the REST endpoint causes folder URLs to trigger a download instead.
Example origin: www.example.com.s3-website-us-east-1.amazonaws.com
http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
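For illustration, a rough sketch of that setup with the AWS SDK for PHP (the bucket name, region, and most settings here are placeholder assumptions, not values from the question):

<?php
require 'vendor/autoload.php';

use Aws\CloudFront\CloudFrontClient;

$cloudFront = new CloudFrontClient([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// The S3 *website* endpoint, not the REST endpoint.
$websiteEndpoint = 'www.example.com.s3-website-us-east-1.amazonaws.com';

$cloudFront->createDistribution([
    'DistributionConfig' => [
        'CallerReference'   => uniqid('static-site-'),
        'Comment'           => 'Static site served from the S3 website endpoint',
        'Enabled'           => true,
        'DefaultRootObject' => 'index.html', // applies to the distribution root only
        'Origins' => [
            'Quantity' => 1,
            'Items'    => [[
                'Id'         => 's3-website-origin',
                'DomainName' => $websiteEndpoint,
                // A website endpoint is a plain HTTP server, so it is
                // configured as a custom origin rather than an S3 origin.
                'CustomOriginConfig' => [
                    'HTTPPort'             => 80,
                    'HTTPSPort'            => 443,
                    'OriginProtocolPolicy' => 'http-only',
                ],
            ]],
        ],
        'DefaultCacheBehavior' => [
            'TargetOriginId'       => 's3-website-origin',
            'ViewerProtocolPolicy' => 'redirect-to-https',
            'ForwardedValues'      => [
                'QueryString' => false,
                'Cookies'     => ['Forward' => 'none'],
            ],
            'MinTTL' => 0,
        ],
    ],
]);

The same distinction applies in the console: type the bucket's website endpoint into the origin domain field instead of picking the bucket from the dropdown.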

Related

The website hosted on EC2 not able to access S3 image link

I have assigned a role of Fulls3Access to the EC2 instance. The website on EC2 is able to upload and delete S3 objects, but access to the S3 asset URLs is denied (which means I can't read the images). I have enabled the block public access settings. Some of the folders I want to keep confidential, with only the website able to access them. I have tried setting conditions on public read, like source IP and referer URL, in the bucket policy, but the policy below doesn't work; the images on the website still don't display. Does anyone have ideas on how to enable read access to the S3 bucket while also restricting it to the website only?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/assets/*"
      ]
    },
    {
      "Sid": "AllowIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion"
      ],
      "Resource": [
        "arn:aws:s3:::bucketname/private/*"
      ],
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": [
            "ip1/32",
            "ip2/32"
          ]
        }
      }
    }
  ]
}
If you're trying to serve these assets in the user's browser via an application on the EC2 host, then the source of the request would not be the EC2 server; it would be the user's browser.
If you want to restrict the assets while still allowing the user to see them in the browser, there are a few options.
The first option is to generate a presigned URL using the AWS SDK. This creates an ephemeral link that expires after a certain length of time. It requires generating a URL each time the asset is needed, which works well for sensitive information that is not accessed frequently.
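A minimal sketch with the AWS SDK for PHP v3 (the bucket, key, and expiry are placeholder assumptions):

<?php
require 'vendor/autoload.php';

use Aws\S3\S3Client;

$s3 = new S3Client([
    'region'  => 'us-west-2',
    'version' => 'latest',
]);

// Build a GetObject command for the private asset.
$command = $s3->getCommand('GetObject', [
    'Bucket' => 'bucketname',
    'Key'    => 'private/photo.jpg',
]);

// Sign it; the resulting URL works for 10 minutes, then expires.
$presigned = $s3->createPresignedRequest($command, '+10 minutes');
echo (string) $presigned->getUri();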
The second option is to add a CloudFront distribution in front of the S3 bucket and use a signed cookie. This requires your code to generate a cookie, which is then included in all requests to the CloudFront distribution. It allows the same behaviour as a signed URL, but only needs to be generated once for a user to access all content.
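A sketch of the cookie generation with the AWS SDK for PHP, assuming a CloudFront key pair already exists and the distribution has cdn.example.com as an alternate domain name (all names, paths, and IDs here are placeholders):

<?php
require 'vendor/autoload.php';

use Aws\CloudFront\CloudFrontClient;

$cloudFront = new CloudFrontClient([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

// Custom policy so one set of cookies covers everything under /private/.
$policy = json_encode([
    'Statement' => [[
        'Resource'  => 'https://cdn.example.com/private/*',
        'Condition' => [
            'DateLessThan' => ['AWS:EpochTime' => time() + 3600],
        ],
    ]],
]);

$cookies = $cloudFront->getSignedCookie([
    'policy'      => $policy,
    'private_key' => '/path/to/cloudfront-private-key.pem',
    'key_pair_id' => 'APKAEXAMPLEID',
]);

// Send the three CloudFront-* cookies to the browser; the cookie domain
// must be a parent of the distribution's domain or they won't be attached.
foreach ($cookies as $name => $value) {
    setcookie($name, $value, 0, '/', '.example.com', true, true);
}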
If all assets should only be accessed from your website but are not considered sensitive, you could also look at adding a WAF to a CloudFront distribution in front of your S3 bucket, configured with a rule that only allows requests where the "Referer" header matches your domain. This can still be bypassed by someone setting that header in the request, but it would lead to fewer crawlers hitting your assets.
More information is available in the How to Prevent Hotlinking by Using AWS WAF, Amazon CloudFront, and Referer Checking documentation.

Connect Namecheap domain with amazon S3 buckets

I'm trying to connect my domain registered in Namecheap to my S3 buckets.
I've checked many related questions on here, but my configuration seems to be OK.
I'm able to access the website through the static website endpoint provided by AWS but when I enter my custom domain in the browser, it takes a long while to load and finally says the page could not be loaded.
I waited over 48 hours and I tried cleaning my cache several times.
My boyfriend can access it in his phone but I cannot access it using any of my devices (I've even tried using my mobile data). I also asked my mom (she lives in another country) to try to access it and she cannot.
I replaced my domain name with example.com in the pictures below.
Here are my host records
S3 buckets:
"example.com" is my main bucket. "www.example.com" just redirects all the requests to the main bucket.
This is the bucket policy I'm using in the main bucket:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example.com/*"
    }
  ]
}
and this is the one I'm using in the secondary bucket (www.example.com):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.example.com/*"
    }
  ]
}
Static website hosting configuration of the main bucket:
Static website configuration of the secondary bucket:
I tried pinging the domain and it resolves to an IP address. When I try to access that IP address directly, it just shows me the main S3 page.
Ok, so I figured out what was wrong.
My domain ends in .dev. I didn't know it, but this TLD is on the HSTS preload list, so it can only be served over HTTPS. Most browsers ship with that preload list, and when you type http://example.com they automatically redirect to https://example.com. My boyfriend's phone browser (Samsung Internet) apparently does not redirect automatically, and so it could render my website over http://.
So, to solve my issue I had to get an SSL certificate for my domain and install it on AWS.
This meant importing the certificate using ACM, then creating a CloudFront distribution that uses that certificate and points to the S3 bucket.
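A rough sketch of the certificate step with the AWS SDK for PHP (the file paths are placeholders; the key detail is that certificates used by CloudFront must live in us-east-1):

<?php
require 'vendor/autoload.php';

use Aws\Acm\AcmClient;

// CloudFront only accepts certificates from the us-east-1 ACM region.
$acm = new AcmClient([
    'region'  => 'us-east-1',
    'version' => 'latest',
]);

$result = $acm->importCertificate([
    'Certificate'      => file_get_contents('/path/to/certificate.pem'),
    'PrivateKey'       => file_get_contents('/path/to/private-key.pem'),
    'CertificateChain' => file_get_contents('/path/to/chain.pem'),
]);

// Reference this ARN as the viewer certificate on the CloudFront distribution.
echo $result['CertificateArn'];

Note that ACM can also issue a free public certificate directly (with DNS validation), which avoids handling the key material yourself.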
It now works :)

Correct way to host SPA using S3, Cloudfront and Route53

I built a React app and am trying out different hosting services. Using an S3 bucket to store the files and serving them through a CloudFront distribution seems solid, so I'm trying it out.
I built my React app and added this script to package.json
aws s3 sync build/ s3://<bucket-name>
Then I created a CloudFront distribution using the bucket endpoint listed under the S3 properties, and its status now says Deployed.
Now I have a domain name that I purchased on GoDaddy. I changed the nameservers to the four provided by AWS Route 53 in the hosted zone. This also created two record sets. I then added a third record set, an alias whose target is the cloudfront.net URL.
Now I'm seeing two different results.
Safari throws a 403 error.
In Chrome I get a message saying "Your connection is not private", even though I can view the valid CloudFront certificates in the URL bar.
I have a bucket policy on my S3 bucket that looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": [
        "arn:aws:s3:::<bucket-name>/*"
      ]
    }
  ]
}
Was there something I missed?

AWS Bucket Policy Invalid Resource

I'm having some trouble with AWS bucket policies. I followed the instructions, but it doesn't let me set the policy, so I can't get my domain to work with the buckets.
Here is a picture. The tutorial told me to replace example.com with my bucket name.
I've been trying to set up my buckets with my domain for over a month now and I just can't seem to get it going. I already purchased my domain, and it's the exact domain name I want, so I don't want to be forced to go to Bluehost with a new domain.
It is quite simple:
Your bucket is called www.justdiditonline.com
Your bucket policy is attempting to create a rule for a bucket named justdiditonline.com
The bucket names do not match
Solution: Use a policy with the correct bucket name:
{
  "Id": "Policy1",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1",
      "Action": [
        "s3:GetObject"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::www.justdiditonline.com/*",
      "Principal": "*"
    }
  ]
}
I notice you have another bucket called justdiditonline.com. Your existing policy would work on that bucket.
The Setting Up a Static Website Using a Custom Domain instructions detail what to do, and they work fine with an external DNS service using a CNAME to point to the static website URL. The main steps are:
Create a bucket with the domain name www.justdiditonline.com
Add a bucket policy to make content public, or make sure the individual objects you want to serve are publicly readable
Activate Static Website Hosting on the bucket, which will return a URL like: www.justdiditonline.com.s3-website-us-east-1.amazonaws.com
Create a DNS entry for www.justdiditonline.com with a CNAME pointing to the Static Website Hosting URL
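For illustration, the resulting record in zone-file form might look like the following (the region is an assumption; use the endpoint your bucket actually reports):

www.justdiditonline.com.  300  IN  CNAME  www.justdiditonline.com.s3-website-us-east-1.amazonaws.com.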

Amazon S3 Bucket policy, allow only one domain to access files

I have an S3 bucket with a file in it. I only want a certain domain to be able to access the file. I have tried a few policies on the bucket, but none of them work. This one is from the AWS documentation:
{
  "Version": "2012-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "Allow get requests originated from www.example.com and example.com",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "http://www.phpfiddle.org/*",
            "http://phpfiddle.org/*"
          ]
        }
      }
    }
  ]
}
To test the file, I have hosted code on phpfiddle.org with the snippet below. But I am not able to access the file, either directly from the browser or through the phpfiddle code.
<?php
$myfile = file_get_contents("https://s3-us-west-2.amazonaws.com/my-bucket-name/some-file.txt");
echo $myfile;
?>
Here are the permissions for the file; the bucket itself also has the same permissions, plus the above policy.
This is just an example link, not an actually working one.
The Restricting Access to a Specific HTTP Referrer bucket policy only allows your file to be accessed from a page on your domain (the HTTP referer is your domain).
Suppose you have a website with domain name (www.example.com or example.com) with links to photos and videos stored in your S3 bucket, examplebucket.
You can't access the file directly from your browser (by typing the file URL into the address bar). You need to reference it via a link, image, or video tag from a page on your domain.
If you want to file_get_contents from S3, you need to create a new policy that allows your server IP (see the example below). Change the IP address to your server's IP.
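A sketch of such a statement (the bucket name and address are placeholders; aws:SourceIp is matched against the caller's public IP):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowServerIP",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*",
      "Condition": {
        "IpAddress": {
          "aws:SourceIp": "203.0.113.10/32"
        }
      }
    }
  ]
}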
Another solution is to use the AWS SDK for PHP to download the file to your server. You can also generate a pre-signed URL to allow your customer to download from S3 directly for a limited time only.