Connect Namecheap domain with Amazon S3 buckets

I'm trying to connect my domain registered in Namecheap to my S3 buckets.
I've checked many questions related to this here, but my configuration seems to be OK.
I'm able to access the website through the static website endpoint provided by AWS, but when I enter my custom domain in the browser, it takes a long while to load and finally says the page could not be loaded.
I waited over 48 hours and I tried clearing my cache several times.
My boyfriend can access it on his phone, but I cannot access it using any of my devices (I've even tried using my mobile data). I also asked my mom (she lives in another country) to try to access it, and she cannot.
I replaced my domain name with example.com in the pictures below.
Here are my host records
S3 buckets:
"example.com" is my main bucket. "www.example.com" just redirects all the requests to the main bucket.
This is the bucket policy I'm using in the main bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example.com/*"
        }
    ]
}
and this is the one I'm using in the secondary bucket (www.example.com):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.example.com/*"
        }
    ]
}
Static website hosting configuration of the main bucket:
Static website configuration of the secondary bucket:
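For reference, a sketch of the same setup with the AWS CLI, assuming the main bucket serves index.html as its index document (error.html below is only an assumed error page) and the secondary bucket redirects everything to the main one:

# Main bucket: serve index.html (error.html is just an assumed error page)
aws s3api put-bucket-website --bucket example.com --website-configuration \
    '{"IndexDocument": {"Suffix": "index.html"}, "ErrorDocument": {"Key": "error.html"}}'

# Secondary bucket: redirect every request to the main bucket
aws s3api put-bucket-website --bucket www.example.com --website-configuration \
    '{"RedirectAllRequestsTo": {"HostName": "example.com", "Protocol": "http"}}'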
I tried pinging the domain and it returns an IP address.
When I try to access the IP address shown in the ping command, it just shows me the main S3 page.

Ok, so I figured out what was wrong.
My domain ends in .dev. I didn't know it, but this TLD is HTTPS-only (it is on the HSTS preload list).
Most browsers ship with that preload list, so when you type http://example.com they automatically redirect to https://example.com.
My boyfriend's phone browser (Samsung Internet) apparently does not redirect automatically, which is why it could render my website over plain http://.
So, to solve my issue I had to get an SSL certificate for my domain and install it on AWS.
This meant importing the certificate using ACM, then creating a CloudFront distribution that uses that certificate and points to the S3 bucket.
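A rough sketch of that setup with the AWS CLI; the certificate must be in us-east-1 for CloudFront to use it, and requesting a new certificate with DNS validation is shown here as an alternative to importing one (example.com is the placeholder from above):

# Request (or import) the certificate in us-east-1, the only region CloudFront accepts
aws acm request-certificate --region us-east-1 \
    --domain-name example.com \
    --subject-alternative-names www.example.com \
    --validation-method DNS

# Add the CNAME validation records ACM returns to the Namecheap host records, then
# attach the issued certificate ARN as the viewer certificate of a CloudFront
# distribution whose origin is the S3 website endpoint, e.g.
# example.com.s3-website-us-east-1.amazonaws.com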
It now works :)

Related

Redirection from HTTP to HTTPS in CloudFront

I have created a CloudFront distribution for the static website, but my website does not work over http anymore. It works fine with the S3 endpoint, but gives a blank page on the CloudFront endpoint and on my domain. Check the images for reference.
I faced a similar issue where my https://url.com was giving me a blank page. In my case, a few changes to my distribution resolved it:
I removed index.html as the default root object, since my code did not have an index.html that pointed to anything.
I also changed my allowed methods from just GET and HEAD to GET, HEAD, OPTIONS.
There can be 3 problem areas:
DNS: check that it points correctly to the CloudFront distribution.
S3: make sure s3:GetObject is allowed, either to everyone or to the CloudFront user (if you plan to restrict content via CloudFront):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::XXXX/*"
        }
    ]
}
Some weird issue with your project files or the CloudFront cache.
Try invalidating the CloudFront cache or uploading the project to a new bucket.
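If the cache is the culprit, an invalidation can be issued from the CLI; the distribution ID below is a placeholder:

# Invalidate every path in the distribution (EDFDVBD6EXAMPLE is a placeholder ID)
aws cloudfront create-invalidation --distribution-id EDFDVBD6EXAMPLE --paths "/*"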

The website hosted on EC2 not able to access S3 image link

I have assigned a FullS3Access role to EC2. The website on EC2 is able to upload and delete S3 objects, but access to the S3 asset URLs is denied (which means I can't read the images). I have enabled the block public access settings. Some folders I want to keep confidential so that only the website can access them. I have tried to set conditions on public read, like SourceIp and Referer URL, in the bucket policy, but the policy below doesn't work and the images on the website still don't display. Does anyone have ideas for enabling read access to the S3 bucket while restricting it to the website only?
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/assets/*"
            ]
        },
        {
            "Sid": "AllowIP",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject",
                "s3:GetObjectVersion"
            ],
            "Resource": [
                "arn:aws:s3:::bucketname/private/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "ip1/32",
                        "ip2/32"
                    ]
                }
            }
        }
    ]
}
If you're trying to serve these assets in the user's browser via an application on the EC2 host, then the source would not be the EC2 server; instead it would be the user's browser.
If you want to restrict assets, there are a few options to take whilst still allowing the user to see them in the browser.
The first option would be to generate a presigned URL using the AWS SDK. This creates an ephemeral link that expires after a certain length of time; it has to be generated whenever the asset is required, which works well for sensitive information that is not accessed frequently.
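As a sketch, the same thing can be done with the AWS CLI (the bucket and key below are made up for illustration):

# Generate a link to a private object that expires after 15 minutes (900 seconds)
aws s3 presign s3://bucketname/private/photo.jpg --expires-in 900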
The second option would be to add a CloudFront distribution in front of the S3 bucket and use a signed cookie. This requires your code to generate a cookie, which is then included in all requests to the CloudFront distribution. It allows the same behaviour as a signed URL but only needs to be generated once for a user to access all content.
If all assets should only be accessed from your website but are not considered sensitive, you could also look at adding a WAF to a CloudFront distribution in front of your S3 bucket. This would be configured with a rule to only allow requests where the "Referer" header matches your domain. This can still be bypassed by someone setting that header in the request, but it would lead to fewer crawlers hitting your assets.
More information is available in the How to Prevent Hotlinking by Using AWS WAF, Amazon CloudFront, and Referer Checking documentation.

Correct way to host SPA using S3, Cloudfront and Route53

I built a React app and am trying out different hosting services. Using an S3 bucket to store files and a CloudFront distribution to serve them seems solid, so I'm trying it out.
I built my React app and added this script to package.json
aws s3 sync build/ s3://<bucket-name>
Then I created a CloudFront distribution using the bucket endpoint that was listed under the S3 properties, and its status now says deployed.
Now I have a domain name that I purchased on GoDaddy. I changed the nameservers to the 4 provided by AWS Route 53 in the hosted zone. This also created 2 record sets. I then added a 3rd record set, an ALIAS whose target is the cloudfront.net URL.
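For what it's worth, that ALIAS record roughly corresponds to the following Route 53 change batch; the hosted zone ID, domain, and distribution domain are placeholders, while Z2FDTNDATAQYW2 is the fixed hosted zone ID used for all CloudFront alias targets:

# Create/update an A-record alias pointing the apex domain at the CloudFront distribution
aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLE12345 --change-batch '{
    "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "example.com",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z2FDTNDATAQYW2",
                "DNSName": "dxxxxxxxxxxxx.cloudfront.net",
                "EvaluateTargetHealth": false
            }
        }
    }]
}'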
Now I'm seeing two different results.
On Safari, the browser throws a 403 error.
On Chrome, I get a message saying "Your connection is not private", even though I can view the valid CloudFront certificates in the URL bar.
I have bucket policy on my s3 that looks like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::<bucket-name>/*"
            ]
        }
    ]
}
Was there something I missed?

AWS static site endpoint not loading

I'm trying to point a domain to an S3 bucket set up for static website hosting. I changed the original DNS nameservers to use AWS nameservers instead and set up the following DNS records:
There is an alias A record for the domain itself as well as one for www.
When I try to go to the domain, it takes me to the company's site where the nameservers are managed (domainspricedright.com) and it says it's a parked site, or it just loads forever then fails.
When I try to go to the endpoint URL itself, it fails to ever load, which maybe means there is some permissions issue with the bucket? The endpoint is:
http://sunrisevalleydds.com.s3-website-us-east-1.amazonaws.com/
The bucket policy I have in place is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::sunrisevalleydds.com/*"
        }
    ]
}
Perhaps it's a propagation delay? I don't really know how to debug it.
Edit: The endpoint loads now. But http://sunrisevalleydds.com and http://www.sunrisevalleydds.com fail to load. Still not sure if this is a delay.
Seems to have been a propagation delay. URLs are now loading.
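For future reference, propagation and delegation can be checked from the command line with dig (assuming it is available), rather than waiting and guessing:

# Confirm the domain is delegated to the Route 53 nameservers
dig NS sunrisevalleydds.com +short

# Confirm the apex and www records resolve
dig A sunrisevalleydds.com +short
dig A www.sunrisevalleydds.com +short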

Static Site Deployment in AWS S3 with CloudFront

I am trying to set up a static website that has been configured to use index.html default documents. I have the following bucket policy set up in S3:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "Allow Public Access to All Objects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.risendeadmc.com/*",
            "Condition": {}
        }
    ]
}
The 2nd step: I created a CloudFront distribution to distribute this S3 content.
Issue 1: Subfolders are still accessible via the CDN domain name, but index.html no longer loads as the default document, and hitting a folder causes a content download.
I then set up an A record alias to the CDN distribution in Route 53, and now nothing resolves; I get a 403 Forbidden error no matter what I use.
Any configuration advice to resolve this would be greatly appreciated.
What I am looking for is the ability to use my domain, set up in Route 53, to point to the CloudFront distribution and provide access to content with index.html as the default document.
I would like root and subfolder URLs to remain accessible without a file suffix:
http://mydomain.com/root
or
http://mydomain.com/root/sub/subroot
rather than addressing index.html directly.
You will want to make sure that you are setting your default root object and that your CloudFront origin is the S3 website endpoint.
Example origin: www.example.com.s3-website-us-east-1.amazonaws.com
http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
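A quick way to see why the website endpoint matters for subfolder URLs (reusing the example origin above): the website endpoint serves a folder's index document, while the plain REST endpoint does not:

# Website endpoint: a request for a folder path returns that folder's index.html
curl -I http://www.example.com.s3-website-us-east-1.amazonaws.com/root/sub/subroot/

# REST endpoint: the same path returns an error instead of the index document
curl -I http://www.example.com.s3.amazonaws.com/root/sub/subroot/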