I have created a CloudFront distribution for my static website, but the site no longer works over HTTP. It works fine with the S3 endpoint, but gives a blank page on the CloudFront endpoint and on my domain. Check the images for reference.
I faced a similar issue where https://url.com gave me a blank page. In my case, a few changes to my distribution resolved it:
I removed index.html as the default root object, since my code did not have an index.html that mapped to anything.
I also changed the allowed methods from just GET, HEAD to GET, HEAD, OPTIONS.
There can be three problem areas:
DNS: check that the domain points correctly to the CloudFront distribution.
S3: make sure s3:GetObject is allowed, either to everyone or to the CloudFront user (if you plan to restrict content to CloudFront):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::XXXX/*"
        }
    ]
}
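As a reference, here is a minimal sketch (bucket name is a placeholder) that generates such a public-read policy programmatically. Note the /* suffix on the resource ARN: s3:GetObject acts on objects, so the ARN must match object keys, not just the bucket.

```python
import json

def public_read_policy(bucket: str) -> str:
    """Build a bucket policy allowing anyone to GET objects in `bucket`."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AddPerm",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                # The /* suffix is required so the policy covers the
                # objects inside the bucket rather than the bucket itself.
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }
    return json.dumps(policy, indent=4)

print(public_read_policy("my-bucket"))
```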
There may also be some weird issue with your project files or the CloudFront cache. Try invalidating the CloudFront cache or uploading the project to a new bucket.
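Invalidation can also be scripted; a boto3 sketch along these lines could work (the distribution ID and paths are placeholders, and the batch-building helper is just for illustration):

```python
import time

def build_invalidation_batch(paths):
    """Build the InvalidationBatch payload for CloudFront.

    CallerReference must be unique per request; a timestamp is
    good enough for one-off scripts.
    """
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(time.time()),
    }

def invalidate(distribution_id, paths):
    # Imported here so the builder above stays usable without boto3 installed.
    import boto3

    client = boto3.client("cloudfront")
    return client.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch=build_invalidation_batch(paths),
    )

# Example (hypothetical distribution ID):
# invalidate("E1234567890ABC", ["/*"])
```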
I'm an AWS noob setting up a hobby site using Django and Wagtail CMS. I followed this guide to connecting an S3 bucket with django-storages. I then added Cloudfront to my bucket, and everything works as expected: I'm able to upload images from Wagtail to my S3 bucket and can see that they are served through Cloudfront.
However, the guide I followed turned off Block all public access on this bucket, which I've read is bad security practice. For that reason, I would like to set up Cloudfront so that my bucket is private and my Django media files are only accessible through Cloudfront, not S3. I tried turning Block all public access back on, and adding this bucket policy:
"Sid": "2",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-s3-bucket/*"
The problem I'm encountering is that when I have Block all public access turned on, I receive AccessDenied messages on all my files. I can still upload new images and view them as stored in my AWS console. But I get AccessDenied if I try to view them at their CloudFront or S3 URLs.
What policies do I need to fix so that I can upload to my private S3 bucket from Django, but only allow those images to be viewable through CloudFront?
Update 1 for noob confusion: Realized I don't really understand how CDNs work and am perhaps confused about caching. Hopefully my edited question is clearer.
Update 2: Here's a screenshot of my CloudFront distribution and a screenshot of origins.
Update 3 (Possible solution): I seem to have this working after making a change to my bucket policy statements. When I created the OAI, I chose Yes, update the bucket policy, which added the OAI to my-s3-bucket. That policy was appended as a second statement to the original one made following the tutorial I linked above. My entire policy looked like this:
{
    "Version": "2012-10-17",
    "Id": "Policy1620442091089",
    "Statement": [
        {
            "Sid": "Stmt1620442087873",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-s3-bucket/*"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-s3-bucket/*"
        }
    ]
}
I removed the original, top statement and left the new OAI CloudFront statement in place. My S3 bucket is now private and I no longer receive AccessDenied on CloudFront URLs.
Does anyone know if conflicting statements can have this effect? Or is it just a coincidence that the issue resolved after removing the original one?
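The change of keeping only the OAI statement can be sketched as a small filter over the policy's statements (an illustrative helper, not part of the original setup). A wildcard-principal statement grants access to everyone, which defeats the purpose of restricting reads to the OAI:

```python
def drop_public_statements(policy: dict) -> dict:
    """Return a copy of a bucket policy without wildcard-principal statements."""
    def is_public(stmt):
        principal = stmt.get("Principal")
        return principal == "*" or principal == {"AWS": "*"}

    return {
        **policy,
        "Statement": [s for s in policy["Statement"] if not is_public(s)],
    }
```

Applied to the full policy above, this keeps only the statement whose principal is the CloudFront OAI.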
I'm working on an amplify react app. From within the app, I poll an S3 bucket that I expect to be populated after a couple of minutes. For clarity the flow would be:
A user uploads a text file to S3 and the app gets the file url in the response
The app then sends a request to aws api-gateway with the S3 bucket uri, which triggers a lambda that then calls aws textract, which in turn triggers a second lambda which writes to an S3 bucket. The first lambda returns a jobId to the app.
The app will then POLL an S3 bucket in order to get the response file using the jobId and display that file to the user.
Initially when polling the bucket, I would use the results bucket url in my api call.
This works fine when running the app locally, as the bucket url uses the http protocol, returning a status code 404 until the file is created.
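The polling step might be sketched like this (the fetch function is injected so the loop itself stays testable; the names and retry parameters are illustrative):

```python
import time

def poll_for_result(fetch, max_attempts=30, delay_seconds=2.0):
    """Repeatedly call `fetch` until it returns a 200, or give up.

    `fetch` should return a (status_code, body) tuple; a 404 means the
    result file has not been written yet, so we wait and retry.
    """
    for _ in range(max_attempts):
        status, body = fetch()
        if status == 200:
            return body
        if status != 404:
            raise RuntimeError(f"unexpected status {status}")
        time.sleep(delay_seconds)
    raise TimeoutError("result never appeared")
```

In the app, `fetch` would wrap the actual request against the results bucket URL.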
I then created a CloudFront distribution for the app, and the polling request returned the error: xhr.js:178 Mixed Content: The page at '<my cloudfront url>' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint '<my endpoint>'. This request has been blocked; the content must be served over HTTPS. That makes perfect sense.
However, changing the viewer protocol policy in the app's CloudFront distribution from Redirect HTTP to HTTPS to HTTP and HTTPS didn't seem to work.
So I thought I could create a second cloudfront distribution for the S3 results bucket as that uses https.
Yet, when I now run the api call, I get a 403 status code and no longer the 404.
So I tried setting up a custom error response, mapping the 403 error to 404. I waited a good while as cloudfront can take some time, but this still doesn't seem to have made any difference.
Changing the code in my app to expect a 403 instead of a 404 works and after a while once the file has been written to the results S3 bucket I get the file and display it on the app. But I don't want to expect a 403 as this is entirely the wrong code for something that doesn't exist.
I have several questions here:
Is this the correct approach (having a cloudfront distribution for the results S3 bucket)?
Why would I get a 403 when using the cloudfront url instead of the 404 I was getting when using the S3 results url?
If point 1 is the correct approach, what do I do to fix point 2?
So the solution seems to be, not defining the special error responses, but actually changing the S3 bucket access policy, as described here.
My bucket policy was initially:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::my-results/*"
        }
    ]
}
I then updated it to:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::my-results/*"
        },
        {
            "Sid": "PublicListBucket",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": "arn:aws:s3:::my-results"
        }
    ]
}
This seemed to solve my issue. It also explains the status codes: when the requester does not have s3:ListBucket permission on the bucket, S3 returns 403 Access Denied for a nonexistent key rather than 404, so that callers cannot probe which keys exist; once ListBucket is granted, S3 returns a genuine 404 for missing objects.
I'm trying to connect my domain registered in Namecheap to my S3 buckets.
I've checked many questions related to this in here but my configuration seems to be ok.
I'm able to access the website through the static website endpoint provided by AWS but when I enter my custom domain in the browser, it takes a long while to load and finally says the page could not be loaded.
I waited over 48 hours and I tried cleaning my cache several times.
My boyfriend can access it in his phone but I cannot access it using any of my devices (I've even tried using my mobile data). I also asked my mom (she lives in another country) to try to access it and she cannot.
I replaced my domain name with example.com in the pictures below.
Here are my host records
S3 buckets:
"example.com" is my main bucket. "www.example.com" just redirects all the requests to the main bucket.
This is the bucket policy I'm using in the main bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example.com/*"
        }
    ]
}
and the one I'm using in the secondary bucket(www.example.com)
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.example.com/*"
        }
    ]
}
Static website hosting configuration of the main bucket:
Static website configuration of the secondary bucket:
I tried pinging the domain and it returns
When I try to access the IP address shown in the ping command, it just shows me the main S3 page.
Ok, so I figured out what was wrong.
My domain ends in .dev. I didn't know it, but the entire .dev TLD is on the HSTS preload list, which means it is HTTPS-only.
Most browsers ship with this preload list, so when you type http://example.com they automatically redirect to https://example.com.
My boyfriend's phone browser (Samsung Internet) apparently does not redirect automatically and so it could render my website on http://.
So, to solve my issue I had to get an SSL certificate for my domain and install it on AWS.
This meant importing the certificate using ACM, then creating a CloudFront distribution that uses that certificate and points to the S3 bucket.
It now works :)
I'm trying to point a domain to an S3 bucket set up for static website hosting. I changed the original DNS nameservers to use AWS nameservers instead and set up the following DNS records:
There is an alias A record for the domain itself as well as one for www.
When I try to go to the domain, it takes me to the company's site where the nameservers are managed (domainspricedright.com) and it says it's a parked site, or it just loads forever then fails.
When I try to go to the endpoint URL itself, it fails to ever load, which maybe means there is some permissions issue with the bucket? The endpoint is:
http://sunrisevalleydds.com.s3-website-us-east-1.amazonaws.com/
The bucket policy I have in place is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::sunrisevalleydds.com/*"
        }
    ]
}
Perhaps it's a propagation delay? I don't really know how to debug it.
Edit: The endpoint loads now. But http://sunrisevalleydds.com and http://www.sunrisevalleydds.com fail to load. Still not sure if this is a delay.
Seems to have been a propagation delay. URLs are now loading.
I am trying to set up a static website that has been configured to use index.html default documents. I have the following bucket policy set up in S3:
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Sid": "Allow Public Access to All Objects",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.risendeadmc.com/*",
            "Condition": {}
        }
    ]
}
The second step was creating a CloudFront distribution to serve this S3 content.
Issue 1: subfolders are still accessible via the CDN domain name, but index.html no longer loads as the default document; hitting a folder causes a content download instead.
I then set up an A record alias to the CloudFront distribution in Route 53, and now everything returns a 403 Forbidden error no matter what I use.
Any configuration advice to resolve this would be greatly appreciated.
What I am looking for is the ability to use my domain set up in Route53 to point to the CloudFront Distribution to provide access (with index.html default) content access.
I would like to keep the root and subfolders accessible at endpoints without a file suffix:
http://mydomain.com/root
or
http://mydomain.com/root/sub/subroot
rather than addressing index.html directly.
You will want to make sure that your index document is set in the S3 static website hosting configuration and that your CloudFront origin is the S3 website endpoint, not the REST endpoint. The website endpoint serves index.html for subfolder paths; the REST endpoint does not.
Example origin: www.example.com.s3-website-us-east-1.amazonaws.com
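Since the website endpoint follows a predictable pattern, a small helper can build it from the bucket name and region (a sketch; note that older regions use a dash before the region as shown above, while some newer regions use a dot, so check the S3 console for your region's exact form):

```python
def s3_website_endpoint(bucket: str, region: str, dot_style: bool = False) -> str:
    """Build the S3 static-website endpoint URL for a bucket.

    dot_style=True produces the newer s3-website.<region> form used
    by some regions; the default is the older dashed form.
    """
    separator = "." if dot_style else "-"
    return f"http://{bucket}.s3-website{separator}{region}.amazonaws.com"

# e.g. s3_website_endpoint("www.example.com", "us-east-1")
```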
http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html