I'm trying to point a domain to an S3 bucket set up for static website hosting. I changed the domain's original nameservers to the AWS (Route 53) nameservers and set up the following DNS records:
There is an alias A record for the domain itself as well as one for www.
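For reference, this is roughly how the two alias records can be created with boto3 (just a sketch: the hosted zone ID placeholder is made up, and I believe Z3AQBSTGFYJSTF is the fixed hosted zone ID for S3 website endpoints in us-east-1):

import boto3

route53 = boto3.client("route53")

# Placeholder for the hosted zone Route 53 created for the domain.
HOSTED_ZONE_ID = "ZXXXXXXXXXXXXX"
# Hosted zone ID for S3 website endpoints in us-east-1.
S3_WEBSITE_ZONE_US_EAST_1 = "Z3AQBSTGFYJSTF"

def upsert_alias(name):
    # Create (or update) an alias A record pointing at the S3 website endpoint.
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": S3_WEBSITE_ZONE_US_EAST_1,
                        "DNSName": "s3-website-us-east-1.amazonaws.com",
                        "EvaluateTargetHealth": False,
                    },
                },
            }]
        },
    )

for record in ("sunrisevalleydds.com", "www.sunrisevalleydds.com"):
    upsert_alias(record)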
When I try to go to the domain, it either takes me to the parked-site page of the company where the nameservers were originally managed (domainspricedright.com), or it just loads forever and then fails.
When I try to go to the endpoint URL itself, it never loads either, which may mean there is a permissions issue with the bucket? The endpoint is:
http://sunrisevalleydds.com.s3-website-us-east-1.amazonaws.com/
The bucket policy I have in place is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::sunrisevalleydds.com/*"
        }
    ]
}
Perhaps it's a propagation delay? I don't really know how to debug it.
Edit: The endpoint loads now. But http://sunrisevalleydds.com and http://www.sunrisevalleydds.com fail to load. Still not sure if this is a delay.
Seems to have been a propagation delay. URLs are now loading.
I'm an AWS noob setting up a hobby site using Django and Wagtail CMS. I followed this guide to connect an S3 bucket with django-storages. I then added CloudFront to my bucket, and everything works as expected: I'm able to upload images from Wagtail to my S3 bucket and can see that they are served through CloudFront.
However, the guide I followed turned off Block all public access on this bucket, which I've read is bad security practice. For that reason, I would like to set up CloudFront so that my bucket is private and my Django media files are only accessible through CloudFront, not S3. I tried turning Block all public access back on and adding this statement to my bucket policy:
"Sid": "2",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-s3-bucket/*"
The problem I'm encountering is that when I have Block all public access turned on, I receive AccessDenied on all my files. I can still upload new images and see them listed in the AWS console, but I get AccessDenied if I try to view them at their CloudFront or S3 URLs.
What policies do I need to fix so that I can upload to my private S3 bucket from Django, but only allow those images to be viewable through CloudFront?
Update 1 for noob confusion: Realized I don't really understand how CDNs work and am perhaps confused about caching. Hopefully my edited question is clearer.
Update 2: Here's a screenshot of my CloudFront distribution and a screenshot of origins.
Update 3 (Possible solution): I seem to have this working after making a change to my bucket policy statements. When I created the OAI, I chose Yes, update the bucket policy, which added an OAI statement to the policy on my-s3-bucket. That statement was appended after the original one made following the tutorial I linked above. My entire policy looked like this:
{
    "Version": "2012-10-17",
    "Id": "Policy1620442091089",
    "Statement": [
        {
            "Sid": "Stmt1620442087873",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-s3-bucket/*"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-s3-bucket/*"
        }
    ]
}
I removed the original, top statement and left the new OAI CloudFront statement in place. My S3 bucket is now private and I no longer receive AccessDenied on CloudFront URLs.
Does anyone know if conflicting statements can have this effect? Or is it just a coincidence that the issue resolved after removing the original one?
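For completeness, on the Django side the media files are pointed at CloudFront rather than S3 with django-storages settings roughly like these (a sketch; the CloudFront domain and bucket name are placeholders, and the setting names come from django-storages):

# settings.py (sketch) -- serve uploaded media through CloudFront, not S3.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

AWS_STORAGE_BUCKET_NAME = "my-s3-bucket"   # placeholder
AWS_S3_REGION_NAME = "us-east-1"           # placeholder

# Build media URLs against the CloudFront distribution instead of the
# S3 endpoint, and don't sign them or set per-object ACLs.
AWS_S3_CUSTOM_DOMAIN = "dxxxxxxxxxxxx.cloudfront.net"  # placeholder
AWS_QUERYSTRING_AUTH = False
AWS_DEFAULT_ACL = None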
I have created a CloudFront distribution for my static website, but the website does not work over HTTP anymore. It works fine with the S3 endpoint, but gives a blank page on the CloudFront endpoint and on my own domain. Check the images for reference.
I faced a similar issue where https://url.com was giving me a blank page. In my case, a few changes to my distribution resolved the issue:
I removed index.html as the default root object, since my code did not have an index.html that pointed to anything.
I also changed the allowed methods from just GET and HEAD to GET, HEAD, OPTIONS.
There can be 3 problem areas:
DNS: check that the domain points correctly to the CloudFront distribution.
S3: make sure s3:GetObject is allowed either to everyone or to the CloudFront origin access identity (if you plan to restrict content to CloudFront), for example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::XXXX/*"
        }
    ]
}
Some odd issue with your project files or the CloudFront cache.
Try invalidating the CloudFront cache or uploading the project to a new bucket.
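An invalidation can also be issued from code; a minimal boto3 sketch (the distribution ID is a placeholder):

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Invalidate everything in the distribution's cache.
# "EXXXXXXXXXXXXX" is a placeholder distribution ID.
cloudfront.create_invalidation(
    DistributionId="EXXXXXXXXXXXXX",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # must be unique per request
    },
)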
I'm trying to connect my domain registered in Namecheap to my S3 buckets.
I've checked many related questions here, but my configuration seems to be OK.
I'm able to access the website through the static website endpoint provided by AWS, but when I enter my custom domain in the browser, it takes a long while to load and finally says the page could not be loaded.
I waited over 48 hours and I tried cleaning my cache several times.
My boyfriend can access it on his phone, but I cannot access it from any of my devices (I've even tried using my mobile data). I also asked my mom (she lives in another country) to try to access it and she cannot.
I replaced my domain name with example.com in the pictures below.
Here are my host records
S3 buckets:
"example.com" is my main bucket. "www.example.com" just redirects all the requests to the main bucket.
This is the bucket policy I'm using in the main bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example.com/*"
        }
    ]
}
and this is the one I'm using in the secondary bucket (www.example.com):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::www.example.com/*"
        }
    ]
}
Static website hosting configuration of the main bucket:
Static website configuration of the secondary bucket:
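In boto3 terms, the two website configurations amount to roughly the following (a sketch of what the screenshots show; the error document name is an assumption):

import boto3

s3 = boto3.client("s3")

# Main bucket: serve the site, with index.html as the index document.
s3.put_bucket_website(
    Bucket="example.com",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},  # assumed error page name
    },
)

# Secondary bucket: redirect every request to the main bucket.
s3.put_bucket_website(
    Bucket="www.example.com",
    WebsiteConfiguration={
        "RedirectAllRequestsTo": {"HostName": "example.com", "Protocol": "http"},
    },
)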
I tried pinging the domain and it returns
When I try to access the IP address shown in the ping command, it just shows me the main S3 page.
Ok, so I figured out what was wrong.
My domain ends in .dev. I didn't know it, but this TLD is HTTPS-only.
Most browsers know the list of TLDs that are served only over HTTPS (via the HSTS preload list), and when you type http://example.com they automatically redirect to https://example.com.
My boyfriend's phone browser (Samsung Internet) apparently does not redirect automatically, so it could render my website over http://.
So, to solve my issue, I had to get an SSL certificate for my domain and set it up on AWS.
This meant importing the certificate into ACM, then creating a CloudFront distribution that uses that certificate and points to the S3 bucket.
It now works :)
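For reference, the ACM import step looks roughly like this in boto3 (a sketch; the file names are placeholders, and the certificate has to live in us-east-1 for CloudFront to use it):

import boto3

# Certificates used by CloudFront must be in us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

# File names are placeholders for the certificate files you obtained for the domain.
with open("certificate.pem", "rb") as cert, \
     open("private-key.pem", "rb") as key, \
     open("chain.pem", "rb") as chain:
    response = acm.import_certificate(
        Certificate=cert.read(),
        PrivateKey=key.read(),
        CertificateChain=chain.read(),
    )

print(response["CertificateArn"])  # reference this ARN in the CloudFront distribution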
I've been converting an existing application to an EC2, S3 and RDS model within AWS. So far it's going well, but I've got a problem I can't seem to find any info on.
My web application accesses the S3 bucket for images and documents, which are stored by client code:
Data/ClientCode1/Images
Data/ClientCode2/Images
Data/ClientABC/Images -- etc
The EC2 instance hosting the web application works within a similar structure, e.g. www.programname.com/ClientCode1/Index.aspx, and it has working security to prevent cross-client access.
Now, when www.programname.com/ClientCode1/Index.aspx goes to S3 for images, I need to make sure it can only access the ClientCode1 folder in S3; the goal is to prevent client A from seeing client B's images/documents if a technically inclined user tried.
Is there perhaps a way to get the page referrer, or is there a better approach to this issue?
There is no way to use the URL or referrer to control access to Amazon S3, because that information is presented to your application (not S3).
If all your users are accessing the data in Amazon S3 via the same application, it will be the job of your application to enforce any desired security. This is because the application will be using a single set of credentials to access AWS services, so those credentials will need access to all data that the application might request.
To clarify: Amazon S3 has no idea which page a user is viewing. Only your application knows this. Therefore, your application will need to enforce the security.
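For illustration only (this is one way an application can enforce it, not the only way): the application checks that the logged-in user belongs to the client, then hands the browser a short-lived pre-signed URL instead of a public S3 link. A boto3 sketch, where the bucket name and the user check are assumptions:

import boto3

s3 = boto3.client("s3")
BUCKET = "clientdata"  # placeholder bucket name

def image_url_for(user, client_code, image_key):
    # The application, not S3, decides whether this user may see this client's data.
    if client_code not in user.allowed_client_codes:  # assumed user model
        raise PermissionError("user may not access this client")

    # Short-lived URL the browser can use directly; expires after 5 minutes.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": BUCKET, "Key": f"Data/{client_code}/Images/{image_key}"},
        ExpiresIn=300,
    )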
I found the solution; it seems to work well:
{
    "Version": "2012-10-17",
    "Id": "http referer policy example",
    "Statement": [
        {
            "Sid": "Allow get requests referred by www.example.com and example.com.",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
            "Condition": {
                "StringLike": {"aws:Referer": ["http://www.example.com/Client1/*", "http://example.com/Client1/*"]}
            }
        },
        {
            "Sid": "Explicit deny to ensure requests are allowed only from specific referer.",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::clientdata/Clients/Client1/*",
            "Condition": {
                "StringNotLike": {"aws:Referer": ["http://www.example.com/Client1/*", "http://example.com/Client1/*"]}
            }
        }
    ]
}
This checks the referer to see if the request comes from a given path. In my case each client sits in their own path, and the bucket follows the same rule; in the example above, only a user coming from Client1 can access the bucket data for Client1. If I log in to Client2 and try to force an image from the Client1 folder, I get access denied.
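For reference, a policy like this can also be applied programmatically; a small boto3 sketch (the file name is a placeholder for the JSON above):

import json
import boto3

s3 = boto3.client("s3")

# The policy document is the referer-based policy shown above.
with open("client1-referer-policy.json") as f:
    policy = json.load(f)

s3.put_bucket_policy(Bucket="clientdata", Policy=json.dumps(policy))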
I'm trying to secure access to an internal static website.
Everyone in the company is using a VPN to access our Amazon VPC so I would like to limit access to that site if you're using the VPN.
So I found this AWS documentation on using a VPC endpoint, which seems to be what I'm looking for.
I created a VPC endpoint with the following policy:
{
    "Statement": [
        {
            "Action": "*",
            "Effect": "Allow",
            "Resource": "*",
            "Principal": "*"
        }
    ]
}
On my S3 bucket, I verified that I could access index.html both from the regular Web and from the VPN.
Then I added the following bucket policy to restrict access to only the VPC endpoint:
{
    "Id": "Policy1435893687892",
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1435893641285",
            "Action": "s3:*",
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::mybucket/*",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::123456789:user/op"
                ]
            }
        },
        {
            "Sid": "Access-to-specific-VPCE-only",
            "Action": "s3:*",
            "Effect": "Deny",
            "Resource": ["arn:aws:s3:::mybucket/*"],
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpce": "vpce-1234567"
                }
            },
            "Principal": "*"
        }
    ]
}
Now the regular Web gets a 403, but I also get a 403 when I'm behind the company VPN.
Am I missing something?
@Michael - sqlbot is right.
It seems what you are trying to do is restrict access to the S3 bucket where you store that static web content to requests coming from a particular AWS VPC, using a VPC endpoint.
VPC endpoints establish a private association between your VPC and an AWS service, so they only cover requests coming from INSIDE the VPC.
You can't get what you want with VPC and S3 ACL configuration, but you can get it with ACL and some VPN configuration.
Let's assume that connecting to your company's VPN does not route all traffic, including Internet traffic between the VPN clients and S3, through the VPN connection, because that's how sane VPN configurations usually work. If that's not the case, omit the following step:
Add a static route to your S3 bucket in your VPN server configuration, so every client tries to reach the bucket through the VPN instead of establishing a direct Internet connection to it. For example, on OpenVPN, edit server.conf and add the following line:
push "route yourS3bucketPublicIP 255.255.255.255"
After that, you will see that when a client connects to the VPN it gets an extra entry added to its routing table, corresponding to the static route that tells it to reach the bucket through the VPN.
Use the "IpAddress" condition in your S3 bucket policy to set the configuration you want. It should look something like this:
{
    "Version": "2012-10-17",
    "Id": "S3PolicyId1",
    "Statement": [
        {
            "Sid": "IPAllow",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::examplebucket/*",
            "Condition": {
                "IpAddress": {"aws:SourceIp": "54.240.143.0/24"},
                "NotIpAddress": {"aws:SourceIp": "54.240.143.188/32"}
            }
        }
    ]
}
You use the IpAddress field to allow an IP or range of IPs in CIDR notation, and the NotIpAddress field the same way to exclude an IP or range of IPs (you can omit that one). The IP (or range of IPs) specified in IpAddress should be the public address(es) of the gateway interface(s) that route your company's VPN Internet traffic (the IP address(es) S3 sees when somebody on your VPN tries to connect to it).
More info:
http://www.bucketexplorer.com/documentation/amazon-s3--access-control-list-acl-overview.html
http://aws.amazon.com/articles/5050/
http://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html#example-bucket-policies-use-case-3
https://openvpn.net/index.php/open-source/documentation/howto.html
Actually, @Michael - sqlbot was right, but only until May 15, 2015. What you are doing is correct. You found the documentation (again, correctly) that allows you to set up an S3 bucket within a VPC (probably with no access from the outside world), the same way you set up your EC2 machines. Therefore,
On my S3 bucket, I verified that I could access index.html both from the regular Web and from the VPN.
is a problem. If you hadn't made a mistake, you shouldn't be able to access the bucket from the regular Web. Everything that you did afterwards is irrelevant, because you didn't create the S3 bucket inside your VPN-connected VPC.
You don't give many details about what you did in your very first step; the easiest fix is probably to delete this bucket and start from the beginning. With the need to set up route tables and whatnot, it is easy to make a mistake. This is a simpler set of instructions, but it doesn't cover as much ground as the document you followed.
But any links that predate this capability (that is, any links before May 2015) are irrelevant.
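For anyone setting this up from scratch, the gateway endpoint itself can be created along these lines (a boto3 sketch; the VPC ID, route table ID and region are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway endpoint for S3, attached to the route table(s) your VPC traffic uses.
# The VPC and route table IDs are placeholders.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-1234567",
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-1234567"],
)

print(response["VpcEndpoint"]["VpcEndpointId"])  # use this ID in the aws:sourceVpce condition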