I just hosted a website on Amazon AWS in an S3 bucket. When I navigate around the website, the URL doesn't change, even when a link redirects to a page with a different path.
I've read that it has something to do with iframes, though I'm not sure what those are.
Regardless, I'm just wondering whether it's possible with AWS S3 to make it so that the URL gets updated as I move around the website.
For testing purposes, this is the link to the website, and to go to another part of the website, just scroll down and click on the website image.
Thank you!
I've managed to figure out how to connect the S3 web hosting bucket to the Freenom free domain provider.
The S3 bucket needs to have the same name as your domain, plus the "www" prefix. In my example the domain was paolo-caponeri.ga, so the bucket needs to be named www.paolo-caponeri.ga.
Then, in the Freenom domains manager, go to the nameservers section, select "Use default nameservers", and press "Save".
Finally, go to the Freenom DNS manager and add a new CNAME record with "www" on the left and, on the right, the full S3 website endpoint shown in the bucket's Amazon S3 properties; in my case it was "www.paolo-caponeri.ga.s3-website.eu-central-1.amazonaws.com"
And that's it; after a while you should be able to reach your website without the URL being masked.
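In plain zone-file notation, the record from that last step is just (using the domain and endpoint from my example; substitute your own):

```
; CNAME added in the Freenom DNS manager: www -> S3 website endpoint
www    IN    CNAME    www.paolo-caponeri.ga.s3-website.eu-central-1.amazonaws.com.
```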
(thank you to Frederic Henri, who got me much closer to the answer!)
NB: I have no experience with Freenom, so this is more advice than a proven solution.
It seems Freenom is doing frame forwarding, whereas what you need instead is an "A"/"CNAME" referral.
Your site runs fine if you go to http://testpages.paolo.com.s3-website.eu-central-1.amazonaws.com/ and thereby bypass the Freenom redirection.
A quick search suggests this should be possible with Freenom: https://my.freenom.com/knowledgebase.php?action=displayarticle&id=4
I have set up an S3 bucket to reroute all traffic to example.com to www.example.com with https according to this very poor AWS guide. It works for example.com and http://example.com.
But when I access https://example.com it hangs for a little while and then routes to a blank page. Why is it so difficult to redirect a URL I own to another one in AWS and how do I fix this?
Edit:
I am now configuring CloudFront distributions and trying to find one decent tutorial explaining how to perform this seemingly simple task.
Did you miss this line in the link you provided:
Note: The sites must use HTTP, because the redirect can't connect to Amazon S3 over HTTPS.
You are trying to do something that is explicitly called out in the docs as not being possible.
BTW: if you want to serve static S3 websites over https, using CloudFront is often the easiest and quickest way to do it.
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-https-requests-s3/
So I finally figured this out, and I am going to describe here what worked for me. E.J.'s answer above was a helpful pointer but wasn't specific enough to make this the absolutely trivial task I would hope it to be, even for a first-timer.
Here are the exact steps required, with some prior notes.
Two notes:
You HAVE to set up an SSL certificate with AWS to redirect over https. As an organisation, AWS has not yet reached the place where automatic certificate management is... well... automatic. You have to use what I might call AWS "Extremely Manual" ACM.
You need an AWS S3 bucket (give it the name of the domain you are routing FROM).
Steps:
Follow this guide to set up an S3 bucket that will route (without https) from example.com to www.example.com (or vice versa, I guess)
Navigate to the absolute eye-sore that is Amazon CloudFront
Click everywhere until you find a button to "create distribution"
Set "Origin Domain Name" to the link for the bucket created in step 1. DO NOT use the one AWS recommends; you have to go to the bucket and copy the endpoint manually, because the one AWS fills in automatically will not work. It should look like this: example.com.s3-website-eu-west-1.amazonaws.com (the region and so on will obviously differ). Not sure why AWS recommends the wrong endpoint, but that is the least of my concerns about this process.
This guide works for the rest of the CloudFront distribution creation but is not super specific and points to this mess at one important part. The other steps are okay but when creating an SSL certificate just click that "Request or Import a Certificate with ACM" button (you will have to refresh after creating a certificate because Ajax didn't exist when the AWS console was made 200 years ago)
And the most important step, take the link or whatever it is to your CloudFront distribution (which will look like this: d328r8fyg.cloudfront.net, this one is fake because apparently you're not supposed to share them), and make the A record for example.com created in step 1 point to that CF distro instead of pointing directly to your bucket.
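For reference, the record change in that last step can also be expressed as a Route 53 change batch. The domain and distribution name are the placeholders from above, and Z2FDTNDATAQYW2 is the fixed hosted-zone ID Route 53 uses for every CloudFront alias target:

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "example.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z2FDTNDATAQYW2",
          "DNSName": "d328r8fyg.cloudfront.net.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```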
And voila, only took about 3 hours to get a URL to redirect somewhere securely. Not sure why people expect us to make it to Mars when the largest company in the world can't point one url to another and Microsoft Image Editor still can't crop to a specific pixel dimension.
Anyway. I'm glad this is over.
I have tried several tutorials on how to set up https via CloudFront, but nothing is working. I am hosting from an S3 bucket and the app works fine via the http protocol, but I need it to be https.
Does anyone have a very thorough tutorial on how to make this work?
Some tutorials explain how to go about setting up a certificate, some explain how to use CloudFront to handle its distribution and I even found a CloudFront tutorial that explains how not using a link from the CloudFront setup forces the wrong region to be created for a certificate, so I even tried that.
I have not found anything that explains exactly what needs to be done for this very common setup, so I am hoping that someone here has some helpful resources.
I think the main issue I had when setting up a CloudFront distribution for an S3 static web hosting bucket was the Origin Domain Name.
When you create a new distribution, under Origin Settings, the Origin Domain Name field works as a drop-down menu and lists your buckets. Except that picking a bucket from that list doesn't work for static webhosting. You need to specifically put the endpoint for the bucket there, for example:
mywebhostingbucket.com.s3-website-sa-east-1.amazonaws.com
And for custom domains, you must set up the CNAMEs under Distribution Settings, Alternate Domain Names (CNAMEs), and then make sure you have your custom SSL certificate in the us-east-1 region.
Then you can configure the alias record set for the CloudFront distribution.
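A sketch of the relevant parts of the distribution config (the bucket, domain, and certificate ARN are made-up placeholders). Note the origin is a custom origin pointing at the website endpoint over plain HTTP, since website endpoints don't speak HTTPS, and the ACM certificate must live in us-east-1:

```json
{
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "s3-website-origin",
        "DomainName": "mywebhostingbucket.com.s3-website-sa-east-1.amazonaws.com",
        "CustomOriginConfig": {
          "HTTPPort": 80,
          "HTTPSPort": 443,
          "OriginProtocolPolicy": "http-only"
        }
      }
    ]
  },
  "Aliases": { "Quantity": 1, "Items": ["mywebhostingbucket.com"] },
  "ViewerCertificate": {
    "ACMCertificateArn": "arn:aws:acm:us-east-1:123456789012:certificate/abc-123",
    "SSLSupportMethod": "sni-only"
  }
}
```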
Here is a complete answer for setting up a site with https.
I had everything in this document completed:
https://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
And it worked to get the site live via http, but in order to add https, I needed to do the following:
I had requested a certificate for whatever.com, and tried several suggestions after that. But there were a couple of things missing.
To route traffic for the domain (whatever.com) to the CloudFront distribution, you will need to clear the current value of the A record and fill in the distribution's domain name.
Several documents I viewed said to point the whatever.com S3 bucket to the www.whatever.com S3 bucket and use the second one to drive the site. Since CloudFront can serve multiple domain names, you can set the distribution's CNAMEs to both, but then you will need to set the A record for both to the distribution AND request an ACM certificate with both domain names (with and without the www). Also, note that if you already have a certificate, you can't edit it to add a name, which means you'll need to request a new one that covers both whatever.com and www.whatever.com.
After all of this, I still got "Access Denied" when I went to my site, so to fix this issue, I had to create a new origin in CloudFront with 'Origin Domain Name' set to the full address of the S3 bucket (without the http), and then set the Default (*) Behavior to the S3-Website-.....whatever.com bucket.
After all of this, my site was accessible via http AND https. I hope this helps anyone who experienced this challenge.
This is another problem I am having, here is the info.
My endpoint for the first bucket works, but not the 2nd bucket, and using my domain name in the browser just doesn't work at all.
Once again I have taken screenshots. This is driving me insane; I have no idea how to correct it, it just doesn't find my root or something.
Screenshots: Record Sets, Buckets (I can't post more than 2 links).
If anyone can help me, even PM me, I'll give my login details to take a look, because I'm completely stumped.
You should follow the directions on Setting Up a Static Website Using a Custom Domain.
For the first bucket, the steps are:
Create an Amazon S3 bucket with the same name as your domain (justdiditonline.com)
Add a bucket policy to make the content public, or modify desired objects to make them publicly accessible
Turn on Static Website Hosting, and you will receive a website endpoint URL like: justdiditonline.com.s3-website-us-east-1.amazonaws.com
Create a Route 53 entry for justdiditonline.com, type A, set Alias=Yes and enter the static website hosting URL
For the second bucket:
Create an Amazon S3 Bucket with the desired domain name (www.justdiditonline.com)
Turn on Static Website Hosting, but this time select Redirect all requests to another host name, and enter the first bucket's domain, justdiditonline.com.
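That second bucket's setup corresponds to a website configuration like the following (the payload shape used by S3's `put-bucket-website`; the hostname is the example domain from the steps above, redirecting the www bucket to the bucket that actually serves the site):

```json
{
  "RedirectAllRequestsTo": {
    "HostName": "justdiditonline.com",
    "Protocol": "http"
  }
}
```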
This should be easy, as there is no shortage of pages on custom domains and S3, but for some reason I can't seem to get it to work as expected.
I have a S3 Bucket full of videos. The S3 bucket is called for example "videos.foo.com". I bought the domain "videos.foo.com" and set it up in cloudflare with the cname "videos.foo.com" pointing to "videos.foo.com.s3-website-us-east-1.amazonaws.com".
I can view files in my bucket by going to their full URL, such as "videos.foo.com.s3-website-us-east-1.amazonaws.com/myvideo.mpg".
My problem is I can't view them by going to "videos.foo.com/myvideo.mpg".
I tried enabling "Redirect all requests to another host name" and entering "videos.foo.com" but that didn't work either. To note, I will 'not' be hosting a site at "videos.foo.com" just serving files.
All the files have permissions everyone: open/download.
If anyone sees the error in my ways, please let me know. In the meantime I'll keep searching and going through trial and error. Thanks!
An Nginx S3 proxy helps solve that problem; please check this answer for more details: https://stackoverflow.com/a/44749584/290338
The bucket can be named anything; the only performance rule is to have the EC2 instance and the bucket in the same region.
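A minimal sketch of such a proxy, assuming a bucket named my-videos-bucket (the bucket name and domain are placeholders, not taken from the linked answer):

```nginx
server {
    listen 80;
    server_name videos.foo.com;

    location / {
        # Forward every request to the bucket's S3 REST endpoint
        proxy_pass https://my-videos-bucket.s3.amazonaws.com/;
        # S3 routes requests by Host header, so send its own hostname
        proxy_set_header Host my-videos-bucket.s3.amazonaws.com;
        # Hide S3-internal headers that aren't useful to clients
        proxy_hide_header x-amz-id-2;
        proxy_hide_header x-amz-request-id;
    }
}
```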
You may need to configure the Route 53 record as an alias rather than a cname. In R53, edit your record set and configure it like this -
Type: A - IPv4 address
Alias: Yes
When you click in the Alias Target textbox it will drop down a list that contains your S3 bucket (if it's properly configured). It can take a while to load that list so be patient. Select your bucket and hit save.
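The resulting record, expressed as a Route 53 change batch, looks roughly like this. The domain is the example one from the question, and the HostedZoneId must be the fixed ID Amazon publishes for that region's S3 website endpoint (Z3AQBSTGFYJSTF should be the us-east-1 value, but verify it against the S3 endpoints table):

```json
{
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "videos.foo.com.",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com.",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
```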
Check out this document for more info - http://docs.aws.amazon.com/AmazonS3/latest/dev/website-hosting-custom-domain-walkthrough.html
I'm trying to host a simple static website from my AWS account with S3. I had an old dusty account with lots of strange settings from testing over the years, including S3 buckets named 'mypersonaldomain.com' and 'www.mypersonaldomain.com'. Anyway, I wanted to start fresh, so I cancelled the account to start anew.
Now when I go to create 'mypersonaldomain.com' and 'www.mypersonaldomain.com' buckets, it says the bucket names are taken, even though the account was deleted a while ago. I had assumed that Amazon would release the bucket names back to the public. However, when I deleted the account, I didn't explicitly delete the buckets beforehand.
I'm under the impression that to use S3 for static website hosting, the bucket names need to match the domain name for the DNS to work. However, if I can't create a bucket with the proper name, is there any way I can still use S3 for static hosting? It's just a simple low-traffic website that doesn't need to be on an EC2 instance.
FYI I'm using Route 53 for my DNS.
[note: 'mypersonaldomain.com' is not the actual domain name]
One way to solve your problem would be to store your website on S3 but serve it through CloudFront:
You create a bucket on S3 with whatever name you like (no need to match the bucket's name to your domain name).
You create a distribution on CloudFront.
You add an origin to your distribution pointing to your bucket.
You make the default behavior of your distribution to grab content from the origin created on the previous step.
You change your distribution's settings to make it respond to your domain name, by adding a CNAME
You create a hosted zone on Route 53 and create ALIAS entries pointing to your distribution.
That should work. By doing this, you have the added benefit of potentially improving performance for your end users.
Also note that CloudFront was added to the Free Tier a couple of months ago (http://aws.amazon.com/free).
In my opinion, it is a tremendous design flaw that S3 requires bucket names to be universally unique across all users.
I could, for example, create buckets for any well-known company name (assuming they weren't taken), and the legitimate owners of those domains would then be blocked from using them to create a static S3 website if they ever wanted to.
I've never liked this requirement that S3 bucket names be unique across all users. It's a major design flaw (which I am sure had a legitimate reason when it was first designed), but I can't imagine that, if AWS could go back to the drawing board on this, they wouldn't rethink it now.
In your case, with a deleted account, it is probably worth dropping a note to S3 tech support; they may be able to help you out quite easily.
I finally figured out a solution that worked for me. For my apex domain I am using CloudFront, so it isn't an issue that someone already has my bucket name. The remaining issue was the www redirect: I needed a server to rewrite the URL and issue a permanent redirect. Since I am using Elastic Beanstalk for my API, I leveraged Nginx to accomplish the redirect. You can read all the details in my post here: http://brettmathe.com/aws/aws-apex-redirect-using-elastic-beanstalk/