I want to host a static website and will access it via Direct Connect with a custom domain + HTTPS. I think CloudFront + S3 is not suitable in this case, as traffic will go through the internet (correct me if I'm wrong). What/where should I host my website? Thanks in advance.
I am not sure you need Direct Connect for your use case. Direct Connect connects an on-premises data center to AWS over a private connection. It takes a lot of work to set up: a telecom provider installs a router at an AWS Direct Connect location and cross-connects it to AWS's equipment, and so on. This is a big project and costs money. I highly doubt you need it to host a static website.
You can host your static website in S3, buy a domain name in Route 53, and map your S3 bucket to that domain name so the site is accessible on the internet (as a public site). There are many tutorials for setting this up; the S3 side is sketched below.
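For reference, the S3 part is only a couple of API calls. A minimal boto3 sketch, assuming the hypothetical domain www.example.com (for Route 53 aliasing, the bucket name must match the domain):

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# Hypothetical bucket name; it must match the site's domain if you want
# to point a Route 53 alias record at the S3 website endpoint.
bucket = "www.example.com"
s3.create_bucket(Bucket=bucket)

# Turn on static website hosting for the bucket.
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)
# You still need to allow public reads (Block Public Access settings plus
# a bucket policy) before the site is reachable.
```

After that, an alias record in the Route 53 hosted zone points the domain at the bucket's website endpoint.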
I have a domain hosted through Google. I'm using Google Workspace for a lot of my day-to-day operations (e.g. Drive, Gmail, etc.). I'm using AWS as the infrastructure and business logic for my application. I'm having trouble making my site support TLS: if you visit it now, Chrome shows a security warning, and I can't seem to make HTTPS requests work.
I have my domain pointing to AWS via custom name servers.
My Route 53 hosted zone has the NS records listed under it.
I've requested a certificate from AWS to make it work.
My problem is I don't know how to tell Google about it. How do you let Google know about the certificate so I can make my site HTTPS?
I believe approaching Google is not going to solve your issue; in this case, Google is only responsible for hosting your domain. The DNS setup only routes requests to your site; it does not make your site more secure.
I also see that you are serving your site over HTTP rather than HTTPS, and that is why it shows as not secure.
Is your site running on a web server, or is it hosted on S3 as a static website?
Note: you can't enable HTTPS on an S3 static website endpoint directly.
The workaround for the above problem is as follows:
Create a Route 53 A (alias) record pointing to an ALB (configured with an ACM certificate) that distributes traffic to EC2 instances running your web application.
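For what it's worth, the Route 53 part of that workaround is a single API call. A minimal boto3 sketch, with hypothetical zone ID, domain, and ALB values (the ALB's DNS name and canonical hosted zone ID are shown in the load balancer console):

```python
import boto3

route53 = boto3.client("route53")

# All values below are hypothetical placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # your hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                # An alias record resolves to the ALB's current addresses.
                "AliasTarget": {
                    "HostedZoneId": "Z2ALBEXAMPLE",  # the ALB's own zone ID
                    "DNSName": "my-alb-123456789.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```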
If anyone is still looking: I wanted to keep it cheap with a simple S3 static website. If you want to keep the S3 part, make a CloudFront distribution (if you haven't already).
In the CloudFront distribution's main settings, use a certificate you created in Certificate Manager (ACM).
Then head over to Route 53 (even if the domain is hosted via Google) and point the A record at the CloudFront distribution. NOTE: make sure the "Alternate Domain Names" field on the distribution is filled in, or else CloudFront won't recognize requests for your domain.
Let it update for a minute or two and the site will show as HTTPS.
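If you prefer to script the "Alternate Domain Names" step, here is a minimal boto3 sketch, assuming a hypothetical distribution ID and domain (the distribution must also carry a matching ACM certificate):

```python
import boto3

cf = boto3.client("cloudfront")

dist_id = "E2EXAMPLE"  # hypothetical distribution ID

# Fetch the current config; the returned ETag is required for the update.
resp = cf.get_distribution_config(Id=dist_id)
config = resp["DistributionConfig"]

# Fill in the "Alternate Domain Names (CNAMEs)" field.
config["Aliases"] = {"Quantity": 1, "Items": ["www.example.com"]}

cf.update_distribution(Id=dist_id, DistributionConfig=config, IfMatch=resp["ETag"])
```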
I am hosting a static website with AWS S3 and CloudFront, but I ran into the problem that I can't receive emails on the registrar's email server (strato.de).
The registrar where I reserved my domain name and email service is currently Strato.de.
To host my static website, I created an S3 bucket on AWS and a CloudFront distribution to get TLS/SSL and HTTPS.
I configured my registrar to point to the AWS nameservers from the Route 53 configuration; this works perfectly and my website is publicly available.
The problem I am facing is that my email is also directed to the AWS configuration, because the nameservers handle all DNS traffic for the domain instead of only my website.
To solve this problem I thought about creating an A record at my registrar pointing to the IP of the CloudFront distribution; unfortunately, CloudFront doesn't use static IP addresses. Secondly, if I used the S3 bucket directly instead of CloudFront, there would be no HTTPS.
I am a beginner in this field and just want to receive emails sent to the domain name I reserved at the registrar, while at the same time hosting my website via CloudFront.
I appreciate any help.
Unfortunately, it's not possible. I had a call with Strato and they said you have to use their DNS in order to benefit from their mail service.
My advice would be to use Google Workspace or Zoho, who have more experience in the field; you will also find a lot of articles explaining how to solve this common issue.
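For anyone going the Zoho route, switching mail providers while keeping Route 53 as DNS comes down to adding the provider's MX records to the hosted zone. A minimal boto3 sketch with a hypothetical zone ID; the MX hosts below are Zoho's commonly published values, so verify them against Zoho's current documentation:

```python
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1EXAMPLE",  # hypothetical hosted zone ID
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "MX",
                "TTL": 300,
                # Priority followed by mail host; check Zoho's docs for
                # the values that apply to your account.
                "ResourceRecords": [
                    {"Value": "10 mx.zoho.com"},
                    {"Value": "20 mx2.zoho.com"},
                    {"Value": "50 mx3.zoho.com"},
                ],
            },
        }]
    },
)
```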
A "rival" application has admitted that they are scraping all of my static content (images/audio files) that are hosted on AWS Cloudfront.
Is it possible to put a block on the content being accessed unless it is requested from my web domain?
For example - https://d2z2xv99psdbxu.cloudfront.net/audio/SF697497-01-01-01.mp3 can only be played if it is played from xyz.com
I had thought about only allowing access from my server IP but I am also using Cloudflare CDN. Is there a work around?
AWS CloudFront supports custom web ACLs (Access Control Lists) via AWS WAF.
You should be able to limit requests to those coming from your own domain in the ACL; see the sketch after the links below.
Here are a couple of similar scenarios:
https://aws.amazon.com/blogs/security/how-to-prevent-hotlinking-by-using-aws-waf-amazon-cloudfront-and-referer-checking/
https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-associating-cloudfront-distribution.html
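The first link walks through the WAF console; with the current WAFv2 API, a referer check looks roughly like this. A sketch only, assuming the hypothetical xyz.com domain from the question (note that WAFv2 web ACLs for CloudFront must be created in us-east-1):

```python
import boto3

# WAFv2 web ACLs with CLOUDFRONT scope must be created in us-east-1.
waf = boto3.client("wafv2", region_name="us-east-1")

waf.create_web_acl(
    Name="hotlink-protection",
    Scope="CLOUDFRONT",
    # Block everything by default...
    DefaultAction={"Block": {}},
    Rules=[{
        "Name": "allow-own-referer",
        "Priority": 0,
        # ...and allow requests whose Referer header contains your domain.
        "Statement": {
            "ByteMatchStatement": {
                "SearchString": b"xyz.com",
                "FieldToMatch": {"SingleHeader": {"Name": "referer"}},
                "TextTransformations": [{"Priority": 0, "Type": "LOWERCASE"}],
                "PositionalConstraint": "CONTAINS",
            }
        },
        "Action": {"Allow": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": "allow-own-referer",
        },
    }],
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "hotlink-protection",
    },
)
```

One caveat: a default-block ACL like this also blocks requests that carry no Referer header at all (e.g. direct visits), so in practice you may want the inverse: a default allow plus a block rule wrapped in a NotStatement.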
Hope it helps!
I'm sure this is a fairly simple question regarding EC2 and S3 on AWS.
I have a static website hosted on S3 which connects to a MongoDB server on an EC2 instance that I want to secure. Currently it's open to the entire internet (0.0.0.0/0) on port 27017, the MongoDB default. For security reasons, I want to restrict inbound traffic to requests from the S3 static website only. Apparently S3 does not supply fixed addresses, which is causing a problem.
My only thought was to open the port to all IP ranges for the S3 region I am in. This AWS doc explains how to find these, although they are subject to change without notice:
http://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html
Would this be the way to proceed, or am I missing something obvious here? Is there another way to assign an IP to S3, perhaps?
S3 is a storage service, not a compute service, so it cannot make requests to your MongoDB. When S3 serves static webpages, your browser renders them, and when a user's action connects to your MongoDB, the request goes to MongoDB from the user's computer.
So MongoDB sees the request coming from the user's IP. Since you do not know where users are coming from (or their IP ranges), you have no choice but to accept traffic from any IP.
I think it is not possible to allow only your S3-hosted site to access the DB inside EC2, since S3 does not give you an IP address.
So it's better to try an alternative solution: instead of accessing the DB directly, proxy through an HTTPS service on your EC2 instance and restrict inbound traffic on your MongoDB port. A sketch of that idea is below.
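A minimal sketch of that proxy idea with Flask and pymongo, assuming a hypothetical items collection. MongoDB binds to localhost only, so port 27017 can be closed in the security group and only the HTTPS service is exposed:

```python
from flask import Flask, jsonify
from pymongo import MongoClient

app = Flask(__name__)

# MongoDB listens on localhost only; port 27017 stays closed to the
# internet, and only this web service is exposed.
db = MongoClient("mongodb://127.0.0.1:27017")["mydb"]

@app.route("/items")
def list_items():
    # Expose only the queries the site actually needs, never raw DB access.
    docs = db.items.find({}, {"_id": 0}).limit(100)
    return jsonify(list(docs))

if __name__ == "__main__":
    # In production, run this behind TLS (e.g. nginx or an ALB with an
    # ACM certificate) rather than Flask's development server.
    app.run(host="0.0.0.0", port=8080)
```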
S3 won't make requests to your MongoDB server on the EC2 instance. From my understanding, your JS files in the browser would be requesting the MongoDB running on the EC2 instance. In that case, you have to add response headers in the configuration to allow CORS.
I am building a project which will be using the Dropbox API to read and write files to and from Dropbox. I have noticed that the endpoint URL is backed by an Amazon ELB, and I am wondering: is there an AWS-internal endpoint I could use, which might save both me and Dropbox some money by keeping requests internal to Amazon rather than going out over the internet?
The host of the Dropbox API is api.dropbox.com, which resolves to 199.47.218.158.
That does not look like it belongs to one of the EC2 public IP ranges.
See: https://forums.aws.amazon.com/ann.jspa?annID=1528
Anyway, even if it did, it would not be possible to determine the internal IP unless they published the Elastic IP's DNS name (which looks like ec2-xx-xx-xx-xx.us-west-2.compute.amazonaws.com).
A little-known tip:
If you query an Elastic IP's DNS name from within an EC2 instance, you will get an internal IP.
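That is easy to verify. A quick sketch with a hypothetical Elastic IP DNS name (using a documentation address):

```python
import socket

# Hypothetical Elastic IP public DNS name. From inside EC2 in the same
# region this resolves to the private IP; from outside, to the public one.
name = "ec2-203-0-113-25.us-west-2.compute.amazonaws.com"
print(socket.gethostbyname(name))
```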