Route53 - DNS resolution to a specific port on an EC2 instance - amazon-web-services

I have a website hosted on an EC2 instance that runs on port 3000 (e.g. 3.27.83.19:3000, assuming the IP address of the EC2 instance is 3.27.83.19).
I have a domain, mydomain.com, that I have already bought through AWS and that I already see in Hosted Zones.
How can I set up Route 53 so that when someone hits "mydomain.com", it takes them to 3.27.83.19:3000 rather than 3.27.83.19?
Thanks!

point domain to instance ip
To point example.com to 3.27.83.19 you simply need to create an A record in Route 53. Note that a DNS record cannot include a port, so this only maps the name to the IP address; the browser will still connect on the default port (80 or 443), which is why the load balancer approach below is needed for port 3000.
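A minimal boto3 sketch of that record (the hosted zone ID is a placeholder for your own):

```python
# Minimal sketch: create/update the apex A record in Route 53.
# The hosted zone ID is a placeholder for the zone of mydomain.com.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Comment": "Point the apex domain at the EC2 instance",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "mydomain.com.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "3.27.83.19"}],
            },
        }],
    },
)
```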
point domain to load balancer
To access the website running on port 3000 on an EC2 instance through https://example.com, you need a service that accepts traffic on https://example.com and then forwards it to the EC2 instance on port 3000. You can easily do that with an AWS Application Load Balancer. I like this approach.
There are many benefits to using an Application Load Balancer. The most important one is that you can configure the SSL certificate easily. The Application Load Balancer also supports host-based routing, which allows you to host multiple websites.
If you are looking for a less expensive solution, you can also set up an nginx reverse proxy inside the EC2 instance. I personally don't like this approach because you will need to configure SSL at the application level.
https://aws.amazon.com/premiumsupport/knowledge-center/public-load-balancer-private-ec2/
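If you go the Application Load Balancer route, here is a rough boto3 sketch of the moving parts, assuming the load balancer, VPC, instance and ACM certificate already exist; every ID and ARN below is a placeholder:

```python
# Rough sketch: target group on port 3000, register the instance,
# and an HTTPS listener on an existing ALB. All IDs/ARNs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

# Target group that sends traffic to the app on port 3000
tg = elbv2.create_target_group(
    Name="my-website-tg",
    Protocol="HTTP",
    Port=3000,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
    HealthCheckPath="/",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# Register the EC2 instance that runs the site
elbv2.register_targets(
    TargetGroupArn=tg_arn,
    Targets=[{"Id": "i-0123456789abcdef0", "Port": 3000}],
)

# HTTPS listener that terminates TLS and forwards to the target group
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/50dc6c495c0c9188",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/abcd1234"}],
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)
```

After that, an alias A record for mydomain.com pointing at the ALB's DNS name completes the path from the domain to port 3000.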
Hope this helps.

Related

How to run an EC2 instance as a subdomain in SiteGround?

I have a WordPress website with a GoDaddy domain being hosted on SiteGround via the nameservers. I am looking to switch to a React app which is currently running on an EC2 instance in AWS. I want to run the EC2 instance (i.e. the React app) on a subdomain like beta.domain.com inside SiteGround while still keeping the WordPress website, since it's part of my business. I tried creating a subdomain in SiteGround and then pointed it to my EC2 instance's Elastic IP (the public IPv4 address) using an A record, but it shows a "This site can't be reached" error when I go to beta.domain.com.
What am I doing wrong? How do I run the EC2 instance in a subdomain hosted in SiteGround?
EDIT
Thank you, everyone, for your help. The problem was the SSL certificate for HTTPS. The website wasn't loading because of the HTTPS setup in nginx on the EC2 instance. After I put in the certificate details, it runs properly with just the A record.
No public address in AWS is reachable from outside unless the security group allows it - even SSH from your own machine will fail if your IP is not in an inbound rule of your EC2 instance's security group. I see three ways forward here.
1.) Add an inbound rule to your EC2 security group. An all-traffic rule works but is not recommended, as it opens everything to your machine; open only the ports you need (see the sketch after this list). (Additional tip: set up a secure SSH key for the machine.)
2.) Use an ELB to route traffic to your EC2 instance. The ELB will provide you with a DNS name which can be used as a CNAME in GoDaddy (point 3 shows how to map it as an A record in GoDaddy).
3.) Use Route 53 hosted zones - you could delegate your DNS to be managed by AWS Route 53. That way all traffic will be routed to your machine by Route 53.
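For option 1, a minimal boto3 sketch that opens only the web ports, plus SSH from your own IP, instead of all traffic; the group ID and the admin address are placeholders:

```python
# Minimal sketch: open HTTP/HTTPS to the world and SSH only to your own IP.
# The security group ID and the /32 admin address are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.10/32"}]},  # your own IP only
    ],
)
```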
Another tip: an Elastic IP can also be used; it is a permanent static IP address reachable from across the internet, which gives your instance a stable address to point DNS at.
Let me know which solution works best for you and I can help you out further.
If you have registered your domain name with GoDaddy, you can create a subdomain record in GoDaddy and point it to the static IP address of your EC2 instance (use an A record, since a CNAME must point to a name rather than an IP). Here is a link to guide you.
Your main domain name will still point to your WordPress website on SiteGround.
Now that you have an EC2 instance, you can also run a WordPress site on that instance if you like.

Using an elastic IP with an AWS Load Balancer

It sounds like I cannot use an Elastic IP with an AWS Application Load Balancer.
I currently own a domain through GoDaddy and the DNS points to the load balancer via a CNAME. However, if the load balancer dies and gets recreated, its URL changes and I then have to update the CNAME and wait for the change to propagate.
There must be a solution around this - what is it?
It looks like the solution might be to use two load balancers - https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/, but this seems really excessive - I have a small application right now.
As far as I know, the only way to have a fixed static IP for a load balancer is to use a Network Load Balancer.
As stated here: "Support for static IP addresses for the load balancer. You can also assign one Elastic IP address per subnet enabled for the load balancer."
An Elastic Load Balancer retains its DNS name as long as you don't replace it manually. If you still want a temporary, low-cost solution to this problem, you can consider the following approach:
Assuming the application is deployed in a private subnet, I would proxy the traffic through an EC2 instance until your primary DNS changes propagate.
Launch a small EC2 instance and attach an Elastic IP to it (consider your bandwidth requirements when choosing the instance size).
Configure a proxy (nginx) to forward traffic to your application.
Configure active-passive DNS failover using the ELB DNS name and the EIP (a sketch of the Route 53 failover records follows below).
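A boto3 sketch of those failover records; the zone ID, the ELB name and hosted zone ID, and the proxy instance's EIP are all placeholders:

```python
# Sketch: active-passive failover in Route 53.
# PRIMARY is an alias to the ELB; SECONDARY is a plain A record pointing at the
# proxy instance's Elastic IP. All IDs, names and addresses are placeholders.
import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "SetIdentifier": "primary-elb",
                "Failover": "PRIMARY",
                "AliasTarget": {
                    "HostedZoneId": "Z35SXDOTRQ7X7K",  # the ELB's canonical hosted zone ID
                    "DNSName": "my-alb-123456789.us-east-1.elb.amazonaws.com.",
                    "EvaluateTargetHealth": True,
                },
            },
        },
        {
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com.",
                "Type": "A",
                "SetIdentifier": "secondary-eip-proxy",
                "Failover": "SECONDARY",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.25"}],  # the proxy's EIP
            },
        },
    ]},
)
```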

Pointing a domain to securely connect to an ec2 instance running a python app

Say I have an AWS EC2 instance that is running a Python application on a certain port, say 8000. Also imagine I have a domain, say www.abcd.com, that I own. What does it take to make my website use HTTPS and securely redirect to the app on my EC2 instance that is listening on port 8000? Is this even possible, or do I need something like nginx in between?
Firstly, you will need to ensure that your EC2 instance is in a public subnet with a public IP. It will also need its security group open on whatever port you are hitting it on (8000). At this point you should be able to reach your application at public-ip:port.
Now, if you want to do the above with a domain, you will want to use AWS's Route 53 service. There you can create a DNS record using your domain: an A record from application.example.com to your instance's public IP. After doing so you should be able to visit application.example.com and hit your application. (Once you move to the load balancer setup described next, it also becomes possible to make your EC2 instance private.)
Now, if you wish to add HTTPS on top of this, the best way would be to create a public load balancer with a certificate attached. It would accept HTTPS traffic from your users and then forward that traffic over HTTP to your EC2 instance on the selected port (8000).
After doing this, you will want to change your Route 53 entry to point to your load balancer instead of directly at your EC2 instance.
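Switching that Route 53 entry to the load balancer can be done with an alias record; a small boto3 sketch, with the zone ID and load balancer details as placeholders:

```python
# Small sketch: replace the A record for the app subdomain with an alias
# to the load balancer. Zone IDs and the ALB DNS name are placeholders.
import boto3

boto3.client("route53").change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={"Changes": [{
        "Action": "UPSERT",  # overwrites the old record pointing at the instance IP
        "ResourceRecordSet": {
            "Name": "application.example.com.",
            "Type": "A",
            "AliasTarget": {
                "HostedZoneId": "Z35SXDOTRQ7X7K",  # the load balancer's hosted zone ID
                "DNSName": "my-public-alb-123456789.us-east-1.elb.amazonaws.com.",
                "EvaluateTargetHealth": False,
            },
        },
    }]},
)
```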
Yes, it is totally possible.
Here is a step-wise procedure to do it:
You need to create a hosted zone in Amazon's Route 53 service.
Then use its NS records to connect it with your domain (wherever you have registered it).
Then you need to connect your EC2 instance's IP with your hosted zone (an A record).
Now you can access your EC2 instance using this domain, but it will not be HTTPS.
For HTTPS, you need a certificate, which you can obtain from AWS Certificate Manager (a sketch follows after this answer).
After obtaining the certificate, follow the steps from this blog: How to set up HTTPS for your domain on AWS.
NOTE: These are just the high-level points; follow them and look into how exactly to do it in your case. I followed these steps while deploying with Elastic Beanstalk.
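For the certificate step, a brief boto3 sketch of requesting one from ACM with DNS validation; the domain names are placeholders, and the certificate must live in the region of the load balancer or environment that will use it:

```python
# Brief sketch: request a public certificate from ACM with DNS validation.
# Domain names are placeholders; pick the region of the service that will use it.
import boto3

acm = boto3.client("acm", region_name="us-east-1")

cert = acm.request_certificate(
    DomainName="example.com",
    SubjectAlternativeNames=["www.example.com"],
    ValidationMethod="DNS",
)
cert_arn = cert["CertificateArn"]

# After a short wait, describe_certificate returns the CNAME record you must
# add to the hosted zone so ACM can validate ownership of the domain.
details = acm.describe_certificate(CertificateArn=cert_arn)
for option in details["Certificate"]["DomainValidationOptions"]:
    print(option.get("ResourceRecord"))
```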

How to add SSL certificate and make website HTTPS in AWS cloud?

I have installed a WordPress site on just one EC2 instance, which runs in one AZ (i.e. one public subnet). I have also bought a domain using Route 53. Currently my site is HTTP only, and I want to make it HTTPS. I have already got an SSL certificate from AWS Certificate Manager. In Route 53, I currently have an A record mapping to my EC2 instance's public IPv4 address.
I'm facing an issue changing my site from HTTP to HTTPS. Since my site is deployed in just one AZ (one public subnet), I cannot add an Elastic Load Balancer in front of my EC2 instance, as it requires a minimum of two public subnets (this is my understanding). If my site were deployed across two AZs (two public subnets), I could easily configure an Application Load Balancer with those two subnets and use the SSL certificate stored in AWS Certificate Manager, but in my case it is just one AZ/one subnet.
Question 1) Is it necessary to have two public subnets to configure an Elastic Load Balancer? Can't I configure a load balancer with just one subnet, as in my case? If yes, please advise how to do it.
Question 2) Is a load balancer really necessary between Route 53 and the EC2 instance to make the site HTTPS? Can I configure Route 53 and the SSL certificate to point to the EC2 instance directly and make the site HTTPS?
Please assist here to make my site HTTPS. Thanks
1) Yes, an ELB requires two subnets - but you don't have to have a server running in both subnets (though obviously you don't get the benefit/cost of dual servers). Within AWS, go to the VPC section and create a new subnet inside the correct VPC - you should then be able to create an ELB (it may complain about the second subnet, but if there are no instances inside that subnet it doesn't really matter).
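A short boto3 sketch of adding that second subnet; the VPC ID, CIDR block and Availability Zone are placeholders, and the CIDR must not overlap your existing subnets:

```python
# Short sketch: create a second subnet in another AZ so the ELB has two subnets.
# VPC ID, CIDR block and AZ are placeholders.
import boto3

ec2 = boto3.client("ec2")

subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.2.0/24",        # an unused range inside the VPC's CIDR
    AvailabilityZone="us-east-1b",  # a different AZ from the existing subnet
)
print(subnet["Subnet"]["SubnetId"])
```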
2) No, but if you want to use the free ACM certificate it must be installed at the Load Balancer or CloudFront distribution level. There's nothing stopping you from installing your own certificate on your EC2 instance, configuring Apache to use it and renewing it as required. Take a look at Let's Encrypt for free certificates, or buy a cert online.
A few things to bear in mind:
"Best practice" for TLS/HTTPS is constantly changing. AWS takes the headache out of this by providing security policies, so updating to the latest standard is very simple and requires no changes to your EC2 instance (as it talks to the ELB over port 80).
If you decide to manage your own certificate, take a look at the SSL Labs certificate tester (https://www.ssllabs.com/ssltest/) to help you ensure your configuration is correct.
Let me answer the questions inline.
Question 1) Is it necessary to have two public subnets to configure an Elastic Load Balancer? Can't I configure a load balancer with just one subnet, as in my case? If yes, please advise how to do it.
Yes. You must specify subnets from at least two Availability Zones to increase the availability of your load balancer, which is why you need at least two subnets (at least one subnet in each Availability Zone). When you run the EC2 instances, it is also recommended to run them in both Availability Zones (those given to the load balancer) with Auto Scaling for high availability and fault tolerance.
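To make the two-subnet requirement concrete, here's a rough boto3 sketch; the two subnets must be in different Availability Zones, and all IDs are placeholders:

```python
# Rough sketch: an Application Load Balancer needs at least two subnets,
# each in a different Availability Zone. All IDs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

alb = elbv2.create_load_balancer(
    Name="wordpress-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # one per AZ, minimum two
    SecurityGroups=["sg-0123456789abcdef0"],
)
print(alb["LoadBalancers"][0]["DNSName"])
```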
Question 2) Is a load balancer really necessary between Route 53 and the EC2 instance to make the site HTTPS? Can I configure Route 53 and the SSL certificate to point to the EC2 instance directly and make the site HTTPS?
It is necessary if you are using an AWS Certificate Manager (ACM) issued SSL certificate, since ACM public certificates cannot be installed directly on an EC2 instance. Otherwise, if you use an externally purchased SSL certificate, you can configure the SSL certificate at your EC2 instance's web server level.
Note: an alternative approach is to use Amazon CloudFront as a proxy (also handling SSL termination with the ACM certificate) and forward the requests to the EC2 instance, if you don't want to pay for a load balancer (CloudFront costs are based on the number of requests, unlike the hourly charge for a load balancer).

Assigning Static IP Address to AWS Load Balancer

How can I assign a static IP address to an ELB? It seems like I cannot.
Some articles online say to create a Route 53 record, but this requires changing the domain's CNAME, which also redirects email traffic. I just want to change the A record, not the CNAME.
Some articles also mention that I can use an EC2 instance as a reverse proxy. But will a single proxy be able to handle a lot of traffic?
Any solution for this?
AWS' Elastic Load Balancer is actually elastic on two levels as described here:
http://shlomoswidler.com/2009/07/elastic-in-elastic-load-balancing-elb.html
The first level is the load balancer itself. In order to make sure that ELB can scale to whatever volume you have and burst to whatever volume you suddenly encounter, AWS assigns a 'static' DNS hostname (e.g. MyDomainELB-918273645.us-east-1.elb.amazonaws.com). That hostname points to multiple IP addresses. You can see that (from a command line) by running
$ host MyDomainELB-918273645.us-east-1.elb.amazonaws.com
MyDomainELB-918273645.us-east-1.elb.amazonaws.com 172.31.7.2
MyDomainELB-918273645.us-east-1.elb.amazonaws.com 172.31.11.33
The second form of elasticity within the ELB is then, obviously, the ELB directing each query to one of the EC2 instances in your pool.
So, you can see that trying to assign a static IP address to the load balancer would be self-defeating.
Using an EC2 instance as a reverse proxy would also seem self-defeating as you would then create a bottleneck before even getting to the ELB. Might as well just create your own load balancer.
The recommended solution (which you've pointed out) is to create a CNAME that points to the ELB hostname (which won't change).
i.e. my-app.mycompany.com -> MyDomainELB-918273645.us-east-1.elb.amazonaws.com
This would allow you to integrate your scalable application, behind the ELB within your domain.
I'm not sure I fully understand why you cannot create a CNAME in your DNS or what that has to do with directing email traffic, can you explain?
A new feature in AWS (I believe it was announced at re:Invent 2017) allows for static IPs with Network Load Balancers (NLB). An NLB only handles layer 4 (TCP), not HTTP specifics (layer 7).
You can assign one Elastic IP address per Availability Zone.
For details see the AWS blog post or the NLB documentation.
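For reference, a boto3 sketch of creating an NLB with one Elastic IP per subnet; the subnet and EIP allocation IDs are placeholders:

```python
# Sketch: Network Load Balancer with one Elastic IP per subnet/AZ.
# Subnet IDs and EIP allocation IDs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")

nlb = elbv2.create_load_balancer(
    Name="my-static-ip-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-0123456789abcdef0"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-0abcdef1234567890"},
    ],
)
print(nlb["LoadBalancers"][0]["DNSName"])
```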
The "Classic Load Balancer" and "Application Load Balancer" do not support static IPs. If you need a feature only provided by those, you have to fall back to the CNAME solution described above.
A blog post was recently published by AWS Support on this topic, leveraging an NLB to provide static IPs for Classic and Application Load Balancers - https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/
Summary of solution as described by the post
We end up with a TCP listener on a NLB that accepts traffic and forwards it to an internal ALB. The ALB terminates TLS, examines HTTP headers, and routes requests based on your configured rules to target groups with your instances, servers, or containers. The AWS Lambda function keeps everything in sync by watching the ALB for IP address changes and updating the NLB target group. In the end we’ll have a few static IP addresses that are easy for whitelisting, and we won’t lose any of the benefits of ALB. Note that we will be sending all of the traffic through two load balancers
I found setting up AWS Global Accelerator very straightforward and simple. It created two static IP addresses and a static DNS name pointing to my Application Load Balancer.
Configuring Global Accelerator (a boto3 sketch follows after these steps):
Set listeners as TCP, ports 80 and 443.
Select your load balancer endpoint (AWS Global Accelerator configuration).
Add a CNAME record for your DNS pointing to the static DNS name it created (mywebsite.com > globalacceleratorDNS.com). If any client needs to whitelist, give them the two static IPs it created.
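The same setup as a rough boto3 sketch; note the Global Accelerator API is served from us-west-2, and the ALB ARN is a placeholder:

```python
# Rough sketch: Global Accelerator in front of an existing ALB.
# The Global Accelerator API lives in us-west-2; the ALB ARN is a placeholder.
import uuid
import boto3

ga = boto3.client("globalaccelerator", region_name="us-west-2")

acc = ga.create_accelerator(
    Name="my-website-accelerator",
    Enabled=True,
    IdempotencyToken=str(uuid.uuid4()),
)["Accelerator"]
print(acc["IpSets"])   # the two static IPs you can hand out for whitelisting
print(acc["DnsName"])  # the static DNS name to put behind your CNAME

listener = ga.create_listener(
    AcceleratorArn=acc["AcceleratorArn"],
    Protocol="TCP",
    PortRanges=[{"FromPort": 80, "ToPort": 80}, {"FromPort": 443, "ToPort": 443}],
    IdempotencyToken=str(uuid.uuid4()),
)["Listener"]

ga.create_endpoint_group(
    ListenerArn=listener["ListenerArn"],
    EndpointGroupRegion="us-east-1",  # the region where the ALB lives
    EndpointConfigurations=[{
        "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/my-alb/50dc6c495c0c9188",
        "ClientIPPreservationEnabled": True,
    }],
    IdempotencyToken=str(uuid.uuid4()),
)
```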
Pricing is $18 per month + a few pennies per GB of data transfer.
I'm pretty sure it's cheaper than the NLB / NAT Gateway / Elastic IP setup.
https://docs.aws.amazon.com/global-accelerator/latest/dg/about-accelerators.html
For low traffic, one option is to set up an EC2 instance running nginx as a forwarding proxy.
That way you can use the EC2 instance's static IP address and forward your traffic by resolving the ALB's DNS name.
However, it's kind of a hack - then again, using a Global Accelerator or an NLB also seems like a hack to me :-)
Unlike the Network Load Balancer, the Application Load Balancer (ALB) does not support Elastic IPs, but that's not the worst part. If you use Route 53 together with the ALB, the record automatically gets a TTL of 60 seconds. This appears to be causing problems for our institutional - mainly government - customers running older Windows DNS servers. They just can't keep up with the ALB changing its public-facing IPs on such short notice. Older DNS infrastructure either does not respect or cannot handle such an aggressive TTL.
While I don't like it, AWS recommends putting a Network Load Balancer in front of the Application Load Balancer; see: https://aws.amazon.com/blogs/networking-and-content-delivery/using-static-ip-addresses-for-application-load-balancers/