Few domains in private subnets behind a single ALB with SSL - amazon-web-services

I want to have a few domains for EC2 instances with SSL behind an ALB, two of them in a private zone.
I have a pretty simple config but have no idea how to resolve this.
What I have:
1 EC2 instance for the frontend app with nginx: frontend.example.com
1 EC2 instance for the backend app: backend.example.com
1 EC2 instance for the frontend DEV app with nginx: frontend.devexample.com
1 EC2 instance for the backend DEV app: backend.devexample.com
All instances are in 1 VPC.
1 ALB for SSL (with a few certs for the domains)
Route 53 for the domains
At the moment all 4 instances are in a public zone, so the domains point to the ALB as aliases, the ALB terminates SSL for all domains, and the ALB routes to each instance based on the host header.
What I want:
Hide the backend instances in a private zone, but still have access to them by domain name, and still with SSL.
How I see it for now:
The domains point to the ALB through Route 53.
The ALB points to 2 VPCs.
Each VPC has the frontend in a public subnet and the backend in a private subnet.
But in this case I can't write ALB rules that point to a host, because the ALB would have to point to a VPC.
Please help me, any suggestion will be really appreciated.

It is quite common to use separate VPCs for Development and Production. This ensures that the two systems do not impact each other.
The typical configuration is:
A Load Balancer in the public subnet(s)
EC2 instances in the private subnet(s)
Normally, a Load Balancer is used to distribute traffic to multiple EC2 instances. If you only have one Front-end instance, then you do not really need a Load Balancer.
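To make that concrete: the ALB's host-based rules forward to target groups of registered instances, not to VPCs or subnets, so moving the backend instances into private subnets does not change the routing rules. Below is a minimal boto3 sketch of that idea; the region, VPC ID, instance ID, listener ARN and hostname are placeholders, not values from the question.

```python
# Sketch only: the region, ARNs, IDs and hostnames below are placeholders.
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")

# Target group for the backend instances; TargetType "instance" works the same
# whether the registered instances live in public or private subnets of the VPC.
backend_tg = elbv2.create_target_group(
    Name="backend-prod",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)["TargetGroups"][0]["TargetGroupArn"]

elbv2.register_targets(
    TargetGroupArn=backend_tg,
    Targets=[{"Id": "i-0backend1234567890", "Port": 80}],
)

# Host-based rule on the existing HTTPS listener: the ALB keeps terminating SSL
# and routes backend.example.com to the (now private) backend instances.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/xxx/yyy",
    Priority=10,
    Conditions=[{"Field": "host-header", "Values": ["backend.example.com"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": backend_tg}],
)
```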

Related

Load balancers for a decoupled web application on AWS

I'm not completely familiar with the load balancers in AWS. So, the idea is to set up a VPC with a public subnet and a private subnet. The instances and the ASG for the front-end will be in the public subnet, and the instances and the ASG for the backend will be in the private subnet. My question is which load balancer should I place between the front-end and the backend, and is it supposed to go in the public or the private subnet?
Any help is appreciated.
Hello, I recommend using S3 + CloudFront for your web frontend if it is a React app (HTML, JS, ...). With S3 + CloudFront you get serverless web hosting that is highly scalable and secure!
Regarding the backend, the best practice is to put an ELB in your public subnets which forwards API traffic to a target group (your backend ASG) in your private subnets.
You can add an HTTPS certificate from ACM to your ALB to secure the traffic in transit.
The traffic between your ALB and the ASG instances can stay on HTTP (port 80).
Finally, the request comes from the client device, which loads the app from CloudFront/S3 and makes API calls to your public ELB, which forwards them to your instances in the private subnets.
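If it helps, here is a rough boto3 sketch of the HTTPS part of that setup: the ALB terminates TLS with an ACM certificate on port 443 and forwards plain HTTP (port 80) to the backend target group. All ARNs are placeholders for illustration.

```python
# Sketch only: all ARNs are placeholders. The ALB terminates HTTPS with an ACM
# certificate and forwards plain HTTP (port 80) to the backend target group.
import boto3

elbv2 = boto3.client("elbv2")

elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/api-alb/xxx",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:...:certificate/placeholder"}],
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
    DefaultActions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/api-asg/zzz",
    }],
)
```

With an Auto Scaling group you would typically attach this target group to the ASG so new instances register automatically.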

How to route traffic to ECS Fargate instance without an Application Load Balancer

I have a Fargate instance running on port 3000. "Service Discovery" is enabled for this service, and the corresponding hosted zone is created in Route 53. I have added the name servers from this hosted zone in my domain registrar's (GoDaddy) DNS settings.
I want to route all traffic from my domain to this Fargate instance. Currently, I don't see a need to add an ALB since the traffic is very light and the routing is simple. So I want to know the following:
Is it possible to route my traffic from Route 53 to the Fargate instance running on port 3000 without an ALB? If yes, how can I do it?
Is an ALB required for configuring SSL? Or can I do it without an ALB?
See this article under the heading External Networking.
TL;DR is to create a VPC with a public subnet and a public IP address attached via an internet gateway, and ensure your Fargate cluster/task is running in that VPC.
If you want to run SSL without a load balancer (one of whose responsibilities can be terminating SSL), you will need to terminate the SSL certificate yourself in your Fargate task.
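As a rough illustration of that approach (not from the linked article), here is a boto3 sketch that runs the service in a public subnet with a public IP and points an A record at the task's current public IP. The cluster, task definition, subnet, security group, hosted zone ID, domain and IP address are all placeholders. Keep in mind the public IP changes whenever the task is replaced, so the record has to be updated (or kept current via the Service Discovery integration mentioned in the question).

```python
# Sketch only: cluster, subnet, security group, zone ID, domain and IP are placeholders.
import boto3

ecs = boto3.client("ecs")
route53 = boto3.client("route53")

# Run the Fargate service in a public subnet with a public IP (no ALB involved).
ecs.create_service(
    cluster="my-cluster",
    serviceName="my-service",
    taskDefinition="my-task:1",
    desiredCount=1,
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0public1234567890"],
            "securityGroups": ["sg-0allow3000inbound0"],
            "assignPublicIp": "ENABLED",
        }
    },
)

# Point the domain at the task's current public IP (looked up separately, e.g.
# via ecs.list_tasks / ecs.describe_tasks and ec2.describe_network_interfaces).
route53.change_resource_record_sets(
    HostedZoneId="Z0PLACEHOLDER",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```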

Routing for private EC2 instances behind Load Balancers in a VPC

I have 4 EC2 instances, 2 in the private subnet of each availability zone (2 AZs), one hosting App A and the other hosting App B. The instances are behind 2 internet-facing ALBs (one ALB per app), with Route 53 routing traffic to the corresponding ALB based on the subdomain name, and 2 NAT Gateways, one in each public subnet, routing internet traffic for the private instances.
I want App A and App B to communicate over HTTPS using the domain name of each app.
Will the traffic for each application come from the load balancer? Each EC2 instance allows traffic only from the security group of its ALB.
Should the security group for each app allow traffic from the other app, or will the traffic come from the load balancer?
I have DNS resolution activated for the VPC.
For the setup you've described, ingress traffic for the domains will enter through the load balancer, which then forwards the requests to the EC2 instance(s). As long as the load balancer's security group allows the inbound traffic, you will receive the traffic.
For egress traffic from your application, it will depend on both your EC2 and routing configuration.
If your EC2 instance resides within a public subnet (and has a public IP address), then it will route traffic via the internet gateway. If your EC2 instance is in a private subnet, you will need either a NAT Gateway or a NAT instance to route traffic to the internet.
These options are configured in the route table for the applicable subnet(s). In addition, the outbound security group rules for your EC2 instance will need to allow access to the destination IP, port and protocol that you want. By default the security group allows all outbound access.
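A small boto3 sketch of those two pieces, with placeholder IDs: the instance security group only needs to allow the target-group port from its own ALB's security group, and the private route table needs a default route through the NAT Gateway so the instances can reach the other app's internet-facing ALB.

```python
# Sketch only: security group, route table and NAT gateway IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")

# App A's instances only need to allow the ALB's target port from App A's ALB
# security group; App B reaches App A through App A's public ALB domain, so the
# traffic arrives from the ALB, not from App B's instances directly.
ec2.authorize_security_group_ingress(
    GroupId="sg-0appAinstances00000",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,   # use the port your target group forwards to (80 or 443)
        "ToPort": 443,
        "UserIdGroupPairs": [{"GroupId": "sg-0appAalb0000000000"}],
    }],
)

# Egress from the private subnets to the other app's internet-facing ALB goes out
# through the NAT Gateway, so the private route table needs a default route to it.
ec2.create_route(
    RouteTableId="rtb-0privateA00000000",
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId="nat-0azA000000000000",
)
```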

Adding an additional static IP to elastic beanstalk instances

I have a few Elastic Beanstalk applications in the same VPC (this could also be reduced to one application), and I'd like them to be accessible both via one IP address (for both inbound and outbound traffic) and via their own URLs. I've seen that this can be done via NAT, but I haven't found documentation on whether that covers all traffic (in both directions) and whether it can be done alongside the original endpoints. Another question is whether there is a better way to do this.
NAT is used to provide internet access for instances in private subnets. In this case all instances in the subnet will have the same external IP. But you won't be able to access your private instances using that IP; it's only for outbound traffic.
In your case I'd go with an ELB. Following best practice, keep the instances running your applications in private subnets and:
Have an external facing ELB in public subnets (you'll need at least 2 public subnets in different AZs).
Create a Target Group and add your instances with running apps to it.
Assign the Target Group to the listener on your ELB.
Configure the security groups on ELB and app instances to allow the traffic on the port the applications are serving (usually it's 8080).
As a result you'll have your instances accessible by the ELB URL. If you want to have a pretty URL, you can configure it in Route 53 and resolve it to the ELB URL.
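For the Route 53 part, a short boto3 sketch (the load balancer name, hosted zone ID and domain are placeholders; an Elastic Beanstalk-created ALB will have an auto-generated name): create an alias A record that resolves your domain to the load balancer.

```python
# Sketch only: load balancer name, zone ID and domain are placeholders.
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

# Look up the ALB's DNS name and its canonical hosted zone ID.
alb = elbv2.describe_load_balancers(Names=["my-eb-alb"])["LoadBalancers"][0]

# Alias A record: app.example.com resolves to the load balancer.
route53.change_resource_record_sets(
    HostedZoneId="Z0MYPUBLICZONE",  # your Route 53 hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": alb["CanonicalHostedZoneId"],
                    "DNSName": alb["DNSName"],
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```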
It's not possible using the AWS-provided NAT gateway, but it can be achieved by hosting a box with both a load balancer and NAT running on the same instance with an EIP: map your domain to that IP for incoming traffic, and for outgoing traffic configure that instance as the target of the 0.0.0.0/0 route in the route table of the private app subnet. But this is not the recommended approach, since the front-facing instance becomes a single point of failure.
The recommended way is to use an ELB for incoming traffic and a NAT gateway for outgoing traffic, for high availability.

How to add SSL certificate and make website HTTPS in AWS cloud?

I have installed a WordPress site on just 1 EC2 instance which is running in 1 AZ (i.e., 1 public subnet). I have also bought a domain using Route 53. Currently my site is HTTP only and I want to make it HTTPS. I have obtained an SSL certificate from AWS Certificate Manager as well. In Route 53 I currently have an A record mapped to my EC2 instance's public IPv4 address.
I'm facing an issue in changing my site from HTTP to HTTPS. Since my site is deployed in just 1 AZ (1 public subnet), I cannot add an Elastic Load Balancer in front of my EC2 instance, as it requires a minimum of 2 public subnets (this is my understanding). If my site were deployed across 2 AZs (2 public subnets) then I could easily have configured an Application Load Balancer with these 2 subnets and used the SSL certificate stored in AWS Certificate Manager, but in my case it is just 1 AZ/1 subnet.
Question 1) Is it necessary to have 2 public subnets to configure an Elastic Load Balancer? Can't I configure a load balancer with just 1 subnet, as in my case? If yes, please advise how to do it.
Question 2) Is a load balancer really necessary between Route 53 and the EC2 instance to make the site HTTPS? Can I configure Route 53 and the SSL certificate to point to the EC2 instance directly and make the site HTTPS?
Please assist here to make my site HTTPS. Thanks
1) Yes, an ELB requires two subnets - but you don't have to have a server running in both subnets (though obviously you don't get the benefit/cost of dual servers). Within AWS go to the VPC section and create a new subnet inside the correct VPC - you should then be able to create an ELB (it may complain about the 2nd subnet, but if there are no instances inside that subnet it doesn't really matter).
2) No, but if you want to use the free ACM certificate it must be installed at the Load Balancer or CloudFront distribution level. There's nothing stopping you installing your own certificate on your EC2 instance, configuring Apache to use it and renewing it as required. Take a look at Let's Encrypt for free certificates, or buy a cert online.
A few things to bear in mind:
"Best practice" for TLS/HTTPS is constantly changing. AWS takes the headache out of this by providing policies, so updating to the latest standard is very simple and requires no changes to your EC2 instance (as it's talking to the ELB via port 80).
If you decide to manage your own certificate, take a look at the SSL Labs certificate tester (https://www.ssllabs.com/ssltest/) to help you ensure your configuration is correct.
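If it helps, a small boto3 sketch of point 1: create a second (possibly empty) public subnet in another AZ, then create the ALB across both subnets. The VPC ID, CIDR block, AZ, subnet and security group IDs are placeholders for your account.

```python
# Sketch only: VPC ID, CIDR block, AZ, subnet and security group IDs are placeholders.
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Second public subnet in a different AZ; it can stay empty, the ALB just needs it.
new_subnet = ec2.create_subnet(
    VpcId="vpc-0123456789abcdef0",
    CidrBlock="10.0.2.0/24",
    AvailabilityZone="us-east-1b",
)["Subnet"]["SubnetId"]

# The ALB spans both public subnets; the WordPress instance only lives in the first.
elbv2.create_load_balancer(
    Name="wordpress-alb",
    Subnets=["subnet-0existingpublic000", new_subnet],
    SecurityGroups=["sg-0allow80and4430000"],
    Scheme="internet-facing",
    Type="application",
)
```

The ACM certificate then goes on the ALB's HTTPS (443) listener, which forwards to the WordPress instance over port 80, as described above.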
Let me answer the questions inline.
Question 1) Is it necessary to have 2 public subnets to configure an Elastic Load Balancer? Can't I configure a load balancer with just 1 subnet, as in my case? If yes, please advise how to do it.
Yes. You must specify subnets from at least two Availability Zones to increase the availability of your load balancer; this is why you need at least two subnets (at least one subnet in each Availability Zone). When you run the EC2 instances, it is also recommended to run them in both Availability Zones (the ones given to the Load Balancer) with Auto Scaling for high availability and fault tolerance.
Question 2) Is a load balancer really necessary between Route 53 and the EC2 instance to make the site HTTPS? Can I configure Route 53 and the SSL certificate to point to the EC2 instance directly and make the site HTTPS?
It is necessary if you are using an AWS Certificate Manager (ACM) issued SSL certificate. Otherwise, if you use an externally purchased SSL certificate, you can configure it at your EC2 instance's web server level.
Note: An alternative approach is to use AWS CloudFront as a proxy (also for SSL termination using an ACM certificate) and proxy the requests to the EC2 instance, if you don't want to pay for the Load Balancer (CloudFront costs are based on the number of requests, unlike the hourly charge for a Load Balancer).
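For the ACM side of either option (load balancer or CloudFront), here is a rough boto3 sketch of requesting a certificate with DNS validation and publishing the validation CNAME in Route 53. The domain and hosted zone ID are placeholders; for CloudFront the certificate must be requested in us-east-1, while for an ALB you would use the ALB's region.

```python
# Sketch only: domain and hosted zone ID are placeholders. us-east-1 is required
# for CloudFront; use the load balancer's region if attaching to an ALB instead.
import boto3

acm = boto3.client("acm", region_name="us-east-1")
route53 = boto3.client("route53")

cert_arn = acm.request_certificate(
    DomainName="www.example.com",
    ValidationMethod="DNS",
)["CertificateArn"]

# ACM publishes a CNAME it expects to see before issuing the certificate
# (the ResourceRecord can take a few seconds to appear after the request).
validation = acm.describe_certificate(CertificateArn=cert_arn)[
    "Certificate"
]["DomainValidationOptions"][0]["ResourceRecord"]

route53.change_resource_record_sets(
    HostedZoneId="Z0MYPUBLICZONE",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": validation["Name"],
                "Type": validation["Type"],  # "CNAME"
                "TTL": 300,
                "ResourceRecords": [{"Value": validation["Value"]}],
            },
        }]
    },
)
```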