Setting up multiple EC2 instances and multiple subdomains under one parent domain in AWS Route 53

I am developing a set of frontend webapps (for instance Vaadin or Angular) and backend RESTful services. Each frontend webapp will consume one or more of these backend services. I want both the webapps and the services to be secured over HTTPS.
Now, I want to register a single domain, say mydomain.com, and deploy the backend services such that they are available at
service1.api.mydomain.com, service2.api.mydomain.com etc. The frontend apps should be available at webapp1.mydomain.com, webapp2.mydomain.com etc.
I need to be able to set up two or more EC2 instances for the services, and the same for the webapps. For instance, service1 may be running on instance A, service2 on instance B, webapp1 on instance C, and webapp2 on instance D.
How do I configure this setup in AWS Route 53?
Since there is a limit on the number of Elastic IPs that can be allocated for one AWS account (five by default), I suppose separate public IPs for all the EC2 instances are not a solution, since I will have more than five such subdomains.
I hope you can provide a practical example configuration with two services and two webapps.

You can submit a request to get the Elastic IP (EIP) limit increased for your account. Small increases (e.g. from 5 to 10) should be fairly quick and easy to obtain. Larger increases should be obtainable if you can justify it to AWS support.
https://console.aws.amazon.com/support/home#/case/create?issueType=service-limit-increase&limitType=service-code-vpc
If you're open to using path-based routing instead of subdomain-based routing (e.g. mydomain.com/app1 and mydomain.com/app1/api) or a mix of the two (e.g. app1.mydomain.com and app1.mydomain.com/api), you could look at using an Application Load Balancer (ALB). You would need one ALB per subdomain used.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/tutorial-load-balancer-routing.html
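For illustration, here is a minimal boto3 sketch of such a path-based listener rule; the listener and target group ARNs are hypothetical placeholders, not values from the question:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Forward mydomain.com/app1/* to app1's target group.
    # Both ARNs below are hypothetical placeholders.
    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/...",
        Priority=10,
        Conditions=[{"Field": "path-pattern", "Values": ["/app1/*"]}],
        Actions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/app1/...",
        }],
    )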
Note: I expect subdomain-based routing to be available with the ALB in the future, but it hasn't been released yet.
ALBs can be cheaper than Classic Elastic Load Balancers (ELBs), but if you're not using the load-balancing functionality at all, EIPs may be your best bet since they're free while attached to a running instance.
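If you do stick with EIPs, the Route 53 side is just one A record per subdomain pointing at the matching instance's Elastic IP. A minimal boto3 sketch, assuming a hypothetical hosted zone ID and EIP:

    import boto3

    route53 = boto3.client("route53")

    # Map service1.api.mydomain.com to the EIP of the instance running service1.
    # HostedZoneId and the IP address are hypothetical placeholders.
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "service1.api.mydomain.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]},
    )

Repeat the same UPSERT for service2.api.mydomain.com, webapp1.mydomain.com, and webapp2.mydomain.com with each instance's EIP.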

Related

Remove ECS container name from record name on AWS Route 53

I have a small architecture with two services running on an EC2 cluster in AWS ECS. They're healthy, and I can access them via browser through two ALBs pointing to the frontend and backend respectively. My frontend container can be configured with its backend base URL, so I want to connect it to the backend under a proper namespace with Route 53 Service Discovery (and not using the ALB DNS name).
My problem is that I configured the tasks with awsvpc mode and pointed them at the single port I want to expose, but the EC2 instances (and the containers, when I access them via SSH) can't resolve the short namespace; I have to add the name of the container and its port, and I can't abstract those away (I think they're the original containers, because the names do not match between pictures 2 and 3, but they're still accessible). When I used Fargate I could reach the containers by providing only the service name and namespace, but with EC2 I can't.
I'll attach some pictures I believe are useful (red marks the same name in all of them):
picture 1: service discovery of the backend
picture 2: Route 53 records
picture 3: active containers

Run multiple servers with interconnection on Amazon AWS

We are developing applications and devices that communicate with our servers. We have one "main" Java Spring server which handles almost all the HTTP requests, including user authentication, storing relevant user data, and serving that data to the applications. Furthermore, we have a few smaller HTTP servers (written in golang) which are used by the "main" server to perform certain tasks but also expose some public APIs that apps and devices use directly.
In our current non-production setup we run all the servers locally on one machine with an apache2 in front which directs the requests. The servers can thus be reached through apache2 via their respective subdomains, but they also communicate with each other. When doing so, we currently simply send the request to localhost:{PORT}, since they all run on the same machine. They also all use the same mysql-server running on that same machine.
We are now looking to make it more production-ready and want to deploy it to AWS. The servers are currently not containerized, so a solution that requires containerization (ECS? K8s?) would most likely require more work. What would be the most straightforward way to do the following:
Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Set up the routing of the requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
Q. Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
On AWS you create a VPC (a default VPC is created automatically for your account).
You can deploy a number of EC2 instances (virtual servers) with just private IP addresses and without any public access, and put them behind an ELB (Elastic Load Balancer). The ELB takes all the traffic and distributes the load onto the servers based on the endpoint.
The EC2 instances won't have public IPs, but the VPC (Virtual Private Cloud) allows your services to communicate with each other via private IPs (something like 172.31.x.x). You can also assign domain/subdomain names to these private IP addresses using the Route 53 service.
For example, you launch 2 servers:
Your Java application on 172.31.1.1 (which you name xyz.myjavaapp.something.com in Route 53)
Your Angular application on 172.31.1.2
The Angular application can then reach your Java application at 172.31.1.1:8080 or xyz.myjavaapp.something.com:8080.
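A minimal boto3 sketch of that Route 53 setup, assuming a private hosted zone; the VPC ID, region, and names are hypothetical:

    import boto3

    route53 = boto3.client("route53")

    # Create a private hosted zone attached to the VPC so these names
    # resolve only inside it. VPC ID and region are hypothetical.
    zone = route53.create_hosted_zone(
        Name="myjavaapp.something.com",
        VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},
        CallerReference="initial-setup-1",
        HostedZoneConfig={"Comment": "internal names", "PrivateZone": True},
    )

    # Point xyz.myjavaapp.something.com at the Java server's private IP.
    route53.change_resource_record_sets(
        HostedZoneId=zone["HostedZone"]["Id"],
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "xyz.myjavaapp.something.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "172.31.1.1"}],
            },
        }]},
    )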
Q. Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Yes, you can deploy an SQL database on RDS and it will be available to the EC2 instances. Just make sure you create proper security groups to allow only your servers to access it, and don't leave it open to the public internet.
An example of a VPC-only security group entry is 172.31.0.0/16. This will allow only the servers in your VPC to connect to the RDS DB, given that your VPC subnet has the range 172.31.x.x.
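A minimal boto3 sketch of such a rule, assuming MySQL on port 3306 and a hypothetical security group ID:

    import boto3

    ec2 = boto3.client("ec2")

    # Allow MySQL (3306) only from addresses inside the VPC CIDR.
    # GroupId is a hypothetical placeholder for the RDS security group.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 3306,
            "ToPort": 3306,
            "IpRanges": [{"CidrIp": "172.31.0.0/16"}],
        }],
    )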
Q. Set up the routing of the requests. We currently run our own configured apache2, but I assume we can add a managed API Gateway in AWS and configure it for our servers.
You can set up public/private APIs and manage different endpoints using API Gateway.
Another way is to put your application servers behind an Application Load Balancer (ALB). The ALB can take care of load balancing as well as endpoint management.
For example, if you decide to deploy 2 servers for /getData and 1 server for /doSomethingElse, this can be easily managed by the ALB.
I would suggest you use at least two servers for critical services and load balance them behind an ALB for a production environment; a sketch follows below.
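A minimal boto3 sketch of that split, assuming the two /getData servers share one target group; all ARNs and instance IDs are hypothetical:

    import boto3

    elbv2 = boto3.client("elbv2")

    # Register both /getData servers in one target group; the ALB then
    # spreads requests across them. IDs and ARN are hypothetical.
    elbv2.register_targets(
        TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/getdata/...",
        Targets=[
            {"Id": "i-0aaaaaaaaaaaaaaaa", "Port": 8080},
            {"Id": "i-0bbbbbbbbbbbbbbbb", "Port": 8080},
        ],
    )

A listener rule with a path-pattern condition for /getData (like the /app1 rule sketched earlier) then forwards those requests to this target group, while /doSomethingElse gets its own rule and single-instance target group.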
On another note, containerizing and deploying to Kubernetes is not that difficult or time-consuming. Yes, there is a learning curve, but the benefits outweigh it.
Feel free to ask questions.

Send POST request from one service to another in Amazon ECS

I have a Node-Express website running on a microservices-based architecture. I deployed the microservices on an Amazon ECS cluster with one EC2 instance. The microservices sit behind an Application Load Balancer that routes external traffic correctly to the services. This system is working as expected except for one problem: I need to make a POST request from one service to the other. I am trying to use axios for this, but I don't know what URL to post to. When testing locally, I just used axios.post('http://localhost:3000/service2', ...) inside service 1, but how should I do it here?
There are various ways.
1. Use an Application Load Balancer in front of the service
In this method, you put your microservices behind the load balancer(s), and to send a request you use the load balancer URL. You can have path-based routing on the same load balancer, or you can use multiple load balancers.
2. Use Service Discovery
In this method, you let the requester discover the target service. Service discovery can be done in various ways: using an ALB, Route 53, ECS, a key-value store, configuration management, or third-party software such as Consul.
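To make that concrete: with either option, the only thing that changes is the URL you hand to your HTTP client. A Python sketch of the two URL shapes, with hypothetical hostnames; in the Node service you would pass the same strings to axios.post:

    import requests

    # Option 1: go through the shared ALB; a path-based rule forwards
    # /service2/* to service 2's target group. ALB DNS name is hypothetical.
    requests.post(
        "http://my-alb-1234567890.us-east-1.elb.amazonaws.com/service2",
        json={"hello": "world"},
    )

    # Option 2: use a service discovery name (e.g. an ECS/Cloud Map private
    # DNS entry) that resolves only inside the VPC. "service2.local" is
    # a hypothetical namespace entry.
    requests.post("http://service2.local:3000/service2", json={"hello": "world"})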

Exposing various ports behind a load balancer on Rancher/AWS

I am setting up a Rancher environment.
The Rancher server is behind a classic ELB (since ALBs are not recommended per Rancher guidelines).
I also want to make available Prometheus and Grafana services.
These are offered via Rancher catalogue and will run as container services, being exposed on Rancher host ports 3000 and 9090.
Since the Rancher server (per their recommendations) requires an ELB, I wanted to explore options for making the two services above available with the most minimal possible setup.
If the server is available on say rancher.mydomain.com, ideally I would like to have the other two on grafana.mydomain.com and prometheus.mydomain.com.
Can I at least combine the latter two behind an ALB?
If so, how do I map them?
Do I place <my_rancher_host_public_IP>:3000 and <my_rancher_host_public_IP>:9090 behind an ALB?
You could do this in a couple of ways (maybe more):
use an external DNS updater like the Route 53 infra catalog item. That will automatically map DNS directly to the public IP of the host that houses the services. Modify the DNS template so it prepends the service name to the domain.
register your targets and map the ports, then point a DNS entry at the ALB.
The first way allows DNS to update in case the service shifts across hosts in your environment. With the second way, you could force containers onto specific hosts.
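For the second way, the Route 53 entry would typically be an alias record pointing the subdomain at the ALB. A minimal boto3 sketch; the zone IDs and DNS names are hypothetical placeholders:

    import boto3

    route53 = boto3.client("route53")

    # Alias grafana.mydomain.com to the ALB. Note the two different zone IDs:
    # one is your own hosted zone, the other is the ALB's canonical hosted
    # zone ID (region-specific). Both are placeholders here.
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",  # your mydomain.com hosted zone
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "grafana.mydomain.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2EXAMPLE",  # the ALB's hosted zone ID
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]},
    )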

Service discovery on aws ECS with Application Load Balancer

I would like to ask: if you have a microservice architecture (based on Spring Boot) involving Amazon Elastic Container Service (ECS) with an Application Load Balancer (ALB), is service discovery performed automatically by the platform, or do you need a special mechanism (such as Eureka or Consul)?
From the documentation (ECS and ALB) it is not clear whether this feature is provided.
I have talked about this with the Amazon support team and they responded with the following:
"...using Service Discovery on AWS ECS[..] just with ALBs.
So, there could be three options here:
1) Using ALB/ELB as service endpoints (Target groups for ALBs, separate ELBs if using ELBs)
2) Using Route53 and DNS for Service Discovery
3) Using a 3rd Party product like Consul.io in combination with Nginx.
Let me speak about each of these options.
Using ALBs/ELBs
For this option the idea is to use the ELBs or ALB Target groups in front of each service.
We define an Amazon CloudWatch Events filter which listens to all ECS service creation messages from AWS CloudTrail and triggers an Amazon Lambda function.
This function identifies which Elastic Load Balancing load balancer (or an ALB Target group) is used by the new service and inserts a DNS resource record (CNAME) pointing to it, using Amazon Route 53.
The Lambda function also handles service deletion to make sure that the DNS records reflect the current state of applications running in your cluster.
The downside here is that it can incur higher costs if you are using ELBs, as you need an ELB for each service. And it might not be the simplest solution out there.
If you wish to read more on this you can do so here[1]
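A minimal sketch of the Lambda function that reply describes, assuming a CloudWatch Events rule that forwards the CloudTrail record for ecs:CreateService; the event field paths, zone ID, and domain are illustrative assumptions, not the reference implementation:

    import boto3

    route53 = boto3.client("route53")
    elbv2 = boto3.client("elbv2")

    HOSTED_ZONE_ID = "Z1EXAMPLE"  # hypothetical placeholder

    def lookup_load_balancer_dns(service):
        # Walk from the service's target group to its ALB's DNS name.
        tg_arn = service["loadBalancers"][0]["targetGroupArn"]
        tg = elbv2.describe_target_groups(TargetGroupArns=[tg_arn])["TargetGroups"][0]
        lb_arn = tg["LoadBalancerArns"][0]
        lb = elbv2.describe_load_balancers(LoadBalancerArns=[lb_arn])["LoadBalancers"][0]
        return lb["DNSName"]

    def handler(event, context):
        # Field paths assume the CloudTrail record for ecs:CreateService;
        # verify them against your own events before relying on this.
        service = event["detail"]["responseElements"]["service"]
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={"Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": service["serviceName"] + ".mycluster.example.com",
                    "Type": "CNAME",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": lookup_load_balancer_dns(service)}],
                },
            }]},
        )

Handling service deletion would be a second, analogous branch issuing a DELETE change for the same record.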
Using Route53
This approach involves the use of Route53 and running a simple agent[2] on your ECS container instances.
As your containers stop and start, the agent will update the Route 53 DNS records: it creates an SRV record when a container starts, and deletes the record once the container is stopped.
Another part of this method is a Lambda function that performs health checks on ECS container instances - and removes them from R53 in case of a failure.
You can read up more on this method, on our blog post here[3].
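For reference, an SRV record of the kind that agent maintains can be written like this; the zone ID, record name, and the priority/weight/port/target value are hypothetical:

    import boto3

    route53 = boto3.client("route53")

    # SRV values are "priority weight port target"; here a service1 container
    # listens on dynamic port 32768 on a (hypothetical) container instance.
    route53.change_resource_record_sets(
        HostedZoneId="Z1EXAMPLE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "service1.ecs.internal",
                "Type": "SRV",
                "TTL": 60,
                "ResourceRecords": [{"Value": "1 1 32768 ip-172-31-1-1.ec2.internal"}],
            },
        }]},
    )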
Using a 3rd party tool like Consul.io
Using tools like Consul.io on ECS will work, but it is not supported by AWS. So you are free to use it, but we - unfortunately - do not offer support for it.
So, in conclusion - there are a few ways of implementing service discovery on AWS ECS - the two ways I showed here that use AWS resources, and of course the way of using 3rd party applications.
"
You don't have an out-of-the-box solution in AWS, although it is possible with some effort, as described in https://aws.amazon.com/es/blogs/compute/service-discovery-an-amazon-ecs-reference-architecture/
You may also install Zuul + Ribbon + Eureka or Nginx + Consul and use an ALB to distribute traffic among the Zuul or Nginx instances.