EC2 instance autoscaling in different regions? - amazon-web-services

I'm pretty new to AWS and have already set up an EC2 instance running my Node.js server. I created an AMI and added it to an Auto Scaling group. Now I want to set up a load balancer that has one IP address and uses different Auto Scaling groups in different regions. It should connect the user to the region with the lowest latency and consistently send and receive WebSocket messages from that server.
But all I see in my settings is the VPC for the European region. Do I have to set up a new VPC? Or is what I'm trying to do even possible?
Hope somebody can help me out, cheers!

It is possible to do that using Route 53. You create your load balancers in the regions you want, with their instances running the same application, and set up Route 53 to route requests based on latency or geolocation.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/routing-policy.html
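For illustration, here is a rough boto3 sketch of latency-based alias records pointing the same name at two regional load balancers. The hosted zone ID, domain name, load balancer DNS names, and their canonical hosted zone IDs are all placeholders, not values from the question.

```python
# Hypothetical sketch: latency-based alias records that point one domain name
# at regional load balancers. All IDs and DNS names are placeholders.
import boto3

route53 = boto3.client("route53")

regional_lbs = {
    # region -> (ALB DNS name, ALB's canonical hosted zone ID), placeholders
    "eu-central-1": ("my-app-eu.eu-central-1.elb.amazonaws.com", "Z215JYRZR1TBD5"),
    "us-east-1": ("my-app-us.us-east-1.elb.amazonaws.com", "Z35SXDOTRQ7X7K"),
}

changes = []
for region, (dns_name, lb_zone_id) in regional_lbs.items():
    changes.append({
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",         # same name in every region
            "Type": "A",
            "SetIdentifier": f"app-{region}",  # must be unique per record
            "Region": region,                  # enables latency-based routing
            "AliasTarget": {
                "HostedZoneId": lb_zone_id,
                "DNSName": dns_name,
                "EvaluateTargetHealth": True,
            },
        },
    })

route53.change_resource_record_sets(
    HostedZoneId="Z_EXAMPLE_ZONE_ID",          # your public hosted zone
    ChangeBatch={"Changes": changes},
)
```

The client always resolves the same DNS name; Route 53 simply answers with the record for the region that currently has the lowest latency to the resolver.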

Related

Unable to connect to AWS ALB through client VPN

I am creating a staging environment using AWS and I want it to be accessible through VPN only.
The environment was created using Fargate.
I have:
One front load balancer connected to several front tasks.
One back load balancer connected to several back tasks.
I created the VPN client endpoint.
I can connect to the VPN and SSH to instances in the same security group as my front and back load balancers (I tried starting an EC2 instance with the same security group and it works).
But for some reason I am unable to connect to the ALBs using their DNS name or the name used in the Route 53 record.
Did I miss something that should be configured for DNS to work on AWS resources through the VPN?
I hope this was detailed enough. Thanks in advance.
It sounds like you created a public, Internet-facing ALB. For the ALB to work internally in the VPC (and only in the VPC), you need to create an internal ALB.
See the "Scheme" setting in the documentation.

How does Elastic Beanstalk support multiple EC2 instances for a web server?

Maybe I am not understanding what exactly Elastic Beanstalk should be used for, so my question is this:
How does Elastic Beanstalk support multiple EC2 instances in the same Elastic Beanstalk environment if the instances act as a backend web server?
For example, if I have a server that has the end point www.example.com/api/endpoint, does Elastic Beanstalk allow me to have more than 1 instance (for high availability) with that same endpoint? Is that possible? If not, how do you make use of the extra EC2 instances if they all have different domains?
How do I send requests to the Elastic Beanstalk environment (from a front end web app) that all instances can understand?
You're going to need to watch some videos.
Elastic Beanstalk is for lazy developers who don't want to learn cloud technology ;) I know because I was one once!
Elastic Beanstalk creates a VPC with, by default, one subnet per Availability Zone (AZ); how many subnets you get depends on how many AZs the region has. The VPC it creates has an Internet Gateway attached via the route tables, making it public.
In the subnet(s), Elastic Beanstalk spins up an EC2 instance to host your website.
Elastic Beanstalk also creates a security group opening port 80 and/or 443, and because the VPC is public and the security group is open, the EC2 instance is reachable from the internet.
If you've chosen an Auto Scaling Group (ASG), the ASG will spin EC2 instances up or down, typically based on CPU, but you can use other CloudWatch metrics.
With an ASG, Elastic Beanstalk will also spin up an Elastic Load Balancer (ELB) that distributes the traffic coming in through the Internet Gateway to the instances. The ELB is registered with the ASG, which is how it knows when the ASG has added or removed EC2 instances. The ELB can route traffic at Layer 4 (IP addresses and ports) or Layer 7 (HTTP request information such as path and headers) to the EC2 instances currently registered in a Target Group.
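To make those pieces concrete, here is a hedged boto3 sketch of creating a load-balanced Elastic Beanstalk environment with an Auto Scaling range; the application name, environment name, solution stack string, and sizes are assumptions for illustration only.

```python
# Hypothetical sketch: create a load-balanced Elastic Beanstalk environment.
# Application name, environment name, and solution stack are placeholders.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_environment(
    ApplicationName="my-api",
    EnvironmentName="my-api-prod",
    SolutionStackName="64bit Amazon Linux 2 v5.8.0 running Node.js 18",  # example stack
    OptionSettings=[
        # Run behind a load balancer instead of a single instance
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "LoadBalancerType", "Value": "application"},
        # Auto Scaling group size for high availability
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "2"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "4"},
    ],
)
```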
if I have a server that has the end point www.example.com/api/endpoint, does Elastic Beanstalk allow me to have more than 1 instance (for high availability
Yes! And it's actually quite tricky to demonstrate, because you hit the same URL and need to get the ID of the different instances in the ASG.
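One way to demonstrate it (a sketch, not something from this answer): have each instance report its own instance ID from the EC2 metadata service, then hit the load balancer's URL repeatedly and watch the ID change.

```python
# Hypothetical demo: a tiny HTTP server that returns this instance's ID,
# read from the EC2 instance metadata service (IMDSv1 path shown for brevity).
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

def instance_id() -> bytes:
    # 169.254.169.254 is the EC2 instance metadata endpoint
    with urlopen("http://169.254.169.254/latest/meta-data/instance-id", timeout=2) as r:
        return r.read()

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(instance_id())  # e.g. b"i-0abc123..."

if __name__ == "__main__":
    HTTPServer(("", 8080), Handler).serve_forever()
```

Curling the shared URL a few times should then return different instance IDs as the ELB spreads requests across the target group.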
The best resource is Ryan Kroonenberg's A Cloud Guru "Solutions Architect Associate" video on VPCs, Chapter 9: https://acloudguru.com/course/aws-certified-solutions-architect-associate-saa-c02-4KYV (you can find a years-old torrent of it)
This diagram isn't 100% accurate; the ASGs actually stretch across AZs.

Do I need a load balancer in an AWS Elastic Beanstalk environment?

My applications run on Elastic Beanstalk and communicate purely with internal services like Kinesis and DynamoDB. There is no web traffic needed. Do I need an Elastic Load Balancer in order to scale my instances up and down? I want to add and remove instances purely based on some CloudWatch metrics. Do I need the ELB to do managed updates, etc.?
If there is no traffic to the service then there is no need to have a load balancer.
In fact the load balancer is primarily to distribute inbound traffic such as web requests.
Auto scaling can still be accomplished without a load balancer, with scaling based on whatever CloudWatch metric you want to use. In fact, this is generally how consumer-based applications tend to work.
To create this without a load balancer you would want to configure your environment as a worker environment.
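For the underlying mechanism, here is a hedged sketch of a target-tracking policy on an Auto Scaling group driven by a custom CloudWatch metric; the group name, metric, namespace, and target value are all assumptions. Elastic Beanstalk exposes similar scaling triggers through its own option settings.

```python
# Hypothetical sketch: target-tracking scaling policy driven by a custom
# CloudWatch metric instead of load-balancer request counts.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-worker-asg",        # placeholder ASG name
    PolicyName="scale-on-backlog",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "MetricName": "BacklogPerInstance",  # placeholder custom metric
            "Namespace": "MyApp",                # placeholder namespace
            "Statistic": "Average",
        },
        "TargetValue": 10.0,                     # keep ~10 items per instance
    },
)
```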
@Chris already answered, but I would like to complement his answer with the following:
There is no web traffic needed?
Even if you communicate with Kinesis and DynamoDB only, your instances still need to be able to access the internet to reach those AWS services. So outbound web traffic is required from your instances; direct inbound traffic to your instances is not needed.
To fully separate your EB environment from the internet, you should have a look at the following:
Using Elastic Beanstalk with Amazon VPC
The document describes what can and can't be done when using private subnets.
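If you go the private-subnet route, a gateway VPC endpoint lets instances reach DynamoDB without any internet access. A sketch with placeholder VPC and route table IDs:

```python
# Hypothetical sketch: a gateway VPC endpoint so private-subnet instances can
# reach DynamoDB without an internet or NAT gateway. IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",
    ServiceName="com.amazonaws.eu-central-1.dynamodb",
    RouteTableIds=["rtb-0def5678"],   # route table of the private subnets
)
```

Kinesis, by contrast, is reached through an interface VPC endpoint rather than a gateway endpoint.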

Networking Between Tasks in AWS ECS Fargate

I'm trying to set up a cluster with several different tasks that need to be able to communicate with each other. I have turned on Service Discovery for each task, and I see all of the Route 53 DNS entries in my private hosted zone get updated as I spin up new tasks, but for whatever reason, when I try to use the domain name of a service (wordpress.local), my other containers cannot resolve it. They are all in the same Availability Zone and the same subnet. I'm not totally certain what else I need to do to get these tasks to communicate with each other, aside from setting up a target group in my load balancer, which seems unnecessary since I have Service Discovery turned on...
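For reference, here is a hedged boto3 sketch of the Service Discovery (Cloud Map) setup the question describes: a private DNS namespace plus a service whose task IPs are registered as A records. The VPC ID, namespace ID, and names are placeholders.

```python
# Hypothetical sketch of the Service Discovery (Cloud Map) setup described
# above. VPC ID, namespace ID, and names are placeholders.
import boto3

sd = boto3.client("servicediscovery")

# Creating the namespace is asynchronous; the call returns an OperationId.
sd.create_private_dns_namespace(
    Name="local",                      # tasks resolve <service>.local
    Vpc="vpc-0abc1234",
)

sd.create_service(
    Name="wordpress",                  # gives wordpress.local in the namespace
    NamespaceId="ns-exampleid",        # ID of the namespace created above
    DnsConfig={
        "DnsRecords": [{"Type": "A", "TTL": 60}],
        "RoutingPolicy": "MULTIVALUE",
    },
)
```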

What is AWS load balancing? Should I create multiple EC2 instances with the same files?

I am new to AWS. I would like to activate load balancing. I need to know: should I create multiple EC2 instances with the same files, or is one instance enough? What will happen during heavy traffic?
AWS Elastic Load Balancing (ELB) is for distributing traffic across multiple EC2 instances. You register the instances with the ELB. Even when instances fail and new instances are added to the ELB, the traffic is evenly distributed among the remaining active registered instances. Please see the documentation: AWS Elastic Load Balancing
If you have only one instance, the ELB will send traffic only to that one instance. But what is the use of an ELB then? It serves little purpose with only one instance.
If you need to scale out as traffic increases, you need to use AWS Auto Scaling: AWS Auto Scaling
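As a hedged sketch (instance IDs, the group name, and the target group ARN are placeholders), this is roughly how instances are registered with a load balancer target group manually, and how an Auto Scaling group can be attached so it registers new instances for you:

```python
# Hypothetical sketch: register EC2 instances with an ALB target group, and
# attach an Auto Scaling group so new instances register automatically.
# All names, IDs, and ARNs are placeholders.
import boto3

elbv2 = boto3.client("elbv2")
autoscaling = boto3.client("autoscaling")

target_group_arn = (
    "arn:aws:elasticloadbalancing:eu-central-1:123456789012:targetgroup/web/abc123"
)

# Manual registration of existing instances
elbv2.register_targets(
    TargetGroupArn=target_group_arn,
    Targets=[{"Id": "i-0aaa1111"}, {"Id": "i-0bbb2222"}],
)

# With Auto Scaling, the group registers and deregisters instances for you
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="web-asg",
    TargetGroupARNs=[target_group_arn],
)
```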