AWS-ECS - Auto scaling with awsvpc mode - amazon-web-services

I am facing an issue with the AWS ECS service.
I launch my ECS cluster with 2 instances, using the EC2 launch type, not Fargate. I am trying to use awsvpc networking for the ECS containers. More info is here.
For container load balancing, the target type is IP. It is not editable.
Now the problem is: an Auto Scaling Group cannot be created for this target group to scale the cluster.
How do you handle this situation?

Simply leave out the load balancing configuration for the Auto Scaling group.
awsvpc creates a separate network interface whose IP address is registered to the Target Group. This target group has to be of the ip-address type.
Auto Scaling Groups use the instance target group type, which registers the default network interface of the EC2 instances.
Since the Task will get its own IP address, which is separate from the IP address of the EC2 instance, there is no need to configure load balancing for the EC2 instances themselves.
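To make this concrete, here is a sketch of what the split looks like on the CLI. All names, IDs, and ARNs below are hypothetical placeholders; the point is that the ip-type target group is wired to the ECS service, while the Auto Scaling group for the container instances is created without any target group attachment:

```shell
# 1. Target group of type "ip" -- the tasks' ENI addresses get registered
#    here by ECS, not the instances. (All IDs are placeholders.)
aws elbv2 create-target-group \
  --name my-awsvpc-tg \
  --protocol HTTP --port 8080 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip

# 2. The ECS service ties the task containers to that target group.
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-taskdef:1 \
  --desired-count 2 \
  --launch-type EC2 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaa],securityGroups=[sg-bbb]}" \
  --load-balancers "targetGroupArn=<my-awsvpc-tg ARN>,containerName=web,containerPort=8080"

# 3. The Auto Scaling group for the container instances is created
#    WITHOUT --target-group-arns, so the instances themselves are never
#    registered with the load balancer.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name ecs-cluster-asg \
  --launch-template "LaunchTemplateName=ecs-lt" \
  --min-size 2 --max-size 4 --desired-capacity 2 \
  --vpc-zone-identifier "subnet-aaa,subnet-ccc"
```

The ASG only keeps the cluster stocked with container instances; load balancing happens entirely at the task level.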

This is because of awsvpc mode: the awsvpc network mode associates each task with an elastic
network interface, not an Amazon EC2 instance, so you must choose ip. Here is what AWS says about the awsvpc network mode:
Services with tasks that use the awsvpc network mode (for example,
those with the Fargate launch type) only support Application Load
Balancers and Network Load Balancers. Classic Load Balancers are not
supported. Also, when you create any target groups for these services,
you must choose ip as the target type, not instance. This is because
tasks that use the awsvpc network mode are associated with an elastic
network interface, not an Amazon EC2 instance.
With Fargate you do not manage EC2 instances at all; the whole purpose of Fargate is not to manage servers, so why would you attach instance auto scaling? You scale the services instead.
AWS Fargate is a technology that you can use with Amazon ECS to run
containers without having to manage servers or clusters of Amazon EC2
instances. With AWS Fargate, you no longer have to provision,
configure, or scale clusters of virtual machines to run containers.
This removes the need to choose server types, decide when to scale
your clusters, or optimize cluster packing.
https://aws.amazon.com/blogs/compute/aws-fargate-a-product-overview/

Related

Differences between EC2 Auto Scaling Group (EC2 ASG) and Elastic Container Service (ECS)

From what I've read so far:
EC2 ASG is a simple way to scale your server: it runs more copies of it, with a load balancer in front of the EC2 instance pool.
ECS is more like Kubernetes: you use it when you need to deploy multiple services in Docker containers that work with each other internally to form an application, and auto scaling is a feature of ECS itself.
Are there any differences I'm missing here? Because if they work as I understand, ECS is almost always the superior choice.
You are right: in a very simple sense, EC2 Auto Scaling Groups are a way to add/remove (register/unregister) EC2 instances behind a Classic Load Balancer or Target Groups (ALB/NLB).
ECS has two types of scaling, as does any container orchestration platform:
Cluster Auto Scaling: adds/removes EC2 instances in a cluster when tasks are pending to run
Service Auto Scaling: adds/removes tasks in a service based on demand, using the Application Auto Scaling service behind the scenes
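The second type can be sketched with the Application Auto Scaling CLI. Cluster and service names here are hypothetical; the sketch registers the service's desired count as a scalable target and attaches a target-tracking policy on average CPU:

```shell
# Register the ECS service's DesiredCount as a scalable target
# (names are placeholders).
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --min-capacity 2 --max-capacity 10

# Attach a target-tracking policy: keep average service CPU near 60%.
aws application-autoscaling put-scaling-policy \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-service \
  --policy-name cpu-target-tracking \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration '{
      "TargetValue": 60.0,
      "PredefinedMetricSpecification": {
        "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
      }
    }'
```

Note this scales tasks, not instances; keeping enough instances around for those tasks is the cluster-scaling half of the story.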

Cannot access ports in AWS ECS EC2 instance

I am running an AWS ECS service which is running a single task that has multiple containers.
Tasks are run in awsvpc network mode. (EC2, not Fargate)
Container ports are mapped in the ECS task definition.
I added inbound rules in the EC2 Container instance security group (for ex: TCP 8883 -> access from anywhere). Also in the VPC network security group.
When I try to access the ports using Public IP of the instance from my remote PC, I get connection refused.
For ex: nc -z <PublicIP> <port>
When I SSH into the EC2 instance and try netstat, I can see SSH port 22 is listening, but not the container ports (ex: 8883).
Also, when I do docker ps inside instance, Ports column is empty.
I could not figure out what configuration I missed. Kindly help.
PS: The destination (public IP) is reachable from the remote PC, just not on the port.
I am running an AWS ECS service which is running a single task that
has multiple containers. Tasks are run in awsvpc network mode. (EC2,
not Fargate)
EC2, not Fargate: horses for courses. A task run in awsvpc network mode has its own elastic network interface (ENI), a primary private IP address, and an internal DNS hostname. So how would you access that container through the EC2 instance's public IP?
The task networking features provided by the awsvpc network mode give
Amazon ECS tasks the same networking properties as Amazon EC2
instances. When you use the awsvpc network mode in your task
definitions, every task that is launched from that task definition
gets its own elastic network interface (ENI), a primary private IP
address, and an internal DNS hostname. The task networking feature
simplifies container networking and gives you more control over how
containerized applications communicate with each other and other
services within your VPCs.
task-networking
So you need to place a load balancer in front and configure your service behind it.
when you create any target groups for these services, you must choose
ip as the target type, not instance. This is because tasks that use
the awsvpc network mode are associated with an ENI, not with an Amazon
EC2 instance.
So either something is wrong with the configuration, or there is a misunderstanding of the network modes. I recommend reading this article.
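One way to see where the task is actually listening is to look up its ENI details. Cluster and service names below are hypothetical; the sketch fetches the first task of the service and dumps its network attachment:

```shell
# Find a task of the service (names are placeholders).
TASK_ARN=$(aws ecs list-tasks --cluster my-cluster --service-name my-service \
           --query 'taskArns[0]' --output text)

# Dump the task's ENI attachment details. The "privateIPv4Address" entry
# is where the container ports listen -- not on the EC2 instance's own
# interface, which is why netstat on the host and the instance's public
# IP show nothing.
aws ecs describe-tasks --cluster my-cluster --tasks "$TASK_ARN" \
  --query 'tasks[0].attachments[0].details' --output table
```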
when I do docker ps inside instance, Ports column is empty.
If the Ports column is empty, it might be due to the following.
The host and awsvpc network modes offer the highest networking
performance for containers because they use the Amazon EC2 network
stack instead of the virtualized network stack provided by the bridge
mode. With the host and awsvpc network modes, exposed container ports
are mapped directly to the corresponding host port (for the host
network mode) or the attached elastic network interface port (for the
awsvpc network mode), so you cannot take advantage of dynamic host
port mappings.
Keep the following in mind:
It’s available with the latest variant of the ECS-optimized AMI. It
only affects creation of new container instances after opting into
awsvpcTrunking. It only affects tasks created with awsvpc network mode
and EC2 launch type. Tasks created with the AWS Fargate launch type
always have a dedicated network interface, no matter how many you
launch.
optimizing-amazon-ecs-task-density-using-awsvpc-network-mode
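For reference, the awsvpcTrunking opt-in mentioned above is an ECS account setting; a sketch of enabling and verifying it:

```shell
# Opt the account into ENI trunking (affects only container instances
# created afterwards, with awsvpc network mode and the EC2 launch type).
aws ecs put-account-setting --name awsvpcTrunking --value enabled

# Verify the effective setting.
aws ecs list-account-settings --name awsvpcTrunking --effective-settings
```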

How to add a Fargate Service to Inbound Security Rules?

I have a Fargate Service running in AWS. I use it to run multiple tasks. Some of the tasks connect to an RDS database to query the database.
How can I add the Fargate Service to the inbound rules of a Security Group for the RDS database? Is there a way to associate an Elastic IP with the Fargate cluster?
I might have misunderstood something here... but ECS allows you to specify a security group at the service level.
Go to https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html
And search for the --network-configuration parameter
So surely you just need to set the source on the inbound rule of the RDS security group to that security group's ID?
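A sketch of that rule on the CLI, with both security group IDs as hypothetical placeholders (port 5432 assumes a PostgreSQL RDS instance; use your engine's port):

```shell
# Allow the Fargate service's security group to reach the RDS instance,
# using the SG ID -- not an IP -- as the traffic source.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0aaaaaaaaaaaaaRDS \
  --protocol tcp --port 5432 \
  --source-group sg-0bbbbbbbbbbbbSVC
```

Referencing the security group as the source means the rule keeps working no matter which private IPs the Fargate tasks get.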
Fargate doesn't support associating Elastic IPs with clusters. Clusters that run in Fargate mode operate on instances that are not yours; it's the opposite of classic ECS stacks, so you can't manage the networking of the host instances.
There is a way to associate an IP with the stack: put a Network Load Balancer in front of the cluster. Then you can add a rule that allows connections to your cluster through the NLB.

aws auto scaling group + elb v2 target groups

I'm using an AWS Application Load Balancer (the new ELB version, with target groups),
which allows attaching several ports of a single server to the balancer.
If I attach this Application Load Balancer to an EC2 Auto Scaling group, then for each new instance added, only one port of the newly created machine is registered.
Is there any way to attach several ports of a newly created instance to the balancer?
You are correct that traditional Auto Scaling launches a new Amazon EC2 instance, and then associates that EC2 instance with the Load Balancer on a single port.
In a Microservice environment (where there are multiple services on each instance, each operating behind a different port), it is recommended to use the Amazon EC2 Container Service that manages the deployment of containers across multiple EC2 instances.
The Amazon EC2 Container Service also features Service Auto Scaling, which can automatically deploy new containers based upon metric thresholds. This is, effectively, the same as traditional Auto Scaling but at the Container level rather than the Instance level.
When adding new containers, it should be able to add the new containers to the Application Load Balancer. (I haven't tried it myself, but that's the theory!)

What is AWS load balancing? Should I create multiple ec2 instance with same files?

I am new to AWS. I would like to activate load balancing. I need to know: should I create multiple EC2 instances with the same files, or is only one instance enough? What will happen under heavy traffic?
AWS Elastic Load balancer (ELB) is for distributing traffic across multiple EC2 instances. You will be registering the instances with the ELB. Even when instances fail and new instances are added to ELB, the traffic is evenly distributed among the remaining active registered instances. Please see the documentation: AWS Elastic Load Balancing
If you have only one instance, the ELB will send traffic only to that one. But what is the use of an ELB then? It serves no purpose with only one instance.
If you need to scale out as traffic increases, you need to use AWS Auto Scaling: AWS Auto Scaling
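Putting the two together, a minimal sketch (all names and the target group ARN are hypothetical): launch identical instances from one launch template and let the Auto Scaling group register each new instance with the load balancer's target group.

```shell
# The launch template bakes in the "same files" (AMI, user data), so
# every scaled-out instance is a copy; the ASG registers each one with
# the target group the load balancer forwards to.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template "LaunchTemplateName=web-template" \
  --min-size 2 --max-size 6 --desired-capacity 2 \
  --target-group-arns "<web target group ARN>" \
  --vpc-zone-identifier "subnet-aaa,subnet-bbb"
```

Under heavy traffic the ASG adds instances up to max-size and the ELB spreads requests across all of them; when traffic drops, it scales back in.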