Show VPC and subnets associated with an AWS ECS cluster - amazon-web-services

I've just created an ECS cluster via the AWS console. During that process, I specified the VPC I wanted it to use, as well as four subnets.
Now I can't find any indication, either in the console or via the CLI, that this actually happened. I see that the cluster exists, but I cannot get any details regarding its network disposition.
I've tried the aws CLI with all of the accepted --include arguments (SETTINGS, ATTACHMENTS, CONFIGURATION, et cetera), but aws ecs describe-clusters --clusters foocluster --include SETTINGS (for example) shows me nothing but the bare details.

An ECS cluster is not tied to any particular VPC, so there is no association between a cluster and a VPC. The VPC and subnets are specified on ECS tasks and services, not on the cluster.
The AWS console just helped you create a VPC as a separate entity from the cluster. You can launch your tasks and services into that VPC, but you can launch them into any other VPC as well.
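As a rough illustration, the network placement is specified per service (or per run-task call) rather than per cluster. This is a sketch assuming a task definition that uses awsvpc network mode; the service name, subnet IDs, and security group ID are placeholders:
aws ecs create-service \
  --cluster foocluster \
  --service-name my-service \
  --task-definition my-task \
  --desired-count 1 \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaa111,subnet-bbb222],securityGroups=[sg-0123456789abcdef0],assignPublicIp=DISABLED}"
The four subnets you picked in the wizard only become relevant once they are passed in a configuration like this.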

Related

Fargate cluster with dedicated VPC configuration

I did not quite understand configuring the VPC "CIDR block" while creating a Fargate cluster. Based on the link https://www.datadoghq.com/blog/aws-fargate-metrics/, there is a fleet that runs outside my VPC with the infrastructure to run my Fargate tasks.
What I don't understand is: if I configure a dedicated VPC for my Fargate cluster, how does it connect with the dedicated AWS-managed infrastructure for Fargate?
I did not find any documentation with an explanation.
After googling for some time, I found this: https://thenewstack.io/aws-fargate-through-the-lens-of-kubernetes/
The author states that the VPC configured during Fargate cluster creation acts as a proxy, and requests are forwarded to EC2 instances running in a VPC owned and managed by AWS. Configuring the VPC serves the purpose of controlling the IP range of the ENI attached to the containers. This is based on my observation; I need something more to back it up.
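One way to see this for yourself (a sketch; the cluster name and IDs are placeholders) is to inspect the ENI that Fargate places in your subnet:
# Find the ENI attached to a running Fargate task
aws ecs describe-tasks --cluster my-fargate-cluster --tasks TASK_ID \
  --query "tasks[0].attachments[0].details"
# The details include a networkInterfaceId; check which of your subnets it landed in
aws ec2 describe-network-interfaces --network-interface-ids eni-0123456789abcdef0 \
  --query "NetworkInterfaces[0].{Subnet:SubnetId,PrivateIp:PrivateIpAddress}"
The ENI lives in your VPC even though the host running the container does not, which matches the author's description.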

Why are ECS services trying to attach their ENIs to the EC2 instance, leading to the error "encountered error "RESOURCE:ENI""?

I have 3 AWS accounts with almost identical CloudFormation. In 2 of them I am able to run up to 8 ECS services per EC2 instance. Each service has its own ENI, and this ENI is not attached to anything, including the EC2 instance. Everything works.
In 1 of my AWS accounts, each ECS service is trying to attach its ENI to the EC2 instance, so I now see the "unable to place a task because no container instance met all of its requirements ... RESOURCE:ENI" error and I'm unable to deploy more than 2 services per instance. This is because each EC2 instance has a limit on the ENIs you can attach.
VPC trunking is not on in the working accounts, so my question is: why are the ECS services in this account attaching their ENIs to the EC2 instance? Is there an option somewhere that says "don't attach your ENI to anything"? Or is it maybe normal to attach the ENIs, and my working accounts should actually be attaching them but aren't?
The answer is that VPC trunking was actually on in the other accounts. Just because you can't see the awsvpcTrunking option checked in the ECS account settings doesn't mean that another user/role hasn't set it to on.
Or awsvpcTrunking may appear to be on when you check account settings in ECS, but that only displays the setting for your user and not for the role the ECS EC2 instances are using.
I needed to set account-wide VPC trunking and, more importantly, properly read the documentation.
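A sketch of checking and setting this with the CLI (trunking only applies to container instances launched after it is enabled, and requires suitable IAM permissions):
# Show the ENI trunking setting that is actually in effect for the calling principal
aws ecs list-account-settings --effective-settings
# Enable ENI trunking as the account-wide default
aws ecs put-account-setting-default --name awsvpcTrunking --value enabled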

AWS ECS Fargate Platform 1.4 error ResourceInitializationError: unable to pull secrets or registry auth: execution resource

I am using Docker containers with secrets on ECS without problems. After moving to Fargate and platform version 1.4 for EFS support, I started getting the following error.
Any help please?
ResourceInitializationError: unable to pull secrets or registry auth: execution resource retrieval failed: unable to retrieve secret from asm: service call has been retried 1 time(s): secret arn:aws:secretsmanager:eu-central-1:.....
Here's a checklist:
If your ECS tasks are in a public subnet (0.0.0.0/0 routes to Internet Gateway) make sure your tasks can call the "public" endpoint for Secrets Manager. Basically, outbound TCP/443.
If your ECS tasks are in a private subnet, make sure that one of the following is true: (a) your tasks connect to the Internet through a NAT gateway (0.0.0.0/0 routes to the NAT gateway), or (b) you have an AWS PrivateLink endpoint to Secrets Manager connected to your VPC (and to your subnets).
If you have an AWS PrivateLink connection, make sure the associated Security Group has inbound access from the security groups linked to your ECS tasks.
Make sure the secretsmanager:GetSecretValue IAM permission for the ARN(s) of the Secrets Manager entry (or entries) is granted to the ECS task execution role, which is the role ECS uses to pull secrets and registry auth when the task starts. A minimal policy sketch follows.
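A minimal sketch of attaching such a policy to the task execution role; the role name, policy name, account ID, and secret ARN are placeholders:
aws iam put-role-policy \
  --role-name my-task-execution-role \
  --policy-name AllowGetMySecret \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:eu-central-1:123456789012:secret:my-secret-*"
    }]
  }'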
Edit: Here's another excellent answer - https://stackoverflow.com/a/66802973
I had the same error message, but the checklist above misses the cause of my problem. If you are using VPC endpoints to access AWS services (i.e., Secrets Manager, ECR, SQS, etc.), then those endpoints MUST permit access from the security group associated with the ECS tasks or instances running in that VPC subnet.
Another gotcha: if you are using EFS to host volumes, ensure that your volumes can be mounted by the same security group identified above. Go to EFS, select the appropriate file system, then the Network tab, then Manage.
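If you go the PrivateLink route, this is a sketch of creating the Secrets Manager interface endpoint; the VPC, subnet, and security group IDs are placeholders, and the region should match your secrets:
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.eu-central-1.secretsmanager \
  --subnet-ids subnet-aaa111 subnet-bbb222 \
  --security-group-ids sg-0123456789abcdef0 \
  --private-dns-enabled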

AWS EKS deploy to multiple VPCs

I'm a bit confused about how AWS EKS works; excuse me for my ignorance.
I have 4 VPCs: one for shared services (GitLab etc.), one for dev, one for staging, and one for prod.
There are multiple subnets in each VPC for different clients (a, b, c).
Currently I just have pipelines that build images and deploy to an EC2 instance in a specific VPC/subnet. The pipeline SSHes to the server based on the gitlab-ci file.
I would like to change that and have a Kubernetes cluster, so that when the image updates, Kubernetes deploys my image to the specified VPC and subnets. I know I can hook up my registry to the cluster and have it deploy on update; that's not my question. My question is: how does EKS work across VPCs and subnets?
Is this possible? It seems like the EKS cluster can only be in one VPC and can only deploy to that VPC's subnets.
Am I not understanding correctly?
You are correct.
The EKS control plane can only run in a single VPC and can only be associated with subnets in that VPC.
I raised feature requests with AWS a while back to support multi-VPC and multi-region EKS, but no news about them so far.
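You can see this constraint in the create-cluster call itself: the VPC is implied by the subnets you pass, and they must all belong to one VPC. A sketch with placeholder names and IDs:
aws eks create-cluster \
  --name my-cluster \
  --role-arn arn:aws:iam::123456789012:role/eksClusterRole \
  --resources-vpc-config subnetIds=subnet-aaa111,subnet-bbb222,securityGroupIds=sg-0123456789abcdef0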

AWS ECS Error when running task: No Container Instances were found in your cluster

I'm trying to deploy a Docker container image to AWS using ECS, but the EC2 instance is not being created. I have scoured the internet looking for an explanation as to why I'm receiving the following error:
"A client error (InvalidParameterException) occurred when calling the RunTask operation: No Container Instances were found in your cluster."
Here are my steps:
1. Pushed a Docker image (built FROM ubuntu) to my Amazon ECR repository.
2. Registered an ECS Task Definition:
aws ecs register-task-definition --cli-input-json file://path/to/my-task.json
3. Ran the task:
aws ecs run-task --task-definition my-task
Yet, it fails.
Here is my task:
{
  "family": "my-task",
  "containerDefinitions": [
    {
      "environment": [],
      "name": "my-container",
      "image": "my-namespace/my-image",
      "cpu": 10,
      "memory": 500,
      "portMappings": [
        {
          "containerPort": 8080,
          "hostPort": 80
        }
      ],
      "entryPoint": [
        "java",
        "-jar",
        "my-jar.jar"
      ],
      "essential": true
    }
  ]
}
I have also tried using the management console to configure a cluster and services, yet I get the same error.
How do I configure the cluster to have EC2 instances, and what kind of container instances do I need to use? I thought this whole process was supposed to create the EC2 instances to begin with!
I figured this out after a few more hours of investigating. Amazon, if you are listening, you should state this somewhere in your management console when creating a cluster or adding instances to the cluster:
"Before you can add ECS instances to a cluster you must first go to the EC2 Management Console and create ecs-optimized instances with an IAM role that has the AmazonEC2ContainerServiceforEC2Role policy attached"
Here is the rigmarole:
1. Go to your EC2 Dashboard, and click the Launch Instance button.
2. Under Community AMIs, search for ecs-optimized and select the one that best fits your project's needs. Any will work. Click Next.
3. When you get to Configure Instance Details, click on the create new IAM role link and create a new role called ecsInstanceRole.
4. Attach the AmazonEC2ContainerServiceforEC2Role policy to that role.
5. Then, finish configuring your ECS instance. NOTE: If you are creating a web server, you will want to create a security group that allows access to port 80.
After a few minutes, when the instance is initialized and running, you can refresh the ECS Instances tab of the cluster you are trying to add instances to.
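If you prefer the CLI, this is a rough sketch of creating that instance role and profile; the role and profile names simply mirror the console default:
aws iam create-role --role-name ecsInstanceRole \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'
aws iam attach-role-policy --role-name ecsInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceforEC2Role
aws iam create-instance-profile --instance-profile-name ecsInstanceRole
aws iam add-role-to-instance-profile --instance-profile-name ecsInstanceRole --role-name ecsInstanceRole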
I ran into this issue when using Fargate. I fixed it when I explicitly defined launchType="FARGATE" when calling run_task.
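For reference, a hedged AWS CLI equivalent of that fix; the cluster, task definition, subnet, and security group are placeholders:
aws ecs run-task \
  --cluster my-cluster \
  --task-definition my-task \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaa111],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"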
Currently, the Amazon AWS web interface can automatically create instances with the correct AMI and the correct name so they will register to the correct cluster.
Even though all instances were created by Amazon with the correct settings, my instances wouldn't register. On the Amazon AWS forums I found a clue: it turns out that your cluster's instances need internet access, and if your private VPC does not have an internet gateway, they won't be able to connect.
The fix
In the VPC dashboard you should create a new Internet Gateway and connect it to the VPC used by the cluster.
Once attached, you must update (or create) the route table for the VPC and add as the last line
0.0.0.0/0 igw-24b16740
where igw-24b16740 is the ID of your freshly created internet gateway.
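A CLI sketch of the same fix; the VPC and route table IDs are placeholders, and the gateway ID comes back from the first call:
aws ec2 create-internet-gateway
aws ec2 attach-internet-gateway --internet-gateway-id igw-24b16740 --vpc-id vpc-0123456789abcdef0
aws ec2 create-route --route-table-id rtb-0123456789abcdef0 \
  --destination-cidr-block 0.0.0.0/0 --gateway-id igw-24b16740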
Other suggested checks
Selecting the suggested AMI which was specified for the given region solved my problem.
To find out the AMI - check Launching an Amazon ECS Container Instance.
By default, all EC2 instances are added to the default cluster, so the name of the cluster also matters.
See point 10 at Launching an Amazon ECS Container Instance.
More information available in this thread.
Just in case someone else is blocked by this problem as I was...
I've tried everything here and it didn't work for me.
Besides what was said above regarding the EC2 instance role, in my case it only worked once I also configured the EC2 instance with one simple piece of information, using a user data script like this:
#!/bin/bash
cat <<'EOF' >> /etc/ecs/ecs.config
ECS_CLUSTER=quarkus-ec2
EOF
Putting the name of the ECS cluster I created into this ECS config file resolved my problem. Without this config, the ECS agent log on the EC2 instance showed an error saying it was not possible to connect to ECS; after adding it, the EC2 instance became visible to the ECS cluster.
After doing this, the EC2 instance was available to my ECS cluster.
The AWS documentation says that this part is optional, but in my case it didn't work without this "optional" configuration.
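For completeness, a sketch of passing that script at launch time; the AMI ID, instance type, and file name are placeholders:
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --iam-instance-profile Name=ecsInstanceRole \
  --user-data file://ecs-userdata.sh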
When this happens, you need to look at the following:
Your EC2 instances should have a role with the AmazonEC2ContainerServiceforEC2Role managed policy attached to it.
Your EC2 instances should be running an AMI that is ecs-optimized (you can check this in the EC2 dashboard).
Your instances sit in private subnets with no public IPs assigned, and you have neither an interface VPC endpoint configured nor a NAT gateway set up (this combination blocks the agent from reaching ECS).
Most of the time, this issue appears because of the misconfigured VPC. According to the Documentation:
QUOTE: If you do not have an interface VPC endpoint configured and your container instances do not have public IP addresses, then they must use network address translation (NAT) to provide this access.
To create a VPC endpoint: follow the documentation here
To create a NAT gateway: follow the documentation here
These are the reasons why you don't see the EC2 instances listed in the ECS dashboard.
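If you take the endpoint route, this is a rough sketch of creating the interface endpoints the ECS agent needs (com.amazonaws.<region>.ecs-agent, ecs-telemetry, and ecs); the region, VPC, subnet, and security group IDs are placeholders:
for svc in ecs-agent ecs-telemetry ecs; do
  aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.us-east-1.$svc \
    --subnet-ids subnet-aaa111 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled
done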
If you have come across this issue after creating the cluster:
Go to the ECS instance in the EC2 instances list and check the IAM role that you have assigned to the instance. You can identify the instances easily because the instance name starts with "ECS Instance".
After that, click on the IAM role and it will take you to the IAM console. Attach the AmazonEC2ContainerServiceforEC2Role policy from the permission policy list and save the role.
Your instances will be available in the cluster shortly after you save it.
The real issue is a lack of permissions. As long as you create and assign an IAM role with the AmazonEC2ContainerServiceforEC2Role policy attached, the problem goes away.
I realize this is an older thread, but I stumbled on it after seeing the error the OP mentioned while following this tutorial.
Changing to an ecs-optimized AMI image did not help. My VPC already had a route 0.0.0.0/0 pointing to the subnet. My instances were added to the correct cluster, and they had the proper permissions.
Thanks to #sanath_p's link to this thread, I found a solution and took these steps:
Copied my Autoscaling Group's configuration
Set IP address type under the Advanced settings to "Assign a public IP address to every instance"
Updated my Autoscaling Group to use this new configuration.
Refreshed my instances under the Instance refresh tab.
Another possible cause that I ran into was updating my ECS cluster AMI to an "Amazon Linux 2" AMI instead of an "Amazon Linux AMI", which caused my EC2 user_data launch script to not work.
If you are using an instance image other than the ECS-optimized AMI, do the following:
Install the ECS agent (see the ECS agent download link), then add the following line to /etc/ecs/ecs.config:
ECS_CLUSTER=REPLACE_YOUR_CLUSTER_NAME
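On Amazon Linux 2, for example, a minimal sketch (the extras topic and service names assume Amazon Linux 2; adjust for other distributions):
# Tell the agent which cluster to register with
sudo mkdir -p /etc/ecs
echo "ECS_CLUSTER=REPLACE_YOUR_CLUSTER_NAME" | sudo tee -a /etc/ecs/ecs.config
# Install and start the agent (ecs-init)
sudo amazon-linux-extras install -y ecs
sudo systemctl enable --now ecs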
The instances in the VPC will also need to communicate with ECR.
To do this, the security group attached to your container instances will need an outbound rule allowing 0.0.0.0/0 (at minimum, outbound HTTPS).