Fargate cluster with dedicated VPC configuration - amazon-web-services

I don't quite understand how to configure the VPC "CIDR block" when creating a Fargate cluster. According to https://www.datadoghq.com/blog/aws-fargate-metrics/, there is a fleet running outside my VPC that provides the infrastructure to run my Fargate tasks.
What I don't understand is what happens when I configure a dedicated VPC for my Fargate cluster: how does it connect to the dedicated AWS-managed infrastructure for Fargate?
I did not find any documentation with an explanation.

After googling for some time, I found this: https://thenewstack.io/aws-fargate-through-the-lens-of-kubernetes/
The author states that the VPC configured during Fargate cluster creation acts as a proxy, and requests are forwarded to EC2 instances running in a VPC owned and managed by AWS. Configuring the VPC serves to control the IP range of the ENIs attached to the containers. This is based on my observation; I need something more authoritative to back it up.
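That reading matches how awsvpc networking behaves in practice: the task's ENI is created in your VPC, so its private IP is always drawn from the CIDR block you configured, even though the host runs in an AWS-owned VPC. A minimal sketch with the standard library (the CIDR and IP below are hypothetical values for illustration):

```python
import ipaddress

# Hypothetical: the CIDR block configured for the VPC your Fargate tasks use,
# and a private IP that AWS assigned to a task's ENI at launch time.
vpc_cidr = ipaddress.ip_network("10.0.0.0/16")
task_eni_ip = ipaddress.ip_address("10.0.42.7")

# The ENI lives in *your* VPC even though the underlying host does not,
# so the task's IP always falls inside your configured CIDR block.
print(task_eni_ip in vpc_cidr)  # → True
```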

Related

Calling AWS Services from a container on ECS + EC2. Connection Timeout

I want to run an ECS task on an EC2 instance, and I want that task/container to be able to call other AWS services via Boto3.
When I run the same task on Fargate, it works as expected and I am able to call other AWS services from the task/container. When I run the ECS task on EC2, it gives me connection timeout errors when attempting to call other AWS services. (The specific errors depend on the service.)
In an attempt to rule out any permission issues, I am running in a public subnet and using a single IAM role (with the AdministratorAccess policy) for the EC2 instance, ECS task role, and ECS task execution role.
The ECS Task on EC2 IS able to access the internet (which I confirmed by having it ping google.com).
What are any other conditions that need to be satisfied in order to call other AWS services from a container on ECS + EC2?
The cause of my issue was using a public subnet and the awsvpc network mode.
Using Amazon EC2 — You can launch EC2 instances on a public subnet. Amazon ECS uses these EC2 instances as cluster capacity, and any containers that are running on the instances can use the underlying public IP address of the host for outbound networking. This applies to both the host and bridge network modes. However, the awsvpc network mode doesn't provide task ENIs with public IP addresses. Therefore, they can't make direct use of an internet gateway.
-- Amazon Elastic Container Service Best Practices Guide
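In other words, with the awsvpc mode on EC2 the task ENI never gets a public IP, so its subnet needs a NAT route for outbound AWS API calls. A sketch of the `networkConfiguration` structure passed to the ECS `run_task`/`create-service` APIs (the subnet and security group IDs are hypothetical):

```python
def awsvpc_network_config(subnets, security_groups, assign_public_ip=False):
    """Build the networkConfiguration argument used by ECS run_task/create-service.

    With the EC2 launch type, assignPublicIp must stay DISABLED: awsvpc task
    ENIs never receive public IPs, so the subnets need a route to a NAT
    gateway for outbound traffic. ENABLED is only meaningful on Fargate.
    """
    return {
        "awsvpcConfiguration": {
            "subnets": list(subnets),
            "securityGroups": list(security_groups),
            "assignPublicIp": "ENABLED" if assign_public_ip else "DISABLED",
        }
    }

# Hypothetical IDs for illustration.
cfg = awsvpc_network_config(["subnet-0abc"], ["sg-0def"])
print(cfg["awsvpcConfiguration"]["assignPublicIp"])  # → DISABLED
```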

Show VPC and subnets associated with an AWS ECS cluster

I've just created an ECS cluster via the AWS console. During that process, I specified the VPC I wanted it to use, as well as four subnets.
Now I can't find any indication (neither in the console, nor via the CLI) that this actually happened. I see that the cluster exists, but I cannot get any details regarding its network disposition.
I've tried using the aws client, with all of the arguments to --include that are accepted (SETTINGS, ATTACHMENTS, CONFIGURATION, et cetera), but aws ecs describe-clusters --cluster foocluster --include SETTINGS (for example) shows me nothing but the bare details.
A cluster is not tied to any particular VPC, so there is no association between an ECS cluster and a VPC. VPC settings are specific to ECS tasks and services.
The AWS console just helped you create a VPC as an entity separate from the cluster. You can launch your tasks and services into that VPC, but you can launch them into any other VPC as well.
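Since the VPC details live on the services rather than the cluster, you can recover them from `aws ecs describe-services` output instead. A sketch of pulling the subnets and security groups out of that response (the response below is a trimmed sample with hypothetical IDs):

```python
def service_network_settings(describe_services_response):
    """Extract VPC networking details from an ecs describe-services response.

    The cluster itself carries no VPC association; subnets and security
    groups are attached to each service's awsvpcConfiguration.
    """
    settings = {}
    for svc in describe_services_response.get("services", []):
        cfg = svc.get("networkConfiguration", {}).get("awsvpcConfiguration", {})
        settings[svc["serviceName"]] = {
            "subnets": cfg.get("subnets", []),
            "securityGroups": cfg.get("securityGroups", []),
        }
    return settings

# Trimmed sample response with hypothetical IDs.
response = {
    "services": [
        {
            "serviceName": "web",
            "networkConfiguration": {
                "awsvpcConfiguration": {
                    "subnets": ["subnet-0a1b", "subnet-2c3d"],
                    "securityGroups": ["sg-0e4f"],
                }
            },
        }
    ]
}
print(service_network_settings(response)["web"]["subnets"])
```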

ECS Fargate Task in EventBridge fails with ResourceInitializationError

I have created an ECS Fargate task, which I can run manually. It updates a DynamoDB table and I get logs.
Now I want this to run on a schedule. I have setup a scheduled ECS task through EventBridge. However, this does not run.
Looking at the EventBridge logs, I can see that the container was stopped for the following stopped reason:
ResourceInitializationError: unable to pull secrets or registry auth: execution resource
retrieval failed: unable to retrieve ecr registry auth: service call has been retried 3
time(s): RequestError: send request failed caused by: Post https://api.ecr....
I thought this might be a problem with permissions. However, I tested giving the Task Execution Role full power user permissions and I still get the same error. Could the problem be something else?
This is due to a connectivity issue.
The docs say the following:
For tasks on Fargate, in order for the task to pull the container image it must either use a public subnet and be assigned a public IP address or a private subnet that has a route to the internet or a NAT gateway that can route requests to the internet.
So you need to make sure your task has a route to an internet gateway (i.e. it's in a Public subnet) or a NAT gateway.
Alternatively, if your service is in an isolated subnet, you need to create VPC endpoints for ECR and other services you need to call, as described in the docs:
To allow your tasks to pull private images from Amazon ECR, you must create the interface VPC endpoints for Amazon ECR.
When you create a scheduled task, you also specify the networking options. The docs mention this step:
(Optional) Expand Configure network configuration to specify a network configuration. This is required for tasks hosted on Fargate and for tasks using the awsvpc network mode.
For Subnets, specify one or more subnet IDs.
For Security groups, specify one or more security group IDs.
For Auto-assign public IP, specify whether to assign a public IP address from your subnet to the task.
So the networking configuration changed between the manually run task and the scheduled task. Refer to the above to figure out the needed settings for your case.
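For reference, the network settings for a scheduled task end up inside the target's `EcsParameters` when the rule is created via the EventBridge API (note that this API capitalizes the awsvpc keys, unlike the ECS API). A sketch with hypothetical ARNs and IDs:

```python
def scheduled_task_target(task_def_arn, subnets, security_groups, public_ip):
    """Sketch of the EcsParameters for an EventBridge target that runs a
    Fargate task on a schedule. If the subnet is public and has no NAT
    gateway, AssignPublicIp must be ENABLED or the task cannot reach ECR
    and fails with ResourceInitializationError."""
    return {
        "TaskDefinitionArn": task_def_arn,
        "TaskCount": 1,
        "LaunchType": "FARGATE",
        "NetworkConfiguration": {
            "awsvpcConfiguration": {
                "Subnets": list(subnets),
                "SecurityGroups": list(security_groups),
                "AssignPublicIp": "ENABLED" if public_ip else "DISABLED",
            }
        },
    }

# Hypothetical values for illustration.
target = scheduled_task_target(
    "arn:aws:ecs:us-east-1:111122223333:task-definition/nightly:1",
    ["subnet-0abc"], ["sg-0def"], public_ip=True,
)
print(target["NetworkConfiguration"]["awsvpcConfiguration"]["AssignPublicIp"])
```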
I fixed this by enabling auto-assign public IP.
However, to do this, I first had to change from "Capacity provider strategy" / "Use cluster default" to "Launch type" / "FARGATE". Only then did the option to enable auto-assign public IP become available in the dropdown in the EventBridge UI.
This seems odd to me, because my cluster's default capacity provider strategy is Fargate. But it is working now.
Traffic from ECS to ECR needs a gateway: either an Internet Gateway or a NAT Gateway, both of which affect the cost factor.
Alternatively, you can resolve this scenario by creating VPC endpoints, which keeps the traffic within AWS resources.
The endpoints required for this are:
S3 (Gateway)
ECR
ECS
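As a concrete reference, these are the endpoint service names a Fargate task in an isolated subnet needs in order to pull from ECR, per the ECR interface-endpoint docs (plus CloudWatch Logs if the task uses the awslogs driver); the region is a parameter here:

```python
def required_endpoint_services(region):
    """VPC endpoint service names needed for Fargate tasks in an isolated
    subnet to pull images from a private ECR repository. S3 uses a Gateway
    endpoint; the others are Interface endpoints."""
    return {
        "gateway": [f"com.amazonaws.{region}.s3"],
        "interface": [
            f"com.amazonaws.{region}.ecr.api",  # ECR API calls (GetAuthorizationToken, ...)
            f"com.amazonaws.{region}.ecr.dkr",  # Docker registry pulls
            f"com.amazonaws.{region}.logs",     # CloudWatch Logs, if using awslogs
        ],
    }

print(required_endpoint_services("us-east-1")["interface"])
```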

AWS EKS deploy to multiple VPC's

I'm a bit confused about how AWS EKS works; excuse my ignorance.
I have 4 VPCs: one for shared services (GitLab etc.), one for dev, one for staging, and one for prod.
There are multiple subnets in each VPC for different clients a, b, c.
Currently I just have pipelines that build images and deploy to an EC2 instance in a specific VPC/subnet. The pipeline SSHes to the server based on the gitlab-ci file.
I would like to change that and have a k8s cluster where, when an image updates, k8s deploys my image to the specified VPC and subnets. I know I can hook my registry up to k8s and have it deploy on update; that's not my question. My question is how EKS works across VPCs and subnets.
Is this possible? It seems like an EKS cluster can only be in one VPC and can only deploy to those subnets.
Am I not understanding correctly?
You are correct.
The EKS control plane can only run in a single VPC and can only be associated with subnets in that VPC.
I raised feature requests with AWS a while back to support multi-VPC and multi-region EKS, but no news about them so far.

How to add a Fargate Service to Inbound Security Rules?

I have a Fargate Service running in AWS. I use it to run multiple tasks. Some of the tasks connect to an RDS database to query the database.
How can I add the Fargate service to the inbound rules of a security group for the RDS database? Is there a way to associate an Elastic IP with the Fargate cluster?
I might have misunderstood something here... but ECS allows you to specify a security group at the service level.
Go to https://docs.aws.amazon.com/cli/latest/reference/ecs/create-service.html
and search for the --network-configuration parameter.
So surely you just need to set the source on the inbound rule of the RDS security group to that security group ID?
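That is, instead of whitelisting an IP (Fargate tasks have no stable IP), the RDS security group references the service's security group as the source. A sketch of the `IpPermissions` structure for the EC2 `authorize_security_group_ingress` API (the group ID below is hypothetical):

```python
def rds_ingress_from_service_sg(service_sg_id, db_port=5432):
    """Build an IpPermissions entry that allows inbound traffic on the
    database port from any ENI using the Fargate service's security group,
    via UserIdGroupPairs rather than a CIDR range."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": db_port,
            "ToPort": db_port,
            "UserIdGroupPairs": [{"GroupId": service_sg_id}],
        }
    ]

# Hypothetical service security group; port 5432 for PostgreSQL.
rule = rds_ingress_from_service_sg("sg-0123abcd")
print(rule[0]["UserIdGroupPairs"][0]["GroupId"])  # → sg-0123abcd
```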
Fargate doesn't support associating Elastic IPs with clusters. Clusters running in Fargate mode operate on instances that are not yours; it's the opposite of classic ECS stacks. That means you can't manage the networking of the host instances.
There is a way to associate an IP with the stack: put a Network Load Balancer in front of the cluster. Then you can add a rule that allows connections to your cluster through the NLB.