Cannot list ECS clusters in Jenkins Configure Clouds screen

I used this CloudFormation template to create my ECS cluster: https://tomgregory-cloudformation-examples.s3-eu-west-1.amazonaws.com/jenkins-for-ecs.yml
I am using the Amazon Elastic Container Service (ECS) / Fargate Jenkins plugin.
I am trying to set up Configure Clouds, but when I enter the region eu-central-1, where I created my cluster, it spins and spins and cannot list my cluster (it times out with a 504 error in the browser console). I am 100% sure my cluster is located in eu-central-1, but when I select this region it doesn't find my cluster. What am I missing?
UPDATE
I looked at the CloudWatch logs and found that it's permissions-related:
User: arn:aws:sts::{...}:assumed-role/jenkins-role/5d8e46aed4f642809856ffa57732588a is not authorized to perform: ecs:ListClusters on resource: * because no identity-based policy allows the ecs:ListClusters action (Service: AmazonECS; Status Code: 400; Error Code: AccessDeniedException)
I added a policy to the role with this statement:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ecs:ListClusters",
      "Resource": "*"
    }
  ]
}
I've confirmed in Policy Simulator that the role does have permissions to list ECS clusters but it still doesn't work.
This is the response from the AWS IAM API:
{
  "RoleName": "jenkins-role",
  "PolicyName": "JenkinsECSListClusters",
  "PolicyDocument": {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "VisualEditor0",
        "Effect": "Allow",
        "Action": "ecs:ListClusters",
        "Resource": "*"
      }
    ]
  }
}
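A quick way to double-check which identity the Jenkins process actually resolves, and whether that identity can list clusters in eu-central-1, is a short boto3 script run on the Jenkins host (a sketch; it assumes boto3 picks up the same instance-profile credentials the ECS plugin uses):

# Sketch: verify that the credentials visible on the Jenkins host can list
# ECS clusters in eu-central-1. Assumes the instance profile is the source.
import boto3

sts = boto3.client("sts", region_name="eu-central-1")
print("Caller identity:", sts.get_caller_identity()["Arn"])

ecs = boto3.client("ecs", region_name="eu-central-1")
for arn in ecs.list_clusters()["clusterArns"]:
    print("Cluster:", arn)

If this prints the jenkins-role session and your cluster's ARN, the permissions have propagated, and the 504 is more likely a networking or timeout problem on the Jenkins side.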

Related

AWS AMP: 502 Bad Gateway: unable to proxy request - WebIdentityErr: failed to retrieve credentials

I have an EKS setup and have provisioned AWS Managed Service for Prometheus. I created a policy with AMP full access ("aps:*") and attached that policy to the role used by EKS.
Prometheus is installed on EKS, but it is not able to push metrics into the managed Prometheus service.
EKS is provisioned in a VPC.
Error:
ts=2021-07-07T00:43:57.951Z caller=dedupe.go:112 component=remote level=warn remote_name=e595f3 url=http://localhost:8005/workspaces/ws-xxxx-xxxx-xxxx/api/v1/remote_write msg="Failed to send batch, retrying" err="server returned HTTP status 502 Bad Gateway: unable to proxy request - WebIdentityErr: failed to retrieve credentials"
Ingest policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "aps:*"
      ],
      "Resource": "*"
    }
  ]
}
Trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::xxxxxx:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/65ETDGGHD56WTRSDGF"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.eks.us-east-1.amazonaws.com/id/65ETDGGHD56WTRSDGF:sub": "system:serviceaccount:test-eks-prometheus:amp-iamproxy-query-service-account"
        }
      }
    }
  ]
}
Any help on this?
From "url=http://localhost:8005/workspaces/ws-xxxx-xxxx-xxxx/api/v1/remote_write" it looks like you are trying to use the SigV4 proxy.
Prometheus now supports SigV4 natively; I'd recommend that setup, since it removes one component to debug the problem through.
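To narrow down the WebIdentityErr before (or instead of) switching, you can test whether the Pod's service account actually exchanges its OIDC token for role credentials. A minimal sketch, assuming the standard IRSA environment variables (AWS_ROLE_ARN, AWS_WEB_IDENTITY_TOKEN_FILE) are injected into the Pod:

# Sketch: run inside the Pod to exercise the web-identity exchange directly.
# AWS_ROLE_ARN and AWS_WEB_IDENTITY_TOKEN_FILE are injected by the EKS
# pod identity webhook when the service account is annotated with a role.
import os
import boto3

with open(os.environ["AWS_WEB_IDENTITY_TOKEN_FILE"]) as f:
    token = f.read()

sts = boto3.client("sts")
resp = sts.assume_role_with_web_identity(
    RoleArn=os.environ["AWS_ROLE_ARN"],
    RoleSessionName="amp-debug",
    WebIdentityToken=token,
)
print("Got credentials for:", resp["AssumedRoleUser"]["Arn"])

If the exchange itself fails, note that the trust policy above only matches system:serviceaccount:test-eks-prometheus:amp-iamproxy-query-service-account, so the Pod doing remote_write must run under a service account that the :sub condition actually matches.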

How to give a Fargate Task the right permissions to upload to S3

I want to upload to S3 from a Fargate task. Can this be achieved by specifying only an ExecutionRoleArn, as opposed to specifying both an ExecutionRoleArn and a TaskRoleArn?
If I specify an ExecutionRoleArn that has the following permission policies attached:
Custom policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::example_bucket/*"
    }
  ]
}
AmazonECSTaskExecutionRolePolicy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
With the following trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "events.amazonaws.com",
          "lambda.amazonaws.com",
          "ecs-tasks.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Would this be sufficient to allow the task to upload to S3? Or do I need to define a TaskRoleArn?
The ExecutionRoleArn is used by the service to set up the task correctly; this includes pulling any images down from ECR.
The TaskRoleArn is used by the task to give it the permissions it needs to interact with other AWS services (such as S3).
Technically both ARNs could be the same; however, I would suggest keeping them as separate roles to avoid confusion over the permissions required for each of the scenarios the role is used for.
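If it helps to see the two ARNs side by side, here is a hedged boto3 sketch of registering a Fargate task definition with both roles (account ID, role names, and the image are hypothetical placeholders):

# Sketch: a Fargate task definition with separate execution and task roles.
# All ARNs and the image below are hypothetical placeholders.
import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="s3-uploader",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    # Used by ECS itself: pull the image from ECR, ship logs to CloudWatch.
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    # Used by code running inside the container, e.g. s3:PutObject.
    taskRoleArn="arn:aws:iam::123456789012:role/ecs_s3_upload_role",
    containerDefinitions=[
        {
            "name": "uploader",
            "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/uploader:latest",
            "essential": True,
        }
    ],
)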
Additionally, your trust relationship should include the ecs.amazonaws.com service principal. In fact, the full list of service principals, depending on how you're using ECS, is below (although most can be removed, such as spot if you're not using Spot, or autoscaling if you're not using autoscaling):
"ecs.amazonaws.com",
"ecs-tasks.amazonaws.com",
"spot.amazonaws.com",
"spotfleet.amazonaws.com",
"ecs.application-autoscaling.amazonaws.com",
"autoscaling.amazonaws.com"
In the case of Fargate, the two IAM roles play different parts.
Execution Role
This role is mandatory: you cannot run the task without it, even if you add the execution-role policy to the task role.
To reproduce this error, just set the execution role to None; you will not be able to launch the task.
AWS Forums (Unable to create a new revision of Task Definition)
Task Role
This role is optional, and you can add S3-related permissions to it:
Optional IAM role that tasks can use to make API requests to authorized AWS services.
Your policy seems okay.
Create ecs_s3_upload_role
Add the policy below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::example_bucket/*"
    }
  ]
}
Now the Fargate task will be able to upload to the S3 bucket.
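For completeness: inside the container, the AWS SDKs pick up the task role's credentials automatically from the container credentials endpoint, so the upload code needs no explicit keys. A minimal sketch (the object key is hypothetical):

# Sketch: runs inside the Fargate task; boto3 resolves the task role's
# credentials automatically, no keys or configuration needed.
import boto3

s3 = boto3.client("s3")
s3.put_object(
    Bucket="example_bucket",   # the bucket from the policy above
    Key="logs/output.txt",     # hypothetical object key
    Body=b"hello from Fargate",
)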
Your policies, attached to the execution role, don't give the task itself any S3 permissions. Thus you should define your S3 permissions in a task role:
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task.
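A hedged sketch of creating such a task role with boto3, reusing the s3:PutObject statement from the question (the role and policy names are hypothetical):

# Sketch: create a task role trusted by ECS tasks and attach the S3
# statement inline. Role and policy names are hypothetical placeholders.
import json
import boto3

iam = boto3.client("iam")
iam.create_role(
    RoleName="ecs_s3_upload_role",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ecs-tasks.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)
iam.put_role_policy(
    RoleName="ecs_s3_upload_role",
    PolicyName="S3PutObject",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::example_bucket/*",
        }],
    }),
)

Reference this role via taskRoleArn in the task definition, and the containers in the task inherit it.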

Cannot access s3 from application running on EKS EC2 instance, IAM assume role permissions issue

NOTE: a similar question was asked here, but no proper solution was provided.
I set up an EKS cluster via the eksctl tool with a single EC2 node, and deployed a Pod on that node; the Pod writes logs into an S3 bucket. Everything worked fine when I used an IAM user with a key and secret, but now I want this Pod to use an IAM role instead. This Pod uses a newly created role with AmazonS3FullAccess permissions, named prod-airflow-logs. According to the docs, I also added "ec2.amazonaws.com" to this role's trust relationship, as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "s3.amazonaws.com",
          "ec2.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The EC2 node has its own role named eksctl-prod-eks-nod-NodeInstanceRole-D4JQ2Q6D9GDA. If I understand correctly, this role has to assume the role prod-airflow-logs in order to allow the container Pod to access and store logs in S3. According to the same docs, I attached an inline policy to this node role as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:PassRole",
        "ec2:*",
        "iam:ListInstanceProfiles",
        "iam:GetRolePolicy"
      ],
      "Resource": "*"
    }
  ]
}
But I still get the following error in the Kubernetes Pod when it tries to store logs in S3:
botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::XXXXXXX:assumed-role/eksctl-prod-eks-nod-NodeInstanceRole-D4JQ2Q6D9GDA/i-0254e5b5b36e58f79 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::XXXXXX:role/prod-airflow-logs
The only thing I don't understand from this error is which user it is referring to.
Where on earth is this user User: arn:aws:sts::XXXXXXX:assumed-role/eksctl-prod-eks-nod-NodeInstanceRole-D4JQ2Q6D9GDA/i-0254e5b5b36e58f79? I would appreciate it if someone could point out what exactly I am missing here.
No answer yet... Here is how I made this work: I had to add the ARN of the node role to the trust policy of the Pod execution role. (The "user" in the error is simply the node's assumed-role session: the node instance role plus the EC2 instance ID.)
In my case, the Pod execution role is prod-airflow-logs and the node role is eksctl-prod-eks-nod-NodeInstanceRole-D4JQ2Q6D9GDA.
The trust relationship of the Pod execution role has to be as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::XXXXXXX:role/eksctl-prod-eks-nod-NodeInstanceRole-D4JQ2Q6D9GDA"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
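With that trust policy in place, code in the Pod can assume prod-airflow-logs explicitly and use the temporary credentials for S3. A sketch (the bucket and key are placeholders; substitute your account ID):

# Sketch: from the Pod, assume prod-airflow-logs using the node's
# instance-profile credentials, then use the temporary credentials for S3.
import boto3

sts = boto3.client("sts")
creds = sts.assume_role(
    RoleArn="arn:aws:iam::XXXXXXX:role/prod-airflow-logs",  # substitute account ID
    RoleSessionName="airflow-logs",
)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
s3.put_object(Bucket="my-airflow-logs", Key="dag/run1.log", Body=b"log line")  # placeholders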

Simple IAM Issue with CodeDeploy

I'm having an issue with the seemingly trivial task of getting CodeDeploy to deploy GitHub code to an Auto Scaling group in a blue/green deployment.
I have a pipeline, a deployment group, and the Auto Scaling group set up, but it fails when it gets to the actual deployment.
I went to my role, and it seems to have sufficient policies attached for it to go through with the blue/green deployment.
Is there a policy that I'm not considering that needs to be attached to this role?
I found the answer in this link:
https://h2ik.co/2019/02/28/aws-codedeploy-blue-green/
Without wanting to take the credit, only one statement was missing from #PeskyGnat:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "iam:PassRole",
        "ec2:CreateTags",
        "ec2:RunInstances"
      ],
      "Resource": "*"
    }
  ]
}
I was also getting the error:
"The IAM role does not give you permission to perform operations in the following AWS service: AmazonAutoScaling. Contact your AWS administrator if you need help. If you are an AWS administrator, you can grant permissions to your users or groups by creating IAM policies."
I figured out the two permissions needed to get past this error; I created the policy below and attached it to the CodeDeploy role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:PassRole",
        "ec2:RunInstances",
        "ec2:CreateTags"
      ],
      "Resource": "*"
    }
  ]
}
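If you'd rather script it, the same statement can be attached to the CodeDeploy service role as an inline policy with boto3 (the role and policy names here are assumptions; substitute your own):

# Sketch: attach the missing statement as an inline policy on the CodeDeploy role.
# "CodeDeployServiceRole" is an assumed name; use the role from your deployment group.
import json
import boto3

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="CodeDeployServiceRole",
    PolicyName="BlueGreenAsgPermissions",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["iam:PassRole", "ec2:RunInstances", "ec2:CreateTags"],
            "Resource": "*",
        }],
    }),
)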

Access AWS Elasticsearch from AWS Beanstalk

I have an Elasticsearch Service instance on AWS and an Elastic Beanstalk one.
I want to give read-only access to Beanstalk; however, Beanstalk doesn't have a static IP address by default, and from a bit of googling it seems too much trouble to add one.
I therefore gave access to the AWS account, but that doesn't seem to work. I am still getting the error:
"User: anonymous is not authorized to perform: es:ESHttpPost"
When I set it to public access everything works, so I am certain I am doing something wrong here:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxx:root"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:eu-central-1:xxx:domain/xxx-elastic-search/*"
    }
  ]
}
Use an identity-based policy such as this instead of IP whitelisting:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Resource": "arn:aws:es:us-west-2:111111111111:domain/recipes1/*",
      "Action": ["es:*"],
      "Effect": "Allow"
    }
  ]
}
Then attach it to the Elastic Beanstalk role. Read more here:
https://aws.amazon.com/blogs/security/how-to-control-access-to-your-amazon-elasticsearch-service-domain/
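One caveat with a role-based resource policy: every request from Beanstalk must be SigV4-signed with the instance role's credentials, otherwise it still arrives as User: anonymous, which is exactly the error above. A sketch using the requests and requests_aws4auth libraries (the domain endpoint below is a placeholder):

# Sketch: SigV4-sign requests to the Elasticsearch domain with the instance
# role's credentials; unsigned requests show up as "anonymous".
import boto3
import requests
from requests_aws4auth import AWS4Auth

creds = boto3.Session().get_credentials()
auth = AWS4Auth(
    creds.access_key,
    creds.secret_key,
    "eu-central-1",          # the domain's region
    "es",                    # service name for Elasticsearch Service
    session_token=creds.token,
)
# Hypothetical endpoint; copy the real one from the domain's console page.
resp = requests.get(
    "https://search-xxx-elastic-search-abc123.eu-central-1.es.amazonaws.com/_search",
    auth=auth,
)
print(resp.status_code, resp.text)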