Unable to download AWS CodeDeploy Agent Install file - amazon-web-services

I am trying to download the AWS CodeDeploy agent install file on Amazon Linux. I followed the instructions for Amazon Linux in http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent.html and have created the appropriate instance profile, service role, etc. Everything is up to date (Amazon Linux, CLI packages); it is a brand-new instance, and I have tried this with at least 3 more brand-new instances with the same result. All instances have full outbound internet access.
But the following command for downloading the install file from S3 always fails:
aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
with the error:
A client error (403) occurred when calling the HeadObject operation: Forbidden
Completed 1 part(s) with ... file(s) remaining
Can anyone help me with this error?

I figured out the problem. According to the CodeDeploy documentation for the IAM instance profile,
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-iam-instance-profile.html
the following permissions need to be granted to your IAM instance profile:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
But I had limited the resource to my code bucket, since I don't want my instances to access other buckets directly. It turns out I also need to grant additional permission on the aws-codedeploy-us-east-1/* S3 resource in order to download the agent. This is not very clear in the documentation for setting up an IAM instance profile for CodeDeploy.

More restrictive policy that works:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::aws-codedeploy-us-east-1/*",
        "arn:aws:s3:::aws-codedeploy-us-west-1/*",
        "arn:aws:s3:::aws-codedeploy-us-west-2/*",
        "arn:aws:s3:::aws-codedeploy-ap-south-1/*",
        "arn:aws:s3:::aws-codedeploy-ap-northeast-2/*",
        "arn:aws:s3:::aws-codedeploy-ap-southeast-1/*",
        "arn:aws:s3:::aws-codedeploy-ap-southeast-2/*",
        "arn:aws:s3:::aws-codedeploy-ap-northeast-1/*",
        "arn:aws:s3:::aws-codedeploy-eu-central-1/*",
        "arn:aws:s3:::aws-codedeploy-eu-west-1/*",
        "arn:aws:s3:::aws-codedeploy-sa-east-1/*"
      ]
    }
  ]
}
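
With this policy attached to the instance profile, the agent download should succeed. A quick check (the install steps are the ones from the CodeDeploy docs for Amazon Linux; the region is assumed to be us-east-1):

aws s3 cp s3://aws-codedeploy-us-east-1/latest/install . --region us-east-1
chmod +x ./install
sudo ./install auto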

Related

"An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied" when using batch jobs

I have a compute environment with 'ecsInstanceRole'. It contains the policies 'AmazonS3FullAccess' and 'AmazonEC2ContainerServiceforEC2Role'.
Since I am using the AmazonS3FullAccess policy, I assume the batch job has permission to list, copy, put, etc.
The image I am using is a custom Docker image that has a startup script which uses "aws s3 ls <S3_bucket_URL>".
When I start this image on an EC2 instance, it runs fine and lists the contents of the bucket.
When I do the same as a batch job, I get the access denied error seen above.
I don't understand how this is happening.
Things I have tried so far:
Having the bucket policy as:
{
  "Version": "2012-10-17",
  "Id": "Policy1546414123454",
  "Statement": [
    {
      "Sid": "Stmt1546414471931",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account Id>:root"
      },
      "Action": [
        "s3:ListBucket",
        "s3:ListBucketVersions"
      ],
      "Resource": [
        "arn:aws:s3:::"bucketname",
        "arn:aws:s3:::bucketname/*"
      ]
    }
  ]
}
Granted public access to the bucket
Quoting the reply from @JohnRotenstein because I cannot mark it as an answer:
"If you are using IAM Roles, there is no need for a Bucket Policy. (Also, there is a small typo in that policy, before bucketname but I presume that was due to a Copy & Paste error.) It would appear that a role has not been assigned to your ECS task: IAM Roles for Tasks - Amazon Elastic Container Service"
Solution: I had to attach an S3 access policy to my current Job Role.
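
For reference, a minimal sketch of attaching an S3 policy to the job role from the CLI (the role name here is hypothetical, and AmazonS3ReadOnlyAccess is an AWS managed policy; a tighter custom policy scoped to your bucket is preferable in production):

# Attach a managed S3 read policy to the (hypothetical) Batch job role
aws iam attach-role-policy \
  --role-name my-batch-job-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess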

Databricks AWS account setup - AWS storage with error - Missing permissions: PUT, LIST, DELETE

I have created a PREMIUM trial Databricks account with AWS and have set up the AWS account with user access keys.
To configure AWS storage, I followed the instructions in the URL below and set up the bucket policy as described there:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Grant Databricks Access",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::98765432101:root"
      },
      "Action": [
        "s3:GetObject",
        "s3:GetObjectVersion",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": [
        "arn:aws:s3:::my-databricks-user-bucket/*",
        "arn:aws:s3:::my-databricks-user-bucket"
      ]
    }
  ]
}
https://docs.databricks.com/administration-guide/account-settings/aws-storage.html
But I am getting the error below:
The provided S3 bucket is valid, but have insufficient permissions to
launch a Databricks deployment. Please double check your settings
according to the tutorial. Missing permissions: PUT, LIST, DELETE
The bucket policy I used above includes the PUT, LIST, and DELETE actions, yet I am still facing the error.
Note: By trial and error, I changed the Action as below, which allows all actions, but I still get the same error.
"Action": "*"
The above error was caused by a mistake I made when setting up the Databricks account with AWS.
As part of setting up the AWS account details in Databricks, a cross-account role should be created (the alternative is an access key). When creating the role, the AWS account ID to enter is the Databricks AWS account ID, whose value is 414351767826.
The mistake I made was entering my own AWS account ID instead of the Databricks one. Following the documentation in the URL below exactly as written works as expected.
I made the same mistake when setting up AWS storage; following that documentation exactly works perfectly, too.
https://docs.databricks.com/administration-guide/account-settings/aws-accounts.html
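
For illustration, a sketch of the trust policy the cross-account role should carry: the trusted principal is the Databricks account ID from the docs (414351767826), not your own. The ExternalId condition is an assumption based on the Databricks cross-account setup; the tutorial spells out the exact value to use.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::414351767826:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": {
          "sts:ExternalId": "<your Databricks account ID>"
        }
      }
    }
  ]
}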

AWS permissions for Fargate and SSM

I'm trying to create some infrastructure for a service I am building on AWS using AWS Fargate. I'm using SSM as a value store for some of my application configuration, so I need both the regular permissions for Fargate as well as additional permissions for SSM. However, after banging my head against this particular wall for a while, I've come to the conclusion that I just don't understand AWS IAM in general or this problem in particular, so I'm here for help.
The basis of my IAM code comes from this tutorial; the IAM code is actually not in that tutorial but rather in this file in the github repo linked to that tutorial. I presume I need to retain that STS permission for something although I'm not entirely sure what.
I've converted the IAM code from the tutorial into a JSON document because I find JSON easier to work with than the Terraform native thing. Here's what I've come up with. It doesn't work. I would like to know why it doesn't work and how to fix it. Please ELI5 (explain like I'm 5 years old) because I know nothing about this.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:GetParameters",
        "secretsmanager:GetSecretValue",
        "kms:Decrypt",
        "sts:AssumeRole"
      ],
      "Principal": {
        "Service": ["ecs-tasks.amazonaws.com"]
      }
    }
  ]
}
At a minimum, your ECS task should have the permissions below:
The ability to assume a role
Resource-level permissions
In the example you referred to, an IAM role is created with the following:
A trust relationship is attached. <-- enables the ECS task to assume an IAM role
The AWS managed policy AmazonECSTaskExecutionRolePolicy is attached. <-- resource permissions
So, in order to retrieve the SSM parameter values, add the resource permissions below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:Describe*",
        "ssm:Get*",
        "ssm:List*"
      ],
      "Resource": [
        "arn:aws:ssm:*:*:parameter/{your-path-hierarchy-to-parameter}/*"
      ]
    }
  ]
}
If your secrets use KMS, then grant the necessary KMS permissions (kms:Decrypt). Refer to specifying-sensitive-data for reference.
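
Note that the trust relationship is a separate document from the permissions policy above; the snippet in the question went wrong by mixing a Principal into a permissions statement. A minimal sketch of the trust policy that lets ECS tasks assume the role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}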

Access denied to S3 bucket from ec2 docker container

I have launched an EC2 instance which needs to connect to an S3 bucket.
I created an IAM role and linked it to the EC2 instance, and from the AWS CLI on the EC2 instance I can list the files. However, I deployed a container on that EC2 instance, and when trying to list the files from it, I get the error:
An error occurred (AccessDenied) when calling the ListObjectsV2 operation: Access Denied
IAM role policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
Can somebody please suggest why I can access S3 from the EC2 instance but not from the container running on the same EC2 instance?
The ListBucket call is applied at the bucket level, so you need to add the bucket as a resource in your IAM policy (as written, you were just allowing access to the bucket's files):
"Resource": [
"arn:aws:s3:::my-bucket",
"arn:aws:s3:::my-bucket/*"
]
See this for more information about the resource description needed for each permission.
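
For reference, the complete corrected policy (same actions as in the question, with both resources):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:ListBucket",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}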
The fact that you were able to get the bucket listing from a shell running on the EC2 instance indicates to me that you have another user configured. Look for files in $HOME/.aws and environment variables that start with AWS.
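
A quick way to check which credentials are actually in effect (standard CLI commands; nothing here is specific to this setup):

aws sts get-caller-identity   # shows the ARN of the identity in use
ls -la ~/.aws                 # credentials/config files here override the instance role
env | grep '^AWS'             # environment variables take precedence over both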

AWS EMR Cluster fails to launch

I am trying to launch an AWS EMR Cluster from the AWS Console, and am getting the following error:
Failed to provision ec2 instances because 'IAM Instance Profile "arn:aws:iam::553706642095:instance-profile/EMR_EC2_DefaultRole" has no associated IAM Roles
Anyone know what this means and how to resolve it?
The following is the role policy:
{
  "Statement": [
    {
      "Action": [
        "cloudwatch:*",
        "dynamodb:*",
        "ec2:Describe*",
        "elasticmapreduce:Describe*",
        "rds:Describe*",
        "s3:*",
        "sdb:*",
        "sns:*",
        "sqs:*"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
Its trust policy document is:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I finally resolved this issue. It was confusing because the instance profile and the role use the same name by default. The full steps are outlined below, but you may be able to skip various steps.
Create the default roles (if this errors, downgrade to awscli version 1.10.30):
aws emr create-default-roles
Create the instance profile if it doesn't already exist:
aws iam create-instance-profile --instance-profile-name EMR_EC2_DefaultRole
Verify that the instance profile exists but doesn't have any roles:
aws iam get-instance-profile --instance-profile-name EMR_EC2_DefaultRole
Add the role using:
aws iam add-role-to-instance-profile --instance-profile-name EMR_EC2_DefaultRole --role-name EMR_EC2_DefaultRole
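Re-running the earlier check should confirm the role is now attached to the instance profile (the Roles list is no longer empty):
aws iam get-instance-profile --instance-profile-name EMR_EC2_DefaultRole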
You have only read-only permission for EMR:
"elasticmapreduce:Describe*",
You need to give full access to Elastic MapReduce so that you can launch and terminate clusters. Once you grant this access, the role policy will contain:
"elasticmapreduce:*",
I tried around and got it to work without the tool, using my own CloudFormation stack.
The key is that you have to have an InstanceProfile for the flow role, and both the flow and service roles have to be provided as ARNs.
That's how I got it to work!
Hope that helps someone else as well.
I got the same issue. Instead of giving a new cluster name, I just kept the default cluster name 'My Cluster' and clicked 'Create cluster' again. It was created without this error.