Code that uses the AWS Node SDK doesn't seem to be able to gain the role permissions of the ECS task.
If I run the code on an EC2 ECS instance, it seems to inherit the role of the instance, not that of the task.
If I run the code on Fargate, it doesn't get any permissions.
By contrast, any bash scripts that run within the instance seem to have the proper permissions.
Indeed, the documentation doesn't mention this as an option for the Node SDK; it lists only:
Loaded from IAM roles for Amazon EC2 (if running on EC2),
Loaded from the shared credentials file (~/.aws/credentials),
Loaded from environment variables,
Loaded from a JSON file on disk,
Hardcoded in your application
Is there any way to have your node code gain the permissions of the ECS task?
This seems to be the logical way to pass permissions to your code. It works beautifully with code running on an instance.
The only workaround I can think of is to create one IAM user per ECS service and pass the API key/secret as environment variables in the task definition. However, that doesn't seem very secure, since they would be visible in plain text to anyone with access to the task definition.
Your question is missing a lot of details on how you set up your ECS cluster, and I am not sure whether the question is about ECS on EC2 or Fargate specifically.
Make sure that you are using the latest version of the SDK. The JavaScript SDK supports ECS and Fargate task credentials.
Often there is confusion about credentials on ECS. There is the IAM role that is assigned to the Cluster EC2 instances and the IAM role that is assigned to ECS tasks.
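A quick way to see which of the two the SDK is actually picking up inside a container is to check for the credentials endpoint the ECS agent injects and to resolve the task credentials explicitly. A minimal sketch, assuming the AWS SDK for JavaScript v2:

const AWS = require('aws-sdk');

// The ECS agent injects this variable into every container of a task that
// has a task role; the SDK's default chain uses it to fetch that role.
console.log('Credentials URI:', process.env.AWS_CONTAINER_CREDENTIALS_RELATIVE_URI);

// Resolve the task-role credentials explicitly to surface any error.
new AWS.ECSCredentials().get(err => {
  if (err) console.error('Task role credentials not available:', err);
  else console.log('Task role credentials resolved.');
});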
The most common problem is the "Trust Relationship" has not been setup on the ECS Task Role. Select your IAM role and then the "Trust Relationships" tab and make sure that it looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
In addition to the standard Amazon ECS permissions required to run tasks and services, IAM users also require iam:PassRole permissions to use IAM roles for tasks.
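For example, a statement along these lines grants that (the account ID and role name are placeholders):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::123456789012:role/MyEcsTaskRole"
    }
  ]
}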
Next, verify that you are using the IAM role in the task definition. Specify the correct IAM role ARN in the Task Role field. Note that this is different from the Task Execution Role (which allows containers to pull images and publish logs).
Next, make sure that your ECS instances are using the latest version of the ECS agent. The agent version is listed on the "ECS Instances" tab, in the right-hand "Agent version" column. The current version is 1.20.3.
Are you using an ECS-optimized AMI? If not, add --net=host to the docker run command that starts the agent, and review the ECS agent documentation for more information.
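For reference, the agent launch command on a non-optimized AMI would look roughly like this. This is a trimmed sketch based on the ECS agent documentation; the cluster name is a placeholder, and the two ECS_ENABLE_TASK_IAM_ROLE* variables are what enable task roles outside the optimized AMI:

docker run --name ecs-agent \
  --detach \
  --net=host \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --volume=/var/log/ecs/:/log \
  --volume=/var/lib/ecs/data:/data \
  --env=ECS_LOGFILE=/log/ecs-agent.log \
  --env=ECS_DATADIR=/data \
  --env=ECS_CLUSTER=my-cluster \
  --env=ECS_ENABLE_TASK_IAM_ROLE=true \
  --env=ECS_ENABLE_TASK_IAM_ROLE_NETWORK_HOST=true \
  amazon/amazon-ecs-agent:latest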
I figured it out. This was a weird one.
A colleague thought it would be "safer" if we called Object.freeze on process.env. This was somehow interfering with the SDK's ability to access the credentials.
Removed that "improvement" and all is fine again. I think the lesson is "do not mess with process.env".
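For anyone curious why the freeze could bite: Object.freeze makes every property read-only, and outside strict mode writes to a frozen object fail silently rather than throwing. A minimal illustration with a plain object (exactly how a frozen process.env behaves has varied across Node versions, which makes this sort of "improvement" especially hard to debug):

const env = Object.freeze({ AWS_REGION: 'us-east-1' });

env.AWS_PROFILE = 'staging';   // silently ignored outside strict mode
console.log(env.AWS_PROFILE);  // undefined

// In strict mode the same assignment throws:
// TypeError: Cannot add property AWS_PROFILE, object is not extensible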
I am trying to configure AWS ECS using awsvpc mode with an IAM role to use specifically for tasks. Our tasks use the Fargate launch type. After specifying a Task IAM role in the task configuration, we SSH into our task and try to run AWS CLI commands, and we get the following error:
Unable to locate credentials. You can configure credentials by running "aws configure".
In order to troubleshoot, we ran the same Docker image in a container with the EC2 launch type, and when we ran the same AWS CLI command, it failed saying the assumed role did not have sufficient permissions. We noticed that this was because it was assuming the container instance IAM role rather than the Task IAM role.
Based on the documentation here, it is clear that when using awsvpc networking mode, we need to set the ECS_AWSVPC_BLOCK_IMDS agent configuration variable to true in the agent configuration file and restart the agent in order for our instances to assume the Task IAM role rather than the container instance IAM role.
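On the EC2 launch type, that documented fix amounts to something like this on each container instance (a sketch for Amazon Linux; the restart command varies by AMI):

echo "ECS_AWSVPC_BLOCK_IMDS=true" | sudo tee -a /etc/ecs/ecs.config
sudo stop ecs && sudo start ecs   # or: sudo systemctl restart ecs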
For the time being, for performance testing purposes, we need to deploy with the Fargate launch type and according to the docs, the container agent should be installed automatically for Fargate:
The Amazon ECS container agent is installed on the AWS managed infrastructure used for tasks using the Fargate launch type. If you are only using tasks with the Fargate launch type no additional configuration is needed and the content in this topic does not apply.
However, we still need to be able to assume our task IAM role. Is there a way to update the necessary environment variable in the AWS-managed agent configuration file so as to allow the assuming of the task IAM role? Or is there another way to allow this?
When creating the task definition for your Fargate task, are you assigning a Task Role ARN? There are two IAM ARNs needed. The Execution Role ARN is the IAM role used to start the container in your Fargate cluster; its permissions cover setting up CloudWatch Logs and possibly pulling an image from ECR. The Task Role ARN is the IAM role that the container itself has. Make sure the Task Role ARN has the ECS trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
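For illustration, here is a skeletal Fargate task definition showing where the two ARNs go (the account IDs, names, and image are placeholders):

{
  "family": "my-service",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/MyTaskRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest",
      "essential": true
    }
  ]
}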
I am attempting to launch a Docker container stored in ECR as an AWS Batch job. The entrypoint Python script of this container attempts to connect to S3 and download a file.
I have attached a role with AmazonS3FullAccess to the AWSBatchServiceRole in the compute environment, and I have also attached a role with AmazonS3FullAccess to the compute resources.
The following error is being logged: botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://s3.amazonaws.com/"
There is a chance that these instances are being launched in a custom VPC, not the default VPC. I'm not sure this makes a difference, but maybe that is part of the problem. I do not have appropriate access to check. I have tested this Docker image on an EC2 instance launched in the same VPC and everything works as expected.
You mentioned compute environment and compute resources. Did you add this S3 policy to the Job Role as mentioned here?
After you have created a role and attached a policy to that role, you can run tasks that assume the role. You have several options to do this:
Specify an IAM role for your tasks in the task definition. You can create a new task definition or a new revision of an existing task definition and specify the role you created previously. If you use the console to create your task definition, choose your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using the taskRoleArn parameter. For more information, see Creating a Task Definition.
Specify an IAM task role override when running a task. If you use the console to run your task, choose Advanced Options and then choose your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using the taskRoleArn parameter in the overrides JSON object. For more information, see Running Tasks.
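If you're doing this programmatically, a run-time override would look roughly like this. A sketch with the AWS SDK for JavaScript v2; the cluster, task definition, and role ARN are placeholders:

const AWS = require('aws-sdk');
const ecs = new AWS.ECS({ region: 'us-east-1' });

ecs.runTask({
  cluster: 'my-cluster',
  taskDefinition: 'my-task-def:1',
  overrides: {
    // Overrides the taskRoleArn from the registered task definition.
    taskRoleArn: 'arn:aws:iam::123456789012:role/MyJobRole'
  }
}, (err, data) => {
  if (err) console.error(err);
  else console.log('Started:', data.tasks.map(t => t.taskArn));
});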
I've created a worker environment for my EB application in order to take advantage of its "periodic tasks" capability using cron.yaml (located in the root of my application). It's a simple Sinatra app (for now) that I would like to use to issue requests to my corresponding web server environment.
However, I'm having trouble deploying via the EB CLI. Below is what happens when I run eb deploy.
╰─➤ eb deploy
Creating application version archive "4882".
Uploading myapp/4882.zip to S3. This may take a while.
Upload Complete.
INFO: Environment update is starting.
ERROR: Service:AmazonCloudFormation, Message:Stack named 'awseb-e-1a2b3c4d5e-stack'
aborted operation. Current state: 'UPDATE_ROLLBACK_IN_PROGRESS'
Reason: The following resource(s) failed to create: [AWSEBWorkerCronLeaderRegistry].
I've looked around the CloudFormation dashboard to check for possible errors. After reading a bit about AWSEBWorkerCronLeaderRegistry, I found that it's most likely a DynamoDB table that gets created/updated. However, when I look in the DynamoDB dashboard, there are no tables listed.
As always, any help, feedback, or guidance is appreciated.
If you are reluctant to add full DynamoDB access (like I was), Beanstalk now provides a managed policy for worker environment permissions (AWSElasticBeanstalkWorkerTier). You can try attaching that to your instance profile role instead.
See http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html
We had the same issue and fixed it by attaching AmazonDynamoDBFullAccess to Elastic Beanstalk role (which was named aws-elasticbeanstalk-ec2-role in our case).
I was using CodePipeline to deploy my worker and was getting the same error. Eventually I tried giving AWS-CodePipeline-Service the AmazonDynamoDBFullAccess policy, and that seemed to resolve the issue.
As Anthony suggested, when triggering the deploy from another service such as CodePipeline, its service role needs the dynamodb:CreateTable permission to create the Leader Registry table (more info below) in DynamoDB.
Adding full-access permissions is a bad practice and should be avoided. Also, the managed policy AWSElasticBeanstalkWorkerTier does not have the appropriate permissions, since it is meant for the workers themselves to access DynamoDB and check whether they are the current leader.
1. Find the Role that is trying to create the table:
Go to CloudTrail > Event History
Filter Event Name: CreateTable
Make sure the error code is AccessDenied
Locate the role name (e.g. AWSCodePipelineServiceRole-us-east-1-dev)
2. Add the permissions:
Go to IAM > Roles
Find the role in the list
Attach a policy with:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CreateCronLeaderTable",
      "Effect": "Allow",
      "Action": "dynamodb:CreateTable",
      "Resource": "arn:aws:dynamodb:*:*:table/*-stack-AWSEBWorkerCronLeaderRegistry*"
    }
  ]
}
3. Check results:
Redeploy by triggering the pipeline
Check Elastic Beanstalk for errors
Optionally, go to CloudTrail and make sure the request succeeded this time.
You can use this technique any time you are unsure which permission should be attached to which role.
About the Cron Leader Table
From the Periodic Tasks Documentation:
Elastic Beanstalk uses leader election to determine which instance in your worker environment queues the periodic task. Each instance attempts to become leader by writing to an Amazon DynamoDB table. The first instance that succeeds is the leader, and must continue to write to the table to maintain leader status. If the leader goes out of service, another instance quickly takes its place.
For those wondering, this DynamoDB table uses 10 RCU and 5 WCU, which is covered by the always-free tier.
I'm starting to use AWS IAM instance roles, which is great for leveling up our security management, but it adds some challenges to maintaining a lightweight development flow.
The problem at hand is:
I define an AWS instance role and attach a security policy to it. This policy specifies the permissions granted to new EC2 instances launched under this role; it's a long and complicated JSON document with rules such as "allow access to S3 bucket x" and "deny access to SQS y". This works well on EC2.
Next, I'd like to let developers run our code on their local boxes when developing in the IDE, using the exact same security policy as defined earlier. This is important because when developers produce new code, I want them to test it against the exact same security policy they'd run with in production. If they run with a different security policy, there's a chance things will slip.
The problem is that I haven't found a way to do this. It's possible to define IAM groups and join the developers into them (e.g. a "developers" group). In such a group I can define an IAM security policy that applies to all developers in the group. But there's no way (that I found) to reuse the policy attached to a group in the context of an instance role, or to use the policy attached to a role in the context of a group. (Did I totally miss this?...)
So to sum up, what I want is: 1) define a security policy document once, and 2) reuse this document in two places: an IAM instance role definition and an IAM group.
Not being able to do so means I'll have to maintain two copies of a complicated JSON document (which is out of version control) for each type of service (and we have many such services) and for each environment (e.g. stage/prod). I can see how this becomes a maintenance nightmare very easily.
So far the best thing I've come up with is to store the policy documents on disk, under version control, and write a tool that uses the AWS API to upload the policy document to both the instance role and the group. It's somewhat cumbersome, so I was hoping for a little more agility.
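Concretely, such a tool could be as small as this sketch (AWS SDK for JavaScript v2; the file path, role, group, and policy names are placeholders):

const AWS = require('aws-sdk');
const fs = require('fs');

const iam = new AWS.IAM();
// One policy document, version-controlled on disk, pushed to both places.
const policyDocument = fs.readFileSync('policies/my-service.json', 'utf8');

async function syncPolicy() {
  await iam.putRolePolicy({
    RoleName: 'my-service-instance-role',
    PolicyName: 'my-service-policy',
    PolicyDocument: policyDocument
  }).promise();

  await iam.putGroupPolicy({
    GroupName: 'developers',
    PolicyName: 'my-service-policy',
    PolicyDocument: policyDocument
  }).promise();
}

syncPolicy().catch(console.error);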
Do you have better advice for me?... Thanks!
Thanks @Steffen for pointing out CloudFormation, but I think I found a solution that works better for me.
AWS provides the Security Token Service (STS), which, in short, among other things allows you to assume a role.
This is exactly what I was looking for, since I want to define a role once (i.e. a set of AWS permissions) and then let AWS EC2 instances assume this role automatically (easy to do), as well as let developers assume the role for the specific service they are developing. The developers' part is a bit more involved, but I'll paste some Java code below that shows how to do it.
First, when defining a role you have to say which principals are allowed to assume it. You do so by editing the Trust Relationships section of the role (at the bottom of a role's definition page in the AWS web UI).
So for example, here's a Trust Relationships document that allows EC2 instances to assume this role, as well as some of the users in my domain (replace your-service-id-number and your-user#example.com):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::your-service-id-number:user/your-user#example.com",
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
And next, the Java code that assumes this role when running in local development:
This code checks whether it is running on an EC2 instance; if so, it assumes the role of the instance (or rather, whatever the DefaultAWSCredentialsProviderChain resolves, which in my case, and as a best practice, is the EC2 instance role). If running in a dev environment, i.e. outside of EC2, it assumes the role given by roleName.
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;
import com.amazonaws.auth.STSAssumeRoleSessionCredentialsProvider;

AWSCredentialsProvider getCredentialsProvider(String roleName) {
    if (isRunningOnEc2()) {
        return new DefaultAWSCredentialsProviderChain();
    }
    // If not running on EC2, assume the role provided. The long-lived user
    // credentials are resolved by the default chain (e.g. from environment variables).
    final AWSCredentials longLivedCredentials = new DefaultAWSCredentialsProviderChain().getCredentials();
    // The roleArn is shown at the top of the role's page, titled "Role ARN" ;-)
    final String roleArn = String.format("arn:aws:iam::your-service-id-number:role/%s", roleName);
    final String roleSessionName = "session" + Math.random(); // not sure it's really needed, but what the heck..
    return new STSAssumeRoleSessionCredentialsProvider(longLivedCredentials, roleArn, roleSessionName);
}
The utility isRunningOnEc2() is implemented as follows:
import java.net.InetAddress;
import java.net.UnknownHostException;

public static boolean isRunningOnEc2() {
    try {
        // EC2 instances resolve the special hostname "instance-data"
        // (the instance metadata service); elsewhere the lookup fails.
        final InetAddress byName = InetAddress.getByName("instance-data");
        return byName != null;
    } catch (final UnknownHostException e) {
        return false;
    }
}
Using CloudFormation, as Steffen suggested, might be useful in the general sense as well, mostly in order to retain consistency between my codebase and the actual AWS deployment, but that's something else.
One annoying point, though: it's possible to define principals as actual usernames but not as a group of users, so you cannot say "all developers are allowed to assume role x"; instead you have to list each and every developer individually. This is pretty annoying, but I guess I'm not the only one with this complaint.
Update
AWS has just introduced Managed Policies for AWS Identity & Access Management, which provide a fresh approach to sharing and maintaining IAM policies across IAM entities, specifically aimed at reducing the current duplication of information and effort:
As the size and complexity of your AWS installation grows, you might find yourself editing multiple permission documents in order to add a new permission or to remove an existing one. Auditing and confirming permissions was also more difficult than necessary.
The security blog post An Easier Way to Manage Your Policies provides a walk through with more details.
The new feature can be used via the AWS Management Console and the AWS Command Line Interface (AWS CLI) as usual (but presumably not via AWS CloudFormation yet).
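Applied to the question, that means one managed policy can be attached to both the instance role and the developers group. A sketch with the AWS CLI; the names and file path are placeholders:

# Create one managed policy from the version-controlled document...
aws iam create-policy --policy-name my-service-policy \
    --policy-document file://policies/my-service.json

# ...then attach it to both the instance role and the group.
aws iam attach-role-policy --role-name my-service-instance-role \
    --policy-arn arn:aws:iam::123456789012:policy/my-service-policy
aws iam attach-group-policy --group-name developers \
    --policy-arn arn:aws:iam::123456789012:policy/my-service-policy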
Initial Answer
So far the best thing I came up with is perhaps to store the policy documents on disk, under version control, and write a tool that uses the aws api to upload the policy document to the both the instance role and the group. It's somewhat cumbersome so I was hoping for a little more agility.
A tool like this already exists and is in fact a major backing technology for many of AWS' own and 3rd party services - have a look at AWS CloudFormation, which gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
More specifically, the AWS::IAM::Policy resource associates an IAM policy with IAM users, roles, or groups:
{
  "Type": "AWS::IAM::Policy",
  "Properties": {
    "Groups": [ String, ... ],
    "PolicyDocument": JSON,
    "PolicyName": String,
    "Roles": [ String, ... ],
    "Users": [ String, ... ]
  }
}
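For instance, a single policy resource shared by an instance role and a developers group might look like this (the names and the statement are placeholders):

"SharedServicePolicy": {
  "Type": "AWS::IAM::Policy",
  "Properties": {
    "PolicyName": "my-service-policy",
    "PolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::my-bucket/*"
        }
      ]
    },
    "Roles": [ "my-service-instance-role" ],
    "Groups": [ "developers" ]
  }
}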
There's much more to CloudFormation of course, it's a very powerful tool (see Getting Started with AWS CloudFormation).
I'm trying to constrain the images which a specific IAM group can describe. If I have the following policy for my group, users in the group can describe any EC2 image:
{
  "Effect": "Allow",
  "Action": ["ec2:DescribeImages"],
  "Resource": ["*"]
}
I'd like to only allow the group to describe a single image, but when I try setting "Resource": ["arn:aws:ec2:eu-west-1::image/ami-c37474b7"], I get exceptions when trying to describe the image as a member of the group:
AmazonServiceException Status Code: 403,
AWS Service: AmazonEC2,
AWS Request ID: 911a5ed9-37d1-4324-8493-84fba97bf9b6,
AWS Error Code: UnauthorizedOperation,
AWS Error Message: You are not authorized to perform this operation.
I got the ARN format for EC2 images from IAM Policies for EC2, but perhaps something is wrong with my ARN? I have verified that the describe image request works just fine when my resource value is "*".
Unfortunately the error message is misleading; the problem is that Resource-Level Permissions for EC2 and RDS Resources aren't yet available for all API actions. See this note from Amazon Resource Names for Amazon EC2:
Important
Currently, not all API actions support individual ARNs; we'll add support for additional API actions and ARNs for additional Amazon EC2 resources later. For information about which ARNs you can use with which Amazon EC2 API actions, as well as supported condition keys for each ARN, see Supported Resources and Conditions for Amazon EC2 API Actions.
In particular, all ec2:Describe* actions are still absent from Supported Resources and Conditions for Amazon EC2 API Actions at the time of this writing, which implies that you cannot use anything but "Resource": ["*"] for ec2:DescribeImages.
The referenced page on Granting IAM Users Required Permissions for Amazon EC2 Resources also mentions that AWS will add support for additional actions, ARNs, and condition keys in 2014. They have indeed regularly expanded resource-level permission coverage over the last year or so, but so far only for actions that create or modify resources, not for any that require read access only - something many users, including myself, desire and expect for obvious reasons.