How can I limit EC2 describe images permissions?

I'm trying to constrain the images which a specific IAM group can describe. If I have the following policy for my group, users in the group can describe any EC2 image:
{
  "Effect": "Allow",
  "Action": ["ec2:DescribeImages"],
  "Resource": ["*"]
}
I'd like to only allow the group to describe a single image, but when I try setting "Resource": ["arn:aws:ec2:eu-west-1::image/ami-c37474b7"], I get exceptions when trying to describe the image as a member of the group:
AmazonServiceException Status Code: 403,
AWS Service: AmazonEC2,
AWS Request ID: 911a5ed9-37d1-4324-8493-84fba97bf9b6,
AWS Error Code: UnauthorizedOperation,
AWS Error Message: You are not authorized to perform this operation.
I got the ARN format for EC2 images from IAM Policies for EC2, but perhaps something is wrong with my ARN? I have verified that the describe image request works just fine when my resource value is "*".

Unfortunately the error message is misleading: the problem is that Resource-Level Permissions for EC2 and RDS Resources aren't yet available for all API actions. See this note from Amazon Resource Names for Amazon EC2:
Important
Currently, not all API actions support individual ARNs; we'll add support for additional API actions and ARNs for additional Amazon EC2 resources later. For information about which ARNs you can use with which Amazon EC2 API actions, as well as supported condition keys for each ARN, see Supported Resources and Conditions for Amazon EC2 API Actions.
In particular, all ec2:Describe* actions are still absent from Supported Resources and Conditions for Amazon EC2 API Actions at the time of this writing, which implies that you cannot use anything but "Resource": ["*"] for ec2:DescribeImages.
The referenced page on Granting IAM Users Required Permissions for Amazon EC2 Resources also mentions that AWS will add support for additional actions, ARNs, and condition keys in 2014. They have indeed expanded resource-level permission coverage regularly over the last year or so, but so far only for actions which create or modify resources, not for any which require read access only, something many users (myself included) desire and expect for obvious reasons.
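In practice that means keeping any Describe* statement wide open and only scoping the statements for actions that do support resource-level permissions. A minimal sketch of such a policy, assuming a placeholder account ID and using StartInstances/StopInstances as examples of actions that can be resource-scoped:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DescribeActionsRequireWildcardResource",
      "Effect": "Allow",
      "Action": ["ec2:DescribeImages"],
      "Resource": "*"
    },
    {
      "Sid": "ResourceScopedWhereSupported",
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:eu-west-1:111122223333:instance/*"
    }
  ]
}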

Related

AWS Glue Job getting Access Denied when writing to S3

I have a Glue ETL job, created by CloudFormation. This job extracts data from RDS Aurora and writes to S3.
When I run this job, I get the error below.
The job has an IAM service role.
This service role:
allows the Glue and RDS services to assume it,
has the managed policies arn:aws:iam::aws:policy/AmazonS3FullAccess and arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole attached, and
allows the full range of rds:*, kms:*, and s3:* actions on the corresponding RDS, KMS, and S3 resources.
I get the same error whether the S3 bucket is encrypted with AES256 or aws:kms.
I get the same error whether the job has a Security Configuration or not.
I have a job doing exactly the same thing, which I created manually, and it runs successfully without a Security Configuration.
What am I missing? Here's the full error log
"/mnt/yarn/usercache/root/appcache/application_1...5_0002/container_15...45_0002_01_000001/py4j-0.10.4-src.zip/py4j/protocol.py", line 319, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o145.pyWriteDynamicFrame.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 2.0 failed 4 times, most recent failure: Lost task 3.3 in stage 2.0 (TID 30, ip-10-....us-west-2.compute.internal, executor 1): com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Access Denied (Service: Amazon S3; Status Code: 403; Error Code: AccessDenied; Request ID: F...49), S3 Extended Request ID: eo...wXZw=
at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1588
Unfortunately the error doesn't tell us much except that it's failing during the write of your DynamicFrame.
There are only a handful of possible reasons for the 403; check whether you have covered them all:
Bucket Policy rules on the destination bucket.
The IAM role needs S3 permissions (although you mention having s3:*); see the sketch after this list.
If this is cross-account, there is more to check, such as allow policies on both the bucket and the user. (In general, a trust for the canonical account ID is simplest.)
I don't know how complicated your policy documents might be for the role and bucket, but remember that an explicit Deny statement takes precedence over an Allow.
If the issue is KMS related, check that the subnet you select for the Glue connection has a route to reach the KMS endpoints (you can add a VPC endpoint for KMS).
Make sure the issue is not with the temporary directory that is also configured for your job, or with intermediate write operations before your final one.
Check that your account is the "object owner" of the location you are writing to (normally an issue when reading/writing data between accounts).
If none of the above works, please shed some more light on your setup, perhaps the code for the write operation.
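As a reference point for the permissions item above, a minimal sketch of the S3 statements the job's role would need for the destination bucket (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::my-glue-target-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::my-glue-target-bucket/*"
    }
  ]
}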
In addition to Lydon's answer, error 403 is also received if your data source location is the same as the data target (both defined when creating a job in Glue). Change either of them if they are identical and the issue will be resolved.
You should add a Security Configuration (under the Security tab on the Glue console), providing an S3 encryption mode of either SSE-KMS or SSE-S3.
Then select that security configuration while creating your job under Advanced Properties.
Also verify your IAM role and S3 bucket policy.
It will work.
How are you providing iam:PassRole permission to the Glue role?
{
  "Sid": "AllowAccessToRoleOnly",
  "Effect": "Allow",
  "Action": [
    "iam:PassRole",
    "iam:GetRole",
    "iam:GetRolePolicy",
    "iam:ListRolePolicies",
    "iam:ListAttachedRolePolicies"
  ],
  "Resource": "arn:aws:iam::*:role/<role>"
}
Usually we create roles named <project>-<role>-<env>, e.g. xyz-glue-dev, where the project name is xyz and the env is dev. In that case we use "Resource": "arn:aws:iam::*:role/xyz-*-dev".
For me it was two things.
The access policy for the bucket should be specified correctly as bucket/*; here I was missing the /* part.
An S3 endpoint must be created in the VPC for Glue to access S3 (see the sketch after this answer): https://docs.aws.amazon.com/glue/latest/dg/vpc-endpoints-s3.html
After these two settings, my glue job ran successfully. Hope this helps.
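For the second point, a minimal CloudFormation sketch of an S3 gateway endpoint; the VPC ID, route table ID, and region are placeholders:
{
  "GlueS3Endpoint": {
    "Type": "AWS::EC2::VPCEndpoint",
    "Properties": {
      "ServiceName": "com.amazonaws.us-west-2.s3",
      "VpcEndpointType": "Gateway",
      "VpcId": "vpc-0123456789abcdef0",
      "RouteTableIds": ["rtb-0123456789abcdef0"]
    }
  }
}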
Make sure you have given the right policies.
I was facing the same issue, thought I had the role configured well.
But after I erased the role and followed this step, it worked ;]

How to use AWS ECS Task Role in Node AWS SDK code

Code that uses the AWS Node SDK doesn't seem to be able to gain the role permissions of the ECS task.
If I run the code on an EC2 ECS instance, the code seems to inherit the role of the instance, not that of the task.
If I run the code on Fargate, the code doesn't get any permissions.
By contrast, any bash scripts that run within the instance seem to have the proper permissions.
Indeed, the documentation doesn't mention this as an option for the node sdk, just:
Loaded from IAM roles for Amazon EC2 (if running on EC2),
Loaded from the shared credentials file (~/.aws/credentials),
Loaded from environment variables,
Loaded from a JSON file on disk,
Hardcoded in your application
Is there any way to have your node code gain the permissions of the ECS task?
This seems to be the logical way to pass permissions to your code. It works beautifully with code running on an instance.
The only workaround I can think of is to create one IAM user per ECS service and pass the API Key/Secret as environmental variables in the task definition. However, that doesn't seem very secure since it would be visible in plain text to anyone with access to the task definition.
Your question is missing a lot of details on how you set up your ECS cluster, and I am not sure whether the question is about ECS on EC2 or Fargate specifically.
Make sure that you are using the latest version of the SDK. The JavaScript SDK supports ECS and Fargate task credentials.
Often there is confusion about credentials on ECS. There is the IAM role that is assigned to the Cluster EC2 instances and the IAM role that is assigned to ECS tasks.
The most common problem is the "Trust Relationship" has not been setup on the ECS Task Role. Select your IAM role and then the "Trust Relationships" tab and make sure that it looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
In addition to the standard Amazon ECS permissions required to run tasks and services, IAM users also require iam:PassRole permissions to use IAM roles for tasks.
Next verify that you are using the IAM role in the task definition. Specify the correct IAM role ARN in the Task Role field. Note that this is different from the Task Execution Role (which allows containers to pull images and publish logs).
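For reference, a stripped-down sketch of where the two ARNs live in a task definition; the names, account ID, and image are placeholders:
{
  "family": "my-service",
  "taskRoleArn": "arn:aws:iam::111122223333:role/my-app-task-role",
  "executionRoleArn": "arn:aws:iam::111122223333:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/my-app:latest",
      "essential": true
    }
  ]
}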
Next make sure that your ECS Instances are using the latest version of the ECS Agent. The agent version is listed on the "ECS Instances" tab under the right hand side column "Agent version". The current version is 1.20.3.
Are you using an ECS optimized AMI? If not, add --net=host to your docker run command that starts the agent. Review this link for more information.
I figured it out. This was a weird one.
A colleague thought it would be "safer" if we call Object.freeze on process.env. This was somehow interfering with the SDK's ability to access the credentials.
Removed that "improvement" and all is fine again. I think the lesson is "do not mess with process.env".

How to concisely write a policy to control access to Amazon EC2 resources based on tags

I have an IAM group called "devops" to which I want to apply a policy that will grant members of that group full access to EC2 instances tagged "Class=devops", and no access to any other EC2 instances. I found this great knowledge center article by Amazon which put me on the right path: https://aws.amazon.com/premiumsupport/knowledge-center/iam-ec2-resource-tags/.
The problem as I see it stems from the "Note" about halfway down that page:
"Full control" extends to all actions within the EC2 namespace with the exception of those Amazon EC2 API actions that currently do not support resource-level permissions. For more information, see Unsupported Resource-Level Permissions in the Amazon EC2 API Reference.
If you follow the link in the note to the list of unsupported resource-level permissions, you'll find that it's dozens of items long. You'll also find this statement:
All Amazon EC2 actions can be used in an IAM policy to either grant or deny users permission to use that action. However, not all Amazon EC2 actions support resource-level permissions, which enable you to specify the resources on which an action can be performed. The following Amazon EC2 API actions currently do not support resource-level permissions; therefore, to use these actions in an IAM policy, you must grant users permission to use all resources for the action by using a * wildcard for the Resource element in your statement.
If I wanted to grant "allow" permissions to all of those actions which don't support resource-level permissions, my policy would be hundreds of lines long! Is there a better and more concise way to do this?
There is one simple shortcut. A lot of the actions start with the same word such as "Describe". You can cover this list with a wildcard. Example, "Action" : "ec2:Describe*".
Just be careful with actions that will then override your other policy sections that DENY actions for specific resources.
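Putting the knowledge center article's tag condition together with this shortcut, a sketch of what the devops policy could look like (the tag key and value mirror the question; the Describe* statement must use a * resource):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "FullAccessToDevopsTaggedResources",
      "Effect": "Allow",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {"ec2:ResourceTag/Class": "devops"}
      }
    },
    {
      "Sid": "DescribeActionsRequireWildcard",
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    }
  ]
}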

Why is my Elastic Beanstalk app denied PutItem access to my DynamoDB, despite its role?

The goal
I want to programmatically add an item to a table in my DynamoDB from my Elastic Beanstalk application, using code similar to:
Item item = new Item()
.withPrimaryKey(UserIdAttributeName, userId)
.withString(UserNameAttributeName, userName);
table.putItem(item);
The unexpected result
Logs show the following error message, with the bracketed parts being my edits:
User: arn:aws:sts::[iam id?]:assumed-role/aws-elasticbeanstalk-ec2-role/i-[some number] is not authorized to perform: dynamodb:PutItem on resource: arn:aws:dynamodb:us-west-2:[iam id?]:table/PiggyBanks (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: [the request id])
I am able to get the table just fine, but things go awry when PutItem is called.
The configuration
I created a new Elastic Beanstalk application. According to the documentation, this automatically assigns the application a new role, called:
aws-elasticbeanstalk-service-role
That same documentation indicates that I can add access to my database as follows:
Add permissions for additional services to the default service role in the IAM console.
So, I found the aws-elasticbeanstalk-service-role role and added to it the managed policy, AmazonDynamoDBFullAccess. This policy looks like the following, with additional actions removed for brevity:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "dynamodb:*",
        [removed for brevity]
        "lambda:DeleteFunction"
      ],
      "Effect": "Allow",
      "Resource": "*"
    }
  ]
}
This certainly looks like it should grant the access I need. And, indeed, the policy simulator verifies this. With the following parameters, the action is allowed:
Role: aws-elasticbeanstalk-service-role
Service: DynamoDB
Action: PutItem
Simulation Resource: [Pulled from the above log] arn:aws:dynamodb:us-west-2:[iam id?]:table/PiggyBanks
Update
In answer to the good question by filipebarretto, I instantiate the DynamoDB object as follows:
private static DynamoDB createDynamoDB() {
    AmazonDynamoDBClient client = new AmazonDynamoDBClient();
    client.setRegion(Region.getRegion(Regions.US_WEST_2));
    DynamoDB result = new DynamoDB(client);
    return result;
}
According to this documentation, this should be the way to go about it, because it is using the default credentials provider chain and, in turn, the instance profile credentials,
which exist within the instance metadata associated with the IAM role for the EC2 instance.
[This option] in the default provider chain is available only when running your application on an EC2 instance, but provides the greatest ease of use and best security when working with EC2 instances.
Other things I tried
This related Stack Overflow question had an answer that indicated region might be the issue. I've tried tweaking the region with no additional success.
I have tried forcing the usage of the correct credentials using the following:
AmazonDynamoDBClient client = new AmazonDynamoDBClient(new InstanceProfileCredentialsProvider());
I have also tried creating an entirely new environment from within Elastic Beanstalk.
In conclusion
By the error in the log, it certainly looks like my Elastic Beanstalk application is assuming the correct role.
And, by the results of the policy simulator, it looks like the role should have permission to do exactly what I want to do.
So...please help!
Thank you!
Update the aws-elasticbeanstalk-ec2-role role, instead of the aws-elasticbeanstalk-service-role.
This salient documentation contains the key:
When you create an environment, AWS Elastic Beanstalk prompts you to provide two AWS Identity and Access Management (IAM) roles, a service role and an instance profile. The service role is assumed by Elastic Beanstalk to use other AWS services on your behalf. The instance profile is applied to the instances in your environment and allows them to upload logs to Amazon S3 and perform other tasks that vary depending on the environment type and platform.
In other words, one of these roles (-service-role) is used by the Beanstalk service itself, while the other (-ec2-role) is applied to the actual instance.
It's the latter that pertains to any permissions you need from within your application code.
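If you prefer not to attach the broad AmazonDynamoDBFullAccess managed policy to aws-elasticbeanstalk-ec2-role, a minimal sketch of a scoped inline policy would look like this (the account ID is a placeholder; the table name comes from the error message):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem"
      ],
      "Resource": "arn:aws:dynamodb:us-west-2:111122223333:table/PiggyBanks"
    }
  ]
}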
To load your credentials, try:
InstanceProfileCredentialsProvider mInstanceProfileCredentialsProvider = new InstanceProfileCredentialsProvider();
AWSCredentials credentials = mInstanceProfileCredentialsProvider.getCredentials();
AmazonDynamoDBClient client = new AmazonDynamoDBClient(credentials);
or
AmazonDynamoDBClient client = new AmazonDynamoDBClient(new DefaultAWSCredentialsProviderChain());

Is there a way to reuse AWS IAM permissions policy across users and EC2 instance roles?

I'm starting to use AWS IAM instance roles, which is great for leveling up our security management, but it adds some challenges to maintaining a lightweight development flow.
The problem at hand is:
I define an AWS instance role and attach a security policy to this role. This policy specifies the permissions granted to new EC2 instances launched under this role; it's a long and complicated JSON document with rules such as "allow access to S3 bucket x" and "deny access to SQS y". This works well on EC2.
Next, I'd like to let developers run our code on their local boxes, when developing locally in the IDE, and use the exact same security policy as defined earlier. This is important because when developers produce new code I want them to test it against the exact same security policy they'd run with in production. If they run with a different security policy, there's a chance things will slip.
The problem is that I haven't found a way to do this. It's possible to define IAM Groups, and join the developers into the groups (e.g. "developers" group). In this group I can define an IAM security policy which applies to all developers in this group. But there's no way (that I found) to reuse the policy attached to a group in the context of an instance role. Or to use the policy attached to a role in the context of a group. (did I totally miss this?...)
So to sum up, what I want is: 1) define a security policy document once. 2) reuse this document in two places, one is in IAM instance role definition and the other is IAM groups.
Not being able to do so means I'll have to maintain two copies of a complicated JSON document (which is out of version control) for each type of service (and we have many such services) and for each environment (e.g. stage/prod). I can see how this becomes a maintenance nightmare very easily.
So far the best thing I came up with is perhaps to store the policy documents on disk, under version control, and write a tool that uses the aws api to upload the policy document to the both the instance role and the group. It's somewhat cumbersome so I was hoping for a little more agility.
Do you have a better advice for me?... Thanks!
Thanks @Steffen for pointing out CloudFormation, but I think I found a solution that works better for me.
AWS provides a Security Token Service which, in short, among other things allows you to assume a role.
This is exactly what I was looking for, since I want to define a role once (e.g. a set of AWS permissions) and then let AWS EC2 instances assume this role automatically (easy to do), as well as let developers assume the role for the specific service they are developing. The developers part is a bit more involved, but I'll paste some Java code that shows how to do this below.
First, when defining a role you have to say which principals are allowed to assume it. You do so by editing the Trust Relationships section of the role (at the bottom of a role's definition page in the AWS web UI).
So for example here's a Trust Relationships document that allows EC2 instances, as well as some of the users in my domain, to assume this role (replace your-service-id-number and your-user@example.com):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::your-service-id-number:user/your-user@example.com",
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
And next the Java code that assumes this role when running in local development:
This code checks whether it's running on an EC2 instance; if so it returns whatever the DefaultAWSCredentialsProviderChain resolves (in my case, and as the best practice, the EC2 instance role). If running in a dev environment, i.e. outside of EC2, it assumes the role given by roleName:
AWSCredentialsProvider getCredentialsProvider(String roleName) {
    if (isRunningOnEc2()) {
        return new DefaultAWSCredentialsProviderChain();
    }
    // If not running on EC2, then assume the role provided.
    // Long-lived user credentials, e.g. provided by environment variables.
    final AWSCredentials longLivedCredentials = new DefaultAWSCredentialsProviderChain().getCredentials();
    // The String roleArn is defined at the top of the Role page, titled "Role ARN" ;-)
    final String roleArn = String.format("arn:aws:iam::your-service-id-number:role/%s", roleName);
    final String roleSessionName = "session" + Math.random(); // not sure it's really needed but what the heck..
    return new STSAssumeRoleSessionCredentialsProvider(longLivedCredentials, roleArn, roleSessionName);
}
The utility isRunningOnEc2() is provided:
public static boolean isRunningOnEc2() {
    try {
        // On EC2, the "instance-data" hostname resolves to the instance metadata service.
        final InetAddress byName = InetAddress.getByName("instance-data");
        return byName != null;
    } catch (final UnknownHostException e) {
        return false;
    }
}
Using CloudFormation, as Steffen suggested, might be useful in the general sense as well, mostly in order to retain consistency between my codebase and the actual AWS deployment, but that's something else.
One annoying point though: it's possible to define principals as individual users but not as a group of users, so you cannot actually say "all developers are allowed to assume role x"; rather you have to list each and every developer specifically. This is pretty annoying, but I guess I'm not the only one with this complaint.
Update
AWS has just introduced Managed Policies for AWS Identity & Access Management, which provide a fresh approach to sharing and maintaining IAM policies across IAM entities, specifically aimed at reducing the current duplication of information and effort:
As the size and complexity of your AWS installation grows, you might find yourself editing multiple permission documents in order to add a new permission or to remove an existing one. Auditing and confirming permissions was also more difficult than necessary.
The security blog post An Easier Way to Manage Your Policies provides a walk through with more details.
The new feature can be used via the AWS Management Console and the AWS Command Line Interface (AWS CLI) as usual (but presumably not via AWS CloudFormation yet).
Initial Answer
So far the best thing I came up with is perhaps to store the policy documents on disk, under version control, and write a tool that uses the aws api to upload the policy document to the both the instance role and the group. It's somewhat cumbersome so I was hoping for a little more agility.
A tool like this already exists and is in fact a major backing technology for many of AWS' own and 3rd party services - have a look at AWS CloudFormation, which gives developers and systems administrators an easy way to create and manage a collection of related AWS resources, provisioning and updating them in an orderly and predictable fashion.
More specifically, the AWS::IAM::Policy resource associates an IAM policy with IAM users, roles, or groups:
{
  "Type" : "AWS::IAM::Policy",
  "Properties" : {
    "Groups" : [ String, ... ],
    "PolicyDocument" : JSON,
    "PolicyName" : String,
    "Roles" : [ String, ... ],
    "Users" : [ String, ... ]
  }
}
There's much more to CloudFormation of course, it's a very powerful tool (see Getting Started with AWS CloudFormation).
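For example, a sketch of a single shared policy resource attached to both a developers group and an instance role; the resource names, group name, and bucket are placeholders, and the role is assumed to be another resource in the same template:
{
  "SharedServicePolicy": {
    "Type": "AWS::IAM::Policy",
    "Properties": {
      "PolicyName": "my-service-policy",
      "PolicyDocument": {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::my-service-bucket/*"
          }
        ]
      },
      "Groups": ["developers"],
      "Roles": [{"Ref": "MyServiceInstanceRole"}]
    }
  }
}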