I'm pretty new at working with AWS and I'm just experimenting and trying to learn. So I have an EC2 instance with an IAM role attached. I also have an EFS filesystem with the below policy in place. My intent was to restrict mounting the access point to EC2 instances with the IAM role attached.
But when I try to mount from the EC2 instance I get access denied.
mount.nfs4: access denied by server while mounting 127.0.0.1:
If I change the principal to "AWS" : "*" I can mount the access point. According to the docs I can specify the IAM role used by the EC2 instance as the principal but it doesn't seem to work.
I suspect my problem is somehow with the role I have attached to the EC2 instance. The role has the EFS client actions, but when I look at the role in the IAM console and check Access Advisor, it says the role is never accessed. So I may be doing something fundamentally wrong.
{
    "Version": "2012-10-17",
    "Id": "access-point-www",
    "Statement": [
        {
            "Sid": "access-point-webstorage",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::12345678:role/wwwservers"
            },
            "Action": [
                "elasticfilesystem:ClientMount",
                "elasticfilesystem:ClientWrite"
            ],
            "Resource": "arn:aws:elasticfilesystem:us-east-1:12345678:file-system/fs-987654da",
            "Condition": {
                "StringEquals": {
                    "elasticfilesystem:AccessPointArn": "arn:aws:elasticfilesystem:us-east-1:12345678:access-point/fsap-01ffffbfb38217bcd"
                }
            }
        }
    ]
}
Did you enable IAM mounting? Otherwise AWS tries to mount the EFS volume as an anonymous principal.
For EC2, as in your case, you can just pass -o iam as an option to your mount call.
See: https://docs.amazonaws.cn/en_us/efs/latest/ug/efs-mount-helper.html#mounting-IAM-option
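For example, with the amazon-efs-utils mount helper (a rough sketch using the file system and access point IDs from your policy; adjust the mount point to your setup):

# The iam option makes the mount helper sign the NFS connection with the
# credentials of the instance's attached role; tls is required when iam is used.
sudo mount -t efs -o tls,iam,accesspoint=fsap-01ffffbfb38217bcd fs-987654da:/ /mnt/www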
For ECS task definitions, this can be done via aws_ecs_task_definition.volume.efs_volume_configuration.authorization_config, like this:
resource "aws_ecs_task_definition" "service" {
family = "something"
container_definitions = file("something.json")
volume {
name = "service-storage"
efs_volume_configuration {
file_system_id = aws_efs_file_system.efs[0].id
root_directory = "/"
transit_encryption = "ENABLED"
authorization_config {
iam = "ENABLED"
}
}
}
}
iam - (Optional) Whether or not to use the Amazon ECS task IAM role defined in a task definition when mounting the Amazon EFS file system. If enabled, transit encryption must be enabled in the EFSVolumeConfiguration. Valid values: ENABLED, DISABLED. If this parameter is omitted, the default value of DISABLED is used.
This is relevant if your CloudTrail shows errors where an anonymous principal tries to mount your EFS. Such errors look something like this:
{
    "eventVersion": "1.08",
    "userIdentity": {
        "type": "AWSAccount",
        "principalId": "",
        "accountId": "ANONYMOUS_PRINCIPAL"
    },
    "eventSource": "elasticfilesystem.amazonaws.com",
    "eventName": "NewClientConnection",
    "sourceIPAddress": "AWS Internal",
    "userAgent": "elasticfilesystem",
    "errorCode": "AccessDenied",
    "readOnly": true,
    "resources": [
        {
            "accountId": "XXXXXX",
            "type": "AWS::EFS::FileSystem",
            "ARN": "arn:aws:elasticfilesystem:eu-west-1:XXXXXX:file-system/YYYYYY"
        }
    ],
    "eventType": "AwsServiceEvent",
    "managementEvent": true,
    "eventCategory": "Management",
    "recipientAccountId": "XXXXXX",
    "sharedEventID": "ZZZZZZZZ",
    "serviceEventDetails": {
        "permissions": {
            "ClientRootAccess": false,
            "ClientMount": false,
            "ClientWrite": false
        },
        "sourceIpAddress": "nnnnnnn"
    }
}
Note: "principalId": "", and "accountId": "ANONYMOUS_PRINCIPAL"
Related
I want to attach an AWS managed policy to an existing role. I am doing this using the following template:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "AWS CloudFormation template to modify Role",
    "Parameters": {
        "MyRole": {
            "Type": "String",
            "Default": "MyRole",
            "Description": "Role to be modified"
        }
    },
    "Resources": {
        "S3FullAccess": {
            "Type": "AWS::IAM::ManagedPolicy",
            "Properties": {
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Action": [
                            "s3:*",
                            "s3-object-lambda:*"
                        ],
                        "Resource": "*"
                    }]
                },
                "Roles": [
                    "MyRole"
                ]
            }
        }
    }
}
This template will create a policy granting S3 full access and attach it to MyRole. But I do not want to create a new policy; if I want to use the AWS managed policy for S3 full access that already exists, how can I do that?
And if I use this template:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "AWS CloudFormation template to modify Role",
    "Resources": {
        "IAMRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "Path": "/",
                "ManagedPolicyArns": [
                    "arn:aws:iam::aws:policy/ReadOnlyAccess"
                ],
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Action": "sts:AssumeRole",
                        "Effect": "Allow",
                        "Principal": {
                            "AWS": "*"
                        }
                    }]
                },
                "RoleName": "RoleName"
            }
        }
    }
}
This will attempt to create a new role and attach the ReadOnlyAccess policy to it. But if I want to attach a policy to an existing role, how do I refer to that role in the template?
You use your AWS::IAM::Role's ManagedPolicyArns property, where you just specify the ARN of the managed policy to attach.
To use an existing role in CloudFormation, you have to import it. Then you will be able to manage it from CloudFormation.
In general, the CloudFormation service is for creating resources. There is no native support for modifying already created resources unless you import them.
If you don't want to import them, you have the option of writing a CloudFormation custom resource. You can create a Lambda-backed custom resource, pass in the ARNs of the IAM policy and the IAM role, and attach the policy with the IAM AttachRolePolicy API. More details are in the AWS documentation.
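If a one-off attachment outside CloudFormation is acceptable, the call such a custom resource would make boils down to this (a hedged sketch; the role name is the parameter default from your template, and the ARN is the existing AWS managed S3 full-access policy):

# Attach the existing AWS managed policy to the existing role without
# creating a new customer managed policy.
aws iam attach-role-policy \
    --role-name MyRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess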
I often get CloudWatch authorization alerts because the role attached to my SageMaker instance doesn't seem to have enough SSM (Systems Manager) permissions to call UpdateInstanceInformation. My understanding is that the amazon-ssm-agent wants to hit an AWS API but fails to do so.
My Role has full SSM permissions:
{
    "Action": [
        "ssm:*",
        "ssmmessages:*"
    ],
    "Resource": "*",
    "Effect": "Allow"
}
but the error persists:
{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "XXXXXXXXXXXXX:SageMaker",
        "arn": "arn:aws:sts::XXXXXXXXXXXXX:assumed-role/sagemaker_prod_Notebook_Instance_Role/SageMaker",
        "sessionContext": {
            "sessionIssuer": {
                "type": "Role",
                "principalId": "XXXXXXXXXXXXX",
                "arn": "arn:aws:iam::XXXXXXXXXXXXX:role/sagemaker_prod_Notebook_Instance_Role",
                "accountId": "XXXXXXXXXXXXX",
                "userName": "sagemaker_prod_Notebook_Instance_Role"
            }
        },
        "invokedBy": "im.amazonaws.com"
    },
    "eventSource": "ssm.amazonaws.com",
    "eventName": "UpdateInstanceInformation",
    "sourceIPAddress": "im.amazonaws.com",
    "userAgent": "im.amazonaws.com",
    "errorCode": "AccessDenied",
    "errorMessage": "An unknown error occurred",
    "requestParameters": {
        "instanceId": "i-045f627a2d2e469b1",
        "agentVersion": "2.3.714.0",
        "platformType": "Linux",
        "agentName": "amazon-ssm-agent"
    },
    "eventType": "AwsApiCall"
}
Has anyone seen this before?
This is a bit late, but I had a similar issue, so I reached out to AWS Support, and it seems to be somewhat of a bug.
I was told that the AWS SageMaker team installs the SSM agent by default. The SageMaker notebook runs in an AWS service account, so when a customer assigns SageMaker a role in their own account, the agent cannot perform UpdateInstanceInformation via that customer-assigned role.
Support suggested I create a lifecycle config and leverage the following code sample to fix it:
https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html
https://github.com/aws-samples/amazon-sagemaker-notebook-instance-lifecycle-config-samples/blob/master/scripts/disable-uninstall-ssm-agent/on-start.sh
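The linked sample essentially stops and removes the pre-installed agent. A minimal, untested on-start sketch along those lines (assumes a systemd-based notebook instance; see the maintained sample above for the real thing):

#!/bin/bash
# Stop and disable the pre-installed SSM agent so it no longer calls
# ssm:UpdateInstanceInformation with the notebook's role.
set -e
sudo systemctl stop amazon-ssm-agent || true
sudo systemctl disable amazon-ssm-agent || true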
I would like to have a CloudFormation template create an EC2 instance and give that instance access to an S3 bucket.
One way is to have the template create an IAM user with proper permissions and use its access key to grant access.
But what if I don't want to give that user access to the IAM service?
Is there a way to have that user deploy this template without IAM?
UPDATE:
I want to be able to just share that template, so I am wondering if it is possible to not have a dependency on pre-existing IAM resources (roles, policies, etc.).
The common method to grant permissions to an instance is an instance profile. You create a role with all the required permissions, assign that role to an instance profile, and then assign the profile to any instance you need.
You can do this with CloudFormation:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "myEC2Instance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-205fba49",
                "InstanceType": "t2.micro",
                "IamInstanceProfile": {
                    "Ref": "RootInstanceProfile"
                }
            }
        },
        "MyRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "AssumeRolePolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Principal": {
                            "Service": ["ec2.amazonaws.com"]
                        },
                        "Action": ["sts:AssumeRole"]
                    }]
                },
                "Path": "/"
            }
        },
        "RolePolicies": {
            "Type": "AWS::IAM::Policy",
            "Properties": {
                "PolicyName": "s3",
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [{
                        "Effect": "Allow",
                        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
                        "Resource": ["arn:aws:s3:::examplebucket/*"]
                    }]
                },
                "Roles": [{ "Ref": "MyRole" }]
            }
        },
        "RootInstanceProfile": {
            "Type": "AWS::IAM::InstanceProfile",
            "Properties": {
                "Path": "/",
                "Roles": [{ "Ref": "MyRole" }]
            }
        }
    }
}
If you want to avoid giving IAM access to the user deploying this template, you can create the instance profile before deploying the template and specify the already existing instance profile in the template. I haven't tried that yet, but it seems it should only require ec2:AssociateIamInstanceProfile, and you should be able to constrain that to just that one specific profile.
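As a rough sketch of that approach (untested; the names are placeholders, and the role itself must already exist), an administrator would create the profile once, and the instance can then reference or be associated with it:

# One-time setup by an admin: wrap the existing role in an instance profile.
aws iam create-instance-profile --instance-profile-name my-s3-writer-profile
aws iam add-role-to-instance-profile \
    --instance-profile-name my-s3-writer-profile \
    --role-name MyExistingS3Role

# Attaching the pre-existing profile to a running instance; the caller needs
# ec2:AssociateIamInstanceProfile and typically iam:PassRole on the role.
aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=my-s3-writer-profile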
It depends on what you mean by the IAM service.
You can create IAM user access keys that give permissions to specific AWS services and no others. Access keys do not allow IAM console access (that requires login credentials or federation).
For your use case your user will need at a minimum:
Permission to use CloudFormation to execute your template.
Permission to create the EC2 instance.
These permissions are defined in a policy that you add to the IAM user in the AWS Management Console. You can create users that cannot log into the console. Then you generate the Access Keys that the user will use in their application, AWS CLI, etc.
Overview of IAM Policies
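A hedged sketch of such a policy attached to the deploying user (the user name is a placeholder and the action list is illustrative, not exhaustive; depending on the template you may also need iam:PassRole and further actions):

# Inline policy covering the two points above: executing the CloudFormation
# template and launching the EC2 instance.
aws iam put-user-policy \
    --user-name template-deployer \
    --policy-name DeployEc2Stack \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": [
                "cloudformation:CreateStack",
                "cloudformation:DescribeStacks",
                "cloudformation:DescribeStackEvents",
                "ec2:RunInstances",
                "ec2:Describe*"
            ],
            "Resource": "*"
        }]
    }'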
In Account A I created an S3 bucket with CloudFormation, and a CodeBuild project builds an artifact and uploads it to this bucket. In Account B I try to create a stack with CloudFormation and use the artifact from Account A's bucket to deploy my Lambda function. But I get an Access Denied error. Does anyone know the solution? Thanks...
"TestBucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"Properties": {
"AccessControl": "BucketOwnerFullControl"
}
},
"IAMPolicy": {
"Type": "AWS::S3::BucketPolicy",
"Properties": {
"Bucket": {
"Ref": "TestBucket"
},
"PolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": [
"arn:aws:iam::xxxxxxxxxxxx:root",
"arn:aws:iam::xxxxxxxxxxxx:root"
]
},
"Action": [
"s3:GetObject"
],
"Resource": [
{
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "TestBucket"
},
"/*"
]
]
},
{
"Fn::Join": [
"",
[
"arn:aws:s3:::",
{
"Ref": "TestBucket"
}
]
]
}
]
}
]
}
}
}
Assuming that the xxxxx in the statement below is the account number of Account B:
"AWS": [
"arn:aws:iam::xxxxxxxxxxxx:root",
"arn:aws:iam::xxxxxxxxxxxx:root"
]
You are saying that this bucket grants access to Account B on the basis of the IAM permissions/policies held by identities in Account B's own IAM service.
So essentially all the users/instance profiles/roles in Account B that have explicit S3 access will be able to access this bucket in Account A. This suggests that the IAM policy you are attaching to the Lambda role in Account B doesn't have explicit S3 access.
I would suggest giving S3 access to your Lambda function's role, and this should work.
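For example (a hedged sketch; the role and bucket names are placeholders, not values from the question):

# Give the Account B role that pulls the artifact explicit permission to read
# objects from the Account A bucket.
aws iam put-role-policy \
    --role-name my-lambda-deploy-role \
    --policy-name AllowArtifactBucketRead \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::account-a-artifact-bucket/*"
        }]
    }'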
Please be aware that in the future, if you want to write to the S3 bucket in Account A from Account B, you will have to make sure you set the bucket-owner-full-control ACL so that the objects are available across all the accounts.
Example:
Using CLI:
$ aws s3api put-object --acl bucket-owner-full-control --bucket my-test-bucket --key dir/my_object.txt --body /path/to/my_object.txt
Instead of "arn:aws:iam::xxxxxxxxxxxx:root" granting access to the root role only, try granting access to all identities in the account by specifying just the account ID as the item within the Principal/AWS object: "xxxxxxxxxxxx".
See Using a Resource-based Policy to Delegate Access to an Amazon S3 Bucket in Another Account for more details.
I'm trying to narrow down the minimal policy to run a predefined machine image. The image is based on two snapshots and I only want "m1.medium" instance types to be launched.
Based on that and with the help of this page and this article, I worked out the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1385026304010",
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances"
            ],
            "Condition": {
                "StringEquals": {
                    "ec2:InstanceType": "m1.medium"
                }
            },
            "Resource": [
                "arn:aws:ec2:us-east-1::instance/*",
                "arn:aws:ec2:us-east-1::image/ami-f1c3e498",
                "arn:aws:ec2:us-east-1::snapshot/snap-e2f51ffa",
                "arn:aws:ec2:us-east-1::snapshot/snap-18ca2000",
                "arn:aws:ec2:us-east-1::key-pair/shenton",
                "arn:aws:ec2:us-east-1::security-group/sg-6af56d02",
                "arn:aws:ec2:us-east-1::volume/*"
            ]
        }
    ]
}
The policy narrows down the exact image, snapshots, security group and key-pair while leaving the specific instance and volume open.
I'm using the CLI tools as follows, as described here:
aws ec2 run-instances --dry-run \
    --image-id ami-f1c3e498 \
    --key-name shenton \
    --security-group-ids sg-6af56d02 \
    --instance-type m1.medium
The ~/.aws/config is as follows:
[default]
output = json
region = us-east-1
aws_access_key_id = ...
aws_secret_access_key = ...
The command results in a generic You are not authorized to perform this operation message and the encoded authorization failure message indicates that none of my statements were matched and therefore it rejects the action.
Changing to "Resource": "*" resolves the issue obviously, but I want to gain more understanding as to why the above doesn't work. I fully realize that this involves some degree of guess work, so I welcome any ideas.
I was contacted by Jeff Barr from Amazon Web Services, and he kindly helped me find out what the issue was.
First you need to decode the authorization failure message using the following command:
$ aws sts decode-authorization-message --encoded-message 6gO3mM3p....IkgLj8ekf
Make sure the IAM user / role has permission for the sts:DecodeAuthorizationMessage action.
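If that decode call is itself denied, here is a small hedged example of granting just that action to the calling user (the user name is a placeholder):

# Allow the calling user to decode encoded authorization failure messages.
aws iam put-user-policy \
    --user-name testuser \
    --policy-name AllowDecodeAuthMessage \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "sts:DecodeAuthorizationMessage",
            "Resource": "*"
        }]
    }'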
The response contains a DecodedMessage key with another JSON-encoded body:
{
    "allowed": false,
    "explicitDeny": false,
    "matchedStatements": {
        "items": []
    },
    "failures": {
        "items": []
    },
    "context": {
        "principal": {
            "id": "accesskey",
            "name": "testuser",
            "arn": "arn:aws:iam::account:user/testuser"
        },
        "action": "ec2:RunInstances",
        "resource": "arn:aws:ec2:us-east-1:account:instance/*",
        "conditions": { ... }
    }
}
Under context => resource it shows the resource it was attempting to match against the policy; as you can see, it expects an account number. The ARN documentation should therefore be read as:
Unless otherwise specified, the region and account are required.
Adding the account number or * to the affected ARNs fixed the problem:
"Resource": [
"arn:aws:ec2:us-east-1:*:instance/*",
"arn:aws:ec2:us-east-1:*:image/ami-f1c3e498",
"arn:aws:ec2:us-east-1:*:snapshot/snap-e2f51ffa",
"arn:aws:ec2:us-east-1:*:snapshot/snap-18ca2000",
"arn:aws:ec2:us-east-1:*:key-pair/shenton",
"arn:aws:ec2:us-east-1:*:security-group/sg-6af56d02",
"arn:aws:ec2:us-east-1:*:volume/*"
]