Defining IAM role policy for Glue ETL job - amazon-web-services

I have a Glue job which triggers every time data lands in S3. I initially used the AWS managed AWSGlueServiceRole policy for running my Glue job. But since this managed policy is not safe to use, as it grants broad access to services like S3 and EC2, which may be dangerous, I tried to modify a few of the permissions and ran into problems.
The main problem is with the EC2 resources. For example, see this part of the policy:
{
    "Effect": "Allow",
    "Action": [
        "glue:*",
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:ListAllMyBuckets",
        "s3:GetBucketAcl",
        "ec2:DescribeVpcEndpoints",
        "ec2:DescribeRouteTables",
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVpcAttribute",
        "iam:ListRolePolicies",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "cloudwatch:PutMetricData"
    ],
    "Resource": [
        "*"
    ]
}
I have taken the above from this page.
I can restrict S3 access, since I know which buckets the Glue job reads from and writes to, but how can I restrict access to the EC2 actions? Also, which Glue actions do I need, given that the policy above grants full access to Glue as well ("glue:*")?
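For the S3 part, a minimal sketch of how the statement could presumably be narrowed, using hypothetical source and target bucket names (the object-level s3:GetObject/s3:PutObject actions are not in the managed statement above, but a job that actually reads and writes data will typically also need them on the object ARNs):
{
    "Effect": "Allow",
    "Action": [
        "s3:GetBucketLocation",
        "s3:GetBucketAcl",
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject"
    ],
    "Resource": [
        "arn:aws:s3:::my-glue-source-bucket",
        "arn:aws:s3:::my-glue-source-bucket/*",
        "arn:aws:s3:::my-glue-target-bucket",
        "arn:aws:s3:::my-glue-target-bucket/*"
    ]
}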

Related

Restrict lambda permissions to access VPCs

When deploying a lambda function to a VPC, you're required to grant a bunch of network-interface-related permissions to the lambda's execution role. The AWS documentation advises using the AWSLambdaVPCAccessExecutionRole managed policy for this, which looks like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents",
                "ec2:CreateNetworkInterface",
                "ec2:DescribeNetworkInterfaces",
                "ec2:DeleteNetworkInterface",
                "ec2:AssignPrivateIpAddresses",
                "ec2:UnassignPrivateIpAddresses"
            ],
            "Resource": "*"
        }
    ]
}
As one can see, this policy doesn't restrict which network interfaces the lambda can modify, thus potentially allowing it to mess with networking outside its own VPC. I'd like to limit the actions the lambda can perform to the VPC or subnets it's actually deployed into. However, so far I have failed to come up with a working policy for that.
I tried to check the VPC in the policy like this:
"Condition": {"StringEquals": {"ec2:Vpc": "${my_vpc_arn}" }}
but still got permission denied.
The CloudTrail event contains the following authorization message (decoded with aws sts decode-authorization-message): https://pastebin.com/P9t3QWEY, where I can't see any useful keys to check.
So is it possible to restrict a VPC-deployed lambda to only modify particular network interfaces?
You can't restrict the policy to individual NIs, as you don't know their ids until after you create them. But you should be able to restrict access to a specific VPC using the following lambda execution policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccessToSpecificVPC",
            "Effect": "Allow",
            "Action": [
                "ec2:CreateNetworkInterface",
                "ec2:DeleteNetworkInterface",
                "ec2:UnassignPrivateIpAddresses",
                "ec2:AssignPrivateIpAddresses",
                "ec2:DescribeNetworkInterfaces"
            ],
            "Resource": "*",
            "Condition": {
                "ArnLikeIfExists": {
                    "ec2:Vpc": "arn:aws:ec2:<your-region>:<your-account-id>:vpc/<vpc-id>"
                }
            }
        },
        {
            "Sid": "CWLogsPermissions",
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogStream",
                "logs:CreateLogGroup",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
The Lambda service needs to be able to create and remove network interfaces in your VPC, because a shared ENI is deployed into the VPC and, once all execution contexts are terminated, that shared ENI is removed again. This also explains why the Describe permissions are needed: the service probably has to figure out whether a shared ENI is already deployed for the specific lambda function.
Unfortunately, that means you can't restrict the delete/modify operations to particular ENIs, as those are created and removed dynamically.
According to the documentation the specific permissions the Role needs are:
ec2:CreateNetworkInterface
ec2:DescribeNetworkInterfaces
ec2:DeleteNetworkInterface
I checked the documentation, and the Create and Delete actions support (among others) the following condition keys:
ec2:Subnet
ec2:Vpc
This means it should be possible. Maybe separating the ec2:* permissions into their own statement with the aforementioned conditions could help you.
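A minimal sketch of that separation, mirroring the ArnLikeIfExists style from the policy above and using placeholder region, account, and subnet IDs (the Describe action has no resource-level support, so it stays in its own unrestricted statement):
{
    "Sid": "ENIWriteActionsLimitedToOwnSubnet",
    "Effect": "Allow",
    "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface"
    ],
    "Resource": "*",
    "Condition": {
        "ArnLikeIfExists": {
            "ec2:Subnet": "arn:aws:ec2:<your-region>:<your-account-id>:subnet/<subnet-id>"
        }
    }
},
{
    "Sid": "ENIDescribeUnrestricted",
    "Effect": "Allow",
    "Action": "ec2:DescribeNetworkInterfaces",
    "Resource": "*"
}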

Jenkins Packer AWS credentials validation

OK, here is the scenario.
I have a Jenkins slave in AWS, and I've attached a role to it that allows it to create EC2 resources. I found the role via the Packer GitHub issue list. Here is the role.
I have my Packer project attempting to build on the slave. When the build starts, it fails with:
Build 'amazon-ebs' errored: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
==> Some builds didn't complete successfully and had errors:
--> amazon-ebs: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
If I run aws configure and put in real credentials, this obviously works, but I'm trying to avoid that. I have verified that the instance has the proper role attached. I have also verified that I can properly switch into this role via the command line.
What I seem to be missing is why this continues to fail when the role is attached to the instance and Packer specifies it via 'iam_instance_profile'.
Any thoughts?
So, after a lot of help from Castaglia, I was able to get this to work. There seemed to be something wrong with the role I had created: I deleted it and recreated it with the same name and the same policy attached, and after that it worked fine.
Of note, I believe the Packer instructions have an error. They list the following as all that is needed for the role:
{
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:AttachVolume",
            "ec2:CreateVolume",
            "ec2:DeleteVolume",
            "ec2:CreateKeypair",
            "ec2:DeleteKeypair",
            "ec2:DescribeSubnets",
            "ec2:CreateSecurityGroup",
            "ec2:DeleteSecurityGroup",
            "ec2:AuthorizeSecurityGroupIngress",
            "ec2:CreateImage",
            "ec2:CopyImage",
            "ec2:RunInstances",
            "ec2:TerminateInstances",
            "ec2:StopInstances",
            "ec2:DescribeVolumes",
            "ec2:DetachVolume",
            "ec2:DescribeInstances",
            "ec2:CreateSnapshot",
            "ec2:DeleteSnapshot",
            "ec2:DescribeSnapshots",
            "ec2:DescribeImages",
            "ec2:RegisterImage",
            "ec2:CreateTags",
            "ec2:ModifyImageAttribute"
        ],
        "Resource": "*"
    }]
}
But I believe you need one more piece:
{
    "Sid": "PackerIAMPassRole",
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": [
        "*"
    ]
}
Doing this allowed me to assume the role and build what I needed.
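If the wildcard resource feels too broad, iam:PassRole does support resource-level restriction, so that extra statement could presumably be scoped to the specific role Packer passes to the builder instance; a sketch with a placeholder role name:
{
    "Sid": "PackerIAMPassRole",
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "arn:aws:iam::<your-account-id>:role/<packer-builder-role>"
}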

Correct Privileges in Amazon S3 Bucket Policy for AWS PHP SDK use

I have several S3 buckets belonging to different clients. I am using the AWS SDK for PHP in my application to upload photos to the S3 bucket. I am using the AWS SDK for Laravel 4, to be exact, but I don't think the issue is with this specific implementation.
The problem is that unless I give the AWS user my server is using FullS3Access, it will not upload photos to the bucket; it just says Access Denied! I first tried giving full access only to the bucket in question, then I realized I should add the ability to list all buckets, because that is probably what the SDK does to confirm the credentials, but still no luck.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets"
            ],
            "Resource": "arn:aws:s3:::*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::clientbucket"
            ]
        }
    ]
}
It is a big security concern for me that this application needs access to all S3 buckets in order to work.
Jeremy is right, it's permissions-related and not specific to the SDK, so far as I can see here. You should certainly be able to scope your IAM policy down to just what you need here -- we limit access to buckets by varying degrees often, and it's just an issue of getting the policy right.
You may want to try using the AWS Policy Simulator from within your account. (That link will take you to an overview, the simulator itself is here.) The policy generator is also helpful a lot of the time.
As for the specific policy above, I think you can drop the second statement and merge it into the last one (the one that is scoped to your specific bucket), which may benefit from some wildcard actions, since that may be what's causing the issue:
"Action": [
"s3:Delete*",
"s3:Get*",
"s3:List*",
"s3:Put*"
]
That basically gives super powers to this account, but only for the one bucket.
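As a rough sketch of that merged statement: object-level actions such as GetObject and PutObject match the object ARN (the bucket ARN plus /*), while ListBucket matches the bucket ARN itself, so including both resources avoids a common source of Access Denied here:
{
    "Effect": "Allow",
    "Action": [
        "s3:Delete*",
        "s3:Get*",
        "s3:List*",
        "s3:Put*"
    ],
    "Resource": [
        "arn:aws:s3:::clientbucket",
        "arn:aws:s3:::clientbucket/*"
    ]
}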
I would also recommend creating an IAM server role if you're using a dedicated instance for this application/client. That will make things even easier in the future.

What policy templates should an AWS IAM user have in order to deploy an EB application?

What policy templates should an AWS IAM user have in order to deploy and maintain an EB application (e.g. website code from a client machine)? IAMReadOnlyAccess plus PowerUserAccess seem sufficient, but I'm wondering whether the latter is overkill. Can I restrict policies to a single EB instance or application?
When you create an IAM role in the web console, there is a pre-defined policy called ElasticBeanstalkFullAccess. This will give you full permission to all the underlying resources that Elastic Beanstalk needs. You can see the general doc on this.
Restricting to specific environments or applications is much harder, but doable. It requires you to restrict the user to specific resources using ARNs, including all underlying resources and their ARNs. See the doc on this.
For reference, the full elastic beanstalk policy looks like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "elasticbeanstalk:*",
                "ec2:*",
                "elasticloadbalancing:*",
                "autoscaling:*",
                "cloudwatch:*",
                "s3:*",
                "sns:*",
                "cloudformation:*",
                "rds:*",
                "sqs:*",
                "iam:PassRole"
            ],
            "Resource": "*"
        }
    ]
}
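As a very rough sketch of the restriction approach for the Beanstalk actions themselves, with placeholder application and environment names (in practice you would still have to scope the underlying EC2, S3, CloudFormation, etc. actions in a similar fashion):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "elasticbeanstalk:*",
            "Resource": [
                "arn:aws:elasticbeanstalk:<region>:<account-id>:application/my-app",
                "arn:aws:elasticbeanstalk:<region>:<account-id>:environment/my-app/my-env"
            ]
        }
    ]
}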

How to restrict a user to a specific instance volume in AWS using IAM policy

I am working on Amazon Web Services, designing custom IAM policies.
I have a user who has restricted access to instances, e.g. he can start and stop them. Similarly, I want to restrict the user to attaching and detaching specific volumes.
I have created this policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "TheseActionsDontSupportResourceLevelPermissions",
            "Effect": "Allow",
            "Action": ["ec2:DescribeInstances","ec2:DescribeInstanceStatus","ec2:DescribeVolumeAttribute","ec2:DescribeVolumeStatus","ec2:DescribeVolumes"], ,
            "Resource": "*"
        },
        {
            "Sid": "TheseActionsSupportResourceLevelPermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:RunInstances",
                "ec2:TerminateInstances",
                "ec2:StopInstances",
                "ec2:StartInstances",
                "ec2:AttachVolume",
                "ec2:DetachVolume"
            ],
            "Resource": "arn:aws:ec2:us-west-2:AccountID:instance/instanceID",
            "Resource": "arn:aws:ec2:us-west-2:AccountID:instance/instanceID",
            "Resource": "arn:aws:ec2:us-west-2:AccountID:instance/instanceID",
            "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/volID",
            "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/volID",
            "Resource": "arn:aws:ec2:us-east-1:123456789012:volume/volID"
        }
    ]
}
When I apply this policy, it does not show me any volumes. I get an error:
error fetching the volume details.
Any lead is appreciated.
Thanks
Update
The best way to test/debug IAM policies is by means of the fantastic IAM Policy Simulator (see Using the IAM Policy Simulator for the actual link and instructions). With its help, the solution below can easily be verified to be working correctly.
I recommend adding a dedicated test user to your account with no policies attached (i.e. an implicit Deny All) and then using Mode: New Policy to assemble and simulate the policy in question, e.g. for the use case at hand:
use two volumes and allow only one of them via the policy, then simulate the policy against both resources; one will yield Denied and the other Allowed for AttachVolume and DetachVolume.
Once satisfied, you can apply the assembled policy to the entities in your account and recheck via Mode: Existing Policies.
Initial Answer
I wonder how you have been able to apply this IAM policy in the first place, insofar as it is syntactically invalid JSON (note the stray comma after the Action array in the first statement and the repeated Resource keys in the second)?
The syntax error aside, that's also the source of your problem:
As indicated by TheseActionsDontSupportResourceLevelPermissions, a few EC2 API actions do not support the comparatively new Resource-Level Permissions for EC2 and RDS Resources yet, see this note from Amazon Resource Names for Amazon EC2:
Important: Currently, not all API actions support individual ARNs; we'll add support for additional API actions and ARNs for additional Amazon EC2 resources later. For information about which ARNs you can use with which Amazon EC2 API actions, as well as supported condition keys for each ARN, see Supported Resources and Conditions for Amazon EC2 API Actions.
You will find that all ec2:Describe* actions are indeed still absent from Supported Resources and Conditions for Amazon EC2 API Actions at the time of this writing. This also includes the ec2:DescribeVolume* actions, which is why you receive the error.
Fixing the first statement as outlined below should remedy the issue:
{
    "Statement": [
        {
            "Sid": "TheseActionsDontSupportResourceLevelPermissions",
            "Action": [
                "ec2:DescribeVolumeAttribute",
                "ec2:DescribeVolumeStatus",
                "ec2:DescribeVolumes"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Sid": "TheseActionsSupportResourceLevelPermissions",
            "Effect": "Allow",
            "Action": [
                "ec2:AttachVolume",
                "ec2:DetachVolume"
            ],
            "Resource": "arn:aws:ec2:<region>:<account number>:volume/<volume id>"
        }
    ]
}