My code is running on an EC2 machine. I use some AWS services inside the code, so I'd like to fail on start-up if those services are unavailable.
For example, I need to be able to write a file to an S3 bucket. This happens only after my code has been running for several minutes, so it's painful to discover after a 5-minute delay that the IAM role wasn't configured correctly.
Is there a way to figure out if I have PutObject permission on a specific S3 bucket+prefix? I don't want to write dummy data to figure it out.
You can programmatically test permissions with the SimulatePrincipalPolicy API:
Simulate how a set of IAM policies attached to an IAM entity works with a list of API actions and AWS resources to determine the policies' effective permissions.
Check out the blog post below that introduces the API. From that post:
AWS Identity and Access Management (IAM) has added two new APIs that enable you to automate validation and auditing of permissions for your IAM users, groups, and roles. Using these two APIs, you can call the IAM policy simulator using the AWS CLI or any of the AWS SDKs. Use the new iam:SimulatePrincipalPolicy API to programmatically test your existing IAM policies, which allows you to verify that your policies have the intended effect and to identify which specific statement in a policy grants or denies access to a particular resource or action.
Source:
Introducing New APIs to Help Test Your Access Control Policies
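As a sketch of such a start-up check, assuming boto3 and your own role/bucket/prefix (the names below are placeholders, and the caller needs permission to call iam:SimulatePrincipalPolicy):

```python
# Sketch: fail fast at start-up by simulating s3:PutObject for the
# instance's role. Role ARN, bucket, and prefix below are placeholders.

def all_allowed(resp):
    # SimulatePrincipalPolicy returns one entry per action/resource pair;
    # EvalDecision is "allowed", "explicitDeny", or "implicitDeny".
    return all(r["EvalDecision"] == "allowed"
               for r in resp["EvaluationResults"])

def can_put_object(role_arn, bucket, prefix):
    import boto3  # imported here so the helper above works without the SDK
    iam = boto3.client("iam")
    resp = iam.simulate_principal_policy(
        PolicySourceArn=role_arn,
        ActionNames=["s3:PutObject"],
        ResourceArns=[f"arn:aws:s3:::{bucket}/{prefix}*"],
    )
    return all_allowed(resp)

# e.g. at start-up:
# if not can_put_object("arn:aws:iam::123456789012:role/my-app-role",
#                       "my-bucket", "reports/"):
#     raise SystemExit("missing s3:PutObject on the target bucket")
```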
Have you tried the AWS IAM Policy Simulator? You can use it interactively, but it also has some API capabilities that you may be able to use to accomplish what you want.
http://docs.aws.amazon.com/IAM/latest/APIReference/API_SimulateCustomPolicy.html
Option 1: Upload an actual file when your app starts to see if it succeeds.
Option 2: Use dry runs.
Many AWS commands allow for "dry runs". This would let you execute your command at the start without actually doing anything.
The AWS CLI for S3 appears to support dry runs using the --dryrun option, though note that it only displays the operations that would be performed without actually running them, so it may not exercise the write permissions you care about:
http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
The Amazon EC2 docs for "Dry Run" say the following:
Checks whether you have the required permissions for the action, without actually making the request. If you have the required permissions, the request returns DryRunOperation; otherwise, it returns UnauthorizedOperation.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/CommonParameters.html
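As a hedged sketch of Option 2 in Python (note that DryRun is an EC2 API feature; S3's data APIs have no equivalent, which is why Option 1 or the policy simulator is needed for PutObject):

```python
# Sketch: use EC2's DryRun parameter to test a permission without
# side effects.

def dry_run_allowed(error_code):
    # EC2 signals the dry-run result via an error code: DryRunOperation
    # means the caller has permission; UnauthorizedOperation means not.
    return error_code == "DryRunOperation"

def can_describe_instances():
    import boto3  # imported here so the helper above works without the SDK
    from botocore.exceptions import ClientError
    ec2 = boto3.client("ec2")
    try:
        ec2.describe_instances(DryRun=True)
    except ClientError as e:
        return dry_run_allowed(e.response["Error"]["Code"])
    return True
```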
Related
I would like to run a batch job on-prem and access AWS resources in our account.
I think the recommendation is to create an IAM user that serves as a machine user. Since I don't have a way to assign a role to the on-prem machine, or to federate with an AWS identity provider, I'll create an access key and install it on the on-prem machine. What's the best way to link my machine user to a policy?
I can create an IAM policy which allows the required actions (reading AWS SSM Parameters).
But, how should I link the machine user to the policy? I'm setting up these users/policies with Pulumi. Some options I'm aware of:
I can create a role, but then I think the machine user would have to assume the role. (My understanding is that roles do not have permanent "membership"; rather, users have the ability to assume roles. Alternatively, AWS infrastructure can be set up with a role, e.g. an EC2 instance or an EKS cluster can act as a role. In the future I do plan to move this job's execution to AWS infrastructure, but for now that's not an option.) Is assuming a role easy, for example an aws sts CLI call that I could put in my batch job's startup script before calling the main binary?
Or I could just attach the policy directly to the machine user. Generally that's not recommended from what I've read: you should have a layer of indirection between users and policies, so that when users change what they're doing you don't have to rewire everything. But in this case maybe that's fine.
Or finally, I could create a user group, attach the policy to the group, and add the machine user as a member of the group. Is that layer of indirection useful and an appropriate use of groups, especially if I'm already managing these policies with IaC? Most documentation recommends roles for the user-to-policy indirection, so I'm hesitant to use groups that way. However, groups do seem to be the expected approach for human users (I'd be glad for feedback on that too).
"Is it better to use AWS IAM User Group, or IAM Role for users to assume?" says a group would help manage permissions for multiple users (but so does Pulumi, and I only have 1 or 2 machine users), and that a role would help separate access rights from long-lived credentials; but it seems like rotating the machine user's access key would have that benefit too, without the extra assume-role step.
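For context, my understanding is that assuming the role from the startup script would be a single STS call, roughly like this sketch (the role ARN and session name are placeholders):

```python
# Sketch: assume a role and turn the temporary credentials into the
# standard environment variables the AWS CLI/SDKs read.

def creds_to_env(credentials):
    # Map an STS Credentials dict onto the standard AWS env var names.
    return {
        "AWS_ACCESS_KEY_ID": credentials["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": credentials["SecretAccessKey"],
        "AWS_SESSION_TOKEN": credentials["SessionToken"],
    }

def assume_batch_role(role_arn):
    import boto3  # needs valid base credentials at runtime
    sts = boto3.client("sts")
    resp = sts.assume_role(RoleArn=role_arn, RoleSessionName="batch-job")
    return creds_to_env(resp["Credentials"])
```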
This is my first post here. I am working on an AWS CodePipeline that creates new AWS accounts and assigns users through AWS SSO, which has permission sets with specific managed IAM policies and an inline policy as a permissions boundary for the user groups. I would like a test pipeline that assumes the specific user role in the vended AWS account and tests whether the user(s) are able to perform certain actions, such as enabling internet access or creating a policy, and then proceeds with further pipeline steps based on the results.
Example: the pipeline runs in the POC environment and creates an account, then has to run tests against the SSO user / local IAM user to check whether the user can create an internet gateway, etc. Usually this can be done with the IAM policy simulator CLI, which reports whether the user action is allowed or not. The pipeline should promote the source to the "master" branch for the production environment if the tests pass, or discard it if they fail.
I have tried a few tools such as TaskCat and others; most of them do not perform such functional tests, only checking for the existence of resources.
Any suggestions for tools that can allow me to perform such functional test as part of the pipeline would be appreciated.
Thanks in advance.
I managed to use "awspec" to achieve the functional tests for the AWS resources. The one I was specifically looking for, the IAM policy simulator check, is covered by the "awspec" resource below.
describe iam_role('my-iam-role') do
it { should be_allowed_action('ec2:DescribeInstances') }
it { should be_allowed_action('s3:Put*').resource_arn('arn:aws:s3:::my-bucket-name/*') }
end
We started initially by defining roles with the admin access policy attached. Now we want those roles to have policies with only the minimum permissions needed, without creating any issues for the users of these roles.
Looking at the "Access Advisor" tab on each role in the AWS IAM console gives a good amount of information about exactly which AWS services are being used, but action-level detail is available only for EC2, IAM, Lambda, and S3 management actions. For the rest of the AWS services, it doesn't show which specific permissions each service requires.
I also don't have access to the AWS Organizations master account, which is required for this tutorial: Viewing last accessed information for Organizations.
So is there a way to get permission-level info for services other than EC2, IAM, Lambda, and S3 management actions?
Thanks.
So is there a way to get permission-level info for services other than EC2, IAM, Lambda, and S3 management actions?
Sadly, there is no such way provided by AWS, so it's basically a try-and-see approach. You can try some third-party tools that may be helpful, such as zero-iam, but ultimately you will need a custom solution to match your requirements.
There is also IAM Access Analyzer, which is different from Access Advisor. But it's also limited to certain services.
I have an AWS Lambda function in production. Triggering it can lead to monetary transactions. I want to block testing of this Lambda through the AWS console, so that users with console access cannot accidentally trigger it for testing purposes (which they can do on the corresponding staging Lambda). Is that somehow possible?
The first solution I would recommend is to not mix production and other workloads in the same AWS account. Combine that with not giving your developers and users credentials for the production account.
Assuming that you don't want to do that, you could apply a resource policy on the Lambda function that denies all regular IAM users permission to invoke the Lambda function. Be sure that your policy does not deny the 'real' source in your production system (e.g. API Gateway or SQS or S3). You should also prevent your users from modifying the resource policy on the Lambda function.
Alternatively, if all of your IAM users are managed under IAM groups, you could apply an additional group policy that denies all actions on the Lambda function's ARN. Again, ensure that they cannot modify the group policy to remove this control.
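As a sketch, the deny statement in such a group policy might look like the following (the account ID, region, and function name are placeholders):

```python
# Sketch: an IAM group policy that denies invoking the production Lambda.
# The function ARN is a placeholder.
DENY_INVOKE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "BlockConsoleTestInvocations",
            "Effect": "Deny",
            "Action": "lambda:InvokeFunction",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:prod-payments",
        }
    ],
}
```

Because an explicit Deny always wins over any Allow, this blocks console test invocations even for users whose other policies grant broad Lambda access.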
Consider I want to run some AWS CLI command, e.g. aws s3 sync dist/ "s3://${DEPLOY_BUCKET_NAME}" --delete.
How do I know what specific permissions (actions) do I need to grant in order for this command to work correctly? I want to adhere to the least privileged principle.
Just to clarify my question: I know where to find the list of all actions for S3 or any other service, and I know how to write a policy. The question is how to determine which specific actions I need to grant for a given CLI command, because each command uses different actions, and the command's arguments also play a role here.
Almost every command in the AWS CLI maps one-to-one to an IAM action. However, the aws s3 commands such as sync are higher-level functions that call multiple API actions.
For a sync from a local directory up to S3 with --delete, I would expect you need:
s3:ListBucket (on the bucket, so sync can compare what's already there)
s3:PutObject (on the objects under the destination)
s3:DeleteObject (only because of --delete)
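Expressed as a policy document, a least-privilege grant for that sync command might look like this sketch (the bucket name is a placeholder; note that ListBucket applies to the bucket ARN itself, while the object actions apply to objects under it):

```python
# Sketch: least-privilege policy for `aws s3 sync dist/ s3://BUCKET --delete`.
# "my-deploy-bucket" is a placeholder bucket name.
SYNC_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Listing is a bucket-level action: resource is the bucket ARN.
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-deploy-bucket",
        },
        {
            # Writes and deletes are object-level: resource is objects under it.
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-deploy-bucket/*",
        },
    ],
}
```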
If that still doesn't help, then you can use AWS CloudTrail to look at the underlying API calls that the AWS CLI made to your account. The CloudTrail records will show each API call and whether it succeeded or failed.
There's no definitive mapping to API actions from high-level awscli commands (like aws s3 sync) or from AWS console actions that I'm aware of.
One thing that you might consider is to enable CloudTrail, then temporarily enable all actions on all resources in an IAM policy, then run a test of aws s3 sync, and then review CloudTrail for what API actions were invoked on which resources. Not ideal, but it might give you something to start with.
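A sketch of that review using the CloudTrail LookupEvents API (credentials assumed; note that LookupEvents covers management events only, so S3 object-level calls like PutObject appear only if you've enabled data-event logging on a trail and read the trail's log files):

```python
# Sketch: list which API actions were recently recorded by CloudTrail,
# to reverse-engineer the permissions a command actually used.

def summarize_actions(events):
    # Reduce CloudTrail event records to unique (service, action) pairs.
    return sorted({(e["EventSource"], e["EventName"]) for e in events})

def recent_actions(event_source):
    import boto3  # needs credentials; events can lag by several minutes
    ct = boto3.client("cloudtrail")
    resp = ct.lookup_events(
        LookupAttributes=[{"AttributeKey": "EventSource",
                           "AttributeValue": event_source}],
        MaxResults=50,
    )
    return summarize_actions(resp["Events"])

# e.g. recent_actions("s3.amazonaws.com")
```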
You can use Athena to query CloudTrail Logs. It might seem daunting to set up at first, but it's actually quite easy. Then you can issue simple SQL queries such as:
SELECT eventtime, eventname, resources FROM trail20191021 ORDER BY eventtime DESC;
If you want to know for S3 specifically, that is documented in the S3 Developer Guide:
Specifying Permissions in a Policy
Specifying Conditions in a Policy
Specifying Resources in a Policy
In general, you can get what you need for any AWS resource from Actions, Resources, and Condition Keys for AWS Services
You may also find the AWS Policy Generator useful.