This is my first post here. I am working on an AWS CodePipeline that creates new AWS accounts and assigns users through AWS SSO, using permission sets with specific managed IAM policies and an inline policy as a permission boundary for the user groups. I would like a test pipeline that takes the specific user role from the vended AWS account and tests whether the user(s) are able to perform certain actions, such as enabling internet access or creating a policy, and then proceeds with further pipeline steps based on the results.
Example: the pipeline runs in a POC environment and creates an account; it then has to run a test against the SSO user / local IAM user to check whether the user can create an internet gateway, etc. Normally this could be done with the IAM policy simulator CLI, which reports whether the user's action is allowed or not. Depending on the test results, my pipeline flow should either proceed, promoting the source to the "master" branch for the production environment, or discard it if the tests fail.
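For illustration, the gating step I have in mind would be something like this boto3 sketch, which fails the stage when an expected permission is missing (the role ARN and action names are hypothetical):

import sys

import boto3

# Hypothetical role vended into the new account for the user group.
ROLE_ARN = "arn:aws:iam::111122223333:role/poc-user-role"

iam = boto3.client("iam")

# Ask the IAM policy simulator whether the role may perform these actions.
response = iam.simulate_principal_policy(
    PolicySourceArn=ROLE_ARN,
    ActionNames=["ec2:CreateInternetGateway", "iam:CreatePolicy"],
)

denied = [
    r["EvalActionName"]
    for r in response["EvaluationResults"]
    if r["EvalDecision"] != "allowed"
]

# A non-zero exit code fails this pipeline stage and blocks promotion to master.
if denied:
    print("Denied actions:", denied)
    sys.exit(1)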
I have tried a few tools such as TaskCat, but most of them do not perform this kind of functional test; they only check for the existence of a resource.
Any suggestions for tools that can allow me to perform such functional test as part of the pipeline would be appreciated.
Thanks in advance.
I managed to achieve the functional test for the AWS resources with "awspec". The feature I was specifically looking for, the IAM policy simulator, is covered by the awspec resource below.
describe iam_role('my-iam-role') do
  it { should be_allowed_action('ec2:DescribeInstances') }
  it { should be_allowed_action('s3:Put*').resource_arn('arn:aws:s3:::my-bucket-name/*') }
end
I would like to run a batch job on-prem and access AWS resources in our account.
I think the recommendation is to create an IAM user, which will be a machine user. Since I don't have a way to assign a role to the on-prem machine, or federate with AWS identity, I'll create an access key and install it on the on-prem machine. What's the best way to link my machine user to a policy?
I can create an IAM policy which allows the required actions (reading AWS SSM Parameters).
But, how should I link the machine user to the policy? I'm setting up these users/policies with Pulumi. Some options I'm aware of:
I can create a role, but then I think the machine user would have to assume the role. (My understanding is that roles do not have standing "membership"; rather, users have the ability to assume them. Alternatively, AWS infrastructure can be set up with a role, e.g. an EC2 instance or an EKS cluster can act as a role. In the future I do plan to move this job's execution to AWS infrastructure, but for now that's not an option.) Is assuming a role easy, for example an aws sts CLI call that I could put in my batch job's startup script before calling the main binary? (A sketch of what I'm imagining is at the end of this question.)
Or I could just attach the policy directly to the machine user. Generally that's not recommended from what I've read: you should have a layer between users and policies so when users change what they're doing you have indirection. But in this case maybe that's fine.
Or finally I could create a user group, attach the policy to the group, and add the machine user as a member of the group. Is that layer of indirection useful / an appropriate use of groups, especially if I'm already managing these policies with IaC? Most documentation recommends roles for the user-to-policy indirection, so I'm hesitant to use groups that way. However, that seems to be the expected approach for human users (glad for feedback on that too).
"Is it better to use AWS IAM User Group, or IAM Role for users to assume?" says a group would help manage permissions for multiple users (but so does Pulumi and I only have 1 or 2 machine users); and a role would help separate access rights from long-lived credentials but it seems like rotating the machine user's access key would have that benefit too without the extra assume-role step.
Our AWS accounts are set up so that users log in to one account and then 'assume role' into different accounts to access various services.
We have TravisCI set up so that it runs an integration test against a test account and then uploads a build artifact into S3.
Currently this is done using a single set of IAM user credentials, with the user in the test account. I would like to move the user into a different account, have TravisCI assume the correct role in the test account to run the tests, and then assume a different role in another account to upload the build artifact. I do not want to add users to the accounts themselves.
I cannot see this functionality built into the S3 deployment, and I have not had any luck finding anyone else trying to do this.
I think this may be possible by dynamically populating environment variables during a setup phase and then passing them on to later stages, but I cannot work out whether this is possible. A sketch of the idea follows.
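Something like the following boto3 sketch is what I have in mind for the setup phase, emitting shell exports that a later stage could eval (the role ARN is hypothetical):

import boto3

# Hypothetical role in the test account; the TravisCI IAM user must be
# trusted by the role's trust policy.
creds = boto3.client("sts").assume_role(
    RoleArn="arn:aws:iam::123456789012:role/travis-test-role",
    RoleSessionName="travis-build",
)["Credentials"]

# Print export lines for a later build stage to eval.
print(f"export AWS_ACCESS_KEY_ID={creds['AccessKeyId']}")
print(f"export AWS_SECRET_ACCESS_KEY={creds['SecretAccessKey']}")
print(f"export AWS_SESSION_TOKEN={creds['SessionToken']}")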
Does anyone have assume role working with TravisCI?
So we have this AWS account with some permissions, and it was working fine at first; we were able to deploy to AWS using the Serverless Framework. But then the client decided to set up an organization, since they have other AWS accounts as well, and to consolidate billing under one account they added the account they gave us to the organization. Now the problem is that when we deployed using Serverless again, Serverless could no longer see the deployment bucket and failed with an access denied error. But when the account was removed from the organization, Serverless was able to locate the bucket. Are there some additional permissions, or changes to the permissions, that need to be made when an account is linked to an organization? Can someone explain this to me, because I can't seem to find any example of my scenario in a Google search. I am new to AWS and this is the first time I have encountered Organizations in AWS.
The only implication for permissions from joining an organization (or an organizational unit, OU) would be via a Service Control Policy (SCP). Verify that the SCPs attached to the organization do not block the actions you are attempting to execute.
We would love more information if possible, but I would start by looking in the following places in your consolidated account:
Trusted access for AWS services - https://console.aws.amazon.com/organizations/home?#/organization/settings
https://console.aws.amazon.com/organizations/home?#/policies
See if anything was changed there, whether someone added a policy, or whether AWS Resource Access Manager is disabled.
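If you can get credentials for the organization's management account, you can also inspect the SCPs attached to the member account programmatically. A rough boto3 sketch (the member-account ID is hypothetical):

import boto3

# Requires credentials from the organization's management account.
org = boto3.client("organizations")

# Hypothetical member-account ID.
policies = org.list_policies_for_target(
    TargetId="111122223333",
    Filter="SERVICE_CONTROL_POLICY",
)["Policies"]

for summary in policies:
    # Print each SCP document; look for statements denying the S3 actions
    # that Serverless needs (s3:ListBucket, s3:GetObject, etc.).
    policy = org.describe_policy(PolicyId=summary["Id"])["Policy"]
    print(summary["Name"])
    print(policy["Content"])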
For small CloudFormation and CodePipeline templates we can "try and test" our way to a least-privilege IAM policy for the required roles.
This usually involves:
Starting with a minimal policy
Creating the stack
Watching it fail with "stack doesn't have rights to someService:someAction"
Adding the service action to the policy
Updating the stack and trying again
This approach is too time-consuming for larger CloudFormation templates.
How are you developing Least Privilege IAM Policies?
Ideas:
Allow "*" and then scrape cloudtrail for events and build map for listed events to their equivalent roles - then reduce the roles to only those listed in the cloudtrail logs.
If you can isolate actions down to a user name, this helps:
https://github.com/byu-oit-appdev/aws-cloudwatch-parse
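A rough boto3 sketch of the CloudTrail idea above, collecting the service:action pairs one principal actually used (the username is hypothetical, and note that CloudTrail event names do not always map one-to-one onto IAM action names):

import boto3

cloudtrail = boto3.client("cloudtrail")
actions = set()

# Page through recent events recorded for one principal.
paginator = cloudtrail.get_paginator("lookup_events")
pages = paginator.paginate(
    LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "my-app-user"}]
)
for page in pages:
    for event in page["Events"]:
        # "ec2.amazonaws.com" -> "ec2", then join with the event name.
        service = event["EventSource"].split(".")[0]
        actions.add(f"{service}:{event['EventName']}")

# Candidate Action list for a least-privilege policy.
print(sorted(actions))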
Access Advisor
Grant least privilege is a well-documented IAM Best Practice. The documentation recommends incrementally adding specific permissions, using the Access Advisor tab to determine which services are actually being used by an application (presumably using a broader set of permissions during the testing phase):
It's more secure to start with a minimum set of permissions and grant additional permissions as necessary, rather than starting with permissions that are too lenient and then trying to tighten them later.
Defining the right set of permissions requires some research to determine what is required for the specific task, what actions a particular service supports, and what permissions are required in order to perform those actions.
One feature that can help with this is the Access Advisor tab, which is available on the IAM console Summary page whenever you inspect a user, group, role, or policy. This tab includes information about which services are actually used by a user, group, role, or by anyone using a policy. You can use this information to identify unnecessary permissions so that you can refine your IAM policies to better adhere to the principle of least privilege. For more information, see Service Last Accessed Data.
This approach is similar to scraping CloudTrail for API events generated by a specific IAM role/application, though with CloudTrail it can be harder to filter the entire event stream down to the relevant events, whereas the Access Advisor list is already filtered for you.
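Access Advisor data can also be pulled programmatically through the service-last-accessed APIs. A minimal boto3 sketch (the role ARN is hypothetical):

import time

import boto3

iam = boto3.client("iam")

# Start a report for the role (users, groups, and policies work too).
job_id = iam.generate_service_last_accessed_details(
    Arn="arn:aws:iam::123456789012:role/my-app-role"
)["JobId"]

# Poll until the report is ready.
while True:
    report = iam.get_service_last_accessed_details(JobId=job_id)
    if report["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(1)

# Services that were never authenticated against are candidates for
# removal from the policy.
for service in report["ServicesLastAccessed"]:
    print(service["ServiceNamespace"], service.get("LastAuthenticated", "never used"))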
My code is running on an EC2 machine. I use some AWS services inside the code, so I'd like to fail on start-up if those services are unavailable.
For example, I need to be able to write a file to an S3 bucket. This happens after my code's been running for several minutes, so it's painful to discover that the IAM role wasn't configured correctly only after a 5 minute delay.
Is there a way to figure out if I have PutObject permission on a specific S3 bucket+prefix? I don't want to write dummy data to figure it out.
You can programmatically test permissions with the SimulatePrincipalPolicy API:
Simulate how a set of IAM policies attached to an IAM entity works with a list of API actions and AWS resources to determine the policies' effective permissions.
Check out the blog post below that introduces the API. From that post:
AWS Identity and Access Management (IAM) has added two new APIs that enable you to automate validation and auditing of permissions for your IAM users, groups, and roles. Using these two APIs, you can call the IAM policy simulator using the AWS CLI or any of the AWS SDKs. Use the new iam:SimulatePrincipalPolicy API to programmatically test your existing IAM policies, which allows you to verify that your policies have the intended effect and to identify which specific statement in a policy grants or denies access to a particular resource or action.
Source:
Introducing New APIs to Help Test Your Access Control Policies
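A minimal boto3 sketch of the call for the S3 case in the question (the role and bucket ARNs are hypothetical, and the caller needs iam:SimulatePrincipalPolicy permission):

import boto3

iam = boto3.client("iam")

# Note: this must be the IAM role's own ARN, not the assumed-role session
# ARN that sts get-caller-identity returns on the instance.
response = iam.simulate_principal_policy(
    PolicySourceArn="arn:aws:iam::123456789012:role/my-instance-role",
    ActionNames=["s3:PutObject"],
    ResourceArns=["arn:aws:s3:::my-bucket/my-prefix/*"],
)

for result in response["EvaluationResults"]:
    # EvalDecision is "allowed", "explicitDeny", or "implicitDeny".
    print(result["EvalActionName"], result["EvalDecision"])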
Have you tried the AWS IAM Policy Simulator? You can use it interactively, but it also has some API capabilities that you may be able to use to accomplish what you want.
http://docs.aws.amazon.com/IAM/latest/APIReference/API_SimulateCustomPolicy.html
Option 1: Upload an actual file when your app starts to see if it succeeds.
Option 2: Use dry runs.
Many AWS commands allow for "dry runs". This would let you execute your command at the start without actually doing anything.
The AWS CLI for S3 appears to support dry runs using the --dryrun option:
http://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
The Amazon EC2 docs for "Dry Run" says the following:
Checks whether you have the required permissions for the action, without actually making the request. If you have the required permissions, the request returns DryRunOperation; otherwise, it returns UnauthorizedOperation.
Reference: http://docs.aws.amazon.com/AWSEC2/latest/APIReference/CommonParameters.html
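In boto3 the dry-run pattern looks like the sketch below; the "success" signal is a DryRunOperation error. Note that S3 itself has no DryRun parameter (the CLI's --dryrun flag is a client-side simulation), so for the S3 case the policy simulator is the better fit.

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

try:
    # DryRun=True checks permissions without making the actual request.
    ec2.describe_instances(DryRun=True)
except ClientError as error:
    code = error.response["Error"]["Code"]
    if code == "DryRunOperation":
        print("Permission check passed")
    elif code == "UnauthorizedOperation":
        print("Permission check failed")
    else:
        raise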