AWS CodeCommit cross-account repository access not working - amazon-web-services

I have an AWS IAM user in my account (let's call it originAccount) that has access to another account (targetAccount) and I'm trying to clone a CodeCommit repository that exists in targetAccount using my originAccount credentials on my Windows machine.
I can log in and switch roles to targetAccount just fine; that's how I created the repository in the first place. I have full access to targetAccount except for billing. I do have MFA enabled on my IAM user. I have tried turning this off temporarily, but it didn't help. However, with MFA turned off, I can run aws s3 ls against targetAccount successfully without error.
Neither SSH nor HTTPS work. I can clone it with static credentials, as a test, but that isn't acceptable long term. I'm amazed at how difficult this stuff is on AWS...
My user in originAccount has this policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "sts:AssumeRole"
            ],
            "Resource": [
                "arn:aws:iam::000000000000:role/Administrator"
            ]
        }
    ]
}
The Administrator role has access to everything. targetAccount has this trust relationship:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111111111111:user/MyUser"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "Bool": {
                    "aws:MultiFactorAuthPresent": "true"
                }
            }
        }
    ]
}
I tried removing the MFA bit; didn't help. I even disabled MFA on my user account and that didn't help either.
My .aws\credentials file contains these lines:
[default]
aws_access_key_id = [originAccountKey]
aws_secret_access_key = [originAccountSecret]
[targetAccount]
aws_access_key_id = [originAccountKey]
aws_secret_access_key = [originAccountSecret]
I am using the environment variable to set the profile to use e.g.:
set AWS_DEFAULT_PROFILE=targetAccount
My .gitconfig contains:
[credential "https://git-codecommit.us-east-1.amazonaws.com"]
helper = !'C:\\path\\to\\git-credential-AWSSV4.exe' --profile='targetAccount'
UseHttpPath = true
Originally that was using the default profile but that didn't work either.
Questions:
Is this possible? What am I doing wrong?
Can both SSH and HTTPS work for cross-account access? The docs are confusing, but they seem to suggest that only HTTPS with the credential helper will work (allegedly).
Will this work with multi-factor authentication? If not, can that be turned off only for CodeCommit?
Unfortunately none of the other questions I found had answers that worked for me...
I uninstalled and reinstalled Git for Windows just to be sure its credential manager wasn't installed (I couldn't remember), but it still doesn't work; it says repository '...' not found. I can clone repositories in originAccount over HTTPS, though.

The credential-helper is the only way to authenticate with CodeCommit using temporary session credentials like the ones acquired from AssumeRole. SSH will not work as SSH authentication is done by verifying an uploaded public key, which is not temporary.
I find that the following pattern is an easy one to follow for authentication using AssumeRole. In your .aws\config file (the [profile ...] section syntax belongs in the config file, not the credentials file):
[profile target_account]
role_arn = arn:aws:iam::000000000000:role/Administrator
mfa_serial = arn:aws:iam::111111111111:mfa/MFA_DEVICE
source_profile = origin_account
[profile origin_account]
aws_access_key_id = [originAccountKey]
aws_secret_access_key = [originAccountSecret]
This will allow the AWS client tools to get temporary session credentials for target_account by calling AssumeRole from the origin_account, effectively assuming the Administrator role in target_account.
Your .gitconfig should specify the target_account profile.
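For example, mirroring the helper configuration from your question but pointing at the new profile (the helper path is whatever it is on your machine), a minimal sketch:
[credential "https://git-codecommit.us-east-1.amazonaws.com"]
    helper = !'C:\\path\\to\\git-credential-AWSSV4.exe' --profile='target_account'
    UseHttpPath = true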
If you are using msysgit, you should try upgrading to Git for Windows 2.x. The credential-helper will send : as the username and an AWS V4 signature as the password. Session keys are usually pretty long, and in msysgit, curl will truncate the username to 256 characters, which will not include the complete session key.
I hope this helps!

Related

Grant S3 access to EC2 instance (simplest case)

I tried the simplest case, following the AWS documentation. I created a role, assigned it to an instance, and rebooted the instance. To test access interactively, I logged on to the Windows instance and ran aws s3api list-objects --bucket testbucket. I get the error An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied.
The next test was to create a .aws/credentials file and add a profile to assume the role. I modified the role (assigned to the instance) and added permission for any user in the account to assume it. When I run the same command as aws s3api list-objects --bucket testbucket --profile assume_role, the objects in the bucket are listed.
Here is my test role's trust relationship:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "ec2.amazonaws.com",
                    "ssm.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        },
        {
            "Sid": "UserCanAssumeRole",
            "Effect": "Allow",
            "Principal": {
                "AWS": "111111111111"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
The role has only one permissions policy attached, AmazonS3FullAccess.
When I switch roles in the AWS console, I can see the contents of the S3 bucket (and no other action is allowed in the console).
My assumption is that the EC2 instance does not assume the role.
How can I pinpoint where the problem is?
The problem was with the Windows proxy.
I checked the proxy environment variables; none were set. When I checked Control Panel -> Internet options, I saw that the proxy text box showed a proxy value, but the "Use proxy" checkbox was not checked. Next to it was the text "Some of your settings are managed by your organization." The proxy bypass list had 169.254.169.254 in it.
I ran the command in debug mode and saw that the CLI connects to the proxy, which cannot access 169.254.169.254, so no credentials are set. When I explicitly set the environment variable set NO_PROXY=169.254.169.254, everything started to work.
Why the AWS CLI uses the proxy from the Windows system settings I do not understand. Worst of all, it uses the proxy but does not honor the proxy bypass list. Lesson learned: run the command in debug mode and verify the output.
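As a sketch of the diagnosis and fix on Windows (--debug makes the CLI print every connection and credential lookup it attempts; the bucket name is the one from the question above):
REM watch the debug output for the proxy connection and the failed 169.254.169.254 lookup
aws s3api list-objects --bucket testbucket --debug

REM tell the CLI to bypass the proxy for the instance metadata endpoint, then retry
set NO_PROXY=169.254.169.254
aws s3api list-objects --bucket testbucket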

aws cli does not ask for MFA code on the test user

I recently started working with AWS IAM.
My task is to ensure that, for a particular user, an MFA code is required for all commands triggered from the AWS CLI using temporary access credentials.
Here is what I did:
Using get-session-token, I created the temporary credentials and set them in a profile.
When I execute aws s3 ls --profile <profile_name>, the CLI does not ask for an MFA code.
Unfortunately, nothing helped, even though I went through many articles and answers on Stack Overflow.
Please find the policy and the profile configuration that were set and used.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {
                    "aws:MultiFactorAuthPresent": "true"
                }
            }
        }
    ]
}
My .aws/credentials file:
[mfa_user]
aws_access_key_id = <AccessKeyId>
aws_secret_access_key = <SecretAccessKey>
aws_session_token = IQoJb3JpZ2luX2VjEKn//////////
mfa_serial = arn:aws:iam::9xxxxxxxxxxxx:mfa/some-user
Is there something that I am missing?
I followed various online articles, including https://aws.amazon.com/premiumsupport/knowledge-center/authenticate-mfa-cli/ and the question "enforce MFA for AWS console login, but not for API calls", and nothing helped.
You will not be prompted for the MFA value.
Instead, call get-session-token and supply the MFA value. You will then be given back a set of temporary credentials.
Those credentials can be used for any call that requires MFA authorization.
For an example, see: Authenticate access using MFA through the AWS CLI
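A sketch of that flow, reusing the MFA serial from your credentials file (the token code is a placeholder):
aws sts get-session-token \
    --serial-number arn:aws:iam::9xxxxxxxxxxxx:mfa/some-user \
    --token-code 123456
Paste the returned AccessKeyId, SecretAccessKey, and SessionToken into the profile, as in your credentials file above. Because the session was established with MFA, calls made under it satisfy the aws:MultiFactorAuthPresent condition.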

AWS STS Temporary Credentials S3 Access Denied PutObject

I am following the How to Use AWS IAM with STS for access to AWS resources - 2nd Watch blog post. My understanding is that for the S3 bucket policy's Principal (the entity requesting the temporary credentials), one approach would be to hard-code the ID of that user, but the post attempts a more elegant approach of evaluating the STS delegated user (when it works).
arn:aws:iam::409812999999999999:policy/fts-assume-role
IAM user policy
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "*"
        }
    ]
}
arn:aws:s3:::finding-taylor-swift
s3 bucket policy
{
    "Version": "2012-10-17",
    "Id": "Policy1581282599999999999",
    "Statement": [
        {
            "Sid": "Stmt158128999999999999",
            "Effect": "Allow",
            "Principal": {
                "Service": "sts.amazonaws.com"
            },
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::finding-taylor-swift/*"
        }
    ]
}
conor@xyz:~$ aws configure --profile finding-taylor-swift
AWS Access Key ID [****************6QNY]:
AWS Secret Access Key [****************+8kF]:
Default region name [eu-west-2]:
Default output format [text]: json
conor@xyz:~$ aws sts get-session-token --profile finding-taylor-swift
{
    "Credentials": {
        "SecretAccessKey": "<some text>",
        "SessionToken": "<some text>",
        "Expiration": "2020-02-11T03:31:50Z",
        "AccessKeyId": "<some text>"
    }
}
conor@xyz:~$ export AWS_SECRET_ACCESS_KEY=<some text>
conor@xyz:~$ export AWS_SESSION_TOKEN=<some text>
conor@xyz:~$ export AWS_ACCESS_KEY_ID=<some text>
conor@xyz:~$ aws s3 cp dreamstime_xxl_concert_profile_w500_q8.jpg s3://finding-taylor-swift
upload failed: ./dreamstime_xxl_concert_profile_w500_q8.jpg to s3://finding-taylor-swift/dreamstime_xxl_concert_profile_w500_q8.jpg An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
conor@xyz:~$
The AWS CLI has been set up as described in Using Temporary Security Credentials with the AWS CLI:
"When you run AWS CLI commands, the AWS CLI looks for credentials in a specific order—first in environment variables and then in the configuration file. Therefore, after you've put the temporary credentials into environment variables, the AWS CLI uses those credentials by default. (If you specify a profile parameter in the command, the AWS CLI skips the environment variables. Instead, the AWS CLI looks in the configuration file, which lets you override the credentials in the environment variables if you need to.)
The following example shows how you might set the environment variables for temporary security credentials and then call an AWS CLI command. Because no profile parameter is included in the AWS CLI command, the AWS CLI looks for credentials first in environment variables and therefore uses the temporary credentials."
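In other words, something like the following (a sketch using the bucket from the question):
# no --profile: the CLI picks up the exported temporary credentials
aws s3 cp dreamstime_xxl_concert_profile_w500_q8.jpg s3://finding-taylor-swift

# with --profile: the environment variables are skipped and the long-term
# keys stored for that profile are used instead
aws s3 cp dreamstime_xxl_concert_profile_w500_q8.jpg s3://finding-taylor-swift --profile finding-taylor-swift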
There is no need to use a Bucket Policy for your scenario. A bucket policy is applied to an Amazon S3 bucket and is typically used to grant access that is specific to the bucket (eg public access).
Using an IAM Role
If you wish to provide bucket access to a specific IAM User, IAM Group or IAM Role, then the permissions should be attached to the IAM entity rather than the bucket.
(For get-session-token, see the end of my answer.)
Setup
Let's start by creating an IAM Role similar to what you had. I choose Create role, then for the trusted entity I select Another AWS account (since it will be assumed by an IAM User rather than a service).
I then create an Inline policy on the IAM Role to permit access to the bucket:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::my-bucket"
        }
    ]
}
(It's normally not a good idea to assign s3:* permissions, since this lets the user delete content and even delete the bucket. Try to restrict it to the minimum permissions that are actually required.)
The Trust Relationship on the IAM Role determines who is allowed to assume the role. It could be one person, or anyone in the account (as long as they have been granted permission to call AssumeRole). In my case, I'll assign it to the whole account:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::MY-ACCOUNT-ID:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Now there are a few different ways to assume the role...
Simple method: IAM Role in credentials file
The AWS CLI has the ability to specify an IAM Role in the credentials file, and it will automatically assume the role for you.
See: Using an IAM Role in the AWS CLI
To use this, I add a section to my .aws/config file:
[profile foo]
role_arn = arn:aws:iam::MY-ACCOUNT-ID:role/MY-ROLE-NAME
source_profile = default
I could then simply use it with:
aws s3 ls s3://my-bucket --profile foo
This successfully lets me access that specific bucket.
Complex method: Assume the role myself
Rather than letting the AWS CLI do all the work, I can also assume the role myself:
aws sts assume-role --role-arn arn:aws:iam::MY-ACCOUNT-ID:role/MY-ROLE-NAME --role-session-name foo
{
    "Credentials": {
        "AccessKeyId": "ASIA...",
        "SecretAccessKey": "...",
        "SessionToken": "...",
        "Expiration": "2020-02-11T00:43:30+00:00"
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "AROA...:foo",
        "Arn": "arn:aws:sts::MY-ACCOUNT-ID:assumed-role/MY-ROLE-NAME/foo"
    }
}
I then appended this information to the .aws/credentials file:
[foo2]
aws_access_key_id = ASIA...
aws_secret_access_key = ...
aws_session_token = ...
Yes, you could add this to the credentials file by using aws configure --profile foo2, but it does not prompt for the session token. Therefore, you need to edit the credentials file to add that information anyway.
I then used the profile:
aws s3 ls s3://my-bucket --profile foo2
It allowed me to successfully access and use the bucket.
Using GetSessionToken
The above examples use an IAM Role. This is typically used to grant cross-account access or to temporarily assume more-powerful credentials (eg an Admin performing sensitive operations).
Your Question references get-session-token. This provides temporary credentials based on a user's existing credentials and permissions. Thus, they cannot gain additional permissions as part of this API call.
This call is typically used either to supply an MFA token or to provide time-limited credentials for testing purposes. For example, I could give you credentials that effectively let you use my IAM User, but only for a limited time.
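As a sketch, deliberately short-lived credentials for that kind of testing can be requested with the optional duration flag:
aws sts get-session-token --duration-seconds 3600
The returned keys carry the same permissions as the calling IAM User (no more) and expire after the requested duration.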

Access denied when trying to do AWS s3 ls using AWS cli

I launched an EC2 instance and created a role with a full S3 access policy for the instance. I installed the AWS CLI on it and configured my user's access key. My user has admin access and the full S3 access policy too. I can see the buckets in the AWS console, but when I try to run aws s3 ls on the instance it returns An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied.
What else do I need to do to grant the role or my user the permissions to list and sync objects between S3 and the instance?
I ran into this issue as well.
I ran aws sts get-caller-identity and noticed that the ARN did not match what I was expecting. It turns out that if you have AWS credentials set in your bash_profile or bashrc, the AWS CLI will default to using those instead.
I changed the environment variables in bash_profile and bashrc to the proper keys and everything started working.
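A quick sketch of that check (bash):
# see which identity the CLI is actually using
aws sts get-caller-identity

# look for credentials exported by shell startup files; these take precedence
env | grep '^AWS_'
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN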
It turns out I had forgotten that I have to use MFA to get an access token before I can operate on S3. Thank you everyone for your responses.
There appears to be confusion about when to use IAM Users and IAM Roles.
When using an Amazon EC2 instance, the best method to grant permissions is:
Create an IAM Role and attach policies to grant the desired permissions
Associate the IAM Role with the Amazon EC2 instance. This can be done at launch time, or afterwards (Actions/Instance Settings/Attach IAM Role).
Any application running on the EC2 instance (including the AWS CLI) will now automatically receive credentials. Do not run aws configure.
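On the instance itself, a quick way to confirm the role credentials are being picked up (the role name here is hypothetical):
aws sts get-caller-identity
# expected: "Arn": "arn:aws:sts::123456789012:assumed-role/MyInstanceRole/i-0abc..."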
If you want to grant permissions to your own (non-EC2) computer, then:
Create an IAM User (or use your existing one) and attach policies to grant the desired permissions
On the computer, run aws configure and enter the Access Key and Secret Key associated with the IAM User. This will store the credentials in ~/.aws/credentials.
Any application running on this computer will then use the credentials from the local credentials file.
Create an IAM user with this permission:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::bucketName/*"
        }
    ]
}
Save Access key ID & Secret access key.
sudo apt install awscli
aws configure
AWS Access Key ID [None]: AKIAxxxxxxxxxxxZI4
AWS Secret Access Key [None]: 8Bxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx8
Default region name [None]: region (ex. us-east-2)
Default output format [None]: json
aws s3 ls s3://s3testingankit1/
This problem can occur not only with the CLI but also when calling the S3 API, for example.
The reason for this error can be a wrong configuration of the access permissions on the bucket.
For example, with the setup below you're giving full privileges to perform actions on the bucket's internal objects, BUT not specifying any action on the bucket itself:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
This will lead to the mentioned
... (AccessDenied) when calling the ListBuckets ...
error.
In order to fix this, you should allow the application to access the bucket (first statement) and to edit all objects inside the bucket (second statement):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<name-of-bucket>/*"
            ]
        }
    ]
}
There are shorter configurations that might solve the problem, but the one specified above also tries to keep the security permissions fine-grained.
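As a sketch of what each statement unlocks (the bucket name is hypothetical):
# listing needs s3:ListBucket on arn:aws:s3:::<name-of-bucket>
aws s3 ls s3://my-example-bucket

# uploading needs s3:PutObject on arn:aws:s3:::<name-of-bucket>/*
aws s3 cp local-file.txt s3://my-example-bucket/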
I ran into this yesterday running a script I had run successfully in September 2021.
TL;DR: add --profile your.profile.name to the end of the command
I have multiple profiles on the login I was using. I think something in the AWS environment changed, or perhaps I had previously done something that was able to bypass this. Back in September I set the profile with:
aws configure set region us-west-2 --profile my.profile.name
But yesterday, after the failure, I saw that aws sts get-caller-identity was returning a different identity. After some searching in the documentation I found the additional method for specifying the profile, and operations like:
aws s3 cp myfile s3://my-s3-bucket --profile my.profile.name
all worked.
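If you'd rather not append the flag to every command, the AWS_PROFILE environment variable selects the profile the same way:
export AWS_PROFILE=my.profile.name
aws s3 cp myfile s3://my-s3-bucket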
I have a Windows machine with CyberDuck, from which I was able to access a destination bucket, but when trying to access the bucket from a Linux machine with the aws command, I got "An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied".
I then executed the same command, aws s3 ls, from a command-line interface on the Windows machine and it worked just fine. It looks like there is some security restriction on the AWS side for the machine/IP.

Is it possible to restrict access to S3 data from EMR (zeppelin) by IAM roles?

I have set up an EMR cluster with Zeppelin installed on it. I configured Zeppelin with Active Directory authentication and I have associated those AD users with IAM roles. I was hoping to restrict access to specific resources on S3 after logging into Zeppelin using the AD credentials. However, it doesn't seem to be respecting the permissions the IAM roles have defined. The EMR role has S3 access, so I am wondering if that is overriding the permissions, or if that is actually the only role it cares about in this scenario.
Does anyone have any idea?
I'm actually about to try to tackle this problem this week. I will try to post updates as I have some. I know that this is an old post, but I've found so many helpful things on this site that I figured it might help someone else even if it doesn't help the original poster.
The question was if anyone has any ideas, and I do have an idea. So even though I'm not sure if it will work yet, I'm still posting my idea as a response to the question.
So far, what I've found isn't ideal for large organizations because it requires some per user modifications on the master node, but I haven't run into any blockers yet for a cluster at the scale that I need it to be. At least nothing that can't be fixed with a few configuration management tool scripts.
The idea is to:
Create a vanilla Amazon EMR cluster
Configure SSL
Configure authentication via Active Directory
(this step is what I am currently on) Configure Zeppelin to use impersonation (i.e. run the actual notebook processes as the authenticated user), which so far seems to require creating a local OS (Linux) user (with a username matching the AD username) for each user that will be authenticating to the Zeppelin UI. Employing one of the impersonation configurations can then cause Zeppelin to run the notebooks as that OS user (there are a couple of different impersonation configurations possible).
Once impersonation is working, manually configure my own OS account's ~/.aws/credentials and ~/.aws/config files.
Write a Notebook that will test various access combinations based on different policies that will be temporarily attached to my account.
The idea is to have the Zeppelin notebook processes kick off as the OS user that is named the same as the AD authenticated user, and then have an ~/.aws/credentials and ~/.aws/config file in each users' home directory, hoping that that might cause the connection to S3 to follow the rules that are attached to the AWS account that is associated with the keys in each user's credentials file.
I'm crossing my fingers that this will work, because if it doesn't, my idea for how to potentially accomplish this will become significantly more complex. I'm planning on continuing to work on this problem tomorrow afternoon. I'll try to post an update when I have made some more progress.
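For what it's worth, the per-user file I have in mind is just an ordinary credentials file owned by the matching OS user (the keys are hypothetical):
# /home/<ad-username>/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...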
One way to allow access to S3 by an IAM user/role is to meet these two conditions:
Create an S3 bucket policy matching S3 resources with the IAM user/role. This should be done under S3 / your bucket / Permissions / Bucket Policy.
Example:
{
    "Version": "2012-10-17",
    "Id": "Policy...843",
    "Statement": [
        {
            "Sid": "Stmt...434",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<account-id>:user/your-s3-user",
                    "arn:aws:iam::<account-id>:role/your-s3-role"
                ]
            },
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::target-bucket/*",
                "arn:aws:s3:::other-bucket/specific-resource"
            ]
        }
    ]
}
Allow S3 actions for your IAM user/role. This should be done under IAM / Users / your user / Permissions / Add inline policy. Example:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:HeadBucket",
                "s3:ListObjects"
            ],
            "Resource": "*"
        }
    ]
}
Please note this might not be the only and/or best way, but it worked for me.