How to use EC2 instance role with AWS CLI - amazon-web-services

When I run an aws command like aws s3 ls, it uses the default profile. Can I create a new profile that uses the role attached to the EC2 instance?
If so, how should I write the credentials/config files?

From Credentials — Boto 3 Docs documentation:
The mechanism in which boto3 looks for credentials is to search through a list of possible locations and stop as soon as it finds credentials. The order in which Boto3 searches for credentials is:
Passing credentials as parameters in the boto3.client() method
Passing credentials as parameters when creating a Session object
Environment variables
Shared credential file (~/.aws/credentials)
AWS config file (~/.aws/config)
Assume Role provider
Boto2 config file (/etc/boto.cfg and ~/.boto)
Instance metadata service on an Amazon EC2 instance that has an IAM role configured.
Since the Shared Credential File is consulted before the Instance Metadata service, it is not possible to use an assigned IAM Role if a credentials file is provided.
One idea to try: You could create another user on the EC2 instance that does not have a credentials file in their ~/.aws/ directory. In this case, later methods will be used. I haven't tried it, but using sudo su might be sufficient to change to this other user and use the IAM Role.
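A minimal sketch of that idea, assuming a hypothetical user named cli-role-user (the name is illustrative):
$ sudo useradd --create-home cli-role-user
$ sudo -u cli-role-user aws sts get-caller-identity
$ sudo -u cli-role-user aws s3 ls
Because cli-role-user has no ~/.aws/credentials file, the credential chain falls through to the instance metadata service and the commands run with the instance's IAM role.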

Unfortunately, if you have a credentials file, use environment variables, or specify the IAM key/secret via the SDK, these will always take higher precedence than the role itself.
If the credentials are required infrequently, you could create another role that the EC2 instance's IAM role can assume (using sts:AssumeRole) whenever it needs to perform these interactions; a config sketch follows this answer. You would then remove the credentials file from disk.
If you must have a credentials file on the disk, the suggestion would be to create another user on the server exclusively for using these credentials. As a credentials file is only used by default for the user that owns it, all other users will not use this file (unless it is explicitly passed as an argument in the SDK/CLI interaction).
Ensure that the local user you create is locked down as much as possible to reduce the chance of unauthorized users gaining access to it and its credentials.
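For the assume-role suggestion in this answer, a hedged sketch (the profile name and role ARN are placeholders; the target role must also trust the instance's role):
~/.aws/config
[profile extra-permissions]
role_arn = arn:aws:iam::123456789012:role/extra-permissions-role
credential_source = Ec2InstanceMetadata
$ aws --profile extra-permissions s3 ls
With credential_source = Ec2InstanceMetadata, the CLI uses the instance role's credentials to call sts:AssumeRole on the target role, so no credentials file is needed on disk.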

This is how we solved this problem. I am writing this answer in case it is valuable for other people looking for an answer.
Add a role "some-role" to an instance with id "i-xxxxxx"
$ aws iam create-instance-profile --instance-profile-name some-profile-name
$ aws iam add-role-to-instance-profile --instance-profile-name some-profile-name --role-name some-role
$ aws ec2 associate-iam-instance-profile --iam-instance-profile Name=some-profile-name --instance-id i-xxxxxx
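Optionally, you can verify that the association took effect (i-xxxxxx as above):
$ aws ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=i-xxxxxx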
Attach "sts:AssumeRole"
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"ec2.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
}
$ aws iam update-assume-role-policy --role-name some-role --policy-document file://policy.json
Define a profile on the instance
Add "some-operator-profile" to use the EC2 instance role.
~/.aws/config
[profile some-operator-profile]
credential_source = Ec2InstanceMetadata
Do what you want with the EC2-provided role
$ aws --profile some-operator-profile s3 ls

Related

ERROR 1045 (28000): Access denied for user 'db_user'@'ip' (using password: YES) while connecting to an RDS DB instance using IAM DB Authentication

Following is a quick summary of the question. Read the full description section for the underlying details.
Condensed description:
Assume you have an existing IAM user that is already able to access other AWS services, such as S3, CloudFront, ECS, EC2...
Let's say we need to provide the user with read-only access over the RDS cluster and set up IAM DB Authentication as well.
We perform all the steps mentioned in the official guide on OUR local system, and it works perfectly: we are able to generate a correct auth token for db_user.
However, here is where it gets interesting: when the user tries to generate the token for the db_user account from their local machine, the user is denied access.
Full description:
Setup:
My RDS cluster instance runs the Aurora MySQL engine.
Engine version: 5.6.10a
I've been following the AWS Knowledge Center guide "How do I allow users to connect to Amazon RDS with IAM credentials?"
The guide doesn't explicitly mention it, but while generating the authentication token, the AWS CLI uses the IAM credentials stored locally to sign the request.
I'd like to highlight that in the snippet below, admin is the profile name stored by the AWS CLI for my admin IAM user, while db_user is the IAM user (with rds-db:connect privileges).
TOKEN="$(aws --profile admin rds generate-db-auth-token -h.. .. .. -u db_user)"
Using the above snippet I'm able to authenticate with the generated token and connect to the cluster.
If the --profile attribute is not specified, it reads the default profile saved in the credentials file.
Issue:
Instead of using --profile admin I'm looking to use an already existing non-admin IAM profile for generating an authentication token.
For instance, assume an IAM user named developer, with RDS read-only privileges and credentials stored locally under the profile rds_read_only:
TOKEN="$(aws --profile rds_read_only rds generate-db-auth-token -h.. .. .. -u db_user)"
If I use the above token, I get the following error:
ERROR 1045 (28000): Access denied for user 'db_user'@'ip' (using password: YES)
After hours of troubleshooting, I was able to conclude that my rds_read_only profile is unable to generate valid authentication tokens, probably because the IAM user developer is missing some required policies.
I tried attaching all policies available under RDS and the RDS Data API (individually as well as in combinations) to the IAM user developer, without any luck. Only if I attach the AdministratorAccess policy to the IAM user developer is it able to generate the token successfully.
Question:
What are the mandatory policies required for non-admin IAM users to generate an authentication token successfully?
I saw your question on the AWS blog.
You need to create an IAM policy to define access to your AWS RDS instances. Check these docs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "rds-db:connect"
      ],
      "Resource": [
        "arn:aws:rds-db:us-east-2:1234567890:dbuser:*/*"
      ]
    }
  ]
}
Create a user inside the RDS DB instance following the instructions to use the IAM plugin (check the docs).
Create the token (check the docs).
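A minimal sketch of generating the token with the CLI (the hostname and region are placeholders):
$ TOKEN="$(aws rds generate-db-auth-token \
    --hostname mydb.cluster-xxxxxxxx.us-east-2.rds.amazonaws.com \
    --port 3306 \
    --region us-east-2 \
    --username db_user)"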
I found this nice plugin that builds a JDBC jar to allow IAM authentication.
This answers the specific question of @Ronnie regarding the token generation.
Ronnie, I am back. I used the following policy in my AWS account (Sandbox). I am an AWS federated user with AssumeRole privileges, so effectively Admin.
You have to be very careful because, as you said, the article doesn't make the distinction between:
an AWS IAM user using a generated token to access the DB, and
an AWS IAM user/role with the right admin policies that generates a VALID token.
I will give an example of how to identify correctly generated tokens. For some reason AWS generates a value but doesn't tell you whether it is a useful token or not :-\
Token without admin/special access = WILL NOT WORK:
sandboxdb.asdasdffw.ap-southeast-2.rds.amazonaws.com:3306/?Action=connect&DBUser=human_user&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIA5XZIHS3GYVMRPRZF%2F20200424%2Fap-southeast-2%2Frds-db%2Faws4_request&X-Amz-Date=20200424T035250Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=3efd467d548ea05a8bdf097c132b03661680908f723861e45323723c870ef646
Token with access = WILL WORK! Look carefully: it contains X-Amz-Security-Token=
sandboxdb.ras21th1z8.ap-southeast-2.rds.amazonaws.com:3306/?Action=connect&DBUser=human_user&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=ASIA5XZIHS3GSAQI6XHO%2F20200424%2Fap-southeast-2%2Frds-db%2Faws4_request&X-Amz-Date=20200424T040756Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Security-Token=IQoJb3JpZ2luX2VjEEQaDmFwLXNvdXRoZWFzdC0yIkgwRgIhAOzVIondlMxYkJG5nWNeQlxS0M6B1pphgD1ewFwx2VfKAiEAkcp2jNHHmNMgwqUholnW545MwjzoEjS1uh4BHI4R4GAqvgMIbRABGgw5NDQ0NDAzMTc2NDUiDEvFkyEy833kd%2By4nyqbAybqK5dcP0nTlqZ19I2OVZxzwzz%2BUv9RVdVLMPHE5b%2FqXQGVG1CRtw90r9Lt4QkzTBeIVzdtIkXbpwFtqFh24Djb%2BiZHfvElj%2Fhz29ExzStU0fPYMewEB1u%2F2Osi72Fw6KbZ6TDy5EjuWcrrS08PZQ9CHc%2Fc8iDAIKs28vJ70KKcmow0SInVZGHGpD2JAgIL7jvnadVlcAW7lN2OAnxS72Kb4neqNuHcWzfPLfbXaOP1OaOs7vCR7zDlTTxX2aHoVflC69K9K67BqzdnDnnju%2F4XWQWU3r%2ByXylExwOsiG3y4Qq6wv002l%2BpQmF5%2BMXdTrFR5ewpfrcHf8TZLI5eq8HLA2gG1%2B255L%2Bqt%2BD80T%2FCzEdKSJPjppdYSq9FdeCMRSsqp5PpXP%2BDbQZwmhxiE2RmrbOKwNsFPJqUUnemQHXYLB8lily56nnswT2PYmQOGHqnZWRrv%2FTlGOAGlThuiR%2BLhQLBC08nBEGbBqK%2FjU4JwFMY4JfhgUHr8BA9CuGwAu0qIAFzG71M3HzCNX6o56k1gYJB%2F3%2FJaKlp7TCIxIn1BTrqASqywcfKrWhIaNX3t%2BV%2FZoYYO%2FtGVBZLyr3sSmByA%2Fwq538LiPHA0wDE3utOg%2FwNP%2BQGTcXhk1F%2BI0HOHztAQ2afnKW8r1oRbXxYAzb2j2b8MNEwrsaBju2gHFRgZHkM8YI%2FP5cvYr%2F8FQXWcE9eqjdme0hOo3rPETzxZfRwNQTHEntBbVVD1ec0d7DblfSEDZhLk%2By1%2BFMAYf7NeBIfU6GNsAN2hTdSkPPuto2fQKzRybRAwxQz5P3cO5CClUNIxu4J3bM1MUUTux%2BtMjqRvjGxDhB4yLIJmIPOOYLDSOXl3aWO2y4v89wu5A%3D%3D&X-Amz-Signature=1c6fcc472bb2af09055117075ca21d4a5f715910443115116c9230905721e79d
AWS IAM policy for the DB user to connect to the AWS RDS DB instance:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "rds-db:connect",
      "Resource": "*"
    }
  ]
}
To avoid messing with local configs, I used the following testing process:
Token generation from an AWS EC2 instance running in the same VPC as the DB. Generation is successful.
Token generation from the local machine using a Docker container with the AWS CLI. Generation is successful (a sketch follows below).
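A sketch of the Docker-based test, assuming the amazon/aws-cli image and placeholder endpoint values; the host's credentials are forwarded as environment variables:
$ docker run --rm \
    -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
    amazon/aws-cli rds generate-db-auth-token \
    --hostname sandboxdb.example.ap-southeast-2.rds.amazonaws.com \
    --port 3306 --region ap-southeast-2 --username human_user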
Of course, my user was created in the MySQL DB with the following command:
mysql -h $HOSTNAME -u$ADMIN_USER -p$ADMIN_PASS <<EOF
CREATE USER db_human_user IDENTIFIED WITH AWSAuthenticationPlugin AS 'RDS';
GRANT SELECT ON $SPECIFIC_GIVEN_DB.* TO 'db_human_user';
EOF
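To then connect with the generated token, a sketch along the lines of the AWS docs (the CA bundle path is a placeholder; SSL and the cleartext plugin are required for IAM authentication):
$ mysql --host=$HOSTNAME --port=3306 \
    --ssl-ca=/path/to/rds-ca-bundle.pem \
    --enable-cleartext-plugin \
    --user=db_human_user \
    --password="$TOKEN"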

How to use instance profile credentials available on running ec2 instance?

I want to create tags from within the running EC2 instance. For that I need credentials, and I wanted to use the credentials available at curl -s http://169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance. I set the access key, secret key and session token as env variables from the above URL. Now I tried
aws ec2 create-tags --resources i-instanceid --tags Key=Test,Value=Testing --region us-east-1
It's giving me the following error:
An error occurred (UnauthorizedOperation) when calling the CreateTags
operation: You are not authorized to perform this operation. Encoded
authorization failure message
You can use these credentials by invoking the AWS CLI without any credential-related parameters; it will try to pick up the creds from the instance profile. Your problem is not that you do not have credentials, but that you do not have permission to invoke the CreateTags operation. As the error message says, it is an authorization problem, not an authentication one. You need to change the instance profile policy to include the capability to change instance tags.
More here:
https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-iam-instance-profile.html
Check if your role allows you to create, list, and delete tags on EC2, or whether you need a custom policy attached to this role to allow these actions.
In summary, you should have:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
      ],
      "Resource": "*"
    }
  ]
}
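One way to attach that policy to the instance's role, sketched with hypothetical role and policy names (the JSON above saved as tag-policy.json):
$ aws iam put-role-policy \
    --role-name my-instance-role \
    --policy-name allow-ec2-tagging \
    --policy-document file://tag-policy.json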

How to give ec2 instance access to s3 using boto3

By googling, I found this tutorial on accessing S3 from an EC2 instance without a credentials file. I followed its instructions and got the desired instance, as shown in the AWS web console.
However, I don't want to do it manually using the web console every time. How can I create such EC2 instances using boto3?
I tried
import boto3

s = boto3.Session(profile_name='dev', region_name='us-east-1')
ec2 = s.resource('ec2')
rc = ec2.create_instances(ImageId='ami-0e297018',
                          InstanceType='t2.nano',
                          MinCount=1,
                          MaxCount=1,
                          KeyName='my-key',
                          IamInstanceProfile={'Name': 'harness-worker'})
where harness-worker is the IAM role with access to S3, but nothing else.
It is also the role used in the manual approach via the AWS web console tutorial.
Then I got an error saying:
ClientError: An error occurred (UnauthorizedOperation) when calling
the RunInstances operation: You are not authorized to perform this
operation.
Did I do something obviously wrong?
The dev profile has AmazonEC2FullAccess. Without the line IamInstanceProfile={'Name': 'harness-worker'}, create_instances is able to create the instance.
To assign an IAM instance profile to an instance, AmazonEC2FullAccess is not sufficient. In addition, you need the following permission to pass the role to the instance.
See: Granting an IAM User Permission to Pass an IAM Role to an Instance
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "iam:PassRole",
    "Resource": "*"
  }]
}
First, you can give full IAM access to your dev profile and confirm that it works. Then remove full IAM access, grant only iam:PassRole, and try again.
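A sketch of granting only iam:PassRole to the IAM user behind the dev profile, with hypothetical user/policy/file names (the policy above saved as pass-role.json; the Resource could also be narrowed to the harness-worker role ARN):
$ aws iam put-user-policy \
    --user-name dev-user \
    --policy-name allow-pass-harness-worker \
    --policy-document file://pass-role.json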
This has nothing to do with the role you are trying to assign to the new EC2 instance. The Python script you are running doesn't have the RunInstances permission.

Mounting AWS S3 bucket using AWS IAM roles instead of using a passwd file

I am mounting an AWS S3 bucket as a filesystem using s3fs-fuse. It requires a file which contains the AWS Access Key ID and AWS Secret Access Key.
How do I avoid the access using this file? And instead use AWS IAM roles?
As per the FUSE Over Amazon documentation, you can specify the credentials using 4 methods. If you don't want to use a file, you can set the AWSACCESSKEYID and AWSSECRETACCESSKEY environment variables.
Also, if your goal is to use an AWS IAM instance profile, then you need to run s3fs-fuse from an EC2 instance. In that case, you don't have to set these credential files/environment variables, because if you attach the instance role and policy while creating the instance, the EC2 instance will get the credentials at boot time. Please see the section 'Using Instance Profiles' (page 190) of the AWS IAM User Guide.
There is an argument -o iam_role=--- which helps you avoid the AccessKey and SecretAccessKey.
The full steps to configure this are given below:
https://www.nxtcloud.io/mount-s3-bucket-on-ec2-using-s3fs-and-iam-role/
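A sketch of such a mount on an instance with a role attached (the bucket name and mount point are placeholders; iam_role can also be set to the explicit role name instead of auto):
$ s3fs my-bucket /mnt/s3 -o iam_role=auto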

opscode aws cookbook iam role

I am trying to use the aws cookbook with IAM roles, but when I try to omit aws_access_key and aws_secret_access_key in the aws_ebs_volume block, Chef keeps showing an error: RightAws::AwsError: AWS access keys are required to operate on EC2.
I assume that when the cookbook says to omit the resource parameters aws_secret_access_key and aws_access_key, I can just delete them from the block.
aws_ebs_volume "userhome_volume" do
provider "aws_ebs_volume"
volume_id node['myusers']['usershome_ebs_volid']
availability_zone node['myusers']['usershome_ebs_zone']
device node['myusers']['usershome_ebs_dev_id']
action :attach
end
Does anyone have the example of aws cookbook with iam roles please?
update:
Do I still need to define the aws creds data bag if I already have the proper IAM role attached to the instance?
When I use an IAM role and the aws cookbook, what does the aws_ebs_volume block look like?
In order to manage AWS components, you need to provide authentication credentials to the node in one of two ways:
explicitly pass the credentials parameters to the resource,
or let the resource pick up credentials from the IAM role assigned to the instance.
When you provision the instance, you should assign it the appropriate role in "Step 3. Configure Instance Details" (when using the console). The setting "IAM role" for EC2 automatically deploys and rotates AWS credentials for you, eliminating the need to store your AWS access keys with your application. On an instance provisioned this way, you no longer need to include aws_access_key and aws_secret_access_key in the aws_ebs_volume block.
Here are code examples on how to launch an instance with an IAM role using the IAM and Amazon EC2 CLIs:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
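For example, a minimal CLI sketch of launching an instance with an instance profile attached (the AMI ID, key pair, and profile name are placeholders):
$ aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --key-name my-key \
    --iam-instance-profile Name=my-instance-profile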
and here are some code examples on automating IAM credentials with Ruby and Chef:
http://www.getchef.com/blog/2013/12/19/automating-iam-credentials-with-ruby-and-chef/
When you assign the appropriate IAM role during instance provisioning, your code should work without aws_access_key and aws_secret_access_key.
Here are the steps:
Set up your S3, Chef server, and IAM role as described here:
https://securosis.com/blog/using-amazon-iam-roles-to-distribute-security-credentials-for-chef
Execute “knife client ./” to create client.rb and validation.pem, then transfer them from your Chef server into your bucket.
Launch a new instance with the appropriate IAM Role you set up for Chef and your S3 bucket.
Specify your customized cloud-init script in the User Data field or command-line argument as described here:
https://securosis.com/blog/using-cloud-init-and-s3cmd-to-automatically-download-chef-credentials
You can also host the script as a file and load it from a central repository using an include.
Execute chef-client.