Ansible AWS deployment: Can we use IAM role instead of Keys?

I have an Ansible master running on an Ubuntu EC2 server with an IAM role that has full permissions on EC2 and nothing else. Instances deployed using this Ansible master are created, but end up in the terminated state.
However, while testing another approach, I created a new master and provided the access keys of a root user with all permissions, and deployment worked.
Is there a problem with the IAM role's permissions, or is deployment known not to work with IAM roles?

It works as expected for me:
root@test-node:~# cat /etc/issue
Ubuntu 14.04.4 LTS \n \l
root@test-node:~# ansible --version
ansible 2.1.2.0
config file =
configured module search path = Default w/o overrides
root@test-node:~# pip list | grep boto
boto (2.42.0)
If no credentials are specified in environment variables or config files, Boto (the library Ansible uses to connect to AWS) will try to fetch credentials from the instance metadata.
You may try to fetch them manually with:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
and pass the AccessKeyId and SecretAccessKey to Ansible via environment variables to test that the role's permissions are correct.
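For example, a rough way to do that by hand (a sketch, assuming jq is installed; my-role is a placeholder for your actual role name):
CREDS=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/my-role)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | jq -r .AccessKeyId)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | jq -r .SecretAccessKey)
export AWS_SECURITY_TOKEN=$(echo "$CREDS" | jq -r .Token)  # boto2 reads the session token from this variable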
Keep in mind that:
the IAM role should be attached to the EC2 instance before it starts
the region should always be defined: either via module parameters or via an environment variable.
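For example, a minimal way to satisfy the region requirement from the environment (us-east-1 is a placeholder):
export AWS_REGION=us-east-1  # the ec2 module also accepts a region: parameter directly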

Related

ec2-user vs root IAM instance profile change

I have a CloudFormation-based deployment of an application, which creates an Amazon Linux 2 EC2 instance from a Marketplace AMI with an IAM Instance Profile. In the CloudFormation template I run scripts via cfn-init that contain some AWS CLI commands, like ssm get-parameter, and I also mount an EFS volume using mount and amazon-efs-utils. This has been working like a charm.
Now I have a customer who is running in their AWS account using the common AMI and CloudFormation templates. For them, the cfn-init is failing because cfn-init is running as root, but root strangely has no IAM privileges and can't run the scripts or the efs-helper based mount, even though the Instance Profile is there. But ec2-user does have IAM privileges from the Instance Profile!
To summarize:
ec2-user does have IAM privileges from the Instance Profile.
# logged in as ec2-user
aws ssm get-parameter --name "/aParameter"
returns a result
root does not have IAM privileges from the Instance Profile.
# logged in as ec2-user
sudo aws ssm get-parameter --name "/aParameter"
An error occurred (ParameterNotFound) when calling the GetParameter operation:
I expected that all users running in the instance would have the Instance Profile as their credentials if they have not explicitly authenticated some other way. I can't see anything in the customer's environment that would cause this - I was thinking maybe a Service Control Policy - but they have none of that.
Has anyone seen this behavior and have a fix?
Thanks so much in advance.....
As far as I know, there is no relationship between ec2-user and the instance profile:
ec2-user is a Linux user on the instance.
An instance profile is an IAM role attached to an EC2 instance, so its credentials should be available to any user on it.
You can cross-check with other commands, like aws s3 ls.
By the way, with sudo you are running the command as root.
Turns out AWS credentials were left in the root user's home directory on the AMI. :-/
Asri Badlah's aws iam get-user suggestion showed the problem. Thank you!
The instance profile did not have access to get-user, but sudo aws iam get-user did work and showed the user from the credentials in /root/.aws.
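For anyone debugging the same symptom, a quick check along these lines can expose stale credentials shadowing the instance profile (aws sts get-caller-identity needs no IAM permissions):
sudo ls -la /root/.aws                 # any credentials file here overrides the role
sudo aws sts get-caller-identity       # shows which principal root is actually using
sudo rm /root/.aws/credentials         # remove the stale keys so the role takes over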

Have to delete environment variables for aws cli to work without --profile flag

OK, so I am baffled by this AWS CLI behavior. When I set my AWS credentials in environment variables, the AWS CLI forces me to pass the --profile flag every time I use the CLI.
So when AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set, I cannot run commands like aws s3 ls without passing the --profile flag, even though my profile is [default].
Also, just to note, the environment variable values and the values inside my ~/.aws/credentials
file are exactly the same. I also tried setting both AWS_PROFILE and AWS_DEFAULT_PROFILE to default, hoping that if all values (keys, secret, and profile) are set in environment variables I would not have to pass the --profile flag explicitly. Not having to pass this flag is very important for me at this point, because if I am running an application that connects to AWS and picks up default credentials, there is no easy way to pass profile information to that app.
My credentials file looks like the following:
[default]
aws_access_key_id = AKIA****
aws_secret_access_key = VpR***
My config file looks like the following:
[default]
region = us-west-1
output = json
And my environment variables have the same values for the corresponding entries, at least for the key, secret, and profile.
Any idea how to solve this issue?
The AWS CLI looks for credentials using a series of providers in a particular order. (https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html#config-settings-and-precedence)
Specifically:
Command line options – You can specify --region, --output, and --profile as parameters on the command line.
Environment variables – You can store values in the environment variables: AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN. If they are present, they are used.
CLI credentials file – This is one of the files that is updated when you run the command aws configure. The file is located at ~/.aws/credentials on Linux or macOS, or at C:\Users\USERNAME\.aws\credentials on Windows. This file can contain the credential details for the default profile and any named profiles.
CLI configuration file – This is another file that is updated when you run the command aws configure. The file is located at ~/.aws/config on Linux or macOS, or at C:\Users\USERNAME\.aws\config on Windows. This file contains the configuration settings for the default profile and any named profiles.
Container credentials – You can associate an IAM role with each of your Amazon Elastic Container Service (Amazon ECS) task definitions. Temporary credentials for that role are then available to that task's containers. For more information, see IAM Roles for Tasks in the Amazon Elastic Container Service Developer Guide.
Instance profile credentials – You can associate an IAM role with each of your Amazon Elastic Compute Cloud (Amazon EC2) instances. Temporary credentials for that role are then available to code running in the instance. The credentials are delivered through the Amazon EC2 metadata service. For more information, see IAM Roles for Amazon EC2 in the Amazon EC2 User Guide for Linux Instances and Using Instance Profiles in the IAM User Guide.
Another potential option for you would be to unset any colliding variables in your env and rely on the aws credentials file to provide the appropriate access credentials from the default entry.
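For example, you could clear the colliding variables and confirm where the CLI now reads credentials from:
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
unset AWS_PROFILE AWS_DEFAULT_PROFILE
aws configure list   # the Type column should now show shared-credentials-file, not env
aws s3 ls            # works without --profile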

Dockerfile copy files from amazon s3 or another source that needs credentials

I am trying to build a Docker image and I need to copy some files from S3 to the image.
Inside the Dockerfile I am using:
Dockerfile
FROM library/ubuntu:16.04
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
# Copy files from S3 inside docker
RUN aws s3 cp s3://filepath_on_s3 /tmp/
However, aws requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
I know I can probably pass them using ARG. But, is it a bad idea to pass them to the image at build time?
How can I achieve this without storing the secret keys in the image?
In my opinion, IAM roles are the best way to delegate S3 permissions to Docker containers.
Create the role from IAM -> Roles -> Create Role -> choose the service that will use this role (select EC2) -> Next -> select the S3 policies, and the role should be created.
Attach the role to a running/stopped instance from Actions -> Instance Settings -> Attach/Replace IAM Role.
This worked successfully in Dockerfile:
RUN aws s3 cp s3://bucketname/favicons /var/www/html/favicons --recursive
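This works because containers on the instance's default bridge network can usually reach the instance metadata endpoint during the build, so the CLI inside the container picks up the role's temporary credentials. A quick check from inside any container on the instance:
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
# should print the attached role's name; the AWS CLI uses this same endpoint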
I wanted to build upon @Ankita Dhandha's answer.
In the case of Docker you are probably looking to use ECS.
IAM roles are absolutely the way to go.
When running locally, use a locally tailored Dockerfile and mount your AWS CLI ~/.aws directory into the root user's ~/.aws directory in the container (this allows it to use your own or a custom IAM user's CLI credentials to mimic ECS behavior for local testing).
# local system: a Dockerfile that installs the AWS CLI
FROM ubuntu:latest
RUN apt-get update && apt-get install -y curl unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" \
    && unzip awscliv2.zip \
    && ./aws/install
# on the host: mount your CLI credentials into the container at run time
docker run --mount type=bind,source="$HOME/.aws",target=/root/.aws
Role Types
EC2 Instance Roles define the global actions any instance can perform. An example would be having access to S3 to download ecs.config to /etc/ecs/ecs.config during your custom user-data.sh setup.
Use the ECS Task Definition to define a Task Role and a Task Execution Role.
Task Roles are used for a running container. An example would be a live web app that is moving files in and out of S3.
Task Execution Roles are for deploying the task. An example would be downloading the ECR image and deploying it to ECS, downloading an environment file from S3 and exporting it to the Docker container.
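Both roles can be set when registering a task definition; for example (the family name, ARNs, and file are all placeholders):
aws ecs register-task-definition \
  --family my-task \
  --task-role-arn arn:aws:iam::123456789012:role/my-task-role \
  --execution-role-arn arn:aws:iam::123456789012:role/my-exec-role \
  --container-definitions file://containers.json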
General Role Propagation
In the case of the C# SDK there is a documented list of locations it will check, in order, to obtain credentials. Not every SDK behaves exactly like this, but many do, so you have to research it for your situation.
reference: https://docs.aws.amazon.com/sdk-for-net/latest/developer-guide/creds-assign.html
Plain text credentials fed into either the target system or environment variables.
CLI AWS credentials and a profile set in the AWS_PROFILE environment variable.
Task Execution Role used to deploy the docker task.
The running task will use the Task Role.
When the running task has no permissions for the current action it will attempt to elevate into the EC2 instance role.
Blocking EC2 instance role access
Because the EC2 instance role commonly needs access for custom system setup, such as configuring ECS, it is often desirable to block your tasks from accessing this role. This is done by blocking the tasks' access to the EC2 metadata endpoint, the well-known link-local address 169.254.169.254 available in any AWS VPC.
reference: https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-ec2-metadata/
AWS VPC Network Mode
# ecs.config
ECS_AWSVPC_BLOCK_IMDS=true
Bind Network Mode
# ec2-userdata.sh
# install dependencies
yum install -y aws-cli iptables-services
# setup ECS dependencies
aws s3 cp s3://my-bucket/ecs.config /etc/ecs/ecs.config
# setup IPTABLES
iptables --insert FORWARD 1 -i docker+ --destination 169.254.169.254/32 --jump DROP
iptables --append INPUT -i docker+ --destination 127.0.0.1/32 -p tcp --dport 51679 -j ACCEPT
service iptables save
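A quick way to verify the block from inside a task container (the exact failure mode depends on your setup, but the request should no longer return a role name):
curl -s --max-time 5 http://169.254.169.254/latest/meta-data/iam/security-credentials/ \
  || echo "metadata endpoint blocked"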
Many people pass in the details through args, which I see as fine and the way I would personally do it. You can over-engineer certain processes, and I think this is one of them.
Example docker with args
docker run -e AWS_ACCESS_KEY_ID=123 -e AWS_SECRET_ACCESS_KEY=1234
That said, I can see why some companies want to hide this away and fetch it from a private API or something. This is why AWS created IAM roles - https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.
Credentials can be retrieved from the instance metadata service, a private address that only the instance itself can access, meaning you would never have to store your credentials in the image itself.
Personally I think it's overkill for what you are trying to do; if someone hacks your image they can read the credentials out and still get access to those details. Passing them in as args is safe as long as you protect yourself as you should anyway.
You should configure your credentials in the ~/.aws/credentials file:
~$ cat .aws/credentials
[default]
aws_access_key_id = AAAAAAAAAAAAAAAAAAAAAAAAAAAAa
aws_secret_access_key = BBBBBBBBBBBBBBBBBBBBBBBBBBBBB
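If you take this route, avoid COPYing the file into the image, since the keys would be baked into a layer; mounting it at run time keeps them out of the image (myimage is a placeholder):
docker run -v "$HOME/.aws:/root/.aws:ro" myimage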

Jenkins on AWS EC2 instance unable to use instance profile after upgrade

I had a Jenkins 2.46 installation running on an EC2 box, associated with an IAM role through an instance profile.
Jenkins was able to do various tasks requiring AWS credentials (e.g. use Terraform, upload files to S3, access CodeCommit git repos) using just the instance profile role (no access keys or secret keys were stored on the instance).
After upgrading to Jenkins 2.89, this is no longer the case: every task requiring authentication with AWS fails with a 403 error.
However, running a command from the instance shell as the jenkins user still works fine (e.g. running sudo -u jenkins /usr/bin/aws s3 ls s3://my-bucket/ lists bucket files; running the same command in Jenkins' Script Console yields a 403).
I read the release notes of every version from 2.46 to 2.89 but I did not find anything relevant.
Jenkins was installed and updated through yum, and the AWS CLI was installed using the bundled installer provided by AWS.

User Data script to call aws cli

I am trying to get some files from S3 on startup in an EC2 instance by using a User Data script and the command
/usr/bin/aws s3 cp ...
The log tells me that permission was denied, and I believe it is because the AWS CLI does not find any credentials when executing the user data script.
Running the command with sudo after the instance has started works fine.
I have run aws configure both with sudo and without.
I do not want to use a cron job to run something on startup, since I am working with an AMI and often need to change the script; it is therefore more convenient for me to change the user data instead of creating a new AMI every time the script changes.
If possible, I would also like to avoid writing the credentials into the script.
How can I configure awscli in such a way that the credentials are used when running a user data script?
I suggest you remove the AWS credentials from the instance/AMI. Your userdata script will be supplied with temporary credentials when needed by the AWS metadata server.
See: IAM Roles for Amazon EC2
Clear/delete any AWS credentials configuration from your instance and create an AMI
Create a policy that has the minimum privileges needed to run your script
Create an IAM role and attach the policy you just created
Attach the IAM role when you launch the instance (very important)
Have your user data script call /usr/bin/aws s3 cp ... without supplying credentials explicitly or using a credentials file (see the sketch below)
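A minimal sketch of such a user data script (bucket, file, and region are placeholders):
#!/bin/bash
# runs as root at first boot; credentials come from the attached IAM role
/usr/bin/aws s3 cp s3://my-bucket/my-file /tmp/my-file --region us-east-1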
You can configure your EC2 instance with a pre-defined IAM Role whose temporary credentials the instance fetches automatically, which in turn allows you to call aws-cli commands in your User Data script without the need to configure credentials at all.
Here's more info on IAM Roles for EC2:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
It's worth noting here that you'll need to attach the appropriate policies to the IAM Role that you assign to your instance in order for the aws-cli commands to succeed. More information on that can be found here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#working-with-iam-roles
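For example, attaching the AWS managed read-only S3 policy to an existing role from the CLI (the role name is a placeholder):
aws iam attach-role-policy \
  --role-name my-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess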