How can a CodeBuild container run aws-cli commands without prior authentication? - amazon-web-services

Say I use the aws-cli locally on my machine: I'd need to authenticate with credentials prior to any operation.
How do AWS services give permission to other services on my behalf? And more specifically, how does a container run aws-cli on my behalf without prior authentication?
I am asking this after running my first pipeline successfully in CodePipeline. My buildspec.yml runs an aws s3 sync command flawlessly, which made me wonder how AWS permissions work internally.

AWS CodeBuild uses an IAM Service Role to provide AWS permissions to the CodeBuild environment. You should have had to create a service role for your CodeBuild configuration.
When the AWS CLI runs and it hasn't previously been configured with API access keys, it will check whether it is running in an AWS environment like EC2 or Lambda, and if so it will use the IAM role assigned to that runtime environment.
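For example, you can see that credential chain at work by adding something like the following to your buildspec commands; the bucket and role names shown here are placeholders for illustration, not anything your project must use:
# Inside the CodeBuild container: no `aws configure` has ever been run here,
# yet the CLI resolves credentials from the CodeBuild service role automatically.
aws sts get-caller-identity
# The Arn field in the output shows the assumed service role, e.g. (illustrative):
#   arn:aws:sts::123456789012:assumed-role/codebuild-my-project-service-role/AWSCodeBuild-...
aws s3 sync ./build s3://my-artifact-bucket/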

Related

AWS CLI - Get required permissions to execute specific command

I am creating a specific role/policy to assume for some pipelines that execute AWS CLI commands. So my question is: is there any way to figure out which permissions are required to execute a given AWS CLI command?
For example, which permissions are required to be part of the role to perform an Elastic Beanstalk environment update with aws elasticbeanstalk update-environment?
You can try using iamlive, which allows you to generate IAM policies from AWS calls.
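Roughly, the workflow looks like this, assuming iamlive's default client-side-monitoring mode (the environment name and version label below are placeholders):
# Terminal 1: run iamlive; it builds up an IAM policy from the API calls it observes
iamlive --output-file policy.json

# Terminal 2: enable the AWS CLI's client-side monitoring so calls are reported to iamlive,
# then run the command you want to profile
export AWS_CSM_ENABLED=true
export AWS_CSM_PORT=31000
export AWS_CSM_HOST=127.0.0.1
aws elasticbeanstalk update-environment --environment-name my-env --version-label v2
# policy.json now lists the required actions, e.g. elasticbeanstalk:UpdateEnvironment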

Unable to deploy code onto AWS EC2 instance from AWS CodeDeploy

I am trying to implement CI/CD using AWS CodeBuild, and trying to deploy an application onto an AWS EC2 instance, but the code deployment is failing and showing the error below:
The IAM role arn:aws:iam::341502448925:role/CodeDeployServiceRole does not give you permission to perform operations in the following AWS service: AmazonEC2
I have even created a service role in the IAM console, but it's not working for me. Can someone let me know how I can resolve this issue?
Besides creating an IAM role, you should also install the AWS CodeDeploy agent on your EC2 instance:
install aws-codedeploy agent
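For reference, a rough sketch of both pieces (the role name comes from the error message in the question; the region and the Amazon Linux install commands are assumptions for illustration):
# Attach the AWS-managed CodeDeploy policy to the service role
aws iam attach-role-policy \
  --role-name CodeDeployServiceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole

# On the EC2 instance (Amazon Linux, us-east-1 shown as the example region): install the CodeDeploy agent
sudo yum install -y ruby wget
wget https://aws-codedeploy-us-east-1.s3.us-east-1.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent status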

AWS ECS upload file to bucket from within container via bash

I have an ECS task running on AWS Fargate. I generate some files on the container and need to upload these files to an S3 bucket.
Can I do this by installing the aws cli to the container?
I'm not sure about the following stuff:
Do I need to use some rest api (like python boto3 library) or can I use the aws console?
How should I authenticate the requests (iam and aws secrets manager?)
Do I need to use some rest api (like python boto3 library) or can I use the aws console?
Are you asking how to install the AWS CLI into the Docker container running in ECS? You would need to update your Docker image to include the AWS CLI and then redeploy the container to ECS. The AWS API, Boto3, or the AWS console are not going to help with that task.
How should I authenticate the requests (iam and aws secrets manager?)
By assigning an IAM role to the ECS task.
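As a rough sketch (bucket and file names are placeholders): the install lines are the standard AWS CLI v2 steps you would bake into your image, and at runtime the CLI picks up the task role's credentials with no further configuration:
# In the image build (e.g. inside your Dockerfile's RUN instructions): install AWS CLI v2
curl -sSL "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip -q awscliv2.zip && ./aws/install

# At runtime in the ECS task, with a task role attached, no keys or `aws configure` are needed
aws s3 cp /tmp/output/report.csv s3://my-results-bucket/report.csv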

AWS - ECS load S3 files in entrypoint script

Hi all!
Code: (entrypoint.sh)
printenv
CREDENTIALS=$(curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI")
ACCESS_KEY_ID=$(echo "$CREDENTIALS" | jq .AccessKeyId)
SECRET_ACCESS_KEY=$(echo "$CREDENTIALS" | jq .SecretAccessKey)
TOKEN=$(echo "$CREDENTIALS" | jq .Token)
export AWS_ACCESS_KEY_ID=$ACCESS_KEY_ID
export AWS_SECRET_ACCESS_KEY=$SECRET_ACCESS_KEY
export AWS_SESSION_TOKEN=$TOKEN
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Problem:
I'm trying to fetch files from AWS S3 into an ECS container, inspired by:
AWS Documentation
(But I'm fetching from S3 directly, not through a VPC endpoint.)
I have configured bucket policy & role policy (that is passed in taskDefinition as taskRoleArn & executionRoleArn)
Locally, when I fetch with the AWS CLI and pass the temporary credentials (which I logged in ECS with the printenv command in the entrypoint script), everything works fine; I can save the files on my PC.
On ECS I have error:
fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Where can I find a solution? Has anyone had a similar problem?
First thing: if you are working inside AWS, it is strongly recommended to use the ECS service role, ECS task role, or EC2 role; you do not need to fetch credentials from the metadata endpoint yourself.
But it seems like either the current role does not have permission to S3, or the entrypoint is not exporting the environment variables properly.
If your container instance or task already has a role assigned, then you do not need to export access keys; just call aws s3 cp s3://BUCKET/file.txt /PATH/file.txt and it should work.
IAM Roles for Tasks
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances. Instead of creating and distributing your AWS credentials to the containers or using the EC2 instance's role, you can associate an IAM role with an ECS task definition or RunTask API operation.
So when you assign a role to the ECS task or ECS service, your entrypoint can be this simple:
printenv
aws s3 cp s3://BUCKET/file.txt /PATH/file.txt
Also, your exports will not work the way you are expecting (jq without the -r flag returns values wrapped in quotes); the best way to pass environment variables to a container is from the task definition, not from exports in the entrypoint.
I suggest assigning a role to the ECS task, and it should work as you are expecting.
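If the 403 persists after the role is assigned, it is worth checking that the task role's policy actually covers the object; a minimal sketch, with placeholder role and policy names and the BUCKET placeholder from the question:
# Grant the task role read access to the bucket objects (names are illustrative)
aws iam put-role-policy \
  --role-name my-ecs-task-role \
  --policy-name s3-read-file \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::BUCKET/*"
    }]
  }'

# From inside the running container, confirm which role the CLI actually resolved
aws sts get-caller-identity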

How can I inject an artifact from AWS S3 inside a Docker image?

I need to prepare a Docker image with an embedded JAR file and push it to ECR. The JAR file is stored in an S3 bucket. How can I inject the JAR into the image without explicitly storing AWS access keys in the image?
Maybe I can use the AWS CLI, or does another way exist?
Also, it is not recommended to make my S3 bucket publicly accessible or to set access keys via environment variables when executing docker run.
You can define an AWS IAM Role and attach it to EC2 instances, so any instance that needs to run this docker build command can do so as long as it has the IAM role attached to it. You can do so from the AWS Console. This solves the problem of you putting AWS credentials on the instance itself.
You will still need to install the AWS CLI in your Dockerfile. Once the IAM role is attached, you don't have to worry about credentials.
Recommended docs:
IAM Roles for Amazon EC2
Here's an official blog post tutorial on how to do this:
Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
Just make sure you specify in the IAM Role which S3 Buckets you want these instances to have access to.
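A rough sketch of the CLI side of that blog post, plus one way to use the role afterwards (instance ID, role, profile, and bucket names are all placeholders); whether you fetch the jar on the host before docker build or in a RUN step with the CLI installed in the image, the credentials come from the instance role rather than stored keys:
# Create an instance profile for an existing role and attach it to the running instance
aws iam create-instance-profile --instance-profile-name s3-reader-profile
aws iam add-role-to-instance-profile --instance-profile-name s3-reader-profile --role-name s3-reader-role
aws ec2 associate-iam-instance-profile \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=s3-reader-profile

# On that instance, the CLI now gets temporary credentials from the instance profile,
# so the jar can be pulled without any stored access keys
aws s3 cp s3://my-artifact-bucket/app.jar ./app.jar
docker build -t my-app .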