API credentials for AWS ECR

I created a new AWS account for AWS ECS. In Jenkins I installed the Amazon ECS plugin, and now I want to build and push images to the registry.
But I need to create an API key and secret in AWS so that Jenkins can communicate with AWS ECR.
How do I create these credentials in AWS?

Create an IAM user by following this documentation:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html
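If you prefer the CLI, here is a minimal sketch of the same steps (the user name jenkins and the AWS-managed AmazonEC2ContainerRegistryPowerUser policy are assumptions; pick names and permissions that fit your setup):

# Create the user Jenkins will authenticate as.
aws iam create-user --user-name jenkins
# Grant it push/pull rights on ECR via an AWS-managed policy.
aws iam attach-user-policy --user-name jenkins --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser
# Prints the AccessKeyId and SecretAccessKey to configure in Jenkins.
aws iam create-access-key --user-name jenkins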

Related

AWS ECS upload file to bucket from within container via bash

I have an ECS task running on AWS Fargate. I generate some files in the container and need to upload these files to an S3 bucket.
Can I do this by installing the AWS CLI in the container?
I'm not sure about the following:
Do I need to use some REST API (like the Python boto3 library), or can I use the AWS console?
How should I authenticate the requests (IAM and AWS Secrets Manager?)
Do I need to use some REST API (like the Python boto3 library), or can I use the AWS console?
Are you asking how to install the AWS CLI into the Docker container running in ECS? You would need to update your Docker image to include the AWS CLI and then redeploy the container to ECS. The AWS API, Boto3, or the AWS console are not going to help with that task.
How should I authenticate the requests (IAM and AWS Secrets Manager?)
By assigning an IAM role to the ECS task.
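With a task role attached, the AWS CLI and boto3 both pick up credentials automatically from the task's credential endpoint, so nothing needs to be configured in the container. A minimal sketch of the upload (the bucket and paths are hypothetical):

# Runs inside the Fargate container; credentials come from the ECS task role.
aws s3 cp /tmp/generated/report.csv s3://my-output-bucket/reports/report.csv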

How can a CodeBuild container run aws-cli commands without prior authentication?

Say I use the aws-cli locally on my machine; I'd need to authenticate with credentials prior to any operation.
How do AWS services give permission to other services on my behalf? More specifically, how does a container run aws-cli on my behalf without prior authentication?
I am asking this after running my first pipeline successfully in CodePipeline. My buildspec.yml runs the aws s3 sync command flawlessly, which made me wonder how AWS permissions work internally.
AWS CodeBuild uses an IAM Service Role to provide AWS permissions to the CodeBuild environment. You should have had to create a service role for your CodeBuild configuration.
When the AWS CLI runs without previously configured API access keys, it checks whether it is running in an AWS environment such as EC2 or Lambda and, if so, uses the IAM role assigned to that runtime environment.
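To see which identity the CLI resolved (a handy check that needs no assumptions beyond having the CLI available), run this inside the build:

# Prints the account and the assumed CodeBuild service role ARN;
# no access keys are configured anywhere in the build environment.
aws sts get-caller-identity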

"no basic auth credentials" when trying to pull an image from a private ECR

I have the following line somewhere in the middle of my Dockerfile to retrieve an image from my private ECR.
FROM **********.dkr.ecr.ap-southeast-1.amazonaws.com/prod/*************:ff03401
This is the error that I get in AWS Codebuild when trying to build this:
Step 21/36 : FROM **********.dkr.ecr.ap-southeast-1.amazonaws.com/prod/*************:ff03401
Get https://**********.dkr.ecr.ap-southeast-1.amazonaws.com/prod/*************/manifests/ff03401: no basic auth credentials
How can one provide these credentials in the most secure way, and in a way that can also be terraformed?
There are multiple ways to do it.
Using an AWS access key and secret key: you set the AWS credentials on the EC2 machine and run the ECR login command, aws ecr get-login --no-include-email --registry-ids <some-id> --region eu-west-1, and then docker pull should work. But this is not a recommended, secure way.
What I prefer is using AWS IAM roles.
Assuming you want to pull this image on an EC2 machine that was brought up using Terraform:
Create an IAM role manually or using the Terraform IAM role resource.
For the contents of the IAM policy, refer to this.
While bringing up the EC2 instance with the Terraform instance resource, make use of the iam_instance_profile attribute; its value should be the name of the instance profile for the IAM role you created.
This should be enough to automatically pull Docker images from ECR in a secure way, as in the sketch below.
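For illustration, a minimal sketch of the role-based pull on an instance with the profile attached (registry ID, region, repository, and tag are placeholders; AWS CLI v1 syntax, matching the get-login command above):

# No stored keys: the CLI resolves credentials from the instance profile.
$(aws ecr get-login --no-include-email --registry-ids <some-id> --region eu-west-1)
docker pull <account-id>.dkr.ecr.<region>.amazonaws.com/<repository>:<tag>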
Hope this helps.

How can I inject an artifact from AWS S3 into a Docker image?

I need to prepare a Docker image with an embedded JAR file and push it to ECR. The JAR file is stored in an S3 bucket. How can I inject the JAR into the image without explicitly storing AWS access keys in the image?
Maybe I can use the AWS CLI, or does another way exist?
Also, it is not recommended to make my S3 bucket public or to set access keys via environment variables when executing docker run.
You can define an AWS IAM role and attach it to EC2 instances, so any instance that needs to run this docker build command can do so as long as it has the IAM role attached. You can do this from the AWS Console. This solves the problem of putting AWS credentials on the instance itself.
You will still need to install the AWS CLI in your Dockerfile. Once the IAM role is attached, you don't have to worry about credentials.
Recommended docs:
IAM Roles for Amazon EC2
Here's an official blog post tutorial on how to do this:
Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
Just make sure you specify in the IAM Role which S3 Buckets you want these instances to have access to.
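For concreteness, a minimal sketch under assumptions (the bucket, key, and image names below are hypothetical): you can run aws s3 cp in a RUN step inside the Dockerfile as suggested above, or fetch the JAR on the instance first and COPY it in, which keeps the CLI out of the final image:

# Runs on the EC2 instance with the IAM role attached; no keys are configured.
aws s3 cp s3://my-artifact-bucket/builds/app.jar ./app.jar
# The Dockerfile then only needs: COPY app.jar /app/app.jar
docker build -t my-app .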

How do I change users in the EB CLI?

I deployed an application on EB with my own AWS account, but I need to do the same with another one. I have the user name, access key and secret access key for the AWS account I need to deploy from, but I don't even know how to switch out of my account to do it.
I've been able to sign in to the AWS CLI with those credentials, but I'm having trouble using the AWS Elastic Beanstalk CLI, so help deploying my application through that would be helpful as well.
Thanks!
The AWS CLI credentials are set in the credentials file and can be overridden with environment variables.
To create different profiles, you can use the built-in config tool:
aws configure --profile user2
Then, when you use aws to call elasticbeanstalk, you can specify this new profile:
aws --profile user2 elasticbeanstalk ...blah...blah...blah...
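The eb CLI honors the same profiles (a sketch; user2 matches the profile configured above), either per command or for the whole shell session:

# Use the second account's credentials for a single EB CLI command.
eb deploy --profile user2
# Or export it for the whole shell session.
export AWS_PROFILE=user2
eb status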