Amazon container registry login - amazon-web-services

Since we use AWS for a number of other projects, when it came time to publish private docker images in a repository, I really wanted to use Amazon Elastic Container Registry.
However, the login process seems overly complicated.
Is it correct that the only way to log in to ECR is to use the AWS command line tools to generate a 12-hour token and use that with the docker login command?
Any advice on scripting this process without AWS tools?

You must use AWS tools to generate a temporary authorization token for the Docker CLI, since the Docker CLI does not support the standard AWS authentication methods. Quoting the explanation from the official AWS ECR authentication documentation:
Because the Docker CLI does not support the standard AWS authentication methods, you must authenticate your Docker client another way so that Amazon ECR knows who is requesting to push or pull an image. If you are using the Docker CLI, then use the docker login command to authenticate to an Amazon ECR registry with an authorization token that is provided by Amazon ECR and is valid for 12 hours. The GetAuthorizationToken API operation provides a base64-encoded authorization token that contains a user name (AWS) and a password that you can decode and use in a docker login command. However, a much simpler get-login command (which retrieves the token, decodes it, and converts it to a docker login command for you) is available in the AWS CLI.
Note that although you must use AWS tools to generate the authentication token, the AWS CLI is not the only option. You can call GetAuthorizationToken from whichever of the AWS tools or SDKs is most convenient to use from your scripts.
The get-login command, by contrast, is available only in the AWS CLI. As quoted above, it is meant to be the simpler way to perform the authentication, since it retrieves the token, decodes it, and builds the docker login command for you.
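For illustration, here is a minimal shell sketch of the raw flow. The GetAuthorizationToken call is shown here through the AWS CLI, but the same API operation is exposed by every AWS SDK; the region and registry address are placeholders:
# Retrieve the base64-encoded authorization token via the GetAuthorizationToken API.
TOKEN=$(aws ecr get-authorization-token \
  --region <region> \
  --output text \
  --query 'authorizationData[0].authorizationToken')
# The decoded token has the form "AWS:<password>"; strip the user name
# and feed the password to docker login on stdin.
echo "$TOKEN" | base64 -d | cut -d: -f2 | \
  docker login --username AWS --password-stdin <account-id>.dkr.ecr.<region>.amazonaws.com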

Related

Jenkins docker agent template with registry authentication using AWS credentials

I'm configuring a Docker agent template in Jenkins and specifying a Docker image in an ECR. When I select the "registry authentication" credentials dropdown, I can only see username/password credentials, despite having other things like AWS credentials configured in Jenkins. The credentials have global scope, and I'm using them in another pipeline with the withCredentials(aws(...)) binding plugin syntax.
I tried creating a username/password credential using the AWS access and secret key, but that doesn't seem to work - likely because it's passing them directly as user/pass rather than using aws ecr get-login-password.
Is there any way to do this from the agent template configuration? I know I can specify the Docker agent more explicitly in the Jenkinsfile, but I'm trying to stay consistent with other agents we use.

Gitlab CI/CD deploy to aws via aws-azure-cli authentication

When deploying to AWS from a gitlab-ci.yml file, you usually use aws-cli commands as scripts. At my current workplace, before I can use the aws-cli normally, I have to log in via aws-azure-cli and authenticate via 2FA; my workstation is then given a secret key that expires after 8 hours.
GitLab has CI/CD variables where I would usually put the AWS_ACCESS_KEY and AWS_SECRET_KEY, but I can't create an IAM role to get these. So I can't use aws-cli commands in the script, which means I can't deploy.
Is there any way to authenticate GitLab other than this? I can reach out to our cloud services team, but that will take a week.
You can configure OpenID Connect to retrieve temporary credentials from AWS without needing to store secrets.
In my view it is actually a best practice, too, to use OpenID Connect roles instead of storing long-lived credentials.
Add the identity provider for GitLab in AWS
Configure the role and its trust policy
Retrieve temporary credentials in the job (see the sketch below)
Follow the official guide at https://docs.gitlab.com/ee/ci/cloud_services/aws/ or a more detailed walkthrough at https://oblcc.com/blog/configure-openid-connect-for-gitlab-and-aws/
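As a rough sketch of that last step: once the identity provider and role exist, the job script can exchange the job's OIDC token for short-lived credentials. GITLAB_OIDC_TOKEN and ROLE_ARN are assumed names here (defined via GitLab's id_tokens keyword and a CI/CD variable; the linked guides cover the exact setup):
# Exchange the GitLab-issued OIDC token for temporary AWS credentials.
# No stored AWS secrets are involved; the call is authenticated by the token itself.
CREDS=$(aws sts assume-role-with-web-identity \
  --role-arn "$ROLE_ARN" \
  --role-session-name "gitlab-${CI_PIPELINE_ID}" \
  --web-identity-token "$GITLAB_OIDC_TOKEN" \
  --duration-seconds 3600 \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
aws sts get-caller-identity   # sanity check: should report the assumed role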

Calling AWS web services using extracted HTTP auth headers from Web Console session, AWS_ACCESS_KEY_ID not provided, so AWS client not available

I got into a situation working for a client that, as a security measure, does not provide an AWS_ACCESS_KEY_ID in any form. For development we only have the AWS Web Console available, so I started searching for another way to script my dev tasks programmatically (to speed them up).
Note: we cannot use the AWS CLI without an AWS_ACCESS_KEY_ID and secret.
My assumption: if the AWS Web Console can do the same things as the AWS CLI (e.g. create a bucket, load data into a bucket, etc.), why not take the console's auth mechanism (visible in the HTTP request headers) and bind it to the AWS CLI (or some other API-calling code) so that it works even without AWS keys?
Question: is this possible? I can certainly see the following artifacts in the HTTP headers:
aws-session-token
aws-session-id
awsccc
and a dozen others...
My idea is to automate this by:
Going to the web console, logging in, and having a script that automatically outputs the required parameters from the browser session to some text file
Using this extracted information from some dev script
If this is not supported or is impossible to achieve with the AWS CLI, can I use some SDK or raw AWS API calls with the extracted information?
I can extract the SAML content, which contains the above-mentioned aws-creds headers; I also see an OAuth client call with the following parameters:
https://signin.aws.amazon.com/oauth?
client_id=arn%3Aaws%3Asignin%3A%3A%3Aconsole%2Fcanvas&
code_challenge=bJNNw87gBewdsKnMCZU1OIKHB733RmD3p8cuhFoz2aw&
code_challenge_method=SHA-256&
response_type=code&
redirect_uri=https%3A%2F%2Fconsole.aws.amazon.com%2Fconsole%2Fhome%3Ffromtb%3Dtrue%26isauthcode%3Dtrue%26state%3DhashArgsFromTB_us-east-1_c63b804c7d804573&
X-Amz-Security-Token=hidden content&
X-Amz-Date=20211223T054052Z&
X-Amz-Algorithm=AWS4-HMAC-SHA256&
X-Amz-Credential=ASIAVHC3TML26B76NPS4%2F20211223%2Fus-east-1%2Fsignin%2Faws4_request&
X-Amz-SignedHeaders=host&
X-Amz-Signature=3142997fe7212f041ef90c1a87288f53cecca9236098653904bab36b17fa53ef
Can I use it with AWS SDK somehow?
To reset an S3 bucket to a known state, I would suggest looking at the AWS cli s3 sync command and the -delete switch. Create a "template" bucket with your default contents, then sync that bucket into your Dev Bucket to reset your Dev bucket.
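For example (the bucket names are placeholders):
# Overwrite the Dev bucket with the template's contents; --delete removes
# any objects in the Dev bucket that are not present in the template bucket.
aws s3 sync s3://my-template-bucket s3://my-dev-bucket --delete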
As for your key problem, I would look at IAM roles rather than trying to hack the console auth.
As to how to run the AWS CLI, you have several options. It can be run from Lambda, ECS (containers running on your own EC2), or an EC2 instance. All three allow you to attach an IAM role. That role can have policies attached (for your S3 bucket), but there is no key to manage.
Thanks for the feedback, #MisterSmith! It helped with the follow-up.
While analyzing the Chrome traffic from the login page to the AWS console I also found a SAML call, and that led me to this project: https://github.com/Versent/saml2aws#linux
It extracted all the ~/.aws/credentials variables needed for the AWS CLI to work.
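For anyone landing here later, the saml2aws flow looks roughly like this; the IdP details are filled in interactively, and the default profile name "saml" is an assumption worth checking against your own setup:
# One-time interactive setup: IdP type, login URL, username, etc.
saml2aws configure
# Log in via the IdP (handles the SAML/2FA exchange) and write temporary
# credentials to ~/.aws/credentials.
saml2aws login
# Normal AWS CLI calls then work against that profile.
aws s3 ls --profile saml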

In a containerized application that runs in AWS/Azure but needs to access gcloud commands, what is the best way to set up gcloud authentication?

I am very new to GCP and I would greatly appreciate some help here ...
I have a Docker containerized application that runs in AWS/Azure but needs to access the gcloud SDK as well as the Google Cloud client libraries.
What is the best way to set up gcloud authentication from an application that runs outside of GCP?
In my Dockerfile, I have this (cut short for brevity)
ENV CLOUDSDK_INSTALL_DIR /usr/local/gcloud/
RUN curl -sSL https://sdk.cloud.google.com | bash
ENV PATH $PATH:$CLOUDSDK_INSTALL_DIR/google-cloud-sdk/bin
RUN gcloud components install app-engine-java kubectl
This container is currently provisioned from an Azure app service & AWS Fargate. When a new container instance is spawned, we would like it to be gcloud enabled with a service account attached already so our application can deploy stuff on GCP using its deployment manager.
I understand gcloud requires us to run gcloud auth login to authenticate to an account. How can we automate the provisioning of our container if this step has to be manual?
Also, from what I understand, for the cloud client libraries we can store the path to a service account key JSON file in an environment variable (GOOGLE_APPLICATION_CREDENTIALS). So this file either has to be stored inside the Docker image itself or, at the very least, has to be mounted from external storage?
How safe is it to store this service account key file in external storage? What are the best practices around this?
There are two main means of authentication in Google Cloud Platform:
User Accounts: Belong to people; they represent the people involved in your project and are associated with a Google Account.
Service Accounts: Used by an application or an instance.
Learn more about their differences here.
Therefore, you are not required to use the gcloud auth login command to run gcloud commands.
You should be using gcloud auth activate-service-account instead, along with the --key-file=<path-to-key-file> flag, which allows you to authenticate without having to sign in to a Google Account with access to your project every time you need to call an API.
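A minimal sketch of how the container's startup script could do this, assuming the key file is mounted at /secrets/sa-key.json and the service account and project IDs are placeholders:
# Authenticate the gcloud CLI with the mounted service account key.
gcloud auth activate-service-account deployer@my-project.iam.gserviceaccount.com \
  --key-file=/secrets/sa-key.json
gcloud config set project my-project
# Point the Google Cloud client libraries at the same key.
export GOOGLE_APPLICATION_CREDENTIALS=/secrets/sa-key.json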
This key should be stored securely, preferably encrypted in the platform of your choice. Learn how to do that in GCP by following these steps as an example.
Take a look at these useful links for storing secrets in Microsoft Azure and AWS.
On the other hand, you can deploy services to GCP programmatically either by using the Cloud Client Libraries with your programming language of choice, or by using Terraform, which is very intuitive if you prefer it over using the Google Cloud SDK through the CLI.
Hope this helped.

Uploading Docker Images to AWS ECR

I am trying to create a workflow where developers in my organisation can upload Docker images to our AWS ECR. The following commands work:
Step-1: Get Token
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <repo-url>
Step-2: Tag the already built image
docker tag <local-image:tag> <ecr-repo-url>:latest
Step-3: Finally Push
docker push <ecr-repo-url>:latest
Now this works absolutely fine.
However, I am trying to automate the above steps, and I will NOT have the AWS CLI configured on end users' machines, so Step-1 will fail for the end user.
So, two quick queries:
Can I get the token on a remote machine while Step-2 and Step-3 happen on the client?
Can I do all three steps remotely, with a service that uploads the local Docker image to the remote server, which in turn takes care of the tag and push?
I'm hoping that the end user will have Docker installed.
In that case you can make use of the AWS CLI Docker image to obtain the token from ECR.
The token itself is just a temporary password, so it is valid as Docker credentials whether or not it was generated with the AWS CLI on the machine that runs docker login.
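A rough sketch of that approach, assuming AWS credentials are available as environment variables on whichever machine runs it (placeholders as in the commands above):
# Run the official AWS CLI image instead of installing the CLI locally;
# credentials are passed through from the environment.
docker run --rm \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
  amazon/aws-cli ecr get-login-password --region <region> | \
  docker login --username AWS --password-stdin <ecr-repo-url>
Equally, a password generated on a remote machine can simply be handed to the end user and used in the same docker login command within its 12-hour validity window.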
You also have the option of using an AWS SDK such as Boto3, which you could package into a small application to perform this action, although you would need to ensure that the host itself has the relevant programming language configured.
Alternatively if you want this to be automated you could actually look at using a CI/CD pipeline.
GitHub has Actions, Bitbucket has Pipelines, and GitLab has arguably the most CI/CD built into it. These services can perform all of the above actions for you.
As a final suggestion you could use CodeBuild within a CodePipeline to build your image and then tag and deploy it to ECR for you. This will be automated by a trigger and not require any permanent infrastructure.
More information about this option is available in the Build a Continuous Delivery Pipeline for Your Container Images with Amazon ECR as Source article.