I was trying to integrate Kubernetes with AWS Elastic Container Registry (ECR). From what I have read, this should be set up automatically if the cluster is deployed to AWS, which mine is.
I also granted the IAM roles the necessary permissions to pull from ECR, but I still get an "unable to pull image" error when trying to deploy on Kubernetes. It also says authentication failed.
Really just wanted to see if anyone else is having issues, or if someone was able to pull an image from AWS ECR and how they accomplished it.
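For reference, assuming the cluster nodes run on EC2, the kubelet authenticates to ECR with the node's instance role, so that role is the one that needs the pull permissions. A minimal sketch of those permissions (roughly a subset of the managed AmazonEC2ContainerRegistryReadOnly policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}
```

If these are attached to a role other than the one the nodes actually use, the symptom is exactly an authentication failure on image pull.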
I am getting the below error.
Does anyone have any idea how to solve it?
Failed to create pipeline job. Error: Vertex AI Service Agent
'XXXXX@gcp-sa-aiplatform-cc.iam.gserviceaccount.com' should be granted
access to the image gcr.io/gcp-project-id/application:latest
{PROJECT_NUMBER}@gcp-sa-aiplatform-cc.iam.gserviceaccount.com is Google's AI Platform service agent.
This service agent requires access to read/pull the Docker image from your project's GCR in order to create the container for the pipeline run.
If you have permission to edit IAM roles, you can try adding Artifact Registry roles to the above service agent.
You can start by adding roles/artifactregistry.reader.
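As a sketch, the role can be granted with gcloud; the project ID and number below are placeholders, and the service-agent address follows the form reported in the error message:

```shell
PROJECT_ID="gcp-project-id"     # hypothetical project ID
PROJECT_NUMBER="123456789012"   # hypothetical project number
# Service agent address in the form reported by the error message.
SERVICE_AGENT="${PROJECT_NUMBER}@gcp-sa-aiplatform-cc.iam.gserviceaccount.com"

# Guarded so the sketch is a no-op where gcloud is not installed.
if command -v gcloud >/dev/null; then
  gcloud projects add-iam-policy-binding "$PROJECT_ID" \
    --member="serviceAccount:${SERVICE_AGENT}" \
    --role="roles/artifactregistry.reader"
fi
```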
Hope this helps :)
This error may have occurred due to missing roles or permissions for pulling and pushing images to Container Registry. All users and service accounts that interact with Container Registry must be given the appropriate Cloud Storage permissions. You can grant roles/storage.objectViewer, roles/storage.legacyBucketWriter, or roles/storage.admin to your service account so it can access the image in Container Registry. You can follow this doc for giving the appropriate roles and permissions to the service account.
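If you go the Cloud Storage route, Container Registry keeps image layers in a per-project bucket named artifacts.PROJECT_ID.appspot.com (regional hosts such as eu.gcr.io use a prefixed bucket name). A sketch of granting read access with gsutil; the service account and project are placeholders:

```shell
PROJECT_ID="gcp-project-id"                                    # hypothetical
SA="my-service-account@${PROJECT_ID}.iam.gserviceaccount.com"  # hypothetical SA
BUCKET="gs://artifacts.${PROJECT_ID}.appspot.com"              # GCR's backing bucket

# Guarded so the sketch is a no-op where gsutil is not installed.
if command -v gsutil >/dev/null; then
  gsutil iam ch "serviceAccount:${SA}:roles/storage.objectViewer" "$BUCKET"
fi
```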
I am trying to deploy an ECR container to ECS Fargate in a cross account.
There are two accounts:
Tooling Account
Development Account
The Tooling Account is where AWS CodePipeline builds the images and stores the images in ECR.
The Development Account is where ECS runs the image in Fargate.
I have the AWS CodePipeline building the image and storing it in ECR on the Tooling Account.
Now I cannot find any documentation anywhere on how I go about deploying the image in the Development Account.
I have thought of a few options:
Create an AWS CodeDeploy deployment in the Tooling Account that deploys the image to Fargate in the Development Account. Issue: I don't know how AWS CodeDeploy in the Tooling Account can trigger a deployment in another account.
Create an AWS CodeDeploy deployment in the Development Account which can deploy to Fargate. Issue: I don't know how to trigger AWS CodeDeploy in the Development Account from the Tooling Account.
There is lots of documentation out there on how to run containers on EC2 or Lambda, but not much on Fargate, and especially not on cross-account Fargate deployment.
I would like to run all of this from CloudFormation, but I can build that once I have it working via the console first.
What is the best solution for achieving this?
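Whichever triggering mechanism ends up being used, the Development Account's Fargate tasks can only pull the image if the Tooling Account's ECR repository policy allows it. A hedged sketch of such a repository policy; the account ID is a placeholder for the Development Account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowDevelopmentAccountPull",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:root" },
      "Action": [
        "ecr:GetDownloadUrlForLayer",
        "ecr:BatchGetImage",
        "ecr:BatchCheckLayerAvailability"
      ]
    }
  ]
}
```

The ECS task execution role in the Development Account also needs the usual ECR pull permissions on its side; without both halves, the cross-account pull fails regardless of how the deployment is triggered.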
I am trying to create a workflow where developers in my organisation can upload Docker images to our AWS ECR. The following commands work:
Step-1: Get Token
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <repo-url>
Step-2: Tag the already built image
docker tag <local-image:tag> <ecr-repo-url>:latest
Step-3: Finally Push
docker push <ecr-repo-url>:latest
Now this works absolutely fine.
However, as I am trying to automate the above steps, I will NOT have the AWS CLI configured on the end user's machine, so Step-1 will fail for the end user.
So two quick queries:
Can I get the token from a remote machine and have Step-2 and Step-3 happen from the client?
Can I do all three steps remotely, with a service that uploads the local Docker image to the remote server, which in turn takes care of the tag and push?
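For the first option: the token is just a temporary password (valid for 12 hours), so a service on a machine that has AWS credentials can generate it and hand it to the client, which then logs in and pushes with plain docker. A sketch under that assumption; region, repository URL, and image name are placeholders:

```shell
# --- On the remote machine (has AWS credentials); values are hypothetical ---
REGION="us-east-1"
REPO="123456789012.dkr.ecr.us-east-1.amazonaws.com"

# Guarded so the sketch only calls AWS when the CLI and credentials are present.
if command -v aws >/dev/null && [ -n "${AWS_ACCESS_KEY_ID:-}" ]; then
  TOKEN="$(aws ecr get-login-password --region "$REGION")"
  # ...deliver $TOKEN to the client over a secure channel; it expires after 12 hours...
fi

# --- On the client (only docker is required) ---
if [ -n "${TOKEN:-}" ] && command -v docker >/dev/null; then
  printf '%s' "$TOKEN" | docker login --username AWS --password-stdin "$REPO"
  docker tag my-app:latest "$REPO/my-app:latest"   # "my-app" is a placeholder image
  docker push "$REPO/my-app:latest"
fi
```

The second option (tag and push entirely server-side) also works, but then the client has to ship the whole image to the server first, e.g. via docker save/docker load, which is usually slower than pushing layers directly.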
I'm hoping that the end-user will have Docker installed. In that case you can make use of the AWS CLI Docker image to obtain the token from ECR.
The token itself is just a temporary password, so whether you run the AWS CLI on the remote server or not, it will be valid as Docker credentials.
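A sketch of that approach using the official amazon/aws-cli image, so the client needs Docker but not the AWS CLI; region and repository URL are placeholders, and credentials are assumed to be in the environment:

```shell
REGION="us-east-1"                                    # hypothetical region
REPO="123456789012.dkr.ecr.us-east-1.amazonaws.com"   # hypothetical repo URL

# Guarded so the sketch only runs where docker and AWS credentials are available.
if command -v docker >/dev/null && [ -n "${AWS_ACCESS_KEY_ID:-}" ]; then
  docker run --rm \
    -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY -e AWS_SESSION_TOKEN \
    amazon/aws-cli ecr get-login-password --region "$REGION" \
    | docker login --username AWS --password-stdin "$REPO"
fi
```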
You also have the option of using an AWS SDK such as Boto3, which you could package with a small application to perform this action, although you would need to ensure that the host has the relevant language runtime configured.
Alternatively, if you want this to be automated, you could look at using a CI/CD pipeline.
GitHub has Actions, Bitbucket has Pipelines, and GitLab arguably has the most CI/CD functionality built in. These services would perform all of the above actions for you.
As a final suggestion, you could use CodeBuild within a CodePipeline to build your image and then tag and push it to ECR for you. This can be triggered automatically and requires no permanent infrastructure.
More information about this option is available in the Build a Continuous Delivery Pipeline for Your Container Images with Amazon ECR as Source article.
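The CodeBuild side of such a pipeline is driven by a buildspec. A minimal sketch, assuming the repository URI is exposed to the build as a REPO_URI environment variable (AWS_DEFAULT_REGION is provided by CodeBuild):

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate the Docker client against ECR.
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $REPO_URI
  build:
    commands:
      - docker build -t $REPO_URI:latest .
  post_build:
    commands:
      - docker push $REPO_URI:latest
```

The CodeBuild project needs privileged mode enabled to run Docker, and its service role needs the ECR push permissions.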
I have gone through the documents and couldn't find a solution for this.
I have two accounts, dev and prod. My Amplify app exists in dev but the CodeCommit repository exists in prod. Is there any way to connect them?
I have configured assume-role and have also tried using temporary credentials in a different profile, connecting with:
aws amplify create-app --name app-name-in-dev --repository repo-in-prod
aws amplify create-app --name app-name-in-dev --repository repo-in-prod --iam-service-role-arn arn:aws:sts::prod:assumed-role/CrossAccountRepositoryContributorRole/cross-account
The problem remains the same. It seems impossible to connect Amplify with CodeCommit unless the repository and the Amplify app exist in the same account.
Is there any way to achieve this, or is it really not configurable?
references:
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_cross-account-with-roles.html
https://docs.aws.amazon.com/cli/latest/reference/sts/assume-role.html
https://forums.aws.amazon.com/thread.jspa?threadID=300224
In case anyone comes looking for the same:
After creating a ticket with AWS, I received a response that this is not currently possible, as Amplify is still a newer service and only allows repositories from the same account.
I have tried setting this up at my end and observed the same. I was able to connect only to repositories in the same account. I did further research and can confirm that currently we cannot integrate a cross-account CodeCommit repository with Amplify applications.
I have everything set up and working with rolling deploys and I am able to do git aws.push, but how do I add an authorized key to the EB server so my CI server can deploy as well?
Since you are using Shippable, I found this guide on Continuous Delivery using Shippable and Amazon Elastic Beanstalk that shows how to set it up on their end. Specifically, step 3 is what you are looking for.
It doesn't look like you need an authorized key; instead, you just need to provide an AWS Access Key ID and AWS Secret Access Key that allow Shippable to make API calls on your behalf. To do this, I recommend creating an IAM user specifically for Shippable. That way you can revoke it if you ever need to, and give it only the permissions it needs.
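As a hedged starting point for that user's policy (deliberately broad; tighten the actions and resources once the deploy works, since Elastic Beanstalk deployments also touch S3, EC2, Auto Scaling, and CloudFormation under the hood):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "elasticbeanstalk:*",
        "s3:*",
        "ec2:Describe*",
        "autoscaling:Describe*",
        "cloudformation:*"
      ],
      "Resource": "*"
    }
  ]
}
```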