How do you push updated container images to AWS ECR? - amazon-web-services

So my understanding of containers and the associated terminology is tenuous at best, so if I say anything completely off please correct me.
I have a Lambda function that I am publishing as a container image. I successfully pushed my container and ran my Lambda code, which had a bug (side note: is there really no other way to test minor code changes than to rebuild and push the entire container every time? I'm not using SAM, which is a decision I regret and am stuck with).
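On the side note: for iterating locally without deploying every change, the official AWS Lambda base images ship with the Runtime Interface Emulator, so (assuming the image is built on one of those bases) the function can be invoked locally:

```shell
# Run the container locally; the Runtime Interface Emulator in the AWS
# base images listens on port 8080 inside the container.
docker run -p 9000:8080 CONTAINER-NAME:latest

# In another terminal, send a test event to the emulated Lambda endpoint:
curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
```

This still means rebuilding the image for each fix, but it skips the push/deploy round trip while debugging the handler.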
I went to fix the bug, and the absolute only thing I can find is the original commands:
docker build -t CONTAINER-NAME .
aws ecr create-repository --repository-name CONTAINER-NAME --image-scanning-configuration scanOnPush=true
docker tag CONTAINER-NAME:latest 1234.dkr.ecr.us-west-2.amazonaws.com/CONTAINER-NAME:latest
aws ecr get-login-password | docker login --username AWS --password-stdin 1234.dkr.ecr.us-west-2.amazonaws.com
docker push 1234.dkr.ecr.us-west-2.amazonaws.com/CONTAINER-NAME:latest
These worked fine the first time around, but now they run without error and yet do not update my image. I suspect it has to do with the tags, which the documentation is very unclear about: are they just tags like other AWS resources', or something more? I've experimented with different combinations of changing tags, which generally leads to errors about a given tag or repo not existing.

docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
You can try these commands to create a new image with a different tag; I use the date command to generate a unique tag for each build. If you specify the latest tag and the running environment already has an image tagged latest, it won't pull the new one.
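A sketch of that flow for the Lambda case above (the registry URI and function name are placeholders from the question; the key point is that the function must also be pointed at the new image URI, e.g. with aws lambda update-function-code):

```shell
# Generate a unique tag per build, e.g. from the date, so nothing is ever "up to date"
TAG=$(date +%Y%m%d%H%M%S)
REPO=1234.dkr.ecr.us-west-2.amazonaws.com/CONTAINER-NAME

docker build -t CONTAINER-NAME:$TAG .
docker tag CONTAINER-NAME:$TAG $REPO:$TAG
docker push $REPO:$TAG

# Point the Lambda function at the freshly pushed image URI
aws lambda update-function-code --function-name MY-FUNCTION --image-uri $REPO:$TAG
```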

Related

Created a pipeline using AWS copilot, original push worked but when I make changes to code and push them to github they don't show up

would appreciate any help with this:
I've followed the guide for AWS copilot here: https://aws.github.io/copilot-cli/docs/getting-started/first-app-tutorial/ and then the guide for creating a pipeline and connecting it to github here: https://aws.github.io/copilot-cli/docs/concepts/pipelines/. That all appears to have worked and I can view the react app I'm working on at the url indicated in aws.
My problem is that when I make changes to my code and then push it to the tracked github branch, the changes don't appear when viewing the app at the url. However, when I make the push to github, the pipeline does register that a change has occurred. It indicates that a change has been made and goes through the flow of creating a new build. But whatever I try, the changes don't seem to actually show up.
I assume that I'm missing something simple here, and that for some reason docker is building the app based on the original code. But I can't figure out why that would be. Maybe something is weird with my Dockerfile?
My docker file looks like this:
FROM node:16.14
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json ./
COPY package-lock.json ./
RUN npm i
COPY . ./
CMD ["npm", "run", "server"]
My understanding of how this should work, is that I push up new code to github, that is sent to the aws pipeline and a new image is generated based on that code, which is then used to create a container that is hosted on ECS. But clearly I am missing something.
copilot deploy does work. I'm unsure if:
1. the problem is that my pipeline is successfully building (as it does not throw an error in the console) and then just not hosting it at the same url as copilot deploy, or
2. the pipeline is hitting an error that just doesn't show up in the pipeline console. Digging into the logs I find this:
echo "Cloudformation stack and config files were not generated. Please check build logs to see if there was a manifest validation error." 1>&2;
Which seems to point towards the second option. Any suggestions on how to resolve whatever is going on in the container, if that is the problem?
The error suggests that I check build logs but these are the build logs. Are there more granular build logs I can examine?
When running containers in ECS, unless your container is already crashing because of an error, it often won't pick up code changes from your new image unless you force a new deployment. You can do this from the command line using the AWS CLI with the following:
aws ecs update-service --cluster <cluster_name> --service <service_name> --force-new-deployment --profile <aws_profile_name>
Note that the profile is optional if you're using your default aws cli configuration profile.

Gitlab CI doesn't pull updated docker image from cr to EC2 - [image] is up-to-date

I have followed this guide
to create a CI/CD pipeline with Gitlab, Docker and AWS EC2. I managed to make it work with some modifications, but now the problem I have is the following:
The first time I push my code the pipeline works ok, images are stored on Gitlab's container registry, they are pulled and deployed on my EC2.
If I push a new commit to my repo, when pulling images from the container registry with the deploy.sh script I get this response: [MY_IMAGE] is up-to-date, when in reality it is not. In fact, if I run the command manually on the AWS machine, it detects the new images and pulls them.
I tried tagging images with $CI_COMMIT_SHA, but no luck.
Any way I can make this to work properly?
I figured it out: I didn't set the right env variables to use in the setup_env.sh. Now everything is ok.

Docker Image tagging in ECR

I am pushing docker image in AWS ECR using Jenkins.
While pushing the image I am providing tag as $Build_Number. So in ECR repo I have images with tags like 1,2,3,4.
But when I try to pull the image from EC2 with the below command from a Jenkins job
docker pull 944XXX.dkr.ecr.us-east-1.amazonaws.com/repository1:latest
I get an error, as there is no image with the tag latest.
Here I want to pull the latest image (with tag 4). I cannot hard-code the tag number here, as the docker pull command will run automatically from the Jenkins job. So in what way can I pull the latest image?
I believe that the correct approach here would be to push the same image twice with different tags. One push would include the image with no tag and then the second push would be the same image after you have tagged it.
Note that you don't have to build the image twice. You only need to issue the docker push twice.
ECR is "smart" enough to recognise that the image digest did not change, and it will not actually upload the image twice. On the second push only the tag will be sent to ECR.
Now that you have an untagged version and a tagged version, you can pull the image without the tag specification and you will get the :latest image. Here is a reference to the AWS docs where they mention that the :latest tag will be added if no tag was sent by the user.
The flow would look something like this:
# Build the image
docker build -f ./Dockerfile -t my-web-app .
# Push the untagged image (will become the ":latest")
docker push my-web-app
# Tag the image with your build_number
docker tag my-web-app my-web-app:build_number
# Push the tagged image
docker push my-web-app:build_number
You will now be able to:
docker pull my-web-app:build_number
docker pull my-web-app
Which will result in 2 identical images with only the tag differentiating between them.
One solution is suggested by @Lix that you can try. Or, if you are only interested in the most recently pushed image, regardless of its tag, you can get the latest image via the AWS CLI.
So your Jenkins job command will be
TAG=$(aws ecr describe-images --output json --repository-name repository1 --query 'sort_by(imageDetails,& imagePushedAt)[-1].imageTags[0]' | jq . --raw-output)
docker pull 944XXX.dkr.ecr.us-east-1.amazonaws.com/repository1:$TAG
If you want a latest tag on ECR, you need to add it and push it there when you build the image. You can use the -t to docker build multiple times (see https://docs.docker.com/v17.09/engine/reference/commandline/build/); just make sure to push them all.
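For example, with the repository URI from the question and Jenkins' built-in BUILD_NUMBER variable, the multi-tag build could look like:

```shell
REPO=944XXX.dkr.ecr.us-east-1.amazonaws.com/repository1

# One build, two tags; the image layers are only uploaded once on push
docker build -t $REPO:latest -t $REPO:$BUILD_NUMBER .
docker push $REPO:latest
docker push $REPO:$BUILD_NUMBER
```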

AWS Deploy ECS with Updated Image

It appears that one must provide a new full task definition for each service update, even though most new deployments consist exclusively of an update to one of the underlying docker images.
While this is understandable as a core architectural choice, it is quite cumbersome. Is there a command-line option that makes this easier, given that the full JSON spec for task definitions is quite complex?
Right now developers need to provide complex scripts and deployment orchestrations to achieve this relatively routine task in their CI/CD processes.
I see attempts at this Here and Here. These solutions do not appear to work in all cases (for example, for Fargate launches).
I know that this problem is made easier if the updated image re-uses the same tag, but in dev cultures that value reproducibility and auditability that is simply not a reasonable option.
Is there no other option than to leverage both the AWS API and JSON manipulation libraries?
EDIT: It appears this project does a fairly good job: https://github.com/fabfuel/ecs-deploy
I found a few approaches:
1. As mentioned in my comment, use the ecs-deploy script per the Github link.
2. Create a task definition via the --generate-cli-skeleton option on the awscli.
   - Fill out all details except for execution-role-arn, task-role-arn and image; these cannot be filled out because they change per commit or per environment you want to deploy to.
   - Commit this skeleton to git, so it is part of your workspace on the CI.
   - Then, at build time on the CI, use a JSON traversing/parsing library or utility such as jq (https://jqplay.org/) to replace the roleArn and image name.
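A sketch of that substitution step with jq (the skeleton filename and the environment variable names here are made up for illustration):

```shell
# Substitute the per-build values into the committed skeleton at CI time
jq --arg image "$IMAGE_URI" \
   --arg exec_role "$EXECUTION_ROLE_ARN" \
   --arg task_role "$TASK_ROLE_ARN" \
   '.containerDefinitions[0].image = $image
    | .executionRoleArn = $exec_role
    | .taskRoleArn = $task_role' \
   taskdef-skeleton.json > taskdef.json

# Register the filled-in definition
aws ecs register-task-definition --cli-input-json file://taskdef.json
```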
Use https://github.com/fabfuel/ecs-deploy.
If you want to update only the tag of an existing task:
ecs deploy <CLUSTER NAME> <SERVICE NAME> --region <REGION NAME> --tag <NEW TAG>
e.g. ecs deploy default web-service --region us-east-1 --tag v2.0
In your CI/CD you can use the git hash:
git rev-parse HEAD will return a hash like d63c16cd4d0c9a30524c682fe4e7d417faae98c9
docker build -t image-name:$(git rev-parse HEAD) .
docker push image-name:$(git rev-parse HEAD)
And use the same tag on task:
ecs deploy default web-service --region us-east-1 --tag $(git rev-parse HEAD)

How to delete AWS ECR repositories which contain images using Ansible

I want to delete an AWS ECR repository using Ansible.
My Ansible version is 2.4.1.0 and it "should" support this, as you can look up here: http://docs.ansible.com/ansible/latest/ecs_ecr_module
However it doesn't work as intended because my repository still contains docker images.
Here's the code snippet:
- name: destroy-ecr-repos
  ecs_ecr: name=jenkins-app state=absent
The resulting error message is:
...
The error was: RepositoryNotEmptyException: An error occurred (RepositoryNotEmptyException) when calling the DeleteRepository operation: The repository with name 'jenkins-app' in registry with id 'xyz' cannot be deleted because it still contains images
...
In the AWS Console it works perfectly fine. There's just a warning text which reminds you that there are still images left in the repository. But you're still able to force the deletion.
And now my question(s):
Is it somehow possible to force the deletion of the repository including its images?
... OR ...
Can I delete them with another tool separately before deleting the repository?
Maybe there simply is no implementation from the ansible side and I have to use the 'shell' module instead (and maybe open a feature request for that).
I'm very grateful for any advise.
First things first: thanks to @vikas027.
The solution is from their answer:
https://docs.aws.amazon.com/cli/latest/reference/ecr/delete-repository.html#examples
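In short, the AWS CLI's --force flag deletes a repository even when it still contains images:

```shell
aws ecr delete-repository --repository-name jenkins-app --force
```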
History:
Ok, now I figured out that there currently is no ansible functionality which supports the implicit deletion of images when deleting repositories in ECR.
BUT
I've implemented a workaround that despite its ugliness works for me.
I simply delete the remaining images via the shell module using the aws cli before actually removing the ECR repo.
Here's the snippet to do so:
- name: Delete remaining images in our repositories
  shell: |
    aws ecr list-images --repository-name jenkins-app --query 'imageIds[*].imageDigest' --output text | tr '\t' '\n' | while read imageDigest; do
      aws ecr batch-delete-image --repository-name jenkins-app --image-ids imageDigest=$imageDigest
    done

- name: destroy-ecr-repo jenkins-app
  ecs_ecr: name=jenkins-app state=absent
Hope that helps someone who faces this issue before ansible implements a possibility to delete images via built-in module.