I was able to set up integration between GitHub and AWS CodePipeline, so now my code is uploaded to S3 by a Lambda function after a push event. That works very well.
A new ZIP with the source code on S3 triggers a pipeline, which builds the code. That's fine. Now I'd also like to build a Docker image for the project.
The first problem is that you can't mix a project (Node.js) build and a Docker build. That's fine, it makes sense. The next issue is that you can't have another buildspec.yml for the Docker build. You have to specify the build commands manually; OK, that works as a workaround.
The biggest problem, though, or a gap in my understanding, is how to make the Docker build part of the pipeline. The first build step builds the project, then the next build step builds the Docker image. Two standalone AWS CodeBuild projects.
The thing is that a pipeline build step has to produce an output artifact, but a Docker build doesn't produce any files, and it looks like the final docker push after docker build doesn't qualify as an artifact for the pipeline service.
Is there a way to do it?
Thanks
A bit late, but hopefully this will be helpful for someone. You should publish the Docker image as part of your post_build phase commands. Here's an example buildspec.yml:
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region $AWS_REGION)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE .
      - "docker tag $IMAGE $REPO/$IMAGE:${CODEBUILD_BUILD_ID##*:}"
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - "docker push $REPO/$IMAGE:${CODEBUILD_BUILD_ID##*:}"
      - "echo {\\\"image\\\":\\\"$REPO/$IMAGE:${CODEBUILD_BUILD_ID##*:}\\\"} > image.json"
artifacts:
  files:
    - 'image.json'
As you can see, the CodeBuild project expects a few parameters (AWS_REGION, REPO, and IMAGE) and publishes the image to Amazon ECR (but you can use a registry of your choice). It also uses the existing CODEBUILD_BUILD_ID environment variable to derive a dynamic value for the image tag. After the image is pushed, it creates a JSON file with the full path to the image and publishes it as an artifact for CodePipeline to use.
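To make the tag extraction concrete: CODEBUILD_BUILD_ID has the form <project-name>:<build-id>, and the ${CODEBUILD_BUILD_ID##*:} expansion strips everything up to the last colon. A quick illustration (the value below is made up; in CodeBuild the variable is set automatically):
# Illustration only - CodeBuild sets this variable for you
CODEBUILD_BUILD_ID="my-project:1a2b3c4d-5678-90ab-cdef-1234567890ab"
echo "${CODEBUILD_BUILD_ID##*:}"   # prints 1a2b3c4d-5678-90ab-cdef-1234567890ab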
For this to work, the CodeBuild project's environment image should be of type "docker" with the "privileged" flag activated. When creating the CodeBuild project in your pipeline, you can also specify the environment variables that are used in the buildspec file above.
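If you define the project in CloudFormation rather than the console, a minimal sketch of that environment block could look like this (the image and variable values are placeholders, not taken from the answer above):
# Fragment of an AWS::CodeBuild::Project resource - only the Environment block
Environment:
  Type: LINUX_CONTAINER
  ComputeType: BUILD_GENERAL1_SMALL
  Image: aws/codebuild/standard:5.0   # any Docker-capable build image
  PrivilegedMode: true                # required to run docker build/push inside CodeBuild
  EnvironmentVariables:
    - Name: AWS_REGION
      Value: eu-west-1                                        # placeholder
    - Name: REPO
      Value: 123456789012.dkr.ecr.eu-west-1.amazonaws.com     # placeholder
    - Name: IMAGE
      Value: my-image                                         # placeholder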
There is a good tutorial on this topic here:
http://queirozf.com/entries/using-aws-codepipeline-to-automatically-build-and-deploy-your-app-stored-on-github-as-a-docker-based-beanstalk-application
Sorry about the inconvenience. Making it less restrictive is on our roadmap. Meanwhile, in order to use a CodeBuild action, you can use a dummy file as the output artifact.
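A minimal sketch of that dummy-file workaround in a buildspec (dummy.txt is an arbitrary name; $REPO and $IMAGE are the same placeholders as in the answer above):
version: 0.2
phases:
  build:
    commands:
      - docker build -t $REPO/$IMAGE .
  post_build:
    commands:
      - docker push $REPO/$IMAGE
      # Placeholder file, only so the CodePipeline action has an output artifact to emit
      - touch dummy.txt
artifacts:
  files:
    - dummy.txt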
I'm new to AWS CodeBuild and have been trying to work out how to run the parts of the build in parallel (or even just use the same buildspec.yml for each project in my solution).
I thought the batch -> build-list was the way to go. From my understanding of the documentation this will run the phases in the buildspec for each item in the build list.
Unfortunately that does not appear to be the case - the batch section appears to be ignored and the buildspec runs the phases once, for the default environment variables held at project level.
My buildspec is
version: 0.2
batch:
  fast-fail: false
  build-list:
    - identifier: getPrintJobNote
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobnote
          FOLDER_NAME: getPrintJobNote
      ignore-failure: false
    - identifier: GetPrintJobFilters
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobfilters
          FOLDER_NAME: GetPrintJobFilters
      ignore-failure: false
phases:
  pre_build:
    commands:
      - echo Logging into Amazon ECR
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Building lambda docker container
      - echo Build path $CODEBUILD_SRC_DIR
      - cd $CODEBUILD_SRC_DIR/src/$FOLDER_NAME
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Pushing to Amazon ECR
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
Is there something wrong with my buildspec, does build-list not do what I think it does, or is there something else that needs to be configured somewhere to enable this?
In the project configuration I found a setting for "enable concurrent build limit - optional". I tried changing this but got an error:
Project-level concurrent build limit cannot exceed the account-level concurrent build limit of 1.
This may not be related but could be because my account is new... I think the default should be 60 anyway.
I had a similar problem; it turned out that batch builds are a separate build type. Go to the project -> Start build with overrides, then select Batch build.
I also split the buildspec file: the first spec has the batch config, and the second one has the "actual" phases, referenced via the buildspec: directive (see the sketch below). I'm not sure if this is required, though.
Also: if builds are webhook-triggered, the webhook also has to be configured to run a batch build.
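A sketch of that split layout, reusing the identifiers from the question (the buildspec paths are assumptions; each build-list entry points at a phases-only spec via the buildspec key):
# Parent buildspec - batch configuration only; set this as the project's buildspec
version: 0.2
batch:
  fast-fail: false
  build-list:
    - identifier: getPrintJobNote
      buildspec: buildspec-phases.yml   # assumed path to the phases-only spec
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobnote
          FOLDER_NAME: getPrintJobNote
    - identifier: GetPrintJobFilters
      buildspec: buildspec-phases.yml
      env:
        variables:
          IMAGE_REPO_NAME: getprintjobfilters
          FOLDER_NAME: GetPrintJobFilters
The child buildspec-phases.yml would keep only the version and phases sections from the question. A batch run can then be started with "Start build with overrides" -> Batch build in the console, or with aws codebuild start-build-batch --project-name <project> from the CLI.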
Cloud Build fails with a timeout error (I'm trying to deploy to Cloud Run with Prophet). Therefore I'm trying to split the Dockerfile into two (saving the image in between in case it fails). I'd split the Dockerfile like this:
Dockerfile_one: python + prophet's dependencies
Dockerfile_two: image_from_Dockerfile_one + prophet + other dependencies
What should cloudbuild.yaml look like to:
1. if there is a previously built image available, skip the step; otherwise run the step with Dockerfile_one and save the image
2. use the image from step (1), add more dependencies to it, and save the image for deployment
Here is what cloudbuild.yaml looks like right now:
steps:
  # create gcr source directory
  - name: 'bash'
    args:
      - '-c'
      - |
        echo 'Creating gcr_source directory for ${_GCR_NAME}'
        mkdir _gcr_source
        cp -r cloudruns/${_GCR_NAME}/. _gcr_source
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}', '.']
    dir: '_gcr_source'
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/${_GCR_NAME}']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: gcloud
    args:
      - run
      - deploy
      - ${_GCR_NAME}
      - --image=gcr.io/$PROJECT_ID/${_GCR_NAME}
Thanks a lot!
You need to have two pipelines.
The first one creates the base image. That way, you can trigger it whenever you need to rebuild this base image, possibly with a different lifecycle than your application's. Something similar to this:
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/<PROJECT_ID>/base-image', '-f', 'DOCKERFILE_ONE', '.']
images: ['gcr.io/<PROJECT_ID>/base-image']
Then, in your second Dockerfile, start from the base image, and use a second Cloud Build pipeline to build, push, and deploy it (as you do in the last 3 steps in your question):
FROM gcr.io/<PROJECT_ID>/base-image
COPY .....
....
...
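A sketch of that second pipeline, mirroring the last three steps from the question and assuming the Dockerfile above is saved as Dockerfile_two (the names and substitutions are illustrative):
steps:
  # Build the application image on top of the pre-built base image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/${_GCR_NAME}', '-f', 'Dockerfile_two', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/${_GCR_NAME}']
  # Deploy the container image to Cloud Run
  - name: 'gcr.io/cloud-builders/gcloud'
    entrypoint: gcloud
    args:
      - run
      - deploy
      - ${_GCR_NAME}
      - --image=gcr.io/$PROJECT_ID/${_GCR_NAME}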
Not the answer, but a workaround: if anybody has the same issue, using Python 3.8 instead of 3.9 worked for Cloud Build.
This is what the Dockerfile looks like:
RUN pip install --upgrade pip wheel setuptools
# Install pystan
RUN pip install "Cython>=0.22"
RUN pip install "numpy>=1.7"
RUN pip install pystan==2.19.1.1
# Install other prophet dependencies
RUN pip install -r requirements.txt
RUN pip install prophet
Though figuring out how to iteratively build images for Cloud Run would be really great.
Why did your Cloud Build fail with a timeout error?
While building images in Docker, it is important to keep the image size down. Often, multiple Dockerfiles are created to handle the image size constraint. In your case, you were not able to reduce the image size and include only what is needed.
What can be done to rectify it?
As per this documentation, multi-stage builds (introduced in Docker 17.05) allow you to build your app in a first "build" container and use the result in another container, while using the same Dockerfile.
You use multiple FROM statements in your Dockerfile. Each FROM instruction can use a different base, and each of them begins a new stage of the build. You can selectively copy artifacts from one stage to another, leaving behind everything you don't want in the final image. To show how this works, follow this link.
You only need a single Dockerfile.
The result is the same tiny production image as before, with a significant reduction in complexity. You don't need to create any intermediate images and you don't need to extract any artifacts to your local system at all.
How does it work?
You can name your build stages. By default, the stages are not named, and you refer to them by their integer number, starting with 0 for the first FROM instruction. However, you can name your stages by adding AS <name> to the FROM instruction.
When you build your image, you don't necessarily need to build the entire Dockerfile including every stage. You can specify a target build stage.
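For example, with the Dockerfile shown further below you could build only its first stage (the tag name is arbitrary):
# Stops after the stage named "builder"; nothing from later stages is built
docker build --target builder -t hello:build-stage .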
When using multi-stage builds, you are not limited to copying from stages you created earlier in your Dockerfile. You can use the COPY --from instruction to copy from a separate image, either using the local image name, a tag available locally or on a Docker registry, or a tag ID.
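For instance, copying a single file straight out of a published image (nginx here is purely illustrative):
# Pulls nginx:latest only to extract one file into the current stage
COPY --from=nginx:latest /etc/nginx/nginx.conf /nginx.conf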
You can pick up where a previous stage left off by referring to it when using the FROM directive.
In the Google documentation, there is an example of a Dockerfile which uses multi-stage builds. The hello binary is built in a first container and injected into a second one. Because the second container is based on scratch, the resulting image contains only the hello binary and not the source and object files needed during the build.
FROM golang:1.10 as builder
WORKDIR /tmp/go
COPY hello.go ./
RUN CGO_ENABLED=0 go build -a -ldflags '-s' -o hello
FROM scratch
CMD [ "/hello" ]
COPY --from=builder /tmp/go/hello /hello
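Building and running it is an ordinary docker build; only the final scratch stage ends up in the tagged image (the hello tag is illustrative):
docker build -t hello .
docker run --rm hello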
Here is a tutorial to understand how multi-stage builds work.
I've started using docker buildx to tag and push multi-platform images to ECR. However, ECR appears to apply the tag to the parent manifest and leave each child manifest untagged. ECR does appear to prevent deletion of the child manifests, but it makes cleaning up orphaned untagged images complicated.
Is there a way to tag these child manifests?
For example, consider this push:
docker buildx build --platform "linux/amd64,linux/arm64" --tag 1234567890.dkr.ecr.eu-west-1.amazonaws.com/my-service/my-image:1.0 --push .
Inspecting the image:
docker buildx imagetools inspect 1234567890.dkr.ecr.eu-west-1.amazonaws.com/my-service/my-image:1.0
Shows:
Name:      1234567890.dkr.ecr.eu-west-1.amazonaws.com/my-service/my-image:1.0
MediaType: application/vnd.docker.distribution.manifest.list.v2+json
Digest:    sha256:4221ad469d6a18abda617a0041fd7c87234ebb1a9f4ee952232a1287de73e12e

Manifests:
  Name:      1234567890.dkr.ecr.eu-west-1.amazonaws.com/my-service/my-image:1.0@sha256:c1b0c04c84b025357052eb513427c8b22606445cbd2840d904613b56fa8283f3
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/amd64

  Name:      1234567890.dkr.ecr.eu-west-1.amazonaws.com/my-service/my-image:1.0@sha256:828414cad2266836d9025e9a6af58d6bf3e6212e2095993070977909ee8aee4b
  MediaType: application/vnd.docker.distribution.manifest.v2+json
  Platform:  linux/arm64
However, ECR shows the two child images as untagged.
I'm running into the same problem. So far my solution seems a little easier than some of the other suggestions, but I still don't like it.
After doing the initial:
docker buildx build --platform "linux/amd64,linux/arm64" --tag 1234567890.dkr.ecr.eu-west-1.amazonaws.com/my-service/my-image:1.0 --push .
I follow up with:
docker buildx build --platform "linux/amd64" --tag 1234567890.dkr.ecr.eu-west-1.amazonaws.com/my-service/my-image:1.0-amd --push .
docker buildx build --platform "linux/arm64" --tag 1234567890.dkr.ecr.eu-west-1.amazonaws.com/my-service/my-image:1.0-arm --push .
This gets me the parallel build speed of building multiple platforms at the same time, and gets me the images tagged in ECR. Thanks to the build cache it is pretty quick; it appears to just push the tags and that is it. In a test I just did, the buildx time for the first command was 0.5 seconds and the second one took 0.7 seconds.
That said, I'm not wild about this solution, and found this question while looking for a better one.
There are several ways to tag the image, but they all involve pushing the platform-specific manifest with the desired tag. With Docker, you can pull the image, retag it, and push it, but the downside is you'll have to pull every layer.
A much faster option is to only transfer the manifest JSON with registry API calls. You could do this with curl, but auth becomes complicated. There are several tools for working directly with registries, including Google's crane, Red Hat's skopeo, and my own regclient. regclient includes the regctl command, which would implement this like:
image=1234567890.dkr.ecr.eu-west-1.amazonaws.com/my-service/my-image
tag=1.0
regctl image copy \
  ${image}@$(regctl image digest --platform linux/amd64 $image:$tag) \
  ${image}:${tag}-linux-amd64
regctl image copy \
  ${image}@$(regctl image digest --platform linux/arm64 $image:$tag) \
  ${image}:${tag}-linux-arm64
You could also script an automated fix to this, listing all tags in the registry, pulling the manifest list for the tags that don't already have the platform, and running the image copy to retag each platform's manifest. But it's probably easier and faster to script your buildx job to include something like regctl after buildx pushes the image.
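A rough sketch of such a script, reusing the $image variable from above and assuming every tag should get both platform suffixes (untested; it would also re-process tags that already carry a suffix, so filter those in real use):
# Retag each platform's manifest for every existing tag in the repository
for tag in $(regctl tag ls $image); do
  for platform in linux/amd64 linux/arm64; do
    digest=$(regctl image digest --platform $platform $image:$tag)
    regctl image copy $image@$digest $image:$tag-${platform/\//-}   # e.g. 1.0-linux-amd64
  done
done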
Note that if you use a credential helper for logging into ECR, regctl can use it with the local command. If you want to run regctl as a container and you are specifically using ecr-login, use the alpine version of the images, since they include the helper binary.
In addition to what Brandon mentioned above about using regctl, here's the command for skopeo if you're looking to use it with the ECR credential helper (https://github.com/awslabs/amazon-ecr-credential-helper):
skopeo copy \
  docker://1234567890.dkr.ecr.us-west-2.amazonaws.com/stackoverflow@sha256:1badbc699ed4a1785295baa110a125b0cdee8d854312fe462d996452b41e7755 \
  docker://1234567890.dkr.ecr.us-west-2.amazonaws.com/stackoverflow:1.0-linux-arm64
https://github.com/containers/skopeo
Paavan Mistry, AWS Containers DA
.. aaaand me again :)
This time with a very interesting problem.
Again an AWS Lambda function, Node.js 12, JavaScript, Ubuntu 18.04 for local development, AWS CLI / AWS SAM / Docker / IntelliJ; everything is working perfectly locally and it's time to deploy.
So I set up an AWS account for tests, created and assigned an access key/secret, and finally tried to deploy.
Almost at the end, an error pops up, aborting the deployment.
I'm showing the SAM CLI version from a terminal, but the same happens with IntelliJ.
(Of course I masked/changed some names.)
From a terminal I go to where I have my local sandbox with the project, and then:
$ sam deploy --guided
Configuring SAM deploy
======================
Looking for config file [samconfig.toml] : Not found
Setting default arguments for 'sam deploy'
=========================================
Stack Name [sam-app]: MyActualProjectName
AWS Region [us-east-1]: us-east-2
#Shows you resources changes to be deployed and require a 'Y' to initiate deploy
Confirm changes before deploy [y/N]: y
#SAM needs permission to be able to create roles to connect to the resources in your template
Allow SAM CLI IAM role creation [Y/n]: y
Save arguments to configuration file [Y/n]: y
SAM configuration file [samconfig.toml]: y
SAM configuration environment [default]:
Looking for resources needed for deployment: Not found.
Creating the required resources...
Successfully created!
Managed S3 bucket: aws-sam-cli-managed-default-samclisourcebucket-7qo1hy7mdu9z
A different default S3 bucket can be set in samconfig.toml
Saved arguments to config file
Running 'sam deploy' for future deployments will use the parameters saved above.
The above parameters can be changed by modifying samconfig.toml
Learn more about samconfig.toml syntax at
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-config.html
Error: Unable to upload artifact MyFunctionName referenced by CodeUri parameter of MyFunctionName resource.
ZIP does not support timestamps before 1980
$
I spent quite some time looking around for this problem but I found only some old threads.
In theory this problem was solved in 2018... but probably some npm libraries I had to use contain something old... how in the world do I fix this stuff?
In one thread I found a kind of workaround.
In the buildspec.yml file, somebody suggested adding, AFTER the npm install:
ls $CODEBUILD_SRC_DIR
find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
Basically the idea is to touch all the files installed by the npm install (anything with a modification time older than about 30 years), but the error still happens.
This is my buildspec.yml file after the modification:
version: 0.2
phases:
  install:
    commands:
      # Install all dependencies (including dependencies for running tests)
      - npm install
      - ls $CODEBUILD_SRC_DIR
      - find $CODEBUILD_SRC_DIR/node_modules -mtime +10950 -exec touch {} \;
  pre_build:
    commands:
      # Discover and run unit tests in the '__tests__' directory
      - npm run test
      # Remove all unit tests to reduce the size of the package that will be ultimately uploaded to Lambda
      - rm -rf ./__tests__
      # Remove all dependencies not needed for the Lambda deployment package (the packages from devDependencies in package.json)
      - npm prune --production
  build:
    commands:
      # Use AWS SAM to package the application by using AWS CloudFormation
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml
artifacts:
  type: zip
  files:
    - template-export.yml
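One blunter variant of the same workaround (an untested sketch) is to reset every file's timestamp in the workspace immediately before packaging, instead of only files older than roughly 30 years:
  build:
    commands:
      # Sketch: normalize all timestamps so the ZIP step never sees pre-1980 files
      - find $CODEBUILD_SRC_DIR -exec touch {} +
      - aws cloudformation package --template template.yml --s3-bucket $S3_BUCKET --output-template template-export.yml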
I will continue to search, but again I wonder if somebody here has had this kind of problem and has some suggestions/methodology about how to solve it.
Many, many thanks!
Steve
I have an Angular client and a Node.js server deployed into one Elastic Beanstalk environment.
The structure is that I put the Angular client files in the 'html' folder, and the proxy is defined in the .ebextensions folder.
-html
-other serverapp folder
-other serverapp folder
-.ebextensions
....
-package.json
-server.js
Every time I do a release, I build the Angular app, put it into the html folder in the Node app, zip it, and upload it to Elastic Beanstalk.
Now I want to move on to CI/CD. Basically I want to automate the above step: use two sources (the Angular and Node apps), do the Angular build, put it into the html folder of the Node app, and generate only one output artifact.
I've got to the stage where a separate pipeline works for each app. I'm not very familiar with AWS yet; I just have a vague idea that I might need to use AWS Lambda.
Any help would be really appreciated.
The output artifact your CodeBuild job creates can be thought of as a directory location that you ask CodeBuild to zip up as the artifact. You can use regular UNIX commands to manipulate this directory before the artifacts are packaged. The following buildspec.yml is an example:
version: 0.2
phases:
  build:
    commands:
      # build commands
      #- command
  post_build:
    commands:
      - mkdir /tmp/html
      - cp -R ./ /tmp/html
artifacts:
  files:
    - '**/*'
  base-directory: /tmp/html
Buildspec reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
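For the two-source case in the question, a rough sketch could combine both inputs in one CodeBuild action; when CodePipeline passes a secondary input artifact, CodeBuild exposes it via CODEBUILD_SRC_DIR_<ArtifactName>. The AngularSource artifact name and the dist/ output path below are assumptions, not taken from the question:
version: 0.2
phases:
  build:
    commands:
      # Build the Angular client from the secondary source artifact (name 'AngularSource' is hypothetical)
      - cd $CODEBUILD_SRC_DIR_AngularSource && npm install && npm run build
      # Copy the Angular build output (assumed to land in dist/) into the Node app's html folder
      - cp -R $CODEBUILD_SRC_DIR_AngularSource/dist/. $CODEBUILD_SRC_DIR/html/
artifacts:
  # The primary source (the Node app) is the default base directory for artifacts
  files:
    - '**/*'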