I'm implementing a CodeBuild project, but I'm getting the error YAML_FILE_ERROR Message: batch yaml definition is required. I've searched everywhere with no luck.
Full Error:
[Container] 2021/07/13 17:09:46 Waiting for agent ping
[Container] 2021/07/13 17:09:49 Waiting for DOWNLOAD_SOURCE
[Container] 2021/07/13 17:09:56 Phase is DOWNLOAD_SOURCE
[Container] 2021/07/13 17:09:56 CODEBUILD_SRC_DIR=/codebuild/output/src805371762/src/git-codecommit.us-east-1.amazonaws.com/v1/repos/test_repo
[Container] 2021/07/13 17:09:56 YAML location is /codebuild/output/src805371762/src/git-codecommit.us-east-1.amazonaws.com/v1/repos/test_repo/buildspec.yaml
[Container] 2021/07/13 17:09:56 Phase complete: DOWNLOAD_SOURCE State: FAILED
[Container] 2021/07/13 17:09:56 Phase context status code: YAML_FILE_ERROR Message: batch yaml definition is required
Here is my buildspec.yaml (I even added this "batch" attribute, but the same error occurs):
version: 0.2
batch:
  fast-fail: false
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region us-east-1)
  build:
    commands:
      - echo Build started on `date`
      - echo Set script permissions...
      - chmod a+x docker-entrypoint.sh
      - chmod a+x docker-entrypoint.d/*
      - echo Building the Docker image...
      - docker image build -f $DOCKER_FILE -t $IMAGE_REPO_NAME .
      - docker image tag $IMAGE_REPO_NAME $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker image push $AWS_ACCOUNT_ID.dkr.ecr.us-east-1.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
      - echo Writing ECSForceNewDeployment definition file...
      - cat ecs-force-deploy.json > ECSForceNewDeployment.json
artifacts:
  files: ECSForceNewDeployment.json
Thanks for all the help.
Check your CodeBuild project configuration and make sure the CodeBuild action's "Build type" is set to "Single build" and not "Batch build".
If you do need a "batch build", then you have to configure it properly in the buildspec, but I understand from the question that this is not the case.
To edit this, go to your pipeline -> Edit -> edit the Build stage -> then click the edit icon on the AWS CodeBuild action card:
[screenshot: CodeBuild action option screen]
If you go through CDK, then make sure the CodeBuildAction has this property set to false:
executeBatchBuild: false
(This is the default value, but I'm spelling it out to avoid confusion.)
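For reference, if you ever do need a batch build, the buildspec must contain an actual batch definition (build-list, build-graph, or build-matrix); fast-fail alone is not enough. A minimal sketch, with a made-up identifier and variable:

version: 0.2
batch:
  fast-fail: false
  build-list:
    - identifier: linux_build   # hypothetical identifier
      env:
        variables:
          BUILD_FLAVOR: linux   # hypothetical variable
phases:
  build:
    commands:
      - echo "build commands here"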
I don't think you want the batch build settings. The rest of your buildspec looks like a "single build".
batch:
  fast-fail: false
I had the same problem, and it was caused by my "Primary source webhook events" being configured to do a 'batch build' rather than a 'single build'.
If there is no pipeline, start the build with "Start build with overrides" instead of the default "Start build".
After pressing "Start build with overrides", you have the option to change from "Batch build" back to "Single build".
I've been trying to figure this out for a couple of days, and the issue is compounded by the fact that I'm not getting a useful error message.
I'm using the following buildspec.yml file in CodeBuild to build Docker containers and then send them to AWS ECR.
version: 0.2
env:
  parameter-store:
    AWS_DEFAULT_REGION: "/docker_test/region"
    IMAGE_REPO_NAME: "/docker_test/repo_name"
    IMAGE_TAG: "/docker_test/img_tag"
    AWS_ACCOUNT_ID: "account_id"
phases:
  install:
    runtime-versions:
      docker: 18
  pre_build:
    commands:
      - echo Logging in to Amazon ECR and DockerHub...
      - docker login -u AWS -p $(aws ecr get-login-password --region $AWS_DEFAULT_REGION) $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker-compose -f docker-compose.yml -f docker-compose.prod.yml build
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker-compose -f docker-compose.yml -f docker-compose.prod.yml push
artifacts:
  files:
    - 'Dockerrun.aws.json'
I've tried Docker 19 and slightly different versions of the docker login line, and I've made sure my roles were set. I get "Login Succeeded", so I assume the login line is good.
[Container] 2021/12/22 16:19:20 Running command docker login -u AWS -p $(aws ecr get-login-password --region $AWS_DEFAULT_REGION) $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[Container] 2021/12/22 16:19:26 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2021/12/22 16:19:26 Phase context status code: Message:
[Container] 2021/12/22 16:19:26 Entering phase BUILD
The post_build phase, however, fails with the following:
Successfully built d6878cbb68ba
Successfully tagged ***.dkr.ecr.***.amazonaws.com/***:latest
[Container] 2021/12/22 16:21:58 Phase complete: BUILD State: SUCCEEDED
[Container] 2021/12/22 16:21:58 Phase context status code: Message:
[Container] 2021/12/22 16:21:58 Entering phase POST_BUILD
[Container] 2021/12/22 16:21:58 Running command echo Build completed on `date`
Build completed on Wed Dec 22 16:21:58 UTC 2021
[Container] 2021/12/22 16:21:58 Running command echo Pushing the Docker image...
Pushing the Docker image...
[Container] 2021/12/22 16:21:58 Running command docker-compose -f docker-compose.yml -f docker-compose.prod.yml push
Pushing myapp (***.dkr.ecr.***.amazonaws.com/***:latest)...
The push refers to repository [***.dkr.ecr.***.amazonaws.com/***]
EOF
[Container] 2021/12/22 16:22:49 Command did not exit successfully docker-compose -f docker-compose.yml -f docker-compose.prod.yml push exit status 1
[Container] 2021/12/22 16:22:49 Phase complete: POST_BUILD State: FAILED
[Container] 2021/12/22 16:22:49 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker-compose -f docker-compose.yml -f docker-compose.prod.yml push. Reason: exit status 1
[Container] 2021/12/22 16:22:49 Phase complete: UPLOAD_ARTIFACTS State: SUCCEEDED
[Container] 2021/12/22 16:22:49 Phase context status code: Message:
I'm just not sure how to get more information on this error - that would be ideal.
EDIT:
I'm adding the docker-compose.prod.yml file for additional context:
version: "3.2"
services:
myapp:
image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_TAG}
command: bash -c "
python manage.py migrate
&& gunicorn --bind :8000 --workers 3 --threads 2 --timeout 240 project.wsgi:application"
restart: always
ports:
- "80:80"
celery_worker:
image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_TAG}
command: celery -A project worker --loglevel=${CELERY_LOG_LEVEL:-WARNING}
restart: always
OK, so I figured it out. Your question about making sure the repo exists pointed me in the right direction, @mreferre. I was confused about the use of IMAGE_TAG and IMAGE_REPO_NAME in the code samples I referenced when building this. They were essentially supposed to be the same thing, so the push was failing because I was trying to push to an ECR repo named "proj-name", which didn't exist. I just needed to change it to "repo-name", so the image in docker-compose.prod.yml becomes:
image: ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_DEFAULT_REGION}.amazonaws.com/${IMAGE_REPO_NAME}
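For anyone hitting the same EOF on push, a quick way to confirm the target repository actually exists (a sketch using the same variables as the buildspec above):

aws ecr describe-repositories --repository-names "$IMAGE_REPO_NAME" --region "$AWS_DEFAULT_REGION"
# create it if the command above fails with RepositoryNotFoundException
aws ecr create-repository --repository-name "$IMAGE_REPO_NAME" --region "$AWS_DEFAULT_REGION"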
The YAML executed successfully in AWS CodeBuild, but the image is not sent to AWS ECR.
The buildspec.yml file output is given below:
[Container] 2020/10/26 09:50:07 Phase complete: PRE_BUILD State: SUCCEEDED
[Container] 2020/10/26 09:50:07 Phase context status code: Message:
[Container] 2020/10/26 09:50:07 Entering phase BUILD
[Container] 2020/10/26 09:50:07 Phase complete: BUILD State: SUCCEEDED
[Container] 2020/10/26 09:50:07 Phase context status code: Message:
[Container] 2020/10/26 09:50:07 Entering phase POST_BUILD
[Container] 2020/10/26 09:50:07 Phase complete: POST_BUILD State: SUCCEEDED
[Container] 2020/10/26 09:50:07 Phase context status code: Message:
Every phase executes successfully with a SUCCEEDED message.
Below is the relevant buildspec.yml snippet:
build:
commands:
- echo Build started on `date`
- echo Building the Docker image...
- docker build -t $REPOSITORY_URI:latest .
- docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
post_build:
commands:
- echo Build completed on `date`
- echo Pushing the Docker images...
- docker push $REPOSITORY_URI:latest
- docker push $REPOSITORY_URI:$IMAGE_TAG
- echo Writing image definitions file...
- printf '[{"name":"ui","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
- cat imagedefinitions.json
No commands are being run because of the bad indentation. Please fix the indentation using the buildspec reference as a guide:
https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html
Also I do not see a docker login before the push:
- aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
A buildspec sample for Docker is here:
https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html#sample-docker-files
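Putting both points together, a corrected sketch of the snippet above, with standard buildspec nesting and a login added to pre_build (assuming REPOSITORY_URI, IMAGE_TAG, AWS_DEFAULT_REGION, and AWS_ACCOUNT_ID are set on the project):

version: 0.2
phases:
  pre_build:
    commands:
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - docker build -t $REPOSITORY_URI:latest .
      - docker tag $REPOSITORY_URI:latest $REPOSITORY_URI:$IMAGE_TAG
  post_build:
    commands:
      - docker push $REPOSITORY_URI:latest
      - docker push $REPOSITORY_URI:$IMAGE_TAG
      - printf '[{"name":"ui","imageUri":"%s"}]' $REPOSITORY_URI:$IMAGE_TAG > imagedefinitions.json
      - cat imagedefinitions.json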
I am getting the following error from the BUILD stage of my CodeBuild build process:
"Error while executing command: docker build -t ..." Reason: exit status 1
I have a CodeBuild service role set up with permissions for ECR, the AWS ECR login stage has succeeded, and my buildspec.yml is really simple - pretty much just the standard template. The runtime is the Amazon-managed Ubuntu image, standard.
Is there any reason why the Docker build could be failing, and is there anything anyone would suggest to troubleshoot?
Thank you
Full buildspec.yml file:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region eu-west-1)
  build:
    commands:
      - echo Building the Docker image...
      - docker build -t maxmind:latest .
      - docker tag maxmind:latest 381475286792.dkr.ecr.eu-west-1.amazonaws.com/maxmind:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push 381475286792.dkr.ecr.eu-west-1.amazonaws.com/maxmind:latest
Full error message (BUILD stage):
COMMAND_EXECUTION_ERROR: Error while executing command docker build -t maxmind:latest .. Reason: exit status 1
Full error message (POST_BUILD stage):
COMMAND_EXECUTION_ERROR: Error while executing command: docker push 381475286792.dkr.ecr.eu-west-1.amazonaws.com/maxmind:latest. Reason: exit status 1
Full error message (logstream):
[Container] 2020/05/20 09:28:54 Running command docker build -t maxmind:latest .
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
[Container] 2020/05/20 09:28:54 Command did not exit successfully docker build -t maxmind:latest . exit status 1
[Container] 2020/05/20 09:28:54 Phase complete: BUILD State: FAILED
Things I have tried
Attached AmazonEC2ContainerRegistryPowerUser policy to the codebuild-service-role created by my build process
Based on the comments.
There were two issues. The first was not enabling PrivilegedMode in the CodeBuild project. This mode is required when building a Docker image inside a Docker container.
The second issue was the missing permission iam:DeletePolicyVersion.
Enabling privileged mode and adding the missing permission solved the issue.
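As a sketch, privileged mode can also be enabled from the CLI (the project name and image here are illustrative):

aws codebuild update-project \
  --name my-docker-project \
  --environment "type=LINUX_CONTAINER,image=aws/codebuild/standard:5.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true"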
Just want to share this in case anyone still has this issue.
This issue can be caused by 3 reasons:
Not having PrivilegedMode enabled in the CodeBuild project
Not having enough permissions on the IAM role
An error in your Dockerfile build
In my case it was the 3rd reason.
I enabled S3 logs, which gave me better error messages; it turned out that I was missing a folder in my project which my Dockerfile tried to COPY.
But it can be any error, like running an npm command that doesn't exist.
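If it helps, S3 logs can be enabled on the project along these lines (a sketch; the project name, bucket, and prefix are hypothetical):

aws codebuild update-project \
  --name my-project \
  --logs-config '{"s3Logs":{"status":"ENABLED","location":"my-log-bucket/codebuild-logs"}}'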
I'm building a CI/CD pipeline using Git, CodeBuild, and Elastic Beanstalk.
During CodeBuild execution, when the build fails due to a syntax error in a test case, I see CodeBuild progress to the next stage and ultimately go on to produce the artifacts.
My understanding was that if the build fails, execution should stop. Is this the correct behavior?
Please see the buildspec below.
version: 0.2
phases:
  install:
    commands:
      - echo Installing package.json..
      - npm install
      - echo Installing Mocha...
      - npm install -g mocha
  pre_build:
    commands:
      - echo Installing source NPM placeholder dependencies...
  build:
    commands:
      - echo Build started on `date`
      - echo Compiling the Node.js code
      - mocha modules/**/tests/*.js
  post_build:
    commands:
      - echo Build completed on `date`
artifacts:
  files:
    - modules/*
    - node_modules/*
    - package.json
    - config/*
    - server.js
CodeBuild detects build failures by exit codes. You should ensure that your test execution returns a non-zero exit code on failure.
POST_BUILD will always run as long as BUILD was also run, regardless of BUILD's success or failure. The same goes for UPLOAD_ARTIFACTS; this is so you can retrieve debug information/artifacts.
If you want to do something different in POST_BUILD depending on the success or failure of BUILD, you can test the builtin environment variable CODEBUILD_BUILD_SUCCEEDING, which is set to 1 if BUILD succeeded, and 0 if it failed.
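For example, a post_build guard that fails the phase when BUILD did not succeed could look like this (a minimal sketch):

post_build:
  commands:
    - |
      if [ "$CODEBUILD_BUILD_SUCCEEDING" = "0" ]; then
        echo "Build failed; skipping post_build work"
        exit 1
      fi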
CodeBuild uses the environment variable CODEBUILD_BUILD_SUCCEEDING to show whether the build process seems to be going right.
The best way I've found right now is to create a small script in the install section and then always use it like this:
phases:
  install:
    commands:
      - echo '#!/bin/bash' > /usr/local/bin/ok; echo 'if [[ "$CODEBUILD_BUILD_SUCCEEDING" == "0" ]]; then exit 1; else exit 0; fi' >> /usr/local/bin/ok; chmod +x /usr/local/bin/ok
  post_build:
    commands:
      - ok && echo Build completed on `date`
The post_build section runs even if the build section fails. Expanding on the previous answers, you can use the variable CODEBUILD_BUILD_SUCCEEDING in the post_build section of the buildspec.yml file to make the post_build commands run if and only if the build section completed successfully. Below is an example of how this can be achieved:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --no-include-email --region $AWS_DEFAULT_REGION)
      - CODEBUILD_RESOLVED_SOURCE_VERSION="${CODEBUILD_RESOLVED_SOURCE_VERSION:-$IMAGE_TAG}"
      - IMAGE_TAG=$(echo $CODEBUILD_RESOLVED_SOURCE_VERSION | cut -c 1-7)
      - IMAGE_URI="$AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG"
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE_URI .
  post_build:
    commands:
      - bash -c "if [ \"$CODEBUILD_BUILD_SUCCEEDING\" == \"0\" ]; then exit 1; fi"
      - echo Build stage successfully completed on `date`
      - docker push $IMAGE_URI
      - printf '[{"name":"clair","imageUri":"%s"}]' "$IMAGE_URI" > images.json
artifacts:
  files: images.json
Add this to the build section:
build:
  on-failure: ABORT
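In context, using the test command from the question, that looks something like this (a sketch; on-failure requires buildspec version 0.2):

version: 0.2
phases:
  build:
    on-failure: ABORT
    commands:
      - mocha modules/**/tests/*.js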
I just wanted to point out that if you want the whole execution to stop when a command fails, you may specify the -e option:
When running a bash script:
- /bin/bash -e ./commands.sh
Or inside the script / set of commands itself:
#!/bin/bash
set -e
# ... commands
The post_build stage will be executed and the artifacts will be produced even when the build fails. post_build is good for properly shutting down the build environment, if necessary, and the artifacts can be useful even for a failed build, e.g. extra logs or intermediate files.
I would suggest using post_build only for commands that are agnostic to the result of your build and that properly de-initialise the build environment; otherwise, just exclude that step.
I'm trying to get a simple Docker app to build using AWS CodeBuild, but I'm coming across an error where the aws command is not found:
[Container] 2016/12/10 04:29:17 Build started on Sat Dec 10 04:29:17 UTC 2016
[Container] 2016/12/10 04:29:17 Running command echo Building the Docker image...
[Container] 2016/12/10 04:29:17 Building the Docker image...
[Container] 2016/12/10 04:29:17 Running command docker build -t aws-test .
[Container] 2016/12/10 04:29:17 sh: 1: docker: not found
[Container] 2016/12/10 04:29:17 Command did not exit successfully docker build -t aws-test . exit status 127
[Container] 2016/12/10 04:29:17 Phase complete: BUILD Success: false
[Container] 2016/12/10 04:29:17 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: docker build -t aws-test .. Reason: exit status 127
I've got a super simple Dockerfile which builds a simple Express app:
FROM node:6.2.0
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
# Bundle app source
COPY . /usr/src/app
EXPOSE 3000
CMD npm install && npm start
And I've got a super simple buildspec.yml which is supposed to build the Docker container and push it to the AWS registry:
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region us-west-2)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.us-west-2.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <ID>.dkr.ecr.us-west-2.amazonaws.com/<CONTAINER_NAME>:latest
However, once run, it throws the error posted above. I'm not sure why the AWS CLI utils aren't found. This guide here:
http://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker.html
suggests I don't need to do anything to set up the AWS CLI utils anywhere.
Also, one other thing I noticed: I removed the $(aws ecr get-login --region us-west-2) step from the buildspec file, built it again, and it then said that the docker command was not found! Have I missed a step somewhere? (I don't think I have.)
So it turned out I was using the wrong environment: I was trying to specify my own Docker image, which was ultimately not set up with any of the AWS CLI utils. Switching to an AWS-managed CodeBuild image (which ships with Docker and the AWS CLI) fixed it.
Thanks to @Clare Liguori for tipping me off!
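If you suspect the environment, a quick sanity check in pre_build can confirm the tools are actually present (a sketch):

pre_build:
  commands:
    - which docker && docker --version
    - which aws && aws --version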