Story:
I have built a Docker image locally, which is laradock/workspace.
I can use npm and node inside the Docker image on my local machine.
I uploaded the same image to Amazon ECR (EC2 Container Registry) and use it in AWS CodeBuild.
Problem:
But when I run node -v inside the buildspec.yml, it does not work and always returns status 127 (command not found).
Reference:
Here is the simple code for my buildspec.yml:
version: 0.2
phases:
  install:
    commands:
      - npm -v
      - node -v
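A status of 127 means the shell could not find the binary, which usually points at PATH. In the laradock/workspace image, node is typically installed per-user through NVM, so it is only on PATH in an interactive login shell, while CodeBuild runs its commands in a non-login shell. A minimal sketch of a workaround, assuming NVM lives under /home/laradock (a guess; adjust to wherever NVM is installed in your image):

version: 0.2
phases:
  install:
    commands:
      # Assumed NVM location: source it so node/npm land on this shell's PATH
      - . /home/laradock/.nvm/nvm.sh && node -v && npm -v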
Related
I have a CodeBuild project which runs a docker build command with a buildspec.yml like this:
version: 0.2
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
      - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
However, during the docker build process I have a shell script that runs the aws s3 cp command. I have given the service role permissions on the bucket, and the error I get is:
+ aws s3 cp s3://bucket/file /var/www/html/filelocal
fatal error: Unable to locate credentials
Do roles not propagate through to Docker on CodeBuild?
You are building an isolated file system: the docker build runs in its own context, so the CodeBuild role's credentials are not visible inside it. If you ran the docker build locally with credentials on your local machine, you would see the same behavior. You would have to add credentials to your container to run those same operations.

With that said, you could add credentials to your container via build-args, or you could let the CodeBuild role gather the files you need and then copy them into the container during the build. I would vote for the second way so you don't have to worry about cleaning up the credentials before publishing the container. (You can also query the environment for the role's temporary credentials, which expire on their own, so cleanup is less of a concern, but it is simpler still to let the CodeBuild role handle gathering the files you need to build the container.)
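A minimal sketch of that second approach, reusing the paths from the question (the bucket and file names are placeholders): fetch the file in pre_build, where the CodeBuild role's credentials are available, then COPY it into the image.

version: 0.2
phases:
  pre_build:
    commands:
      # Runs outside docker build, so the CodeBuild role's credentials apply
      - aws s3 cp s3://bucket/file ./filelocal
  build:
    commands:
      - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .

And in the Dockerfile:

# Copy the file CodeBuild fetched into the image
COPY filelocal /var/www/html/filelocal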
So far, in my buildspec.yml file, I can create a Docker image and store it in the ECR repository (I am using CodePipeline). My question is: how do I deploy it to my ECS instance through the buildspec.yml using AWS CLI commands?
I am sharing my buildspec.yaml file, have a look:
version: 0.1
phases:
  pre_build:
    commands:
      - echo Setting timestamp for container tag
      - echo `date +%s` > timestamp
      - echo Logging into Amazon ECR...
      - $(aws ecr get-login --region $AWS_DEFAULT_REGION)
  build:
    commands:
      - echo Building and tagging container
      - docker build -t $REPOSITORY_NAME .
      - docker tag $REPOSITORY_NAME $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$REPOSITORY_NAME:$BRANCH-`cat ./timestamp`
  post_build:
    commands:
      - echo Pushing docker image
      - docker push $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$REPOSITORY_NAME:$BRANCH-`cat ./timestamp`
      - echo Preparing CloudFormation Artifacts
      - aws s3 cp s3://$ECS_Bucket/$ECS_SERVICE_KEY task-definition.template
      - aws s3 cp s3://$ECS_Bucket/$ECS_SERVICE_PARAMS_KEY cf-config.json
artifacts:
  files:
    - task-definition.template
    - cf-config.json
You can extend this with more commands for the ECS instance; I have written a template that goes to CloudFormation.
You can also write simple AWS CLI commands to create the cluster and pull images; check the AWS documentation: https://docs.aws.amazon.com/cli/latest/reference/ecs/index.html
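As a rough sketch of such a command, assuming the cluster and service already exist ($CLUSTER_NAME and $SERVICE_NAME are placeholder variables) and the task definition points at a mutable tag such as latest:

  post_build:
    commands:
      # Placeholder names; with the timestamped tags above you would instead
      # register a new task definition revision and update the service to it
      - aws ecs update-service --cluster $CLUSTER_NAME --service $SERVICE_NAME --force-new-deployment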
I am sharing my own Git repo, check it out for more info: https://github.com/harsh4870/ECS-CICD-pipeline
Here is my root folder, and I want to deploy the AWS Lambda functions in the Backend folder (the source lives in CodeCommit).
Therefore I wrote this buildspec, but AWS CodeBuild gives this error: "This command can only be run in a Serverless service directory".
version: 0.1
phases:
  install:
    commands:
      - npm install -g serverless@1.20.2
  post_build:
    commands:
      - cd Backend
      - serverless deploy --region eu-west-1 --verbose
How can I deploy it from the Backend folder?
Edit: I forgot to update the version. I have now changed it to version: 0.2 and it works fine.
Can you change it to:
- cd Backend && serverless deploy --region eu-west-1 --verbose
Chaining with && keeps both commands in the same shell instance, so the cd takes effect for the deploy.
I forgot to update the version of buildspec.yml. I have now changed it to version: 0.2 and it works fine. (In buildspec version 0.1, each command runs in a separate shell instance, so the cd did not carry over to the next command; version 0.2 runs all commands in the same shell.)
version: 0.2
phases:
  install:
    commands:
      - npm install -g serverless@1.20.2
  post_build:
    commands:
      - cd Backend
      - serverless deploy --region eu-west-1 --verbose
I have been using my Docker Hub account in CircleCI until now, and for some reason I'm now trying to use my ECR repository image as the build image in CircleCI (2.0).
But I see ECR doesn't support public images, so I can't reference my image the way I did for the Docker Hub image. This works:
version: 2
jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: <dockerhub-name>/<image>
but this:
version: 2
jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: aws-id.dkr.ecr.eu-central-1.amazonaws.com/image
throws the error:
no basic auth credentials
In a straightforward setup, it needs to be authenticated via the command:
aws ecr get-login --region <region-name>
and then by running:
docker login -u AWS -p <password> -e none https://aws-id.dkr.ecr.eu-central-1.amazonaws.com
I tried putting these commands in the "Pre-dependency commands" section of the CircleCI plan settings, but it didn't work.
Ideas?
What "Pre-dependency commands"? That sounds like you're referring to configuration structure from CircleCI 1.0, which you don't seem to be using.
Because of the way AWS requires you to authenticate with ECR, I wouldn't use an image from there with the docker executor. Either use some generic image and then use setup_remote_docker, or use the machine executor.
This doc shows the former, and this one covers the latter.
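As a minimal sketch of the machine-executor route, assuming your AWS credentials are set as CircleCI project environment variables and the awscli is available on the machine image:

version: 2
jobs:
  build:
    machine: true
    working_directory: ~/tmp
    steps:
      - checkout
      # Runs the docker login command that get-login prints
      - run: $(aws ecr get-login --region eu-central-1)
      # Pull the private ECR image and use it explicitly in later steps
      - run: docker pull aws-id.dkr.ecr.eu-central-1.amazonaws.com/image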
I'm trying to get AWS CodePipeline working with an S3 source, CodeBuild, and Elastic Beanstalk (Node.js environment).
My problem lies between CodeBuild and Beanstalk.
I have CodeBuild outputting a zip file of the final Node.js app via the artifacts. Here is my CodeBuild buildspec.yml:
version: 0.1
phases:
  install:
    commands:
      - echo Installing Node Modules...
      - npm install -g mocha
      - npm install
  post_build:
    commands:
      - echo Performing Test
      - npm test
      - zip -r app-api.zip .
artifacts:
  files:
    - app-api.zip
When I run CodeBuild manually, it successfully puts the zip into S3. When I run CodePipeline, it puts the zip on each Elastic Beanstalk instance in /var/app/current as app-api.zip.
What I would like is for it to extract app-api.zip into /var/app/current, just like a manual deploy via the Elastic Beanstalk console.
First, a quick explanation. CodePipeline sends whatever files you specified as artifacts to Elastic Beanstalk. In your case, you are sending app-api.zip. CodePipeline already packages the artifact files into a zip of its own, so Elastic Beanstalk extracts that outer zip and your inner app-api.zip ends up in /var/app/current as a plain file.
What you probably want to do instead is send all the files, but not wrap them in a zip yourself.
Let's change your buildspec.yml so it no longer creates app-api.zip and instead sends the raw files to CodePipeline:
version: 0.1
phases:
  install:
    commands:
      - echo Installing Node Modules...
      - npm install -g mocha
      - npm install
  post_build:
    commands:
      - echo Performing Test
      - npm test
      # - zip -r app-api.zip .   <-- remove this line
artifacts:
  files:
    - '**/*'
  # Replace the artifacts/files value with the pattern shown above
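As a side note, not something from the question: if your build output lived in a subdirectory, the buildspec artifacts section supports a base-directory key (in version 0.2) to strip the prefix, so Elastic Beanstalk receives the files at the artifact root. For example:

artifacts:
  files:
    - '**/*'
  base-directory: dist   # hypothetical build output folder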