CircleCI 2.0 can't find awscli

I'm using CircleCI 2.0 and it can't find aws, even though the documentation clearly says aws is installed by default.
When I use this circle.yml:
version: 2
jobs:
  build:
    working_directory: ~/rian
    docker:
      - image: node:boron
    steps:
      - checkout
      - run:
          name: Pre-Dependencies
          command: mkdir ~/rian/artifacts
      - restore_cache:
          keys:
            - rian-{{ .Branch }}-{{ checksum "yarn.lock" }}
            - rian-{{ .Branch }}
            - rian-master
      - run:
          name: Install Dependencies
          command: yarn install
      - run:
          name: Test
          command: |
            node -v
            yarn run test:ci
      - save_cache:
          key: rian-{{ .Branch }}-{{ checksum "yarn.lock" }}
          paths:
            - "~/.cache/yarn"
      - store_artifacts:
          path: ~/rian/artifacts
          destination: prefix
      - store_test_results:
          path: ~/rian/test-results
      - deploy:
          command: aws s3 sync ~/rian s3://rian-s3-dev/ --delete
the following error occurs:
/bin/bash: aws: command not found
Exited with code 127
If I instead edit the deploy step to install awscli first:
- deploy:
    command: |
      apt-get install awscli
      aws s3 sync ~/rian s3://rian-s3-dev/ --delete
then I get a different error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package awscli
Exited with code 100
Does anyone know how to fix this?

The document you are reading is for CircleCI 1.0; the documentation for 2.0 is here:
https://circleci.com/docs/2.0/
In CircleCI 2.0, you can use your favorite Docker image. The image you are currently setting is node:boron, which does not include the aws command.
https://hub.docker.com/_/node/
https://github.com/nodejs/docker-node/blob/14681db8e89c0493e8af20657883fa21488a7766/6.10/Dockerfile
If you just want to make it work for now, you can install the aws command yourself in circle.yml.
apt-get update && apt-get install -y awscli
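For example, as a run step before the deploy step (a minimal sketch; node:boron runs as root, so no sudo is needed):
- run:
    name: Install awscli
    command: apt-get update && apt-get install -y awscli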
However, to take full advantage of Docker's benefits, it is recommended that you build a custom Docker image that contains the necessary dependencies such as the aws command.
You can write your custom aws-cli Docker image something like this:
FROM circleci/python:3.7-stretch
ENV AWS_CLI_VERSION=1.16.138
RUN sudo pip install awscli==${AWS_CLI_VERSION}
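You would then push that image to a registry and point your config at it, for example (the image name is a placeholder):
docker:
  - image: your-dockerhub-user/aws-cli:1.16.138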

I hit this issue when deploying to AWS Lambda functions and pushing files to an S3 bucket. I finally solved it and then built a Docker image to save the time of installing the AWS CLI on every build. Here is a link to the image and the repo:
https://github.com/wilson208/circleci-awscli
https://hub.docker.com/r/wilson208/circleci-awscli/
Open a PR or an issue if you need anything added to the image, and I will get to it when I can.
Edit:
Also, check out the README on GitHub for examples of deploying a package to Lambda or pushing files to S3.

Related

React-snap fails in postbuild when uploading to AWS Amplify

I've pushed my React project to AWS Amplify with Git. I installed the react-snap package for SEO reasons. The Amplify Console shows that the provision step succeeded, but the build step failed. The error log shows this:
An older post on GitHub describes the same problem that I have:
https://github.com/thinkJin6/BokuNews/issues/64
I have tried several things, like adding and configuring an amplify.yml file and configuring the package.json as described here: https://github.com/puppeteer/puppeteer/issues/765
Finally, I tried some steps from this link: https://github.com/puppeteer/puppeteer/blob/main/docs/troubleshooting.md#running-puppeteer-on-aws-ec2-instance-running-amazon-linux. Using AWS CloudShell, I enabled amazon-linux-extras (sudo amazon-linux-extras install epel -y) and installed Chromium (sudo yum install -y chromium), but the error log message stays the same.
I had the same problem recently and with help of a colleague, the solution was found here: LINK
I added the following to my package.json
"reactSnap": {
"puppeteerArgs": [
"--no-sandbox",
"--disable-setuid-sandbox"
],
"puppeteerExecutablePath": "/opt/google/chrome/google-chrome"
}
Then I updated amplify.yml in the AWS Amplify build settings like so:
version: 1
frontend:
  phases:
    preBuild:
      commands:
        - '# This installs Chrome on any RHEL/CentOS/Amazon Linux variant.'
        - curl https://intoli.com/install-google-chrome.sh | bash
        - npm i
    build:
      commands:
        - npm run build
  artifacts:
    baseDirectory: build
    files:
      - '**/*'
  cache:
    paths:
      - node_modules/**/*

What is the best way to do CI/CD with AWS CDK (python) using GitLab CI?

I am using AWS CDK (with Python) for a containerized application that runs on Fargate. I would like to run cdk deploy in a GitLab CI process and pass the git tag as an environment variable that replaces the container running in Fargate. I am currently doing something similar with CloudFormation (aws cloudformation update-stack ...). Is anyone else doing CI/CD with AWS CDK in this way? Is there a better way to do it?
Also, what should I use as the base image for this job? I was thinking that I could either start with a Python container and install Node, or vice versa. Or maybe there is a prebuilt container somewhere that I haven't been able to find yet.
Here is a start that seems to be working well:
CDK:
  image: python:3.8
  stage: deploy
  before_script:
    - apt-get -qq update && apt-get -y install nodejs npm
    - node -v
    - npm i -g aws-cdk
    - cd awscdk
    - pip3 install -r requirements.txt
  script:
    - cdk diff
    - cdk deploy --require-approval never
Edit 2020-05-04:
CDK can build Docker images during cdk deploy, but it needs access to Docker. If you don't need Docker, the above CI job definition should be fine. Here's the current CI job I'm using:
cdk deploy:
  image: docker:19.03.1
  services:
    - docker:19.03.5-dind
  stage: deploy
  only:
    - master
  before_script:
    - apk add --no-cache python3
    - python3 -V
    - pip3 -V
    - apk add nodejs-current npm
    - node -v
    - npm i -g aws-cdk
    - cd awscdk
    - pip3 install -r requirements.txt
  script:
    - cdk bootstrap aws://$AWS_ACCOUNT_ID/$AWS_DEFAULT_REGION
    - cdk deploy --require-approval never
The cdk bootstrap is needed because I am using assets in my cdk code:
self.backend_task.add_container(
    "DjangoBackend",
    image=ecs.AssetImage(
        "../backend",
        file="scripts/prod/Dockerfile",
        target="production",
    ),
    logging=ecs.LogDrivers.aws_logs(stream_prefix="Backend"),
    environment=environment_variables,
    command=["/start_prod.sh"],
)
Here's more information on cdk bootstrap: https://github.com/aws/aws-cdk/blob/master/design/cdk-bootstrap.md
You definitely have to use cdk deploy inside the CI/CD pipeline if you have Lambda or ECS assets; otherwise, you could run cdk synth and pass the resulting CloudFormation template to AWS CodeDeploy. Note that a lot of your CI/CD time will be spent deploying, which might drain your free-tier build minutes or just mean you pay more (AWS CodeDeploy is free).
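A minimal sketch of that synth-only approach as a GitLab CI job (stack and file names are placeholders; it assumes aws-cdk and the AWS CLI are available in the image, as in the jobs above):
cdk synth and deploy:
  image: python:3.8
  stage: deploy
  script:
    # Synthesize the CloudFormation template instead of deploying with the CDK
    - cdk synth > template.yaml
    # Hand the template to CloudFormation via the AWS CLI
    # (add --capabilities CAPABILITY_IAM if the stack creates IAM resources)
    - aws cloudformation deploy --template-file template.yaml --stack-name my-stack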
I do something similar with Go in CircleCI. I use the Go base image and install Node.js and the CDK on top. I use this base image to build all my Go binaries and the Vue.js frontend, and to compile the CDK TypeScript and deploy it.
FROM golang:1.13
RUN go get -u -d github.com/magefile/mage
WORKDIR $GOPATH/src/github.com/magefile/mage
RUN go run bootstrap.go
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install -y nodejs
RUN npm i -g aws-cdk@1.36.x
RUN npm i -g typescript
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install -y yarn
I hope that helps.
For anyone looking for how to implement CI/CD with AWS CDK Python in 2022, here's a tested solution:
1) Use python:3.10.8 as the base image in your CI/CD (or any image with Debian 11).
2) Install Node.js 16 from NodeSource: curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
3) Install aws-cdk: npm i -g aws-cdk
You can add the latter two steps as inline scripts in your CI/CD pipeline so you do not need to build your own Docker image.
Here's a full example for Bitbucket Pipelines:
image: python:3.10.8

run-tests: &run-tests
  step:
    name: Run tests
    script:
      # Node 16
      - curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
      - npm i -g aws-cdk
      - pip install -r requirements-dev.txt
      - pytest

pipelines:
  pull-requests:
    "**":
      - <<: *run-tests
  branches:
    master:
      - <<: *run-tests
Note that the above instructions do not install the Docker engine. In Bitbucket Pipelines, Docker can be used simply by adding
services:
  - docker
to a step in the configuration file.
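For example (a sketch; the step name and script are illustrative), services sits under the step that needs Docker:
- step:
    name: Build image
    services:
      - docker
    script:
      - docker build -t my-app .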
If cdk deploy is giving you the error:
/usr/lib/node_modules/aws-cdk/lib/index.js:12422
home = path.join((os.userInfo().homedir ?? os.homedir()).trim(), ".cdk");
then the Node version is out of date (the ?? nullish-coalescing operator requires a newer Node runtime). This can be fixed by updating the Docker image, which then also requires installing pip3 explicitly:
cdk deploy:
  image: docker:20.10.21
  services:
    - docker:20.10.21-dind
  stage: deploy
  only:
    - master
  before_script:
    - apk add --no-cache python3
    - python3 -V
    - apk add py3-pip
    - pip3 -V

Bitbucket Pipelines to build Java app, Docker image and push it to AWS ECR?

I am setting up Bitbucket Pipelines for my Java app, and what I want to achieve is: whenever I merge something into the master branch, Bitbucket fires the pipeline, which in the first step builds and tests my application, and in the second step builds a Docker image from it and pushes it to ECR. The problem is that the second step can't use the JAR file produced in the first step, because every step runs in a separate, fresh Docker container. Any ideas how to solve this?
My current files are:
1) bitbucket-pipelines.yml
pipelines:
  branches:
    master:
      - step:
          name: Build and test application
          services:
            - docker
          image: openjdk:11
          caches:
            - gradle
          script:
            - apt-get update
            - apt-get install -y python-pip
            - pip install --no-cache-dir docker-compose
            - bash ./gradlew clean build test testIntegration
      - step:
          name: Build and push image
          services:
            - docker
          image: atlassian/pipelines-awscli
          caches:
            - gradle
          script:
            - echo $(aws ecr get-login --no-include-email --region us-west-2) > login.sh
            - sh login.sh
            - docker build -f Dockerfile -t my-application .
            - docker tag my-application:latest 212234103948.dkr.ecr.us-west-2.amazonaws.com/my-application:latest
            - docker push 212234103948.dkr.ecr.us-west-2.amazonaws.com/my-application:latest
2) Dockerfile:
FROM openjdk:11
VOLUME /tmp
EXPOSE 8080
COPY build/libs/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
And the error I receive:
Step 4/5 : COPY build/libs/*.jar app.jar
COPY failed: no source files were specified
I found the solution, and it's quite simple: use the "artifacts" feature. Adding these lines to the first step:
artifacts:
  - build/libs/*.jar
solves the problem.
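For clarity, here is how that fits into the build step above (a sketch trimmed to the relevant keys):
- step:
    name: Build and test application
    image: openjdk:11
    script:
      - bash ./gradlew clean build test testIntegration
    artifacts:
      # Files matching this glob are carried over to subsequent steps
      - build/libs/*.jar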

CircleCi 2.0: aws command not found

I'm trying to migrate CircleCI from v1.0 to v2.0.
At first I couldn't install awscli, but I finally got it installed with the config below. Now I hit another error: the aws command cannot be found.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - node_modules
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - run:
          name: Install yarn
          command: yarn install
      - run:
          name: Install awscli
          command: |
            sudo apt-get install python-pip python-dev
            pip install 'pyyaml<4,>=3.10' awscli --upgrade --user
      - run:
          name: AWS S3
          command: aws s3 sync build s3://<URL> --delete
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
It shows "aws: command not found". I'm not sure whether I'm doing something wrong, but I want to know what the problem is and how to solve it. Thanks.
I would rework your config. Each job should have a single focus. For deployment, for example, you don't need Node.js; you need the AWS CLI, so use an image that provides it.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - node_modules
      - persist_to_workspace:
          root: /home/circleci
          paths:
            - project
  deploy:
    docker:
      - image: cibuilds/aws:1.16.1
    steps:
      - checkout
      - attach_workspace:
          at: /home/circleci
      - run:
          name: AWS S3
          command: aws s3 sync build s3://<URL> --delete
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
Try the following steps (taken from their v2 docs):
steps:
  - run:
      name: Install PIP
      command: sudo apt-get install python-pip python-dev
  - run:
      name: Install awscli
      command: sudo pip install awscli
  - run:
      name: Deploy to S3
      command: aws s3 sync build s3://<URL> --delete
This method for installing awscli seems to work fine on a variety of systems. Tested on circleci/openjdk:8-jdk, requires no additional installation.
Edit
It seems that the node image lacks libpython-dev.
##################
# Install AWS CLI
##################
# For node images on Circle, install libpython-dev
sudo apt-get install -y libpython-dev
# Download awscli bundle
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
# Unzip the downloaded bundle
unzip awscli-bundle.zip
# Run the install script and install to ~/bin/aws directory
./awscli-bundle/install -b ~/bin/aws
After that, to run awscli commands, specify the full path to the aws executable, for example:
~/bin/aws s3 ls
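Alternatively (a sketch assuming a Bash-based job), you can put ~/bin on the PATH for all subsequent steps via CircleCI's $BASH_ENV, so that plain aws works:
- run:
    name: Add aws to PATH
    command: echo 'export PATH=~/bin:$PATH' >> $BASH_ENV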
Resources
Helpful GitHub thread
Example GitHub repository with Circle config on node:8.9.1
The CircleCI builds

AWS CodeBuild - Unable to find DockerFile during build

Started playing with AWS CodeBuild.
The goal is to have a Docker image as the final result, with Node.js, hapi, and a sample app running inside.
Currently I have this issue:
"unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/src049302811/src/Dockerfile: no such file or directory"
It appears during the BUILD stage.
Project details:
S3 bucket used as a source
The ZIP file stored in the respective S3 bucket contains buildspec.yml, package.json, a sample *.js file, and DockerFile.
aws/codebuild/docker:1.12.1 is used as a build environment.
When I build the image using Docker installed on my laptop there are no issues, so I can't understand which directory I need to specify to get rid of this error message.
The buildspec and DockerFile are attached below.
Thanks for any comments.
buildspec.yml
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
DockerFile
FROM alpine:latest
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /src
RUN cd /src; npm install hapi
EXPOSE 80
CMD ["node", "/src/server.js"]
OK, so the solution was simple.
The issue was the Dockerfile name.
CodeBuild would not accept DockerFile (with a capital F; strangely, that worked locally, most likely because my laptop's filesystem is case-insensitive), but Dockerfile (with a lowercase f) worked perfectly.
Can you validate that Dockerfile exists in the root of the directory? One way of doing this would be to run ls -altr as part of the pre-build phase in your buildspec (even before ecr login).
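For example, in the buildspec above (a sketch showing only the pre_build phase):
version: 0.1
phases:
  pre_build:
    commands:
      # Confirm the Dockerfile is present in the source root
      - ls -altr
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)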