CircleCI 2.0: aws command not found - amazon-web-services

I'm trying to migrate my CircleCI config from v1.0 to v2.0.
At first I couldn't install awscli, but I finally managed to install it with the config below. Now I get another error: the aws command cannot be found.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - node_modules
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - run:
          name: Install yarn
          command: yarn install
      - run:
          name: Install awscli
          command: |
            sudo apt-get install python-pip python-dev
            pip install 'pyyaml<4,>=3.10' awscli --upgrade --user
      - run:
          name: AWS S3
          command: aws s3 sync build s3://<URL> --delete
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
It show "aws: command not found". I'm not sure that I do something wrong or not but I want to know what's the problem and how to solve it. Thanks.

I would rework your config. Each job should have a single focus. For deployment, for example, you don't need Node.js; you need the AWS CLI, so use an image that provides it.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - node_modules
      - persist_to_workspace:
          root: /home/circleci
          paths: project
  deploy:
    docker:
      - image: cibuilds/aws:1.16.1
    steps:
      - checkout
      - attach_workspace:
          at: /home/circleci
      - run:
          name: AWS S3
          command: aws s3 sync build s3://<URL> --delete
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master

Try the following steps (taken from their v2 docs):
steps:
  - run:
      name: Install PIP
      command: sudo apt-get install python-pip python-dev
  - run:
      name: Install awscli
      command: sudo pip install awscli
  - run:
      name: Deploy to S3
      command: aws s3 sync build s3://<URL> --delete
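As an alternative, if you keep the pip install --user approach from the question, note that the aws binary lands in ~/.local/bin, which is not on the PATH of later run steps. A minimal sketch of an extra step that fixes that via CircleCI's BASH_ENV (~/.local/bin is pip's default location for --user installs; adjust if yours differs):
  - run:
      name: Add user-level pip binaries to PATH
      command: echo 'export PATH=$HOME/.local/bin:$PATH' >> $BASH_ENV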

This method of installing awscli seems to work fine on a variety of systems. It was tested on circleci/openjdk:8-jdk and requires no additional installation.
Edit
It seems that the node image lacks libpython-dev.
##################
# Install AWS CLI
##################
# For node images on Circle, install libpython-dev
sudo apt-get install -y libpython-dev
# Download awscli bundle
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
# Unzip the downloaded bundle
unzip awscli-bundle.zip
# Run the install script; -b places the aws executable at ~/bin/aws
./awscli-bundle/install -b ~/bin/aws
After that, to run awscli commands, specify the full path to the aws executable, for example:
~/bin/aws s3 ls
Resources
Helpful thread on GitHub
Example GitHub repository with a Circle config on node:8.9.1
The corresponding CircleCI builds

Related

docker compose failing on gitlab-ci build stage

I am trying to build with GitLab CI, but one of the stages is failing the build. I get stuck on the build stage: it does not recognise python, and I am trying to install it so I can build the image and get it tested with Robot Framework.
gitlab-ci.yaml
image: python:latest

services:
  - name: docker:dind
    entrypoint: ["env", "-u", "DOCKER_HOST"]
    command: ["dockerd-entrypoint.sh"]

stages:
  - compile
  - build
  - test
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  MOUNT_POINT: /builds/$CI_PROJECT_PATH/mnt
  REPOSITORY_URL: $AWS_ACCOUNT_ID.dkr.ecr.eu-west-2.amazonaws.com/apps_web
  TASK_DEFINITION_NAME: apps_8000
  CLUSTER_NAME: QA-2
  SERVICE_NAME: apps_demo
  ARTIFACT_REPORT_PATH: "app/reports/"

before_script:
  - docker info
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE:web
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

unittests:
  stage: test
  before_script:
    - python -m venv env
    - source env/bin/activate
    - python -m pip install --upgrade pip
    - pip install -r app/app-requirements.txt
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}
  image: ${DOCKER_IMAGE_TAG}
  script:
    - source env/bin/activate
    - python app/manage.py jenkins --enable-coverage
  artifacts:
    reports:
      junit: app/reports/junit.xml
    paths:
      - $ARTIFACT_REPORT_PATH
    expire_in: 30 days
    when: on_success
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"

migrations:
  stage: compile
  before_script:
    - python -m venv env
    - source env/bin/activate
    - pip install -r app/app-requirements.txt
  script:
    - python app/manage.py makemigrations
  artifacts:
    paths:
      - "app/*/migrations/*.py"
    expire_in: 1 day
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"

build:
  image:
    name: docker/compose:1.25.4
    entrypoint: [ "" ]
  stage: build
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  before_script:
    - apt-get install python3
    - python -m venv env
    - source env/bin/activate
    - python -m pip install --upgrade pip
    - pip install -r app/app-requirements.txt
    - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
    - export WEB_IMAGE=$IMAGE:web
  script:
    - apk add --no-cache bash
    - chmod +x ./setup_env.sh
    - bash ./setup_env.sh
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:web || true
    - docker-compose -f docker-compose.ci.yml build
    - docker push $IMAGE:web
    - docker tag app
    - docker build -t ${DOCKER_IMAGE_TAG} .
  after_script:
    - docker push ${DOCKER_IMAGE_TAG}
    - docker logout
  dependencies:
    - migrations
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"

deploy_qa:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-ecs:latest
  before_script:
    - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
    - export WEB_IMAGE=$IMAGE:web
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
  script:
    - echo $IMAGE
    - echo $WEB_IMAGE
    - docker pull $WEB_IMAGE
  environment:
    name: qa
    url: https://app.domain.com
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"
It is failing with the error: /bin/sh: eval: line 153: apt-get: not found
Like #slauth said in his comment, the docker/compose image is based on Alpine Linux, which uses the apk package manager, not apt. However, you most likely wouldn't be able to use a Debian image, since you need the functionality of docker/compose. In that case, you can use apk to install Python instead of apt-get, just like you're installing bash in the script section of this job:
apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
(This comes from a related answer here).
However, installing and updating packages in a CI/CD pipeline is generally bad practice, since depending on the number of pipelines you run, it can significantly slow down your development process. Instead, you can create your own Docker images based on whichever image you need, and install your packages there. For example, you can create a new image based on docker/compose and install python, bash, etc. there. Then push the new image either to Docker Hub, GitLab's built-in Docker registry, or another registry you might have available. Finally, in your .gitlab-ci.yml file, you simply change docker/compose to your new image.
For more information on this part, you can see another answer I wrote for a similar question here.
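For example, a custom image along those lines might start from a Dockerfile like this (a sketch only; the package list is an assumption based on what this pipeline needs, not something from the original job):
# Hypothetical custom image: docker/compose plus the Python tooling the build job needs
FROM docker/compose:1.25.4
# Alpine packages: python3, pip and bash, with python pointing at python3
RUN apk add --update --no-cache python3 py3-pip bash \
    && ln -sf python3 /usr/bin/python
You would then push this image to a registry you control and point the build job's image: at it instead of docker/compose:1.25.4.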

how to deploy to aws using ci/cd for zappa(python)

I'm using Zappa to deploy on AWS, and I wanted to implement CI/CD on AWS.
So I created a pipeline and successfully set up AWS CodeCommit and AWS CodeBuild.
However, I'm unable to deploy using AWS CodeDeploy.
The buildspec.yaml looks like this:
version: 0.2
phases:
  install:
    commands:
      - echo Setting up virtualenv
      - python -m venv venv
      - source venv/bin/activate
      - echo Installing requirements from file
      - pip install -r requirements.txt
  build:
    commands:
      - echo Build started on `date`
      - echo Building and running tests
      - python tests.py
      - flask db upgrade
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Starting deployment
      - zappa update dev
      - echo Deployment completed
How should I execute zappa deploy or zappa update on AWS?
I'm not sure how to create the appspec.yaml file.
Please help, I'm stuck!
Here's a buildspec.yml file that I use. You could adjust this to suit your needs (for example, including the DB upgrade command).
version: 0.2
phases:
  install:
    commands:
      - mkdir /tmp/src/
      - mv $CODEBUILD_SRC_DIR/* /tmp/src/
      - cd /tmp/src/
      - python3 -m venv docker_env && source docker_env/bin/activate && pip install --upgrade pip==9.0.3 && pip install -r requirements.txt && zappa update production && deactivate && rm -rf docker_env
  post_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - rm -rf /tmp/src/
      - echo Build completed on `date`
Note that this is using the Docker image danielwhatmuff/zappa:python3.6 in CodeBuild. I use this image as it's based on AWS Lambda and has been tuned for Zappa.
Zappa update to Code Deploy:
Your buildspec.yaml looks fairly good, but there is one important point to consider.
post_build will always run regardless of build success or failure, which is useful because debug information can be pulled from a failed build.
Either check the reason for the failure in the build log, or modify your yml to look like the version below (caution: this is only a draft change; test it before using it in your systems):
version: 0.2
phases:
  install:
    commands:
      - yum -y groupinstall development
      - yum -y install zlib-devel
      - yum -y install openssl-devel
      - wget https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tar.xz
      - tar xJf Python-3.6.0.tar.xz
      - cd Python-3.6.0
      - ./configure
      - make
      - make install
      - ln -s /usr/local/bin/python3.6 /usr/bin/python3
      - curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
      - python3 get-pip.py
      - pip3 install virtualenv
      - virtualenv -p /usr/bin/python3 venv
      - source venv/bin/activate
      - pip3 install -r requirements.txt
  build:
    commands:
      - echo Build started on `date`
      - echo Building and running tests
      - python3 tests.py
      - flask db upgrade
  post_build:
    commands:
      - if [ $CODEBUILD_BUILD_SUCCEEDING = 1 ]; then echo Build completed on `date`; echo Starting deployment; zappa update dev; else echo Build failed ignoring deployment; fi
      - echo Deployment completed
Hope it answers.
Zappa update to AWS
Below are the steps to do a Zappa update on AWS:
1. Configure AWS with an IAM user.
2. Configure the AWS CLI on the local host:
   a. pip install awscli
   b. aws configure
3. Call zappa init; it will generate zappa_settings.json based on the details you provide.
4. zappa deploy <name provided for the environment in step 3>
Now your application is deployed to AWS. Whenever you need to update it, call:
zappa update <name provided for the environment in step 3>
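Put together as shell commands, the sequence might look like this (a sketch; "dev" is just an example environment name, and zappa itself also has to be pip-installed):
pip install awscli zappa   # install the AWS CLI and Zappa
aws configure              # enter the IAM user's access key, secret key and region
zappa init                 # interactively generates zappa_settings.json
zappa deploy dev           # first deployment of the "dev" environment
zappa update dev           # subsequent updates of the same environment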

GitLab issues connecting to us-gov-west-1

There's a GitLab.com update rolling out today, and I'm seeing issues connecting to a particular AWS region with Ansible: us-gov-west-1.
This is odd, since in my CI job I'm able to use the AWS CLI just fine:
CI build step:
$ aws ec2 describe-instances
Output (truncated):
{
    "Reservations": [
        {
            "Instances": [
                {
                    "Monitoring": {
                        "State": "disabled"
                    },
                    "PublicDnsName": "ec2-...
The very next build step is as follows; notice that it fails to connect to the region:
CI build step:
$ ansible-playbook -vvv -i inventory/ec2.py -e ansible_ssh_private_key_file=aws-keypairs/gitlab_keypair.pem playbooks/deploy.yml
Output (truncated):
Using /builds/me/my-project/ansible.cfg as config file
ERROR! Attempted to execute "inventory/ec2.py" as inventory script: Inventory script (inventory/ec2.py) had an execution error: region name: us-gov-west-1 likely not supported, or AWS is down. connection to region failed.
ERROR: Job failed: exit code 1
Is anyone else seeing this?
It was working this morning. Any idea why this might be failing now?
I wrote a small Python script to dive deeper into boto. When I googled how to list the regions, I was reminded of the differences between boto 2 and boto 3. Then I reviewed the mechanism I was using to install boto. It turned out the boto installation was the problem.
Here’s the buggy version of my .gitlab-ci.yml file:
image: ansible/ansible:ubuntu1604

test_aws:
  stage: deploy
  before_script:
    - apt-get update
    - apt-get -y install python
    - apt-get -y install python-boto python-pip
    - pip install awscli
  script:
    - 'aws ec2 describe-instances'

deploy_app:
  stage: deploy
  before_script:
    - apt-get update
    - apt-get -y install python
    - apt-get -y install python-boto python-pip
    - pip install awscli
  script:
    - 'chmod 400 aws-keypairs/gitlab_keypair.pem'
    - 'ansible-playbook -vvv -i inventory/ec2.py -e ansible_ssh_private_key_file=aws-keypairs/gitlab_keypair.pem playbooks/deploy.yml'
And here’s the fixed version:
image: ansible/ansible:ubuntu1604

all_in_one:
  stage: deploy
  before_script:
    - rm -rf /var/lib/apt/lists/*
    - apt-get update
    - apt-get -y install python python-pip
    - pip install boto==2.48.0
    - pip install awscli
    - pip install ansible==2.2.2.0
  script:
    - 'chmod 400 aws-keypairs/gitlab_keypair.pem'
    - 'aws ec2 describe-instances'
    - 'python ./boto_debug.py'
    - 'ansible-playbook -vvv -i inventory/ec2.py -e ansible_ssh_private_key_file=aws-keypairs/gitlab_keypair.pem playbooks/deploy.yml'
Notice that I switched from using apt-get install to using pip install. Hopefully others will come across this post in the future and avoid installing boto with apt-get -y install python-boto!
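boto_debug.py itself isn't shown here, but a quick check along these lines (a sketch, assuming boto 2 is what ec2.py uses) would reveal which boto the job picked up and whether it knows about us-gov-west-1:
# Print the installed boto version and the EC2 regions it knows about
python -c 'import boto; print(boto.__version__)'
python -c 'import boto.ec2; print(sorted(r.name for r in boto.ec2.regions()))'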

circleci 2.0 can't find awscli

I'm using CircleCI 2.0 and it can't find aws, but their documentation clearly says that aws is installed by default.
When I use this circle.yml:
version: 2
jobs:
  build:
    working_directory: ~/rian
    docker:
      - image: node:boron
    steps:
      - checkout
      - run:
          name: Pre-Dependencies
          command: mkdir ~/rian/artifacts
      - restore_cache:
          keys:
            - rian-{{ .Branch }}-{{ checksum "yarn.lock" }}
            - rian-{{ .Branch }}
            - rian-master
      - run:
          name: Install Dependencies
          command: yarn install
      - run:
          name: Test
          command: |
            node -v
            yarn run test:ci
      - save_cache:
          key: rian-{{ .Branch }}-{{ checksum "yarn.lock" }}
          paths:
            - "~/.cache/yarn"
      - store_artifacts:
          path: ~/rian/artifacts
          destination: prefix
      - store_test_results:
          path: ~/rian/test-results
      - deploy:
          command: aws s3 sync ~/rian s3://rian-s3-dev/ --delete
the following error occurs:
/bin/bash: aws: command not found
Exited with code 127
So if I edit the code this way:
- deploy:
    command: |
      apt-get install awscli
      aws s3 sync ~/rian s3://rian-s3-dev/ --delete
then I get another kind of error:
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package awscli
Exited with code 100
Does anyone know how to fix this?
The document you are reading is for CircleCI 1.0; the one for 2.0 is here:
https://circleci.com/docs/2.0/
In CircleCI 2.0, you can use your favorite Docker image. The image you are currently setting is node:boron, which does not include the aws command.
https://hub.docker.com/_/node/
https://github.com/nodejs/docker-node/blob/14681db8e89c0493e8af20657883fa21488a7766/6.10/Dockerfile
If you just want to make it work for now, you can install the aws command yourself in circle.yml.
apt-get update && apt-get install -y awscli
However, to take full advantage of Docker's benefits, it is recommended that you build a custom Docker image that contains the necessary dependencies such as the aws command.
You can write your custom aws-cli Docker image something like this:
FROM circleci/python:3.7-stretch
ENV AWS_CLI_VERSION=1.16.138
RUN sudo pip install awscli==${AWS_CLI_VERSION}
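Once such an image is built and pushed to a registry, the deploy job can reference it directly; a sketch (the image name below is a placeholder, not a published image):
deploy:
  docker:
    - image: your-dockerhub-user/awscli:1.16.138
  steps:
    - checkout
    - run: aws s3 sync build s3://<URL> --delete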
I hit this issue when deploying to AWS Lambda functions and pushing files to an S3 bucket. I finally solved it and then built a Docker image to save the time of installing the AWS CLI every time. Here are links to the image and the repo:
https://github.com/wilson208/circleci-awscli
https://hub.docker.com/r/wilson208/circleci-awscli/
Fire a PR in or open an issue if you need anything added to the image and I will get to it when I can.
Edit:
Also, check out the README on GitHub for examples of deploying a package to Lambda or pushing files to S3.

How to deploy an app from circleCI to aws eb

Currently I have a circle.yml which looks like:
dependencies:
  pre:
    - rvm install 2.3.3
    - sudo pip install -U pip setuptools
    - sudo apt-get install python-dev
    - sudo pip install awsebcli
    - gem install bundler
    - bundle install
general:
  branches:
    only:
      - st5-ci
deployment:
  production:
    branch: xt5-ci
    commands:
      - eb init
      - eb deploy --profile default
However, the eb init command gets stuck forever and doesn't move forward, and if I try to run the yml without init, eb deploy fails.
I am pretty new to the AWS tools and CLI; can someone please help with this?
eb init creates a file at ./.elasticbeanstalk/config.yml. Perhaps you can try adding that manually and see if it works.
Its contents would look something like this:
branch-defaults:
  develop:
    environment: yourdevelopbranch
deploy:
  artifact: build/yourartifact.war
global:
  application_name: your-application-name
  default_ec2_keyname: ec2-key-pair-name
  default_platform: 64bit Amazon Linux 2015.03 v1.4.3 running Ruby 2.2 (Puma)
  default_region: us-east-1
  profile: eb-cli
  sc: git
eb init needs some inputs. Look at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb3-init.html
OR
you can try eb init --profile profilename. So for the default profile it would be eb init --profile default.
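If you want to avoid the interactive prompts altogether in CI, eb init also accepts its inputs as arguments; a sketch (the application name, platform and environment below are placeholders):
# Non-interactive init: pass the values eb init would otherwise prompt for
eb init my-application --platform ruby --region us-east-1 --profile default
# Then deploy a named environment with the same profile
eb deploy my-environment --profile default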