There's a GitLab.com update rolling out today, and I'm seeing issues connecting to a particular AWS region with Ansible: us-gov-west-1.
This is odd, since in my CI job I'm able to use the AWS CLI just fine:
CI build step:
$ aws ec2 describe-instances
Output (truncated):
{
    "Reservations": [
        {
            "Instances": [
                {
                    "Monitoring": {
                        "State": "disabled"
                    },
                    "PublicDnsName": "ec2-...
The very next build step, however, fails to connect to the region:
CI build step:
$ ansible-playbook -vvv -i inventory/ec2.py -e ansible_ssh_private_key_file=aws-keypairs/gitlab_keypair.pem playbooks/deploy.yml
Output (truncated):
Using /builds/me/my-project/ansible.cfg as config file
ERROR! Attempted to execute "inventory/ec2.py" as inventory script: Inventory script (inventory/ec2.py) had an execution error: region name: us-gov-west-1 likely not supported, or AWS is down. connection to region failed.
ERROR: Job failed: exit code 1
Is anyone else seeing this?
It was working this morning. Any idea why this might be failing now?
I wrote a small Python script to dive deeper into boto. When I googled how to list the regions, I was reminded of the differences between boto 2 and boto 3. Then I reviewed the mechanism I was using to install boto. It turned out the boto installation was the problem.
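The script itself isn't reproduced in this post, but a minimal sketch of what such a boto 2 region check might look like (hypothetical contents for boto_debug.py; the region name is the one from the failing job):

#!/usr/bin/env python
# Hypothetical sketch of boto_debug.py: list the EC2 regions boto knows about
# and try to connect to the one the CI job needs.
import boto
import boto.ec2

print("boto version: %s (from %s)" % (boto.__version__, boto.__file__))
print("known EC2 regions: %s" % sorted(r.name for r in boto.ec2.regions()))

# connect_to_region returns None when boto does not recognise the region,
# which matches the "likely not supported" error from inventory/ec2.py.
conn = boto.ec2.connect_to_region("us-gov-west-1")
print("us-gov-west-1 connection: %s" % conn)

Running it against each boto installation makes it easy to compare the region lists.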
Here’s the buggy version of my .gitlab-ci.yml file:
image: ansible/ansible:ubuntu1604

test_aws:
  stage: deploy
  before_script:
    - apt-get update
    - apt-get -y install python
    - apt-get -y install python-boto python-pip
    - pip install awscli
  script:
    - 'aws ec2 describe-instances'

deploy_app:
  stage: deploy
  before_script:
    - apt-get update
    - apt-get -y install python
    - apt-get -y install python-boto python-pip
    - pip install awscli
  script:
    - 'chmod 400 aws-keypairs/gitlab_keypair.pem'
    - 'ansible-playbook -vvv -i inventory/ec2.py -e ansible_ssh_private_key_file=aws-keypairs/gitlab_keypair.pem playbooks/deploy.yml'
And here’s the fixed version:
image: ansible/ansible:ubuntu1604

all_in_one:
  stage: deploy
  before_script:
    - rm -rf /var/lib/apt/lists/*
    - apt-get update
    - apt-get -y install python python-pip
    - pip install boto==2.48.0
    - pip install awscli
    - pip install ansible==2.2.2.0
  script:
    - 'chmod 400 aws-keypairs/gitlab_keypair.pem'
    - 'aws ec2 describe-instances'
    - 'python ./boto_debug.py'
    - 'ansible-playbook -vvv -i inventory/ec2.py -e ansible_ssh_private_key_file=aws-keypairs/gitlab_keypair.pem playbooks/deploy.yml'
Notice that I switched from installing boto with apt-get to installing a pinned version with pip. Hopefully others will come across this post in the future and avoid installing boto with apt-get -y install python-boto!
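If you want to confirm which boto a job is actually picking up, a quick check like this (a minimal sketch that can go in before_script) prints the version and import path:

# Show which boto is being used and where it was installed from
python -c "import boto; print(boto.__version__); print(boto.__file__)"
pip show boto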
Related
I am new to AWS CodeBuild and CodePipeline. What I am wondering is whether there is a way to fail a pipeline or CodeBuild project if the coverage of my test cases is below a certain threshold, e.g. 80%.
I have an angular project configured with Karma & Jasmine.
While researching I found that I can use the command aws codebuild stop-build --id ${CODEBUILD_BUILD_ID}, but I'm not sure whether that will help me here.
Buildspec.yml
version: 0.2
phases:
  install:
    commands:
      # Install the Angular CLI
      - npm install -g @angular/cli
      # Install puppeteer as a dev dependency
      - npm i -D puppeteer
      - npm i -D @angular-devkit/build-angular
      # Print out missing libs
      - echo "Missing Libs" || ldd ./node_modules/puppeteer/.local-chromium/linux-549031/chrome-linux/chrome | grep not
      # Upgrade apt
      - apt-get upgrade
      # Update libs
      - apt-get update
      # Install apt-transport-https
      - apt-get install -y apt-transport-https
      # Use apt to install the Chrome dependencies
      - apt-get install -y libxcursor1
      - apt-get install -y libgtk-3-dev
      - apt-get install -y libxss1
      - apt-get install -y libasound2
      - apt-get install -y libnspr4
      - apt-get install -y libnss3
      - apt-get install -y libx11-xcb1
      # Print out missing libs
      - echo "Missing Libs" || ldd ./node_modules/puppeteer/.local-chromium/linux-549031/chrome-linux/chrome | grep not
      # Install project dependencies
      - npm install
      # - usermod -g 0 -o codebuild-user
  build:
    # run-as: codebuild-user
    commands:
      # Run headless Chrome tests
      - ng test --browsers=ChromeHeadlessNoSandbox --watch=false --code-coverage
      - printenv
      # I want to condition on the ng test coverage output here and fail the
      # pipeline if it is below the threshold (see the sketch after this buildspec).
      - aws codebuild stop-build --id ${CODEBUILD_BUILD_ID}
artifacts:
  files:
    - imagedefinitions.json
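One approach (not taken from the question) is to make the step right after ng test exit non-zero when coverage is too low; any failing build command fails the CodeBuild project, which in turn fails the pipeline stage, so stop-build isn't needed. A rough sketch, assuming the karma coverage setup is configured to write an istanbul json-summary report to coverage/coverage-summary.json (that reporter and path are assumptions), to add after the ng test command:

# Fail the build if total line coverage is below 80%.
# Assumes coverage/coverage-summary.json exists; jq may need an
# apt-get install -y jq in the install phase.
- PCT=$(jq '.total.lines.pct' coverage/coverage-summary.json); echo "Line coverage is ${PCT}%"; awk -v pct="$PCT" 'BEGIN { exit (pct < 80) ? 1 : 0 }'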
AWS beginner here
I have a repo in GitLab with a Python script and a requirements.txt file, and the Python script has to be deployed to an Ubuntu EC2 instance (and triggered only once a day) via GitLab CI. I am creating a deployment package of the repo with CI and uploading the zipped package to an S3 bucket. My .gitlab-ci.yml file:
image: ubuntu:18.04

variables:
  AWS_DEFAULT_REGION: eu-central-1
  GIT_SUBMODULE_STRATEGY: recursive
  S3_TEST_BUCKET: $BUCKET_UNPACK

stages:
  - deploy

TestJob:
  stage: deploy
  script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv
    - mv iso_forest_ad.py ~ # This is the python script
    - mv requirements.txt ~
    # Setup virtual environment
    - mkdir ~/forEC2
    - cd ~/forEC2
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forEC2/venv/lib/python3.7/site-packages/
    # Package environment and dependencies
    - cd ~/forEC2/venv/lib/python3.7/site-packages/
    - zip -r9 ~/forEC2/archive.zip .
    - cd ~
    - zip -g ~/forEC2/archive.zip iso_forest_ad.py
    - pip install awscli --upgrade
    - export PATH=$PATH:~/.local/bin
    - aws configure set aws_access_key_id $AWS_TEST_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_TEST_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws s3 cp ~/forEC2/archive.zip $BUCKET_UNPACK/anomaly-detection-deployment.zip
Contents of requirements.txt
-i https://pypi.org/simple
joblib==0.16.0; python_version >= '3.6'
numpy==1.19.0
pandas==1.0.5
psycopg2-binary==2.8.5
python-dateutil==2.8.1; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
pytz==2020.1
scikit-learn==0.23.1
scipy==1.5.1; python_version >= '3.6'
six==1.15.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
sqlalchemy==1.3.18
threadpoolctl==2.1.0; python_version >= '3.5'
Now, I would like to transfer the script and install the dependencies in the ubuntu EC2 instance and run the script.
I know one way would be to connect to the EC2 instance and do
aws s3 sync s3://s3-bucket-name/folder /home/ubuntu
as suggested in the post: Moving files from s3 to EC2 instance. But doing this, I was not able to install the dependencies from the requirements.txt file.
I would like to know if there is an alternative way (perhaps with a shell script or something else?) of achieving this. Since I am using Ubuntu locally too, using PuTTY is not an option for me.
The link you've posted already shows one way of doing this. Namely, by using UserData.
Therefore, you would have to develop a bash script that not only downloads the zip file as shown in the link, but also unpacks it and installs the packages from requirements.txt, along with any other dependencies or configuration setup you require.
So the UserData for your instance would be something like this (pseudo-code, this is only a rough example):
#!/bin/bash
apt update
apt install -y zip unzip awscli python3-pip   # awscli is not normally on ubuntu
aws s3 cp s3://optimal-aws-nz-play-config/package.zip .
unzip package.zip
cd package
pip3 install -r ./requirements.txt
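Since the question also mentions the script should run only once a day, the same UserData could register a cron entry at the end (the schedule, user and paths below are illustrative assumptions):

# Run the deployed script daily at 06:00 UTC as the ubuntu user.
echo "0 6 * * * ubuntu /usr/bin/python3 /home/ubuntu/package/iso_forest_ad.py" > /etc/cron.d/iso-forest-ad
chmod 644 /etc/cron.d/iso-forest-ad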
If this is something you do often, you could create a launch template with the instance settings and the UserData to automatically execute these steps for each instance launched from the template.
There are also other possibilities, involving CodeDeploy, CodePipeline, but plain old UserData would be a good start.
An alternative would be to use SSM Run Command. The execution of the command would be triggered from GitLab CI after the new package has been uploaded to S3.
An example of how to invoke the run-command is in the docs:
aws ssm send-command \
--document-name "AWS-RunPowerShellScript" \
--parameters commands=["echo helloWorld"] \
--targets Key=tag:Env,Values=Dev,Test
Instead of echo helloWorld you would run your own bash commands, and for a Linux instance you would use the AWS-RunShellScript document rather than AWS-RunPowerShellScript.
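As a rough sketch of that trigger (the instance ID, bucket and paths are placeholders, and the instance needs the SSM agent plus an instance profile that allows Systems Manager), the GitLab job that uploads the package could append something like:

# Tell the instance to pull and unpack the freshly uploaded package via SSM Run Command.
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets Key=InstanceIds,Values=i-0123456789abcdef0 \
  --parameters 'commands=["aws s3 cp s3://my-bucket/anomaly-detection-deployment.zip /home/ubuntu/", "cd /home/ubuntu && unzip -o anomaly-detection-deployment.zip"]'

The GitLab CI credentials would also need permission to call ssm:SendCommand.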
I am trying to run a CodePipeline with GitHub as the source, CodeBuild as the builder, and Elastic Beanstalk as the server infrastructure. I am using the Docker image amazonlinux:2018.03, which works perfectly locally, but during the CodeBuild step of the pipeline I get the following error:
docker-compose: command not found
I have tried to install Docker, docker-compose, etc., but it keeps giving me this error. I've set the build to use a buildspec.yaml file:
version: 0.2
phases:
  install:
    commands:
      - echo "installing"
      - sudo yum install -y yum-utils
      - sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      - sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      - sudo chmod +x /usr/local/bin/docker-compose
      - sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
      - docker-compose --version
  build:
    commands:
      - bash compose-local.sh
compose-local.sh:
#!/bin/bash
sudo docker-compose up
I have been trying for a couple of days, and I am not sure whether I am overlooking something with CodeBuild.
Run /usr/local/bin/docker-compose up instead.
If you are using an Ubuntu standard 2.0+ or Amazon Linux 2 image, you need to specify docker under runtime-versions in the install phase of the buildspec.yml file, e.g.:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image with docker-compose...
      - docker-compose -f docker-compose.yml build
Also, please make sure to enable privileged mode: https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html#create-project-console
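If you would rather enable it from the CLI than the console, something along these lines should work (the project name, image and compute type are placeholders for your project's actual values):

# Turn on privileged mode so the Docker daemon can run inside the build container.
aws codebuild update-project \
  --name my-project \
  --environment type=LINUX_CONTAINER,image=aws/codebuild/standard:2.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true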
I'm using Zappa to deploy on AWS, and I wanted to implement CI/CD on AWS.
So I created a pipeline and successfully set up AWS CodeCommit and AWS CodeBuild.
I'm unable to deploy the same using AWS CodeDeploy.
The Buildspec.yaml looks like this:
version: 0.2
phases:
  install:
    commands:
      - echo Setting up virtualenv
      - python -m venv venv
      - source venv/bin/activate
      - echo Installing requirements from file
      - pip install -r requirements.txt
  build:
    commands:
      - echo Build started on `date`
      - echo Building and running tests
      - python tests.py
      - flask db upgrade
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Starting deployment
      - zappa update dev
      - echo Deployment completed
How should I execute zappa deploy or zappa update on AWS?
I'm not sure how to create the appspec.yaml file either.
Please HELP! Stuck!!
Here's a buildspec.yml file that I use. You could adjust this to suit your needs (for example, including the DB upgrade command).
version: 0.2
phases:
  install:
    commands:
      - mkdir /tmp/src/
      - mv $CODEBUILD_SRC_DIR/* /tmp/src/
      - cd /tmp/src/
      - python3 -m venv docker_env && source docker_env/bin/activate && pip install --upgrade pip==9.0.3 && pip install -r requirements.txt && zappa update production && deactivate && rm -rf docker_env
  post_build:
    commands:
      - cd $CODEBUILD_SRC_DIR
      - rm -rf /tmp/src/
      - echo Build completed on `date`
Note that this is using the Docker image danielwhatmuff/zappa:python3.6 in CodeBuild. I use this image as it's based on AWS Lambda and has been tuned for Zappa.
Zappa update to Code Deploy:
Your buildspec.yaml looks fairly good, but there is one important point to consider.
post_build always runs, regardless of whether the build succeeded or failed. That is useful for pulling debug information from a failed build, but it also means your zappa update would run even after a failed build.
Either check the reason for the failure in the build log, or modify your yml to look like the version below (caution: this is only a draft change, test it before using it in your systems):
version: 0.2
phases:
  install:
    commands:
      - yum -y groupinstall development
      - yum -y install zlib-devel
      - yum -y install openssl-devel
      - wget https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tar.xz
      - tar xJf Python-3.6.0.tar.xz
      - cd Python-3.6.0
      - ./configure
      - make
      - make install
      - ln -s /usr/local/bin/python3.6 /usr/bin/python3
      - curl "https://bootstrap.pypa.io/get-pip.py" -o "get-pip.py"
      - python3 get-pip.py
      - pip3 install virtualenv
      - virtualenv -p /usr/bin/python3 venv
      - source venv/bin/activate
      - pip3 install -r requirements.txt
  build:
    commands:
      - echo Build started on `date`
      - echo Building and running tests
      - python3 tests.py
      - flask db upgrade
  post_build:
    commands:
      - if [ $CODEBUILD_BUILD_SUCCEEDING = 1 ]; then echo Build completed on `date`; echo Starting deployment; zappa update dev; else echo Build failed ignoring deployment; fi
      - echo Deployment completed
Hope it answers.
Zappa update to AWS
Below are the steps to do a Zappa update on AWS:
1. Configure AWS with an IAM user.
2. Configure the AWS CLI on the local host:
   a. pip install awscli
   b. aws configure
3. Run zappa init; it will generate zappa_settings.json based on the details provided.
4. Run zappa deploy <name provided for the environment in step 3>.
Now your application is deployed to AWS. Whenever you need to update it, run:
zappa update <name provided for the environment in step 3>
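As a compact sketch, the same steps as shell commands (dev is just the example stage name from the question):

# Install the tooling, configure credentials, then deploy and update with Zappa.
pip install awscli zappa
aws configure          # enter the IAM user's access key, secret key and default region
zappa init             # interactively generates zappa_settings.json
zappa deploy dev       # first deployment of the stage
zappa update dev       # later updates, e.g. from the CodeBuild post_build phase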
I am trying to migrate CircleCI from v1.0 to v2.0.
At first I couldn't install awscli, but I finally got it installed with the config below; now I get another error saying the aws command cannot be found.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - node_modules
  deploy:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - run:
          name: Install yarn
          command: yarn install
      - run:
          name: Install awscli
          command: |
            sudo apt-get install python-pip python-dev
            pip install 'pyyaml<4,>=3.10' awscli --upgrade --user
      - run:
          name: AWS S3
          command: aws s3 sync build s3://<URL> --delete
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
It show "aws: command not found". I'm not sure that I do something wrong or not but I want to know what's the problem and how to solve it. Thanks.
I would rework your config. Each job should have a single focus. For deployment, for example, you don't need Node.js; you need the AWS CLI, so use an image that provides it.
version: 2
jobs:
  build:
    docker:
      - image: circleci/node:8.9.1
    steps:
      - checkout
      - restore_cache:
          key: dependency-cache-{{ checksum "package.json" }}
      - save_cache:
          key: dependency-cache-{{ checksum "package.json" }}
          paths:
            - node_modules
      - persist_to_workspace:
          root: /home/circleci
          paths: project
  deploy:
    docker:
      - image: cibuilds/aws:1.16.1
    steps:
      - checkout
      - attach_workspace:
          at: /home/circleci
      - run:
          name: AWS S3
          command: aws s3 sync build s3://<URL> --delete
workflows:
  version: 2
  build-and-deploy:
    jobs:
      - build
      - deploy:
          requires:
            - build
          filters:
            branches:
              only: master
Try the following steps (taken from their v2 docs):
steps:
  - run:
      name: Install PIP
      command: sudo apt-get install python-pip python-dev
  - run:
      name: Install awscli
      command: sudo pip install awscli
  - run:
      name: Deploy to S3
      command: aws s3 sync build s3://<URL> --delete
This method of installing awscli seems to work fine on a variety of systems. It was tested on circleci/openjdk:8-jdk and requires no additional installation.
Edit
It seems that the node image lacks libpython-dev.
##################
# Install AWS CLI
##################
# For node images on Circle, install libpython-dev
sudo apt-get install -y libpython-dev
# Download awscli bundle
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
# Unzip the downloaded bundle
unzip awscli-bundle.zip
# Run the install script and install to ~/bin/aws directory
./awscli-bundle/install -b ~/bin/aws
After that, to run awscli commands, specify the full path to the aws executable, for example:
~/bin/aws s3 ls
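Alternatively (a small sketch using CircleCI's standard BASH_ENV mechanism), you can put ~/bin on the PATH so later steps can call aws without the full path:

# Persist the PATH change for subsequent run steps in the same job.
echo 'export PATH=~/bin:$PATH' >> $BASH_ENV
source $BASH_ENV
aws s3 ls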
Resources
Helpful thread GitHub
Example GitHub repository with Circle config on node:8.9.1
The CircleCI builds