How to deploy to AWS with CircleCI 2.0?

I have a config.yaml that I use to upload code to AWS. The first version worked fine, but now it doesn't work. How can I fix it? I added a deploy section and wrote sh commands:
version: 2
jobs:
  build:
    working_directory: ~/myProject
    parallelism: 1
    shell: /bin/bash --login
    environment:
      CIRCLE_ARTIFACTS: /tmp/circleci-artifacts
      CIRCLE_TEST_REPORTS: /tmp/circleci-test-results
    docker:
      - image: circleci/build-image:ubuntu-14.04-XXL-upstart-1189-5614f37
        command: /sbin/init
    steps:
      - checkout
      - run: mkdir -p $CIRCLE_ARTIFACTS $CIRCLE_TEST_REPORTS
      - run:
          working_directory: ~/myProject
          command: nvm install 8.9.1 && nvm alias default 8.9.1
      - restore_cache:
          keys:
            - v1-dep-{{ .Branch }}-
            - v1-dep-master-
            - v1-dep-
      - run: sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu $(lsb_release -sc) main universe restricted multiverse"
      - run: sudo apt update
      - run: sudo apt-get install python2.7-dev
      - run: sudo easy_install --upgrade six
      - run: sudo pip install --upgrade urllib3==1.21.1
      - run: sudo pip install --upgrade pip
      - run: sudo pip install --upgrade blessed
      - run: sudo pip install awsebcli==3.12.3 --ignore-installed six pyyaml
      - run: rm -rf /home/ubuntu/.aws
      - run: if [ -z "${NODE_ENV:-}" ]; then export NODE_ENV=test; fi
      - run: export PATH="~/myProject/node_modules/.bin:$PATH"
      - run: npm install
      - save_cache:
          key: v1-dep-{{ .Branch }}-{{ epoch }}
          paths:
            - vendor/bundle
            - ~/virtualenvs
            - ~/.m2
            - ~/.ivy2
            - ~/.bundle
            - ~/.go_workspace
            - ~/.gradle
            - ~/.cache/bower
            - ./node_modules
      - run: npm test
      - store_test_results:
          path: /tmp/circleci-test-results
      - store_artifacts:
          path: /tmp/circleci-artifacts
      - store_artifacts:
          path: /tmp/circleci-test-results
    deploy:
      name: deploy to AWS
      production:
        branch: production
        commands:
          - bash ./deploy_prod.sh
          - eb deploy stmi-production
      staging:
        branch: master
        commands:
          - bash ./deploy_staging.sh
          - eb deploy stmi-dev
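(Note: CircleCI 2.0 does not understand the 1.0-style job-level deploy/production/staging block above; branch-gated deploys have to be expressed as separate jobs wired together with branch filters in a workflows block. A minimal sketch of that shape, reusing the script names from the config above and keeping the build job unchanged — the deploy jobs would still need the eb CLI installed, as in the build steps:

  deploy-staging:
    docker:
      - image: circleci/build-image:ubuntu-14.04-XXL-upstart-1189-5614f37
    steps:
      - checkout
      - run: bash ./deploy_staging.sh && eb deploy stmi-dev
  deploy-production:
    docker:
      - image: circleci/build-image:ubuntu-14.04-XXL-upstart-1189-5614f37
    steps:
      - checkout
      - run: bash ./deploy_prod.sh && eb deploy stmi-production
workflows:
  version: 2
  build-deploy:
    jobs:
      - build
      - deploy-staging:
          requires:
            - build
          filters:
            branches:
              only: master
      - deploy-production:
          requires:
            - build
          filters:
            branches:
              only: production
)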

This works for me:
machine: true
steps:
  - checkout
  - run:
      name: create workspace
      command: mkdir -p /tmp/workspace
  - run:
      name: Install awsebcli package
      command: |
        sudo apt-get -y -qq update
        sudo apt-get install python-pip python-dev build-essential
        sudo pip install --upgrade awsebcli
        eb --version
  - run:
      name: installing dependencies
      command: |
        npm install
  - run:
      name: deploy
      command: |
        bash deploy.sh
  - run:
      name: Removing aws config
      command: |
        rm -rf /home/circleci/.aws
  - run: ls /tmp/workspace
  - persist_to_workspace:
      root: /tmp/workspace
      paths:
        - status.txt
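A downstream job can then read the persisted file by attaching the same workspace; a minimal sketch (the notify job name is hypothetical):

notify:
  machine: true
  steps:
    - attach_workspace:
        at: /tmp/workspace
    - run: cat /tmp/workspace/status.txt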
And this is my deploy.sh:
mkdir /home/circleci/.aws
touch /home/circleci/.aws/config
chmod 600 /home/circleci/.aws/config
echo "[profile user]" > /home/circleci/.aws/config
echo "aws_access_key_id=$AWS_ACCESS_KEY_ID" >> /home/circleci/.aws/config
echo "aws_secret_access_key=$AWS_SECRET_ACCESS_KEY" >> /home/circleci/.aws/config
eb deploy $BEANSTALK_ENVIRONMENT --profile user --region $BEANSTALK_PRODUCTION_AWS_REGION &&
  echo 'Deployment Succeeded' >> /tmp/workspace/beanstalk-deploy-status.txt
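As an aside, botocore (which awsebcli uses) also reads AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY straight from the environment, so when those are set as CircleCI project variables the config file can be skipped entirely. A sketch of the shorter script, same variables as above:

#!/usr/bin/env bash
set -euo pipefail
# Credentials come from the CircleCI project environment variables;
# botocore picks them up without an ~/.aws/config file.
eb deploy "$BEANSTALK_ENVIRONMENT" --region "$BEANSTALK_PRODUCTION_AWS_REGION" &&
  echo 'Deployment Succeeded' >> /tmp/workspace/beanstalk-deploy-status.txt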

Related

Permission denied on entrypoint when trying to update Elastic Beanstalk via GitHub Actions

I feel this might be an IAM question, but I don't really know where to begin. I have a Docker-based EBS environment that works great when I update it manually. However, when I update it with GitHub Actions, the container fails with the following message.
unable to start container process: exec: "./docker/entrypoint.sh": permission denied: unknown.
My CD pipeline authenticates, pushes a new Docker image to the registry, and then updates the Dockerrun.aws.json by editing the image name. The workflow runs fine: the image is pushed, and the Dockerrun.aws.json is correct... and yet the environment fails to launch.
name: Release
on:
  push:
    tags:
      - 'v*'
jobs:
  deploy-to-aws-ebs:
    runs-on: ubuntu-latest
    environment: staging
    permissions:
      id-token: write
      contents: read
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    steps:
      - name: Check out the repository
        uses: actions/checkout@v3
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::${{ secrets.AWS_ACCOUNT_ID }}:role/ServiceRoleForEBSDeploy
          aws-region: ${{ secrets.AWS_DEFAULT_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Get tag name
        run: echo "tag=`echo ${{ github.ref }} | sed -e 's/\./-/g' | cut -c11-`-`echo ${{ github.sha }} | cut -c1-8`" >> $GITHUB_ENV
      - name: Build, tag, and push docker image to Amazon ECR
        env:
          REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          REPOSITORY: docker_repository
          IMAGE_TAG: ${{ env.tag }}
        run: |
          docker build -t $REGISTRY/$REPOSITORY:$IMAGE_TAG .
          docker push $REGISTRY/$REPOSITORY:$IMAGE_TAG
          echo "IMAGE_NAME=$REGISTRY/$REPOSITORY:$IMAGE_TAG" >> $GITHUB_ENV
      - name: Create deployment package
        run: |
          sed -e "s|<IMAGE_NAME>|${{ env.IMAGE_NAME }}|g" \
            docker/Dockerrun.aws.template.json > Dockerrun.aws.json
          cat Dockerrun.aws.json
      - name: Deploy to AWS Elastic Beanstalk
        env:
          AWS_EBS_APP_NAME: app_name
          AWS_EBS_ENV_NAME: env_name
        run: |
          aws s3 cp Dockerrun.aws.json s3://${{ secrets.AWS_S3_BUCKET_NAME }}/versions/${{ env.tag }}-Dockerrun.aws.json
          aws elasticbeanstalk create-application-version \
            --application-name $AWS_EBS_APP_NAME \
            --source-bundle S3Bucket=${{ secrets.AWS_S3_BUCKET_NAME }},S3Key=versions/${{ env.tag }}-Dockerrun.aws.json \
            --version-label ${{ env.tag }}
          aws elasticbeanstalk update-environment \
            --application-name $AWS_EBS_APP_NAME \
            --environment-name $AWS_EBS_ENV_NAME \
            --version-label ${{ env.tag }}
Meanwhile, the Dockerfile is your basic Django stuff.
FROM python:3.10-slim-buster
ARG APP_HOME=/code
ARG USERNAME=user101
WORKDIR ${APP_HOME}
RUN addgroup --system ${USERNAME} \
    && adduser --system --ingroup ${USERNAME} ${USERNAME}
RUN apt-get update --yes --quiet && apt-get install --no-install-recommends --yes --quiet \
    # dependencies for building Python packages
    build-essential \
    # psycopg2 dependencies
    libpq-dev \
    # dev utils
    git \
    # cleanup
    && rm -rf /var/lib/apt/lists/*
COPY --chown=${USERNAME}:${USERNAME} . ${APP_HOME}
RUN pip install --upgrade pip
RUN pip install poetry
RUN poetry install --no-interaction --no-ansi
EXPOSE 80
USER ${USERNAME}
ENTRYPOINT ["./docker/entrypoint.sh"]
CMD ["gunicorn", "config.wsgi:application", "--bind", ":80"]
My guess is that EBS is trying to build the environment with the GitHub Actions service user? Does that make sense? Should it be using the user defined in the Dockerfile?
This has nothing to do with IAM permissions.
You just need to make your script executable:
$ chmod +x ./docker/entrypoint.sh
You can also run it inside the Dockerfile before the ENTRYPOINT command:
RUN chmod +x ./docker/entrypoint.sh
ENTRYPOINT ["./docker/entrypoint.sh"]
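Note that chmod on your machine only helps if the executable bit actually lands in the repository that Actions checks out; git tracks the bit, so one way is to flip it in the repo itself (the commit message is illustrative):

# record the executable bit in git so a fresh checkout keeps it
git update-index --chmod=+x docker/entrypoint.sh
git commit -m "Make entrypoint executable"
git push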

django and gurobi in docker have permission issues

I am trying to create a docker image for Django + Gurobi.
Container runs as root by default.
Gurobi does not want to run as root, since the license is issued to a non-root user.
If I switch to a non-root user, Django complains "attempt to write a read-only database" when using /db.sqlite3.
chown+chmod on just /apps, /db.sqlite3, and /usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3 did not help.
The problem seems to go away if I chown and chmod 777 the entire container, but that is a bad idea.
What is the solution? Below is the Dockerfile:
FROM python:3.9
COPY . .
ADD data .
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV APP_USER=user32
ENV APP_HOME=/home/$APP_USER
# install python dependencies
RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --upgrade pip
RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host files.pythonhosted.org --no-cache-dir -r requirements.txt
RUN apt-get update && apt-get install -y inetutils-ping
RUN tar xvzf gurobi9.5.0_linux64.tar.gz
ENV GUROBI_HOME /gurobi950/linux64
RUN cd /gurobi950/linux64 && python setup.py install
RUN rm gurobi9.5.0_linux64.tar.gz
RUN groupadd -r $APP_USER && \
useradd -r -g $APP_USER -d $APP_HOME -s /sbin/nologin -c "Docker image user" $APP_USER
ENV TZ 'America/Los_Angeles'
RUN echo $TZ > /etc/timezone && apt-get update && \
apt-get install -y tzdata && \
rm /etc/localtime && \
ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && \
dpkg-reconfigure -f noninteractive tzdata && \
apt-get clean
#RUN chown -R $APP_USER:$APP_USER $APP_HOME
#RUN chown -R $APP_USER:$APP_USER /apps/
#RUN chown -R $APP_USER:$APP_USER /data/
#RUN chown -R $APP_USER:$APP_USER /gurobi950/
#RUN chown -R $APP_USER:$APP_USER /usr/local/lib/python3.9/
#RUN chown -R $APP_USER:$APP_USER /db.sqlite3
#RUN chmod -R 777 /db.sqlite3
#RUN chmod -R 777 /apps
#RUN chmod -R 777 /usr/local/lib/python3.9
#RUN chmod -R 777 /gurobi950
#RUN chown -R $APP_USER:$APP_USER /
#RUN chmod -R 777 /
ENV GRB_LICENSE_FILE /gurobi.lic
ENV LD_LIBRARY_PATH=/gurobi950/linux64/lib
RUN /gurobi950/linux64/bin/gurobi_cl --version
WORKDIR /
# running migrations
RUN python manage.py migrate
USER $APP_USER
# gunicorn
CMD ["gunicorn", "--config", "gunicorn-cfg.py", "core.wsgi"]
Here's the error on navigating to the home page
Request Method: POST
Request URL: http://localhost/login/?next=/
Django Version: 3.2.6
Exception Type: OperationalError
Exception Value:
attempt to write a readonly database
Exception Location: /usr/local/lib/python3.9/site-packages/django/db/backends/sqlite3/base.py, line 423, in execute
Python Executable: /usr/local/bin/python
Python Version: 3.9.9
Python Path:
['/',
'/usr/local/bin',
'/usr/local/lib/python39.zip',
'/usr/local/lib/python3.9',
'/usr/local/lib/python3.9/lib-dynload',
'/usr/local/lib/python3.9/site-packages',
'/usr/local/lib/python3.9/site-packages/IPython/extensions']
By default, a commercial Gurobi license does not allow running inside a Docker container. Please contact Gurobi support at https://support.gurobi.com to discuss alternatives.
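On the separate "attempt to write a readonly database" error: SQLite needs write access to the directory containing the database file, not just the file itself, because it creates journal/WAL files next to it; here the database sits in /, which the non-root user cannot write. A sketch of the usual fix, with an illustrative /data path:

# hypothetical Dockerfile excerpt: keep the SQLite file in a directory
# owned by the app user, so SQLite can create its journal files there
RUN mkdir -p /data && chown $APP_USER:$APP_USER /data
# ...then point Django's DATABASES["default"]["NAME"] at /data/db.sqlite3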

Login to aws through Gitlab CI-CD pipeline

My .gitlab-ci.yml pipeline worked like a charm over the last year, but today, out of nowhere, I am unable to log in to my AWS account; it fails with this error:
$ echo `aws ecr get-login --no-include-email --region eu-central-1` | sh
Traceback (most recent call last):
File "/usr/local/bin/aws", line 19, in <module>
import awscli.clidriver
File "/usr/local/lib/python3.5/dist-packages/awscli/clidriver.py", line 17, in <module>
import botocore.session
File "/usr/local/lib/python3.5/dist-packages/botocore/session.py", line 30, in <module>
import botocore.client
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 16, in <module>
from botocore.args import ClientArgsCreator
File "/usr/local/lib/python3.5/dist-packages/botocore/args.py", line 26, in <module>
from botocore.signers import RequestSigner
File "/usr/local/lib/python3.5/dist-packages/botocore/signers.py", line 19, in <module>
import botocore.auth
File "/usr/local/lib/python3.5/dist-packages/botocore/auth.py", line 121
pairs.append(f'{quoted_key}={quoted_value}')
^
SyntaxError: invalid syntax
Environment
I'm using docker to build images, push them to ECR and then force the deployment inside my ECS cluster.
I'm also using GitLab on a self-hosted server and have three variables defined in the GitLab CI/CD settings: AWS_ACCESS_KEY_ID, AWS_DEFAULT_REGION, AWS_SECRET_ACCESS_KEY.
This is my .gitlab-ci.yml file:
services:
  - docker:dind
stages:
  - test_build
  - deploy_staging
  - deploy_production
test_build:
  stage: test_build
  only:
    - merge_requests
  tags:
    - genuino.webapp.runner
  image: ubuntu:16.04
  script:
    # Add some dependencies for docker and the AWS CLI
    - apt-get update -y # Get the most up-to-date repos.
    - apt-get install -y apt-transport-https ca-certificates software-properties-common python-software-properties curl python3-pip
    # Install Docker
    - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    - apt-key fingerprint 0EBFCD88
    - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    - apt-get update -y
    - apt-get install -y docker-ce
    # Build our image
    - docker build -t $APP_NAME -f ./deploy/Dockerfile .
deploy_staging:
  stage: deploy_staging
  image: ubuntu:16.04
  only:
    - tags
  except:
    - branches
  tags:
    - genuino.webapp.runner
  script:
    # Add some dependencies for docker and the AWS CLI
    - apt-get update -y # Get the most up-to-date repos.
    - apt-get install -y apt-transport-https ca-certificates software-properties-common python-software-properties curl python3-pip
    # Install Docker
    - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    - apt-key fingerprint 0EBFCD88
    - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    - apt-get update -y
    - apt-get install -y docker-ce
    # Install the AWS CLI and login to our registry
    - pip3 install awscli
    - pip3 install rsa
    - echo `aws ecr get-login --no-include-email --region eu-central-1` | sh
    # Build and push our image
    - docker build -t $APP_NAME -f ./deploy/Dockerfile .
    - docker tag $APP_NAME:$VERSION $REPOSITORY_URL/$APP_NAME:$VERSION
    - docker push $REPOSITORY_URL/$APP_NAME:$VERSION
    # Force deploy
    - aws ecs update-service --cluster genuino-staging --service webapp --force-new-deployment --region eu-central-1
deploy_production:
  stage: deploy_production
  image: ubuntu:16.04
  when: manual
  only:
    refs:
      - develop
      - tags
  except:
    - branches
  tags:
    - genuino.webapp.runner
  script:
    # Add some dependencies for docker and the AWS CLI
    - apt-get update -y # Get the most up-to-date repos.
    - apt-get install -y apt-transport-https ca-certificates software-properties-common python-software-properties curl python3-pip
    # Install Docker
    - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
    - apt-key fingerprint 0EBFCD88
    - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    - apt-get update -y
    - apt-get install -y docker-ce
    # Install the AWS CLI and login to our registry
    - pip3 install awscli
    - pip3 install rsa
    - echo `aws ecr get-login --no-include-email --region eu-central-1` | sh
    # Build and push our image
    - docker build -t $PROD_APP_NAME -f ./deploy/Dockerfile.production .
    - docker tag $PROD_APP_NAME:$VERSION $REPOSITORY_URL/$PROD_APP_NAME:$VERSION
    - docker push $REPOSITORY_URL/$PROD_APP_NAME:$VERSION
    # Force deploy
    - aws ecs update-service --cluster genuino-production --service webapp --force-new-deployment --region eu-central-1
What I have already tried
I tried changing the authentication line to aws ecr get-login-password | docker login -u AWS --password-stdin $REPOSITORY_URL. It works locally, but during the deploy I get this error:
$ aws ecr get-login-password | docker login -u AWS --password-stdin $REPOSITORY_URL
Traceback (most recent call last):
File "/usr/local/bin/aws", line 19, in <module>
import awscli.clidriver
File "/usr/local/lib/python3.5/dist-packages/awscli/clidriver.py", line 17, in <module>
import botocore.session
File "/usr/local/lib/python3.5/dist-packages/botocore/session.py", line 30, in <module>
import botocore.client
File "/usr/local/lib/python3.5/dist-packages/botocore/client.py", line 16, in <module>
from botocore.args import ClientArgsCreator
File "/usr/local/lib/python3.5/dist-packages/botocore/args.py", line 26, in <module>
from botocore.signers import RequestSigner
File "/usr/local/lib/python3.5/dist-packages/botocore/signers.py", line 19, in <module>
import botocore.auth
File "/usr/local/lib/python3.5/dist-packages/botocore/auth.py", line 121
pairs.append(f'{quoted_key}={quoted_value}')
^
SyntaxError: invalid syntax
Error: Cannot perform an interactive login from a non TTY device
AWS CLI v1 now requires Python 3.6+, while your GitLab CI jobs are running Python 3.5; the f-string that botocore trips over at auth.py line 121 is 3.6-only syntax. Upgrading Python should solve your problem:
https://docs.aws.amazon.com/cli/latest/userguide/welcome-versions.html#welcome-versions-v1
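A low-effort way to get Python >= 3.6 in these jobs is to move off ubuntu:16.04, whose python3 is 3.5. A sketch, assuming the rest of the script (Docker install, build, push, ECS update) stays as it is:

deploy_staging:
  stage: deploy_staging
  image: ubuntu:20.04   # ships python3.8, which satisfies awscli's f-strings
  variables:
    DEBIAN_FRONTEND: noninteractive   # avoid tzdata's interactive prompt
  script:
    - apt-get update -y
    - apt-get install -y python3-pip curl
    - pip3 install awscli
    - aws ecr get-login-password --region eu-central-1 | docker login -u AWS --password-stdin $REPOSITORY_URL
    # ... remaining docker build/tag/push steps as before ...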

While building project in circleci 2.0 getting apturl==0.5.2 missing error

I have integrated my GitHub project with CircleCI 2.0, but when I run a build from the CircleCI dashboard, I get this error:
Could not find a version that satisfies the requirement apturl==0.5.2 (from -r requirements.txt (line 1)) (from versions: )
No matching distribution found for apturl==0.5.2 (from -r requirements.txt (line 1))
Here is my config.yml
# Python CircleCI 2.0 configuration file
#
# Check https://circleci.com/docs/2.0/language-python/ for more details
#
version: 2
jobs:
  build:
    docker:
      # specify the version you desire here
      # use `-browsers` prefix for selenium tests, e.g. `3.6.1-browsers`
      - image: circleci/python:3.6.1
      # Specify service dependencies here if necessary
      # CircleCI maintains a library of pre-built images
      # documented at https://circleci.com/docs/2.0/circleci-images/
      # - image: circleci/postgres:9.4
    working_directory: ~/Amazon_customers
    steps:
      - checkout
      # Download and cache dependencies
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "requirements.txt" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run:
          name: install dependencies
          command: |
            pipenv install
      - save_cache:
          paths:
            - ./venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}
      # run tests!
      # this example uses Django's built-in test-runner
      # other common Python testing frameworks include pytest and nose
      # https://pytest.org
      # https://nose.readthedocs.io
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            python manage.py test
      - store_artifacts:
          path: test-reports
          destination: test-reports
And this is my requirements.txt file:
coverage==4.5.1
Django==2.0.6
djangorestframework==3.8.2
pkg-resources==0.0.0
pytz==2018.4
I don't have apturl==0.5.2 anywhere in requirements.txt. How can I resolve this error?
version: 2
jobs:
  build:
    working_directory: ~/tt-server
    docker:
      - image: circleci/python:3.5
    environment:
      # Environment variables go here
    steps:
      - checkout
      - run:
          command: pipenv install
      - run:
          command: mkdir -p /tmp/artifacts
      - run:
          command: |
            pipenv run coverage run manage.py test --parallel=4
            pipenv run coverage combine
            pipenv run coverage report -m
            pipenv run coverage html -d /tmp/artifacts
            pipenv run coveralls
      - store_artifacts:
          path: /tmp/artifacts
Replace your config.yml with this code. Also remove pkg-resources==0.0.0 from requirements.txt.
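For context: entries like apturl and pkg-resources==0.0.0 typically show up when requirements.txt is generated by running pip freeze against Ubuntu's system Python, which sweeps in distro packages that don't exist on PyPI. Regenerating the file from a clean virtualenv avoids that; a sketch:

# regenerate requirements.txt in an isolated environment so Ubuntu
# system packages (apturl, pkg-resources==0.0.0, ...) don't leak in
python3 -m venv .venv
. .venv/bin/activate
pip install coverage Django djangorestframework pytz
pip freeze > requirements.txt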

How can I build a Docker image and push it to ECR with CircleCI 2.0?

I'm trying to upgrade from CircleCI 1.0 to 2.0, and I'm having trouble getting the Docker images to build. I've got the following job:
... There is another Job here which runs some tests
deploy-aws:
  # machine: true
  docker:
    - image: ecrurl/backend
      aws_auth:
        aws_access_key_id: ID1
        aws_secret_access_key: $ECR_AWS_SECRET_ACCESS_KEY # or project UI envar reference
  environment:
    TAG: $CIRCLE_BRANCH-$CIRCLE_SHA1
    ECR_URL: ecrurl/backend
    DOCKER_IMAGE: $ECR_URL:$TAG
    STAGING_BUCKET: staging
    TESTING_BUCKET: testing
    PRODUCTION_BUCKET: production
    NPM_TOKEN: $NPM_TOKEN
  working_directory: ~/backend
  steps:
    - run:
        name: Install awscli
        command: sudo apt-get -y -qq install awscli
    - checkout
    - run:
        name: Build Docker image
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker pull $ECR_URL:latest
            docker build -t backend NODE_ENV=$NODE_ENV --build-arg NPM_TOKEN=$NPM_TOKEN .
            docker tag backend $DOCKER_IMAGE
            docker push $DOCKER_IMAGE
            docker tag -f $DOCKER_IMAGE $ECR_URL:latest
            docker push $ECR_URL:latest
          fi
workflows:
  version: 2
  build-deploy:
    jobs:
      - build # This one simply runs test
      - deploy-aws:
          requires:
            - build
Running this throws the following error:
#!/bin/bash -eo pipefail
sudo apt-get -y -qq install awscli
/bin/bash: sudo: command not found
Exited with code 127
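(The 127 most likely means the ecrurl/backend image simply has no sudo binary. Steps in the 2.0 docker executor normally run as root already, so, assuming a Debian-based image, dropping sudo should be enough:)

- run:
    name: Install awscli
    command: apt-get update && apt-get -y -qq install awscli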
All I had to do before was this:
dependencies:
  pre:
    - $(aws ecr get-login --region us-west-2)
deployment:
  staging:
    branch: staging
    commands:
      - docker pull $ECR_URL:latest
      - docker build -t backend NODE_ENV=$NODE_ENV --build-arg NPM_TOKEN=$NPM_TOKEN .
      - docker tag backend $DOCKER_IMAGE
      - docker push $DOCKER_IMAGE
      - docker tag -f $DOCKER_IMAGE $ECR_URL:latest
      - docker push $ECR_URL:latest
Here is the config I've changed to make this work:
deploy-aws:
  docker:
    - image: docker:17.05.0-ce-git
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install dependencies
        command: |
          apk add --no-cache \
            py-pip=9.0.0-r1
          pip install \
            docker-compose==1.12.0 \
            awscli==1.11.76
    - restore_cache:
        keys:
          - v1-{{ .Branch }}
        paths:
          - /caches/app.tar
    - run:
        name: Load Docker image layer cache
        command: |
          set +o pipefail
          docker load -i /caches/app.tar | true
    - run:
        name: Build Docker image
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            # build-arg restored from the 1.0 config above
            docker build -t backend --build-arg NPM_TOKEN=$NPM_TOKEN .
          fi
    - run:
        name: Save Docker image layer cache
        command: |
          mkdir -p /caches
          docker save -o /caches/app.tar backend
    - save_cache:
        key: v1-{{ .Branch }}-{{ epoch }}
        paths:
          - /caches/app.tar
    - run:
        name: Tag and push to ECR
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker tag backend $DOCKER_IMAGE
            docker push $DOCKER_IMAGE
            # docker tag no longer accepts -f on this Docker version
            docker tag $DOCKER_IMAGE $ECR_URL:latest
            docker push $ECR_URL:latest
          fi
Check out this link: https://github.com/builtinnya/circleci-2.0-beta-docker-example/blob/master/.circleci/config.yml#L39
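One step the config above still leaves implicit is authenticating to ECR before the pushes. With the awscli 1.11.x pinned above, that would look roughly like this (the region is a placeholder):

- run:
    name: Log in to ECR
    command: |
      # awscli 1.x: `aws ecr get-login` prints a docker login command; eval runs it
      eval $(aws ecr get-login --region us-west-2)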