How to deploy to AWS Elastic Beanstalk with GitLab CI

How to deploy a Node app on AWS Elastic Beanstalk with Docker and GitLab CI.
I've created a simple Node application and Dockerized it. What I'm trying to do is deploy the application using GitLab CI.
This is what I have so far:
image: docker:git

services:
  - docker:dind

stages:
  - build
  - release
  - release-prod

variables:
  CI_REGISTRY: registry.gitlab.com
  CONTAINER_TEST_IMAGE: registry.gitlab.com/testapp/routing:$CI_COMMIT_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/testapp/routing:latest

before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"

build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE -f Dockerfile.prod .
    - docker push $CONTAINER_TEST_IMAGE

release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master

release-prod:
  stage: release-prod
  script:
  when: manual
I'm stuck on the release-prod stage; I'm just not sure how to deploy the app to AWS Elastic Beanstalk from there. The Docker images have already been built and pushed to the GitLab registry. All I want to do is instruct Elastic Beanstalk to pull those images from the GitLab registry and start the application.
I also have a Dockerrun.aws.json which defines the services.

Your Dockerrun.aws.json file is what Beanstalk uses as the final say in what is deployed.
The option I found to work for us was to build a custom Docker image with the eb CLI installed, so we can run eb deploy ... from the gitlab-ci.yml file.
This requires AWS permissions for the runner to be able to reach the AWS services, so a user or role and its permissions come into play; but they would under any setup.
In the GitLab project's CI/CD settings, store the AWS user keys as variables. (Ideally it would be set up to use an IAM role or temporary credentials instead, which is probably the better option here, but user keys will work; I'm just not familiar enough with the temporary-access flow to describe it.)
We use a custom EC2 instance as our runner to run the pipeline, so I'm not sure about shared runners - we had a concern about passing AWS user credentials into a shared runner's pipeline.
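For reference, wiring those credentials into the job is just a matter of exposing them as masked CI/CD variables; a minimal sketch, assuming variables named AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION (both the aws CLI and the eb CLI read these from the environment):
before_script:
  # GitLab injects CI/CD variables as environment variables; writing them into
  # the shared AWS config just makes the setup explicit for eb/aws commands.
  - aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
  - aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
  - aws configure set default.region "$AWS_DEFAULT_REGION"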
build stage: build the Docker image and push it to our ECR repository (or wherever suits your use case).
deploy stage: use a custom image, stored in GitLab, that has the eb CLI pre-installed, then run eb deploy env-name.
This is the Dockerfile we use for our PHP project. Some of the installs aren't necessary for your case, and it could be improved by adding a USER and pinning package versions, but it does produce a Docker image with the eb CLI installed.
FROM node:12
RUN apt-get update && apt-get -y --allow-unauthenticated install apt-transport-https ca-certificates curl gnupg2 software-properties-common ruby-full \
    && add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
RUN apt-get update && apt-get -y --allow-unauthenticated install docker-ce \
    && apt-get -y install build-essential zlib1g-dev libssl-dev libncurses-dev libffi-dev libsqlite3-dev libreadline-dev libbz2-dev python-pip python3-pip
RUN git clone https://github.com/aws/aws-elastic-beanstalk-cli-setup.git \
    && ./aws-elastic-beanstalk-cli-setup/scripts/bundled_installer
RUN python3 --version && apt-get update && apt-get -y install python3-pip \
    && pip3 install awscli boto3 botocore && pip3 install boto3 botocore --upgrade
Example gitlab-ci.yml setup
release-prod:
  image: registry.gitlab.com/your-acct/project/custom-image
  stage: release-prod
  script:
    - service docker start
    - echo 'export PATH="/root/.ebcli-virtual-env/executables:$PATH"' >> ~/.bash_profile && source ~/.bash_profile
    - echo 'export PATH=/root/.pyenv/versions/3.7.2/bin:$PATH' >> /root/.bash_profile && source /root/.bash_profile
    - eb deploy your-environment
  when: manual
You could also bake those echo/PATH lines into the custom GitLab image, so all the job needs to run is eb deploy ....
Hope this helps a little.

Although there are a couple of different ways to achieve this, I finally found a proper solution for my use case. I have documented it here: https://medium.com/voices-of-plusdental/gitlab-ci-deployment-for-php-applications-to-aws-elastic-beanstalk-automated-qa-test-environments-253ca4932d5b. Using eb deploy was the easiest and shortest version, and it also allows me to customize the instances in any way I want.
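For reference, the eb CLI flow both answers rely on boils down to two commands; a minimal sketch, assuming the Beanstalk application and environment already exist and credentials are available in the job's environment (application, environment and region names are placeholders):
eb init your-application --platform docker --region eu-central-1   # writes .elasticbeanstalk/config.yml
eb deploy your-environment --label "$CI_COMMIT_SHORT_SHA"          # zips the source bundle (incl. Dockerrun.aws.json) and deploys it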

Related

How to use AWS CodeArtifact *within* a Dockerfile in AWS CodeBuild

I am trying to do a pip install from CodeArtifact from within a docker build in AWS CodeBuild.
This article does not quite solve my problem: https://docs.aws.amazon.com/codeartifact/latest/ug/using-python-packages-in-codebuild.html
The login to AWS CodeArtifact is in the pre_build, outside of the Docker context.
But my pip install is inside my Dockerfile (we pull from a private PyPI registry).
How do I do this without doing something horrible like setting an env variable to the password derived from reading ~/.config/pip/pip.conf after running the login command in pre_build?
You can use the environment variable PIP_INDEX_URL[1].
Below is an AWS CodeBuild buildspec.yml file where we construct the PIP_INDEX_URL for CodeArtifact by using this example from the AWS documentation.
buildspec.yml
pre_build:
  commands:
    - echo Getting CodeArtifact authorization...
    - export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token --domain "${CODEARTIFACT_DOMAIN}" --domain-owner "${AWS_ACCOUNT_ID}" --query authorizationToken --output text)
    - export PIP_INDEX_URL="https://aws:${CODEARTIFACT_AUTH_TOKEN}@${CODEARTIFACT_DOMAIN}-${AWS_ACCOUNT_ID}.d.codeartifact.${AWS_DEFAULT_REGION}.amazonaws.com/pypi/${CODEARTIFACT_REPO}/simple/"
In your Dockerfile, add an ARG PIP_INDEX_URL line just above your RUN pip install -r requirements.txt so it can become an environment variable during the build process:
Dockerfile
# this needs to be added before your pip install line!
ARG PIP_INDEX_URL
RUN pip install -r requirements.txt
Finally, we build the image with the PIP_INDEX_URL build-arg.
buildspec.yml
build:
  commands:
    - echo Building the Docker image...
    - docker build -t "${IMAGE_REPO_NAME}" --build-arg PIP_INDEX_URL .
As an aside, adding ARG PIP_INDEX_URL to your Dockerfile shouldn't break any existing CI or workflows. If --build-arg PIP_INDEX_URL is omitted when building an image, pip will still use the default PyPI index. Specifying --build-arg PIP_INDEX_URL=${PIP_INDEX_URL} is valid, but unnecessary. Specifying the argument name with no value will make Docker take its value from the environment variable of the same name[2].
Security note: If someone runs docker history ${IMAGE_REPO_NAME}, they can see the value of ${PIP_INDEX_URL}[3]. The token is only good for a maximum of 12 hours though, and you can shorten it to as little as 15 minutes with the --duration-seconds parameter of aws codeartifact get-authorization-token[4], so maybe that's acceptable. If your Dockerfile is a multi-stage build, then it shouldn't be an issue if you're not using ARG PIP_INDEX_URL in your target stage. docker build --secret does not seem to be supported in CodeBuild at this time.
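If you do want to shorten the token lifetime, a sketch of the same pre_build export with --duration-seconds added (900 seconds being the 15-minute minimum mentioned above):
pre_build:
  commands:
    # identical to the earlier export, just with a shorter-lived token
    - export CODEARTIFACT_AUTH_TOKEN=$(aws codeartifact get-authorization-token --domain "${CODEARTIFACT_DOMAIN}" --domain-owner "${AWS_ACCOUNT_ID}" --duration-seconds 900 --query authorizationToken --output text)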
So, here is how I solved this for now. It seems kinda hacky, but it works. (EDIT: we have since switched to @phistrom's answer.)
In the pre_build, I run the login command and copy ~/.config/pip/pip.conf to the current build directory:
pre_build:
  commands:
    - echo Logging in to Amazon ECR...
    ...
    - echo Fetching pip.conf for PYPI
    - aws codeartifact --region us-east-1 login --tool pip --repository ....
    - cp ~/.config/pip/pip.conf .
build:
  commands:
    - docker build -t $IMAGE_REPO_NAME:$IMAGE_TAG .
    - docker tag $IMAGE_REPO_NAME:$IMAGE_TAG $AWS_ACCOUNT_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/$IMAGE_REPO_NAME:$IMAGE_TAG
Then in the Dockerfile, I COPY that file in, do the pip install, then rm it
COPY requirements.txt pkg/
COPY --chown=myuser:myuser pip.conf /home/myuser/.config/pip/pip.conf
RUN pip install -r ./pkg/requirements.txt
RUN pip install ./pkg
RUN rm /home/myuser/.config/pip/pip.conf
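One caveat worth keeping in mind with this workaround: the RUN rm only removes pip.conf from the final layer, while the earlier COPY layer still contains it, so anyone who can pull the image can still dig the credentials out of the layer archives, for example:
# the saved archive still contains the COPY layer with pip.conf inside it
docker save -o image.tar "$IMAGE_REPO_NAME:$IMAGE_TAG"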

How to transfer deployment package from S3 to EC2 instance to run python script?

AWS beginner here
I have a repo in GitLab containing a Python script and a requirements.txt file; the Python script has to be deployed to an EC2 Ubuntu instance (and triggered only once a day) via GitLab CI. I am creating a deployment package of the repo using CI and uploading the zipped package to an S3 bucket. My .gitlab-ci.yml file:
image: ubuntu:18.04

variables:
  AWS_DEFAULT_REGION: eu-central-1
  GIT_SUBMODULE_STRATEGY: recursive
  S3_TEST_BUCKET: $BUCKET_UNPACK

stages:
  - deploy

TestJob:
  stage: deploy
  script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv
    - mv iso_forest_ad.py ~ # This is the python script
    - mv requirements.txt ~
    # Setup virtual environment
    - mkdir ~/forEC2
    - cd ~/forEC2
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forEC2/venv/lib/python3.7/site-packages/
    # Package environment and dependencies
    - cd ~/forEC2/venv/lib/python3.7/site-packages/
    - zip -r9 ~/forEC2/archive.zip .
    - cd ~
    - zip -g ~/forEC2/archive.zip iso_forest_ad.py
    - pip install awscli --upgrade
    - export PATH=$PATH:~/.local/bin
    - aws configure set aws_access_key_id $AWS_TEST_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_TEST_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws s3 cp ~/forEC2/archive.zip $BUCKET_UNPACK/anomaly-detection-deployment.zip
Contents of requirements.txt
-i https://pypi.org/simple
joblib==0.16.0; python_version >= '3.6'
numpy==1.19.0
pandas==1.0.5
psycopg2-binary==2.8.5
python-dateutil==2.8.1; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
pytz==2020.1
scikit-learn==0.23.1
scipy==1.5.1; python_version >= '3.6'
six==1.15.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3'
sqlalchemy==1.3.18
threadpoolctl==2.1.0; python_version >= '3.5'
Now, I would like to transfer the script to the Ubuntu EC2 instance, install the dependencies there, and run the script.
I know one way would be to connect to the EC2 instance and do
aws s3 sync s3://s3-bucket-name/folder /home/ubuntu
as suggested in the post Moving files from S3 to an EC2 instance. But doing this, I was not able to install the dependencies from the requirements.txt file.
I would like to know if there is an alternate way (perhaps a shell script, or something else) to achieve this. Since I am using Ubuntu locally too, using PuTTY is not an option for me.
The link you've posted already shows one way of doing this, namely by using UserData.
Therefore, you would have to develop a bash script which would not only download the zip file as shown in the link, but also unpack it and install the requirements.txt file, alongside any other dependencies or configuration setup you require.
So the UserData for your instance would be something like this (pseudo-code, only a rough example):
#!/bin/bash
apt update
apt install -y zip awscli python3-pip # awscli is not normally on ubuntu
aws s3 cp s3://optimal-aws-nz-play-config/package.zip .
unzip package.zip
cd package
pip3 install -r ./requirements.txt
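Since the question also needs the script to run only once a day, the same UserData script could finish by dropping a cron entry; a rough sketch in the same pseudo-code spirit (schedule, user and paths are assumptions):
# run the script every day at 06:00 as root
echo "0 6 * * * root /usr/bin/python3 /home/ubuntu/package/iso_forest_ad.py" > /etc/cron.d/iso_forest_ad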
If this is something you do often, you could create a launch template with the instance settings and the UserData, to automatically execute these steps for each instance launched from the template.
There are also other possibilities involving CodeDeploy or CodePipeline, but plain old UserData would be a good start.
An alternative would be to use SSM run-command. The execution of the command would be triggered from GitLab following the upload of the new S3 package.
An example of how to invoke run-command is in the docs:
aws ssm send-command \
    --document-name "AWS-RunPowerShellScript" \
    --parameters commands=["echo helloWorld"] \
    --targets Key=tag:Env,Values=Dev,Test
Instead of echo helloWorld you would have to write your own bash commands to be executed.
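Since the target here is an Ubuntu instance, the Linux equivalent would use the AWS-RunShellScript document instead; a hedged sketch (bucket name, paths and tag values are placeholders, and the instance needs the SSM agent plus an instance profile that can read the bucket):
aws ssm send-command \
    --document-name "AWS-RunShellScript" \
    --parameters 'commands=["aws s3 cp s3://your-bucket/anomaly-detection-deployment.zip /home/ubuntu/","cd /home/ubuntu && unzip -o anomaly-detection-deployment.zip -d app","cd /home/ubuntu/app && python3 iso_forest_ad.py"]' \
    --targets Key=tag:Env,Values=Dev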

What is the best way to do CI/CD with AWS CDK (python) using GitLab CI?

I am using AWS CDK (with Python) for a containerized application that runs on Fargate. I would like to run cdk deploy in a GitLab CI process and pass the git tag as an environment variable that replaces the container running in Fargate. I am currently doing something similar with CloudFormation (aws cloudformation update-stack ...). Is anyone else doing CI/CD with AWS CDK in this way? Is there a better way to do it?
Also, what should I use as the base image for this job? I was thinking that I can either start with a Python container and install Node, or vice versa. Or maybe there is a prebuilt container somewhere that I haven't been able to find yet.
Here is a start that seems to be working well:
CDK:
  image: python:3.8
  stage: deploy
  before_script:
    - apt-get -qq update && apt-get -y install nodejs npm
    - node -v
    - npm i -g aws-cdk
    - cd awscdk
    - pip3 install -r requirements.txt
  script:
    - cdk diff
    - cdk deploy --require-approval never
Edit 2020-05-04:
CDK can build docker images during cdk deploy, but it needs access to docker. If you don't need docker, the above CI job definition should be fine. Here's the current CI job I'm using:
cdk deploy:
  image: docker:19.03.1
  services:
    - docker:19.03.5-dind
  stage: deploy
  only:
    - master
  before_script:
    - apk add --no-cache python3
    - python3 -V
    - pip3 -V
    - apk add nodejs-current npm
    - node -v
    - npm i -g aws-cdk
    - cd awscdk
    - pip3 install -r requirements.txt
  script:
    - cdk bootstrap aws://$AWS_ACCOUNT_ID/$AWS_DEFAULT_REGION
    - cdk deploy --require-approval never
The cdk bootstrap is needed because I am using assets in my cdk code:
self.backend_task.add_container(
    "DjangoBackend",
    image=ecs.AssetImage(
        "../backend",
        file="scripts/prod/Dockerfile",
        target="production",
    ),
    logging=ecs.LogDrivers.aws_logs(stream_prefix="Backend"),
    environment=environment_variables,
    command=["/start_prod.sh"],
)
Here's more information on cdk bootstrap: https://github.com/aws/aws-cdk/blob/master/design/cdk-bootstrap.md
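As for the original point of passing the git tag into the stack, one hedged option is to hand it to the CDK app as context from the GitLab job (the image_tag key here is hypothetical; your stack code has to read it, e.g. via self.node.try_get_context("image_tag")):
script:
  # $CI_COMMIT_TAG is GitLab's predefined variable for tag pipelines
  - cdk deploy --require-approval never -c image_tag="$CI_COMMIT_TAG"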
You definitely have to use cdk deploy inside the CI/CD pipeline if you have Lambda or ECS assets; otherwise, you could run cdk synth and pass the resulting CloudFormation template to AWS CodeDeploy. That means a lot of your CI/CD time will be spent deploying, which might drain your free-tier build minutes or just mean you pay more (AWS CodeDeploy is free).
I do something similar with Golang in CircleCI. I use the Go base image, install Node.js and the CDK, and use that base image to build all my Go binaries and the Vue.js frontend, and to compile the CDK TypeScript and deploy it.
FROM golang:1.13
RUN go get -u -d github.com/magefile/mage
WORKDIR $GOPATH/src/github.com/magefile/mage
RUN go run bootstrap.go
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install -y nodejs
RUN npm i -g aws-cdk@1.36.x
RUN npm i -g typescript
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add -
RUN echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list
RUN apt update && apt install yarn
I hope that helps.
For anyone looking for how to implement CI/CD with AWS CDK Python in 2022, here's a tested solution:
Use python:3.10.8 as the base image in your CI/CD
(or any image with Debian 11)
Install Node.js 16 from NodeSource: curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
Install aws-cdk: npm i -g aws-cdk
You can add the latter two steps as inline scripts in your CI/CD pipeline so you do not need to build your own Docker image.
Here's a full example for Bitbucket Pipelines:
image: python:3.10.8

run-tests: &run-tests
  step:
    name: Run tests
    script:
      # Node 16
      - curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && apt-get install -y nodejs
      - npm i -g aws-cdk
      - pip install -r requirements-dev.txt
      - pytest

pipelines:
  pull-requests:
    "**":
      - <<: *run-tests
  branches:
    master:
      - <<: *run-tests
Note that the above instructions do not install Docker engine. In Bitbucket Pipelines, Docker can be used simply by adding
services:
  - docker
in the configuration file.
If cdk deploy is giving you the error:
/usr/lib/node_modules/aws-cdk/lib/index.js:12422
home = path.join((os.userInfo().homedir ?? os.homedir()).trim(), ".cdk");
then the Node version is out of date. This can be fixed by updating the Docker images, which also requires installing pip3 separately:
cdk deploy:
  image: docker:20.10.21
  services:
    - docker:20.10.21-dind
  stage: deploy
  only:
    - master
  before_script:
    - apk add --no-cache python3
    - python3 -V
    - apk add py3-pip
    - pip3 -V

Docker, GitLab and deploying an image to AWS EC2

I am trying to learn how to create a .gitlab-ci.yml and am really struggling to find the resources to help me. I am using dind to create a Docker image and push it to Docker Hub, then trying to log into my AWS EC2 instance, which also has Docker installed, to pull the image and start it running.
I have successfully managed to build my image using GitLab and push it to Docker Hub, but now I have the problem of trying to log into the EC2 instance to pull the image.
My first naive attempt looks like this:
# .gitlab-ci.yml
image: docker:18.09.7

variables:
  DOCKER_REPO: myrepo
  IMAGE_BASE_NAME: my-image-name
  IMAGE: $DOCKER_REPO/$IMAGE_BASE_NAME:$CI_COMMIT_REF_SLUG
  CONTAINER_NAME: my-container-name

services:
  - docker:18.09.7-dind

before_script:
  - docker login -u "$DOCKER_REGISTRY_USER" -p "$DOCKER_REGISTRY_PASSWORD"

after_script:
  - docker logout

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker build . -t $IMAGE -f $PWD/staging.Dockerfile
    - docker push $IMAGE

deploy:
  stage: deploy
  variables:
    RELEASE_IMAGE: $DOCKER_REPO/$IMAGE_BASE_NAME:latest
  script:
    - docker pull $IMAGE
    - docker tag $IMAGE $IMAGE
    - docker push $IMAGE
    - docker tag $IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    # So far so good - this is where it starts to go pear-shaped
    - apt-get install sudo -y
    - sudo apt install openssh-server -y
    - ssh -i $AWS_KEY $AWS_URL "docker pull $RELEASE_IMAGE"
    - ssh -i $AWS_KEY $AWS_URL "docker rm --force $CONTAINER_NAME"
    - ssh -i $AWS_KEY $AWS_URL "docker run -p 3001:3001 -p 3002:3002 -w "/var/www/api" --name ${CONTAINER_NAME} ${IMAGE}"
It seems that whatever operating system the docker image is built upon does not have apt-get, ssh and a bunch of other useful commands installed. I receive the following error:
/bin/sh: eval: line 114: apt-get: not found
Can anyone help me with the commands I need to log into my EC2 instance and pull and run the image in gitlab-ci.yml using this docker:dind image? Upon which operating system is the docker image built?
The official Docker image is based on Alpine Linux, which uses the apk package manager.
Try replacing your apt-get commands with the following instead:
- apk add openssh-client
There is no need to install sudo just to install an SSH package, so that step was removed, and you only need the SSH client, not openssh-server.
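Putting that together, the deploy stage's SSH steps might look roughly like this; a sketch assuming $AWS_KEY is a file-type CI/CD variable holding the private key and $AWS_URL is something like ubuntu@<ec2-public-dns>:
deploy:
  stage: deploy
  variables:
    RELEASE_IMAGE: $DOCKER_REPO/$IMAGE_BASE_NAME:latest
  script:
    - apk add --no-cache openssh-client
    - chmod 600 "$AWS_KEY"
    # StrictHostKeyChecking is disabled only to keep the sketch short;
    # pinning the host key in a known_hosts file is the safer option.
    - ssh -o StrictHostKeyChecking=no -i "$AWS_KEY" "$AWS_URL" "docker pull $RELEASE_IMAGE"
    - ssh -o StrictHostKeyChecking=no -i "$AWS_KEY" "$AWS_URL" "docker rm --force $CONTAINER_NAME || true"
    - ssh -o StrictHostKeyChecking=no -i "$AWS_KEY" "$AWS_URL" "docker run -d -p 3001:3001 -p 3002:3002 -w /var/www/api --name $CONTAINER_NAME $RELEASE_IMAGE"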

Elastic Beanstalk - running npm install and webpack on every deployment of Django

I'm trying to use Elastic Beanstalk to deploy my Django server.
My problem is that part of my deployment process is to run npm install from my package.json and then execute webpack (npx webpack ..... --output main.js).
How can I do that while maintaining an easy deployment process (eb deploy) and without committing main.js to the repository?
To do it, you'll probably need ebextensions to configure your Elastic Beanstalk environment. Documentation is here.
I recently deployed my Symfony app on Elastic Beanstalk, which needed Yarn to execute webpack.
To do it, I created a .config file in which I wrote the commands to install Yarn, and another .config file to run Yarn on each deployment. All .config files live in the .ebextensions directory at the root of the project.
commands:
  01_install_node:
    command: |
      sudo curl --silent --location https://rpm.nodesource.com/setup_8.x | sudo bash -
      sudo yum -y install nodejs
  02_install_yarn:
    command: |
      sudo wget https://dl.yarnpkg.com/rpm/yarn.repo -O /etc/yum.repos.d/yarn.repo
      sudo yum -y install yarn
You can use the container_commands key to execute commands that affect
your application source code. Container commands run after the
application and web server have been set up and the application
version archive has been extracted.
container_commands:
  02_run_yarn:
    command: |
      yarn install
      yarn run encore production
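Adapted to the Django/npm case in the question, a hypothetical .ebextensions/webpack.config could run the npm steps on every deployment in the same way (the file name and the exact webpack invocation are assumptions to adjust):
container_commands:
  01_npm_install:
    command: npm install
  02_webpack_build:
    command: npx webpack --output main.js   # replace with the exact webpack command used locally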