Deploy Docker Containers from Docker Cloud - amazon-web-services

I'm new to Docker and am trying to learn more about best practices for deploying Dockerized images. I've built some images on my development host using the Dockerfile and docker-compose.yml below.
After building the images, I SSH'd to my production server, an Amazon Linux t2.micro instance on AWS EC2. There I installed Docker and docker-compose and tried to build my images, but I ran out of RAM. I therefore published the images I had built locally to Docker Cloud, and I now wish to deploy those images from Docker Cloud on the AWS instance.
How can I achieve this? I'd be very grateful for any help others can offer!
Dockerfile:
# Specify base image
FROM andreptb/oracle-java:8-alpine
# Specify author / maintainer
MAINTAINER Douglas Duhaime <douglas.duhaime@gmail.com>
# Add source to a directory and use that directory
# NB: /app is a reserved directory in tomcat container
ENV APP_PATH="/lts-app"
RUN mkdir "$APP_PATH"
ADD . "$APP_PATH"
WORKDIR "$APP_PATH"
##
# Build BlackLab
##
RUN apk add --update --no-cache \
  wget \
  tar \
  git
# Store the path to the maven home
ENV MAVEN_HOME="/usr/lib/maven"
# Add maven and java to the path
ENV PATH="$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH"
# Install Maven
RUN MAVEN_VERSION="3.3.9" && \
  cd "/tmp" && \
  wget "http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" -O - | tar xzf - && \
  mv "/tmp/apache-maven-$MAVEN_VERSION" "$MAVEN_HOME" && \
  ln -s "$MAVEN_HOME/bin/mvn" "/usr/bin/mvn" && \
  rm -rf /tmp/*
# Get the BlackLab source
RUN git clone "git://github.com/INL/BlackLab.git"
# Build BlackLab with Maven
RUN cd "BlackLab" && \
  mvn clean install
##
# Build Python + Node dependencies
##
# Install system deps with Alpine Linux package manager
RUN apk add --update --no-cache \
  g++ \
  gcc \
  make \
  openssl-dev \
  python3-dev \
  python \
  py-pip \
  nodejs
# Install Python dependencies
RUN pip install -r "requirements.txt" && \
  npm install --no-optional && \
  npm run build
# Store Mongo service name as mongo host
ENV MONGO_HOST=mongo_service
ENV TOMCAT_HOST=tomcat_service
ENV TOMCAT_WEBAPPS=/tomcat_webapps/
# Make ports available
EXPOSE 7082
# Seed the db
CMD npm run seed && \
  gunicorn -b 0.0.0.0:7082 --access-logfile - --reload server.app:app
docker-compose.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:

To solve this problem, I followed advice from StackOverflow user @MazelTov: I built the containers on my local OSX development machine, published the images to Docker Cloud, then pulled and ran those images on my production server (AWS EC2).
Install Dependencies
I'll try to outline the steps I followed below in case they help others. Please note these steps require docker and docker-compose to be installed on both your development and production machines. I used the GUI installer to install Docker for Mac.
Build Images
After writing a Dockerfile and docker-compose.yml file, you can build your images with docker-compose up --build.
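If you just want to build the images without starting the containers (useful on a machine where you only intend to push), a minimal sketch, assuming you run it from the directory containing docker-compose.yml:
# build every service that has a build: entry
docker-compose build
# confirm the resulting images exist locally
docker images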
Upload Images to Docker Cloud
Once the images are built, you can upload them to Docker Cloud with the following steps. First, create an account on Docker Cloud.
Then store your Docker Cloud username in an environment variable; your ~/.bash_profile should contain export DOCKER_ID_USER='yaledhlab' (use your own username, though).
Next, log in to your account from your development machine:
docker login
Once you're logged in, list your running containers (the IMAGE column shows the image each one was created from):
docker ps
This will display something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89478c386661 yaledhlab/let-them-speak-web "/bin/sh -c 'npm run…" About an hour ago Up About an hour 0.0.0.0:7082->7082/tcp letthemspeak_web_1
5e9c75d29051 training/webapp:latest "python app.py" 4 hours ago Up 4 hours 0.0.0.0:5000->5000/tcp heuristic_mirzakhani
890f7f1dc777 bitnami/tomcat:latest "/app-entrypoint.sh …" 4 hours ago Up About an hour 0.0.0.0:8080->8080/tcp letthemspeak_tomcat_service_1
09d74e36584d mongo "docker-entrypoint.s…" 4 hours ago Up About an hour 0.0.0.0:27017->27017/tcp letthemspeak_mongo_service_1
For each of the images you want to publish to Docker Cloud, run:
docker tag image_name $DOCKER_ID_USER/my-uploaded-image-name
docker push $DOCKER_ID_USER/my-uploaded-image-name
For example, to upload mywebapp_web to your user's account on Docker Cloud, you can run:
docker tag mywebapp_web $DOCKER_ID_USER/web
docker push $DOCKER_ID_USER/web
You can then run open https://cloud.docker.com/swarm/$DOCKER_ID_USER/repository/list to see your uploaded images.
Deploy Images
Finally, you can deploy your images on EC2 with the following steps. First, install Docker and Docker-Compose on the Amazon-flavored EC2 instance:
# install docker
sudo yum install docker -y
# start docker
sudo service docker start
# allow ec2-user to run docker
sudo usermod -a -G docker ec2-user
# get the docker-compose binaries
sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# change the permissions on the source
sudo chmod +x /usr/local/bin/docker-compose
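Before moving on, it's worth sanity-checking the installation; something like the following should print version information (assuming /usr/local/bin is on your PATH):
docker --version
docker-compose --version
# after logging out and back in, this should work without sudo
docker info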
Log out, then log back in so your user's new group membership takes effect. Then start a screen session by running screen. Once the screen session starts, you can add a new docker-compose config file that specifies the path to your deployed images. For example, I needed to fetch the let-them-speak-web container housed within yaledhlab's Docker Cloud account, so I changed the docker-compose.yml file above to the file below, which I named production.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    image: 'yaledhlab/let-them-speak-web'
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
The production compose file can then be run with docker-compose -f production.yml up. Finally, ssh in with another terminal and detach the screen session with screen -D.
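Putting that last step together, a rough sketch of the session on the EC2 host (the pull step is optional but makes sure the published images are fetched before starting; screen -D follows the note above):
# inside the screen session on the EC2 instance
docker-compose -f production.yml pull
docker-compose -f production.yml up
# from a second ssh session, detach the screen so it keeps running
screen -D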

Yeah, that's true. Docker Cloud uses Docker Hub as its native registry for storing both public and private repositories. Once you push your images to Docker Hub, they are available in Docker Cloud.
Pulling images from Docker Hub is the reverse of pushing them, and it works for both private and public repositories.
To download your images locally, I always export my Docker username into the shell session:
# export DOCKER_ID_USER="username"
In fact, I keep this in my .bashrc.
Replace the value of DOCKER_ID_USER with your Docker Cloud username.
Then log in to Docker Cloud using the docker login command:
$ docker login
This logs you in using your Docker ID, which is shared between Docker Hub and Docker Cloud.
You can now run the docker pull command to download your images locally:
$ docker pull image:tag
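For example, to pull the web image pushed earlier in this thread (assuming it was pushed as $DOCKER_ID_USER/web with the default latest tag):
$ docker pull $DOCKER_ID_USER/web:latest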
This applies to any cloud platform; it isn't specific to AWS.
As you're new to Docker, here are the Docker guides I'd recommend, covering Docker vs. VMs as well as advanced topics like Docker Swarm and Kubernetes.

Related

CI/CD with BitBucket, Docker Image and Azure

EDITED
I am learning CI/CD and Docker. So far I have managed to successfully create a Docker image using the code below:
Dockerfile
# Docker Operating System
FROM python:3-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
#App folder on Slim OS
WORKDIR /app
# Install pip requirements
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip && pip install -r requirements.txt
#Copy Files to App folder
COPY . /app
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
My code is on BitBucket and I have a pipeline file as follows:
bitbucket-pipelines.yml
image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          name: Build And Publish To Azure
          services:
            - docker
          script:
            - docker login -u $AZURE_USER -p $AZURE_PASS xxx.azurecr.io
            - docker build -t xxx.azurecr.io .
            - docker push xxx.azurecr.io
Here xxx is the container registry on Azure. When the pipeline job runs, I get a denied: requested access to the resource is denied error on BitBucket.
What did I not do correctly?
Thanks.
The Edit
Changes in docker-compose.yml and bitbucket-pipelines.yml
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    image: xx.azurecr.io/myticket
    container_name: xx
    command: python manage.py runserver 0.0.0.0:80
    ports:
      - 80:80
bitbucket-pipelines.yml
image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          name: Build And Publish To Azure
          services:
            - docker
          script:
            - docker login -u $AZURE_USER -p $AZURE_PASS xx.azurecr.io
            - docker build -t xx.azurecr.io/xx .
            - docker push xx.azurecr.io/xx
You didn't specify a CMD or ENTRYPOINT in your Dockerfile.
There are stages to building a Dockerfile: first you pull a base image, then you package your requirements, and so on. Those stages are executed while the image is being built. You are missing the last stage, which executes a command inside the container once it is up.
An ENTRYPOINT or CMD is what tells Docker what to run when the container starts.
For it to work you must add a CMD or ENTRYPOINT at the bottom of your Dockerfile.
Change your files accordingly and try again.
Dockerfile
# Docker Operating System
FROM python:3-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
#App folder on Slim OS
WORKDIR /app
# Install pip requirements
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip && pip install -r requirements.txt
#Copy Files to App folder
COPY . /app
# Execute commands inside the container
CMD python manage.py runserver 0.0.0.0:8000
Check you are able to build and run the image by going to its directory and running
docker build -t app .
docker run -d -p 8000:8000 app
docker ps
See if your container is running.
Next
Update the image property in the docker-compose file.
Prefix the image name with the login server name of your Azure container registry, <registry-name>.azurecr.io. For example, if your registry is named myregistry, the login server name is myregistry.azurecr.io (all lowercase), and the image property becomes myregistry.azurecr.io/azure-vote-front.
Change the ports mapping to 80:80. Save the file.
The updated file should look similar to the following:
docker-compose.yml
version: '3'
services:
  foo:
    build: .
    image: foo.azurecr.io/atlassian/default-image:2
    container_name: foo
    ports:
      - "80:80"
By making these substitutions, the image you build is tagged for your Azure container registry, and the image can be pulled to run in Azure Container Instances.
More in the documentation.
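If you also want to build and push the re-tagged image from your own machine rather than from the pipeline, a minimal sketch, assuming the Azure CLI is installed and a hypothetical registry name myregistry:
# authenticate docker against the registry
az acr login --name myregistry
# build and push the image named in docker-compose.yml
docker-compose build
docker-compose push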

Duplicate images on docker-compose build. How to properly push two services of docker-compose.yml to the Docker Hub registry?

I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't do docker-compose push unless every service has a build entry. However, this means both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the available images I see this:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mellon_app latest XXXXXXXXXXXX 27 seconds ago 1.14GB
postgres latest XXXXXXXXXXXX 27 seconds ago 1.14GB
The IMAGE ID is exactly the same for both images, and the size is exactly the same too. This makes me think I've done some unnecessary duplication, as they're both just built from the same Dockerfile. I don't want to take up any unnecessary space, so how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
  python3 manage.py makemigrations && \
  python3 manage.py migrate && \
  gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The proper way is to:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or not latest but whatever you want, like a datetime in int format)
3. docker push {desired-remote-image-name}:latest
and clean up:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
The whole purpose of docker-compose is to help local development; it makes it easier to start several containers and combine them on a local docker-compose network, etc.
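Alternatively, if you want to keep docker-compose push in your workflow, you can give each service its own image: name and drop build: from the service that only runs a stock image, so nothing gets duplicated. A minimal sketch, assuming a hypothetical Docker Hub username yourname:
version: '3'
services:
  db:
    image: postgres   # pulled as-is; nothing of yours to build or push
    environment:
      - POSTGRES_DB=postgres
  app:
    build: .
    image: yourname/mellon_app:latest   # docker-compose push sends this tag
    ports:
      - "8000:8000"
    depends_on:
      - db
With this layout, docker-compose build app followed by docker-compose push app builds and pushes only the application image.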

Accessing Cloud SQL from Cloud Run on Google Cloud

I have a Cloud Run service that accesses a Cloud SQL instance through SQLAlchemy. However, in the logs for Cloud Run, I see CloudSQL connection failed. Please see https://cloud.google.com/sql/docs/mysql/connect-run for additional details: ensure that the account has access to "<connection_string>". Going to that link, it says that:
"By default, your app will authorize your connections using the Cloud Run (fully managed) service account. The service account is in the format PROJECT_NUMBER-compute#developer.gserviceaccount.com."
However, the following (https://cloud.google.com/run/docs/securing/service-identity) says:
"By default, Cloud Run revisions are using the Compute Engine default service account (PROJECT_NUMBER-compute#developer.gserviceaccount.com), which has the Project > Editor IAM role. This means that by default, your Cloud Run revisions have read and write access to all resources in your Google Cloud project."
So shouldn't that mean that Cloud Run can already access SQL? I've already set up the Cloud SQL Connection in the Cloud Run deployment page. What do you suggest I do to allow access to Cloud SQL from Cloud Run?
EDIT: I have to enable the Cloud SQL API.
No, Cloud Run cannot access Cloud SQL by default. You need to follow one of two paths.
Connect to SQL using a local unix socket file: configure permissions as you said and deploy with flags indicating your intent to connect to the database. Follow https://cloud.google.com/sql/docs/mysql/connect-run (see the sketch after these two options).
Connect to SQL with a private IP: this involves deploying the Cloud SQL instance into a VPC network so that it gets a private IP address. Then you use a Cloud Run VPC Access Connector (currently beta) to allow the Cloud Run container to connect to that VPC network, which includes the SQL database's IP address directly (no IAM permissions needed). Follow https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
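For the first path, a hedged sketch of the commands involved; the service, image, and instance names here are placeholders, and the Cloud SQL Admin API also needs to be enabled, as the question's edit notes:
# enable the Cloud SQL Admin API once per project
gcloud services enable sqladmin.googleapis.com
# deploy with the Cloud SQL instance attached via a unix socket under /cloudsql
gcloud run deploy my-service \
  --image gcr.io/PROJECT_ID/my-image \
  --add-cloudsql-instances PROJECT_ID:REGION:INSTANCE_NAME \
  --set-env-vars INSTANCE_CONNECTION_NAME=PROJECT_ID:REGION:INSTANCE_NAME \
  --region us-central1 \
  --platform managed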
Cloud SQL Proxy solution
I use the cloud-sql-proxy to create a local unix socket file in the workspace directory provided by Cloud Build.
Here are the main steps:
Pull a Berglas container, populating its call with the _VAR1 substitution, an environment variable called CMCREDENTIALS that I've encrypted using Berglas. You should add as many of these _VAR{n} as you require.
Install the cloud_sql_proxy via wget.
Run an intermediate step (the tests for this build). This step uses the variables stored in the provided temporary /workspace directory.
Build your image.
Push your image.
Using Cloud Run, deploy and include the --set-env-vars flag.
The full cloudbuild.yaml
# basic cloudbuild.yaml
steps:
# pull the berglas container and write the secrets to temporary files
# under /workspace
- name: gcr.io/berglas/berglas
  id: 'Install Berglas'
  env:
  - '${_VAR1}=berglas://${_BUCKET_ID_SECRETS}/${_VAR1}?destination=/workspace/${_VAR1}'
  args: ["exec", "--", "/bin/sh"]
# install the cloud sql proxy
- id: 'Install Cloud SQL Proxy'
  name: alpine:latest
  entrypoint: sh
  args:
  - "-c"
  - "\
    wget -O /workspace/cloud_sql_proxy \
    https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 && \
    sleep 2 && \
    chmod +x /workspace/cloud_sql_proxy"
  waitFor: ['-']
# using the secrets from above, build and run the test suite
- name: 'python:3.8.3-slim'
  id: 'Run Unit Tests'
  entrypoint: '/bin/bash'
  args:
  - "-c"
  - "\
    (/workspace/cloud_sql_proxy -dir=/workspace/${_SQL_PROXY_PATH} -instances=${_INSTANCE_NAME1} & sleep 2) && \
    apt-get update && apt-get install -y --no-install-recommends \
    build-essential libssl-dev libffi-dev libpq-dev python3-dev wget && \
    rm -rf /var/lib/apt/lists/* && \
    export ${_VAR1}=$(cat /workspace/${_VAR1}) && \
    export INSTANCE_NAME1=${_INSTANCE_NAME1} && \
    export SQL_PROXY_PATH=/workspace/${_SQL_PROXY_PATH} && \
    pip install -r dev-requirements.txt && \
    pip install -r requirements.txt && \
    python -m pytest -v && \
    rm -rf /workspace/${_SQL_PROXY_PATH} && \
    echo 'Removed Cloud SQL Proxy'"
  waitFor: ['Install Cloud SQL Proxy', 'Install Berglas']
  dir: '${_APP_DIR}'
# Using the application/Dockerfile build instructions, build the app image
- name: 'gcr.io/cloud-builders/docker'
  id: 'Build Application Image'
  args: ['build',
         '-t',
         'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
         '.',
  ]
  dir: '${_APP_DIR}'
# Push the application image
- name: 'gcr.io/cloud-builders/docker'
  id: 'Push Application Image'
  args: ['push',
         'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
  ]
# Deploy the application image to Cloud Run
# populating secrets via Berglas exec ENTRYPOINT for gunicorn
- name: 'gcr.io/cloud-builders/gcloud'
  id: 'Deploy Application Image'
  args: ['beta',
         'run',
         'deploy',
         '${_IMAGE_NAME}',
         '--image',
         'gcr.io/$PROJECT_ID/${_IMAGE_NAME}',
         '--region',
         'us-central1',
         '--platform',
         'managed',
         '--quiet',
         '--add-cloudsql-instances',
         '${_INSTANCE_NAME1}',
         '--set-env-vars',
         'SQL_PROXY_PATH=/${_SQL_PROXY_PATH},INSTANCE_NAME1=${_INSTANCE_NAME1},${_VAR1}=berglas://${_BUCKET_ID_SECRETS}/${_VAR1}',
         '--allow-unauthenticated',
         '--memory',
         '512Mi'
  ]
# Use the defaults below which can be changed at the command line
substitutions:
  _IMAGE_NAME: your-image-name
  _BUCKET_ID_SECRETS: your-bucket-for-berglas-secrets
  _INSTANCE_NAME1: project-name:location:dbname
  _SQL_PROXY_PATH: cloudsql
  _VAR1: CMCREDENTIALS
# The images we'll push here
images: [
  'gcr.io/$PROJECT_ID/${_IMAGE_NAME}'
]
Dockerfile utilized
The below builds a Python app from source contained inside the directory <myrepo>/application. This Dockerfile sits at application/Dockerfile.
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.8.3-slim
# Add build arguments
# Copy local code to the container image.
ENV APP_HOME /application
WORKDIR $APP_HOME
# Install production dependencies.
RUN apt-get update && apt-get install -y --no-install-recommends \
  build-essential \
  libpq-dev \
  python3-dev \
  libssl-dev \
  libffi-dev \
  && rm -rf /var/lib/apt/lists/*
# Copy the application source
COPY . ./
# Install Python dependencies
RUN pip install -r requirements.txt --no-cache-dir
# Grab Berglas from Google Cloud Registry
COPY --from=gcr.io/berglas/berglas:latest /bin/berglas /bin/berglas
# Run the web service on container startup. Here we use the gunicorn
# webserver, with one worker process and 8 threads.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
ENTRYPOINT exec /bin/berglas exec -- gunicorn --bind :$PORT --workers 1 --threads 8 app:app
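To trigger this manually from a checkout of the repo (rather than from a build trigger), something like the following should work; the substitution values are placeholders:
gcloud builds submit --config cloudbuild.yaml \
  --substitutions=_IMAGE_NAME=my-app,_BUCKET_ID_SECRETS=my-secrets-bucket,_INSTANCE_NAME1=my-project:us-central1:my-db,_APP_DIR=application .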
Hope this helps someone, though possibly too specific (Python + Berglas) for the original OP.

How to deploy to AWS Beanstalk with GitLab CI

How to deploy a Node app to AWS Elastic Beanstalk with Docker and GitLab CI.
I've created a simple Node application and Dockerized it.
What I'm trying to do is deploy the application using GitLab CI.
This is what I have so far:
image: docker:git
services:
  - docker:dind
stages:
  - build
  - release
  - release-prod
variables:
  CI_REGISTRY: registry.gitlab.com
  CONTAINER_TEST_IMAGE: registry.gitlab.com/testapp/routing:$CI_COMMIT_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/testapp/routing:latest
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE -f Dockerfile.prod .
    - docker push $CONTAINER_TEST_IMAGE
release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master
release-prod:
  stage: release-prod
  script:
  when: manual
I'm stuck on the release-prod stage. I'm just not sure how I can deploy the app to AWS Beanstalk.
The docker images have been created and stored in the GitLab registry. All I want to do is instruct AWS Beanstalk to download the docker images from the GitLab registry and start the application.
I also have a Dockerrun.aws.json which defines the services.
Your Dockerrun.aws.json file is what Beanstalk uses as the final say in what is deployed.
The option I found to work for us was to make a custom docker image with the EB CLI installed so we can run eb deploy... from the gitlab-ci.yml file.
This requires AWS permissions for the runner to be able to access the AWS service, so a user or permissions come into play, but they would in any setup.
In the GitLab project's CI/CD settings, add the AWS user keys (ideally it would be set up to use an IAM role instead, but a user and keys will work; I'm not too familiar with getting temporary access, which might be the best option here, but again, I'm not sure how that works).
We use a custom EC2 instance as our runner to run the pipeline, so I'm not sure about shared runners; we had a concern about passing AWS user creds to a shared runner pipeline.
Build stage: build and push the docker image to our ECR repository (or whatever registry suits your use case).
Deploy stage: use a custom image stored in GitLab that has the EB CLI pre-installed, then run eb deploy env-name.
This is the Dockerfile we use for our PHP project. Some of the installs aren't necessary for your case... This could also be improved by adding a USER and pinning package versions, but it will create a docker image that has the EB CLI installed.
FROM node:12
RUN apt-get update && apt-get -y --allow-unauthenticated install apt-transport-https ca-certificates curl gnupg2 software-properties-common ruby-full \
  && add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
RUN apt-get update && apt-get -y --allow-unauthenticated install docker-ce \
  && apt-get -y install build-essential zlib1g-dev libssl-dev libncurses-dev libffi-dev libsqlite3-dev libreadline-dev libbz2-dev python-pip python3-pip
RUN git clone https://github.com/aws/aws-elastic-beanstalk-cli-setup.git \
  && ./aws-elastic-beanstalk-cli-setup/scripts/bundled_installer
RUN python3 --version && apt-get update && apt-get -y install python3-pip \
  && pip3 install awscli boto3 botocore && pip3 install boto3 botocore --upgrade
Example gitlab-ci.yml setup
release-prod:
  image: registry.gitlab.com/your-acct/project/custom-image
  stage: release-prod
  script:
    - service docker start
    - echo 'export PATH="/root/.ebcli-virtual-env/executables:$PATH"' >> ~/.bash_profile && source ~/.bash_profile
    - echo 'export PATH=/root/.pyenv/versions/3.7.2/bin:$PATH' >> /root/.bash_profile && source /root/.bash_profile
    - eb deploy your-environment
  when: manual
You could also bake the echo commands into the custom GitLab image, so all the job needs to run is eb deploy...
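For example, the custom image could set PATH itself; a rough sketch of the extra Dockerfile line, assuming the same install locations as the echo commands above:
# append to the eb-cli Dockerfile above so PATH is already set inside the image
ENV PATH="/root/.ebcli-virtual-env/executables:/root/.pyenv/versions/3.7.2/bin:${PATH}"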
Hope this helps a little
Although there are a couple of different ways to achieve this, I finally found a proper solution for my usage cases. I have documented it here: https://medium.com/voices-of-plusdental/gitlab-ci-deployment-for-php-applications-to-aws-elastic-beanstalk-automated-qa-test-environments-253ca4932d5b. Using eb deploy was the easiest and shortest version, and it also allows me to customize the instances in any way I want.

Docker, GitLab and deploying an image to AWS EC2

I am trying to learn how to create a .gitlab-ci.yml and am really struggling to find the resources to help me. I am using dind to create a docker image and push it to Docker Hub, then trying to log into my AWS EC2 instance, which also has docker installed, to pull the image and start it running.
I have successfully managed to build my image using GitLab and push it to Docker Hub, but now I have the problem of trying to log into the EC2 instance to pull the image.
My first naive attempt looks like this:
#.gitlab-ci.yml
image: docker:18.09.7
variables:
  DOCKER_REPO: myrepo
  IMAGE_BASE_NAME: my-image-name
  IMAGE: $DOCKER_REPO/$IMAGE_BASE_NAME:$CI_COMMIT_REF_SLUG
  CONTAINER_NAME: my-container-name
services:
  - docker:18.09.7-dind
before_script:
  - docker login -u "$DOCKER_REGISTRY_USER" -p "$DOCKER_REGISTRY_PASSWORD"
after_script:
  - docker logout
stages:
  - build
  - deploy
build:
  stage: build
  script:
    - docker build . -t $IMAGE -f $PWD/staging.Dockerfile
    - docker push $IMAGE
deploy:
  stage: deploy
  variables:
    RELEASE_IMAGE: $DOCKER_REPO/$IMAGE_BASE_NAME:latest
  script:
    - docker pull $IMAGE
    - docker tag $IMAGE $IMAGE
    - docker push $IMAGE
    - docker tag $IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    # So far so good - this is where it starts to go pear-shaped
    - apt-get install sudo -y
    - sudo apt install openssh-server -y
    - ssh -i $AWS_KEY $AWS_URL "docker pull $RELEASE_IMAGE"
    - ssh -i $AWS_KEY $AWS_URL "docker rm --force $CONTAINER_NAME"
    - ssh -i $AWS_KEY $AWS_URL "docker run -p 3001:3001 -p 3002:3002 -w "/var/www/api" --name ${CONTAINER_NAME} ${IMAGE}"
It seems that whatever operating system the docker image is built upon does not have apt-get, ssh and a bunch of other useful commands installed. I receive the following error:
/bin/sh: eval: line 114: apt-get: not found
Can anyone help me with the commands I need to log into my EC2 instance and pull and run the image in gitlab-ci.yml using this docker:dind image? Upon which operating system is the docker image built?
The official Docker image is based on Alpine Linux, which uses the apk package manager.
Try replacing your apt-get commands with the following instead:
- apk add openssh-client
There is no need to install sudo or openssh-server; only the SSH client is needed, so those steps were removed.
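Putting it together, the deploy script might look roughly like this; the StrictHostKeyChecking option, the chmod on the key, and the -d flag are assumptions (they avoid an interactive host-key prompt on a fresh runner and keep the job from blocking on the running container), so adjust them to your needs:
deploy:
  stage: deploy
  variables:
    RELEASE_IMAGE: $DOCKER_REPO/$IMAGE_BASE_NAME:latest
  script:
    - apk add openssh-client
    - chmod 600 $AWS_KEY
    - ssh -o StrictHostKeyChecking=no -i $AWS_KEY $AWS_URL "docker pull $RELEASE_IMAGE"
    - ssh -o StrictHostKeyChecking=no -i $AWS_KEY $AWS_URL "docker rm --force $CONTAINER_NAME || true"
    - ssh -o StrictHostKeyChecking=no -i $AWS_KEY $AWS_URL "docker run -d -p 3001:3001 -p 3002:3002 -w /var/www/api --name $CONTAINER_NAME $RELEASE_IMAGE"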