Docker, GitLab and deploying an image to AWS EC2

I am trying to learn how to create a .gitlab-ci.yml and am really struggling to find the resources to help me. I am using dind to create a docker image and push it to Docker Hub, then trying to log into my AWS EC2 instance, which also has docker installed, to pull the image and start it running.
I have successfully managed to build my image using GitLab and pushed it to Docker Hub, but now I have the problem of trying to log into the EC2 instance to pull the image.
My first naive attempt looks like this:
#.gitlab-ci.yml
image: docker:18.09.7

variables:
  DOCKER_REPO: myrepo
  IMAGE_BASE_NAME: my-image-name
  IMAGE: $DOCKER_REPO/$IMAGE_BASE_NAME:$CI_COMMIT_REF_SLUG
  CONTAINER_NAME: my-container-name

services:
  - docker:18.09.7-dind

before_script:
  - docker login -u "$DOCKER_REGISTRY_USER" -p "$DOCKER_REGISTRY_PASSWORD"

after_script:
  - docker logout

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker build . -t $IMAGE -f $PWD/staging.Dockerfile
    - docker push $IMAGE

deploy:
  stage: deploy
  variables:
    RELEASE_IMAGE: $DOCKER_REPO/$IMAGE_BASE_NAME:latest
  script:
    - docker pull $IMAGE
    - docker tag $IMAGE $IMAGE
    - docker push $IMAGE
    - docker tag $IMAGE $RELEASE_IMAGE
    - docker push $RELEASE_IMAGE
    # So far so good - this is where it starts to go pear-shaped
    - apt-get install sudo -y
    - sudo apt install openssh-server -y
    - ssh -i $AWS_KEY $AWS_URL "docker pull $RELEASE_IMAGE"
    - ssh -i $AWS_KEY $AWS_URL "docker rm --force $CONTAINER_NAME"
    - ssh -i $AWS_KEY $AWS_URL "docker run -p 3001:3001 -p 3002:3002 -w "/var/www/api" --name ${CONTAINER_NAME} ${IMAGE}"
It seems that whatever operating system the docker image is built upon does not have apt-get, ssh and a bunch of other useful commands installed. I receive the following error:
/bin/sh: eval: line 114: apt-get: not found
Can anyone help me with the commands I need to log into my EC2 instance and pull and run the image in gitlab-ci.yml using this docker:dind image? Upon which operating system is the docker image built?

The official Docker image is based on Alpine Linux, which uses the apk package manager.
Try replacing your apt-get commands with the following instead:
- apk add openssh-client
There is no need to install sudo or openssh-server; the SSH client is all the job requires, so those steps are dropped.
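With that change, the tail of the deploy script might look roughly like this - a sketch only, assuming $AWS_KEY is a file-type CI variable holding the private key and $AWS_URL is of the form ec2-user@host, as in the question:
    - apk add --no-cache openssh-client
    - chmod 600 "$AWS_KEY"   # assumes AWS_KEY is a file-type variable containing the key
    - ssh -o StrictHostKeyChecking=no -i "$AWS_KEY" "$AWS_URL" "docker pull $RELEASE_IMAGE"
    - ssh -o StrictHostKeyChecking=no -i "$AWS_KEY" "$AWS_URL" "docker rm --force $CONTAINER_NAME || true"
    - ssh -o StrictHostKeyChecking=no -i "$AWS_KEY" "$AWS_URL" "docker run -d -p 3001:3001 -p 3002:3002 -w /var/www/api --name $CONTAINER_NAME $RELEASE_IMAGE"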

Related

How to deploy to AWS Beanstalk with GitLab CI

How to deploy a Node app on AWS Elastic Beanstalk with Docker and GitLab CI.
I've created a simple Node application and Dockerized it.
What I'm trying to do is deploy my application using GitLab CI.
This is what I have so far:
image: docker:git
services:
  - docker:dind
stages:
  - build
  - release
  - release-prod
variables:
  CI_REGISTRY: registry.gitlab.com
  CONTAINER_TEST_IMAGE: registry.gitlab.com/testapp/routing:$CI_COMMIT_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/testapp/routing:latest
before_script:
  - echo "$CI_REGISTRY_PASSWORD" | docker login -u "$CI_REGISTRY_USER" --password-stdin "$CI_REGISTRY"
build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE -f Dockerfile.prod .
    - docker push $CONTAINER_TEST_IMAGE
release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master
release-prod:
  stage: release-prod
  script:
  when: manual
I'm stuck on the release-prod stage. I'm just not sure how I can deploy the app to AWS Beanstalk.
The docker images have been created and stored in the GitLab registry. All I want to do is instruct AWS Beanstalk to download the docker images from the GitLab registry and start the application.
I also have a Dockerrun.aws.json which defines the services.
Your Dockerrun.aws.json file is what Beanstalk uses as the final say in what is deployed.
The option I found to work for us was to make a custom docker image with the eb cli installed so we can run eb deploy... from the gitlab-ci.yml file.
This requires the runner to have AWS permissions to access the service, so a user or IAM permissions come into play - but they would with any setup.
In the GitLab project's CI/CD settings, add the AWS user keys (ideally it would be set up to use an IAM role instead, but user/keys will work - I'm not too familiar with getting temporary access, which might be the best fit here, but again, I'm not sure how that works).
We use a custom EC2 instance as our runner to run the pipeline, so I'm not sure about shared runners - we had a concern about passing AWS user creds to a shared runner pipeline...
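The keys added in the CI/CD settings surface as ordinary environment variables inside the job, and the aws and eb CLIs pick up the standard names automatically. A quick sanity check you could drop into the deploy script (it assumes AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION are configured as masked project variables):
    - aws sts get-caller-identity   # prints the account/user the runner is authenticating as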
build stage:
build and push the docker image to our ECR repository (or wherever fits your use case)
deploy stage:
use a custom image stored in GitLab that has the eb cli pre-installed, then run eb deploy env-name
This is the Dockerfile we use for our PHP project. Some of the installs aren't necessary for your case... It could also be improved by adding a USER and pinning package versions, but it will create a docker image that has the eb cli installed.
FROM node:12
RUN apt-get update && apt-get -y --allow-unauthenticated install apt-transport-https ca-certificates curl gnupg2 software-properties-common ruby-full \
    && add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
RUN apt-get update && apt-get -y --allow-unauthenticated install docker-ce \
    && apt-get -y install build-essential zlib1g-dev libssl-dev libncurses-dev libffi-dev libsqlite3-dev libreadline-dev libbz2-dev python-pip python3-pip
RUN git clone https://github.com/aws/aws-elastic-beanstalk-cli-setup.git \
    && ./aws-elastic-beanstalk-cli-setup/scripts/bundled_installer
RUN python3 --version && apt-get update && apt-get -y install python3-pip \
    && pip3 install awscli boto3 botocore && pip3 install boto3 botocore --upgrade
Example gitlab-ci.yml setup
release-prod:
  image: registry.gitlab.com/your-acct/project/custom-image
  stage: release-prod
  script:
    - service docker start
    - echo 'export PATH="/root/.ebcli-virtual-env/executables:$PATH"' >> ~/.bash_profile && source ~/.bash_profile
    - echo 'export PATH=/root/.pyenv/versions/3.7.2/bin:$PATH' >> /root/.bash_profile && source /root/.bash_profile
    - eb deploy your-environment
  when: manual
You could also bake the echo commands into the custom GitLab image so that all you need to run is eb deploy...
Hope this helps a little
Although there are a couple of different ways to achieve this, I finally found a proper solution for my use case. I have documented it here: https://medium.com/voices-of-plusdental/gitlab-ci-deployment-for-php-applications-to-aws-elastic-beanstalk-automated-qa-test-environments-253ca4932d5b. Using eb deploy was the easiest and shortest version, and it also lets me customize the instances in any way I want.

Bitbucket Pipelines to build Java app, Docker image and push it to AWS ECR?

I am setting up Bitbucket Pipelines for my Java app. What I want to achieve is that whenever I merge something into the master branch, Bitbucket fires the pipeline, which in the first step builds and tests my application, and in the second step builds a Docker image from it and pushes it to ECR. The problem is that the second step can't use the JAR file made in the first step, because every step runs in a separate, fresh Docker container. Any ideas how to solve it?
My current files are:
1) Bitbucket-pipelines.yaml
pipelines:
  branches:
    master:
      - step:
          name: Build and test application
          services:
            - docker
          image: openjdk:11
          caches:
            - gradle
          script:
            - apt-get update
            - apt-get install -y python-pip
            - pip install --no-cache-dir docker-compose
            - bash ./gradlew clean build test testIntegration
      - step:
          name: Build and push image
          services:
            - docker
          image: atlassian/pipelines-awscli
          caches:
            - gradle
          script:
            - echo $(aws ecr get-login --no-include-email --region us-west-2) > login.sh
            - sh login.sh
            - docker build -f Dockerfile -t my-application .
            - docker tag my-application:latest 212234103948.dkr.ecr.us-west-2.amazonaws.com/my-application:latest
            - docker push 212234103948.dkr.ecr.us-west-2.amazonaws.com/my-application:latest
2) Dockerfile:
FROM openjdk:11
VOLUME /tmp
EXPOSE 8080
COPY build/libs/*.jar app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
And the error I receive:
Step 4/5 : COPY build/libs/*.jar app.jar
COPY failed: no source files were specified
I have found the solution, and it's quite simple - we should just use the "artifacts" feature, so in the first step the additional lines:
artifacts:
  - build/libs/*.jar
solve the problem.
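For context, a trimmed sketch of how the first step might look with the artifact declared (step name and Gradle command taken from the pipeline above; the second step then finds build/libs/*.jar in its workspace before docker build runs):
- step:
    name: Build and test application
    image: openjdk:11
    script:
      - bash ./gradlew clean build test testIntegration
    artifacts:
      # keep the built JARs so later steps can COPY them into the image
      - build/libs/*.jar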

Deploy Docker Containers from Docker Cloud

I'm new to Docker and am trying to learn more about best practices for deploying Dockerized images. I've built some images on my development host using the Dockerfile and docker-compose.yml below.
After building the images, I ssh'd to my production server, an Amazon Linux flavored T2.micro instance on AWS's EC2 service. There I installed docker and docker-compose, then tried to build my images, but ran out of RAM. I therefore published the images I had built on my local host to Docker Cloud, and I now wish to deploy those images from Docker Cloud on the AWS instance.
How can I achieve this? I'd be very grateful for any help others can offer!
Dockerfile:
# Specify base image
FROM andreptb/oracle-java:8-alpine
# Specify author / maintainer
MAINTAINER Douglas Duhaime <douglas.duhaime@gmail.com>
# Add source to a directory and use that directory
# NB: /app is a reserved directory in tomcat container
ENV APP_PATH="/lts-app"
RUN mkdir "$APP_PATH"
ADD . "$APP_PATH"
WORKDIR "$APP_PATH"
##
# Build BlackLab
##
RUN apk add --update --no-cache \
    wget \
    tar \
    git
# Store the path to the maven home
ENV MAVEN_HOME="/usr/lib/maven"
# Add maven and java to the path
ENV PATH="$MAVEN_HOME/bin:$JAVA_HOME/bin:$PATH"
# Install Maven
RUN MAVEN_VERSION="3.3.9" && \
    cd "/tmp" && \
    wget "http://archive.apache.org/dist/maven/maven-3/$MAVEN_VERSION/binaries/apache-maven-$MAVEN_VERSION-bin.tar.gz" -O - | tar xzf - && \
    mv "/tmp/apache-maven-$MAVEN_VERSION" "$MAVEN_HOME" && \
    ln -s "$MAVEN_HOME/bin/mvn" "/usr/bin/mvn" && \
    rm -rf "/tmp/*"
# Get the BlackLab source
RUN git clone "git://github.com/INL/BlackLab.git"
# Build BlackLab with Maven
RUN cd "BlackLab" && \
mvn clean install
##
# Build Python + Node dependencies
##
# Install system deps with Alpine Linux package manager
RUN apk add --update --no-cache \
    g++ \
    gcc \
    make \
    openssl-dev \
    python3-dev \
    python \
    py-pip \
    nodejs
# Install Python dependencies
RUN pip install -r "requirements.txt" && \
    npm install --no-optional && \
    npm run build
# Store Mongo service name as mongo host
ENV MONGO_HOST=mongo_service
ENV TOMCAT_HOST=tomcat_service
ENV TOMCAT_WEBAPPS=/tomcat_webapps/
# Make ports available
EXPOSE 7082
# Seed the db
CMD npm run seed && \
    gunicorn -b 0.0.0.0:7082 --access-logfile - --reload server.app:app
docker-compose.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    # use the image from the Dockerfile in the cwd
    build: .
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
To solve this problem, I followed advice from Stack Overflow user @MazelTov and built the containers on my local OSX development machine, published the images to Docker Cloud, then pulled those images down onto my production server (AWS EC2) and ran them there.
Install Dependencies
I'll try and outline the steps I followed below in case they help others. Please note these steps require you to have docker and docker-compose installed on your development and production machines. I used the gui installer to install Docker for Mac.
Build Images
After writing a Dockerfile and docker-compose.yml file, you can build your images with docker-compose up --build.
Upload Images to Docker Cloud
Once the images are built, you can upload them to Docker Cloud with the following steps. First, create an account on Docker Cloud.
Then store your Docker Cloud username in an environment variable, so your ~/.bash_profile should contain export DOCKER_ID_USER='yaledhlab' (use your own username though).
Next login to your account from your developer machine:
docker login
Once you're logged in, list your running containers to see the image names in play:
docker ps
This will display something like:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
89478c386661 yaledhlab/let-them-speak-web "/bin/sh -c 'npm run…" About an hour ago Up About an hour 0.0.0.0:7082->7082/tcp letthemspeak_web_1
5e9c75d29051 training/webapp:latest "python app.py" 4 hours ago Up 4 hours 0.0.0.0:5000->5000/tcp heuristic_mirzakhani
890f7f1dc777 bitnami/tomcat:latest "/app-entrypoint.sh …" 4 hours ago Up About an hour 0.0.0.0:8080->8080/tcp letthemspeak_tomcat_service_1
09d74e36584d mongo "docker-entrypoint.s…" 4 hours ago Up About an hour 0.0.0.0:27017->27017/tcp letthemspeak_mongo_service_1
For each of the images you want to publish to Docker Cloud, run:
docker tag image_name $DOCKER_ID_USER/my-uploaded-image-name
docker push $DOCKER_ID_USER/my-uploaded-image-name
For example, to upload mywebapp_web to your user's account on Docker cloud, you can run:
docker tag mywebapp_web $DOCKER_ID_USER/web
docker push $DOCKER_ID_USER/web
You can then run open https://cloud.docker.com/swarm/$DOCKER_ID_USER/repository/list to see your uploaded images.
Deploy Images
Finally, you can deploy your images on EC2 with the following steps. First, install Docker and Docker-Compose on the Amazon-flavored EC2 instance:
# install docker
sudo yum install docker -y
# start docker
sudo service docker start
# allow ec2-user to run docker
sudo usermod -a -G docker ec2-user
# get the docker-compose binaries
sudo curl -L https://github.com/docker/compose/releases/download/1.20.1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
# change the permissions on the source
sudo chmod +x /usr/local/bin/docker-compose
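# quick sanity check that both installs worked (reported versions will vary)
docker --version
docker-compose --version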
Log out, then log back in to update your user's groups. Then start a screen session to run the server in: screen. Once the screen starts, you can add a new docker-compose config file that specifies the paths to your deployed images. For example, I needed to fetch the let-them-speak-web image housed within yaledhlab's Docker Cloud account, so I changed the docker-compose.yml file above to the file below, which I named production.yml:
version: '2'
services:
  tomcat_service:
    image: 'bitnami/tomcat:latest'
    ports:
      - '8080:8080'
    volumes:
      - docker-data-tomcat:/bitnami/tomcat/data/
      - docker-data-blacklab:/lts-app/lts/
  mongo_service:
    image: 'mongo'
    command: mongod
    ports:
      - '27017:27017'
  web:
    image: 'yaledhlab/let-them-speak-web'
    # gain access to linked containers
    links:
      - mongo_service
      - tomcat_service
    # explicitly declare service dependencies
    depends_on:
      - mongo_service
      - tomcat_service
    # set environment variables
    environment:
      PYTHONUNBUFFERED: 'true'
    ports:
      - '7082:7082'
    volumes:
      - docker-data-tomcat:/tomcat_webapps
      - docker-data-blacklab:/lts-app/lts/
volumes:
  docker-data-tomcat:
  docker-data-blacklab:
Then the production compose file can be run with: docker-compose -f production.yml up. Finally, ssh in from another terminal and detach the screen with screen -D.
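If you would rather not keep the stack attached to a screen session, a variant with the same production.yml is to refresh the images and start everything detached:
docker-compose -f production.yml pull   # fetch the latest pushed images from the registry
docker-compose -f production.yml up -d  # start the stack in the background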
Yeah, that's true. Docker Cloud uses Docker Hub as its native registry for storing both public and private repositories. Once you push your images to Docker Hub, they are available in Docker Cloud.
Pulling images from Docker Hub is the opposite of pushing them. This works for both private and public repositories.
To download your images locally, I always export my Docker username to the shell session:
# export DOCKER_ID_USER="username"
In fact, I have this in my .bashrc profile.
Replacing the value of DOCKER_ID_USER with your Docker Cloud username.
Then Log in to Docker Cloud using the docker login command.
$ docker login
This logs you in using your Docker ID, which is shared between both Docker Hub and Docker Cloud.
You can now run docker pull command to get your images downloaded locally.
$ docker pull image:tag
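For example, to fetch the web image pushed in the earlier answer (repository name assumed from that example):
$ docker pull $DOCKER_ID_USER/web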
This is applicable to any Cloud Platform, not specific to AWS.
As you're new to Docker, here is my recommendation of the best Docker guides, covering Docker vs VMs and advanced topics like working with Docker Swarm and Kubernetes.

Docker pull can authenticate but run cannot

I built, tagged & published my first (ever) Docker image to Quay:
docker build -t myapp .
docker tag <imageId> quay.io/myorg/myapp:1.0.0-SNAPSHOT
docker login quay.io
docker push quay.io/myorg/myapp:1.0.0-SNAPSHOT
I then logged into Quay.io to confirm the tagged image was successfully pushed, and it was. So then I SSHed into a brand-spanking-new AWS EC2 instance and followed their instructions to install Docker:
sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user
sudo docker info
Interestingly enough, the sudo usermod -a -G docker ec2-user command doesn't seem to work as advertised, as I still need to prefix all my commands with sudo...
So I try to pull my tagged image:
sudo docker pull quay.io/myorg/myapp:1.0.0-SNAPSHOT
Please login prior to pull:
Username: myorguser
Password: <password entered>
1.0.0-SNAPSHOT: Pulling from myorg/myapp
<hashNum1>: Pull complete
<hashNum2>: Pull complete
<hashNum3>: Pull complete
<hashNum4>: Pull complete
<hashNum5>: Pull complete
<hashNum6>: Pull complete
Digest: sha256:<longHashNum>
Status: Downloaded newer image for quay.io/myorg/myapp:1.0.0-SNAPSHOT
So far, so good (I guess!). Let's see what images my local Docker engine knows about:
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Hmmm...that doesn't seem right. Oh well, let's try running a container for my (successfully?) pulled image:
sudo docker run -it -p 8080:80 -d --name myapp:1.0.0-SNAPSHOT myapp:1.0.0-SNAPSHOT
Unable to find image 'myapp:1.0.0-SNAPSHOT' locally
docker: Error response from daemon: repository myapp not found: does not exist or no pull access.
See 'docker run --help'.
Any idea where I'm going awry?
To list images, you need to use: docker images
When you pull, the image keeps its full registry name and tag. So if you wish to run it, you will need to use:
sudo docker run -it -p 8080:80 -d --name myapp:1.0.0-SNAPSHOT quay.io/myorg/myapp:1.0.0-SNAPSHOT
If you wish to use a short name, you need to retag it after the docker pull:
sudo docker tag quay.io/myorg/myapp:1.0.0-SNAPSHOT myapp:1.0.0-SNAPSHOT
After that, your docker run command will work. Note that docker ps lists containers that are running (or all containers, including stopped ones, when used with -a).

AWS CodeBuild - Unable to find DockerFile during build

Started playing with AWS CodeBuild.
The goal is to have a docker image as the final result, with Node.js, hapi and a sample app running inside.
Currently I have an issue with:
"unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/src049302811/src/Dockerfile: no such file or directory"
It appears during the BUILD stage.
Project details:
S3 bucket used as a source
ZIP file stored in respective S3 bucket contains buildspec.yml, package.json, sample *.js file and DockerFile.
aws/codebuild/docker:1.12.1 is used as a build environment.
When I'm building an image using Docker installed on my laptop there are no issues, so I can't understand which directory I need to specify to get rid of this error message.
Buildspec and DockerFile attached below.
Thanks for any comments.
buildspec.yml
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <id>.eu-west-1.amazonaws.com/<image>:latest
DockerFile
FROM alpine:latest
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /src
RUN cd /src; npm install hapi
EXPOSE 80
CMD ["node", "/src/server.js"]
OK, so the solution was simple.
The issue was related to the Dockerfile name.
CodeBuild was not accepting DockerFile (with a capital F; strangely, that was working locally), but Dockerfile (with a lower-case f) worked perfectly.
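If the file is tracked in git, a one-line rename keeps the fix in source control (the command assumes you are at the repository root):
git mv DockerFile Dockerfile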
Can you validate that Dockerfile exists in the root of the directory? One way of doing this would be to run ls -altr as part of the pre_build phase in your buildspec (even before the ECR login).
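For instance, the pre_build phase from the buildspec above with the listing added first (placement is only a suggestion):
  pre_build:
    commands:
      - ls -altr
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)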