GitLab CI: create a Docker image and push it to AWS

I set up a Docker registry (ECR) on AWS. From my GitLab repository I'd like to set up CI to automatically build images and push them to that registry.
I was following a tutorial to set everything up, but when running the example I get the error:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
My .gitlab-ci.yml file looks like this:
image: docker:latest

variables:
  REPOSITORY_URL: <aws-url>/<registry>/outsite-slackbot

services:
  - docker:dind

before_script:
  - apk add --no-cache curl jq python py-pip
  - pip install awscli

stages:
  - build

build:
  stage: build
  script:
    - $(aws ecr get-login --no-include-email --region eu-west-1)

There is no problem with your Dockerfile; the problem is that you aren't connected to the Docker daemon. So check these steps:
Are you logged in as root? (sudo su or sudo -i)
Start the Docker service (service docker start)
Then follow the tutorial :)
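For what it's worth, here is a minimal .gitlab-ci.yml sketch that usually clears this error on shared runners (an addition beyond the answer above, assuming the docker:dind service): the job must also be told where the dind daemon listens, which is what the two variables below do.

image: docker:latest

services:
  - docker:dind

variables:
  # "docker" is the hostname of the dind service container
  DOCKER_HOST: tcp://docker:2375
  # disable TLS on the daemon socket; recent dind images enable it by default
  DOCKER_TLS_CERTDIR: ""

build:
  stage: build
  script:
    - docker info   # should now reach the daemon instead of failing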

Travis Docker build and deploy to Docker on AWS EC2

I am developing a microservice-architecture application using Spring Boot, with Angular 7 for the front end. I need to use Travis to deploy it to the Docker I have installed on my AWS EC2 instance.
What I have done so far is create a .travis.yml file:
language: java
jdk: oraclejdk8

services:
  - mysql
  - rabbitmq
  - redis-server

before_install:
  - mysql -e 'CREATE DATABASE IF NOT EXISTS mydb;'

image: my-service/aws-cli-docker

variables:
  AWS_ACCESS_KEY_ID: "##########"
  AWS_SECRET_ACCESS_KEY: "##########"

deploy_stage:
  stage: deploy
  environment: Production
  only:
    - master
  script:
    - aws ssm send-command --document-name "AWS-RunShellScript" --instance-ids "i-########" --parameters '{"commands":["sudo docker-compose -f /home/ubuntu/docker-compose.yml up -d --no-deps --build"],"executionTimeout":["3600"]}' --timeout-seconds 600 --region us-east-2
My Dockerfile is in the root of the service's source and looks like this:
FROM java:8
VOLUME /tmp
ADD ./target/myService-0.0.1-SNAPSHOT.jar my-service.jar
EXPOSE 8081
ENTRYPOINT [ "sh", "-c", "java -Xms64m -Xmx512m -XX:+UseTLAB -XX:+ResizeTLAB -XX:ReservedCodeCacheSize=128m -XX:+UseCodeCacheFlushing -jar /my-service.jar" ]
And my docker-compose file is at /home/ubuntu on my EC2 instance:
services:
  vehicle-service:
    image: my-service/aws-cli-docker
    ports:
      - 8081:8081
My Travis build works and the tests run, but there is no deployment and no errors about the Docker image. Can someone figure out what I'm missing?
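One observation, offered as a sketch rather than from the thread: image:, variables:, and deploy_stage: are GitLab CI keys, and Travis ignores configuration keys it does not recognise, which would explain a green build with no deployment and no errors. In a .travis.yml, this deployment would normally be declared with a deploy section, roughly:

deploy:
  provider: script
  script: aws ssm send-command --document-name "AWS-RunShellScript" --instance-ids "i-########" --parameters '{"commands":["sudo docker-compose -f /home/ubuntu/docker-compose.yml up -d --no-deps --build"],"executionTimeout":["3600"]}' --timeout-seconds 600 --region us-east-2
  on:
    branch: master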

Dockerfile - install Jenkins on AWS

New to AWS so any help would be appreciated.
I'm attempting to run Jenkins through Docker on AWS. I found this article: https://docs.aws.amazon.com/aws-technical-content/latest/jenkins-on-aws/containerized-deployment.html
Can anyone share a better step-by-step tutorial for this? The page above seems incomplete.
It says "The Dockerfile should also contain the steps to install the Jenkins Amazon ECS plugin" but does not show how to install the plugin from the Dockerfile.
Thanks
Please follow the steps below:
Launch an EC2 cluster according to your needs.
Install Docker on your local machine. For example, on Ubuntu: sudo apt-get install docker.io
systemctl start docker
Create a new folder for our Jenkins image, and inside it create a Dockerfile with the following contents:
FROM jenkins
COPY plugins.txt /usr/share/jenkins/plugins.txt
RUN /usr/local/bin/plugins.sh /usr/share/jenkins/plugins.txt
Create plugins.txt in the same folder with the following line:
amazon-ecs:1.3
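A side note beyond the original answer: newer jenkins/jenkins images have dropped the plugins.sh helper in favour of jenkins-plugin-cli, so the equivalent Dockerfile there would look roughly like this:

FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
# jenkins-plugin-cli resolves and downloads every plugin listed in the file
RUN jenkins-plugin-cli --plugin-file /usr/share/jenkins/ref/plugins.txt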
Log in to ECR using the AWS CLI (run aws configure first to set your credentials):
aws ecr get-login --region <REGION>
Run the docker login command that this prints. Then build and tag the image:
sudo docker build -t jenkins_master .
sudo docker tag jenkins_master:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Create a repository in ECR for this image:
aws ecr create-repository --repository-name jenkins_master
Push the image to AWS ECR:
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_master:latest
Our Jenkins image is ready, but the data this Jenkins server stores will not be persistent. To store data permanently, we will create another image that declares a volume with a mount point. Create a new directory for this image, and inside it another Dockerfile with the content below:
FROM jenkins
VOLUME ["/var/jenkins_home"]
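To see how the two images fit together before involving ECS, a quick local sketch (not part of the original answer; container names are illustrative): the data container only needs to exist, and the master borrows its volume.

sudo docker create --name jenkins_dv jenkins_dv          # data-only container; never needs to run
sudo docker run -d -p 8080:8080 --volumes-from jenkins_dv --name jenkins_master jenkins_master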
Again follow the same commands to build, tag, create a repository for, and push this new image to ECR:
sudo docker build -t jenkins_dv .
sudo docker tag jenkins_dv:latest <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
aws ecr create-repository --repository-name jenkins_dv
sudo docker push <AWS ACC ID>.dkr.ecr.<REGION>.amazonaws.com/jenkins_dv:latest
Now our images are ready, and we will run them as a service on our ECS cluster. For that we need to install ecs-cli; on Linux:
sudo curl -o /usr/local/bin/ecs-cli https://s3.amazonaws.com/amazon-ecs-cli/ecs-cli-linux-amd64-latest
sudo chmod +x /usr/local/bin/ecs-cli
Create a new text file (docker_compose.txt, used in the final step) with the contents below, which hold the Jenkins configuration:
jenkins_master:
  image: jenkins_master
  cpu_shares: 100
  mem_limit: 2000M
  ports:
    - "8080:8080"
    - "50000:50000"
  volumes_from:
    - jenkins_dv

jenkins_dv:
  image: jenkins_dv
  cpu_shares: 100
  mem_limit: 500M
Finally, bring this service up on your newly created cluster using the file above:
ecs-cli compose --file docker_compose.txt service up --cluster <cluster_name>
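If ecs-cli has not yet been pointed at your cluster, it needs a one-time configuration step first, roughly like this (the cluster name and region are placeholders):

ecs-cli configure --cluster <cluster_name> --region <REGION>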
Hope this helps!

Docker swarm and AWS ECR authentication using API keys

I'm having trouble pulling Docker images from AWS ECR when deploying a stack to my Docker swarm cluster running on AWS EC2.
If I ssh to any of the nodes, authenticate manually, and pull an image, there are no issues.
This works:
root@manager1 ~ # `aws ecr get-login --no-include-email --region us-west-2`
Login Succeeded
root@manager1 ~ # docker pull *****.dkr.ecr.us-west-2.amazonaws.com/myapp:latest
However, if I try deploying a stack or service:
docker stack deploy --compose-file docker-compose.yml myapp
the image can't be found, both on the node where I already authenticated and on all other manager/worker nodes.
The error from docker service ps myapp:
"No such image: *****.dkr.ecr.us-west-2.amazonaws.com/myapp:latest"
OS: RHEL 7.3
Docker version: Docker version 1.13.1-cs5, build 21c42d8
Anyone have a solution for this issue?
Try this command; the --with-registry-auth flag sends your registry credentials along to the swarm nodes, so that every node, not just the one you logged in on, can pull the image:
docker login -u Username -p password *****.dkr.ecr.us-west-2.amazonaws.com && docker stack deploy --compose-file docker-compose.yml myapp --with-registry-auth
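The same fix sketched with the ECR login helper instead of a static username/password (assuming the AWS CLI is configured on the node you deploy from):

eval $(aws ecr get-login --no-include-email --region us-west-2)
docker stack deploy --compose-file docker-compose.yml myapp --with-registry-auth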

Configuring Bitbucket Pipelines with Docker to connect to AWS

I am trying to set up Bitbucket pipelines to deploy to ECS as here: https://confluence.atlassian.com/bitbucket/deploy-to-amazon-ecs-892623902.html
These instructions say how to push to Docker Hub, but I want to push the image to Amazon's image repository. I have set AWS_SECRET_ACCESS_KEY and AWS_ACCESS_KEY_ID in my Bitbucket parameters list, and I can run these commands locally with no problems (the keys defined in ~/.aws/credentials). However, I keep getting the error 'no basic auth credentials'. I am wondering if it is not recognising the variables somehow. The docs here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html say that:
"The AWS CLI uses a provider chain to look for AWS credentials in a number of different places, including system or user environment variables and local AWS configuration files."
So I am not sure why it isn't working. My Bitbucket Pipelines configuration is as follows (I have not included anything unnecessary):
- export IMAGE_NAME=$AWS_REPO_ID.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com/my/repo-name:$BITBUCKET_COMMIT
# build the Docker image (this will use the Dockerfile in the root of the repo)
- docker build -t $IMAGE_NAME .
# authenticate with the AWS repo (this gets and runs the docker login command)
- eval $(aws ecr get-login --region $AWS_DEFAULT_REGION)
# push the new Docker image to the repo
- docker push $IMAGE_NAME
Is there a way of specifying the credentials for aws ecr get-login to use? I even tried this, but it doesn't work:
- mkdir -p ~/.aws
- echo -e "[default]\n" > ~/.aws/credentials
- echo -e "aws_access_key_id = $AWS_ACCESS_KEY_ID\n" >> ~/.aws/credentials
- echo -e "aws_secret_access_key = $AWS_SECRET_ACCESS_KEY\n" >> ~/.aws/credentials
Thanks
I use an alternative method to build and push Docker images to AWS ECR that requires no environment variables:
image: amazon/aws-cli

options:
  docker: true

oidc: true

aws:
  oidc-role: arn:aws:iam::123456789012:role/BitBucket-ECR-Access

pipelines:
  default:
    - step:
        name: Build and push to ECR
        script:
          - aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
          - docker build -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1 .
          - docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/myimage:0.0.1
You will need to update the role ARN to match a Role you have created in your AWS IAM console with sufficient permissions.
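To confirm the role is actually assumed inside the step, one simple check (my addition, not part of the original answer) is to print the caller identity at the top of the script:

- aws sts get-caller-identity   # should report the assumed BitBucket-ECR-Access role rather than an IAM user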
Try this:
bitbucket-pipelines.yml
pipelines:
  custom:
    example-image-builder:
      - step:
          image: python:3
          script:
            - export CLONE_ROOT=${BITBUCKET_CLONE_DIR}/../example
            - export IMAGE_LOCATION=<ENTER IMAGE LOCATION HERE>
            - export BUILD_CONTEXT=${BITBUCKET_CLONE_DIR}/build/example-image-builder/dockerfile
            - pip install awscli
            - aws s3 cp s3://example-deployment-bucket/deploy-keys/bitbucket-read-key .
            - chmod 0400 bitbucket-read-key
            - ssh-agent bash -c 'ssh-add bitbucket-read-key; git clone --depth 1 git@bitbucket.org:example.git -b master ${CLONE_ROOT}'
            - cp ${CLONE_ROOT}/requirements.txt ${BUILD_CONTEXT}/requirements.txt
            - eval $(aws ecr get-login --region us-east-1 --no-include-email)
            - docker build --no-cache --file=${BUILD_CONTEXT}/dockerfile --build-arg AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID} --build-arg AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} --tag=${IMAGE_LOCATION} ${BUILD_CONTEXT}
            - docker push ${IMAGE_LOCATION}

options:
  docker: true
dockerfile
FROM python:3
MAINTAINER Me <me@me.me>
COPY requirements.txt requirements.txt
ENV DEBIAN_FRONTEND noninteractive
ARG AWS_ACCESS_KEY_ID
ARG AWS_SECRET_ACCESS_KEY
RUN apt-get update && apt-get -y install stuff
ENTRYPOINT ["/bin/bash"]
I am running out of time, so for now I included more than just the answer to your question. But this would be a good enough template to work from. Ask questions in the comments if there is any line you don't understand and I will edit the answer.
I had the same issue. The error is mainly due to an old version of awscli, so you need to use a Docker image with a more recent awscli; for my project I use linkmobility/maven-awscli.
You need to set the environment variables in Bitbucket, then make two small changes to your pipeline:
image: Docker-Image-With-awscli
eval $(aws ecr get-login --no-include-email --region ${AWS_DEFAULT_REGION})
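Assembled into a complete pipeline, a minimal sketch (my combination of the fragments above; it assumes the AWS_* variables are defined in the Bitbucket repository settings and IMAGE_NAME is a placeholder for your ECR image URI):

image: linkmobility/maven-awscli

options:
  docker: true

pipelines:
  default:
    - step:
        script:
          # the newer awscli in this image understands --no-include-email
          - eval $(aws ecr get-login --no-include-email --region ${AWS_DEFAULT_REGION})
          - docker build -t ${IMAGE_NAME} .
          - docker push ${IMAGE_NAME}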

Docker trying to push 900MB to AWS EC2 container service

I've created a Docker image locally, and it's about 900MB. I've set up the AWS container service and I'm now trying to push my Docker image to it. The image is a Node (v6) application.
It's taking too long, and I feel like I'm doing something wrong here. It shouldn't be trying to push 900MB (the size of my image), should it?
Is there a way to simply push my Dockerfile to AWS and have it pull my code from GitHub, build the image on the server, and then run it?
Here is my Dockerfile:
FROM node:6.2.0
# Create app directory inside the container
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies
COPY package.json /usr/src/app/
RUN npm install
# Bundle app source
COPY . /usr/src/app
EXPOSE 7000
CMD [ "npm", "start" ]
The command I'm running to push my Docker image is the following:
docker push <id>.dkr.ecr.ap-southeast-2.amazonaws.com/name:latest
Before this I ran the following:
aws ecr get-login --region ap-southeast-2
docker login -u AWS -p <token> -e none <aws_url>
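An aside, not from the thread: pushes this size are usually dominated by the COPY . /usr/src/app layer picking up node_modules and other local artifacts, and docker push only uploads layers the registry doesn't already have, so later pushes should be far faster. A conventional first fix is a .dockerignore next to the Dockerfile so those paths never enter the build context:

# .dockerignore — paths excluded from the docker build context
node_modules
npm-debug.log
.git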