CircleCI script to test against DynamoDB Local Fails - amazon-web-services

We have a CircleCI script that manages our deployment. I wanted to run DynamoDB Local so that we could test our DynamoDB requests. I've tried following the answers here, here and here. I've also tried using the DynamoDB Local image from Docker Hub, here. This is the closest I've gotten.
version: 2
jobs:
  setup-dynamodb:
    docker:
      - image: openjdk:15-jdk
    steps:
      - setup_remote_docker:
          version: 18.06.0-ce
      - run:
          name: run-dynamodb-local
          background: true
          shell: /bin/bash
          command: |
            curl -k -L -o dynamodb-local.tgz http://dynamodb-local.s3-website-us-west-2.amazonaws.com/dynamodb_local_latest.tar.gz
            tar -xzf dynamodb-local.tgz
            java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -port 8000 -sharedDb
  check-failed:
    docker:
      - image: golang:1.14.3
    steps:
      - checkout
      - setup_remote_docker:
          version: 18.06.0-ce
      - attach_workspace:
          at: /tmp/app/workspace
      - run:
          name: Install dockerize
          shell: /bin/bash
          command: |
            yum -y update && \
            yum -y install wget && \
            yum install -y tar.x86_64 && \
            yum clean all
            wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
            tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && \
            rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
          environment:
            DOCKERIZE_VERSION: v0.3.0
      - run:
          name: Wait for Local DynamoDB
          command: dockerize -wait tcp://localhost:8000 -timeout 1m
      - run:
          name: checkerr
          shell: /bin/bash
          command: |
            ls -laF /tmp/app/workspace/
            for i in $(seq 1 2); do
              f=$(printf "failed%d.txt" $i)
              value=$(</tmp/app/workspace/$f)
              if [[ "$value" != "nil" ]]; then
                echo "$f = $value"
                exit 1
              fi
            done
The problem I'm having is that all my tests are failing with error message dial tcp 127.0.0.1:8000: connect: connection refused. I'm not sure why this is happening. Do I need to expose the port from the container?

The reason is that the first job is completely separate from the second job.
In fact, you don't need the first one at all; adjust the second one as below:
check-failed:
  docker:
    - image: golang:1.14.3
    - image: amazon/dynamodb-local
  steps:
    - setup_remote_docker:
      ...
    ...
By the way, you don't need to install DynamoDB every time; you can run it as a container as well.
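For reference, a fuller sketch of the combined job might look like the following (untested; the image names and the dockerize wait step are carried over from the question, everything else follows the answer's suggestion). Secondary containers in a CircleCI job share a network with the primary container, so the Go container can reach DynamoDB Local on localhost:8000 without any port exposing:

```yaml
version: 2
jobs:
  check-failed:
    docker:
      - image: golang:1.14.3          # primary container: runs the steps
      - image: amazon/dynamodb-local  # secondary container: serves localhost:8000
    steps:
      - checkout
      - attach_workspace:
          at: /tmp/app/workspace
      - run:
          name: Wait for Local DynamoDB
          command: dockerize -wait tcp://localhost:8000 -timeout 1m
      # ... install-dockerize and checkerr steps as in the question ...
```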

Related

docker compose failing on gitlab-ci build stage

I am trying to build with gitlab-ci, but one of the stages fails and I get stuck on the build stage. It does not recognise python, and I am trying to install it so I can build the image and get it tested with Robot Framework.
gitlab-ci.yaml
image: python:latest

services:
  - name: docker:dind
    entrypoint: ["env", "-u", "DOCKER_HOST"]
    command: ["dockerd-entrypoint.sh"]

stages:
  - compile
  - build
  - test
  - deploy

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""
  MOUNT_POINT: /builds/$CI_PROJECT_PATH/mnt
  REPOSITORY_URL: $AWS_ACCOUNT_ID.dkr.ecr.eu-west-2.amazonaws.com/apps_web
  TASK_DEFINITION_NAME: apps_8000
  CLUSTER_NAME: QA-2
  SERVICE_NAME: apps_demo
  ARTIFACT_REPORT_PATH: "app/reports/"

before_script:
  - docker info
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE:web
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY

unittests:
  stage: test
  before_script:
    - python -m venv env
    - source env/bin/activate
    - python -m pip install --upgrade pip
    - pip install -r app/app-requirements.txt
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_TAG}
  image: ${DOCKER_IMAGE_TAG}
  script:
    - source env/bin/activate
    - python app/manage.py jenkins --enable-coverage
  artifacts:
    reports:
      junit: app/reports/junit.xml
    paths:
      - $ARTIFACT_REPORT_PATH
    expire_in: 30 days
    when: on_success
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"

migrations:
  stage: compile
  before_script:
    - python -m venv env
    - source env/bin/activate
    - pip install -r app/app-requirements.txt
  script:
    - python app/manage.py makemigrations
  artifacts:
    paths:
      - "app/*/migrations/*.py"
    expire_in: 1 day
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"

build:
  image:
    name: docker/compose:1.25.4
    entrypoint: [ "" ]
  stage: build
  variables:
    DOCKER_IMAGE_TAG: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_REF_NAME}
  before_script:
    - apt-get install python3
    - python -m venv env
    - source env/bin/activate
    - python -m pip install --upgrade pip
    - pip install -r app/app-requirements.txt
    - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
    - export WEB_IMAGE=$IMAGE:web
  script:
    - apk add --no-cache bash
    - chmod +x ./setup_env.sh
    - bash ./setup_env.sh
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker pull $IMAGE:web || true
    - docker-compose -f docker-compose.ci.yml build
    - docker push $IMAGE:web
    - docker tag app
    - docker build -t ${DOCKER_IMAGE_TAG} .
  after_script:
    - docker push ${DOCKER_IMAGE_TAG}
    - docker logout
  dependencies:
    - migrations
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"

deploy_qa:
  stage: deploy
  image: registry.gitlab.com/gitlab-org/cloud-deploy/aws-ecs:latest
  before_script:
    - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
    - export WEB_IMAGE=$IMAGE:web
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
  script:
    - echo $IMAGE
    - echo $WEB_IMAGE
    - docker pull $WEB_IMAGE
  environment:
    name: qa
    url: https://app.domain.com
  only:
    refs:
      - merge_requests
    variables:
      - $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "qa"
It is failing with error /bin/sh: eval: line 153: apt-get: not found
Like @slauth said in his comment, the docker/compose image is based on Alpine Linux, which uses the apk package manager, not apt. However, you most likely wouldn't be able to use a Debian image since you need the functionality of docker/compose. In that case, you can use apk to install python instead of apt-get, just like you're installing bash in the script section of this job:
apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python
(This comes from a related answer here).
However, installing and updating packages in a CI/CD pipeline is generally a bad practice, since depending on the number of pipelines you run, it can significantly slow down your development process. Instead, you can create your own Docker images based on whichever image you need, and install your packages there. For example, you can create a new image based on docker/compose and install python, bash, etc. there. Then push the new image either to Docker Hub, GitLab's built-in Docker registry, or another registry you might have available. Finally, in your .gitlab-ci.yml file, you simply change docker/compose to your new image.
For more information on this part, you can see another answer I wrote for a similar question here.
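As a sketch of that approach (the image tag and registry names below are hypothetical, not from the original post), a minimal custom image could look like:

```dockerfile
# Hypothetical custom CI image: docker/compose plus python3 and bash baked in
FROM docker/compose:1.25.4
RUN apk add --update --no-cache bash python3 \
    && ln -sf python3 /usr/bin/python
```

After building and pushing it once (e.g. `docker build -t registry.example.com/ci/compose-python:latest .` followed by `docker push`), the build job's `image: name:` can point at it instead of docker/compose:1.25.4, and the apt-get/apk install lines can be dropped from the job's scripts.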

How to run command inside Docker container

I'm new to Docker and I'm trying to understand the following setup.
I want to debug my docker container to see if it is receiving AWS credentials when running as a task in Fargate. It is suggested that I run the command:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
But I'm not sure how to do so.
The setup uses Gitlab CI to build and push the docker container to AWS ECR.
Here is the Dockerfile:
FROM rocker/tidyverse:3.6.3
RUN apt-get update && \
    apt-get install -y openjdk-11-jdk && \
    apt-get install -y liblzma-dev && \
    apt-get install -y libbz2-dev && \
    apt-get install -y libnetcdf-dev
COPY ./packrat/packrat.lock /home/project/packrat/
COPY initiate.R /home/project/
COPY hello.Rmd /home/project/
RUN install2.r packrat
RUN which nc-config
RUN Rscript -e 'packrat::restore(project = "/home/project/")'
RUN echo '.libPaths("/home/project/packrat/lib/x86_64-pc-linux-gnu/3.6.3")' >> /usr/local/lib/R/etc/Rprofile.site
WORKDIR /home/project/
CMD Rscript initiate.R
Here is the gitlab-ci.yml file:
image: docker:stable

variables:
  ECR_PATH: XXXXX.dkr.ecr.eu-west-2.amazonaws.com/
  DOCKER_DRIVER: overlay2
  DOCKER_TLS_CERTDIR: ""

services:
  - docker:dind

stages:
  - build
  - deploy

before_script:
  - docker info
  - apk add --no-cache curl jq py-pip
  - pip install awscli
  - chmod +x ./build_and_push.sh

build-rmarkdown-task:
  stage: build
  script:
    - export REPO_NAME=edelta/rmarkdown_report
    - export BUILD_DIR=rmarkdown_report
    - export REPOSITORY_URL=$ECR_PATH$REPO_NAME
    - ./build_and_push.sh
  when: manual
Here is the build and push script:
#!/bin/sh
$(aws ecr get-login --no-include-email --region eu-west-2)
docker pull $REPOSITORY_URL || true
docker build --cache-from $REPOSITORY_URL -t $REPOSITORY_URL ./$BUILD_DIR/
docker push $REPOSITORY_URL
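One caveat worth noting: the `aws ecr get-login` subcommand used in the script above was removed in AWS CLI v2. On newer CLI versions, an equivalent login (region taken from the script; the XXXXX account ID stays a placeholder as in the question) would be:

```shell
# AWS CLI v2 replacement for the removed `aws ecr get-login`
aws ecr get-login-password --region eu-west-2 \
  | docker login --username AWS --password-stdin XXXXX.dkr.ecr.eu-west-2.amazonaws.com
```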
I'd like to run this command on my docker container:
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
How do I run this command on container startup in Fargate?
To run a command inside a Docker container, you need to be inside the Docker container.
Step 1: Find the container ID / container name that you want to debug:
docker ps
A list of containers will be displayed; pick one of them.
Step 2: Run the following command:
docker exec -it <containerName/containerId> bash
Wait a few seconds and you will be inside the Docker container with an interactive bash session.
For more info, read https://docs.docker.com/engine/reference/commandline/exec/
Short answer: just replace the CMD:
CMD ["sh", "-c", "curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI && Rscript initiate.R"]
Long answer: you need to replace the CMD of the Dockerfile, as it currently only runs the Rscript.
You have two options: add an entrypoint or change the CMD (for CMD, see above).
Create an entrypoint.sh that runs the debug command only when you want to debug:
#!/bin/sh
if [ "${IS_DEBUG}" = true ]; then
  echo "Container running in debug mode"
  curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
  # uncomment the line below if you still want to execute the R script.
  # exec "$@"
else
  exec "$@"
fi
Changes required on the Dockerfile side:
WORKDIR /home/project/
ENV IS_DEBUG=true
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["Rscript", "initiate.R"]
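As a quick local sanity check (a hypothetical demo script, not part of the Fargate setup; the curl line is omitted since the metadata endpoint only exists inside ECS), the two modes of such an entrypoint can be exercised like this:

```shell
#!/bin/sh
# Write a minimal copy of the entrypoint to a temp file
cat > /tmp/entrypoint_demo.sh <<'EOF'
#!/bin/sh
if [ "${IS_DEBUG}" = true ]; then
  echo "Container running in debug mode"
else
  exec "$@"
fi
EOF
chmod +x /tmp/entrypoint_demo.sh

# Pass-through mode: the entrypoint execs its arguments (the CMD) unchanged
out1=$(IS_DEBUG=false /tmp/entrypoint_demo.sh echo "app started")
# Debug mode: the entrypoint prints the banner instead of running the CMD
out2=$(IS_DEBUG=true /tmp/entrypoint_demo.sh echo "app started")
echo "$out1"
echo "$out2"
```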

AWS codepipeline killing script "docker compose not working"

I am trying to run a CodePipeline with GitHub as the source, CodeBuild as the builder and Elastic Beanstalk as the server infrastructure. I am using the Docker image amazonlinux:2018.03, which works perfectly locally, but during the CodeBuild step in the pipeline I get the following error:
docker-compose: command not found
I have tried to install docker, docker-compose, etc., but it keeps giving me this error. I've set the build to use a buildspec.yaml file:
version: 0.2
phases:
  install:
    commands:
      - echo "installing"
      - sudo yum install -y yum-utils
      - sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
      - sudo curl -L "https://github.com/docker/compose/releases/download/1.25.5/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
      - sudo chmod +x /usr/local/bin/docker-compose
      - sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
      - docker-compose --version
  build:
    commands:
      - bash compose-local.sh
compose-local.sh:
#!/bin/bash
sudo docker-compose up
I have been trying for a couple of days, and I am not sure if I am overlooking something with CodeBuild.
Run /usr/local/bin/docker-compose up instead.
If using an Ubuntu 2.0+ or Amazon Linux 2 image, you need to specify docker under runtime-versions in the install phase of the buildspec.yml file, e.g.:
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image with docker-compose...
      - docker-compose -f docker-compose.yml build
Also please make sure to enable privileged mode: https://docs.aws.amazon.com/codebuild/latest/userguide/create-project.html#create-project-console
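For reference, privileged mode can also be turned on from the AWS CLI rather than the console. This is only a sketch: the project name and environment values below are hypothetical and must match your existing project, since update-project replaces the whole environment block:

```shell
aws codebuild update-project \
  --name my-build-project \
  --environment '{"type": "LINUX_CONTAINER", "image": "aws/codebuild/standard:4.0", "computeType": "BUILD_GENERAL1_SMALL", "privilegedMode": true}'
```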

AWS CodeBuild - docker: not found

I have the following buildspec.yml:
version: 0.2
phases:
  install:
    commands:
      - curl -L -o sbt-0.13.6.deb http://dl.bintray.com/sbt/debian/sbt-0.13.6.deb && \
      - dpkg -i sbt-0.13.6.deb && \
      - rm sbt-0.13.6.deb && \
      - apt-get update && \
      - apt-get install sbt && \
  pre_build:
    commands:
      - echo Entered the pre_build phase...
      - docker login -u user -p pass
  build:
    commands:
      - echo Build started on `date`
      - sbt test
      - echo test completed on `date`
      - sbt docker:publishLocal
      - docker tag image repo
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push repo
cache:
  paths:
    - $HOME/.ivy2/cache
    - $HOME/.sbt
and fails with
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: docker: not found
in the console. As far as I can see in the examples provided in the docs, docker should already be available.
How can I avoid this?
Thanks
On your CodeBuild project, select the "privileged" flag to enable Docker in your build container. If you are using a CodeBuild-managed image, then selecting this flag is all that's needed. If you are using a custom image, then ensure that Docker is started as explained in https://docs.aws.amazon.com/codebuild/latest/userguide/sample-docker-custom-image.html

How to deploy docker image created with version 2 on the aws

I am new to Docker. I somehow created a Docker project with a version 2 docker-compose file. The following is my docker-compose.yml:
version: "2"
services:
  # Configuration for php web server
  webserver:
    image: inshastri/laravel-adminpanel:latest
    restart: always
    ports:
      - '8080:80'
    networks:
      - web
    volumes:
      - ./:/var/www/html
      - ./apache.conf:/etc/apache2/sites-available/000-default.conf
    depends_on:
      - db
    links:
      - db
      # - redis
    environment:
      DB_HOST: db
      DB_DATABASE: phpapp
      DB_USERNAME: root
      DB_PASSWORD: toor

  # Configuration for mysql db server
  db:
    image: "mysql:5"
    volumes:
      - ./mysql:/etc/mysql/conf.d
    environment:
      MYSQL_ROOT_PASSWORD: toor
      MYSQL_DATABASE: phpapp
    networks:
      - web
    restart: always

  # Configuration for phpmyadmin (optional)
  phpmyadmin:
    image: phpmyadmin/phpmyadmin
    environment:
      PMA_PORT: 3306
      PMA_HOST: db
      PMA_USER: root
      PMA_PASSWORD: toor
    ports:
      - "8004:80"
    restart: always
    depends_on:
      - db
    networks:
      - web

  redis:
    image: redis:4.0-alpine

# Network connecting the whole app
networks:
  web:
    driver: bridge
and with the Dockerfile below:
FROM ubuntu:16.04

RUN apt-get update \
    && apt-get install -qy language-pack-en-base \
    && locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8

RUN apt-get -y install apache2
RUN a2enmod headers
RUN a2enmod rewrite

# add PPA for PHP 7
RUN apt-get install -y --no-install-recommends apt-utils
RUN apt-get install -y software-properties-common python-software-properties
RUN add-apt-repository -y ppa:ondrej/php

# Adding php 7
RUN apt-get update
RUN apt-get install -y php7.1 php7.1-fpm php7.1-cli php7.1-common php7.1-mbstring php7.1-gd php7.1-intl php7.1-xml php7.1-mysql php7.1-mcrypt php7.1-zip
RUN apt-get -y install libapache2-mod-php7.1 php7.1 php7.1-cli php-xdebug php7.1-mbstring sqlite3 php7.1-mysql php-imagick php-memcached php-pear curl imagemagick php7.1-dev php7.1-phpdbg php7.1-gd npm nodejs-legacy php7.1-json php7.1-curl php7.1-sqlite3 php7.1-intl apache2 vim git-core wget libsasl2-dev libssl-dev
RUN apt-get -y install libsslcommon2-dev libcurl4-openssl-dev autoconf g++ make openssl libssl-dev libcurl4-openssl-dev pkg-config libsasl2-dev libpcre3-dev
RUN apt-get install -y imagemagick graphicsmagick
RUN a2enmod headers
RUN a2enmod rewrite

ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2

RUN ln -sf /dev/stdout /var/log/apache2/access.log && \
    ln -sf /dev/stderr /var/log/apache2/error.log
RUN mkdir -p $APACHE_RUN_DIR $APACHE_LOCK_DIR $APACHE_LOG_DIR

# Update application repository list and install the Redis server.
RUN apt-get update && apt-get install -y redis-server

# Allow Composer to be run as root
ENV COMPOSER_ALLOW_SUPERUSER 1

# Setup the Composer installer
RUN curl -o /tmp/composer-setup.php https://getcomposer.org/installer \
    && curl -o /tmp/composer-setup.sig https://composer.github.io/installer.sig \
    && php -r "if (hash('SHA384', file_get_contents('/tmp/composer-setup.php')) !== trim(file_get_contents('/tmp/composer-setup.sig'))) { unlink('/tmp/composer-setup.php'); echo 'Invalid installer' . PHP_EOL; exit(1); }" \
    && php /tmp/composer-setup.php \
    && chmod a+x composer.phar \
    && mv composer.phar /usr/local/bin/composer

# Install composer dependencies
RUN echo pwd: `pwd` && echo ls: `ls`
# RUN composer install

EXPOSE 80
# Expose default port
EXPOSE 6379

VOLUME [ "/var/www/html" ,"./mysql:/etc/mysql/conf.d",]
WORKDIR /var/www/html
ENTRYPOINT [ "/usr/sbin/apache2" ]
CMD ["-D", "FOREGROUND"]
COPY . /var/www/html
COPY ./apache.conf /etc/apache2/sites-available/000-default.conf
Now there are two things which I cannot understand after googling a lot:
1) When I give the image to my friend and he pulls and runs it, it runs without the other services like mysql and phpmyadmin.
2) How should I deploy this application to Amazon EC2?
There are lots of options (EC2, Beanstalk, etc.) but I cannot understand any of them.
Please guide me through a simple way of uploading my image to AWS and running it there, and also explain how I can run my image on my friend's PC. I thought Docker was a container management system, so pulling my image should bring along all my services when my friend or anyone else pulls it.
For reference, my image is inshastri/laravel-adminpanel.
Please help; thanks in advance.