I have a Dockerfile:
FROM public.ecr.aws/bitnami/node:15 AS stage-01
COPY package.json /app/package.json
COPY package-lock.json /app/package-lock.json
WORKDIR /app
RUN npm ci
FROM stage-01 AS stage-02
COPY src /app/src
COPY public /app/public
COPY tsconfig.json /app/tsconfig.json
WORKDIR /app
RUN PUBLIC_URL=/myapp/web npm run build
FROM public.ecr.aws/bitnami/nginx:1.20
USER 1001
COPY --from=stage-02 /app/build /app/build
COPY nginx.conf /opt/bitnami/nginx/conf/server_blocks/nginx.conf
COPY ./env.sh /app/build
COPY window.env /app/build
EXPOSE 8080
WORKDIR /app/build
CMD ["/bin/sh", "-c", "/app/build/env.sh && nginx -g \"daemon off;\""]
If I build this image locally it starts normally and does what it has to do.
My local docker version:
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun 2 11:56:40 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:16 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
If I build it in CodeBuild it does not start:
/app/build/env.sh: 4: /app/build/env.sh: cannot create ./env-config.js: Permission denied
This is the image I am using in CodeBuild: aws/codebuild/amazonlinux2-x86_64-standard:3.0
I have also run the same script locally and still get no error.
What could be the cause of this? If you have something in mind please let me know; otherwise I will post more code.
This is my env.sh
#!/usr/bin/env sh

# Add assignment
echo "window._env_ = {" > ./env-config.js

# Read each line in .env file
# Each line represents key=value pairs
while read -r line || [ -n "$line" ];
do
  echo "$line"
  # Split env variables by character `=`
  if printf '%s\n' "$line" | grep -q -e '='; then
    varname=$(printf '%s\n' "$line" | sed -e 's/=.*//')
    varvalue=$(printf '%s\n' "$line" | sed -e 's/^[^=]*=//')
  fi
  # Read value of current variable if exists as Environment variable
  eval value=\"\$"$varname"\"
  # Otherwise use value from .env file
  [ -z "$value" ] && value=${varvalue}
  echo name: "$varname", value: "$value"
  # Append configuration property to JS file
  echo "  $varname: \"$value\"," >> ./env-config.js
done < window.env

echo "}" >> ./env-config.js
buildspec:
version: 0.2
env:
  git-credential-helper: yes
  secrets-manager:
    GITHUB_TOKEN: "github:GITHUB_TOKEN"
phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - npm install
  build:
    commands:
      - echo Build started on `date`
      - GITHUB_USERNAME=${GITHUB_USERNAME} GITHUB_EMAIL=${GITHUB_EMAIL} GITHUB_TOKEN=${GITHUB_TOKEN} AWS_REGION=${AWS_DEFAULT_REGION} GITHUB_REPOSITORY_URL=${GITHUB_REPOSITORY_URL} ECR_REPOSITORY_URL=${ECR_REPOSITORY_URL} ENV=${ENV} node release.js
My build project terraform configuration:
resource "aws_codebuild_project" "dashboard_image" {
name = var.project.name
service_role = var.codebuild_role_arn
artifacts {
type = "CODEPIPELINE"
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
type = "LINUX_CONTAINER"
privileged_mode = true
environment_variable {
name = "GITHUB_REPOSITORY_URL"
value = "https://github.com/${var.project.github_organization_name}/${var.project.github_repository_name}.git"
}
environment_variable {
name = "ECR_REPOSITORY_URL"
value = var.project.ecr_repository_url
}
environment_variable {
name = "ECR_IMAGE_NAME"
value = var.project.ecr_image_name
}
environment_variable {
name = "ENV"
value = "prod"
}
}
source {
type = "CODEPIPELINE"
buildspec = "buildspec.yml"
}
}
It's all about your Dockerfile and the user permissions in it. Try running docker run public.ecr.aws/bitnami/nginx:1.20 whoami: you will see that this image does not run as root by default. The same applies when you exec into the container; you have to add --user root to run or exec commands. See the section "Why use a non-root container?" in the Bitnami Nginx image documentation.
That's why you don't have permission to create a file inside the /app folder: the owner of that folder is root, inherited from the first public.ecr.aws/bitnami/node:15 image (which runs as root by default).
To make it work in your case, change the line USER 1001 to USER root (or another user with the proper permissions) and double-check that env.sh has execute permission (chmod +x env.sh).
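A quick way to confirm this (a sketch; id -u avoids relying on a passwd entry for the UID):

# Prints the UID the container runs as; Bitnami images default to 1001
docker run --rm public.ecr.aws/bitnami/nginx:1.20 id -u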
This is the change I had to make to my Dockerfile in order to make it work:
FROM public.ecr.aws/bitnami/node:15 AS stage-01
COPY package.json /app/package.json
COPY package-lock.json /app/package-lock.json
WORKDIR /app
RUN npm ci
FROM stage-01 AS stage-02
COPY src /app/src
COPY public /app/public
COPY tsconfig.json /app/tsconfig.json
WORKDIR /app
RUN PUBLIC_URL=/myapp/web npm run build
FROM public.ecr.aws/bitnami/nginx:1.20
USER root
COPY --from=stage-02 /app/build /app/build
COPY nginx.conf /opt/bitnami/nginx/conf/server_blocks/nginx.conf
COPY ./env.sh /app/build
COPY window.env /app/build
RUN chmod 777 /app/build/env-config.js
EXPOSE 8080
WORKDIR /app/build
USER 1001
CMD ["/bin/sh", "-c", "/app/build/env.sh && nginx -g \"daemon off;\""]
It is probably due to the CodeBuild permissions when cloning the repository.
777 is just temporary; later I will probably test whether I can restrict the permissions.
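If you want something tighter than 777 later, a minimal sketch (assuming the runtime user stays UID 1001, as in the Bitnami images): hand ownership of /app/build to that user while still root, then drop back to 1001:

FROM public.ecr.aws/bitnami/nginx:1.20
USER root
COPY --from=stage-02 /app/build /app/build
COPY nginx.conf /opt/bitnami/nginx/conf/server_blocks/nginx.conf
COPY ./env.sh /app/build
COPY window.env /app/build
# owner 1001 can create env-config.js at startup, no chmod 777 needed
RUN chown -R 1001 /app/build && chmod +x /app/build/env.sh
EXPOSE 8080
WORKDIR /app/build
USER 1001
CMD ["/bin/sh", "-c", "/app/build/env.sh && nginx -g \"daemon off;\""]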
Dockerfile
FROM node:lts-alpine as build-stage
ENV VUE_APP_BACKEND_SERVER=${_VUE_APP_BACKEND_SERVER}
RUN echo "server env is:"
RUN echo $VUE_APP_BACKEND_SERVER
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
RUN npm run gcpbuild
Cloudbuild config
steps:
  - name: gcr.io/cloud-builders/docker
    args:
      - build
      - '--no-cache'
      - '-t'
      - '$_GCR_HOSTNAME/$PROJECT_ID/$REPO_NAME/$_SERVICE_NAME:$COMMIT_SHA'
      - front
      - '-f'
      - front/Dockerfile
      - '--build-arg=ENV=$_VUE_APP_BACKEND_SERVER'
    id: Build
...
...
options:
  substitutionOption: ALLOW_LOOSE
substitutions:
  _VUE_APP_BACKEND_SERVER: 'https://backend.url'
I have also set the variable in the substitutions in the 'Advanced' section. However, during the build the echo prints a blank line, and the variable is not available in the app as expected.
What you need is:
FROM node:lts-alpine as build-stage
ARG VUE_APP_BACKEND_SERVER
...
Also, fix the build-arg lines in your Cloud Build config:
      - '--build-arg'
      - 'VUE_APP_BACKEND_SERVER=${_VUE_APP_BACKEND_SERVER}'
Check out the docs.
Read more about ARG directive in Dockerfiles.
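For completeness, a minimal sketch of the working pattern (image and variable names taken from the question): the ARG receives the substitution at build time, and re-exporting it as an ENV makes it visible to npm run build:

FROM node:lts-alpine as build-stage
# declared so --build-arg VUE_APP_BACKEND_SERVER=... is accepted at build time
ARG VUE_APP_BACKEND_SERVER
# re-export as an environment variable so the build tooling can read it
ENV VUE_APP_BACKEND_SERVER=$VUE_APP_BACKEND_SERVER
RUN echo "server env is: $VUE_APP_BACKEND_SERVER"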
I'm trying to create a custom image of RedHat 8 using the EC2 Image Builder. In one of the recipes added to the pipeline, I've created the ansible user and used S3 to download the authorized_keys and the custom sudoers.d file. The issue I'm facing is that the sudoers file called "ansible" gets copied just fine, but the authorized_keys doesn't. CloudWatch says the recipe executes without errors and the files are downloaded, yet when I create an EC2 instance from this AMI, the authorized_keys file is not in the path.
What's happening?
This is the recipe I'm using:
name: USER-Ansible
description: Creation and configuration of the ansible user
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UserCreate
        action: ExecuteBash
        inputs:
          commands:
            - groupadd -g 2004 ux
            - useradd -u 4134 -g ux -c "AWX Ansible" -m -d /home/ansible ansible
            - mkdir /home/ansible/.ssh
      - name: FilesDownload
        action: S3Download
        inputs:
          - source: s3://[REDACTED]/authorized_keys
            destination: /home/ansible/.ssh/authorized_keys
            expectedBucketOwner: [REDACTED]
            overwrite: false
          - source: s3://[REDACTED]/ansible
            destination: /etc/sudoers.d/ansible
            expectedBucketOwner: [REDACTED]
            overwrite: false
      - name: FilesConfiguration
        action: ExecuteBash
        inputs:
          commands:
            - chown ansible:ux /home/ansible/.ssh/authorized_keys; chmod 600 /home/ansible/.ssh/authorized_keys
            - chown ansible:ux /home/ansible/.ssh; chmod 700 /home/ansible/.ssh
            - chown root:root /etc/sudoers.d/ansible; chmod 440 /etc/sudoers.d/ansible
Thanks in advance!
AWS EC2 Image Builder cleans up afterwards
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#post-build-cleanup
# Clean up for ssh files
SSH_FILES=(
    "/etc/ssh/ssh_host_rsa_key"
    "/etc/ssh/ssh_host_rsa_key.pub"
    "/etc/ssh/ssh_host_ecdsa_key"
    "/etc/ssh/ssh_host_ecdsa_key.pub"
    "/etc/ssh/ssh_host_ed25519_key"
    "/etc/ssh/ssh_host_ed25519_key.pub"
    "/root/.ssh/authorized_keys"
)
if [[ -f {{workingDirectory}}/skip_cleanup_ssh_files ]]; then
    echo "Skipping cleanup of ssh files"
else
    echo "Cleaning up ssh files"
    cleanup "${SSH_FILES[@]}"
    USERS=$(ls /home/)
    for user in $USERS; do
        echo Deleting /home/"$user"/.ssh/authorized_keys;
        sudo find /home/"$user"/.ssh/authorized_keys -type f -exec shred -zuf {} \;
    done
    for user in $USERS; do
        if [[ -f /home/"$user"/.ssh/authorized_keys ]]; then
            echo Failed to delete /home/"$user"/.ssh/authorized_keys;
            exit 1
        fi;
    done;
fi;
You can skip individual sections of the clean up script.
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#override-linux-cleanup-script
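Based on the conditional in the script above, dropping the marker file during the build phase skips the ssh cleanup. A sketch as an extra recipe step (the step name is illustrative):

      - name: SkipSshCleanup
        action: ExecuteBash
        inputs:
          commands:
            # the post-build cleanup checks for this file and skips the ssh section
            - touch {{workingDirectory}}/skip_cleanup_ssh_files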
I have a docker-compose service that runs django using gunicorn in an entrypoint shell script.
When I issue CTRL-C after the docker-compose stack has been started, the web and nginx services do not gracefully exit and are not deleted. How do I configure the docker environment so that the services are removed when a CTRL-C is issued?
I have tried using stop_signal: SIGINT but the result is the same. Any ideas?
docker-compose log after CTRL-C issued
^CGracefully stopping... (press Ctrl+C again to force)
Killing nginx ... done
Killing web ... done
docker containers after CTRL-C is issued
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4b2f7db95c90 nginx:alpine "/docker-entrypoint.…" 5 minutes ago Exited (137) 5 minutes ago nginx
cdf3084a8382 myimage "./docker-entrypoint…" 5 minutes ago Exited (137) 5 minutes ago web
Dockerfile
#
# Use poetry to build wheel and install dependencies into a virtual environment.
# This will store the dependencies during compile docker stage.
# In run stage copy the virtual environment to the final image. This will reduce the
# image size.
#
# Install poetry using pip, to allow version pinning. Use --ignore-installed to avoid
# dependency conflicts with poetry.
#
# ---------------------------------------------------------------------------------------
##
# base: Configure python environment and set workdir
##
FROM python:3.8-slim as base
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONFAULTHANDLER=1 \
PYTHONHASHSEED=random \
PYTHONUNBUFFERED=1
WORKDIR /app
# configure user pyuser:
RUN useradd --user-group --create-home --no-log-init --shell /bin/bash pyuser && \
chown pyuser /app
# ---------------------------------------------------------------------------------------
##
# compile: Install dependencies from poetry exported requirements
# Use poetry to build the wheel for the python package.
# Install the wheel using pip.
##
FROM base as compile
ARG DEPLOY_ENV=development \
POETRY_VERSION=1.1.7
# pip:
ENV PIP_DEFAULT_TIMEOUT=100 \
PIP_DISABLE_PIP_VERSION_CHECK=1 \
PIP_NO_CACHE_DIR=1
# system dependencies:
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential gcc && \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false && \
apt-get clean -y && \
rm -rf /var/lib/apt/lists/*
# install poetry, ignoring installed dependencies
RUN pip install --ignore-installed "poetry==$POETRY_VERSION"
# virtual environment:
RUN python -m venv /opt/venv
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# install dependencies:
COPY pyproject.toml poetry.lock ./
RUN /opt/venv/bin/pip install --upgrade pip \
&& poetry install $(if [ "$DEPLOY_ENV" = 'production' ]; then echo '--no-dev'; fi) \
--no-ansi \
--no-interaction
# copy source:
COPY . .
# build and install wheel:
RUN poetry build && /opt/venv/bin/pip install dist/*.whl
# -------------------------------------------------------------------------------------------
##
# run: Copy virtualenv from compile stage, to reduce final image size
# Run the docker-entrypoint.sh script as pyuser
#
# This performs the following actions when the container starts:
# - Make and run database migrations
# - Collect static files
# - Create the superuser
# - Run wsgi app using gunicorn
#
# port: 5000
#
# build args:
#
# GIT_HASH Git hash the docker image is derived from
#
# environment:
#
# DJANGO_DEBUG True if django debugging is enabled
# DJANGO_SECRET_KEY The secret key used for django server, defaults to secret
# DJANGO_SUPERUSER_EMAIL Django superuser email, default=myname@example.com
# DJANGO_SUPERUSER_PASSWORD Django superuser passwd, default=Pa55w0rd
# DJANGO_SUPERUSER_USERNAME Django superuser username, default=admin
##
FROM base as run
ARG GIT_HASH
ENV DJANGO_DEBUG=${DJANGO_DEBUG:-False}
ENV DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY:-secret}
ENV DJANGO_SETTINGS_MODULE=default_project.main.settings
ENV DJANGO_SUPERUSER_EMAIL=${DJANGO_SUPERUSER_EMAIL:-"myname@example.com"}
ENV DJANGO_SUPERUSER_PASSWORD=${DJANGO_SUPERUSER_PASSWORD:-"Pa55w0rd"}
ENV DJANGO_SUPERUSER_USERNAME=${DJANGO_SUPERUSER_USERNAME:-"admin"}
ENV GIT_HASH=${GIT_HASH:-dev}
# install virtualenv from compiled image
COPY --chown=pyuser:pyuser --from=compile /opt/venv /opt/venv
# set PATH and VIRTUAL_ENV to activate the virtualenv
ENV VIRTUAL_ENV="/opt/venv"
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY --chown=pyuser:pyuser ./docker/docker-entrypoint.sh ./
USER pyuser
RUN mkdir /opt/venv/lib/python3.8/site-packages/default_project/staticfiles
EXPOSE 5000
ENTRYPOINT ["./docker-entrypoint.sh"]
Entrypoint
#!/bin/sh
set -e
echo "Making migrations..."
django-admin makemigrations
echo "Running migrations..."
django-admin migrate
echo "Making staticfiles..."
mkdir -p /opt/venv/lib/python3.8/site-packages/default_project/staticfiles
echo "Collecting static files..."
django-admin collectstatic --noinput
# requires gnu text tools
# echo "Compiling translation messages..."
# django-admin compilemessages
# echo "Making translation messages..."
# django-admin makemessages
if [ "$DJANGO_SUPERUSER_USERNAME" ]
then
echo "Creating django superuser"
django-admin createsuperuser \
--noinput \
--username $DJANGO_SUPERUSER_USERNAME \
--email $DJANGO_SUPERUSER_EMAIL
fi
exec gunicorn \
--bind 0.0.0.0:5000 \
--forwarded-allow-ips='*' \
--worker-tmp-dir /dev/shm \
--workers=4 \
--threads=1 \
--worker-class=gthread \
default_project.main.wsgi:application
exec "$#"
docker-compose
version: '3.8'
services:
  web:
    container_name: web
    image: myimage
    init: true
    build:
      context: .
      dockerfile: docker/Dockerfile
    environment:
      - DJANGO_DEBUG=${DJANGO_DEBUG}
      - DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
      - DJANGO_SUPERUSER_EMAIL=${DJANGO_SUPERUSER_EMAIL}
      - DJANGO_SUPERUSER_PASSWORD=${DJANGO_SUPERUSER_PASSWORD}
      - DJANGO_SUPERUSER_USERNAME=${DJANGO_SUPERUSER_USERNAME}
    # stop_signal: SIGINT
    volumes:
      - static-files:/opt/venv/lib/python3.8/site-packages/{{ cookiecutter.project_name }}/staticfiles:rw
    ports:
      - 127.0.0.1:${DJANGO_PORT}:5000
  nginx:
    container_name: nginx
    image: nginx:alpine
    volumes:
      - ./docker/nginx:/etc/nginx/conf.d
      - static-files:/static
    depends_on:
      - web
    ports:
      - 127.0.0.1:8000:80
volumes:
  static-files:
You can use docker-compose down. It stops containers and removes the containers, networks, volumes, and images created by up.
Reference
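For context, exit code 137 in your docker ps output is 128 + 9: the second CTRL-C SIGKILLed the containers rather than removing them. Cleaning up afterwards:

# stops and removes the containers and networks created by up
docker-compose down
# or remove only the already-stopped service containers
docker-compose rm -f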
I want to do CI/CD with CircleCI to ECR and ECS.
The Dockerfiles work correctly locally with docker-compose,
but I am getting the following error in CircleCI:
COPY failed: stat /var/lib/docker/tmp/docker-builder505910231/b-plus-app/build: no such file or directory
Here is the relevant code where the error occurred.
↓Dockerfile(react)↓
FROM node:14.17-alpine
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN rm -r -f b-plus-app/build && cd b-plus-app \
&& rm -r -f node_modules && npm i && npm run build
↓Dockerfile(nginx)↓
FROM nginx:1.15.8
RUN rm -f /etc/nginx/conf.d/*
RUN rm -r -f /usr/share/nginx/html
#Stop Here
COPY b-plus-app/build /var/www
COPY prod_conf/ /etc/nginx/conf.d/
CMD /usr/sbin/nginx -g 'daemon off;' -c /etc/nginx/nginx.conf
↓.circleci/config.yml↓
version: 2.1
orbs:
  aws-ecr: circleci/aws-ecr@6.15
  aws-ecs: circleci/aws-ecs@2.0.0
workflows:
  react-deploy:
    jobs:
      - persist_to_workspace:
      - aws-ecr/build-and-push-image:
          account-url: AWS_ECR_ACCOUNT_URL
          region: AWS_REGION
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
          create-repo: true
          path: 'front/'
          repo: front
          tag: "${CIRCLE_SHA1}"
          filters:
            branches:
              only: main
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'b_plus_service'
          cluster-name: 'b-plus'
          service-name: 'b-plus'
          container-image-name-updates: "container=front,tag=${CIRCLE_SHA1}"
  nginx-deploy:
    jobs:
      - aws-ecr/build-and-push-image:
          account-url: AWS_ECR_ACCOUNT_URL
          region: AWS_REGION
          aws-access-key-id: AWS_ACCESS_KEY_ID
          aws-secret-access-key: AWS_SECRET_ACCESS_KEY
          create-repo: true
          dockerfile: Dockerfile.prod
          path: 'front/'
          repo: nginx
          tag: "${CIRCLE_SHA1}"
          #requires:
          #  - react-deploy:
          #  - rails-deploy:
          filters:
            branches:
              only: main
      - aws-ecs/deploy-service-update:
          requires:
            - aws-ecr/build-and-push-image
          family: 'b_plus_service'
          cluster-name: 'b-plus'
          service-name: 'b-plus'
          container-image-name-updates: "container=nginx,tag=${CIRCLE_SHA1}"
If you know how to fix the problem, please let me know. Thank you for reading my question.
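One likely cause, hedged: b-plus-app/build exists on your machine (left over from a local npm run build) but is not committed, so it is missing from the checkout CircleCI clones, and the nginx image's COPY fails. A sketch of one way around it, combining both of your Dockerfiles into a single multi-stage build so the build output never has to exist on the host:

# Build stage: produce the static files inside the image
FROM node:14.17-alpine AS build
WORKDIR /usr/src/app
COPY . /usr/src/app
RUN cd b-plus-app && rm -r -f node_modules && npm i && npm run build

# Serve stage: copy the output from the stage above
FROM nginx:1.15.8
RUN rm -f /etc/nginx/conf.d/*
RUN rm -r -f /usr/share/nginx/html
COPY --from=build /usr/src/app/b-plus-app/build /var/www
COPY prod_conf/ /etc/nginx/conf.d/
CMD /usr/sbin/nginx -g 'daemon off;' -c /etc/nginx/nginx.conf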
I am trying to deploy a flask application on aws lambda via zappa through gitlab CI. Since inline editing isn't possible via gitlab CI, I generated the zappa_settings.json file on my remote computer and I am trying to use this to do zappa deploy dev.
My zappa_settings.json file:
{
    "dev": {
        "app_function": "main.app",
        "aws_region": "eu-central-1",
        "profile_name": "default",
        "project_name": "prices-service-",
        "runtime": "python3.7",
        "s3_bucket": -MY_BUCKET_NAME-
    }
}
My .gitlab-ci.yml file:
image: ubuntu:18.04

stages:
  - deploy

before_script:
  - apt-get -y update
  - apt-get -y install python3-pip python3.7 zip
  - python3.7 -m pip install --upgrade pip
  - python3.7 -V
  - pip3.7 install virtualenv zappa

deploy_job:
  stage: deploy
  script:
    - mv requirements.txt ~
    - mv zappa_settings.json ~
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    - zappa deploy dev
The CI file, upon running, gives me the following error:
Any suggestions are appreciated
zappa_settings.json is committed to the repo and not created on the fly. What is created on the fly is the AWS credentials file. The values required are read from GitLab env vars set in the web UI of the project.
zappa_settings.json
{
    "prod": {
        "lambda_handler": "main.handler",
        "aws_region": "eu-central-1",
        "profile_name": "default",
        "project_name": "dummy-name",
        "s3_bucket": "dummy-name",
        "aws_environment_variables": {
            "STAGE": "prod",
            "PROJECT": "dummy-name"
        }
    },
    "dev": {
        "extends": "prod",
        "debug": true,
        "aws_environment_variables": {
            "STAGE": "dev",
            "PROJECT": "dummy-name"
        }
    }
}
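Side note: the dev stage extends prod, so one settings file drives both stages; for example:

# dev inherits everything from prod, overriding only debug and the env vars
zappa deploy dev
zappa update prod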
.gitlab-ci.yml
image: python:3.6

stages:
  - test
  - deploy

variables:
  AWS_DEFAULT_REGION: "eu-central-1"
  # variables set in gitlab's web gui:
  # AWS_ACCESS_KEY_ID
  # AWS_SECRET_ACCESS_KEY

before_script:
  # adding pip cache
  - export PIP_CACHE_DIR="/home/gitlabci/cache/pip-cache"

.zappa_virtualenv_setup_template: &zappa_virtualenv_setup
  # `before_script` should not be overridden in the job that uses this template
  before_script:
    # creating virtualenv because zappa MUST have it and activating it
    - pip install virtualenv
    - virtualenv ~/zappa
    - source ~/zappa/bin/activate
    # installing requirements in virtualenv
    - pip install -r requirements.txt

test code:
  stage: test
  before_script:
    # installing testing requirements
    - pip install -r requirements_testing.txt
  script:
    - py.test

test package:
  <<: *zappa_virtualenv_setup
  variables:
    ZAPPA_STAGE: prod
  stage: test
  script:
    - zappa package $ZAPPA_STAGE

deploy to production:
  <<: *zappa_virtualenv_setup
  variables:
    ZAPPA_STAGE: prod
  stage: deploy
  environment:
    name: production
  script:
    # creating aws credentials file
    - mkdir -p ~/.aws
    - echo "[default]" >> ~/.aws/credentials
    - echo "aws_access_key_id = "$AWS_ACCESS_KEY_ID >> ~/.aws/credentials
    - echo "aws_secret_access_key = "$AWS_SECRET_ACCESS_KEY >> ~/.aws/credentials
    # try to update, if the command fails (probably not even deployed) do the initial deploy
    - zappa update $ZAPPA_STAGE || zappa deploy $ZAPPA_STAGE
  after_script:
    - rm ~/.aws/credentials
  only:
    - master
I haven't used zappa in a while, but I remember that a lot of errors were caused by bad AWS credentials, with zappa reporting something else.
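When credentials are the suspect, a cheap sanity check (standard AWS CLI, assuming it is available in the CI image) right after writing the credentials file:

# prints the account and caller ARN, or fails fast with a clear error
aws sts get-caller-identity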