I am trying to deploy a Docker container to Elastic Beanstalk on AWS. I'm repeatedly getting errors while doing so, and each time the error is related to the ENTRYPOINT that I specified in the Dockerrun.aws.json. What am I doing wrong here?
The web app uses Django, Python 3, and Keras.
This is my Dockerfile content:
# reference: https://hub.docker.com/_/ubuntu/
FROM ubuntu:18.04
RUN apt-get update && apt-get install \
-y --no-install-recommends python3 python3-virtualenv
# Adds metadata to the image as a key-value pair, example: LABEL version="1.0"
LABEL maintainer="Amir Ashraff <amir.ashraff@gmail.com>"
## Set environment variables
ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m virtualenv --python=/usr/bin/python3 $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# Install dependencies:
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Open Ports for Web App
EXPOSE 8000
WORKDIR /manage.py
COPY . /manage.py
RUN chmod +x /manage.py
ENTRYPOINT [ "python3" ]
CMD [ "python3", "manage.py runserver 0.0.0.0:8000" ]
And this is the Dockerrun.aws.json content:
{
"AWSEBDockerrunVersion": "1",
"Ports": [
{
"ContainerPort": ""
}
],
"Volumes": [
{
"HostDirectory": "/~/aptos",
"ContainerDirectory": "/aptos/diabetes_retinopathy_recognition"
}
],
"Logging": "/aptos/diabetes_recognition",
"Entrypoint": "/opt/venv/bin/python3",
"Command": "python3 manage.py runserver 0.0.0.0:8000"
}
And this is the error from AWS logs:
Docker container quit unexpectedly on Tue Aug 20 13:03:47 UTC 2019:
/opt/venv/bin/python3: can't open file 'python3': [Errno 2] No such file
or directory.
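For what it's worth, Docker concatenates the entrypoint and the command, so with the config above the container runs /opt/venv/bin/python3 python3 manage.py runserver 0.0.0.0:8000, and Python then tries to open a file literally named python3, which matches the error. A minimal sketch of a Dockerrun.aws.json without the duplication (assuming port 8000 from the Dockerfile's EXPOSE; the Volumes and Logging entries are omitted for brevity):
{
  "AWSEBDockerrunVersion": "1",
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ],
  "Entrypoint": "/opt/venv/bin/python3",
  "Command": "manage.py runserver 0.0.0.0:8000"
}
The same reasoning applies to the Dockerfile itself: with ENTRYPOINT [ "python3" ], the CMD should be [ "manage.py", "runserver", "0.0.0.0:8000" ] rather than repeating python3.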
Related
I have a docker-compose service that runs Django using gunicorn in an entrypoint shell script.
When I issue CTRL-C after the docker-compose stack has been started, the web and nginx services do not exit gracefully and are not deleted. How do I configure the Docker environment so that the services are removed when CTRL-C is issued?
I have tried using stop_signal: SIGINT, but the result is the same. Any ideas?
docker-compose log after CTRL-C issued
^CGracefully stopping... (press Ctrl+C again to force)
Killing nginx ... done
Killing web ... done
docker containers after CTRL-C is issued
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4b2f7db95c90 nginx:alpine "/docker-entrypoint.…" 5 minutes ago Exited (137) 5 minutes ago nginx
cdf3084a8382 myimage "./docker-entrypoint…" 5 minutes ago Exited (137) 5 minutes ago web
Dockerfile
#
# Use poetry to build wheel and install dependencies into a virtual environment.
# This will store the dependencies during compile docker stage.
# In run stage copy the virtual environment to the final image. This will reduce the
# image size.
#
# Install poetry using pip, to allow version pinning. Use --ignore-installed to avoid
# dependency conflicts with poetry.
#
# ---------------------------------------------------------------------------------------
##
# base: Configure python environment and set workdir
##
FROM python:3.8-slim as base
ENV PYTHONDONTWRITEBYTECODE=1 \
PYTHONFAULTHANDLER=1 \
PYTHONHASHSEED=random \
PYTHONUNBUFFERED=1
WORKDIR /app
# configure user pyuser:
RUN useradd --user-group --create-home --no-log-init --shell /bin/bash pyuser && \
chown pyuser /app
# ---------------------------------------------------------------------------------------
##
# compile: Install dependencies from poetry exported requirements
# Use poetry to build the wheel for the python package.
# Install the wheel using pip.
##
FROM base as compile
ARG DEPLOY_ENV=development \
POETRY_VERSION=1.1.7
# pip:
ENV PIP_DEFAULT_TIMEOUT=100 \
PIP_DISABLE_PIP_VERSION_CHECK=1 \
PIP_NO_CACHE_DIR=1
# system dependencies:
RUN apt-get update && \
apt-get install -y --no-install-recommends \
build-essential gcc && \
apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false && \
apt-get clean -y && \
rm -rf /var/lib/apt/lists/*
# install poetry, ignoring installed dependencies
RUN pip install --ignore-installed "poetry==$POETRY_VERSION"
# virtual environment:
RUN python -m venv /opt/venv
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
# install dependencies:
COPY pyproject.toml poetry.lock ./
RUN /opt/venv/bin/pip install --upgrade pip \
&& poetry install $(if [ "$DEPLOY_ENV" = 'production' ]; then echo '--no-dev'; fi) \
--no-ansi \
--no-interaction
# copy source:
COPY . .
# build and install wheel:
RUN poetry build && /opt/venv/bin/pip install dist/*.whl
# -------------------------------------------------------------------------------------------
##
# run: Copy virtualenv from compile stage, to reduce final image size
# Run the docker-entrypoint.sh script as pyuser
#
# This performs the following actions when the container starts:
# - Make and run database migrations
# - Collect static files
# - Create the superuser
# - Run wsgi app using gunicorn
#
# port: 5000
#
# build args:
#
# GIT_HASH Git hash the docker image is derived from
#
# environment:
#
# DJANGO_DEBUG True if django debugging is enabled
# DJANGO_SECRET_KEY The secret key used for django server, defaults to secret
# DJANGO_SUPERUSER_EMAIL Django superuser email, default=myname@example.com
# DJANGO_SUPERUSER_PASSWORD Django superuser passwd, default=Pa55w0rd
# DJANGO_SUPERUSER_USERNAME Django superuser username, default=admin
##
FROM base as run
ARG GIT_HASH
ENV DJANGO_DEBUG=${DJANGO_DEBUG:-False}
ENV DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY:-secret}
ENV DJANGO_SETTINGS_MODULE=default_project.main.settings
ENV DJANGO_SUPERUSER_EMAIL=${DJANGO_SUPERUSER_EMAIL:-"myname@example.com"}
ENV DJANGO_SUPERUSER_PASSWORD=${DJANGO_SUPERUSER_PASSWORD:-"Pa55w0rd"}
ENV DJANGO_SUPERUSER_USERNAME=${DJANGO_SUPERUSER_USERNAME:-"admin"}
ENV GIT_HASH=${GIT_HASH:-dev}
# install virtualenv from compiled image
COPY --chown=pyuser:pyuser --from=compile /opt/venv /opt/venv
# set PATH for the virtualenv and VIRTUAL_ENV to activate the virtualenv
ENV VIRTUAL_ENV="/opt/venv"
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY --chown=pyuser:pyuser ./docker/docker-entrypoint.sh ./
USER pyuser
RUN mkdir /opt/venv/lib/python3.8/site-packages/default_project/staticfiles
EXPOSE 5000
ENTRYPOINT ["./docker-entrypoint.sh"]
Entrypoint
#!/bin/sh
set -e
echo "Making migrations..."
django-admin makemigrations
echo "Running migrations..."
django-admin migrate
echo "Making staticfiles..."
mkdir -p /opt/venv/lib/python3.8/site-packages/default_project/staticfiles
echo "Collecting static files..."
django-admin collectstatic --noinput
# requires gnu text tools
# echo "Compiling translation messages..."
# django-admin compilemessages
# echo "Making translation messages..."
# django-admin makemessages
if [ "$DJANGO_SUPERUSER_USERNAME" ]
then
echo "Creating django superuser"
django-admin createsuperuser \
--noinput \
--username $DJANGO_SUPERUSER_USERNAME \
--email $DJANGO_SUPERUSER_EMAIL
fi
exec gunicorn \
--bind 0.0.0.0:5000 \
--forwarded-allow-ips='*' \
--worker-tmp-dir /dev/shm \
--workers=4 \
--threads=1 \
--worker-class=gthread \
default_project.main.wsgi:application
exec "$#"
docker-compose
version: '3.8'
services:
web:
container_name: web
image: myimage
init: true
build:
context: .
dockerfile: docker/Dockerfile
environment:
- DJANGO_DEBUG=${DJANGO_DEBUG}
- DJANGO_SECRET_KEY=${DJANGO_SECRET_KEY}
- DJANGO_SUPERUSER_EMAIL=${DJANGO_SUPERUSER_EMAIL}
- DJANGO_SUPERUSER_PASSWORD=${DJANGO_SUPERUSER_PASSWORD}
- DJANGO_SUPERUSER_USERNAME=${DJANGO_SUPERUSER_USERNAME}
# stop_signal: SIGINT
volumes:
- static-files:/opt/venv/lib/python3.8/site-packages/{{ cookiecutter.project_name }}/staticfiles:rw
ports:
- 127.0.0.1:${DJANGO_PORT}:5000
nginx:
container_name: nginx
image: nginx:alpine
volumes:
- ./docker/nginx:/etc/nginx/conf.d
- static-files:/static
depends_on:
- web
ports:
- 127.0.0.1:8000:80
volumes:
static-files:
You can use docker-compose down, which stops containers and removes the containers, networks, volumes, and images created by up.
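For example, after the Ctrl-C above:
$ docker-compose down      # removes the stopped web and nginx containers and the default network
$ docker-compose down -v   # additionally removes the named static-files volume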
I am trying to deploy a Flask application on AWS Lambda via zappa through GitLab CI. Since inline editing isn't possible via GitLab CI, I generated the zappa_settings.json file on my remote computer and I am trying to use it to do zappa deploy dev.
My zappa_settings.json file:
{
"dev": {
"app_function": "main.app",
"aws_region": "eu-central-1",
"profile_name": "default",
"project_name": "prices-service-",
"runtime": "python3.7",
"s3_bucket": -MY_BUCKET_NAME-
}
}
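One thing to check: as pasted, this file is not valid JSON, because the s3_bucket value is an unquoted placeholder. A minimal valid sketch, keeping the placeholder text as-is:
{
  "dev": {
    "app_function": "main.app",
    "aws_region": "eu-central-1",
    "profile_name": "default",
    "project_name": "prices-service-",
    "runtime": "python3.7",
    "s3_bucket": "-MY_BUCKET_NAME-"
  }
}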
My .gitlab-ci.yml file:
image: ubuntu:18.04
stages:
- deploy
before_script:
- apt-get -y update
- apt-get -y install python3-pip python3.7 zip
- python3.7 -m pip install --upgrade pip
- python3.7 -V
- pip3.7 install virtualenv zappa
deploy_job:
stage: deploy
script:
- mv requirements.txt ~
- mv zappa_settings.json ~
- mkdir ~/forlambda
- cd ~/forlambda
- virtualenv -p python3 venv
- source venv/bin/activate
- pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
- zappa deploy dev
The CI file, upon running, gives me an error. Any suggestions are appreciated.
In my setup, zappa_settings.json is committed to the repo and not created on the fly. What is created on the fly is the AWS credentials file; the required values are read from GitLab environment variables set in the web UI of the project.
zappa_settings.json
{
"prod": {
"lambda_handler": "main.handler",
"aws_region": "eu-central-1",
"profile_name": "default",
"project_name": "dummy-name",
"s3_bucket": "dummy-name",
"aws_environment_variables": {
"STAGE": "prod",
"PROJECT": "dummy-name"
}
},
"dev": {
"extends": "prod",
"debug": true,
"aws_environment_variables": {
"STAGE": "dev",
"PROJECT": "dummy-name"
}
}
}
.gitlab-ci.yml
image:
python:3.6
stages:
- test
- deploy
variables:
AWS_DEFAULT_REGION: "eu-central-1"
# variables set in gitlab's web gui:
# AWS_ACCESS_KEY_ID
# AWS_SECRET_ACCESS_KEY
before_script:
# adding pip cache
- export PIP_CACHE_DIR="/home/gitlabci/cache/pip-cache"
.zappa_virtualenv_setup_template: &zappa_virtualenv_setup
# `before_script` should not be overridden in the job that uses this template
before_script:
# creating virtualenv because zappa MUST have it and activating it
- pip install virtualenv
- virtualenv ~/zappa
- source ~/zappa/bin/activate
# installing requirements in virtualenv
- pip install -r requirements.txt
test code:
stage: test
before_script:
# installing testing requirements
- pip install -r requirements_testing.txt
script:
- py.test
test package:
<<: *zappa_virtualenv_setup
variables:
ZAPPA_STAGE: prod
stage: test
script:
- zappa package $ZAPPA_STAGE
deploy to production:
<<: *zappa_virtualenv_setup
variables:
ZAPPA_STAGE: prod
stage: deploy
environment:
name: production
script:
# creating aws credentials file
- mkdir -p ~/.aws
- echo "[default]" >> ~/.aws/credentials
- echo "aws_access_key_id = "$AWS_ACCESS_KEY_ID >> ~/.aws/credentials
- echo "aws_secret_access_key = "$AWS_SECRET_ACCESS_KEY >> ~/.aws/credentials
# try to update, if the command fails (probably not even deployed) do the initial deploy
- zappa update $ZAPPA_STAGE || zappa deploy $ZAPPA_STAGE
after_script:
- rm ~/.aws/credentials
only:
- master
I haven't used zappa in a while, but I remember that a lot of errors were caused by bad AWS credentials, with zappa reporting something else.
I have the following docker compose file:
version: '2'
services:
app:
build: .
command: >
bash -cex "
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
/virtualenv/bin/flask run -h 0.0.0.0 -p 5050
"
env_file: env
links:
- postgres
ports:
- 8080:8080
As you can see I'm using the env_file option to load my environment variables from the file env.
Now I'm trying to deploy this container to Elastic Beanstalk.
This is my file Dockerrun.aws.json so far:
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"name": "app",
"image": "myorg/myimage",
"essential": true,
"memory": 256,
"command": [
"/bin/bash",
"export LC_ALL=C.UTF-8",
"export LANG=C.UTF-8",
"/virtualenv/bin/flask run -h 0.0.0.0 -p 5050"
],
"portMappings": [
{
"hostPort": 8080,
"containerPort": 8080
}
],
"links": [
"postgres",
]
}
The AWS Elastic Beanstalk documentation mentions only the environment option, which takes an array of environment variables; I can't find how to pass a file instead of an array.
Does someone know how to translate this docker-compose file to a Dockerrun.aws.json file properly?
Regards.
Try container-transform.
$ pip install container-transform
$ cat docker-compose.yml | container-transform -v
and it will print the ECS format to STDOUT.
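If you prefer to translate by hand, the Dockerrun.aws.json v2 equivalent of env_file is to inline each variable in the container definition's environment array. A sketch using the two locale variables from the compose command above (the rest of the env file's contents aren't shown in the question, so they would need to be inlined the same way):
"environment": [
  { "name": "LC_ALL", "value": "C.UTF-8" },
  { "name": "LANG", "value": "C.UTF-8" }
],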
I get an error when I launch crossbar 0.12.1 that I did not have with version 0.11:
[Controller 210] crossbar.error.invalid_configuration:
WSGI app module 'myproject.wsgi' import failed: No module named django -
Python search path was [u'/myproject', '/opt/crossbar/site-packages/crossbar/worker', '/opt/crossbar/bin', '/opt/crossbar/lib_pypy/extensions', '/opt/crossbar/lib_pypy', '/opt/crossbar/lib-python/2.7', '/opt/crossbar/lib-python/2.7/lib-tk', '/opt/crossbar/lib-python/2.7/plat-linux2', '/opt/crossbar/site-packages']
I have not changed anything other than the crossbar update.
My config.json is still the same, with the pythonpath of my project within the options:
{
"workers": [
{
"type": "router",
"options": {
"pythonpath": ["/myproject"]
},
"realms": [
{
"name": "realm1",
"roles": [
{
"name": "anonymous",
"permissions": [
{
"uri": "*",
"publish": true,
"subscribe": true,
"call": true,
"register": true
}
]
}
]
}
],
"transports": [
{
"type": "web",
"endpoint": {
"type": "tcp",
"port": 80
},
"paths": {
"/": {
"type": "wsgi",
"module": "myproject.wsgi",
"object": "application"
},
etc...
Do you have an idea?
Thanks.
It seems that "pythonpath": ["/myproject"] replaces the other Python path entries from your dist-packages. Look for an option that adds /myproject rather than replacing the current path settings.
Alternatively, add the path to your project to the machine's Python path and don't give crossbar any pythonpath at all, so it picks up the existing one.
Something like (depends on OS):
$ sudo nano /usr/lib/python2.7/dist-packages/myproject.pth
Then:
/home/username/path/to/myproject
I work with Docker in order to have a clean environment.
The Dockerfile here, http://crossbar.io/docs/Installation-on-Docker/, seems broken:
ImportError: No module named setuptools_ext
----------------------------------------
Cleaning up...
Command python setup.py egg_info failed with error code 1 in /tmp/pip-build-VfPnRU/pynacl
Storing debug log for failure in /root/.pip/pip.log
The command '/bin/sh -c pip install crossbar[all]' returned a non-zero code: 1
It seems solved by adding:
RUN pip install --upgrade cffi
before RUN pip install crossbar[all].
With this environment, my problem is solved :) I don't know why I got this error before, but it works.
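In Dockerfile form, the fix described above is just an ordering change (a sketch; setuptools_ext is provided by newer cffi releases, which is presumably why upgrading it first lets pynacl build):
# upgrade cffi first so pynacl's build can import setuptools_ext
RUN pip install --upgrade cffi
RUN pip install crossbar[all]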
Many thanks to all here and to indexerror, the "French Python Stack Overflow" :)
http://indexerror.net/3380/crossbar-0-12-1-wsgi-error-no-module-named-django?show=3415
P.S.
Here is the clean Dockerfile I use:
FROM ubuntu
ENV APPNAME="monappli"
ADD requirements.txt /tmp/
RUN apt-get update
RUN apt-get install -y gcc build-essential python-dev python2.7-dev libxslt1-dev libssl-dev libxml2 libxml2-dev tesseract-ocr python-imaging libffi-dev libreadline-dev libbz2-dev libsqlite3-dev libncurses5-dev python-mysqldb python-pip
RUN cd /tmp/ && pip install -r requirements.txt
RUN pip install -U crossbar[all]
WORKDIR $APPNAME
CMD cd / && cd $APPNAME && python manage.py makemigrations && python manage.py migrate && crossbar start
With Django, Flask and/or all the dependencies you want in a file named "requirements.txt" in the same folder as the Dockerfile.
Example requirements.txt:
ipython
django
djangorestframework
djangorestframework-jwt
django-cors-headers
bottlenose
python-amazon-simple-product-api
python-dateutil
beautifulsoup4
datetime
mechanize
pytesseract
requests
I am trying to run my Django application in a Docker container. I am using uWSGI to serve the application, and also have a celery worker running in the background. These processes are started by supervisord.
The problem that I am having is that I am unable to see the application on the port that I would expect to see it on. I am exposing port 8080 and running the uwsgi process on 8080, but cannot find my application in a browser at the ip address $(boot2docker ip):8080. I just get Google Chrome's 'This webpage is not available'. (I am using a Mac, so I need to get the boot2docker ip address). The container is clearly running, and reports that my uwsgi and celery processes are both successfully running as well.
When I run docker exec CONTAINER_ID curl localhost:8080 I get a response like
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 21 0 21 0 0 3150 0 --:--:-- --:--:-- --:--:-- 3500
... so it seems like the container is accepting connections on port 8080.
When I run docker exec CONTAINER_ID netstat -lpn |grep :8080 I get tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 11/uwsgi
When I run docker inspect CONTAINER_ID I get the following:
[{
"AppArmorProfile": "",
"Args": [
"-c",
"/home/docker/code/supervisor-app.conf"
],
"Config": {
"AttachStderr": true,
"AttachStdin": false,
"AttachStdout": true,
"Cmd": [
"supervisord",
"-c",
"/home/docker/code/supervisor-app.conf"
],
"CpuShares": 0,
"Cpuset": "",
"Domainname": "",
"Entrypoint": null,
"Env": [
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"env=staging"
],
"ExposedPorts": {
"8080/tcp": {}
},
"Hostname": "21443d8a16df",
"Image": "vitru",
"Memory": 0,
"MemorySwap": 0,
"NetworkDisabled": false,
"OnBuild": null,
"OpenStdin": false,
"PortSpecs": null,
"StdinOnce": false,
"Tty": false,
"User": "",
"Volumes": null,
"WorkingDir": ""
},
"Created": "2014-12-27T01:00:22.390065668Z",
"Driver": "aufs",
"ExecDriver": "native-0.2",
"HostConfig": {
"Binds": null,
"CapAdd": null,
"CapDrop": null,
"ContainerIDFile": "",
"Devices": [],
"Dns": null,
"DnsSearch": null,
"ExtraHosts": null,
"Links": null,
"LxcConf": [],
"NetworkMode": "bridge",
"PortBindings": {},
"Privileged": false,
"PublishAllPorts": false,
"RestartPolicy": {
"MaximumRetryCount": 0,
"Name": ""
},
"SecurityOpt": null,
"VolumesFrom": null
},
"HostnamePath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/hostname",
"HostsPath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/hosts",
"Id": "21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607",
"Image": "de52fbada520519793e348b60b608f7db514eef7fd436df4542710184c1ecb7f",
"MountLabel": "",
"Name": "/suspicious_fermat",
"NetworkSettings": {
"Bridge": "docker0",
"Gateway": "172.17.42.1",
"IPAddress": "172.17.0.87",
"IPPrefixLen": 16,
"MacAddress": "02:42:ac:11:00:57",
"PortMapping": null,
"Ports": {
"8080/tcp": null
}
},
"Path": "supervisord",
"ProcessLabel": "",
"ResolvConfPath": "/mnt/sda1/var/lib/docker/containers/21443d8a16df8e2911ae66d5d31341728d76ae080e068a5bb1dd48863febb607/resolv.conf",
"State": {
"ExitCode": 0,
"FinishedAt": "0001-01-01T00:00:00Z",
"Paused": false,
"Pid": 16230,
"Restarting": false,
"Running": true,
"StartedAt": "2014-12-27T01:00:22.661588847Z"
},
"Volumes": {},
"VolumesRW": {}
}
]
As someone not terribly fluent in Docker, I'm not really sure what all that means. Maybe there is a clue in there as to why I cannot connect to my server?
Here is my Dockerfile, so you can see if I'm doing anything blatantly wrong in there.
FROM ubuntu:14.04
# Get most recent apt-get
RUN apt-get -y update
# Install python and other tools
RUN apt-get install -y tar git curl nano wget dialog net-tools build-essential
RUN apt-get install -y python3 python3-dev python-distribute
RUN apt-get install -y nginx supervisor
# Get Python3 version of pip
RUN apt-get -y install python3-setuptools
RUN easy_install3 pip
RUN pip install uwsgi
RUN apt-get install -y python-software-properties
# Install GEOS
RUN apt-get -y install binutils libproj-dev gdal-bin
# Install node.js
RUN apt-get install -y nodejs npm
# Install postgresql dependencies
RUN apt-get update && \
apt-get install -y postgresql libpq-dev && \
rm -rf /var/lib/apt/lists
# Install pylibmc dependencies
RUN apt-get update
RUN apt-get install -y libmemcached-dev zlib1g-dev libssl-dev
ADD . /home/docker/code
# Setup config files
RUN ln -s /home/docker/code/supervisor-app.conf /etc/supervisor/conf.d/
# Create virtualenv and run pip install
RUN pip install -r /home/docker/code/vitru/requirements.txt
# Create directory for logs
RUN mkdir -p /var/logs
# Set environment as staging
ENV env staging
EXPOSE 8080
# The supervisor conf file starts uwsgi on port 8080 and starts a celeryd worker
CMD ["supervisord", "-c", "/home/docker/code/supervisor-app.conf"]
I believe the problem you have is that EXPOSE only makes the ports available between containers... not to the host system. See docs here:
https://docs.docker.com/reference/builder/#expose
You need to "publish" the port via the -p flag of the docker run command:
https://docs.docker.com/reference/run/#expose-incoming-ports
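For example, using the image name from the inspect output above (a sketch; pick whatever host port you like on the left side of the mapping):
$ docker run -d -p 8080:8080 vitru
$ curl $(boot2docker ip):8080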
There is a similar distinction in Fig, if you were using it, between expose and ports directives in the fig.yml file.
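In fig.yml terms that would look something like this (hypothetical service block; ports publishes to the host, expose does not):
web:
  image: vitru
  ports:
    - "8080:8080"
  # expose:
  #   - "8080"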