docker container automatically deleted - dockerfile

I am trying to run dgoss (a server validation tool) against my Docker image, but it's not working: the container gets deleted automatically. Please help!
Snippet of my Dockerfile:
FROM node:4-alpine
ENV NODE_ENV "production"
ENV PORT 8079
RUN addgroup mygroup && adduser -D -G mygroup myuser \
    && mkdir -p /usr/src/app && chown -R myuser /usr/src/app
# Prepare app directory
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
COPY yarn.lock /usr/src/app/
RUN chown myuser /usr/src/app/yarn.lock
USER myuser
RUN yarn install
COPY . /usr/src/app
# Start the app
CMD ["/usr/local/bin/npm", "start"]
Command:
> dgoss edit test-1
Error:
INFO: Starting docker container
INFO: Container ID: 4a631969
INFO: Run goss add/autoadd to add resources
/goss $ INFO: Deleting container

Related

Why is my Docker image not running when using docker run (image), but I can run containers generated by docker-compose up?

My docker-compose setup creates 3 containers: django, celery, and rabbitmq. When I run docker-compose build and docker-compose up, the containers run successfully.
However, I am having issues deploying the image. The generated image has the ID 24d7638e2aff, and both the django and celery applications share that same image ID. For whatever reason, if I just run the command below, nothing happens and the container exits with code 0.
docker run 24d7638e2aff
This is a problem, as I am unable to deploy this image on Kubernetes. My only thought is that the Dockerfile has been configured wrongly, but I cannot figure out the cause.
docker-compose.yaml
version: "3.9"
services:
  django:
    container_name: testapp_django
    build:
      context: .
      args:
        build_env: production
    ports:
      - "8000:8000"
    command: >
      sh -c "python manage.py migrate &&
             python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    links:
      - rabbitmq
      - celery
  rabbitmq:
    container_name: testapp_rabbitmq
    restart: always
    image: rabbitmq:3.10-management
    ports:
      - "5672:5672" # specifies port of queue
      - "15672:15672" # specifies port of management plugin
  celery:
    container_name: testapp_celery
    restart: always
    build:
      context: .
      args:
        build_env: production
    command: celery -A testapp worker -l INFO -c 4
    depends_on:
      - rabbitmq
Dockerfile
ARG PYTHON_VERSION=3.9-slim-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG build_env
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
  # dependencies for building Python packages
  build-essential \
  # psycopg2 dependencies
  libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
  -r ${build_env}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG build_env
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${build_env}
WORKDIR ${APP_HOME}
RUN addgroup --system appuser \
  && adduser --system --ingroup appuser appuser
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
  # psycopg2 dependencies
  libpq-dev \
  # Translations dependencies
  gettext \
  # git for GitPython commands
  git-all \
  # cleaning up unused files
  && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
  && rm -rf /var/lib/apt/lists/*
# Absolute destination paths in COPY ignore the WORKDIR instruction; relative destinations are resolved against it.
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
  && rm -rf /wheels/
COPY --chown=appuser:appuser ./docker_scripts/entrypoint /entrypoint
RUN sed -i 's/\r$//g' /entrypoint
RUN chmod +x /entrypoint
# copy application code to WORKDIR
COPY --chown=appuser:appuser . ${APP_HOME}
# make appuser owner of the WORKDIR directory as well.
RUN chown appuser:appuser ${APP_HOME}
USER appuser
EXPOSE 8000
ENTRYPOINT ["/entrypoint"]
entrypoint
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
exec "$#"
How do I build images of these containers so that I can deploy them to k8s?
The Compose command: overrides the Dockerfile CMD. docker run doesn't look at the docker-compose.yml file at all, and docker run with no particular command runs the image's CMD. You haven't declared anything for that, which is why the container exits immediately.
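You can confirm what the image will actually run with docker inspect (using the image ID from the question):
docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' 24d7638e2aff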
Leave the entrypoint script unchanged (or even delete it entirely, since it doesn't really do anything). Add a CMD line to the Dockerfile:
CMD python manage.py migrate && python manage.py runserver 0.0.0.0:8000
Now plain docker run, as you've shown it, will attempt to start the Django server. For the Celery container, you can still pass a command override:
docker run -d --net ... your-image \
celery -A testapp worker -l INFO -c 4
If you do deploy to Kubernetes, and you keep the entrypoint script, then you need to use args: in your pod spec to provide the alternate command, not command:.
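For example, a minimal pod-spec sketch for the Celery worker might look like the following; the image reference is a placeholder, and the command comes from your Compose file. It uses args: so the image's /entrypoint still runs first:
apiVersion: v1
kind: Pod
metadata:
  name: testapp-celery
spec:
  containers:
    - name: celery
      image: your-registry/testapp:latest # placeholder image reference
      # args, not command: command would replace the ENTRYPOINT itself
      args: ["celery", "-A", "testapp", "worker", "-l", "INFO", "-c", "4"]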
I think that is because the commands that run the Django server live in the docker-compose.yml.
You should move these commands into the entrypoint script:
#!/bin/bash
set -o errexit
set -o pipefail
set -o nounset
python manage.py migrate && python manage.py runserver 0.0.0.0:8000
exec "$@"
Note that python manage.py runserver 0.0.0.0:8000 starts the application with a development server that must not be used for production purposes.
You should look for gunicorn or similar.
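As a sketch, assuming the project's WSGI module is testapp.wsgi (the real module name depends on your project layout), the production command could be something like:
gunicorn testapp.wsgi:application --bind 0.0.0.0:8000 --workers 4
(gunicorn would need to be added to your requirements first.)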

Apache Druid Nano Container Service Error

I want to spin up a low-configuration containerized service, for which I created the Dockerfile below and build it with:
docker build -t apache/druid_nano:0.20.2 -f Dockerfile .
FROM ubuntu:16.04
# Install Java JDK 8
RUN apt-get update \
  && apt-get install -y openjdk-8-jdk
RUN mkdir /app
WORKDIR /app
COPY apache-druid-0.20.2-bin.tar.gz /app
RUN tar xvzf apache-druid-0.20.2-bin.tar.gz
WORKDIR /app/apache-druid-0.20.2
EXPOSE <PORT_NUMBERS>
ENTRYPOINT ["/bin/start/start-nano-quickstart"]
When I start the container using the command "docker run -d -p 8888:8888 apache/druid_nano:0.20.2", I get an error as below:
/bin/start-nano-quickstart: no such file or directory
I removed the ENTRYPOINT instruction and built the image again just to check whether the file exists; there is indeed a start-nano-quickstart file under the bin directory inside the container.
Am I missing anything here? Please help.
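As a side note, a quicker way to run that check without rebuilding is to override the entrypoint at run time, for example:
docker run --rm --entrypoint ls apache/druid_nano:0.20.2 bin
This lists the bin directory under the WORKDIR (/app/apache-druid-0.20.2), so you can compare the actual path of start-nano-quickstart against the path in your ENTRYPOINT.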

Docker connection error 'Connection broken by NewConnectionError'

I am new to Docker; I just installed it and did some configuration for my Django project.
When I run docker build . I get the error below. What's wrong here?
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.VerifiedHTTPSConnection object at 0x7f1363b604f0>: Failed to establish a new connection: [Errno -3] Try again')': /simple/django/
Dockerfile
FROM python:3.8-alpine
MAINTAINER RAHUL VERMA
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
RUN mkdir /app
WORKDIR app
COPY ./app /app
RUN adduser -D user
USER user
requirements.txt file
Django==2.2
djangorestframework==3.11.0
I made small changes to your Dockerfile and ran the command docker build -t app dockerfilelocation.
It works on my side. My Dockerfile looks like:
FROM python:3.8-alpine
MAINTAINER RAHUL VERMA
ENV PYTHONUNBUFFERED 1
COPY . /app/
RUN pip install -r /app/requirements.txt
WORKDIR /app
COPY ./app /app
RUN adduser -D user
USER user
I had the same problem and solved it by fixing my internet connection. You can also check it as suggested here:
"This is almost certainly an issue with the networking/DNS configuration inside the container and not related to pip specifically. Try adding a RUN nslookup pypi.org line to your Dockerfile and see if it works. If you're using a custom index URL then put that instead of pypi.org."
https://github.com/pypa/pip/issues/7460
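As a sketch of that suggestion, a temporary debugging line can be added near the top of the Dockerfile (and removed once DNS resolution works):
FROM python:3.8-alpine
# temporary: verify DNS resolution from inside the build
RUN nslookup pypi.org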

Docker error: entrypoint permission denied

I'm trying to build a docker image where the entrypoint can run without the error:
docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/app\": permission denied": unknown.
my OS: Windows 10
Dockerfile content:
ARG GO_VERSION=1.11
FROM golang:${GO_VERSION}-alpine AS builder
RUN mkdir /user && \
    echo 'nobody:x:65534:65534:nobody:/:' > /user/passwd && \
    echo 'nobody:x:65534:' > /user/group
RUN apk add --no-cache ca-certificates
ENV CGO_ENABLED=0 GOFLAGS=-mod=vendor
WORKDIR $GOPATH/src/XXXXmyrepoXXXX
COPY ./ ./
RUN go build \
    -installsuffix 'static' \
    -o /app .
FROM scratch AS final
COPY --from=builder /user/group /user/passwd /etc/
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /app /app
EXPOSE 8080
USER nobody:nobody
ENTRYPOINT ["/app"]
How should I change the Dockerfile? It should work as well as it does here: https://medium.com/@pierreprinetti/the-go-1-11-dockerfile-a3218319d191. There are literally no changes.
The error was that I did not go build the correct path.
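For illustration (the package path here is hypothetical, since the repository layout isn't shown): if the main package lives in a subdirectory, the build line has to point at that path rather than at the module root, e.g.
RUN go build \
    -installsuffix 'static' \
    -o /app ./cmd/server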

ECS Docker container won't start

I have a Docker container with this Dockerfile:
FROM node:8.1
RUN rm -fR /var/lib/apt/lists/*
RUN echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee /etc/apt/sources.list.d/webupd8team-java.list
RUN echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee -a /etc/apt/sources.list.d/webupd8team-java.list
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
RUN apt-get update
RUN echo debconf shared/accepted-oracle-license-v1-1 select true | \
    debconf-set-selections
RUN echo debconf shared/accepted-oracle-license-v1-1 seen true | \
    debconf-set-selections
RUN apt-get install -y oracle-java8-installer
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN mkdir -p /app
WORKDIR /app
# Install app dependencies
COPY package.json /app/
RUN npm install
# Bundle app source
COPY . /app
# Environment Variables
ENV PORT 8080
# start the SSH daemon service
RUN service ssh start
# create a non-root user & a home directory for them
RUN useradd --create-home --shell /bin/bash tunnel-user
# set their password
RUN echo 'tunnel-user:93wcBjsp' | chpasswd
# Copy the SSH key to authorized_keys
COPY tunnel.pub /app/
RUN mkdir -p /home/tunnel-user/.ssh
RUN cat tunnel.pub >> /home/tunnel-user/.ssh/authorized_keys
# Set permissions
RUN chown -R tunnel-user:tunnel-user /home/tunnel-user/.ssh
RUN chmod 0700 /home/tunnel-user/.ssh
RUN chmod 0600 /home/tunnel-user/.ssh/authorized_keys
# allow the tunnel-user to SSH into this machine
RUN echo 'AllowUsers tunnel-user' >> /etc/ssh/sshd_config
EXPOSE 8080
EXPOSE 22
CMD [ "npm", "start" ]
My ECS task has this definition. I'm using a role which has AmazonEC2ContainerServiceforEC2Role.
When I try to start it as a task in my ECS cluster I get this error:
CannotStartContainerError: API error (500): driver failed programming external connectivity on endpoint ecs-ssh-4-ssh-8cc68dbfaa8edbdc0500 (387e024a87752293f51e5b62de9e2b26102d735e8da500c8e7fa5e1b4b4f0983): Error starting userland proxy: listen tcp 0.0.0
How do I fix this?