Dockerfile location and path

I am learning about Dockerfiles by following some examples and reading the docs. A Dockerfile has the following starting lines:
FROM ubuntu:14.04
RUN mkdir /home/meteorapp
WORKDIR /home/meteorapp
ADD . ./meteorapp
# Do basic updates
RUN apt-get update -q && apt-get clean
# Get curl in order to download what we need
RUN apt-get install curl -y \
# Install Meteor
&& (curl https://install.meteor.com/ | sh) \
# Build the Meteor app
&& cd /home/meteorapp/meteorapp/app \
&& meteor build ../build --directory \
# and more lines ...
The line && cd /home/meteorapp/meteorapp/app \ fails with the error:
/bin/sh: 1: cd: can't cd to /home/meteorapp/meteorapp/app
The Dockerfile is located in the root directory of my app.
What is causing this error, and how can I fix it?

It appears that /home/meteorapp/meteorapp/app doesn't exist inside your Docker container.
When you ADD . ./meteorapp, you put everything from the folder containing the Dockerfile into /home/meteorapp/meteorapp inside your container, so if the build context doesn't have an app folder (and it seems that it doesn't, based on your screenshot), one won't magically appear inside the container.
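If the Meteor project actually sits next to the Dockerfile (i.e. at the root of the build context), one way to make the later cd succeed is to copy the context to the path the build expects; a minimal sketch, assuming the app files really do live beside the Dockerfile:
FROM ubuntu:14.04
RUN mkdir /home/meteorapp
WORKDIR /home/meteorapp
# copy the build context into .../meteorapp/app so that
# `cd /home/meteorapp/meteorapp/app` later finds it
ADD . ./meteorapp/app
# optional sanity check: list what actually got copied
RUN ls -la /home/meteorapp/meteorapp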

Related

How to pre-install pre-commit hooks into Docker

As I understand the documentation, whenever I add these lines to the config:
repos:
-   repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v2.1.0
    hooks:
    -   id: trailing-whitespace
it makes pre-commit download the hook code from this repo and execute it. Is it possible to somehow pre-install all the hooks into a Docker image, so that when I call pre-commit run no network is used?
I found this section of the documentation describing how pre-commit caches all the repositories. They are stored in ~/.cache/pre-commit, and this can be configured by setting the PRE_COMMIT_HOME environment variable.
However, the caching only happens when I do pre-commit run, and I want to pre-install everything without running the checks. Is that possible?
you're looking for the pre-commit install-hooks command
at the least you need something like this to cache the pre-commit environments:
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks
disclaimer: I created pre-commit
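For reference, a minimal standalone Dockerfile sketch of that approach (the base image, the pip install of pre-commit, and the PRE_COMMIT_HOME path are assumptions; only the last two lines come from the snippet above):
FROM python:3.9-slim
# git is needed both for `git init` and for pre-commit to clone the hook repos at build time
RUN apt-get update && apt-get install -y --no-install-recommends git \
    && rm -rf /var/lib/apt/lists/*
RUN pip install --no-cache-dir pre-commit
# optional: pin the cache location mentioned in the docs
ENV PRE_COMMIT_HOME=/opt/pre-commit-cache
WORKDIR /app
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks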
The snippet provided by @anthony-sottile works like a charm and helps utilize the Docker cache. Here is a working variation of it from the Django world.
ARG PYTHON_VERSION=3.9-buster
# define an alias for the specific python version used in this file.
FROM python:${PYTHON_VERSION} as python
# Python build stage
FROM python as python-build-stage
ARG BUILD_ENVIRONMENT=test
# Install apt packages
RUN apt-get update && apt-get install --no-install-recommends -y \
  # dependencies for building Python packages
  build-essential \
  # psycopg2 dependencies
  libpq-dev
# Requirements are installed here to ensure they will be cached.
COPY ./requirements .
# Create Python Dependency and Sub-Dependency Wheels.
RUN pip wheel --wheel-dir /usr/src/app/wheels \
  -r ${BUILD_ENVIRONMENT}.txt
# Python 'run' stage
FROM python as python-run-stage
ARG BUILD_ENVIRONMENT=test
ARG APP_HOME=/app
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV BUILD_ENV ${BUILD_ENVIRONMENT}
WORKDIR ${APP_HOME}
# Install required system dependencies
RUN apt-get update && apt-get install --no-install-recommends -y \
  # psycopg2 dependencies
  libpq-dev \
  # Translations dependencies
  gettext \
  # cleaning up unused files
  && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false \
  && rm -rf /var/lib/apt/lists/*
# All absolute dir copies ignore the WORKDIR instruction. All relative dir copies are relative to the WORKDIR instruction
# copy python dependency wheels from python-build-stage
COPY --from=python-build-stage /usr/src/app/wheels /wheels/
# use wheels to install python dependencies
RUN pip install --no-cache-dir --no-index --find-links=/wheels/ /wheels/* \
  && rm -rf /wheels/
COPY ./compose/test/django/entrypoint /entrypoint
RUN chmod +x /entrypoint
COPY .pre-commit-config.yaml .
RUN git init . && pre-commit install-hooks
# copy application code to WORKDIR
COPY . ${APP_HOME}
ENTRYPOINT ["/entrypoint"]
Then you can fire pre-commit checks in a similar fashion:
docker-compose -p project_name -f test.yml run --rm django pre-commit run --all-files

Dockerfile throwing an error when I try to run it: "AH00111: Config variable ${APACHE_RUN_DIR} is not defined"

I am trying my hand at Docker.
I am trying to install apache2 into an Ubuntu image.
FROM ubuntu
RUN echo "welcome to yellow pages"
RUN apt-get update
RUN apt-get install -y tzdata
RUN apt-get install -y apache2
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
RUN echo 'Hello, docker' > /var/www/index.html
ENTRYPOINT ["/usr/sbin/apache2"]
CMD ["-D", "FOREGROUND"]
I found a reference online.
I added the line "RUN apt-get install -y tzdata" because the build was prompting for a tzdata option and stopping image creation.
Now when I run my image I get the error below:
[Thu Jan 07 09:43:57.213998 2021] [core:warn] [pid 1] AH00111: Config variable ${APACHE_RUN_DIR} is not defined
apache2: Syntax error on line 80 of /etc/apache2/apache2.conf: DefaultRuntimeDir must be a valid directory, absolute or relative to ServerRoot
I am new to Docker and it's a bit of a task for me to understand it.
Could anyone help me out with this?
This seems to be an Apache issue, not a Docker issue. Your conf seems to have errors: there is a parameter called DefaultRuntimeDir which points at a directory that does not exist in the container. Review your config file and ensure the directories you specify in there exist in the container.
You can play around within Docker by simply running:
docker build -t my_image_name .
docker run -it --rm --entrypoint /bin/bash my_image_name
# now you are in your docker container, you can check if your directories exist
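For example, inside that shell you could locate the directive the error complains about and check whether the directory it points to exists (the path below is just a guess; use whatever the grep shows):
grep -n DefaultRuntimeDir /etc/apache2/apache2.conf
ls -ld /var/run/apache2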
Without knowing your config, I would simply add one more ENV and RUN (I made this path up, you can change it to whatever you like):
ENV APACHE_RUN_DIR /var/lib/apache/runtime
RUN mkdir -p ${APACHE_RUN_DIR}
As a side note, I would also combine all the RUN instructions into a single one, like this:
RUN echo "welcome to yellow pages" \
&& apt-get update \
&& apt-get install -y tzdata apache2 \
&& rm -rf /var/lib/apt/lists/* \
&& mkdir -p /var/www \
&& echo 'Hello, docker' > /var/www/index.html

Docker error when containerizing app in Google Cloud Run

I am trying to run transformers from huggingface in Google Cloud Run.
My first idea was to run one of the dockerfiles provided by huggingface, but it seems that is not possible.
Any ideas on how to get around this error?
Step 6/9 : WORKDIR /workspace
---> Running in xxx
Removing intermediate container xxx
---> xxx
Step 7/9 : COPY . transformers/
---> xxx
Step 8/9 : RUN cd transformers/ && python3 -m pip install --no-cache-dir .
---> Running in xxx
ERROR: Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
The command '/bin/sh -c cd transformers/ && python3 -m pip install --no-cache-dir .' returned a non-zero code: 1
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
-------------------------------------------------------------------------------------------------------------------------------------------------------------------
ERROR: (gcloud.builds.submit) build xxx completed with status "FAILURE"
Dockerfile from huggingface:
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04
LABEL maintainer="Hugging Face"
LABEL repository="transformers"
RUN apt update && \
    apt install -y bash \
        build-essential \
        git \
        curl \
        ca-certificates \
        python3 \
        python3-pip && \
    rm -rf /var/lib/apt/lists
RUN python3 -m pip install --no-cache-dir --upgrade pip && \
    python3 -m pip install --no-cache-dir \
        mkl \
        tensorflow
WORKDIR /workspace
COPY . transformers/
RUN cd transformers/ && \
    python3 -m pip install --no-cache-dir .
CMD ["/bin/bash"]
.dockerignore file from Google Cloud Run documentation:
Dockerfile
README.md
*.pyc
*.pyo
*.pyd
__pycache__
.pytest_cache
---- Edit:
Managed to get it working based on the answer from Dustin. I basically:
left the Dockerfile in the root folder, together with the transformers folder.
updated the COPY line in the Dockerfile to:
COPY . ./
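In other words, with the Dockerfile at the repository root and the cloned transformers folder next to it, the relevant steps end up roughly as (a sketch):
WORKDIR /workspace
COPY . ./
RUN cd transformers/ && \
    python3 -m pip install --no-cache-dir .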
The error is:
Directory '.' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.
This is due to these two lines in your Dockerfile:
COPY . transformers/
RUN cd transformers/ && \
    python3 -m pip install --no-cache-dir .
This attempts to copy the local directory containing the Dockerfile into the container, and then install it as a Python project.
It looks like the Dockerfile expects to be run at the repository root of https://github.com/huggingface/transformers. You should clone the repo, move the Dockerfile you want to build into the repo root, and then build again.
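Roughly, that looks like this (a sketch; the exact path of the GPU Dockerfile inside the repo is an assumption and may differ between versions):
git clone https://github.com/huggingface/transformers.git
cd transformers
# build from the repository root so that `COPY . transformers/` picks up setup.py
docker build -f docker/transformers-gpu/Dockerfile -t transformers-gpu .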

How to identify why a program is not starting inside docker

I have a Docker image which has a C++ executable with its dependencies packed into it. This executable runs fine outside the Docker environment, and I have tested it multiple times.
However, inside Docker it stops immediately as soon as it is started.
To debug, I added a std::cout << "Main 1" << std::endl; as soon as main() is called, but even this is not printed when I start the executable inside Docker.
Any tips on how to debug this issue?
Here is the Dockerfile used to build the image.
FROM ubuntu:18.04
# install app dependencies
RUN apt-get -yqq update \
&& apt-get -yqq dist-upgrade \
&& apt-get -yqq install apt-utils libgomp1 libprotobuf10 libboost-thread1.65.1 libboost-filesystem1.65.1 libopencv-core3.2 libopencv-imgproc3.2 libopencv-imgcodecs3.2 libjpeg-turbo8 libpo
&& apt-get -yqq remove systemd cups perl ffmpeg apt-utils \
&& rm -rf /var/lib/apt/lists/*
# create app folder
RUN mkdir -p /opt/aimes
# copy app, dependencies and config
COPY deps/aimes /opt/aimes/
COPY deps/*.* /opt/aimes/
COPY deps/config /opt/aimes/config
# copy wrapper script
COPY run-es.sh /opt/aimes/
# run command
WORKDIR /opt/aimes
ENV LD_LIBRARY_PATH .
ENTRYPOINT ["./run-es.sh"]
Adding --cap-add=SYS_PTRACE to the docker run command helped in finding out the issue using gdb.
The solution was also to add that option to the docker run command, since the exe required elevated permissions.
The command below solved my issue:
docker run --cap-add=SYS_PTRACE -it --rm
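For reference, a debugging session enabled by that flag looks roughly like this (gdb is not in the image, and the executable name is a guess based on the COPY lines above):
docker run --cap-add=SYS_PTRACE -it --rm --entrypoint /bin/bash <image-name>
# inside the container:
apt-get update && apt-get install -y gdb
cd /opt/aimes
gdb ./aimes
(gdb) run
(gdb) backtrace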

gcc error while building docker image for django on windows

I am trying to build a Docker image using Visual Studio Code following this tutorial: "https://code.visualstudio.com/docs/python/tutorial-deploy-containers".
I created a Django app with a connection to an MS SQL Server on Azure using the package pyodbc.
During the build of the Docker image I receive the following error messages:
unable to execute 'gcc': No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for pyodbc
and
unable to execute 'gcc': No such file or directory
error: command 'gcc' failed with exit status 1
----------------------------------------
Failed building wheel for typed-ast
I read solutions for Linux systems saying one should install python-dev, but since I am working on a Windows machine this is not a solution for me.
Then I read that on Windows all the needed files are in the 'include' directory of the Python installation. But in a venv installation this directory is empty... so I created a directory junction to the original 'include'. The error still exists.
My Dockerfile is included below.
# Python support can be specified down to the minor or micro version
# (e.g. 3.6 or 3.6.3).
# OS Support also exists for jessie & stretch (slim and full).
# See https://hub.docker.com/r/library/python/ for all supported Python
# tags from Docker Hub.
FROM tiangolo/uwsgi-nginx:python3.6-alpine3.7
# Indicate where uwsgi.ini lives
ENV UWSGI_INI uwsgi.ini
# Tell nginx where static files live (as typically collected using Django's
# collectstatic command).
ENV STATIC_URL /app/static_collected
# Copy the app files to a folder and run it from there
WORKDIR /app
ADD . /app
# Make app folder writable for the sake of db.sqlite3, and make that file also writable.
# RUN chmod g+w /app
# RUN chmod g+w /app/db.sqlite3
# If you prefer miniconda:
#FROM continuumio/miniconda3
LABEL Name=hello_django Version=0.0.1
EXPOSE 8000
# Using pip:
RUN python3 -m pip install -r requirements.txt
CMD ["python3", "-m", "hello_django"]
# Using pipenv:
#RUN python3 -m pip install pipenv
#RUN pipenv install --ignore-pipfile
#CMD ["pipenv", "run", "python3", "-m", "hello_django"]
# Using miniconda (make sure to replace 'myenv' w/ your environment name):
#RUN conda env create -f environment.yml
#CMD /bin/bash -c "source activate myenv && python3 -m hello_django"
I could use some help in building the image without the errors.
Based on the answer from 2ps, I added these lines almost at the top of the Dockerfile:
FROM tiangolo/uwsgi-nginx:python3.6-alpine3.7
RUN apk update \
&& apk add apk add gcc libc-dev g++ \
&& apk add libffi-dev libxml2 libffi-dev \
&& apk add unixodbc-dev mariadb-dev python3-dev
and received a new error...
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.7/community/x86_64/APKINDEX.tar.gz
v3.7.1-98-g2f2e944c59 [http://dl-cdn.alpinelinux.org/alpine/v3.7/main]
v3.7.1-105-g7db92f4321 [http://dl-cdn.alpinelinux.org/alpine/v3.7/community]
OK: 9053 distinct packages available
ERROR: unsatisfiable constraints:
add (missing):
required by: world[add]
apk (missing):
required by: world[apk]
The command '/bin/sh -c apk update && apk add apk add gcc libc-dev g++ && apk add libffi-dev libxml2 libffi-dev && apk add unixodbc-dev mariadb-dev python3-dev' returned a non-zero code: 2
Found out that adding
RUN echo "ipv6" >> /etc/modules
helped with the errors above. Taken from: https://github.com/gliderlabs/docker-alpine/issues/55
The app now works, except that the intended connection to the MS SQL database still does not work.
Error at /
('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'ODBC Driver 13 for SQL Server' : file not found (0) (SQLDriverConnect)")
I think I should get my hands dirty with some Docker documentation.
I gave up on the Alpine solution and switched to Debian:
FROM python:3.7
# needed files for pyodbc
RUN apt-get update
RUN apt-get install gcc libc-dev g++ libffi-dev libxml2 unixodbc-dev -y
# MS SQL driver 17 for debian
RUN apt-get install -y apt-transport-https \
    && curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
    && curl https://packages.microsoft.com/config/debian/9/prod.list > /etc/apt/sources.list.d/mssql-release.list \
    && apt-get update \
    && ACCEPT_EULA=Y apt-get install msodbcsql17 -y
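After that, the Python dependencies (including pyodbc) should build normally, since gcc and unixodbc-dev are present; a minimal sketch of the remaining steps, assuming a requirements.txt at the project root:
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .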
You'll need to use apk to install gcc and the other native dependencies needed to build your pip dependencies. For the ones that you listed (typed-ast and pyodbc), I think they would be:
RUN apk update \
  && apk add gcc libc-dev g++ \
  && apk add libffi-dev libxml2 \
  && apk add unixodbc-dev mariadb-dev python3-dev