Execute sqlite commands within a Dockerfile

This Dockerfile is working correctly, but how do I execute sqlite commands within it?
FROM alpine
RUN apk add --update sqlite && rm -rf /var/cache/apk/*
RUN apk add --update wget && rm -rf /var/cache/apk/*
RUN wget --no-check-certificate https://cdn.rawgit.com/times/data/master/sunday_times_panama_data.zip
RUN unzip sunday_times_panama_data.zip
But the next part needs to be executed at the sqlite prompt. How do I declare this part?
# sqlite commands:
sqlite3 sundayTimesPanamaPapers.sqlite
.mode csv
CREATE TABLE panama(company_url TEXT,company_name TEXT,officer_position_es TEXT,officer_position_en TEXT,officer_name TEXT,inc_date TEXT,dissolved_date TEXT,updated_date TEXT,company_type TEXT,mf_link TEXT);
.import sunday_times_panama_data.csv panama

I can save the commands to a file and then execute the file in a Dockerfile like this...
ADD sqlite_commands.sql /
RUN sqlite3 panama.sqlite < /sqlite_commands.sql
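For completeness, a minimal sketch of the full Dockerfile under this approach, assuming sqlite_commands.sql (containing the .mode, CREATE TABLE, and .import lines above) sits next to the Dockerfile:
FROM alpine
RUN apk add --update sqlite wget && rm -rf /var/cache/apk/*
RUN wget --no-check-certificate https://cdn.rawgit.com/times/data/master/sunday_times_panama_data.zip
RUN unzip sunday_times_panama_data.zip
ADD sqlite_commands.sql /
RUN sqlite3 panama.sqlite < /sqlite_commands.sql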

Just feed it the commands using a pipe:
RUN echo '.mode csv\nCREATE TABLE panama(company_url TEXT,company_name TEXT,officer_position_es TEXT,officer_position_en TEXT,officer_name TEXT,inc_date TEXT,dissolved_date TEXT,updated_date TEXT,company_type TEXT,mf_link TEXT);\n.import sunday_times_panama_data.csv panama' | sqlite3 sundayTimesPanamaPapers.sqlite
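Note that echo's handling of \n escapes varies between shells (it happens to work in Alpine's busybox sh); printf is the portable choice. An equivalent sketch:
RUN printf '%s\n' \
    '.mode csv' \
    'CREATE TABLE panama(company_url TEXT,company_name TEXT,officer_position_es TEXT,officer_position_en TEXT,officer_name TEXT,inc_date TEXT,dissolved_date TEXT,updated_date TEXT,company_type TEXT,mf_link TEXT);' \
    '.import sunday_times_panama_data.csv panama' \
    | sqlite3 sundayTimesPanamaPapers.sqlite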

Related

/entrypoint.sh: line 8: syntax error: unexpected end of file

The Docker project was created on a Linux machine; I'm running Windows and I can't get docker-compose up to work. I've read through Stack Overflow's answers and so far I've tried the following (none have worked):
- Using Visual Studio Code, saved the file as "LF" instead of CRLF
- Deleted the file entirely, created a new one using Visual Studio Code, and typed in the contents
- Cut the entire file, pasted it into Notepad so the formatting gets cleared, then copied and pasted it back
- Added various forms of #!/bin/bash to the start of entrypoint.sh
- Changed the Dockerfile to use COPY instead of ADD
At this point I'm not sure what else to try. Any ideas?
Edit
entrypoint.sh
if [ "$1" == 'celery' ]; then
celery -A vicmun worker -l info --uid=celery --gid=celery
else
./../wait_for_it.sh db:5433 --timeout=10
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
fi
Dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED 1
ARG APP_ENV=${APP_ENV}
RUN mkdir /src
RUN mkdir /static
WORKDIR /src
ADD ./src /src
ADD entrypoint-${APP_ENV}.sh /entrypoint.sh
ADD wait_for_it.sh /wait_for_it.sh
RUN addgroup --system celery && adduser --system --ingroup celery celery
RUN ["chmod", "+x", "/wait_for_it.sh"]
RUN apt-get -y update
RUN apt-get -y install ffmpeg
RUN pip install -r requirements.txt
ENTRYPOINT ["bash", "/entrypoint.sh"]
I don't know your config, but I was able to resolve that problem by using CMD.
In my case, I could execute the script with Docker as follows:
Dockerfile
FROM python:3.10-alpine3.15
ENV PYTHONUNBUFFERED=1
WORKDIR /app
RUN apk update \
    && apk add --no-cache gcc musl-dev postgresql-dev python3-dev libffi-dev \
    && pip install --upgrade pip
COPY requirements.txt .
RUN python -m pip install -r requirements.txt
COPY . .
CMD [ "sh", "entrypoint.sh" ]
entrypoint.sh
#!/bin/sh
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
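With that in place, building and running it would look something like this (the image tag is just an example):
docker build -t myapp .
docker run -p 8000:8000 myapp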
Well, I feel a bit embarrassed about this one. It turns out the solution was something I had already done, but it didn't take effect until I rebuilt with the --no-cache option.
The solution was to:
- save the file as "LF" instead of CRLF using Visual Studio Code
- run docker-compose build --no-cache
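As an extra safeguard (not part of the original fix), you can also strip carriage returns during the build itself, so a Windows checkout can't reintroduce the problem, e.g.:
RUN sed -i 's/\r$//' /entrypoint.sh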

Cannot source tmux config file in a Dockerfile

I'm building a Docker image where I need tmux, and rather than having to run tmux source-file ~/.tmux.conf every time I start the container (that way madness lies), I'd like to source the config file at build time. However, this isn't working:
ARG PYTORCH="1.6.0"
ARG CUDA="10.1"
ARG CUDNN="7"
FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
RUN apt-get update && apt-get install -y man-db manpages-posix vim screen tmux \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
# configuration for tmux
COPY src/.tmux.conf ~/.tmux.conf
RUN tmux source-file ~/.tmux.conf
I get the error:
error connecting to /tmp/tmux-0/default (No such file or directory)
The command '/bin/sh -c tmux source-file ~/.tmux.conf' returned a non-zero code: 1
What's happening? It doesn't seem to be a file not found error.
There's no tmux server running (no server at all has been running yet, hence the missing socket file). The config file will be loaded automatically when you run tmux in the container, so the failing line can be dropped.
Also, Docker doesn't expand the ~, so you'll need to provide the absolute path. The resulting Dockerfile should look something like this, assuming you're running as root in the container:
ARG PYTORCH="1.6.0"
ARG CUDA="10.1"
ARG CUDNN="7"
FROM pytorch/pytorch:${PYTORCH}-cuda${CUDA}-cudnn${CUDNN}-devel
RUN apt-get update && apt-get install -y man-db manpages-posix vim screen tmux \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*
# configuration for tmux
COPY src/.tmux.conf /root/.tmux.conf
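You can verify the config is picked up by starting tmux in a throwaway container (the image tag is just an example):
docker build -t pytorch-tmux .
docker run --rm -it pytorch-tmux tmux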

How to run bash when we trigger the docker run command without -it?

I have a Dockerfile as follow:
FROM centos
RUN mkdir work
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip
RUN pip install pandas
RUN pip install boto3
RUN pip install pynt
WORKDIR ./work
CMD ["bash"]
where I am installing some basic dependencies.
Now when I run
docker run imagename
it does nothing, but when I run
docker run -it imageName
I land in the bash shell. But I want to get into the bash shell as soon as I trigger the run command, without any extra parameters.
I am using this Docker container in AWS CodeBuild, where I can't specify parameters like -it, but I want to execute my code inside the Docker container itself.
Is it possible to modify CMD/ENTRYPOINT in such a way that when running the docker image I land right inside the container?
I checked your container; it will not even build due to missing pip. So I modified it a bit so that it at least builds:
FROM centos
RUN mkdir glue
RUN yum install -y python3 java-1.8.0-openjdk java-1.8.0-openjdk-devel tar git wget zip python3-pip
RUN pip3 install pandas
RUN pip3 install boto3
RUN pip3 install pynt
WORKDIR ./glue
Build it using, e.g.:
docker build . -t glue
Then you can run commands in it using, for example, the following syntax:
docker run --rm glue bash -c "mkdir a; ls -a; pwd"
I use --rm as I don't want to keep the container.
Hope this helps.
We cannot log in to the Docker container directly.
If you want to run specific commands when the container starts in detached mode, you can put them in the CMD or ENTRYPOINT instructions of the Dockerfile.
If you want to get into the shell directly, you can run
docker run -it imageName
or
docker run imageName bash -c "ls -ltr;pwd"
and it will return the output.
If you have triggered the run command without the -it params, you can get into the container using:
docker exec -it containerName bash
and you will land in the shell.
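One common trick (a sketch; the names are examples, not from the answers above) is to start the container detached with a long-running command, then exec into it:
docker run -d --name myContainer imageName tail -f /dev/null
docker exec -it myContainer bash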
Now, if you are using AWS CodeBuild custom images and are wondering how commands can be submitted to the container, you have to put your commands into the build_spec.yaml file, under the pre_build, build, or post_build parameter, and those commands will be submitted to the Docker container.
build_spec.yml
version: 0.2
phases:
  pre_build:
    commands:
      - pip install boto3 # or any prebuild configuration
  build:
    commands:
      - spark-submit job.py
  post_build:
    commands:
      - rm -rf /tmp/*
More about build_spec here

Can't modify files created in docker container

I've got a container with a Django application running in it, and I sometimes go into the container's shell and run ./manage.py makemigrations to create migrations for my app.
The files are created successfully and synchronized between host and container.
However, on my host machine I am not able to modify any file created in the container.
This is my Dockerfile
FROM python:3.8-alpine3.10
LABEL maintainer="Marek Czaplicki <marek.czaplicki>"
WORKDIR /app
COPY ./requirements.txt ./requirements.txt
RUN set -ex; \
    apk update; \
    apk upgrade; \
    apk add libpq libc-dev gcc g++ libffi-dev linux-headers python3-dev musl-dev pcre-dev postgresql-dev postgresql-client swig tzdata; \
    apk add --virtual .build-deps build-base linux-headers; \
    apk del .build-deps; \
    pip install pip -U; \
    pip --no-cache-dir install -r requirements.txt; \
    rm -rf /var/cache/apk/*; \
    adduser -h /app -D -u 1000 -H uwsgi_user
ENV PYTHONUNBUFFERED=TRUE
COPY . .
ENTRYPOINT ["sh", "./entrypoint.sh"]
CMD ["sh", "./run_backend.sh"]
and run_backend.sh
./manage.py collectstatic --noinput
./manage.py migrate && exec uwsgi --strict uwsgi.ini
What can I do to be able to modify these files on my host machine? I don't want to chmod every file or directory every time I create one.
For some reason there is one project in which files created in the container are editable from the host machine, but I cannot find any difference between the two setups.
By default, Docker containers run as root. This has two issues:
- In development, as you can see, files end up owned by root, which is often not what you want.
- In production this is a security risk (https://pythonspeed.com/articles/root-capabilities-docker-security/).
For development purposes, docker run --user $(id -u) yourimage or the Compose example given in the other answer will match the user to your host user.
For production, you'll want to create a user inside the image; see the page linked above for details.
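A minimal sketch of that production approach, assuming an Alpine-based image like the one above (the user name is an example):
RUN adduser -D -u 1000 appuser
USER appuser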
Usually, files created inside a Docker container are owned by the container's root user.
You could try this inside your container:
chown 1000:1000 file-you-want-to-edit-outside
You could add this to the end of your Dockerfile as a RUN instruction.
Edit:
If you are using docker-compose, you can set a user for your container:
services:
  container:
    user: ${CURRENT_HOST_USER}
And have CURRENT_HOST_USER be equal to $(id -u):$(id -g)
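For example, the variable could be exported before bringing the stack up (a sketch, not from the original answer):
export CURRENT_HOST_USER="$(id -u):$(id -g)"
docker-compose up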
The solution was to add
USER uwsgi_user
to the Dockerfile and then simply run docker exec -it container-name sh

Getting Permission Denied error while accessing a file in Docker

I am trying to deploy a model on AWS Sagemaker and using the following docker file:
FROM ubuntu:16.04
#MAINTAINER Amazon AI <sage-learner@amazon.com>
RUN apt-get -y update && apt-get install -y --no-install-recommends \
    wget \
    python3.5-dev \
    gcc \
    nginx \
    ca-certificates \
    libgcc-5-dev \
    && rm -rf /var/lib/apt/lists/*
# Here we get all python packages.
# There's substantial overlap between scipy and numpy that we eliminate by
# linking them together. Likewise, pip leaves the install caches populated which uses
# a significant amount of space. These optimizations save a fair amount of space in the
# image, which reduces start up time.
RUN wget https://bootstrap.pypa.io/3.3/get-pip.py && python3.5 get-pip.py && \
pip3 install numpy==1.14.3 scipy lightfm scikit-optimize pandas==0.22.0 flask gevent gunicorn && \
rm -rf /root/.cache
# Set some environment variables. PYTHONUNBUFFERED keeps Python from buffering our standard
# output stream, which means that logs can be delivered to the user quickly. PYTHONDONTWRITEBYTECODE
# keeps Python from writing the .pyc files which are unnecessary in this case. We also update
# PATH so that the train and serve programs are found when the container is invoked.
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/program:${PATH}"
# Set up the program in the image
COPY lightfm /opt/program
WORKDIR /opt/program
The Docker container builds successfully, but when I run the following command:
docker run XYZ train
on my local machine, or even on SageMaker, I get the following error:
standard_init_linux.go:207: exec user process caused "permission denied"
In the Dockerfile I am copying a folder called lightfm, and there is a file called "train" in it.
Can anyone help?
OUTPUT OF MY DOCKER BUILD:
$ docker build -t lightfm .
Sending build context to Docker daemon 41.47kB
Step 1/9 : FROM ubuntu:16.04
---> 5e13f8dd4c1a
Step 2/9 : RUN apt-get -y update && apt-get install -y --no-install-recommends wget python3.5-dev gcc nginx ca-certificates libgcc-5-dev && rm -rf /var/lib/apt/lists/*
---> Using cache
---> 14ae3a1eb780
Step 3/9 : RUN wget https://bootstrap.pypa.io/3.3/get-pip.py && python3.5 get-pip.py && pip3 install numpy==1.14.3 scipy lightfm scikit-optimize pandas==0.22.0 flask gevent gunicorn && rm -rf /root/.cache
---> Using cache
---> 5a2727e27385
Step 4/9 : ENV PYTHONUNBUFFERED=TRUE
---> Using cache
---> 43bf8c5e8414
Step 5/9 : ENV PYTHONDONTWRITEBYTECODE=TRUE
---> Using cache
---> 7d2c45d61cec
Step 6/9 : ENV PATH="/opt/program:${PATH}"
---> Using cache
---> f3cc6313c0d9
Step 7/9 : COPY lightfm /opt/program
---> ad929ba84692
Step 8/9 : WORKDIR /opt/program
---> Running in a040dd0bab03
Removing intermediate container a040dd0bab03
---> 8f53c5a3ba63
Step 9/9 : RUN chmod 755 serve
---> Running in 5666abb27cd0
Removing intermediate container 5666abb27cd0
---> e80aca934840
Successfully built e80aca934840
Successfully tagged lightfm:latest
SECURITY WARNING: You are building a Docker image from Windows against a non-Windows Docker host. All files and directories added to build context will have '-rwxr-xr-x' permissions. It is recommended to double check and reset permissions for sensitive files and directories.
Assuming train is the executable you want to run, give it exec permission: after the COPY lightfm /opt/program line, add RUN chmod +x /opt/program/train.
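In context, the relevant section of the Dockerfile would then look something like this:
# Set up the program in the image
COPY lightfm /opt/program
RUN chmod +x /opt/program/train
WORKDIR /opt/program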