Docker: Cannot COPY from parent directory while building the image [duplicate]

This question already has answers here:
How to include files outside of Docker's build context?
(19 answers)
Closed last month.
I am trying to use docker-compose up -d to deploy my Django application. The problem is that my Dockerfile and docker-compose.yml live in one directory, but the build needs access to a requirements.txt in the parent directory.
Minimal Example:
Filestructure:
requirements.txt (file)
docker (directory)
docker/Dockerfile (file)
docker/docker-compose.yml (file)
Dockerfile:
FROM python:3.10-slim
COPY ./../requirements.txt /requirements.txt
docker-compose.yml:
version: '3'
services:
  django:
    container_name: django_123
    build:
      context: ./..
      dockerfile: ./docker/Dockerfile
    expose:
      - "8000"
The setup works on Docker Desktop 4 on Windows 10, but not on Ubuntu 22.
I get the error:
Step 1/2 : COPY ./../requirements.txt /requirements.txt
COPY failed: forbidden path outside the build context: ../requirements.txt ()
ERROR: Service 'django' failed to build : Build failed
I already read that I should build the image from the parent directory, but I get the same error message if I use docker build -f ../Dockerfile ..
What could be the problem? And why does it work on Windows and not on Ubuntu?

For security reasons, you can only copy from the directory set as the "build context" and below. So doing COPY ./../requirements.txt is not allowed, since that would copy from a directory above the build context.
Since you've set your build context to the parent directory, you should be able to get at the requirements file. You just need to specify your host file paths as originating from the build context, like this:
COPY ./requirements.txt /requirements.txt
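For the layout in the question, that means the compose file can stay exactly as it is and only the Dockerfile's COPY path changes (a sketch using the file names from the question):

```dockerfile
# docker/Dockerfile
# The build context is the parent directory (context: ./.. in docker-compose.yml),
# so COPY source paths are written relative to that parent,
# not relative to this file's location.
FROM python:3.10-slim
COPY requirements.txt /requirements.txt
```

With context: ./.. and dockerfile: ./docker/Dockerfile unchanged, docker-compose build sends the parent directory as the context, so requirements.txt is visible to COPY.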

Related

docker doesn't reflect code change in docker container

I have a Django application that runs in a Docker container locally on a Mac. The container is managed via docker-compose, and the Django application is configured to reload when its code changes.
However, and this is the problem: when I change the application code, the Django development server reloads, but the changes are not reflected in the container; the API response doesn't change. Here is my Docker configuration:
Dockerfile
FROM python:2-slim
RUN apt-get update && apt-get install -y build-essential
COPY requirements /requirements
RUN pip install -r /requirements/build.txt
# copy app source into image
COPY service_api /opt/service_api
COPY manage.py /opt
docker-compose.yml
version: '3.7'
services:
  service-django:
    image: service-django
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 8000:8000
    volumes:
      - ./service_api/:/opt/service_api/service_api # this path is correct!
    container_name: service-django
    hostname: service-django
    restart: always
Docker desktop: 3.5.0
Docker Engine: 20.10.7
Compose: 1.29.2
Big Sur: 11.4
Any help will be appreciated!
You can either inject your code into the container at build time, using that COPY service_api /opt/service_api line in your Dockerfile (which is not what you want here, since it bakes the source code into the image), or take the other approach (which is what you want): bind your source code directory as a volume into the container. The bind mount makes modifications to your source code visible inside the container, so the Django server reload can pick up code updates. You are already doing exactly that in your compose file.
So all you need to do here is remove the file copying at build time and let your source code be visible only through that volume.
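A sketch of the trimmed-down setup (paths taken from the question; the Dockerfile keeps only the dependency install, and the application source arrives via the bind mount):

```dockerfile
# Dockerfile — no COPY of application source; only dependencies are baked in
FROM python:2-slim
RUN apt-get update && apt-get install -y build-essential
COPY requirements /requirements
RUN pip install -r /requirements/build.txt
```

The volumes: entry in the compose file then supplies /opt/service_api/service_api at run time, so the files the autoreloader sees are the ones you edit on the host.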

Getting "Error processing tar file(exit status 1): open /myenv/include/python3.6m/Python-ast.h: no such file or directory" while docker-compose build

So I am pretty new to Docker and Django. I am running the command below on my Linux machine, which I'm connected to from my physical Windows machine using PuTTY:
docker-compose build
I am getting an error:
Error processing tar file(exit status 1): open /myenv/include/python3.6m/Python-ast.h: no such file or directory
'myenv' is the virtual environment I have created inside my project folder.
The container starts on port 9000. The app doesn't have anything yet, just a simple project, so I only expect to see the 'congratulations' screen. I don't know where I am going wrong. My end goal is to open the Docker URL in my Windows browser and see the container's response.
This is my docker-compose.yml file:
version: '3'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:9000
    ports:
      - 202.179.92.106:8000:9000
the IP: 202.179.92.106 is my public IP. I did the above binding so as to access the docker container from my windows machine. Would request additional inputs for the port binding as well if correct/incorrect.
Below is my Dockerfile:
FROM python:3.6.9
RUN mkdir djangotest
WORKDIR djangotest
ADD . /djangotest
RUN pip install -r requirements.txt
Please help me out peeps!
If you have a virtual environment in your normal development tree, you can't copy it into a Docker image. You can exclude this from the build sequence by mentioning it in a .dockerignore file:
# .dockerignore
myenv
Within the Dockerfile, the RUN pip install line will install your application's dependencies into the Docker image, so you should have a complete self-contained image.
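A minimal sketch of that pattern, with the virtual environment excluded via .dockerignore and the dependencies installed inside the image instead (file names assumed from the question):

```dockerfile
# Dockerfile — dependencies come from pip install inside the image,
# never from a copied host virtual environment
FROM python:3.6.9
WORKDIR /djangotest
COPY requirements.txt .
RUN pip install -r requirements.txt
# .dockerignore keeps myenv/ out of this COPY
COPY . .
```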

.dockerignore: some files are ignored, some are not

The Problem
Hi, I'm new to Docker. I want to ignore some files and directories using .dockerignore in my Django project. At first, no files were ignored at all; then I searched Stack Overflow and found that it was because of the volumes in docker-compose.yml, so I commented them out. Now some of the files and directories are ignored, but some are not (__pycache__, db.sqlite3). I went through a lot of questions but couldn't find a solution.
Project structure
-src
--coreapp
---migrations
---__init__.py
---__pycache__
---admin.py
---apps.py
---models.py
---tests.py
---views.py
---tests.py
--admin.json
--db.sqlite3
--manage.py
-.dockerignore
-.gitignore
-docker-compose.yml
-Dockerfile
-Procfile
-README.md
-requirements.txt
-runtime.txt
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
COPY . /code/
WORKDIR /code/
EXPOSE 8000
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: bash -c "python src/manage.py runserver 0.0.0.0:8000"
    # volumes:
    #   - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
.dockerignore
# Byte-compiled / optimized / DLL files
__pycache__/
**/migrations
src/media
src/db.sqlite3
Procfile
.git
Commands
# build image
sudo docker-compose up --build
# to enter container
sudo docker exec -it [container id] bash
# to check ignored files inside the container
ls
Expected output
# Byte-compiled / optimized / DLL files
__pycache__/ # ignored
**/migrations # ignored
src/media # ignored
src/db.sqlite3 # ignored
Procfile # ignored
.git # ignored
Original Output
# Byte-compiled / optimized / DLL files
__pycache__/ # NOT ignored
**/migrations # ignored
src/media # ignored
src/db.sqlite3 # NOT ignored
Procfile # ignored
.git # ignored
Attempts
__pycache__/
**/__pycache__
**/*__pycache__
**/*__pycache__*
**/*__pycache__/
**/__pycache__/
*/db.sqlite3
db.sqlite3
The .dockerignore file only affects what files are copied into the image in the Dockerfile COPY line (technically, what files are included in the build context). It doesn't mean those files will never exist in the image or in a container, just that they're not included in the initial copy.
You should be able to verify this by looking at the docker build output. After each step there will be a line like ---> 0123456789ab; those hex numbers are valid Docker image IDs. Find the image created immediately after the COPY step and run
docker run --rm 0123456789ab ls
If you explore this way a little bit, you should see that the __pycache__ directory in the container is either absent entirely or different from the host.
Of the specific files you mention, the db.sqlite3 file is your actual application's database, and it will be created when you start the application; that's why you see it if you docker exec into a running container, but not when you docker run a clean container from the image. What is __pycache__? clarifies that the Python interpreter creates that directory on its own whenever it executes an import statement, so it's not surprising that that directory will also reappear on its own.
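The point about __pycache__ reappearing on its own can be seen without Docker at all. Compiling a module, which import does implicitly, creates the directory next to the source (a small illustration using only the standard library):

```python
import pathlib
import py_compile
import tempfile

# A throwaway directory containing a single Python module.
tmp = pathlib.Path(tempfile.mkdtemp())
src = tmp / "mymod.py"
src.write_text("VALUE = 42\n")

# py_compile does explicitly what `import mymod` does implicitly:
# it writes bytecode into a __pycache__ directory next to the source.
py_compile.compile(str(src))

print(sorted(p.name for p in tmp.iterdir()))  # ['__pycache__', 'mymod.py']
```

So even if .dockerignore keeps __pycache__ out of the build context, the interpreter recreates it inside the running container the moment your code is imported.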
What exactly do you have in requirements.txt?
Is there some package in that file that creates this directory? The Docker CLI can only exclude files before sending the context for the build; once the build starts (base image, pip install, etc., as written in the Dockerfile), .dockerignore can no longer remove anything from the image.
If not, then you can try:
*/db* -> eliminates files starting with db one level below the root
*sqlite3
Per https://docs.docker.com/engine/reference/builder/, matching is done using Go's filepath.Match rules. A preprocessing step removes leading and trailing whitespace and eliminates . and .. elements using Go's filepath.Clean. Lines that are blank after preprocessing are ignored.
In your attempts */db.sqlite3 and db.sqlite3, maybe the . is being eliminated as mentioned above, and hence the requested file is not removed from the build.
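To make the matching rules concrete, here is a rough, hypothetical approximation of .dockerignore matching in Python. It is only a toy: fnmatch stands in for Go's filepath.Match, and real Docker also excludes everything below a matched directory, which this version does not model.

```python
import fnmatch

def dockerignore_match(pattern: str, path: str) -> bool:
    """Rough sketch of .dockerignore matching: patterns are anchored at
    the context root, '*' is matched per path component (like Go's
    filepath.Match), and '**' spans any number of components."""
    def match(pc, xc):
        if not pc:
            return not xc
        if pc[0] == "**":
            # '**' may consume zero or more leading path components.
            return any(match(pc[1:], xc[i:]) for i in range(len(xc) + 1))
        return bool(xc) and fnmatch.fnmatch(xc[0], pc[0]) and match(pc[1:], xc[1:])
    return match(pattern.split("/"), path.split("/"))

# A bare 'db.sqlite3' pattern only matches at the context root:
print(dockerignore_match("db.sqlite3", "src/db.sqlite3"))               # False
print(dockerignore_match("src/db.sqlite3", "src/db.sqlite3"))           # True
print(dockerignore_match("**/__pycache__", "src/coreapp/__pycache__"))  # True
```

This is why patterns like **/__pycache__ are needed to reach nested directories, while a plain file name only excludes a file sitting directly in the build context root.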

Dockerfile with copy command and relative path

Have some way to use copy command with the relative path in dockerfile? I'm trying to use:
COPY ./../folder/*.csproj ./
Note: my folder structure (I'm running the Dockerfile in project-test, and the other files are in the project-console folder) is:
|- project-console
|- project-test
And I receive the following error:
ERROR: service 'app' failed to build: COPY failed: no source files were specified.
My purpose is to have two projects in the same Docker image. I have a .NET Core console project and another project with unit tests (NUnit), and I'm trying to run the unit tests in Docker.
UPDATE
It is possible to use multi-stage builds: https://docs.docker.com/develop/develop-images/multistage-build/
Or to use docker-compose with a build section: https://docs.docker.com/engine/reference/commandline/build/
Like:
services:
  worker:
    build:
      context: ./workers
      dockerfile: Dockerfile
Reference: Allow Dockerfile from outside build-context
You can try this way
$ cd project-console
$ docker build -f ../project-test/Dockerfile .
Update:
By using docker-compose
build:
  context: ../
  dockerfile: project-test/Dockerfile
../ will be set as the context; it should include both project-console and project-test in your case, so you can COPY project-console/*.csproj in the Dockerfile.
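A sketch of the matching Dockerfile for that compose setup (the folder names come from the question; the SDK image tag is an assumption). With the context set to the common parent, both sibling folders are inside the context:

```dockerfile
# project-test/Dockerfile — built with context ../ so sibling folders are visible
FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /src
# Both source paths are relative to the context (the common parent directory)
COPY project-console/*.csproj project-console/
COPY project-test/*.csproj project-test/
```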

Docker compose with multiple Dockerfiles and images (Django+Angular)

I'm building an app that uses Django and Angular that is split up into two different repositories and docker images.
My file structure is:
.
docker-compose.yml
djangoapp/
Dockerfile
manage.py
...
angularapp/
Dockerfile
...
I've been unable to get it to work properly; all the documentation I found on this seems to expect the docker-compose.yml file to sit next to the Dockerfile.
I've tried multiple different variations but my current (not working) docker-compose.yml file looks like this:
version: '3'
services:
  web:
    build: ./djangoapp
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
This gives me error can't open file 'manage.py': [Errno 2] No such file or directory.
If I go into the djangoapp/ directory and create a docker-compose.yml file there according to the official Docker docs, it works fine. So there's nothing wrong with the actual build; the issue is accessing it from outside, as I'm trying to do.
Update: I also decided to add my Dockerfile located at ./djangoapp/Dockerfile.
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
From what I can see, it seems to be just a simple typo:
version: '3'
services:
  web:
    build: ./djangoapp
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
That ./ is all you're missing, I think. I'm pretty sure that without it the compose file doesn't go into the specified directory.
Update
Now that I've seen your Dockerfile, I've noticed that you haven't added manage.py to your container. The same way you added requirements.txt is what you need to do with manage.py. When a Docker container is built, it only has what you give it, plus whatever file structure comes from the base image.
You can either add it in the Dockerfile, or you can have a shared volume between a local directory and a container directory. I think adding it in the Dockerfile would be easiest, though.
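For the two-repository layout above, a top-level compose file can point each service at its own folder as a build context (a sketch; the Angular port and service names are assumptions, not from the question):

```yaml
# docker-compose.yml at the repo root — one build context per app
services:
  web:
    build: ./djangoapp
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
  frontend:
    build: ./angularapp
    ports:
      - "4200:4200"
```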
This issue was solved by rebuilding my Docker image with the docker-compose build web command. It seems that at the stage when I was seeing the error, the build didn't include the needed files. So nothing is wrong in the code I show above in the question; the image just needed to be rebuilt.