docker: failed to compute cache key: "/requirements.txt" not found: not found - dockerfile

I tried to build a docker image with the following command:
docker build -< Dockerfile
I ran it in the main directory of the app. I found this command in some "how to build a Docker image" documentation. However, the build failed with the following error:
failed to compute cache key: "/requirements.txt" not found: not found
My test-app structure looks like:
.
+-- src
| +-- static
| +-- templates
| +-- app.py
+-- Dockerfile
+-- requirements.txt
I'm not a Docker expert, and the many conflicting instructions out there are somewhat frustrating. Other Stack Overflow questions did not solve my issue.
What am I doing wrong?

After some investigation and trial and error I found the following solution:
The command docker build -< Dockerfile was wrong; I don't know whether it is outdated or just incomplete.
However, the following command worked:
docker build --tag docker_example .
The dot at the end is very important; without it the build will not work. It tells Docker to use the current directory as the build context (the set of files that COPY instructions can reference), and by default Docker looks for a file named Dockerfile there.
If you have a "custom" Dockerfile name like "something.Dockerfile" you have to add the -f option followed by the name of your Dockerfile to build the right one.
Example:
docker build --tag docker_example -f something.Dockerfile .
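For context, the error above comes from a COPY instruction resolving against the build context. A Dockerfile matching the error message might look like this (a sketch only; the base image, paths, and CMD are assumptions, not from the question):

```dockerfile
# Hypothetical Dockerfile for the layout shown above
FROM python:3.10-slim
WORKDIR /app
# Resolved relative to the build context -- the "." in "docker build ."
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY src/ ./src/
CMD ["python", "src/app.py"]
```

When the Dockerfile is only piped in (docker build -< Dockerfile), no context directory is sent to the daemon, so there is no requirements.txt for COPY to find, which produces exactly the "failed to compute cache key" error.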

Related

Create-React-App "npm run build" results in partially populated build folder when run inside Docker container

I have an old project that I started by using Create React App to generate boilerplate. At some point down the line I "ejected" the project. Running npm run build successfully generates all of the expected build artifacts inside the /build folder when run on my dev machine and serving the build folder works perfectly.
Now I'm trying to Dockerize this app and have created the following Dockerfile
FROM node:16-alpine
WORKDIR /app
COPY package.json .
COPY package-lock.json .
ENV NODE_ENV production
RUN npm install
COPY . .
RUN npm run build
CMD [ "npx", "serve", "-l", "3000", "build" ]
I can build and run the image as a Docker container, and the build folder is served fine except that most of its contents are missing. The only two files present are favicon.ico and manifest.json. If I open a shell and list out the build folder it's the same. For some reason the webpack bundles, index.html, styles etc. are all missing.
I've been Googling around for hours but can't find anything. I can't even find out how to log the output of CRA's npm run build command to a file so that I can see the build log.
Can anyone give me a hint at where the problem might lie? Thanks

is docker-compose.yml not supported in AWS Elastic Beanstalk?

In my root directory, I have my docker-compose.yml.
$ ls
returns:
build cmd docker-compose.yml exp go.mod go.sum LICENSE media pkg README.md
In the same directory, I ran:
$ eb init -p docker infogrid
$ eb create infogridEnv
However, this gave me an error:
Instance deployment: Both 'Dockerfile' and 'Dockerrun.aws.json' are missing in your source bundle. Include at least one of them. The deployment failed.
The fact that it does not even mention docker-compose.yml as a missing file makes me think it does not support docker-compose. This contradicts the main documentation, which explicitly shows an example with docker-compose.yml.
It may be that you are using the older "Amazon AMI" platform. Your environment should use the newer "Docker running on 64bit Amazon Linux 2" platform; only then do you get docker-compose.yml support.
source https://docs.amazonaws.cn/en_us/elasticbeanstalk/latest/dg/docker-multicontainer-migration.html
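Re-initializing with the Amazon Linux 2 Docker platform might look like this (the exact platform string is an assumption; check eb platform list for the name available in your region):

```
$ eb init -p "Docker running on 64bit Amazon Linux 2" infogrid
$ eb create infogridEnv
```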

.dockerignore: some files are ignored, some are not

The Problem
Hi, I'm new to Docker. I want to ignore some files and directories using .dockerignore in my Django project. In the beginning, no files were ignored at all; then I searched Stack Overflow and found that it was because of the volumes in docker-compose.yml, so I commented them out. But now some of the files and directories are getting ignored while some are not (__pycache__, db.sqlite3). I went through a lot of questions but couldn't find any solution.
Project structure
-src
--coreapp
---migrations
---__init__.py
---__pycache__
---admin.py
---apps.py
---models.py
---tests.py
---views.py
--admin.json
--db.sqlite3
--manage.py
-.dockerignore
-.gitignore
-docker-compose.yml
-Dockerfile
-Procfile
-README.md
-requirements.txt
-runtime.txt
Dockerfile
FROM python:3.7
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt
COPY . /code/
WORKDIR /code/
EXPOSE 8000
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: bash -c "python src/manage.py runserver 0.0.0.0:8000"
    # volumes:
    #   - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
.dockerignore
# Byte-compiled / optimized / DLL files
__pycache__/
**/migrations
src/media
src/db.sqlite3
Procfile
.git
Commands
# build image
sudo docker-compose up --build
# to enter container
sudo docker exec -it [container id] bash
# to check ignored files inside the container
ls
Expected output
# Byte-compiled / optimized / DLL files
__pycache__/ # ignored
**/migrations # ignored
src/media # ignored
src/db.sqlite3 # ignored
Procfile # ignored
.git # ignored
Original Output
# Byte-compiled / optimized / DLL files
__pycache__/ # NOT ignored
**/migrations # ignored
src/media # ignored
src/db.sqlite3 # NOT ignored
Procfile # ignored
.git # ignored
Attempts
__pycache__/
**/__pycache__
**/*__pycache__
**/*__pycache__*
**/*__pycache__/
**/__pycache__/
*/db.sqlite3
db.sqlite3
The .dockerignore file only affects what files are copied into the image in the Dockerfile COPY line (technically, what files are included in the build context). It doesn't mean those files will never exist in the image or in a container, just that they're not included in the initial copy.
You should be able to verify this by looking at the docker build output. After each step there will be a line like ---> 0123456789ab; those hex numbers are valid Docker image IDs. Find the image created immediately after the COPY step and run
docker run --rm 0123456789ab ls
If you explore this way a little bit, you should see that the __pycache__ directory in the container is either absent entirely or different from the host.
Of the specific files you mention, the db.sqlite3 file is your actual application's database, and it will be created when you start the application; that's why you see it if you docker exec into a running container, but not when you docker run a clean container from the image. What is __pycache__? clarifies that the Python interpreter creates that directory on its own whenever it executes an import statement, so it's not surprising that that directory will also reappear on its own.
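To see the difference described here, compare a fresh container with the long-running one (the image name and container id are illustrative):

```
# fresh container from the built image: db.sqlite3 and __pycache__ are absent
sudo docker run --rm myproject_web ls -a /code/src

# the running container, after the app has executed: both reappear
sudo docker exec -it <container id> ls -a /code/src
```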
What exactly do you have in requirements.txt?
Is there some package in that file that creates this directory? The Docker CLI can only exclude files before sending the build context; once the build starts (base image setup, pip install, and the other Dockerfile steps), .dockerignore cannot keep files from being created inside the image.
If not then you can try
*/db* -> eliminate files starting with db one level below the root,
*sqlite3
As per https://docs.docker.com/engine/reference/builder/, matching is done using Go's filepath.Match rules. A preprocessing step removes leading and trailing whitespace and eliminates . and .. elements using Go's filepath.Clean. Lines that are blank after preprocessing are ignored.
In your attempts (*/db.sqlite3, db.sqlite3), maybe the . is being eliminated as described above, and hence the pattern fails to exclude the requested file from the build.

Dockerfile with copy command and relative path

Is there some way to use the COPY command with a relative path in a Dockerfile? I'm trying to use:
COPY ./../folder/*.csproj ./
Note: my folder structure (I'm running the Dockerfile in project-test, and the other files are in the project-console folder) is:
|- project-console
|- project-test
And I receive the following error:
ERROR: service 'app' failed to build: COPY failed: no source files were specified.
My goal is to have two projects in the same Docker image. I have a .NET Core console project and another with unit tests (NUnit), and I'm trying to run the unit tests in Docker.
UPDATE
It is possible to use multi-stage builds: https://docs.docker.com/develop/develop-images/multistage-build/
Or to use docker-compose with build: https://docs.docker.com/engine/reference/commandline/build/
Like:
services:
  worker:
    build:
      context: ./workers
      dockerfile: Dockerfile
Reference: Allow Dockerfile from outside build-context
You can try this way
$ cd project-console
$ docker build -f ../project-test/Dockerfile .
Update:
By using docker-compose
build:
  context: ../
  dockerfile: project-test/Dockerfile
../ will be set as the context; in your case it includes both project-console and project-test, so you can COPY project-console/*.csproj in the Dockerfile.
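Putting it together, with ../ as the context the Dockerfile inside project-test can copy from both folders. This is a sketch; the base image tag and restore step are assumptions:

```
# project-test/Dockerfile, built with context ../
FROM mcr.microsoft.com/dotnet/sdk:6.0
WORKDIR /app
# both sibling folders are inside the build context now
COPY project-console/*.csproj ./project-console/
COPY project-test/*.csproj ./project-test/
RUN dotnet restore ./project-test
```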

AWS CodeBuild - Unable to find DockerFile during build

Started playing with AWS CodeBuild.
The goal is to have a Docker image as the final result, with Node.js, hapi, and a sample app running inside.
Currently I have an issue with:
"unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /tmp/src049302811/src/Dockerfile: no such file or directory"
This appears during the BUILD stage.
Project details:
S3 bucket used as a source
ZIP file stored in respective S3 bucket contains buildspec.yml, package.json, sample *.js file and DockerFile.
aws/codebuild/docker:1.12.1 is used as a build environment.
When I build the image using Docker installed on my laptop there are no issues, so I can't understand which directory I need to specify to get rid of this error message.
Buildspec and DockerFile attached below.
Thanks for any comments.
buildspec.yml
version: 0.1
phases:
  pre_build:
    commands:
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)
  build:
    commands:
      - echo Build started on `date`
      - echo Building the Docker image...
      - docker build -t <CONTAINER_NAME> .
      - docker tag <CONTAINER_NAME>:latest <ID>.dkr.ecr.eu-west-1.amazonaws.com/<CONTAINER_NAME>:latest
  post_build:
    commands:
      - echo Build completed on `date`
      - echo Pushing the Docker image...
      - docker push <id>.eu-west-1.amazonaws.com/<image>:latest
DockerFile
FROM alpine:latest
RUN apk update && apk upgrade
RUN apk add nodejs
RUN rm -rf /var/cache/apk/*
COPY . /src
RUN cd /src; npm install hapi
EXPOSE 80
CMD ["node", "/src/server.js"]
OK, so the solution was simple.
The issue was related to the Dockerfile name.
The build was not accepting DockerFile (with a capital F; strangely, that worked locally), but Dockerfile (with a lower-case f) worked perfectly.
Can you validate that Dockerfile exists in the root of the directory? One way of doing this would be to run ls -altr as part of the pre-build phase in your buildspec (even before ecr login).
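That check could be added to the buildspec above like so (a sketch; only the pre_build phase is shown):

```
version: 0.1
phases:
  pre_build:
    commands:
      - ls -altr          # verify Dockerfile (exact casing) is in the source root
      - echo Logging in to Amazon ECR...
      - $(aws ecr get-login --region eu-west-1)
```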