I am still puzzled as to why the npm run build command does not produce the build folder when I build the Dockerfile on an Elastic Beanstalk instance. It's definitely not a permissions issue, since I can mkdir a new directory and touch a new file just fine. I even run ls in the Dockerfile, and I can confirm that the build folder isn't there.
Also, npm install works and I can see all the installed libraries in node_modules.
However, when I build the same Dockerfile locally, it does create a build folder, so it's definitely an environment issue. I read somewhere that npm install can sometimes time out on a t2.micro, so I even upgraded to a t2.small, but the issue persists.
Can someone help me figure out what is going on here?
Below is my Dockerfile:
FROM node:alpine as builder
WORKDIR '/app'
COPY package.json ./
RUN npm install
COPY ./ ./
RUN ls
RUN pwd
CMD ["npm", "run" ,"build"]
RUN mkdir varun
RUN touch var
RUN ls
FROM nginx
EXPOSE 80
RUN pwd
COPY --from=builder ./app/build /usr/share/nginx/html
Below are the Elastic Beanstalk logs:
i-0276e4b74ee15c98f Severe 29 minutes 18 -- -- -- -- -- -- -- -- -- -- 0.03 0.28 0.2 0.1 99.7 0.0
Application update failed at 2018-12-29T06:18:23Z with exit status 1 and error: Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed.
cat: Dockerrun.aws.json: No such file or directory
cat: Dockerrun.aws.json: No such file or directory
cat: Dockerrun.aws.json: No such file or directory
alpine: Pulling from library/node
7fc670963d22: Pull complete
Digest: sha256:d2180576a96698b0c7f0b00474c48f67a494333d9ecb57c675700395aeeb2c35
Status: Downloaded newer image for node:alpine
Successfully pulled node:alpine
Sending build context to Docker daemon 625.7kB
Step 1/15 : FROM node:alpine as builder
---> 9036ebdbc59d
Step 2/15 : WORKDIR '/app'
---> Running in e623a08307d5
Removing intermediate container e623a08307d5
---> b4e9fe3e4b82
Step 3/15 : COPY package.json ./
---> cb5e6a9b109b
Step 4/15 : RUN npm install
---> Running in 8a00eb1143a5
npm WARN
Removing intermediate container 8a00eb1143a5
---> c568ef0a4bc3
Step 5/15 : COPY ./ ./
---> cfb3e22fc373
Step 6/15 : RUN ls
---> Running in f6aad2a0f22e
Dockerfile
Dockerfile.dev
README.md
docker-compose.yml
node_modules
package-lock.json
package.json
public
src
Removing intermediate container f6aad2a0f22e
---> 016d1ded2f97
Step 7/15 : RUN pwd
---> Running in eaae644b1d96
/app
Removing intermediate container eaae644b1d96
---> 61285a5062ea
Step 8/15 : CMD ["npm", "run" ,"build"]
---> Running in 5cbca2213f4f
Removing intermediate container 5cbca2213f4f
---> 8566953eebaa
Step 9/15 : RUN mkdir varun
---> Running in a078760b6dcb
Removing intermediate container a078760b6dcb
---> 34c25b5aab32
Step 10/15 : RUN touch var
---> Running in d725dafc9409
Removing intermediate container d725dafc9409
---> 70195ffecb54
Step 11/15 : RUN ls
---> Running in b96bc198883c
Dockerfile
Dockerfile.dev
README.md
docker-compose.yml
node_modules
package-lock.json
package.json
public
src
var
varun
Removing intermediate container b96bc198883c
---> 1b205ffb5e3f
Step 12/15 : FROM nginx
latest: Pulling from library/nginx
bbdb1fbd4a86: Pull complete
Digest: sha256:304008857c8b73ed71fefde161dd336240e116ead1f756be5c199afe816bc448
Status: Downloaded newer image for nginx:latest
---> 7042885a156a
Step 13/15 : EXPOSE 80
---> Running in 412e17c44274
Removing intermediate container 412e17c44274
---> e1e1ea0c7dfb
Step 14/15 : RUN pwd
---> Running in 1bc298a11ef1
/
Removing intermediate container 1bc298a11ef1
---> 291575f13e2f
Step 15/15 : COPY --from=builder ./app/build /usr/share/nginx/html
COPY failed: stat /var/lib/docker/devicemapper/mnt/e2b112f1a046c00990aa6fc01e9fabc9e147420a214682a06637ef8cbcb9414a/rootfs/app/build: no such file or directory
Failed to build Docker image aws_beanstalk/staging-app, retrying...
Sending build context to Docker daemon 625.7kB
Oh my god... finally, after a full day of tinkering with everything on AWS, I figured it out. I was using CMD instead of RUN to execute npm run build. CMD doesn't actually run the command while the image is being built, so the build folder isn't there for my second stage to use.
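For anyone hitting the same thing, here is a minimal sketch of the corrected Dockerfile; the only real change is using RUN instead of CMD for the build step, so the build output exists in the builder stage when the second stage copies it:
FROM node:alpine as builder
WORKDIR /app
COPY package.json ./
RUN npm install
COPY ./ ./
# RUN executes npm run build at image-build time, so /app/build exists in this stage
RUN npm run build
FROM nginx
EXPOSE 80
# copy the compiled assets out of the builder stage
COPY --from=builder /app/build /usr/share/nginx/html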
Related
I am trying to upload a Django app to Docker Hub. On the local machine (Ubuntu 18.04) everything works fine, but on Docker Hub the build fails because the requirements.txt file cannot be found.
Local machine:
sudo docker-compose build --no-cache
Result (it's okay):
Step 5/7 : COPY . .
---> 5542d55caeae
Step 6/7 : RUN file="$(ls -1 )" && echo $file
---> Running in b85a55aa2640
Dockerfile db.sqlite3 hello_django manage.py requirements.txt venv
Removing intermediate container b85a55aa2640
---> 532e91546d41
Step 7/7 : RUN pip install -r requirements.txt
---> Running in e940ebf96023
Collecting Django==3.2.2....
But, Docker Hub:
Step 5/7 : COPY . .
---> 852fa937cb0a
Step 6/7 : RUN file="$(ls -1 )" && echo $file
---> Running in 281d9580d608
README.md app config docker-compose.yml
Removing intermediate container 281d9580d608
---> 99eaafb1a55d
Step 7/7 : RUN pip install -r requirements.txt
---> Running in d0e180d83772
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'
Removing intermediate container d0e180d83772
The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
app/Dockerfile
FROM python:3.8.3-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY . .
RUN file="$(ls -1 )" && echo $file
RUN pip install -r requirements.txt
docker-compose.yml
version: '3'
services:
  web:
    build:
      context: app
      dockerfile: Dockerfile
    volumes:
      - ./app/:/code/
    ports:
      - "8000:8000"
    env_file:
      - ./config/.env.dev
    command: python manage.py runserver 0.0.0.0:8000
Project structure: (see the GitHub repository linked below)
UPDATE:
Docker Hub is building from GitHub.
The requirements.txt file is in the GitHub repository (in the app folder), but for some reason during the build Docker Hub copies the files from the project root folder rather than the contents of the app folder.
GitHub:
https://github.com/sigalglebru/django-on-docker
The problem is that you need to tell Docker Hub where to find your build context.
When you run docker-compose build locally, docker-compose reads your docker-compose.yml file and knows to build inside the app directory, because you've explicitly set the build context:
build:
  context: app
  dockerfile: Dockerfile
When you build on Docker Hub, by default it will assume the build
context is the top level of your repository. If you set the path to
your Dockerfile to, e.g., app/Dockerfile, this is equivalent to
running:
docker build -f app/Dockerfile .
If you try that, you'll see it fail the same way. Rather than setting
the path to the Dockerfile, you need to set the build context to the
app directory. For example:
(In the Docker Hub automated build settings, look at the "Build Context" column and point it at the app directory.)
When configured correctly, your repository builds on Docker Hub without errors.
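For comparison, the local equivalent of that setting is to pass the app directory as the build context, in which case Docker picks up app/Dockerfile automatically; a rough sketch (the image tag here is just a placeholder):
# build context = app, so COPY . . copies the contents of app/, including requirements.txt
docker build -t django-on-docker ./app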
Thank you, I found a solution:
I just copied the files from ./app into the image and changed the context a little, but I still don't understand why it worked fine on the local machine.
Dockerfile:
FROM python:3.8.3-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
WORKDIR /code
COPY ./app .
RUN pip install -r requirements.txt
docker-compose.yml
version: "3.6"
services:
python:
restart: always
build:
context: .
dockerfile: docker/Dockerfile
expose:
- 8000
ports:
- 8000:8000
command: "python manage.py runserver 0.0.0.0:8000"
My project directory view is in this image (screenshot not reproduced here).
When I try to create an image for the Srv.OAuth project, keeping the Dockerfile in that project's main directory, I get the following error:
Skipping project "/Lib.Communication/Lib.Communication.csproj" because it was not found.
Skipping project "/Lib.Database/Lib.Database.csproj" because it was not found.
Dockerfile:
FROM microsoft/dotnet:2.1-sdk AS build-env
WORKDIR /app
COPY Libraries/Lib.Database.csproj ./
COPY *.csproj ./
RUN dotnet restore
COPY . ./
RUN dotnet publish -c Release -o out
FROM microsoft/dotnet:aspnetcore-runtime
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "Srv.OAuth.dll"]
Output of the execution:
D:\xxx\xyz\abc\Srv.OAuth>docker image build -f Dockerfile -t almsoauth .
Sending build context to Docker daemon 2.016MB
Step 1/10 : FROM microsoft/dotnet:2.1-sdk AS build-env
---> b23b0e275fcd
Step 2/10 : WORKDIR /app
---> Running in a0b396e6b4be
Removing intermediate container a0b396e6b4be
---> 43a54eff5619
Step 3/10 : COPY *.csproj ./
 ---> 0cb2208057f7
Step 4/10 : RUN dotnet restore
 ---> Running in a9ab75a03390
Skipping project "/Lib.Communication/Lib.Communication.csproj" because it was not found.
Skipping project "/Lib.Database/Lib.Database.csproj" because it was not found.
I have a basic Django project and I am trying to get it running locally through Docker. I have the Dockerfile, I built the image, and I ran it. The container is running, but my webpage shows an error as if it is not connecting to the server... Here is what I have:
Dockerfile:
FROM python:3
WORKDIR general
COPY requirements.txt ./
EXPOSE 8000
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Here is how I am building and running this project:
omars-mbp:split omarjandali$ docker build -t splitbeta/testing2 .
Sending build context to Docker daemon 223.7kB
Step 1/7 : FROM python:3
---> 79e1dc9af1c1
Step 2/7 : WORKDIR general
---> 04a6f8a7f92a
Removing intermediate container b2ffb485e485
Step 3/7 : COPY requirements.txt ./
---> 649d77ec499e
Step 4/7 : EXPOSE 8000
---> Running in 7d8d6fe8de1d
---> c328d885a5f1
Removing intermediate container 7d8d6fe8de1d
Step 5/7 : RUN pip install -r requirements.txt
---> Running in 1c9aca43dc14
Collecting Django==1.11.5 (from -r requirements.txt (line 1))
Downloading Django-1.11.5-py2.py3-none-any.whl (6.9MB)
Collecting gunicorn==19.6.0 (from -r requirements.txt (line 2))
Downloading gunicorn-19.6.0-py2.py3-none-any.whl (114kB)
Collecting pytz (from Django==1.11.5->-r requirements.txt (line 1))
Downloading pytz-2017.3-py2.py3-none-any.whl (511kB)
Installing collected packages: pytz, Django, gunicorn
Successfully installed Django-1.11.5 gunicorn-19.6.0 pytz-2017.3
---> 602e88557c8b
Removing intermediate container 1c9aca43dc14
Step 6/7 : COPY . .
---> 55cff629cb51
Step 7/7 : CMD python manage.py runserver 0.0.0.0:8000
---> Running in efd75f8fb602
---> 2cef664a626d
Removing intermediate container efd75f8fb602
Successfully built 2cef664a626d
Successfully tagged splitbeta/testing2:latest
omars-mbp:split omarjandali$ docker run -d spltibeta/testing2
Here is the project running:
omars-mbp:split omarjandali$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fc14f03a18b0 splitbeta/testing2 "python manage.py ..." 3 seconds ago Up 3 seconds 8000/tcp loving_volhard
The webpage gives the following error when it is supposed to display a template page:
This site can’t be reached
127.0.0.1 refused to connect.
I got it running yesterday, but it is not working any more... I don't know why; I didn't change anything.
I am logged into my Docker Hub account in my terminal.
It seems your docker run command doesn't publish port 8000. By default, Docker won't publish any container ports on the host system unless you tell it to explicitly. Try using the -p or --publish option of docker run:
docker run -d -p 8000:8000 spltibeta/testing2
Alternatively, you can use the -P or --publish-all option to publish all exposed ports of your container on your host system. This will assign a random port on the host.
docker run -d -P spltibeta/testing2
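Once the port is published, the PORTS column of docker ps should show the mapping instead of just the exposed port, roughly like this (container ID and name are taken from your earlier output and will differ in practice):
CONTAINER ID   IMAGE                COMMAND                  PORTS                    NAMES
fc14f03a18b0   splitbeta/testing2   "python manage.py ..."   0.0.0.0:8000->8000/tcp   loving_volhard
http://127.0.0.1:8000 should then be reachable from the host.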
I have a note app that I am building with Maven and a Dockerfile.
I want to copy the artifact note-1.0.war into a webapps folder on a locally linked volume. So far I have the following in the Dockerfile:
FROM maven:latest
MAINTAINER Sonam <emailme#gmail.com>
RUN apt-get update
WORKDIR /code
#Prepare by downloading dependencies
ADD pom.xml /code/pom.xml
RUN ["mvn", "dependency:resolve"]
RUN ["mvn", "verify"]
#Adding source, compile and package into a fat jar
ADD src /code/src
RUN ["mvn", "clean"]
#RUN ["mvn", "install"]
RUN ["mvn", "install", "-Dmaven.test.skip=true"]
RUN mkdir webapps
COPY note-1.0.war webapps
#COPY code/target/note-1.0.war webapps
Unfortunately, I keep seeing "no such file or directory" at the COPY statement. The following is the error from the build on Docker Hub:
...
---> bd555aecadbd
Removing intermediate container 69c09945f954
Step 11 : RUN mkdir webapps
---> Running in 3d114c40caee
---> 184903fa1041
Removing intermediate container 3d114c40caee
Step 12 : COPY note-1.0.war webapps
lstat note-1.0.war: no such file or directory
How can I copy the war file into the webapps folder that I created with
RUN mkdir webapps
Thanks.
The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
In your example, docker build is looking for note-1.0.war in the build context, i.e. the same directory as the Dockerfile.
If I understand your intention correctly, you want to copy a file that was built inside the image by a previous RUN instruction in the Dockerfile.
So you should use something like
RUN cp /code/target/note-1.0.war /code/webapps
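A minimal sketch of how the tail of the Dockerfile could look with that change, assuming mvn install leaves the artifact in /code/target (the default Maven output directory):
RUN mkdir webapps
# the war is produced inside the image by the mvn install step above,
# so copy it with RUN cp rather than COPY (COPY reads from the build context on the host)
RUN cp /code/target/note-1.0.war /code/webapps/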
I would like to install auditserver on a Node.js server. My auditserver ships as an rpm, and it works fine when I install it manually.
I wrote a Dockerfile like the one below.
FROM centos:centos6
# Enable EPEL for Node.js
RUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
# Install Node.js and npm
RUN yum install -y npm
# ADD rpm into container
ADD auditserver-1-1.x86_64.rpm /opt/
RUN mkdir -p /opt/auditserver
RUN cd /opt
RUN rpm -Uvh auditserver-1-1.x86_64.rpm
# cd to auditserver
RUN cd /opt/auditserver
# Install app dependencies
RUN npm install
# start auditserver
RUN node server
EXPOSE 8080
While building the Dockerfile I see the issue below.
root#CloudieBase:/tmp/sky-test# docker build -t sky-test .
Sending build context to Docker daemon 38.4 kB
Step 1 : FROM centos:centos6
---> 9c95139afb21
Step 2 : RUN rpm -Uvh http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
---> Using cache
---> fd5b1bb647fc
Step 3 : RUN yum install -y npm
---> Using cache
---> b7c2908fc583
Step 4 : ADD auditserver-1-1.x86_64.rpm /opt/
---> 26ace798f98c
Removing intermediate container 5ea6221797f5
Step 5 : RUN mkdir -p /opt/auditserver
---> Running in 8f7292364245
---> 9b340033f6b7
Removing intermediate container 8f7292364245
Step 6 : RUN cd /opt
---> Running in c7d20fd251f3
---> 0cdf90b6cb2e
Removing intermediate container c7d20fd251f3
Step 7 : RUN rpm -Uvh auditserver-1-1.x86_64.rpm
---> Running in 4473241e5077
error: open of auditserver-1-1.x86_64.rpm failed: No such file or directory
The command '/bin/sh -c rpm -Uvh auditserver-1-1.x86_64.rpm' returned a non-zero code: 1
root#CloudieBase:/tmp/sky-test#
Can anyone help me get this Dockerfile right? Thanks.
The problem is that you are not in the /opt directory when executing the rpm command (step 7). See this answer to find out why it happens. Quote:
Each time you RUN, you spawn a new container and therefore the pwd is '/'.
For how to fix it, see this question. To summarize: you can use the WORKDIR Dockerfile instruction, or change this part:
RUN cd /opt
RUN rpm -Uvh auditserver-1-1.x86_64.rpm
to this:
RUN cd /opt && rpm -Uvh auditserver-1-1.x86_64.rpm
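Or, with WORKDIR instead of the combined RUN, a sketch that keeps the rest of the Dockerfile unchanged:
# WORKDIR persists across the following instructions, unlike a plain RUN cd
WORKDIR /opt
RUN rpm -Uvh auditserver-1-1.x86_64.rpm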