Dockerfile copying war to local linked volume - dockerfile

I have a note app that I am building with a Dockerfile in a Maven project.
I want to copy the artifact note-1.0.war into a folder such as webapps on a locally linked volume. So far I have the following Dockerfile:
FROM maven:latest
MAINTAINER Sonam <emailme#gmail.com>
RUN apt-get update
WORKDIR /code
#Prepare by downloading dependencies
ADD pom.xml /code/pom.xml
RUN ["mvn", "dependency:resolve"]
RUN ["mvn", "verify"]
#Adding source, compile and package into a fat jar
ADD src /code/src
RUN ["mvn", "clean"]
#RUN ["mvn", "install"]
RUN ["mvn", "install", "-Dmaven.test.skip=true"]
RUN mkdir webapps
COPY note-1.0.war webapps
#COPY code/target/note-1.0.war webapps
Unfortunately, I keep seeing "no such file or directory" at the COPY instruction. The following is the error from a build on Docker Hub:
...
---> bd555aecadbd
Removing intermediate container 69c09945f954
Step 11 : RUN mkdir webapps
---> Running in 3d114c40caee
---> 184903fa1041
Removing intermediate container 3d114c40caee
Step 12 : COPY note-1.0.war webapps
lstat note-1.0.war: no such file or directory
How can I copy the war file into the "webapps" folder that I created with
RUN mkdir webapps
Thanks

The COPY instruction copies new files or directories from <src> and adds them to the filesystem of the container at the path <dest>.
In your example, docker build is looking for note-1.0.war in the same directory as the Dockerfile (i.e. in the build context), not inside the image.
If I understand your intention, you want to copy a file that already exists inside the image because it was produced by a previous RUN instruction in the Dockerfile.
So you should use something like
RUN cp /code/target/note-1.0.war /code/webapps
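For example, the end of the Dockerfile could look like this (a sketch; it assumes mvn install leaves the artifact at /code/target/note-1.0.war, as the commented-out COPY line suggests):
RUN ["mvn", "install", "-Dmaven.test.skip=true"]
RUN mkdir -p /code/webapps
# copy the artifact that the previous RUN produced inside the image
RUN cp /code/target/note-1.0.war /code/webapps/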

Related

How do I maintain a folder structure in dockerfile

I'm trying to move my NestJS app into a container, but I've run into some problems. I use gRPC in my project, and my proto files are stored in a protos folder at the same level as the src folder. My file structure is like this:
-project folder
--protos
---...proto files
--src
---...other codes
In my Dockerfile I copy the protos folder; however, when I run docker-compose up, it says it cannot find my proto files. If I just run npm run start, it runs normally. Here is my Dockerfile:
FROM node:14 AS builder
# Create app directory
WORKDIR /app
# A wildcard is used to ensure both package.json AND package-lock.json are copied
COPY package*.json ./
COPY prisma ./prisma/
COPY protos ./protos/
# Install app dependencies
RUN npm install
COPY . .
RUN npm run build
FROM node:14
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package*.json ./
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/protos ./protos
EXPOSE 5273
CMD [ "npm", "run", "start:dev" ]
Am I missing something here? Any advice would be appreciated.

Docker image creation using nodejs with redis

I am using the Dockerfile below. How can I configure Redis in my Dockerfile?
I am also using the build command docker build - < Dockerfile, but that didn't work out.
When I run this command, the following error shows up:
COPY failed: no source files were specified
FROM node:lts
RUN mkdir -p /app
WORKDIR /app
COPY package*.json /app
RUN yarn
COPY . /app
CMD ["yarn","run","start"]
One cannot use docker build - < Dockerfile to build an image that uses COPY instructions, because those instructions require those files to be present in the build context.
One must use docker build ., where . is the relative path to the build context.
Using docker build - < Dockerfile effectively means that the only thing in the build context is the Dockerfile. The files that one wants to copy into the docker image are not known to docker, because they are not included in the context.
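For example, assuming the Dockerfile sits in the project root next to package.json, one would run the build from that directory (the path and the image tag my-node-app are just illustrative names):
cd /path/to/project        # the directory containing the Dockerfile, package.json, and sources
docker build -t my-node-app .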

Google Cloud Run: COPY fails when changing source folder from ./ to build

$ gcloud builds submit --tag gcr.io/projectname/testserver
// ... works fine until the COPY step:
Step 6/7 : COPY build ./
COPY failed: stat /var/lib/docker/tmp/docker-builder653325957/build: no such file or directory
ERROR
ERROR: build step 0 "gcr.io/cloud-builders/docker" failed: exit status 1
That build folder listed above, /var/lib/docker/tmp/docker-builder653325957/build, is not a local folder. Does Cloud Builder create a temp folder in that format?
How do I get it to copy my local build folder?
I also tried COPY ./build ./, but the CLI output was the same.
Dockerfile below.
FROM node:12-slim
# Create app folder
WORKDIR /usr/src/app
# Install app deps. Copy the lock file
COPY package*.json ./
RUN npm install
ENV SCOPES=removed \
SHOPIFY_API_KEY=removed \
SHOPIFY_API_SECRET=removed \
CLIENT_APP_URL=removed
COPY build ./
CMD ["node", "server.js"]
The gcloud command uses the .gitignore and .gcloudignore files to determine which files and directories to include with the Docker build. If your build directory is listed in either of these files, it won't be available to copy into your container image.
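For instance, a minimal .gcloudignore that keeps node_modules out of the upload while still letting the build folder through might look like this (a sketch; note that once a .gcloudignore file exists, gcloud uses it instead of falling back to .gitignore):
# .gcloudignore
.git
.gitignore
node_modules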

CodeDeploy hooks running scripts in the agent installation folder

So, I'm setting up my first application that uses CodeDeploy (EC2 + S3), and I'm having a very hard time figuring out how to run the scripts after installation.
I defined an AfterInstall hook in the AppSpec file referring to my bash script in the project directory. When the commands in the script ran, I got errors stating that the files could not be found, so I put an ls command before everything else and checked the logs.
My script is running in the CodeDeploy agent folder. There are many files there that I accidentally created while testing, but I was expecting them to be in my project root folder.
--Root
----init.sh
----requirements.txt
----server.py
appspec.yml
version: 0.0
os: linux
files:
  - source: ./
    destination: /home/ubuntu/myapp
    runas: ubuntu
hooks:
  AfterInstall:
    - location: init.sh
      timeout: 600
init.sh
#!/bin/bash
ls
sudo apt install python3-pip
pip3 install -r ./requirements.txt
python3 ./server.py
So when ls is executed, it doesn't list the files in my project root directory. I also tried ${PWD} instead of ./, and that didn't work either. The agent copies the script file to its own folder and runs it there.
Refer to this https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structure-hooks.html
This is written at the end of the above document
The location of scripts you specify in the 'hooks' section is relative
to the root of the application revision bundle. In the preceding
example, a file named RunResourceTests.sh is in a directory named
Scripts. The Scripts directory is at the root level of the bundle.
But apparently that refers only to the paths in the appspec file.
Could someone help? Is that correct? Do I really have to hard-code absolute paths in the script file?
Yes, that's correct. The script doesn't execute in the destination folder, as you might expect. You need to hard-code a reference to the destination directory /home/ubuntu/myapp to resolve file paths in lifecycle scripts.
Use cd to change the directory first:
cd /home/ubuntu/myapp
ls
sudo apt install python3-pip
pip3 install -r ./requirements.txt
python3 ./server.py

npm run build does not create build directory on elasticbeanstalk

I am still puzzled as to why the npm run build command on an Elastic Beanstalk instance does not produce the build folder when I build the Dockerfile. It's definitely not a permissions issue, as I can mkdir a new directory and touch a new file. I even list the directory contents in the Dockerfile and can confirm that the build folder isn't there.
Also, npm install works, and I can see all the installed libraries there.
However, when I build the same Dockerfile locally, I can see that it creates a build folder, so it's definitely an environment issue. I read somewhere that npm install can sometimes time out on a t2.micro, so I even upgraded to a t2.small. The issue still persists.
Can someone help me figure out what is going on here?
Below is my Dockerfile.
FROM node:alpine as builder
WORKDIR '/app'
COPY package.json ./
RUN npm install
COPY ./ ./
RUN ls
RUN pwd
CMD ["npm", "run" ,"build"]
RUN mkdir varun
RUN touch var
RUN ls
FROM nginx
EXPOSE 80
RUN pwd
COPY --from=builder ./app/build /usr/share/nginx/html
Below are the elasticbeanstalk logs
i-0276e4b74ee15c98f Severe 29 minutes 18 -- -- -- -- -- -- -- -- -- -- 0.03 0.28 0.2 0.1 99.7 0.0
Application update failed at 2018-12-29T06:18:23Z with exit status 1 and error: Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed.
cat: Dockerrun.aws.json: No such file or directory
cat: Dockerrun.aws.json: No such file or directory
cat: Dockerrun.aws.json: No such file or directory
alpine: Pulling from library/node
7fc670963d22: Pull complete
Digest: sha256:d2180576a96698b0c7f0b00474c48f67a494333d9ecb57c675700395aeeb2c35
Status: Downloaded newer image for node:alpine
Successfully pulled node:alpine
Sending build context to Docker daemon 625.7kB
Step 1/15 : FROM node:alpine as builder
---> 9036ebdbc59d
Step 2/15 : WORKDIR '/app'
---> Running in e623a08307d5
Removing intermediate container e623a08307d5
---> b4e9fe3e4b82
Step 3/15 : COPY package.json ./
---> cb5e6a9b109b
Step 4/15 : RUN npm install
---> Running in 8a00eb1143a5
npm WARN
Removing intermediate container 8a00eb1143a5
---> c568ef0a4bc3
Step 5/15 : COPY ./ ./
---> cfb3e22fc373
Step 6/15 : RUN ls
---> Running in f6aad2a0f22e
Dockerfile
Dockerfile.dev
README.md
docker-compose.yml
node_modules
package-lock.json
package.json
public
src
Removing intermediate container f6aad2a0f22e
---> 016d1ded2f97
Step 7/15 : RUN pwd
---> Running in eaae644b1d96
/app
Removing intermediate container eaae644b1d96
---> 61285a5062ea
Step 8/15 : CMD ["npm", "run" ,"build"]
---> Running in 5cbca2213f4f
Removing intermediate container 5cbca2213f4f
---> 8566953eebaa
Step 9/15 : RUN mkdir varun
---> Running in a078760b6dcb
Removing intermediate container a078760b6dcb
---> 34c25b5aab32
Step 10/15 : RUN touch var
---> Running in d725dafc9409
Removing intermediate container d725dafc9409
---> 70195ffecb54
Step 11/15 : RUN ls
---> Running in b96bc198883c
Dockerfile
Dockerfile.dev
README.md
docker-compose.yml
node_modules
package-lock.json
package.json
public
src
var
varun
Removing intermediate container b96bc198883c
---> 1b205ffb5e3f
Step 12/15 : FROM nginx
latest: Pulling from library/nginx
bbdb1fbd4a86: Pull complete
Digest: sha256:304008857c8b73ed71fefde161dd336240e116ead1f756be5c199afe816bc448
Status: Downloaded newer image for nginx:latest
---> 7042885a156a
Step 13/15 : EXPOSE 80
---> Running in 412e17c44274
Removing intermediate container 412e17c44274
---> e1e1ea0c7dfb
Step 14/15 : RUN pwd
---> Running in 1bc298a11ef1
/
Removing intermediate container 1bc298a11ef1
---> 291575f13e2f
Step 15/15 : COPY --from=builder ./app/build /usr/share/nginx/html
COPY failed: stat /var/lib/docker/devicemapper/mnt/e2b112f1a046c00990aa6fc01e9fabc9e147420a214682a06637ef8cbcb9414a/rootfs/app/build: no such file or directory
Failed to build Docker image aws_beanstalk/staging-app, retrying...
Sending build context to Docker daemon 625.7kB
Oh my god... Finally, after a full day of tinkering with everything on AWS, I figured it out. I was using CMD instead of RUN to execute npm run build. CMD doesn't actually run the command while the Dockerfile is being built, so the build folder isn't actually there for my second image to use.
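In other words, turning that CMD into a RUN makes the build output exist at image-build time, so the second stage has something to copy. A trimmed sketch of the corrected Dockerfile:
FROM node:alpine as builder
WORKDIR '/app'
COPY package.json ./
RUN npm install
COPY ./ ./
# RUN executes during the build, so /app/build exists in this stage
RUN npm run build

FROM nginx
EXPOSE 80
COPY --from=builder /app/build /usr/share/nginx/html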