I am having a problem running a Docker container after upgrading from NodeJS 8.2 to 9.1. This is the message I am getting.
I used the Dockerfile I found on Docker Hub but got an error saying it could not find package.json, so I commented it out and used the one I found on the NodeJS website.
Below is the Dockerfile:
FROM node:9.1.0
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ONBUILD ARG NODE_ENV
ONBUILD ENV NODE_ENV $NODE_ENV
ONBUILD COPY package*.json ./
ONBUILD RUN npm install && npm cache clean --force
ONBUILD COPY . /usr/src/app
CMD [ "npm", "start" ]
I would appreciate help from more experienced users.
Your docker run command syntax is wrong. Everything after the image name is used to override the command run in your container. So docker run myimage -d will try to run -d inside the container, while docker run -d myimage will run your container with the -d option to docker run (detached mode).
The Dockerfile you referenced is meant to be used as a parent image for easy dockerization of your application.
So to dockerize your Node.js application, you'd need to create a Dockerfile that uses the image built from said Dockerfile as its parent.
The ONBUILD instructions are executed whenever a new image is built with this particular image as its parent (i.e., named in a FROM instruction).
I've never used an image like this, but from the looks of it, it should be enough to reference the image in the FROM instruction and then provide NODE_ENV via a build arg.
The Dockerfile to add to your project:
FROM this_image:9.1
How to build your application image:
docker build -t IMAGE_NAME:TAG --build-arg NODE_ENV=production .
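Once the image is built, remember the argument-ordering rule from the top of this answer: options to docker run go before the image name, and anything after it overrides the container command. For example, using the placeholder name from the build command above:
docker run -d IMAGE_NAME:TAG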
I want to spin up a low-configuration containerized service. I build the image with:
docker build -t apache/druid_nano:0.20.2 -f Dockerfile .
And this is the Dockerfile:
FROM ubuntu:16.04
# Install Java JDK 8
RUN apt-get update \
 && apt-get install -y openjdk-8-jdk
RUN mkdir /app
WORKDIR /app
COPY apache-druid-0.20.2-bin.tar.gz /app
RUN tar xvzf apache-druid-0.20.2-bin.tar.gz
WORKDIR /app/apache-druid-0.20.2
EXPOSE <PORT_NUMBERS>
ENTRYPOINT ["/bin/start/start-nano-quickstart"]
When I start the container using the command docker run -d -p 8888:8888 apache/druid_nano:0.20.2, I get the error below:
/bin/start-nano-quickstart: no such file or directory
I removed the ENTRYPOINT instruction and built the image again, just to check whether the file exists in the bin directory inside the container. There is a file start-nano-quickstart under the bin directory inside the container.
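(Note: this kind of check is also possible without rebuilding, by overriding the entrypoint at run time; an illustrative command, using the tag from the build above:
docker run --rm --entrypoint ls apache/druid_nano:0.20.2 -la bin
Everything after the image name is passed as arguments to the overridden entrypoint, so this lists the bin directory under the WORKDIR.)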
Am I missing anything here? Please help.
I am using the Dockerfile below. How can I configure Redis in my Dockerfile?
I am also building with the command docker build - < Dockerfile, but this didn't work out.
If I run this command, the following error shows:
COPY failed: no source files were specified
FROM node:lts
RUN mkdir -p /app
WORKDIR /app
COPY package*.json /app
RUN yarn
COPY . /app
CMD ["yarn","run","start"]
One cannot use docker build - < Dockerfile to build an image that uses COPY instructions, because those instructions require the copied files to be present in the build context.
One must use docker build ., where . is the relative path to the build context.
Using docker build - < Dockerfile effectively means that the only thing in the build context is the Dockerfile. The files that one wants to copy into the docker image are not known to docker, because they are not included in the context.
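For example, running the build from the project directory (the image tag is illustrative):
docker build -t my-node-app .
This sends the whole directory, including package.json and the sources, to the Docker daemon as the build context, so the COPY instructions can find their source files.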
I have the following problem:
I am trying to build a Docker image with ROS 2, in which a code package is downloaded and then built with colcon build.
But when I try to run the last step, . install/setup.bash, it doesn't work.
I already tried putting it in a script and copying that in via the Dockerfile, but it didn't work either.
Any ideas?
Here is the Dockerfile:
FROM osrf/ros:dashing-desktop
WORKDIR /home
COPY mobilidad.sh .
RUN bash mobilidad.sh
ENV ROS2_WS cleanmyway/test_ws
RUN mkdir -p ${ROS2_WS}/src/demo_py
COPY ./ ${ROS2_WS}/src/demo_py
WORKDIR ${ROS2_WS}
SHELL ["/bin/bash", "-c"]
RUN colcon build
RUN . install/setup.bash
Note: mobilidad.sh is a script that downloads the code from GitHub; this part works fine.
I think I managed to find a solution by building the Dockerfile as follows:
FROM osrf/ros:dashing-desktop
WORKDIR /home
COPY mobilidad.sh .
RUN bash mobilidad.sh
ENV ROS2_WS cleanmyway/test_ws
RUN mkdir -p ${ROS2_WS}
WORKDIR ${ROS2_WS}
RUN colcon build
RUN echo "source install/setup.bash" >> /opt/ros/dashing/setup.bash
and when I run the ros2 command it works fine.
But anyway, thanks for the help. :)
Note: I'm not sure if this is the best way to do it, but it works for me.
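For reference, the reason RUN . install/setup.bash has no lasting effect is that each RUN line executes in its own shell, and that shell's environment is discarded when the layer is committed. A common alternative to appending to the distribution's setup.bash is to source the workspace in a custom entrypoint. A minimal sketch, assuming the workspace path from the Dockerfile above and a hypothetical script named entrypoint.sh:
#!/bin/bash
set -e
# source the ROS 2 distribution, then the workspace overlay
source /opt/ros/dashing/setup.bash
source /home/cleanmyway/test_ws/install/setup.bash
# hand control to whatever command the container was started with
exec "$@"
with the corresponding Dockerfile lines:
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["bash"]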
I'm building a Django app using Docker. The issue I am having is that my local filesystem is not synced to the Docker environment, so local changes have no effect until I rebuild.
I added a volume
- ".:/app:rw"
which syncs with my local filesystem, but the bundles that get built via webpack during the image build don't get included (because they aren't in my local filesystem).
My Dockerfile has this:
... setup stuff...
ENV NODE_PATH=$NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules \
PATH=$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
ENV PATH=/node_modules/.bin:$PATH
COPY package*.json /
RUN (cd / && npm install && rm -rf /tmp/*)
...pip install stuff...
COPY . /app
WORKDIR /app
RUN npm run build
RUN DJANGO_MODE=build python manage.py collectstatic --noinput
So I want to sync with my local filesystem so I can make changes and have them show up immediately, AND have my bundles and static assets present. The way I've been developing so far is to just comment out the .:/app:rw line in my docker-compose.yml, which allows all the assets and bundles to be present.
The solution that ended up working for me was to assign a volume to each directory I did not want synced with my local environment.
volumes:
- ".:/app/:rw"
- "/app/project_folder/static_source/bundles/"
- "/app/project_folder/bundle_tracker/"
- "/app/project_folder/static_source/static/"
Arguably there's probably a better way to do this, but this solution does work. The Dockerfile compiles the webpack bundles and collectstatic does its job, both within the container, and the last three lines above keep my local machine from overwriting them. The downside is that I still have to figure out a better solution for live recompilation of SCSS or JavaScript, but that's a job for another day.
You can mount a local folder into your Docker container. Just use the --mount option with the docker run command. In the following example, the target subdirectory of the current directory will be available in your container at /app.
docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Reference: https://docs.docker.com/storage/bind-mounts/
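Since the question uses docker-compose, the equivalent bind mount in a compose file (long syntax, which needs compose file format 3.2 or newer; the service name and paths mirror the docker run example) would look roughly like this:
version: "3.2"
services:
  devtest:
    image: nginx:latest
    volumes:
      - type: bind
        source: ./target
        target: /app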
I have a use case in which I am trying to build a Django-based REST API and then run continuous integration using Travis CI when changes are pushed to GitHub. I am also using Docker to build the image and docker-compose to scale my services. The problem is that I want to run pytest and flake8 when I push my changes to GitHub. Now, I have not added any tests yet, and hence the pytest command gives an exit status of 5.
To get around this I tried creating a script:
#!/bin/bash
pytest;
err=$? ;
if (( $err != 5 )) ;
then
exit $err;
fi
flake8 ;
But I cannot get docker-compose to run this. When I run the script using the command:
docker-compose run app sh -c "run_script.sh"
It gives the error message below:
sh: run_script.sh: not found
Below is my docker-compose.yml file:
version: "3"
services:
app:
build:
context: .
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
And below is the Dockerfile:
FROM python:3.7-alpine
MAINTAINER Subhayan Bhattacharya
ENV PYTHONUNBUFFERED 1
COPY Pipfile* /tmp/
RUN cd /tmp && pip install pipenv && pipenv lock --requirements > requirements.txt
RUN pip install -r /tmp/requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
USER user
This should be a simple issue, but I cannot figure out how to get around it.
Can someone please help me find the solution?
Your script isn't working because Alpine base images don't have GNU bash. Your script almost limits itself to the POSIX shell command language; if you restrict it to that, you can change the "shebang" line to #!/bin/sh.
#!/bin/sh
# ^^^ not bash
pytest # individual lines don't need to end with ;
err=$?
# use [ ... ] (test), not ((...))
if [ "$err" -ne 5 ] && [ "$err" -ne 0 ]; then
exit "$err"
fi
flake8
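Separately, sh reports "not found" for bare command names it cannot locate on $PATH. Assuming the script is copied into /app (the WORKDIR) and made executable (e.g. with RUN chmod +x run_script.sh in the Dockerfile), invoking it with an explicit path avoids that lookup:
docker-compose run app sh -c "./run_script.sh"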
In the context of a CI system, it is important to remove the volumes: line that mounts a local directory over your container's /app directory: having that line means you are not testing what's in your image at all, but instead a possibly-related code tree that's on the host system.
In practice I'd suggest running both of these tools in a non-Docker environment: it will be simpler to run them and collect their results. A style checker like flake8 in particular has very few dependencies on system packages or other containers being started, and ideally your unit tests can also run without hard-to-set-up context like a database container. I'd suggest a sequence like the following (a Travis CI sketch of it appears after the list):
1. Check out the source code.
2. Create a virtual environment and install its dependencies.
3. Run pytest, flake8, and similar test tools.
4. Then build a Docker image, without test-only tools.
5. Run the image with its assorted dependencies.
6. Run further tests based on network calls into the container.
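A minimal .travis.yml sketch of steps 2-4, assuming the Pipfile from the question lists pytest and flake8 as dev dependencies (Travis checks out the code automatically, the image tag is illustrative, and until tests exist the exit-status-5 wrapper script from the question could stand in for the plain pytest line):
language: python
python: "3.7"
install:
  - pip install pipenv
  - pipenv install --dev    # virtualenv plus project and dev dependencies
script:
  - pipenv run pytest       # unit tests
  - pipenv run flake8       # style checks
  - docker build -t myapp . # production image, without test-only tools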