Currently, I'm trying to access a simple Django application that I created in an Azure Virtual Machine. As the application is still simple, I only want to access the "The install worked successfully! Congratulations!" page from my local machine by visiting http://VM_IP:PORT/. I was able to do just that, but when I Dockerized the project and tried to access the running container from my local machine, it didn't work.
I've already configured the Azure portal so that the Virtual Machine listens on a specific port, in this case 8080 (so http://VM_IP:8080/). I'm quite new to Docker, so I'm assuming something is missing in the Dockerfile I created for the project.
Dockerfile
RUN mkdir /app
WORKDIR /app
# Add current directory code to working directory
ADD . /app/
# set default environment variables
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
# set project environment variables
# grab these via Python's os.environ
# these are 100% optional here
ENV PORT=8080
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
        tzdata \
        python3-setuptools \
        python3-pip \
        python3-dev \
        python3-venv \
        git \
    && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*
# install environment dependencies
RUN pip3 install --upgrade pip
RUN pip3 install pipenv
# Install project dependencies
RUN pipenv install --skip-lock --system --dev
EXPOSE 8888
CMD gunicorn simple_project.wsgi:application --bind 0.0.0.0:$PORT
I'm not sure what's happening. I'd appreciate it if someone could point out what I'm missing. Thanks in advance.
I suspect the problem may be that you're confusing the EXPOSE build-time instruction with the publish runtime flag. Without the latter, any containers on your VM would be inaccessible to the host machine.
Some background:
The EXPOSE instruction is best thought of as documentation; it has no effect on container networking. From the docs:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
Seems kind of odd at first glance, but it's for good reason: the image itself does not have the permissions to declare host port-mappings — that's up to the container runtime and whoever is operating it (you!).
The way to do this is by passing the --publish or -p flag to the docker run command, allowing you to define a mapping between the port open in the container network and a port on the host machine. So, for example, if I wanted to run an nginx container that can be accessed at port 8080 on my localhost, I'd run: docker run --rm -d -p 8080:80 nginx. The running container is then accessible at localhost:8080 on the host. Of course, you can also use this to expose container ports from one host to another. Without this, any networking configuration in your Dockerfile is executed in the context of the container network, and is basically inaccessible to the host.
TL;DR: you probably just need to publish your ports when you create and run the container on your VM: docker run -p {vm_host_port}:{container_port} {image_name}. Note that port mappings cannot be added or changed for existing containers; you'd have to destroy the container and recreate it.
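For example, with the Dockerfile above (gunicorn bound inside the container to port 8080 via $PORT), a rough build-and-run sequence on the VM could look like the following; the simple_project image tag is just an illustrative name:
docker build -t simple_project .
# publish VM port 8080 to container port 8080, where gunicorn is listening
docker run -d -p 8080:8080 simple_project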
Side note: while docker run is quick and easy, it can quickly become unmanageable as your project grows and you add in environment variables, attach volumes, define inter-container dependencies, etc. An alternative with native support is docker-compose, which lets you define the runtime configuration of your container (or containers) declaratively in YAML — basically picking up where the Dockerfile leaves off. And once it's set up, you can just run docker-compose up, instead of having to type out lengthy docker commands, and waste time debugging when you forgot to include a flag, etc. Just like we use Dockerfiles to have a declarative, version-controlled description of how to build our image, I like to think of docker-compose as one way to create a declarative, version-controlled description for how to run and orchestrate our image(s).
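As a rough sketch, and assuming the same 8080 mapping as above (the service name web is illustrative), the equivalent docker-compose.yml could look something like:
version: "3"
services:
  web:
    build: .
    ports:
      - "8080:8080"   # host:container, same as docker run -p 8080:8080
After that, docker-compose up -d builds the image if needed and starts the container with the port already published.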
Hope this helps!
Related
I wrote a containerized web app and built it with Docker. Although the container appears to be running, the app is not accessible by typing its link in the browser; it is only available through the local access link. The same container always shows "404 No Such Service" when pushed to AWS.
Here is the Dockerfile:
FROM python:3.8-alpine
EXPOSE 2328
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY main.py /app
COPY Templates /app/templates
COPY blockSchedule.txt /app
CMD [ "python", "./main.py", "production" ]
It would be helpful if you included the commands that you ran and the output that you observed.
I suspect that you're not publishing the container's port (possibly 2328) on the host.
If there is a Python server and it's running on 2328 in the container, you can use the following command to publish (forward) the port to the host:
docker run \
--interactive --tty --rm \
--publish=2328:2328 \
your-container-image:tag
NOTE Replace your-container-image:tag with your container's image name and tag.
NOTE The --publish flag has the following syntax [HOST-PORT]:[CONTAINER-PORT]. The container port is generally fixed by the port used by the container but you can use whichever host port is available.
Using the command above, you should be able to browse the app from the host at e.g. localhost:2328.
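If it still isn't reachable, it may help to confirm that the mapping actually took effect and that the server answers locally; for example:
# show the published ports of running containers
docker ps --format 'table {{.Names}}\t{{.Ports}}'
# test the app from the host
curl http://localhost:2328/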
I am using Cloud Run for my blog and a work site, and I really love it.
I have deployed Python APIs and Vue/Nuxt apps by containerising them according to the Google tutorials.
One thing I don't understand is why there is no need to have NGINX in front.
# Use the official lightweight Node.js 12 image.
# https://hub.docker.com/_/node
FROM node:12-slim
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install production dependencies.
RUN npm install --only=production
# Copy local code to the container image.
COPY . ./
# Build the application.
RUN npm run build
# Run the web service on container startup.
CMD [ "npm", "start" ]
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.7-slim
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN apt-get update && apt-get install -y \
libpq-dev \
gcc
RUN pip install -r requirements.txt
# Run the web service on container startup. Here we use the gunicorn
# webserver with four worker processes.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn -b :$PORT --workers=4 main:server
All this works without me calling Nginx ever.
But I've read a lot of articles where people bundle NGINX in their container, so I would like some clarity: are there any downsides to what I am doing?
One considerable advantage of using NGINX or a static file server is the size of the container image. When serving SPAs (without SSR), all you need is to get the bundled files to the client. There's no need to bundle build dependencies or runtime that's needed to compile the application.
Your first image copies the whole source code with its dependencies into the image, while all you need (if not running SSR) are the compiled files. NGINX gives you a "static site server" that serves only your build, and it's a lightweight solution.
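A rough sketch of that multi-stage approach for the Node image, assuming the build writes its output to dist/ (adjust the path and the NGINX configuration to your project; on Cloud Run, NGINX would also need to listen on the port given by $PORT):
# Build stage: install dependencies and compile the bundle
FROM node:12-slim AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
RUN npm run build
# Serve stage: ship only the compiled static files
FROM nginx:alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
EXPOSE 80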
Regarding Python, unless you can bundle it somehow, it looks fine to run it without NGINX.
I'm building a Django app using Docker. The issue I am having is that my local filesystem is not synced to the Docker environment, so local changes have no effect until I rebuild.
I added a volume
- ".:/app:rw"
which syncs with my local filesystem, but the bundles that webpack builds during the image build don't show up (because they aren't in my local filesystem).
My Dockerfile has this:
... setup stuff...
ENV NODE_PATH=$NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules \
PATH=$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
ENV PATH=/node_modules/.bin:$PATH
COPY package*.json /
RUN (cd / && npm install && rm -rf /tmp/*)
...pip install stuff...
COPY . /app
WORKDIR /app
RUN npm run build
RUN DJANGO_MODE=build python manage.py collectstatic --noinput
So I want to sync with my local filesystem so I can make changes and have them show up immediately AND have my bundles and static assets present. The way I've been developing so far is to just comment out the .:/app:rw line in my docker-compose.yml, which allows all the assets and bundles to be present.
The solution that ended up working for me was to assign a volume to each directory I didn't want synced with my local environment.
volumes:
- ".:/app/:rw"
- "/app/project_folder/static_source/bundles/"
- "/app/project_folder/bundle_tracker/"
- "/app/project_folder/static_source/static/"
Arguably there's a better way to do this, but this solution does work. The Dockerfile runs the webpack build and collectstatic does its job within the container, and the last three volume lines above keep my local machine from overwriting the results. The downside is that I still have to figure out a better solution for live recompilation of SCSS or JavaScript, but that's a job for another day.
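If you ever want to check which paths are backed by the bind mount and which by the anonymous volumes, inspecting the container's mounts can help; the container name here is illustrative:
# list all mounts (bind mounts and anonymous volumes) of the container
docker inspect --format '{{ json .Mounts }}' my_web_container | python -m json.tool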
You can mount a local folder into your Docker container. Just use the --mount option with the docker run command. In the following example, the target subdirectory of the current directory will be available in your container at /app.
docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
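For reference, the shorter -v syntax achieves the same bind mount in one flag:
docker run -d -it --name devtest -v "$(pwd)"/target:/app nginx:latest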
Reference: https://docs.docker.com/storage/bind-mounts/
I’m looking to understand how to properly structure my .gitlab-ci.yml and Dockerfile such that I can build a C++ application into a Docker container.
I’m struggling with where the actual compilation and link of the C++ application should take place within the CI workflow.
What I’ve done:
My current approach is to use Docker-in-Docker with a private GitLab Docker registry.
My gitlab-ci.yml uses a dind Docker image service I created based on the docker:19.03.1-dind image, but it includes my certificates to talk securely to my private GitLab Docker registry.
I also have a custom base image referenced by my gitlab-ci.yml, based on docker:19.03.1, that includes what I need for building, e.g. cmake, build-base, mariadb-dev, etc.
I have my build script added to the gitlab-ci.yml to build the application: cmake … && cmake --build .
The Dockerfile then copies the final binary produced by my build step.
Having done all of this it doesn’t feel quite right to me and I’m wondering if I’m missing the intent. I’ve tried to find a C++ example online to follow as example but have been unsuccessful.
What I’m not fully understanding is the role of each player in the docker-in-docker setup: docker image, dind image, and finally the container I’m producing…
What I’d like to know…
Who should perform the build and contain the build environment, the base image specified in my .gitlab-ci.yml or my Dockerfile?
If I build with the Dockerfile, how do I get the source contents into the Docker container? Do I copy the /builds dir? Should I mount it?
Where should the work be divided between .gitlab-ci.yml and the Dockerfile?
A reference to a working example of a C++ Docker application built with Docker-in-Docker GitLab CI.
.gitlab-ci.yml
image: $CI_REGISTRY/building-blocks/dev-mysql-cpp:latest
#image: docker:19.03.1
services:
  - name: $CI_REGISTRY/building-blocks/my-dind:latest
    alias: docker
stages:
  - build
  - release
variables:
  # Use TLS https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#tls-enabled
  DOCKER_TLS_CERTDIR: "/certs"
  CONTAINER_TEST_IMAGE: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  CONTAINER_RELEASE_IMAGE: $CI_REGISTRY_IMAGE:latest
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
build:
  stage: build
  script:
    - mkdir build
Both approaches are equally valid. If you look at other SO questions, one thing you'll probably notice is that Java/Docker images almost universally build a jar file on their host and then COPY it into an image, but Go/Docker images tend to use a multi-stage Dockerfile starting from sources.
If you already have a fairly mature build system and your developers already have a very consistent setup, it makes sense to do more work in the CI environment (in your .gitlab-ci.yml file). Build your application the same way you already do, then COPY it into a minimal Docker image. This approach is also helpful if you need to ship both Docker and non-Docker artifacts. If you have a make dist style tar file and want to get a Docker image out of it, you could use a very straightforward Dockerfile like
FROM ubuntu
RUN apt-get update && apt-get install ...
# unpacks the tar file into /usr/local
ADD dist/myapp.tar.gz /usr/local
EXPOSE 12345
# runs /usr/local/bin/myapp
CMD ["myapp"]
On the other hand, if your developers have a variety of desktop environments and you're really trying to standardize things, and you only need to ship the Docker image, it could make sense to centralize most things in the Dockerfile. This would have the advantage that every developer could run the exact build sequence themselves locally, rather than depending on the CI system to try simple changes. Something built around GNU Autoconf might look more like
FROM ubuntu AS build
RUN apt-get update \
&& apt-get install --no-install-recommends --assume-yes \
build-essential \
lib...-dev
WORKDIR /app
COPY . .
RUN ./configure --prefix=/usr/local \
&& make \
&& make install
FROM ubuntu
RUN apt-get update \
&& apt-get install --no-install-recommends --assume-yes \
lib...
COPY --from=build /usr/local /usr/local
CMD ["myapp"]
If you do the primary build in a Dockerfile, you need to COPY the source code in. Volume mounts don't work at this point in the sequence. CI systems should avoid bind-mounting source code into a container in any case: you want to run tests against the actual artifact you've built, and not a hybrid of a built Docker image but with all of its source code replaced.
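For the first approach (compile in CI, then package the binary), the build job in .gitlab-ci.yml might look roughly like this, reusing the variables already defined above; the paths are illustrative:
build:
  stage: build
  script:
    - mkdir build && cd build
    - cmake .. && cmake --build .
    - cd ..
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
The Dockerfile referenced by docker build then only needs to COPY the compiled binary out of build/ into a minimal runtime image.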
A Meteor app is running on the local machine. Then it gets built (appDir$ meteor build .) and the resulting myApp.tar.gz gets copied to the AWS cloud. Then a script runs on the cloud to put the app into a Docker container, following some Dockerfile commands.
Every time a change needs to be made, the above has to be repeated. Is there a better way to reduce the effort of re-building/copying/Dockerizing?
Is it possible, by using a volume and docker-compose, to just sync the changes from the local development machine to the AWS EC2 volume directory? How?
# Dockerfile on AWS EC2
FROM lambdalinux/baseimage-amzn:2016.09-000
RUN curl --silent --location https://rpm.nodesource.com/setup_4.x | bash -
RUN yum install -y tar nodejs
ADD ./myApp.tar.gz /opt/
EXPOSE 80
ENV ROOT_URL http://example.com
ENV MONGO_URL "mongodb://username:pass..."
ENV PORT 80
# Install nodejs modules
WORKDIR /opt/bundle/
RUN npm install fibers
RUN npm install underscore
RUN npm install source-map-support
RUN npm install semver
# Start the app
CMD node ./main.js
There is a command called rsync that will do a smart sync of a whole directory structure; if you unpack the build locally, you can then rsync it up to the server.
It can use either file dates or checksums to work out what has changed, and will make the process quicker. Minified files will probably change every time, but certainly many assets won't change every time.
I would set it up with a mirror of your production directory, sync the files into there, do some (automated) sanity checks first, and then switch the new version into place. If it doesn't work, you can switch the old version back. There is a little work required to get this set up, but it will make deployment faster and easier.
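A rough example of such a sync, assuming the bundle was unpacked locally into bundle/ and the server-side path is adjusted to your setup:
# push only changed files to a staging directory on the server
rsync -az --delete bundle/ user@your-ec2-host:/opt/myApp-staging/
# then switch the staging directory into place on the server and restart the container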