I wrote a containerized web app and built it with Docker. The container appears to be running, but the app is not reachable by typing its URL in the browser; it is only available through the local access link. The same container always shows "404 No Such Service" when pushed to AWS.
Here is the Dockerfile:
FROM python:3.8-alpine
EXPOSE 2328
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY main.py /app
COPY Templates /app/templates
COPY blockSchedule.txt /app
CMD [ "python", "./main.py", "production" ]
It would be helpful if you included the commands that you ran and the output that you observed.
I suspect that you're not publishing the container's port (possibly 2328) on the host.
If the Python server is running on port 2328 inside the container, you can use the following command to publish (forward) the port to the host:
docker run \
--interactive --tty --rm \
--publish=2328:2328 \
your-container-image:tag
NOTE Replace your-container-image:tag with your container's image name and tag.
NOTE The --publish flag uses the syntax [HOST-PORT]:[CONTAINER-PORT]. The container port is generally fixed by the port the application listens on, but you can use any available host port.
Using the command above, you should be able to browse to the container from the host at e.g. localhost:2328
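To confirm the mapping took effect (assuming the host port 2328 from the command above), you can list the published ports and hit the app from the host:
docker ps --format "table {{.Names}}\t{{.Ports}}"
curl http://localhost:2328/
If the PORTS column is empty, the container was started without --publish and needs to be recreated with it.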
I am using cloud run for my blog and a work site and I really love it.
I have deployed Python APIs and Vue/Nuxt apps by containerizing them according to the Google tutorials.
One thing I don't understand is why there is no need to have NGINX in front.
# Use the official lightweight Node.js 12 image.
# https://hub.docker.com/_/node
FROM node:12-slim
# Create and change to the app directory.
WORKDIR /usr/src/app
# Copy application dependency manifests to the container image.
# A wildcard is used to ensure both package.json AND package-lock.json are copied.
# Copying this separately prevents re-running npm install on every code change.
COPY package*.json ./
# Install production dependencies.
RUN npm install --only=production
# Copy local code to the container image.
COPY . ./
# Build the application, then run the web service on container startup.
RUN npm run build
CMD [ "npm", "start" ]
# Use the official lightweight Python image.
# https://hub.docker.com/_/python
FROM python:3.7-slim
# Copy local code to the container image.
ENV APP_HOME /app
WORKDIR $APP_HOME
COPY . ./
# Install production dependencies.
RUN apt-get update && apt-get install -y \
libpq-dev \
gcc
RUN pip install -r requirements.txt
# Run the web service on container startup. Here we use the gunicorn
# web server with four worker processes.
# For environments with multiple CPU cores, increase the number of workers
# to be equal to the cores available.
CMD exec gunicorn -b :$PORT --workers=4 main:server
All this works without me calling Nginx ever.
But I read a lot of articles where people bundle NGINX in their container, so I would like some clarity. Are there any downsides to what I am doing?
One considerable advantage of using NGINX or a static file server is the size of the container image. When serving SPAs (without SSR), all you need is to get the bundled files to the client; there's no need to ship the build dependencies or the runtime needed to compile the application.
Your first image copies the whole source tree, including dependencies, into the image, while all you need (if not running SSR) are the compiled files. NGINX gives you a lightweight "static site server" that serves only your build output.
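As a rough sketch of that approach (the node:12-slim base, the npm run build step, and the dist output directory are assumptions; adjust them to your project), a multi-stage build ships only the compiled bundle behind NGINX:
# Build stage: install dependencies and compile the SPA bundle.
FROM node:12-slim AS build
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . ./
RUN npm run build
# Serve stage: only NGINX and the static files.
FROM nginx:alpine
COPY --from=build /usr/src/app/dist /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
The final image contains neither Node.js nor node_modules, which is where the size advantage comes from.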
Regarding Python, unless you can bundle it somehow, it looks OK to run it without NGINX.
So I have my Web Application deployed onto AWS. All my services and clients are deployed using Fargate, so I build a Docker Image and push that up.
Dockerfile
# build environment
FROM node:9.6.1 as builder
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
RUN npm install --silent
RUN npm install react-scripts@1.1.1 -g --silent
COPY . /usr/src/app
RUN npm run build
# production environment
FROM nginx:1.13.9-alpine
COPY --from=builder /usr/src/app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I also created a DNS record on AWS using Route 53 that I pointed at my Load Balancer, which is connected to my client application. These have an SSL certificate attached.
Example DNS
*.test.co.uk
test.co.uk
When I navigate to the login page of the application, everything works fine, I can navigate around the application. If I try to navigate straight to /dashboard, I receive a 404 Not Found NGINX error.
Example
www.test.co.uk (WORKS FINE)
www.test.co.uk/dashboard (404 NOT FOUND)
Is there something I need to configure, either in the Dockerfile or AWS that will allow users to directly navigate to any path?
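One common way to handle this is an NGINX fallback to index.html, so that deep links like /dashboard are resolved by the client-side router instead of being looked up as files on disk. A minimal config sketch (the file name nginx.conf and its COPY destination are assumptions):
server {
    listen 80;
    root /usr/share/nginx/html;
    index index.html;
    location / {
        # Serve the requested file if it exists, otherwise fall back to
        # index.html so the SPA router can handle paths like /dashboard.
        try_files $uri $uri/ /index.html;
    }
}
In the Dockerfile above, this could be added with COPY nginx.conf /etc/nginx/conf.d/default.conf. With the fallback in place, www.test.co.uk/dashboard serves index.html and the client-side router takes over; this particular 404 comes from NGINX, not from AWS.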
Currently, I'm trying to access a simple Django application that I created on an Azure Virtual Machine. As the application is still simple, I only want to access the "The install worked successfully! Congratulations!" page from my local machine by going to http://VM_IP:PORT/. I was able to do just that, but when I tried to Dockerize the project and access the running container from my local machine, it didn't work.
I've already done some setup in the Azure portal so that the Virtual Machine is able to listen on a specific port; in this case it's 8080 (so http://VM_IP:8080/). I'm quite new to Docker, so I'm assuming there is something missing in the Dockerfile I've created for the project.
Dockerfile
# Base image (assumption: the original FROM line was not shown; any Debian/Ubuntu base with apt works here)
FROM ubuntu:18.04
RUN mkdir /app
WORKDIR /app
# Add current directory code to working directory
ADD . /app/
# set default environment variables
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
# set project environment variables
# grab these via Python's os.environ
# these are 100% optional here
ENV PORT=8080
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
tzdata \
python3-setuptools \
python3-pip \
python3-dev \
python3-venv \
git \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# install environment dependencies
RUN pip3 install --upgrade pip
RUN pip3 install pipenv
# Install project dependencies
RUN pipenv install --skip-lock --system --dev
EXPOSE 8888
CMD gunicorn simple_project.wsgi:application --bind 0.0.0.0:$PORT
I'm not sure what's happening. It would be much appreciated if someone could point out what I'm missing. Thanks in advance.
I suspect the problem may be that you're confusing the EXPOSE build-time instruction with the publish runtime flag. Without the latter, any containers on your VM would be inaccessible to the host machine.
Some background:
The EXPOSE instruction is best thought of as documentation; it has no effect on container networking. From the docs:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
Seems kind of odd at first glance, but it's for good reason: the image itself does not have the permissions to declare host port-mappings — that's up to the container runtime and whoever is operating it (you!).
The way to do this is by passing the --publish or -p flag to the docker run command, allowing you to define a mapping between the port open in the container network and a port on the host machine. So, for example, if I wanted to run an nginx container that can be accessed at port 8080 on my localhost, I'd run: docker run --rm -d -p 8080:80 nginx. The running container is then accessible at localhost:8080 on the host. Of course, you can also use this to expose container ports from one host to another. Without this, any networking configuration in your Dockerfile is executed in the context of the container network, and is basically inaccessible to the host.
TL;DR: you probably just need to publish your ports when you create and run the container on your VM: docker run -p {vm_host_port}:{container_port} {image_name}. Note that port mappings cannot be added or changed for existing containers; you'd have to destroy the container and recreate it.
Side note: while docker run is quick and easy, it can quickly become unmanageable as your project grows and you add in environment variables, attach volumes, define inter-container dependencies, etc. An alternative with native support is docker-compose, which lets you define the runtime configuration of your container (or containers) declaratively in YAML — basically picking up where the Dockerfile leaves off. And once it's set up, you can just run docker-compose up, instead of having to type out lengthy docker commands, and waste time debugging when you forgot to include a flag, etc. Just like we use Dockerfiles to have a declarative, version-controlled description of how to build our image, I like to think of docker-compose as one way to create a declarative, version-controlled description for how to run and orchestrate our image(s).
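To illustrate, a minimal docker-compose.yml sketch for the Django app above (the service name web and the 8080:8080 mapping mirror the question's setup; everything else is an assumption):
version: "3.8"
services:
  web:
    build: .
    ports:
      - "8080:8080"   # HOST:CONTAINER, equivalent to docker run -p 8080:8080
    environment:
      - PORT=8080
Then docker-compose up --build builds the image and starts the container with the port mapping applied, replacing the longer docker run invocation.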
Hope this helps!
I have a simple Flask application (it just shows "Hello world") that I would like to deploy on AWS Elastic Beanstalk. Multiple tutorials show deployment with nginx and gunicorn.
1) I don't understand why we need to use nginx; gunicorn is already a web server that replaces Flask's built-in web server.
2) Tutorials show how to build two Docker containers: one for Flask and gunicorn, and another for nginx. Why do I need two containers? Can I package everything in one? With two containers I cannot use Single Container Docker; I need to use Multicontainer Docker.
Any thoughts?
Usually in this trio, nginx is used as a reverse proxy.
It is possible to package Flask + gunicorn + nginx in the same Docker container.
For example:
FROM python:3.6.4
# Software version management
ENV NGINX_VERSION=1.13.8-1~jessie
ENV GUNICORN_VERSION=19.7.1
ENV GEVENT_VERSION=1.2.2
# Environment setting
ENV APP_ENVIRONMENT production
# Flask demo application
COPY ./app /app
RUN pip install -r /app/requirements.txt
# System packages installation
RUN echo "deb http://nginx.org/packages/mainline/debian/ jessie nginx" >> /etc/apt/sources.list
RUN wget https://nginx.org/keys/nginx_signing.key -O - | apt-key add -
RUN apt-get update && apt-get install -y nginx=$NGINX_VERSION \
    && rm -rf /var/lib/apt/lists/*
# Nginx configuration
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d/nginx.conf
# Gunicorn installation
RUN pip install gunicorn==$GUNICORN_VERSION gevent==$GEVENT_VERSION
# Gunicorn default configuration
COPY gunicorn.config.py /app/gunicorn.config.py
WORKDIR /app
EXPOSE 80 443
CMD ["nginx", "-g", "daemon off;"]
I'm trying to put my new ASP.NET website (MVC 5) on an AWS server. For this I used EC2 and created a Linux virtual machine. I also built my Docker image and now I'm trying to run it with:
sudo docker run -t -d -p 80:5004 myapp
But each time I just get a random-looking string like this:
940e7abfe315d32cc8f5cfeb0c1f13750377fe091aa43b5b7ba4
When I try to check whether my container is running with:
sudo docker ps
no information is shown...
For information, when I run sudo docker images, I can see that the image for my application was created.
My Dockerfile contains:
FROM microsoft/aspnet
COPY . /app
WORKDIR /app
RUN ["dnu", "restore"]
EXPOSE 5004
ENTRYPOINT ["dnx", "-p", "project.json", "kestrel"]
-d means run detached. The string you see is the container ID.
If you want to get information about the container, you can use that container ID:
docker inspect <container id>
docker logs <container id>
If you'd like the container to run in the foreground and print its logs to the terminal, remove the -d.
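Also, docker ps only lists running containers, so an empty listing usually means the container started and then exited immediately. To see exited containers too (the image name myapp is taken from the docker run command above):
docker ps --all --filter "ancestor=myapp"
The STATUS column will show something like "Exited (1) ..." if the process crashed on startup, and docker logs with that container ID will show why.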