How to access a Django app in a Docker container from another machine?

I am pretty new to Docker and Django. What I did was: I used PuTTY to SSH into a Linux server, created a folder in the root directory, and then started a new project using django-admin startproject.
Since I am using PuTTY for terminal access over SSH, I cannot open a browser on the Linux machine and visit 127.0.0.1:8000 to check whether Django's "Congratulations!" screen is visible.
So I assumed that the server was probably running after the runserver command. Then, using Docker, I built a container on the Linux machine in which I have exposed port 9000. I cannot access this container either, since I cannot open a browser on the Linux machine. Now, I have three questions:
1.) How can I access this Docker container (inside the Linux machine) from my Windows machine? By this I mean: if I open up, let's say, the Google Chrome browser on the Windows machine and enter some url:port, will I be able to see the "Congratulations!" screen in my browser on Windows?
2.) I am pretty confused about how the container's network port and IP work (I mean, how does the host or any other PC access this Docker container?). I have looked at a lot of documentation and YouTube videos, but I am still very confused. I know that to make your website/app accessible to the external world you need a domain name hosted in some cloud, which you have to pay for, so how can Docker do this for free? Might sound like a lame question, but please help me understand.
3.) What should my docker run command look like so that I can access the app from my Windows machine?
My dockerfile:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED=1
RUN mkdir /Django
WORKDIR /Django
ADD . /Django
RUN pip install -r requirements.txt
EXPOSE 9000
CMD python manage.py runserver 0.0.0.0:9000
I am using the following command to build:
docker build -t myproj .
Please help clarify my questions, guys. I'll be forever grateful :)
Thanks all!

When you run the container, you need a docker run -p option:
docker run -p 12345:9000 myproj
The second port number must match the port number the actual server process is listening on (in your case, the port argument to ./manage.py runserver). The first port number can be any port number that's not otherwise in use on the host system.
Then (subject to networking and firewall constraints) another system can reach the container using the host's IP address and the first port number, for example http://my-dev-system.internal.example.com:12345. If you're calling from the host itself, the two systems are the same, and in this special case you can use http://localhost:12345.
As an implementation detail the container does have its own IP address, but you never need to look it up or use it. (Among other problems, it is not accessible from other machines.) From another system's point of view, a Docker-based process is indistinguishable from a process running directly on the host. Docker doesn't address the problems of needing somewhere to host the application, coming up with a DNS name for the host, or other similar concerns.
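Putting this together with the Dockerfile above, a minimal end-to-end check might look like the following (a sketch; 203.0.113.5 and port 12345 are placeholders for your host's IP and chosen port, and depending on your settings you may also need to add the host's address to ALLOWED_HOSTS in settings.py):
docker build -t myproj .
docker run --rm -p 12345:9000 myproj
# from the Windows machine (or any other machine that can reach the host):
curl http://203.0.113.5:12345/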

Note that EXPOSE 9000 by itself does not make the port reachable from the outer world; it is essentially documentation, visible only to Docker and to readers of the Dockerfile. To reach the server you also need to publish the port with the -p option when you run the container. After doing so, go to a browser, navigate to <server_ip>:9000, and you will probably see the message.
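For instance (a sketch, keeping the container port from the Dockerfile above):
docker run -p 9000:9000 myproj
Then browse to http://<server_ip>:9000/ from another machine.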

Related

I cannot see running docker-compose containers on Ubuntu 20.04

I have a web app in 3 containers running on a Linux server (Ubuntu 20.04): a Django web app, an nginx web server, and a Postgres DB. When I run docker-compose ps it shows no containers and no error, only the headings, as if there were no containers at all, not even a crashed one.
I am sure that it is the right folder, as there is no other docker-compose.yml on this server.
It seems almost as if the app is not running there, except that it is accessible via browser and working well.
I tried all kinds of commands for showing containers or images using docker and docker-compose, with no result.
I tried restarting the Docker service: the app went offline for a moment and then came back online (I have restart: always set in the compose file). I also restarted the whole server, with the same result.
Even the script that makes the DB dump no longer sees the containers and has started to fail.
When I try to bring up the project by running docker-compose up, the webapp and db containers start, but the webserver does not, because its port is already taken.
Has anyone experienced this? I have worked with docker-compose for a while, but this has never happened to me before. I don't know what to do; I need to update the code of this application, and I don't want to lose data in the DB (I am also not able to make a dump or ssh into the container).
This app was working for years with the same configuration on another server running Ubuntu 18.04, so maybe it is a server-related problem.
Thanks.
It sounds like there is some fundamental problem going on. Have you tried simply running docker ps to see if your containers are running, or if anything is running at all?
If the containers are listed in the docker ps output, make sure you have the names correct in your docker-compose.yml.
If you don't see your containers with docker ps, then maybe they crashed immediately after starting (and are therefore no longer running).
I would have expected docker-compose ps to show something, even if your containers are crashing.
If these things don't help, please provide more details of your output from docker ps and/or docker-compose ps, and maybe the contents of your docker-compose.yml. You said "it doesn't show any container" when you run docker-compose ps - does it show anything at all (errors, blank lines, etc.)?
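A minimal diagnostic session might look like this (the container name is a placeholder):
docker ps -a                                      # all containers, including stopped ones
docker ps -a --format '{{.Names}}\t{{.Status}}'   # just names and statuses
docker-compose ps                                 # only containers belonging to this compose project
docker logs <container_name>                      # why did a given container exit?
Note that docker-compose ps only shows containers whose project label matches the current project (derived from the directory name or the -p flag), which is one reason it can come up empty while docker ps still shows the containers.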

Cannot access a development server on a server I'm SSH'd into

I am deploying a Django app from a CentOS server. When I run a python3.6 manage.py runserver 8000 command, it starts a development server with no problem, but I am not able to access the page from my local computer to test it.
So the steps I take are: I ssh into the server with ssh <user>@url.com and then run the dev server with the above command. I then go to the browser on my laptop, type url.com:8000, and get "Unable to connect".
I also have this problem when running my Apache server for production: I have no problem bringing the server up on the machine I'm SSH'd into, but I cannot access the webpage.
I know this is very little information to go on, but does this sound like a server-side issue at url.com? Should I be contacting the administrators about this, or is this possibly something on my end?
Maybe I need to configure the addresses in my Django app's settings.py?
You probably want to run it so it listens on any interface. From the documentation:
Note that the default IP address, 127.0.0.1, is not accessible from other machines on your network. To make your development server viewable to other machines on the network, use its own IP address (e.g. 192.168.2.1) or 0.0.0.0 or :: (with IPv6 enabled).
For example, you should start the server with
python3.6 manage.py runserver 0.0.0.0:8000
In general, it is probably not wise to keep such a thing running on the web, particularly with debug on. From the same documentation link:
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)
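If the connection then succeeds but Django responds with a "DisallowedHost" error, you may also need to allow the hostname in settings.py (a sketch; url.com stands in for your actual domain):
ALLOWED_HOSTS = ['url.com', 'localhost', '127.0.0.1']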

Unable to bring up docker project

I'm following this Docker tutorial, which creates a simple Docker-managed Django site, and when I try to run docker-compose up to launch my docker project, I get the ambiguous error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
The error suggests that the Docker daemon isn't running, but service docker status shows the Docker daemon is running.
If instead I run sudo docker-compose up, then it succeeds, but it chowns a lot of my local development files to the root user, which is easy enough to fix, but annoying.
Why does Docker require root access just to start a local Django development server? How do I fix this?
My versions:
Docker version 18.06.1-ce, build e68fc7a
docker-compose version 1.11.1, build 7c5d5e4
Ubuntu 16.04.5 LTS
If you can run any Docker command at all, you can trivially root the host:
docker run --rm -v /:/host busybox \
  cat /host/etc/shadow
Additionally, Docker containers frequently run as root within their own container space, which means that whatever parts of the host filesystem you choose to expose into them, they can make arbitrary changes as arbitrary user IDs. You can use a docker run -u option to pick a different user ID, but you can pick any user ID, even one that belongs to another user on a shared system.
It is very reasonable to use sudo as a way to get root privileges for things that need it, and this is a typical out-of-the-box Docker configuration.
At the end of the day the only real gate on this is the Unix permissions on the file /var/run/docker.sock. This is often mode 0660, owned by a dedicated docker group. If you don't mind your normal user being able to read and write arbitrary host files without much control at all, you can add yourself to that group. That's frequently appropriate for something like a developer laptop, but on anything like a production system it deserves some real consideration of its security implications.
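If you decide that trade-off is acceptable on a development machine, adding yourself to the group usually looks like this (assuming your distribution's packages created the docker group, as the standard ones do):
sudo usermod -aG docker "$USER"
newgrp docker    # or log out and back in for the group change to take effect
docker ps        # should now work without sudo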

How to configure an Elastic IP with a Django app in AWS?

I am building an app using Django on an EC2 Ubuntu instance, and I have associated an Elastic IP with the instance.
I have done the following steps:
1. First created an Ubuntu instance in the EC2 free tier.
2. Installed Python.
3. Installed pip.
4. Installed Django.
5. Created a Django project using django-admin startproject.
6. Ran the server using the command python manage.py runserver 0.0.0.0:80
7. Created an Elastic IP and associated it with the instance.
8. Configured the security group's inbound settings to allow HTTP on port 80 from 0.0.0.0/0.
9. Was able to reach my project from any browser.
But the problem is that when I close the PuTTY session in which I issued the runserver command, the Django project also stops; I did not stop it manually.
Please help me keep it running after the PuTTY session is closed as well.
Thanks,
Kripa Sharma
Take a look at this Answer
I highly recommend that you start using Elastic Beanstalk (with a Python instance) to take care of all these steps for you. It is very simple to set up, and there is no need to worry about any of the steps you listed.
You can use these instructions to see how to deploy a Django app in less than 5 minutes.
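For reference, the Elastic Beanstalk CLI flow is roughly the following (a sketch; the platform string and the myproject names are placeholder assumptions and vary by region and Python version):
pip install awsebcli
eb init -p python-3.8 myproject    # hypothetical project name
eb create myproject-env            # provisions the environment and deploys
eb deploy                          # redeploy after code changes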
The problem
You are trying to persist the debug server for a remotely deployed application.
You probably need to review the runserver command documentation. Here are the relevant parts:
django-admin runserver [addrport]
Starts a lightweight development Web server on the local machine. By default, the server runs on port 8000 on the IP address 127.0.0.1. You can pass in an IP address and port number explicitly.
...
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)
A webserver
Having skimmed the above docs, you may want to look at "How to deploy with WSGI" section, which gives a few recommendations for commonly used Web servers. My favorite, Gunicorn, includes a usage example:
$ pip install gunicorn
$ gunicorn myproject.wsgi
Having decided on and installed a web server, you'd need to "daemonize" it and expose it to the world.
The former is usually done by creating a service in your OS; for Ubuntu it would be either Upstart or systemd, depending on the version. The Gunicorn docs have examples for both.
The latter is usually achieved with an HTTP server/proxy such as nginx or Apache httpd. And again, Gunicorn has an example for us.
You can see why I like it so much ☺️
Epilogue
While it is technically possible to run the debug server as a service, or even in a terminal multiplexer such as GNU screen or tmux, that is not a recommended or stable long-term solution.
That said, these tools are very useful to know about, so read up on them and learn to use them; they will be invaluable in your toolset in the future, for example to avoid accidentally terminating a long-running command (such as a migration).
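To make the "daemonize" step concrete, a minimal systemd setup for Gunicorn might look like this (a sketch; the user, paths, and the myproject name are placeholder assumptions):
sudo tee /etc/systemd/system/gunicorn.service >/dev/null <<'EOF'
[Unit]
Description=gunicorn daemon for myproject
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myproject
ExecStart=/usr/local/bin/gunicorn --bind 0.0.0.0:8000 myproject.wsgi
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now gunicorn   # start now and on every future boot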

How to keep a Django app running in the background within Vagrant?

I have a headless Ubuntu 14.04 host server.
Using the root user, I vagrant up a VM that uses VirtualBox.
Inside this VM is a Django Python 3 app.
Every time I vagrant up and vagrant ssh into this VM, I need to run sudo service gunicorn start.
If I exit from the vagrant ssh session and then switch to another user, the app dies.
How do I keep this Django app running in the VM permanently?
If the host machine has to reboot for whatever reason, how can the Django app automatically run itself?
In summary:
How do I allow Vagrant and the Gunicorn instance inside the VM to keep running for a long time while I switch between users in the host OS?
Is there a way to automatically revive the Vagrant VM and the Gunicorn inside it whenever the host OS is rebooted?
Use:
sudo service gunicorn start &
The & makes the command run in a background process detached from your current command, so you get your prompt back immediately. Be aware, though, that a backgrounded job can still be sent SIGHUP when the terminal closes, so prefixing the command with nohup (or using the shell's disown) is the more reliable way to keep a process alive after you log out.
By the way, this is not Vagrant-specific; it works the same way in any Linux-like shell.
For your second question, you need to use something like supervisor to handle this for you.
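A minimal supervisor setup inside the VM might look like this (a sketch; the paths and the myproject name are placeholder assumptions):
sudo apt-get install supervisor
sudo tee /etc/supervisor/conf.d/gunicorn.conf >/dev/null <<'EOF'
[program:gunicorn]
command=/usr/local/bin/gunicorn --bind 0.0.0.0:8000 myproject.wsgi
directory=/home/vagrant/myproject
autostart=true
autorestart=true
EOF
sudo supervisorctl reread
sudo supervisorctl update    # picks up the new program and starts it
supervisor will restart Gunicorn if it crashes and start it when the VM boots, so the remaining piece is bringing the VM itself up on host reboot (for example, with a host-side boot script that runs vagrant up).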