I want to dockerize my Django project, but I am running into a problem: the database doesn't exist. I run Docker on WSL2 and use PostgreSQL.
TL;DR: Try adding - DATABASE_HOST=db on line 28 of your docker-compose.yml.
Even though you are running all the containers on your host, they do not share localhost or 127.0.0.1. Docker creates its own network, and every container has its own IP address(es) and network interface(s).
When using Docker Compose, you can use the service name (in this case db or web) to reach a container. You can also use host.docker.internal to reach the actual host.
In your case, Django is trying to connect to a database on the web container, but there is no database running there.
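For reference, here is a minimal docker-compose.yml sketch of that setup. Your actual file isn't shown, so the image tag, credentials, and variable names are all assumptions; the point is only that DATABASE_HOST is set to the db service name:

version: "3.8"
services:
  db:
    image: postgres:13            # assumed version
    environment:
      - POSTGRES_DB=mydb          # placeholder credentials
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=mypassword
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    environment:
      - DATABASE_HOST=db          # the service name, not localhost
      - DATABASE_PORT=5432
    depends_on:
      - db

Your settings.py then needs to read DATABASE_HOST into DATABASES["default"]["HOST"]; the variable name is an assumption, so match whatever your settings actually expect.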
Related
I am pretty new to Docker and Django. What I did was PuTTY into a Linux server, create a folder in the root directory, and start a new project with django-admin startproject.
Since I am using PuTTY for SSH terminal access, I cannot open a browser on the Linux machine and visit 127.0.0.1:8000 to see whether Django's "Congratulations!" screen is visible.
So I assumed the server was running after the runserver command. Then, using Docker, I built a container on the Linux machine where I have exposed port 9000. I also cannot access this container, since I cannot open a browser on the Linux machine. I have three questions:
1.) How do I access this Docker container (inside the Linux machine) from my Windows machine? That is, if I open, say, the Google Chrome browser on Windows and enter some url:port, will I see the "Congratulations!" screen in my browser on Windows?
2.) I am pretty confused about how the container's network port and IP work (I mean, how does the host, or any other PC, access this Docker container?). I have looked at a lot of documentation and YouTube videos but I am still very confused. I know that to make your website/app accessible to the external world you need a domain name hosted in some cloud, which you pay for, so how can Docker do this for free? Might sound like a lame question, but please help me understand.
3.) What should my docker run command look like so I can access the container from my Windows machine?
My Dockerfile:
FROM python:3.6-slim
ENV PYTHONUNBUFFERED=1
RUN mkdir /Django
WORKDIR /Django
ADD . /Django
RUN pip install -r requirements.txt
EXPOSE 9000
CMD python manage.py runserver 0.0.0.0:9000
I am using the following command to build:
docker build -t myproj .
Please help clarify my questions, guys. I'll be forever grateful :)
Thanks all!
When you run the container, you need a docker run -p option:
docker run -p 12345:9000 myproj
The second port number must match the port number the actual server process is listening on (in your case, the port argument to ./manage.py runserver). The first port number can be any port number that's not otherwise in use on the host system.
Then (subject to networking and firewall constraints) another system can reach the container using the host's IP address and the first port number, e.g. http://my-dev-system.internal.example.com:12345. If you're calling from the host directly, the two systems are the same, and in this special case you can use http://localhost:12345.
As an implementation detail, the container happens to have its own IP address, but you never need to look it up or use it. (Among other problems, it is not accessible from other machines.) From another system's point of view, a Docker-based process is indistinguishable from a process running directly on the host. Docker doesn't solve the problems of needing somewhere to host the application, coming up with a DNS name for the host, or other similar concerns.
Note that EXPOSE 9000 by itself does not make the port reachable from the outer world; it only documents which port the container listens on. To reach the server from outside, publish the port when you run the container, e.g. docker run -p 9000:9000 myproj, then go to a browser, navigate to <server_ip>:9000, and you should see the page.
We are using the Superset docker-compose setup as per the installation guide. However, once Docker crashes or the machine is restarted, we need to re-configure the dashboards. I am not sure how to configure a persistent volume so that it can continue working after a restart.
By default the Superset docker-compose file has a PostgreSQL db service whose data is mounted as a volume; it holds all of Superset's data. If, instead of the default db service, you use an external database instance, you won't lose your data even when docker-compose restarts.
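A minimal sketch of pointing Superset's metadata database at an external instance, assuming a superset_config.py that the containers pick up (the hostname and credentials are placeholders):

# superset_config.py (assumed to be mounted into the Superset containers)
# Store Superset's metadata in an external PostgreSQL instance instead of
# the compose-managed db service, so restarts don't lose dashboards.
SQLALCHEMY_DATABASE_URI = (
    "postgresql+psycopg2://superset_user:secret@db.example.internal:5432/superset"
)

SQLALCHEMY_DATABASE_URI is the standard Superset setting for its metadata store; the connection string itself is a placeholder.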
I am building an app using Django on EC2 (Ubuntu), and I have associated an Elastic IP with my instance.
I have done the following steps:
1. First created an Ubuntu instance in the EC2 free tier.
2. Installed Python.
3. Installed pip.
4. Installed Django.
5. Created a Django project using django-admin startproject.
6. Ran the server using this command: python manage.py runserver 0.0.0.0:80
7. Created an Elastic IP and associated it with the instance.
8. Configured the security group's inbound settings to allow HTTP on port 80 from 0.0.0.0/0.
9. Able to reach my project from any browser.
But the problem is that when I close the PuTTY session in which I issued the runserver command, the Django project also stops; I did not stop it manually.
Please help me keep it running even after I close the PuTTY session.
Thanks,
Kripa Sharma
Take a look at this answer.
I highly recommend that you start using Elastic Beanstalk (with a Python instance) to take care of all these steps for you. It is very simple to set up, and there is no need to worry about any of the steps you listed.
You can use these instructions to see how to deploy a Django app in less than 5 minutes.
The problem
You are trying to persist the debug server for a remotely deployed application.
You probably need to review the runserver command documentation. Here are the relevant parts:
django-admin runserver [addrport]
Starts a lightweight development Web server on the local machine. By default, the server runs on port 8000 on the IP address 127.0.0.1. You can pass in an IP address and port number explicitly.
...
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that’s how it’s gonna stay. We’re in the business of making Web frameworks, not Web servers, so improving this server to be able to handle a production environment is outside the scope of Django.)
A web server
Having skimmed the above docs, you may want to look at the "How to deploy with WSGI" section, which gives a few recommendations for commonly used web servers. My favorite, Gunicorn, includes a usage example:
$ pip install gunicorn
$ gunicorn myproject.wsgi
Having decided on and installed a web server, you'd need to "daemonize" it and expose it to the world.
The former is usually done by creating a service on your OS; for Ubuntu it would be either Upstart or systemd, depending on the version. The Gunicorn docs have examples for both; see the sketch below.
The latter is usually achieved with an HTTP server/proxy such as nginx or Apache httpd. And again, Gunicorn has an example for us.
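To illustrate both steps, here is a minimal systemd unit in the spirit of the Gunicorn docs' examples; the user, paths, and project name myproject are all assumptions:

# /etc/systemd/system/gunicorn.service (sketch; adjust user and paths)
[Unit]
Description=gunicorn daemon for myproject
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/myproject
ExecStart=/home/ubuntu/myproject/venv/bin/gunicorn --bind 127.0.0.1:8000 myproject.wsgi
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it with sudo systemctl enable --now gunicorn, and then put an nginx server block in front of it:

# sketch; the domain name is a placeholder
server {
    listen 80;
    server_name example.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}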
You can see why I like it so much ☺️
Epilogue
While it is technically possible to run the debug server as a service, or even in a terminal multiplexer such as GNU screen or tmux, that is not a recommended or stable long-term solution.
That said, these tools are very useful to know about, so read up on them and learn to use them; they will be invaluable in your toolset in the future, for example to avoid accidentally terminating a long-running command (such as a migration).
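For instance, a typical tmux session for a long-running command looks like this (the session name is arbitrary):

tmux new -s migrate              # start a named session
python manage.py migrate         # run the long command inside it
# detach with Ctrl-b d; the command keeps running
tmux attach -t migrate           # re-attach later to check on it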
I am quite computer-illiterate, but I have managed to use the Django framework on my own machine. I have had an account on Amazon Web Services (AWS) for some time, but it appeared rather complex to set up and use, so I put it off for a while. Then I decided to give it a try, and it was not as hard as I first thought to load an AMI and connect to the server with PuTTY. But since I was already using Bitnami's Django Stack, I decided to take a look at their hosting offer (which builds on AWS). Since they appeared to offer "one-click deployment", I set up a new server through their interface. But it seems the "one-click deployment" promise applies to the server itself: there does not seem to be any interface for deploying Django projects through their site. Having used PuTTY already, and having added WinSCP to my machine, I can access the server and load my Django code onto it. But then I am lost. The documentation seems a bit thin (look here).
The crux of this is the following: can anyone make this part of the process more understandable? I.e., how do I deploy a Django project on a Linux server with Apache/mod_WSGI?
The other question is: I want to use Postgres. Am I free to install this on the server? Should I opt for EBS for this, or what is the downside of not having EBS?
I hope I am not too unworthy of your attention, thanks!
how to deploy a Django project on a Linux server with Apache/mod_WSGI
The Bitnami AMI already comes with all of this configured. Once it is installed, try going to the EC2 public URL on the default port 8000 and you will see the demo Django project set up there. You can add your own project once you have logged into the machine via PuTTY; check the /home/bitnami/ directory for the demo project. Copy your project over and configure your database.
The other question is: I want to use Postgres. Am I free to install this on the server?
Postgres and MySQL are already installed; configure them the same way you would on your local machine. Then, in your project, run ./manage.py runserver 0.0.0.0:9000, since port 8000 is already running another application.
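For the Apache/mod_WSGI part specifically, a virtual host for your own project would look roughly like this; all paths and the domain are assumptions, and on a Bitnami stack the Apache config lives under /opt/bitnami/apache2/conf rather than the stock locations:

# sketch of a mod_wsgi virtual host for a project in /home/bitnami/myproject
<VirtualHost *:80>
    ServerName example.com

    WSGIDaemonProcess myproject python-path=/home/bitnami/myproject
    WSGIProcessGroup myproject
    WSGIScriptAlias / /home/bitnami/myproject/myproject/wsgi.py

    <Directory /home/bitnami/myproject/myproject>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>
</VirtualHost>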
I have a new website built on Django and Python 2.6, which I've deployed to the cloud (buzzword compliant, and the Amazon micro EC2 instance is free!).
Here are my detailed notes: https://docs.google.com/document/d/1qcZ_SqxNcFlGKNyp-CXqcFxXXsKs26Avv3mytXGCedA/edit?hl=en_US
As this is a new site (and wanting to play with the latest and greatest) I used Nginx and Gunicorn on top of Supervisor.
All software installed from trunk using YUM / easy_install.
My database is SQLite (for now; not sure where to go next, but that is not the question). Also on the todo list: virtualenv + pip.
So far so good.
My code is in SVN. I wrote a simple fabfile to deploy: it checks out the latest code and restarts Gunicorn via Supervisor. I hooked my DNS name to an Elastic IP.
It works.
My question is, how do I update the site without a disruption of service? Users of the site get 404s / 500s when I run my little update script.
Is there a way to do this without adding another server (price is key)?
I would love to have a staging system (on a different port?) and a seamless switch between Staging and Production. On the same (free) server. Via Fabric.
How do I do that? Is it the same Nginx running both sites? Can I upgrade Staging without hurting Production? What would the fabfile look like? What would the directory tree look like?
Thanks!
Tal.
Related:
Seamless deployment in Rails
Nginx allows you to set up failover for your reverse proxies: you can put one Gunicorn instance as the primary, and as long as that version is running, nginx will never look at the failover.
If you configure your site so that your new version runs in the failover instance, you just need to write your fabfile to update the failover instance with the new version of the site and then, when ready, turn off the primary instance. Nginx will seamlessly fail over to the second instance, and bam, you are running the new version with no downtime.
You can then update the primary version and turn it back on, and your primary is live again. At this point you can keep the failover instance running just in case, or turn it off. A rough fabfile for this flow is sketched below.
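Here is what such a fabfile could look like, as a sketch only: the host, paths, and Supervisor program names (gunicorn_primary, gunicorn_backup) are all placeholders, and it assumes the classic Fabric 1.x API:

# fabfile.py (sketch)
from fabric.api import env, run

env.hosts = ["my-ec2-host"]    # placeholder host

def deploy_backup():
    # update the failover copy of the site from SVN and restart it
    run("svn update /srv/site-backup")
    run("supervisorctl restart gunicorn_backup")

def switch_to_backup():
    # stop the primary; nginx fails over to the backup instance
    run("supervisorctl stop gunicorn_primary")

def promote_primary():
    # update the primary and bring it back as the live instance
    run("svn update /srv/site-primary")
    run("supervisorctl restart gunicorn_primary")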
Some things to consider. You have to be careful with databases: if you are using SQLite, make sure both Gunicorn instances can access the SQLite file.
If you have a normal database this is less of a problem; you just need to make sure you apply any database migrations the new version needs before you switch to it.
If the changes are backwards compatible, it isn't a big deal. If they aren't, be careful: you could break the old version of the site before you switch over to the new version.
To make things easier, I would run the versions in different virtual environments.
If you use supervisord to control Gunicorn, you can use the supervisorctl commands to reload/restart whichever instance you want to deploy without affecting the other one, for example as shown below.
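For example (the program names are assumed to match your supervisord config):

supervisorctl restart gunicorn_backup   # redeploy the failover copy
supervisorctl status                    # confirm the primary was untouched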
Hope that helps
Here is an example of an nginx config (not a full config file; the unimportant parts are removed).
This assumes the primary Gunicorn instance is running on port 9005 and the other is running on port 9006.
upstream service-backend {
    server localhost:9005;          # primary
    server localhost:9006 backup;   # only used when primary is down
}

server {
    listen 80;
    root /opt/htdocs;
    server_name localhost;

    access_log /var/logs/nginx/access.log;
    error_log /var/logs/nginx/error.log;

    location / {
        proxy_pass http://service-backend;
    }
}
Sounds like you need to figure out how to tell Gunicorn to restart gracefully. It seems all you have to do is send a HUP signal to the Gunicorn master process to tell it to reload the app; the Gunicorn docs explain how to do it.
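For instance, with a conventional PID file (the path is an assumption; use whatever your Gunicorn/Supervisor setup writes):

kill -HUP $(cat /var/run/gunicorn.pid)   # gracefully reload the workers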