Should I use Docker in order to be able to run ChromeDriver on Azure Web App Services in a Django server? - django

Recently I started a Django server on Azure Web App Service. Now I want to add "ChromeDriver" for web scraping, and I have noticed that this requires installing some additional Linux packages (not Python) on the machine. The problem is that they get erased on every deployment. Does that mean I should switch to Docker?

A container works, but you can also try to pull down the additional packages in a custom startup file, without having to touch the machine by hand after every deployment:
https://learn.microsoft.com/en-us/azure/developer/python/tutorial-deploy-app-service-on-linux-04
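For example, App Service on Linux lets you set a custom startup command, and that script runs on every container start, so the packages come back after each deployment. A minimal sketch, assuming a Debian-based image and a WSGI module called mysite.wsgi (both are placeholders, not something from the tutorial above):

#!/usr/bin/env bash
# Hypothetical startup.sh, set as the Startup Command for the Web App
# (Configuration > General settings). It runs on every container start,
# so the extra packages are reinstalled after each deployment.
set -e

# Package names are assumptions for a Debian-based App Service image.
apt-get update
apt-get install -y chromium chromium-driver

# Then launch the Django app the way the default image would.
gunicorn --bind=0.0.0.0:8000 mysite.wsgi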

Related

What is the best way to develop an Elastic Beanstalk Django application locally?

I have recently deployed my Django application on Elastic Beanstalk.
I have everything working now, but I'm curious what the best way to develop locally is.
Currently, after I make a change locally, I have to commit the changes via git and then run eb deploy. This process takes 1-3 minutes, which is not ideal for making changes.
The Django application will not run on my local machine, as it is configured for EB.
You are right, having to deploy remotely during development isn't best practice.
Have you considered Docker?
To run a typical Django app locally using Docker, you'll need to dockerize:
The Django app
Database, e.g. Postgres
Worker, e.g. Celery
Local mailer, e.g. MailHog
Not a very long list.
Obviously you'll add or remove from that list depending on how complex or simple your app is.
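A rough local-dev sketch of wiring those pieces together with plain docker commands (the image tags, the myapp image name, the Celery app name and the environment variables are all assumptions; a docker-compose file is the more convenient way to express the same thing):

# Shared network so containers can reach each other by name.
docker network create djangodev

# Database
docker run -d --name db --network djangodev -e POSTGRES_PASSWORD=devpass postgres:16

# Local mailer (SMTP on 1025, web UI on http://localhost:8025)
docker run -d --name mailhog --network djangodev -p 8025:8025 -p 1025:1025 mailhog/mailhog

# Broker for the Celery worker
docker run -d --name redis --network djangodev redis:7

# The Django app itself, built from your project's Dockerfile
docker build -t myapp .
docker run -d --name web --network djangodev -p 8000:8000 \
  -e DATABASE_URL=postgres://postgres:devpass@db:5432/postgres \
  -e EMAIL_HOST=mailhog -e EMAIL_PORT=1025 \
  -e CELERY_BROKER_URL=redis://redis:6379/0 \
  myapp

# Celery worker reusing the same image
docker run -d --name worker --network djangodev \
  -e DATABASE_URL=postgres://postgres:devpass@db:5432/postgres \
  -e CELERY_BROKER_URL=redis://redis:6379/0 \
  myapp celery -A myapp worker -l info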

How to protect my .pyc files from being extracted out of Docker

We have built the Django application as a Docker image. Until now there was no issue, since we were hosting the solution for a client, but now we also need to support in-house deployment, which means our Docker image will be installed on the client's machine. I can see that with root access it is possible to copy the files out of the container. How can we prevent the code from being copied out of Docker?
How can we add this kind of protection to our Docker image? Please suggest.

Deployment of Django in local network while continuing development

The question is in the title.
I need to deploy a Django application on a local network (I still don't know how to do that, but I suppose it's quite easy), and I still need to keep developing it. My question is: how do I allow users to use the application while I'm still developing it?
Is it a solution to keep two versions of the application, one deployed and one in development? That way, I can replace the deployed application with the newly developed one when I finish coding it.
Another question concerns the database: can I still modify the database if I just add new models without touching existing ones?
Thank you in advance,
This is a good blog entry that covers deploying Django using Heroku. I'll give you a quick rundown of what makes all the different technologies important:
Git
Git, or any other version control system, is certainly not required. Why it lends itself to deploying Django projects is that you're usually distributing your application by source, i.e. you're not compiling it or packaging it as an egg. Usually you'll organize your Git repository such that updating your application on the server only requires you to do a checkout of the latest sources--nothing else.
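In practice, the update step on the server can be as small as this (the path, the branch and the gunicorn systemd unit are placeholders):

cd /app/path
git pull --ff-only origin main
# Restart the application server so the new sources are loaded.
sudo systemctl restart gunicorn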
virtualenv and pip
This, again, is not a strict requirement, but I'd strongly suggest you take the time to familiarize yourself with virtualenv and pip if you haven't already done so, since it's going to make deploying your Python applications across different runtime environments, local or remote, a breeze.
Basically, your project will need to have at least Django and Gunicorn available on the Python path, possibly even a database driver. What that means is that every time you try to deploy your application somewhere you'll have to install Python and do the easy_install dance all over.
virtualenv will replicate a Python installation, which in turn means that the new Python instance will, by default, have its very own Python path configuration relative to the installation. pip is like easy_install on steroids, since it supports checking out Python dependencies directly from code repositories and supports a requirements file format with which you can install and configure all of your dependencies in one fell swoop.
With virtualenv and pip, all you'd need to do is have a simple text file with all your dependencies that can be parsed by pip, plus an installed Python distribution on the machine. From there you just do git clone repo /app/path; easy_install virtualenv; virtualenv /app/path; source /app/path/bin/activate; pip install -r /app/path/requirements.txt. Voila, Gunicorn, Django and all other dependencies are then installed and available immediately. When you run the Gunicorn Django script with the Python instance in /app/path/bin, the script will immediately have access to the Gunicorn sources, and it will be able to locate your Django project, which will have access to Django and other dependencies as well.
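The same sequence with today's tooling (venv ships with Python 3; the repository URL and paths are placeholders) looks like this:

# Fetch the sources and build an isolated environment for them.
git clone https://example.com/myproject.git /app/path
python3 -m venv /app/path/venv
source /app/path/venv/bin/activate

# One command installs Django, Gunicorn, the database driver and everything else.
pip install -r /app/path/requirements.txt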
Gunicorn
This is the actual Python application that will manage your Django instance and provide an HTTP interface that exposes it to HTTP clients. It'll start several worker processes, which will all be distinct Python virtual machines loaded with the sources of your application and its dependencies. The main Gunicorn process will in turn take charge of deciding which worker processes handle which requests for maximum throughput.
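As an illustration (the project name myproject, the port and the worker count are assumptions), starting Gunicorn against a Django project's WSGI module can be as simple as:

# Run from the project root, inside the virtualenv created earlier.
# Bind to the loopback interface only; Nginx will proxy to it.
gunicorn myproject.wsgi:application --bind 127.0.0.1:8080 --workers 3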
The basic principle of wiring Nginx and Gunicorn
The most important thing to observe is that Nginx and Gunicorn are separate processes that you manage independently.
The Nginx Web server will be publicly exposed, i.e. it will be directly accessible over the internet. For requests to static media, such as actual images, CSS stylesheets, JavaScript sources and PDF files accessible via the filesystem, Nginx will take charge of returning them in the response body to HTTP clients if you configure it to look for files on the path where you configured your project to collect static media.
Any other request should be proxied to your Gunicorn instance. It will be configured to listen for HTTP requests on a certain port on the loopback interface, so you'll be using Nginx as a reverse proxy to http://127.0.0.1:8080 for requests to your Django instance.
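To make that wiring concrete, here is a sketch of a minimal server block, using a Debian-style Nginx layout (the domain, the static path and the port are placeholders matching the description above):

sudo tee /etc/nginx/sites-available/myproject > /dev/null <<'EOF'
server {
    listen 80;
    server_name example.com;

    # Static media collected by "manage.py collectstatic"
    location /static/ {
        alias /app/path/static/;
    }

    # Everything else goes to the Gunicorn instance on the loopback interface
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
EOF
sudo ln -sf /etc/nginx/sites-available/myproject /etc/nginx/sites-enabled/myproject
sudo nginx -t && sudo systemctl reload nginx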
This is the basic rundown for deploying your Django projects into production, and it should satisfy the needs of 95% of Django projects running out there. While I did reference Nginx and Gunicorn, it's the usual approach when it comes to setting up any Web server to act as a reverse proxy to a Python WSGI server.

How could I use Let's Encrypt behind a Django application without stopping the server?

I have a Django application running on a server. I want to use Let's Encrypt to provide an encrypted connection. I could use the standalone option of their ACME client, but I don't want to stop my server, which is what I would have to do.
So there is the webroot option, which works with my already running web server (nginx). Django would process the request in this case. My question is: what should this look like on the Django side to get this running (keeping automated renewal every few months in mind)?
I don't know what setup others use, but I generally set up Django apps with Nginx serving static content and Gunicorn as the application server. It's widely accepted that Django apps usually use this kind of two web server setup. The standard instructions for setting up Let's Encrypt with Nginx worked fine for me.
Or DigitalOcean has an excellent guide too.
EDIT: It looks like Nginx can do a "graceful" reload that just updates the config with no downtime. For Debian or Ubuntu before systemd this would be sudo service nginx reload, while for a distro with systemd the command is sudo systemctl reload nginx.service.
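For the webroot flow specifically, a hedged sketch with certbot (the webroot directory and the domain are assumptions): Nginx answers the ACME challenge from a plain directory, so Django never sees the request and nothing has to stop.

# Serve challenge files from a dedicated directory, before the proxy location.
sudo mkdir -p /var/www/letsencrypt
sudo tee /etc/nginx/snippets/letsencrypt.conf > /dev/null <<'EOF'
location /.well-known/acme-challenge/ {
    root /var/www/letsencrypt;
}
EOF
# Include that snippet in the server block, reload Nginx, then request the certificate.
sudo certbot certonly --webroot -w /var/www/letsencrypt -d example.com

# Renewal can run unattended; the hook reloads Nginx gracefully when a cert is renewed.
sudo certbot renew --quiet --deploy-hook "systemctl reload nginx"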
In case other users come this way like I did from Google, here's how I improved this situation:
I was unsatisfied by my options when it came to creating ACME challenges for Let's Encrypt when running a Django application. So, I rolled my own solution and created a Django app! Basically, you can manage your ACME challenges as just another object, and the app will produce the proper end-point URL.
Yes, you are installing an app, which means a deploy/update of your app, but once you've done that, managing your challenges is far easier in the long run.
Simply pip install django-letsencrypt and follow the README to be on your way.

deploy bitnami django

I am quite computer-illiterate, but I have managed to use the Django framework on my own machine. I have had an account on Amazon Web Services (AWS) for some time, but it appeared rather complex to set up and make use of, so I put it off for a while. Then I decided to give it a try, and it was not as hard as I first thought to load an AMI and connect to the server with PuTTY. But since I was already using BitNami's Django Stack, I decided to take a look at their hosting offer (which builds on AWS). Since they appeared to offer "one-click deployment", I set up a new server through their interface. But then, it seems that the "one-click deployment" promise applies to the server itself. There does not seem to be any interface for deploying Django projects through their site. Having already used PuTTY, and having added WinSCP to my machine, I can access the server and load my Django code onto the server. But then I am lost. The documentation seems a bit thin (look here).
The crux of this is the following: can anyone make this part of the process more understandable, i.e., how to deploy a Django project on a Linux server with Apache/mod_WSGI?
The other question is: I want to use Postgres. Am I free to install this on the server? Should I opt for EBM (EMB?) for this, or what is the downside of not having EBM?
I hope I am not too unworthy of your attention, thanks!
"How to deploy a Django project on a Linux server with Apache/mod_WSGI": the Bitnami AMI already comes with all of this configured. Once it is installed, try going to the EC2 public URL on the default port 8000 and you will see the demo Django project set up there. You can add your own project once you have logged into the machine via PuTTY; check the /home/bitnami/ directory for the demo project. Copy your project and configure your database.
"The other question is: I want to use Postgres. Am I free to install this on the server": Postgres and MySQL are already installed; you configure them the same way you would on your local machine. Then, in your project, do ./manage.py runserver 0.0.0.0:9000, since port 8000 is already running another application.
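Since the crux of the question is Apache/mod_WSGI, here is a hedged sketch of the kind of virtual host that ties a project into it. Every path and the project name myproject are assumptions, and this uses a stock Debian Apache layout rather than Bitnami's /opt/bitnami one, so adjust accordingly:

sudo tee /etc/apache2/sites-available/myproject.conf > /dev/null <<'EOF'
<VirtualHost *:80>
    ServerName example.com

    # Static files collected by "manage.py collectstatic"
    Alias /static/ /home/bitnami/myproject/static/
    <Directory /home/bitnami/myproject/static>
        Require all granted
    </Directory>

    # Run the project in its own daemon process, using its virtualenv
    WSGIDaemonProcess myproject python-home=/home/bitnami/myproject/venv python-path=/home/bitnami/myproject
    WSGIProcessGroup myproject
    WSGIScriptAlias / /home/bitnami/myproject/myproject/wsgi.py

    <Directory /home/bitnami/myproject/myproject>
        <Files wsgi.py>
            Require all granted
        </Files>
    </Directory>
</VirtualHost>
EOF
sudo a2ensite myproject && sudo systemctl restart apache2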