Is it okay to run a Django API server with root permissions?

I am running a Django server using Gunicorn. In front of Gunicorn I have a layer of Nginx acting as a load balancer, and I use supervisord to manage Gunicorn.
From a security perspective, is it fine to run my Gunicorn server with sudo permissions? Is there any potential security risk?
Also, does it make any difference if I run the process as a user who has sudo rights but without actually using sudo, since either way I have sudo permissions as that user?

Does it need to run as root?
If it doesn't, don't run it as root.
Even better, add a separate user for the app and run it as that user.
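A minimal sketch of that setup, assuming a supervisord-managed Gunicorn like the one described in the question (the user name django, the paths, and the project module are all placeholders, not taken from the question):

# create a dedicated system user with no login shell
sudo useradd --system --no-create-home --shell /usr/sbin/nologin django
# give it ownership of the application directory
sudo chown -R django /srv/app

Then have supervisord drop privileges in the program section. Nginx keeps listening on port 80, while Gunicorn binds an unprivileged port, so nothing in the app itself needs root:

[program:gunicorn]
command=/srv/app/venv/bin/gunicorn myproject.wsgi:application --bind 127.0.0.1:8000
directory=/srv/app
user=django   ; supervisord starts the process as this unprivileged user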

I believe the answer to the question "is it OK to run xxx with root permissions" should not be "If it doesn't, don't run it as root," but rather a clear "NO".
Every single server and framework is designed to be run without root rights.
What can go wrong? If you have a vulnerability that allows remote code execution on the server, you would simply be handing root rights to whoever can exploit it. If one of the developers on your team does something stupid like deleting the root directory, it will be deleted. You don't want a single app running on the server to be able to disrupt your whole system, do you?

It is not good practice to run any external, network-facing application with root privileges.
Consider a scenario where an uploaded file is not validated or sanitized (a file upload vulnerability). If someone uploads a malicious file that implements a reverse shell and executes it, it becomes much easier to take down your server: the attacker's shell runs with the same privileges as your server process, so if the process runs as root, the shell does too.
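A quick, hedged way to check which user your workers actually run as (assuming the process is named gunicorn, as in the question):

# print the owning user of every gunicorn process
ps -C gunicorn -o user=,cmd=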

Related

Switching Git branches while running in Docker container causes permission error

I'm running Docker 19 on Windows 10 for development. A container volume binds directly to a Git repo folder and serves the Django app in that folder. I use Mingw-w64 to run Git (aka Git Bash).
Occasionally, I'll do the following (or something similar):
Request a page served by the Docker container. (To replicate an error, for example.)
Switch to a different branch.
Request a page served by the Docker container from the new branch.
Switch to a different branch.
On the last branch switch, Git will freeze for a bit and then say permission denied on a particular file. The file is a difference between the two branches, so Git is trying to change it.
Process Explorer tells me the files are in use by the System process, so the only way to get it to let go is to restart.
My gut tells me the Django web process (manage.py runserver) is likely holding a lock on the file until the request connection is fully closed, and that the connection is lingering in the ESTABLISHED state.
Is my gut right? If it is, why is the lock held by the System process and not by Docker? Is there anything I can check before switching branches? Is there any way to prevent this from happening at all?

Creating a CLI application with root access

I am developing a PHP application which serves as a GUI for a server-side application. Because of the nature of the application, it needs to run exec commands which require root privileges (things like restarting a service). I was able to get around that by giving nginx sudo access to specific commands, but a few remaining functions would be easier to implement as a CLI tool.
Now the problem I am facing is starting this application from PHP, with arguments, as root. This is how I launch my app:
path/application -e "command I want"
The web app will be the only one installed on the server (kind of like a control panel). Should I focus on making a service instead of an application? If I do make a service, how would I let PHP contact it? I have developed Windows applications in the past using .NET and C++.
I did look at .NET Core for building a Linux service, but I don't think it is what I need. Any suggestions? All I need is for the app to have root access, preferably without sudo.
Could the application be a setuid-root application? Please audit it for security before doing so:
chown root /path/to/binary   # make root the owner of the binary
chmod u+s /path/to/binary    # set the setuid bit so it always runs as its owner (root)
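You can verify the bit took effect with ls -l, which shows an s in the owner-execute position:

ls -l /path/to/binary
# -rwsr-xr-x 1 root root ... /path/to/binary

One caveat: on Linux the kernel ignores the setuid bit on interpreted scripts, so this only works for a compiled binary.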

Trouble with an SQLite database pushed to an EC2 server

I have a Django application that uses an SQLite database. I use Git to push changes to my EC2 instance, which runs the website on an Elastic IP. The site itself works, but when I try to log in to the admin interface I get one of two errors from Django:
attempt to write a readonly database
or
unable to open database file
It seems that chmod u+rw leads to the first error and a+rw leads to the second, but I'm unsure what is happening. The test server on my local machine works as expected.
I realize that SQLite may be a bad choice for production, but the site will not have much traffic and I will be the only one using the admin interface or writing to the database. If someone has a solution for setting up MySQL or Postgres and somehow synchronizing the database contents, I would accept that too.
I found the solution after much research. I had only defined one user on my EC2 server, but apparently Apache needs access as the www-data user:
sudo chown www-data /projectdir            # the directory too: SQLite creates journal files next to the database
sudo chown www-data /projectdir/sqlite.db  # the database file itself
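An alternative, hedged sketch that keeps your own account as the owner and grants access through the web server's group (the user name ubuntu is a placeholder for your login user):

sudo chown -R ubuntu:www-data /projectdir   # you own it; the web server's group can use it
sudo chmod 775 /projectdir                  # group write on the directory, for SQLite journal files
sudo chmod 664 /projectdir/sqlite.db        # group read/write on the database itself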
Is your application running in multiple threads?
The problem is that SQLite databases cannot be shared between multiple threads of your Django application. I am not sure whether this is a general SQLite bug, a Django bug, or just intended behavior. Maybe other users have figured out a way to deal with it; I didn't, and I use either PostgreSQL or MySQL on production servers.
If your site is really low-traffic, you may just set your web server's config to run your app single-threaded (and within a single process). This will behave (and be limited) much like running the Django test server locally.
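For example, a hedged sketch of a single-process, single-threaded setup (the names myapp and myproject are illustrative): with Apache and mod_wsgi you would pin the daemon process group, and with Gunicorn you would limit the workers.

# Apache + mod_wsgi: one process, one thread
WSGIDaemonProcess myapp processes=1 threads=1

# Gunicorn: a single synchronous worker (sync workers are single-threaded by default)
gunicorn --workers 1 myproject.wsgi:application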

Jenkins can't copy files to a Windows remote host

I have a Jenkins server on OS X 10.7 which polls a Subversion server, builds the code, and packages the app. The last step I need to complete is deploying the app on a remote host, which is a Windows share. Note that my domain account has write access to the target folder and the volume is mounted.
sudo cp "path/to/app" "/Volumes/path/to/target"
However, I get a "no tty" response. I was able to run this command successfully in Terminal, but not as a build step in Jenkins.
Does this have something to do with the user being used when starting up Jenkins? As a side note, the default user.name is jenkins and my JENKINS_HOME resides in /Users/Shared/Jenkins. I would appreciate any help as to how to achieve this.
Your immediate problem seems to be that you are running Jenkins in the background, and sudo wants to prompt for a password. Run Jenkins in the foreground with $ java -jar jenkins.war.
However, this most probably won't solve your problem, as you'll still be asked to enter a password when the command runs, on the terminal you started Jenkins from (presumably not what you want). You need to find a way to copy your files without needing root permissions. In general, it is not a good idea to rely on administrative permissions in your builds (there are exceptions, but your case is not one of them).
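One hedged option on OS X: mount the Windows share as the user Jenkins runs under, so a plain cp works without sudo (the server and share names below are placeholders, not taken from the question):

# run as the jenkins user; it then owns the mount and can write to it
mkdir -p /Volumes/target
mount_smbfs //user@fileserver/share /Volumes/target
cp "path/to/app" "/Volumes/target/"

A narrowly scoped NOPASSWD sudoers rule for that single cp command would also stop the prompt, but it reintroduces the root dependency this answer advises against.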

Seamless deployment of Django to single server

I have a new website built on Django and Python 2.6 which I've deployed to the cloud (buzzword compliant AND the Amazon micro EC2 instance is free!).
Here are my detailed notes: https://docs.google.com/document/d/1qcZ_SqxNcFlGKNyp-CXqcFxXXsKs26Avv3mytXGCedA/edit?hl=en_US
As this is a new site (and wanting to play with the latest and greatest) I used Nginx and Gunicorn on top of Supervisor.
All software installed from trunk using YUM / easy_install.
My database is SQLite (for now; I'm not sure where to go next, but that is not the question). Also on the to-do list: virtualenv + pip.
So far so good.
My code is in SVN. I wrote a simple fabfile to deploy: it checks out the latest code and restarts Gunicorn via Supervisor. I hooked my DNS name up to an Elastic IP.
It works.
My question is, how do I update the site without a disruption of service? Users of the site get 404s / 500s when I run my little update script.
Is there a way to do this without adding another server (price is key)?
I would love to have a staging system (on a different port?) and a seamless switch between Staging and Production. On the same (free) server. Via Fabric.
How do I do that? Is it the same Nginx running both sites? Can I upgrade Staging without hurting Production? What would the fabfile look like? What would the directory tree look like?
Thanks!
Tal.
Related:
Seamless deployment in Rails
Nginx allows you to set up failover for your reverse proxies: you can designate one Gunicorn instance as the primary, and as long as that instance is running, Nginx will never look at the failover.
If you configure your site so that the new version runs in the failover instance, you just need to write your fabfile to update the failover instance with the new version of the site and then, when ready, turn off the primary instance. Nginx will seamlessly fail over to the second instance, and bam, you are running the new version with no downtime.
You can then update the primary instance and turn it back on, and your primary is live again. At that point you can keep the failover instance running just in case, or turn it off.
Some things to consider. You have to be careful with databases: if you are using SQLite, make sure both Gunicorn instances can access the SQLite file.
If you have a regular database server this is less of a problem; you just need to make sure you apply any database migrations the new version needs before you switch to it.
If the migrations are backwards-compatible, it isn't a big deal. If they aren't, be careful: you could break the old version of the site before you switch over to the new one.
To make things easier, I would run the two versions in separate virtualenvs.
If you use supervisord to control Gunicorn, you can use the supervisorctl commands to reload or restart whichever instance you want to deploy, without affecting the other one.
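A hedged sketch of that switch, assuming two supervisord program entries (the names gunicorn_primary and gunicorn_backup are placeholders, not from the answer):

# deploy the new code to the backup instance, then restart it
supervisorctl restart gunicorn_backup
# once the backup looks healthy, stop the primary; nginx fails over to the backup
supervisorctl stop gunicorn_primary
# after upgrading the primary, bring it back as the live instance
supervisorctl start gunicorn_primary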
Hope that helps
Here is an example of an Nginx config (not a full config file; the unimportant parts have been removed). It assumes the primary Gunicorn instance is running on port 9005 and the backup on port 9006.
upstream service-backend {
    server localhost:9005;         # primary
    server localhost:9006 backup;  # only used when the primary is down
}

server {
    listen 80;
    root /opt/htdocs;
    server_name localhost;

    access_log /var/logs/nginx/access.log;
    error_log /var/logs/nginx/error.log;

    location / {
        proxy_pass http://service-backend;
    }
}
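After editing the config, remember to validate and reload Nginx (both commands may need root, which is fine here because only Nginx, not the app, runs privileged):

nginx -t          # test the configuration for syntax errors
nginx -s reload   # reload gracefully, without dropping in-flight connections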
Sounds like you need to figure out how to tell Gunicorn to restart gracefully. It seems all you have to do is send a HUP signal to the Gunicorn master process to tell it to reload the app. As described in the link above, the Gunicorn docs explain how to do it.
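A minimal sketch, assuming Gunicorn was started with a PID file (the path /var/run/gunicorn.pid is a placeholder; if supervisord manages the process, go through supervisorctl instead):

# on HUP, the master starts fresh workers and gracefully shuts down the old ones
kill -HUP $(cat /var/run/gunicorn.pid)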