TLDR:
Would a "real server" (Apache? Flask? Django?) be able to unify various services behind readable subdirectories (abc.work.com/svn, abc.work.com/hg) instead of using port numbers (abc.work.com:8000, abs.work.com:8001)?
Long Version:
In the last year I've learned how to serve files with Mongoose, run a minimal Python webserver, host version control repositories with Subversion and Mercurial, and host a Trac issue tracker/project management framework.
In each case I've been using the easy built-in webserver provided by each tool to host it from my Windows 7 laptop at work (I'm an engineer who codes, not actually paid to be a "software guy"). In order to avoid clashing I've used different port numbers in the 8000 range for each server to listen on, and sent my coworkers links like http://machinename.domain.com:8042 to access these magical things I've created.
The first obvious problem is that I'm running a lot of these things out of a command prompt and just letting it sit open on my desktop. I also know how to call cmd.exe from VBScript in order to hide the command prompt window, if that were all I wanted. Many of the built-in webservers even have options to run as a service, which can get hairy with permissions, but is closer to the "right" way to host a server of any kind.
The bigger problem is that I'm sending people links to my machine with different port numbers. I'm ok with them having to use my machine name - I assume I'd need the network admin folks to add a DNS entry to call it TeamAwesome.company.com instead of machinename.company.com:8000?
The bigger question is, if I did something fancy like an Apache, Django, or Flask webserver, could I set it up like machinename.company.com/trac for the trac server and machinename.company.com/hg/project1 for the HG repository for project1? I'm looking at Apache, Django, and Flask because I've been diving into Python for 2 years now and those appear the most applicable/approachable for my needs.
I understand that ideally this stuff should be hosted on a separate linux-y server machine, but I'll need to prove the usefulness of the tools I'm developing before I request server resources from my boss (who hired me to do engineering, not programming, or web development, or systems administration, etc.).
I see this looks related. Are http proxies, virtual hosts, nginx, and WebSockets things to look at?
Looking at Apache VirtualHost examples looks promising though I can't decipher if one of those examples actually does what I'm talking about. Thanks for any suggestions as I go further down the rabbit hole with this stuff!
Apache virtual hosts can only be differentiated by listening IP address, port number, and/or the requested hostname (ServerName), not by URL path.
The mod_proxy module can do what you want if those services keep running separately, as they do now:
ProxyRequests Off
<Proxy *>
Allow from all
</Proxy>
ProxyPass /folder_a http://backend_a:1234/ retry=5
ProxyPass /folder_b http://backend_b:8888/ retry=5
# etc
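(A side note, not from the original answer: that access-control syntax is for Apache 2.2, and the mod_proxy and mod_proxy_http modules need to be enabled. On Apache 2.4 the equivalent access block would be:)
<Proxy *>
    Require all granted
</Proxy>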
However, if you just want to point different URLs at unrelated folders on your server, then check the Location and Alias directives.
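For instance, a minimal sketch of the Alias approach (the URL path and folder are made-up examples; Apache 2.4 syntax):
# Serve a plain folder of files under /docs, no proxying involved
Alias /docs "C:/shared/docs"
<Directory "C:/shared/docs">
    Require all granted
</Directory>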
No need for proxy settings. You can use a single virtual host but run each application under its own subdirectory: for example, with Django or another WSGI app, you would simply set WSGIScriptAlias to the relevant directory.
A big clarification, though: Django is not in any way a server. In fact, you need Apache or an equivalent in order to serve Django properly.
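As a rough sketch of what that looks like with mod_wsgi (the project name and paths are hypothetical), the subdirectory is simply the first argument to WSGIScriptAlias:
# Mount a Django project under http://machinename.company.com/myapp
WSGIScriptAlias /myapp "C:/projects/myapp/myapp/wsgi.py"
WSGIPythonPath "C:/projects/myapp"
<Directory "C:/projects/myapp/myapp">
    <Files wsgi.py>
        Require all granted
    </Files>
</Directory>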
Related
I am just about to go live with a website and am addressing security issues. The site has been public for some time but not linked to the search engines.
I log all incoming requests and today noticed this one:
GET /home/XXXXX/code/repositories/YYYYY-website/templates
where XXXXX is a sudo user on my server and YYYYY is my company name.
This is actually the structure of my Django project code.
My website is coded using Django and runs under Apache2 on Ubuntu.
My question is how can this guy possibly know the underlying code/directory structure on my server, in order to create this request?
Their IP is : 66.249.65.221.
They come up as 100% a hacker on https://ip-46.com
Any contributions welcome.
EDIT1 25/11/2019
With some helpful input from Loïc, I have done some investigation.
The Ubuntu 18.04 server is locked down as far as logging in goes - you can only get in with one of my private keys. The PostgreSQL is locked down - it will only accept connections from one IP where my dev machines reside. RabbitMQ is locked down - it won't accept ANY external incoming connections. The robots.txt allows all crawling but the robots meta restricts access to about 12 pages only.
Somebody who knows Django would know how to form this directory path if they knew the Django project directory, but they also have it relative to root on the server. The only place where that absolute path is available is in the Apache2 config file. Obviously Apache needs to know where to pick up the Django application.
I am 99% sure that this 'hacker' got this via some sort of command to Apache. Everything is redirected to port 443 https. The above GET request doesn't actually do anything because the url doesn't exist.
So to make the question more refined. How can a hacker pull my Django absolute project path from my Apache2 config file?
There are a lot of different ways to learn about the directory structure of a given server.
The easiest is usually error output:
If DEBUG is set to True in your Django settings, it is very easy for an attacker to get the directory structure of your project.
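As an illustration (these are standard Django settings; the hostname is a placeholder), production settings should keep DEBUG off so tracebacks with full file paths are never shown to visitors:
# settings.py (production)
# Debug pages print full tracebacks including absolute file paths, so keep this off
DEBUG = False
# Placeholder; list the real domains the site is served from
ALLOWED_HOSTS = ["example.com"]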
Then there is LFI (Local File Inclusion), a security issue allowing an attacker to read local files. It is then possible to read some logs, or the Apache configuration, to learn what your project directory is...
The problem could come from another service running on your server as well...
One cannot really give you a complete answer on this topic, as there are a lot of different ways this could happen.
I have a site that I want to expose to a bunch of colleagues that serves as an interface for some Machine learning tools.
I've made the site in Django 2.0 and would like to serve it from a small Windows PC under my desk, and then from a more dedicated server once it's operational.
I've had a look around and it looks like my options are using uWSGI or Django itself to serve the site. Obviously Django is much slower, but on a PC with an i5 I reckon it should be able to handle a couple of requests a minute, which is the peak traffic I'm expecting.
FastCGI appears to be deprecated, so what other options are there, prioritizing ease of configuration on my part?
So, for anyone in a similar situation:
I ended up using waitress as the server and whitenoise to serve the static files.
This appears to work sufficiently fast and requires little configuration, once you have worked out how to get the static files going with whitenoise.
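A minimal sketch of that combination (the project name myproject and the static-files directory are placeholders; whitenoise can also be hooked in as Django middleware instead of wrapping the WSGI app):
# serve.py - run with: python serve.py
from waitress import serve
from whitenoise import WhiteNoise

from myproject.wsgi import application  # placeholder project name

# Wrap the Django WSGI app so whitenoise serves the files gathered by collectstatic
application = WhiteNoise(application, root="staticfiles")
serve(application, host="0.0.0.0", port=8080)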
Django itself can serve pages to other machines on your network. Simply start the development server with this command:
python manage.py runserver 0.0.0.0:8000
Then you can access the local website by requesting http://<dev_machine_ip>:8000. Obviously, this is a very simple and inefficient solution if you want to provide this website to many users connected at the same time. But for test/demo purposes, it is very handy.
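One caveat worth adding (an assumption about your settings, not part of the original answer): even the dev server rejects requests for hostnames it doesn't recognize, so the machine's IP or hostname usually has to be listed in ALLOWED_HOSTS. The values below are examples:
# settings.py
ALLOWED_HOSTS = ["localhost", "127.0.0.1", "192.168.1.50", "machinename.company.com"]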
Up to this point, I have just created and played with my Django server on localhost, doing things like setting up a basic Linux server distro on another device of mine and testing, etc.
However, I have also heard of server applications like apache2 or nginx. The thing I wonder about is: do I really need to use one of them in production? I want to buy (or rent?) a VPS service, then deploy (or publish?) my project on that server. The questions in my head are:
Does running the server with manage.py runserver 0.0.0.0:80 not make my application reachable worldwide? Is it a server application (or whatever it is called) that makes it accessible from outside?
Or is a server application simply needed for better performance, optimization, etc.?
Simply, why do I need to use apache2 or nginx to deploy my project?
It's a long story. In a few words:
Running your project on localhost surely won't make it accessible worldwide, since at the very least you need a public address for your server, not a local one.
Speaking honestly, it is possible to run a site in production using Django's built-in server. But, as you can read in the docs, it is strongly NOT recommended. Why? Because it was developed specifically for testing. It is written in Python (slow enough for a web server), it is not suitable for handling many concurrent requests, and it is only a matter of time before it falls over. Of course, there are plenty of other reasons, such as caching, access settings, redirects and others.
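To make that concrete, here is a hedged sketch of what the "real" server typically adds in front of Django; the domain, paths and app-server port are assumptions, and nginx plays the same role with different syntax:
# Apache serves static files itself and proxies everything else
# to the Django app server (e.g. gunicorn or waitress on 127.0.0.1:8000)
<VirtualHost *:80>
    ServerName example.com
    Alias /static/ /var/www/myproject/static/
    <Directory /var/www/myproject/static/>
        Require all granted
    </Directory>
    ProxyPass /static/ !
    ProxyPass / http://127.0.0.1:8000/
    ProxyPassReverse / http://127.0.0.1:8000/
</VirtualHost>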
I'm new to Mezzanine and Django. I have set up a site and everything is working, but I can only launch the server in "development". I would like to access the site on port 80 from the internet instead of internally, as I have no way other than redirecting the port via SSH to access it. I would like to know how to do that.
And another question: is Nginx included with Mezzanine automatically? Because I have a tuned-up Nginx server there and I'm not sure what I need to do - run it with my existing Nginx server or with one included with Django, if that is how it works... Thank you for shedding some light on this.
NGINX is not included with Mezzanine, it's an entirely separate piece of software, similar to Apache.
Mezzanine includes a fabric script which can automatically set up a production server if you'd like to use it, and will install NGINX on the server for you, among many other things.
Given your question, I can't recommend enough that you read and understand all the related documentation on this topic. Start with the Mezzanine link below, it references many other documentation sites - Django, Fabric, NGINX, plus more.
Enjoy the adventure: http://mezzanine.jupo.org/docs/deployment.html
I'm currently writing a webservice (with node.js) for an AngularJS frontend which is hosted with node.js
It will later be available through a proxy under domain.com/api and therefore I don't need JSONP.
For local testing purposes I have my AngularJS app running on localhost:80 and my node.js backend on localhost:3000. Naturally I'm not able to make JSON requests from the app to the backend, since the two ports count as different origins.
What would be the best setup to test my homepage locally without messing with my setup too much?
I'm currently working on windows. Linux is also an option if it is easier.
Would it be possible to write a simple proxy for express that hosts both apps in the same domain?
You can use the hosts file to point domain.com at localhost (the hosts file maps hostnames only, so the /api path still has to be handled by whatever is listening on that host). On Linux this is /etc/hosts; on Windows it's C:\Windows\System32\drivers\etc\hosts. Another thing that helps me a lot is SSH tunneling: you can, for example, tunnel remote ports (where your backend is running) to localhost with ssh -L localPort:your.server:remotePort
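For example, a hosts-file entry like the following makes domain.com resolve to your own machine:
127.0.0.1    domain.com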
It's not exactly what you asked for, but the easiest might be to have the node.js app serve the AngularJS app, too. It's quite efficient, certainly efficient enough to use for development.
If it's an expressjs app, you can just add
app.use(express.static(__dirname + "/public"));
before your other routes to have it look for static files in ./public/
If your app is served by a template or build system that you can't easily reproduce in node.js, then another option would be to run nginx, apache or haproxy on some port (80 or 5000 or ...) and have that proxy to the current backend server for the app and port 3000 (the node.js app) for the API/data requests.
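A hedged sketch of that idea using Apache's mod_proxy (the frontend port 5000 is an assumption; the node.js API stays on 3000 as described above):
# More specific prefixes must come before the catch-all
ProxyPass /api http://localhost:3000/api
ProxyPassReverse /api http://localhost:3000/api
ProxyPass / http://localhost:5000/
ProxyPassReverse / http://localhost:5000/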
You might even be able to have your server currently running on port 80 do this.
As a final idea, you could also set up the node.js app to proxy to port 80 for the "app files".
Edit - I just realized that both of your apps are written in node.js. Would it be possible to set things up so that you run them separately in production but together in development? Put all the real functionality in modules and then have three separate "loaders" that start the apps: one that starts them together, and one loader for each individually.