I teach a course with over 40 students who have to create Python web applications with Flask. I would like them to upload their applications to the department server (running Ubuntu). If all of them upload and run their apps, all except the first will probably get an error that the port (5000, the default port for Flask) is already in use. I could ask each student to pick a random port number, but I would like the apps to be accessible using the students' names, so that, for example:
http://myserver.com/student1
would link to the application of student1.
Is there a way to do this that the students can set up themselves when they submit, so that I do not have to do manual work for each submission?
Another possibility is to reverse proxy Unix sockets instead of using TCP ports, and to use each student's name as the actual socket name. For instance, in NGINX each student could get a location block, which would make it easier to identify who's who.
server {
    listen 80 default;

    location /student1/ {
        proxy_pass http://student1/;
    }

    location /student2/ {
        proxy_pass http://student2/;
    }
}

upstream student1 {
    server unix:/home/ubuntu/student1;
}

upstream student2 {
    server unix:/home/ubuntu/student2;
}
...etc.
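On the application side, each student would bind their app to their own socket file instead of a TCP port. A minimal sketch, assuming gunicorn is installed and the file is saved as app.py (the socket path matches the student1 upstream above):

# app.py -- a minimal Flask app served over a Unix socket
# start it with: gunicorn --bind unix:/home/ubuntu/student1 app:app
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # the trailing slash in proxy_pass strips the /student1/ prefix,
    # so the app sees plain "/" here
    return "Hello from student1!"

One thing to watch: the nginx worker needs read/write permission on each socket file, so students may have to loosen permissions on the socket (or place it somewhere nginx can reach) when they submit.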
Localhost follows the loopback mechanism.
Why do we have to loop packets back to our own computer? What is the need for that (in the general case, and especially in socket programming)?
Could you also name some practical applications of localhost?
Another clarification I need:
localhost resolves to 127.0.0.1 (most of the time)
my hostname, say "vinoth-computer", resolves to 192.168.111.12
Are 127.0.0.1 and 192.168.111.12 one and the same?
Think about the following situation: you have a client and a server application running on separate stations in production, but in QA or for unit testing you want to run the client and the server instances on the same station. You can set the server address in the client's definitions or parameters to 'localhost' or '127.0.0.1'.
Also, sometimes you want to run two separate processes on the same station when by design they should run on the same station. You can set up communication between them through sockets and use localhost on the client-side part, as in the sketch below.
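A minimal sketch of that second case in Python (the port number 50007 is arbitrary):

# server.py -- listens only on the loopback interface
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 50007))   # reachable only from this machine
srv.listen(1)
conn, addr = srv.accept()
print("connected from", addr)    # addr[0] is always 127.0.0.1
print(conn.recv(1024).decode())
conn.close()
srv.close()

# client.py -- run it on the same station
import socket

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("localhost", 50007))  # "localhost" resolves to 127.0.0.1
cli.sendall(b"hello over the loopback")
cli.close()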
The local loopback can be used for applications to communicate with each other. There are many ways to do this, but this is one of the simplest.
As a concrete application, a great example is the Apache server, which by default listens on localhost as well. So when you are developing a web application, you can simply use localhost or 127.0.0.1 as the address in your favorite browser.
192.168.111.12 is not the same as 127.0.0.1.
In your case it is the IP that refers to your computer within your local network (behind some router). Other computers on your network can address your computer using it.
If you want to know more, or want something explained in more detail, feel free to ask.
I have a bunch of different websites, mostly random weekend projects that I'd like to keep on the web because they are still useful to me. They don't see more than 3-5 hits per day between all of them though, so I don't want to pay for a server for each of them when I could probably fit them all on a single EC2 micro instance. Is that possible? They all run off different web servers, since I tend to experiment with a lot of new tech. I was thinking I could have each webserver serve on a different port, then have incoming requests to app1.com get routed to app1.com:3000 and requests to app2.com get routed to app2.com:3001 and so on, but I don't know how I would go about setting that up.
I would suggest that what you are looking for is a reverse web proxy, which typically includes among its features the ability to understand portions of the request at layer 7, and direct the incoming traffic to the appropriate set of one or more back-end ip/port combinations based on what's observed in the request headers (or other aspects of the request).
Apache, Varnish, and Nginx all have this capacity, as does HAProxy, which is the approach that I use because it seems to be very fast and easy on memory, and thus appropriate for use on a micro instance... but that is not at all to imply that it is somehow more "correct" to use than the others. The principle is the same with any of those choices; only the configuration details are different. One service is listening to port 80, and based on the request, relays it to the appropriate server process by opening up a TCP connection to the appropriate destination, tying the ends of the two pipes together, and otherwise for the most part staying out of the way.
Here's one way (among several alternatives) that this might look in an haproxy config file:
frontend main
    bind *:80
    use_backend app1svr if { hdr(host) -i app1.example.com }
    use_backend app2svr if { hdr(host) -i app2.example.com }

backend app1svr
    server app1 127.0.0.1:3001 check inter 5000 rise 1 fall 1

backend app2svr
    server app2 127.0.0.1:3002 check inter 5000 rise 1 fall 1
This says: listen on port 80 of all local IP addresses; if the "Host" header matches "app1.example.com" (-i makes the match case-insensitive), then use the "app1svr" backend configuration and send the request to that server; do something similar for app2.example.com. You can also declare a default_backend to use if none of the ACLs match; otherwise, when nothing matches, it returns "503 Service Unavailable", which is also what it returns if the requested back-end isn't currently running.
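For example, one extra line would do it (reusing app1svr here purely for illustration):

# added to the "frontend main" section shown above
default_backend app1svr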
You can also configure a stats endpoint to show you the current state and traffic stats of your frontends and backends in an HTML table.
Since the browser isn't connecting "directly" to the web server any more, you have to configure and rely on the X-Forwarded-For header inserted into the request headers to identify the browser's IP address. There are other ways in which your applications may have to take the proxy into account, but this overall concept is exactly how web applications are typically scaled, so I don't see it as a significant drawback.
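On the HAProxy side this usually means enabling option forwardedfor; on the application side, here is a hedged sketch of how a Flask backend (to pick one framework) could honor the header using Werkzeug's ProxyFix middleware, assuming exactly one trusted proxy sits in front:

# a minimal sketch for a Flask app running behind one trusted proxy
from flask import Flask, request
from werkzeug.middleware.proxy_fix import ProxyFix

app = Flask(__name__)
# trust exactly one hop of X-Forwarded-For, i.e. the value set by the proxy
app.wsgi_app = ProxyFix(app.wsgi_app, x_for=1)

@app.route("/")
def index():
    # remote_addr now reflects the browser's IP, not the proxy's
    return "client ip: " + request.remote_addr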
Note that the haproxy examples above do use "Anonymous ACLs," of which the documentation says:
It is generally not recommended to use this construct because it's a lot easier
to leave errors in the configuration when written that way. However, for very
simple rules matching only one source IP address for instance, it can make more
sense to use them than to declare ACLs with random names.
— http://cbonte.github.io/haproxy-dconv/configuration-1.4.html
For simple rules like these, this construct makes more sense to me than explicitly declaring an ACL and then later using that ACL to cause the action that you want, because it puts everything together on the same line.
I use this to solve a different root problem that has the same symptoms -- multiple sites for development/test projects, but only one possible external IP address (which by definition means "port 80" can only go to one place). This allows me to "host" development and test projects on different ports and platforms, all behind the single external IP of my home DSL line. The only difference in my case is that the different sites are sometimes on the same machine as the haproxy and other times they're not, but the application seems otherwise identical.
Rerouting the way you show depends on the OS your server is hosted on. On Linux you would use iptables; on Windows, the Windows firewall. You would set all incoming connections on port 80 to be redirected to the desired port 3000, as sketched below.
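On Linux, a hedged sketch of that rule (assuming the app listens on port 3000 on the same machine):

# redirect incoming TCP traffic on port 80 to local port 3000
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 3000

Note that iptables works at layer 3/4, so it cannot tell hostnames apart; everything arriving on port 80 goes to the same port, which is why the host-name approach below needs virtual hosts or a reverse proxy instead.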
But, instead of port, you could use a different host name for each service, like
app1.apps.com
app2.apps.com
and so on. You can configure this at your DNS hosting for apps.com. IMHO this is the best solution, if I understood you correctly.
Also, you can configure a single host to reroute to all other sites, like
app1.com:3001 -> apphost1.com
app1.com:3002 -> apphost2.com
Keep in mind that, in this case, all traffic will pass through app1.com.
You can easily do this. Set up a different hostname for each app you want to use, create a DNS entry that points to your micro instance, and create a name-based virtual host entry for each app.
Each virtual host entry should look something like:
<VirtualHost *>
    ServerName app1.example.com
    DocumentRoot /var/www/html/app1/
    DirectoryIndex index.html
</VirtualHost>
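Since your apps listen on their own ports rather than serving files from a DocumentRoot, the vhost would more likely proxy to the local port. A hedged sketch, assuming mod_proxy and mod_proxy_http are enabled and app1 listens on port 3000:

<VirtualHost *>
    ServerName app1.example.com
    # relay everything for this hostname to the app's local port
    ProxyPass / http://127.0.0.1:3000/
    ProxyPassReverse / http://127.0.0.1:3000/
</VirtualHost>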
My Django app is hosted on Amazon EC2. Gunicorn runs on the same machine and serves all the dynamic content that I have. There is no static content. I have TWO of these machines (both running Ubuntu 11.04 on micro instances; these are easy to scale horizontally), and I have an ELB (Elastic Load Balancer) sitting in front of both of these servers.
For the sake of example, the external ip of both of these gunicorn/django ubuntu machines is:
12.34.567.12:8000 and 21.43.765.21:8000 (gunicorn runs on port 8000).
If I were to put either of these addresses into my browser, I can interact with my server and send/retrieve data.
When I place an ELB in front of these two machines, the new address I can use to interact with BOTH DJANGO/GUNICORN SERVERS is:
dualstack.myloadbalancer-123456789.us-east-1.elb.amazonaws.com:8000
I've been reading a lot of resources across the internet, and many suggested having an NGINX box sitting between the ELB and the Django app servers to buffer requests for slow clients. I think this would be a good feature to have, since I don't want to lose any requests; the setup would be client -> ELB -> nginx -> gunicorn/Django.
Given that setup, how can I configure the nginx box sitting in front of the django app/gunicorn servers to act as a reverse proxy, so that it buffers requests for slow clients? (This way, instead of timing out, it keeps the request without losing it.)
You're looking for the nginx HttpProxyModule I believe.
You define an upstream in nginx:
upstream webservers {
    server 12.34.567.12:8000;
    server 21.43.765.21:8000;
}
And then forward requests via proxy_pass to the upstream:
server {
    listen 443;  # the port you want nginx to listen on
    location / {
        proxy_set_header Host $http_host;
        proxy_read_timeout 330;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://webservers;
    }
}
And unless I'm mistaken, the HttpProxyModule buffers the entire request before passing it on.
This may break some things that require streaming or interaction during the request, but that's a limitation you face.
My nginx is a bit rusty, so this might not work verbatim, but it should be something along these lines.
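If you would rather make the buffering explicit than rely on defaults, a hedged sketch of directives that could go inside the location block above (the sizes are illustrative):

# inside the location block above
proxy_buffering on;              # buffer upstream responses for slow clients
client_body_buffer_size 128k;    # buffer the client's request body in memory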
You definitely want nginx sitting in front of gunicorn. It is a common setup and you can find a lot of resources to help you get started. I like this tutorial: http://senko.net/en/django-nginx-gunicorn/, which will also walk you through supervisord and setting up a virtualenv.
If you're looking to host on EC2 with Ubuntu Server, then these are some good tutorials apart from the one mentioned by Nathan.
http://ijcdigital.com/blog/django-gunicorn-and-nginx-setup/
This one covers setting up Nginx and Gunicorn on Ubuntu.
http://ydevel.tumblr.com/post/22850778860/configuring-an-https-site-with-django-on-nginx
This one also covers an HTTPS configuration.
http://adrian.org.ar/automatic-setup-of-django-nginx-and-gunicorn-on-ec2/ This one is quite interesting because it explains how to install Nginx with Gunicorn on EC2 using Bellatrix (a nice library on top of boto). Do go through this one at least once; it's very good.
Best of luck with your deployment. If you find the answer helpful, do accept it.
I have a running web service published at "http://localhost:8080/FreeMeteoWS/FreeMeteoWS?WSDL". I want to access this web service from a device on the same network. What address should I use in order to retrieve the WSDL?
If that port, i.e. 8080, is open for incoming connections on your computer, you only need to find your local IP address; this is done in different ways on different operating systems. Once you have obtained the local IP address, swap localhost out in favor of that IP address.
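A hedged sketch of finding the local IP address from Python itself (it opens a UDP socket toward a routable address; no packets are actually sent):

import socket

def local_ip():
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))     # any routable address will do
        return s.getsockname()[0]      # the source IP the OS picked
    finally:
        s.close()

print(local_ip())  # e.g. 192.168.1.23, so the device would request
                   # http://192.168.1.23:8080/FreeMeteoWS/FreeMeteoWS?WSDL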
I have written client/server code.
The server program executes in a terminal and simply receives text data from the client; the client is a GUI where you can specify the IP address of the machine the server is running on.
However, this works only in a closed network (LAN).
I have just learnt TCP/IP and have written a few programs that run on a LAN.
I want to make this program work across the network (over the internet).
But I have some basic doubts: does one need the permission of the local ISP for such programs to run across the internet? Does it involve buying a domain or some other kind of permission?
Can someone please tell me what I should be doing, or where I should start?
The listener has to have its IP port opened in some way. If you are behind a router, you should set up proper port forwarding on the router, and if your ISP provides its own subnet, you should find out how to set up such a link (I do not know what kind of technology the ISP might use for this).
To begin with, you do not need your own domain name, but you should be able to address the server by IP. If you need a domain, register your own domain name or create a subdomain for free (I was using http://freedns.afraid.org/).
If your server is behind a router which creates a LAN, you have to configure the router so that it forwards the packets from your client to the server.
You have to forward all incoming packets on the specific port to the local IP of the server.