Running multiple variants of a webserver on different ports - cookies interfere

I have two instances of an identical service running on 2 ports - think of them as dev and test. They use session cookies that expire on inactivity.
When using the same browser, whenever I switch a tab from "dev" to "test" I have to log in again, which becomes annoying after the first few minutes. This happens because HTTP cookies are tied to the server name and are oblivious to the port.
I actually have multiple teams using the same test machine. My current workaround is to use different browsers. I am also considering taking different domain names and mapping them to the same IP address.
My question is: what am I missing, and is there a better solution? Is it possible to make cookies port-aware? (I don't have control over the software that names the cookies.)
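One thing worth knowing is that the "different domain names" workaround needs no real DNS changes to try out, because the browser keys cookies on the host name alone (the port is ignored, as you observed). A minimal sketch using a hosts file, with hypothetical names and a documentation-range placeholder IP:

# /etc/hosts on each developer machine
# (C:\Windows\System32\drivers\etc\hosts on Windows)
192.0.2.10   dev.myservice.test
192.0.2.10   qa.myservice.test

Browsing to http://dev.myservice.test:<dev-port> and http://qa.myservice.test:<test-port> then gives you two independent cookie jars, even though both names resolve to the same machine and the same software.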

Related

How to fire up all docker containers on a same local ip address in django?

I am writing a Django-based application with Docker, where three apps run in different containers. All the Django applications run at 0.0.0.0:8000.
But when I look up the containers' IP addresses to open the applications in a browser, they are all reachable at different addresses:
project1 runs at 172.18.0.10:8000 can be accessed at: 172.18.0.10:8000/app1
project2 runs at 172.18.0.9:8000 can be accessed at: 172.18.0.9:8000/app2
project3 runs at 172.18.0.7:8000 can be accessed at: 172.18.0.7:8000/app3
which makes the hyperlinks in my apps unusable. How do I run all the containers at one single address, localhost:8000?
Any suggestions on where I am going wrong?
The design itself is where you have gone wrong: mapping multiple containers to one IP+port is simply impossible. One port on one IP always means exactly one listening application, whether it runs in a container or not.
A simple proof: who would decide which container a request should go to? All of them? Then who would decide which response is the correct one? That is exactly what IP addresses and ports are for: directing a request to a specific application on a specific machine.
I think you should reconsider whatever you are doing and do a bit more research on networking; there are several online courses on the subject. (I don't want to discourage you in any way, just point you in the right direction.)
A simple solution that avoids redesigning your app is to put a reverse proxy (e.g. nginx) in front of it. That is the answer to my rhetorical question: a reverse proxy is a middleman that can decide which application to send a request to based on something other than the IP/port. It listens on one specific port and then, following rules you provide (e.g. path-based), proxies each request to a specific app/IP/port and proxies the response back, as in the sketch below.
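A minimal nginx sketch of that path-based routing, reusing the container addresses from the question (in a real docker-compose setup you would normally point at service names rather than raw container IPs):

server {
    listen 8000;

    # route by path prefix; proxy_pass without a trailing URI passes the
    # original /appN/... path through to the container unchanged
    location /app1 { proxy_pass http://172.18.0.10:8000; }
    location /app2 { proxy_pass http://172.18.0.9:8000; }
    location /app3 { proxy_pass http://172.18.0.7:8000; }

    # let the Django apps see the original host and client address
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}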
But keep in mind that the reverse proxy here is more of a workaround than a proper solution.

How much overhead would a DNS call add to the response time of my API?

I am working on a cross-platform application that runs on iOS, Android and the web. Currently there is an API server which interacts with all the clients. Each request to the API is made via the IP address (e.g. http://1.1.1.1/customers). This prevents me from quickly moving the backend to another cloud VPS whenever I want to, since I would need to update the iOS and Android versions of the app through a painful migration process.
I thought the solution would be to introduce a subdomain (e.g. http://api.example.com/customers). How much would the additional DNS lookup affect response times?
The thing to remember about DNS queries is that, as long as you have configured your DNS sensibly, clients will only ever make a lookup the first time communication is needed; after that the answer comes from cache.
A full DNS resolution will typically involve three queries: one to a root server, one to the .com (or other TLD) server, and a final one to the example.com name server. Each of these takes milliseconds, and the sequence is performed once and then repeated only when the TTL expires, probably every hour or so.
The TL;DR is that this is a no-brainer: you get far, far more advantages from using a domain name than you will ever get from an IP address. The time cost is minimal, and the packet size is tiny.
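If you want to see those lookups for yourself, dig's trace mode walks the delegation chain from the root (assuming a Unix-like machine with dig installed), and repeating a plain query shows the second answer coming from your resolver's cache:

dig +trace api.example.com    # walks root -> .com -> example.com
dig api.example.com           # first query: full resolution via your resolver
dig api.example.com           # repeat within the TTL: answered from cache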

Hosting multiple websites on a single server

I have a bunch of different websites, mostly random weekend projects that I'd like to keep on the web because they are still useful to me. They don't see more than 3-5 hits per day between all of them though, so I don't want to pay for a server for each of them when I could probably fit them all on a single EC2 micro instance. Is that possible? They all run off different web servers, since I tend to experiment with a lot of new tech. I was thinking I could have each webserver serve on a different port, then have incoming requests to app1.com get routed to app1.com:3000 and requests to app2.com get routed to app2.com:3001 and so on, but I don't know how I would go about setting that up.
I would suggest that what you are looking for is a reverse web proxy, which typically includes among its features the ability to understand portions of the request at layer 7, and direct the incoming traffic to the appropriate set of one or more back-end ip/port combinations based on what's observed in the request headers (or other aspects of the request).
Apache, Varnish, and Nginx all have this capacity, as does HAProxy, which is the approach that I use because it seems to be very fast and easy on memory, and thus appropriate for use on a micro instance... but that is not at all to imply that it is somehow more "correct" to use than the others. The principle is the same with any of those choices; only the configuration details are different. One service is listening to port 80, and based on the request, relays it to the appropriate server process by opening up a TCP connection to the appropriate destination, tying the ends of the two pipes together, and otherwise for the most part staying out of the way.
Here's one way (among several alternatives) that this might look in an haproxy config file:
frontend main
    bind *:80
    use_backend app1svr if { hdr(host) -i app1.example.com }
    use_backend app2svr if { hdr(host) -i app2.example.com }

backend app1svr
    server app1 127.0.0.1:3001 check inter 5000 rise 1 fall 1

backend app2svr
    server app2 127.0.0.1:3002 check inter 5000 rise 1 fall 1
This says: listen on port 80 of all local IP addresses; if the Host header matches "app1.example.com" (-i means case-insensitive), use the "app1svr" backend configuration and send the request to that server; do the same for app2.example.com. You can also declare a default_backend to use if none of the ACLs match; otherwise, when nothing matches, it will return "503 Service Unavailable," which is also what it returns if the requested back-end isn't currently running.
You can also configure a stats endpoint to show you the current state and traffic stats of your frontends and backends in an HTML table.
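Both extras are small additions; a hedged sketch (the stats port and URI below are illustrative choices, not required values):

    # inside "frontend main" above: fall through here when no ACL matches
    default_backend app1svr

# a separate stats listener
listen stats
    bind *:8404
    mode http
    stats enable
    stats uri /haproxy?stats   # HTML status table served at this path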
Since the browser isn't connecting "directly" to the web server any more, you have to configure and rely on the X-Forwarded-For header inserted into the request headers to identify the browser's IP address, and there are other ways in which your applications may have to take the proxy into account, but this overall concept is exactly how web applications are typically scaled, so I don't see it as a significant drawback.
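Getting that header inserted is a one-line directive in HAProxy (a sketch; it belongs in the defaults or frontend section and requires HTTP mode):

defaults
    mode http
    option forwardfor   # append X-Forwarded-For with the client's IP to each request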
Note these examples do use "Anonymous ACLs," of which the documentation says:
It is generally not recommended to use this construct because it's a lot easier
to leave errors in the configuration when written that way. However, for very
simple rules matching only one source IP address for instance, it can make more
sense to use them than to declare ACLs with random names.
— http://cbonte.github.io/haproxy-dconv/configuration-1.4.html
For simple rules like these, this construct makes more sense to me than explicitly declaring an ACL and then later using that ACL to cause the action that you want, because it puts everything together on the same line.
I use this to solve a different root problem that has the same symptoms -- multiple sites for development/test projects, but only one possible external IP address (which by definition means "port 80" can only go to one place). This allows me to "host" development and test projects on different ports and platforms, all behind the single external IP of my home DSL line. The only difference in my case is that the different sites are sometimes on the same machine as the haproxy and other times they're not, but the application seems otherwise identical.
Rerouting in the way you show depends on the OS your server runs on. On Linux you would use iptables; on Windows, the Windows firewall. You would set all incoming connections on port 80 to be redirected to the desired port, e.g. 3000, as sketched below.
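A minimal sketch of that redirect on Linux (note it is a blunt instrument: it sends all port-80 traffic to a single port, so unlike the reverse-proxy answer above it cannot dispatch by host name):

# redirect every inbound TCP connection on port 80 to local port 3000
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000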
But instead of ports, you could use a different host name for each service, like
app1.apps.com
app2.apps.com
and so on. You can configure this through the DNS records for apps.com at your DNS host. IMHO this is the best solution, if I understood you correctly.
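In zone-file terms that is just several records pointing at the same address (a sketch with a documentation-range placeholder IP):

; both names resolve to the single server
app1.apps.com.   IN  A   203.0.113.10
app2.apps.com.   IN  A   203.0.113.10
; or cover all future apps at once with a wildcard
*.apps.com.      IN  A   203.0.113.10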
Also, you can configure a single host to reroute to all other sites, like
app1.com:3001 -> apphost1.com
app1.com:3002 -> apphost2.com
Keep in mind that in this case all traffic will pass through app1.com.
You can easily do this. Set up a different hostname for each app you want to use, create a DNS entry that points to your micro instance, and create a name-based virtual host entry for each app.
Each virtual host entry should look something like:
<VirtualHost *>
    ServerName app1.example.com
    DocumentRoot /var/www/html/app1/
    DirectoryIndex index.html
</VirtualHost>
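Since the question has each app served by its own webserver on its own port, a virtual host can also proxy rather than serve files directly; a hedged sketch (assumes mod_proxy and mod_proxy_http are enabled, and on Apache 2.2 a NameVirtualHost * line for name-based hosting):

<VirtualHost *>
    ServerName app2.example.com
    # hand every request off to the app's own server on its local port
    ProxyPass        / http://127.0.0.1:3001/
    ProxyPassReverse / http://127.0.0.1:3001/
</VirtualHost>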

How to handle sessions on multiple jettys on same host & port - dynamic contexts

I have the following requirements
Multiple JARs. Each running an embedded Jetty.
Run all of them on the same domain/port - using a reverse proxy (Apache).
A JAR can have multiple instances running on different machines (yet under the same host/port).
Complete session separation - absolutely no sharing even between 2 instances of same webapp.
Scale this all dynamically.
I do not know if this is relevant, but I know Spring Security is used in some of these web apps.
I got everything up and running by adding reverse proxy rules and restarting Apache.
Here is a simplified description of 2 instances for webapp-1 and 2 instances for webapp-2.
http://mydomain.com/app1 ==> 1.1.1.1:9099
http://mydomain.com/app2 ==> 1.1.1.1:9100
http://mydomain.com/app3 ==> 1.1.1.2:9099
http://mydomain.com/app4 ==> 1.1.1.2:9100
After setting this up successfully (almost), we are seeing problems with the JSESSIONID cookie.
Every app overwrites the others' cookie, which means we have yet to achieve total session separation: one app still affects the others.
I read a lot about this issue online, but the solutions never really suffice in my scenario.
The IDEAL solution for me would be to configure Jetty to use some kind of UUID for the cookie name. I still cannot figure out why this is not the default.
I would even go for a JavaScript solution. JavaScript has the advantage that it can see the URL after the reverse proxy's manipulation, so for http://mydomain.com/XXX I could define the cookie name to be XXX_JSESSIONID.
But I cannot find a how-to for either approach.
So how can I resolve this and get a total separation of sessions?
You must spend some time understanding which session manager you are using and what features/benefits it gives you. If you have no database available and no custom session manager, then I am inclined to believe you are using the HashSessionManager that we distribute, which handles sessions on a single host only; there is no session sharing across JVMs in that case.
If you are running 4 separate JVM processes (and using the HashSessionManager), as the above seems to indicate, then no sessions are being shared across nodes.
Also, you seem to be looking to change the name of the session-id cookie for each application. To do that, simply configure a different cookie name for each application:
http://www.eclipse.org/jetty/documentation/current/session-management.html
You can set a new org.eclipse.jetty.servlet.SessionCookie name for each webapp context and that should address your immediate issue.
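For example, as a context init parameter (a sketch; the cookie name below is illustrative, and newer Jetty releases expose the same setting through the standard SessionCookieConfig API instead):

<!-- in webapp-1's web.xml, or set as an init parameter on the embedded context -->
<context-param>
    <param-name>org.eclipse.jetty.servlet.SessionCookie</param-name>
    <param-value>APP1_JSESSIONID</param-value>
</context-param>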

Move to 2 Django physical servers (front and backend) from a single production server?

I currently have a growing Django production server that has all of the front end and backend services running on it. I could keep growing that server larger and larger, but instead I want to try and leave that main server as my backend server and create multiple front end servers that would run apache/nginx and remotely connect to the main production backend server.
I'm using slicehost now, so I don't think I can benefit from having the multiple servers run on an intranet. How do I do this?
The first step in scaling your server is usually to separate out the database server. I'm assuming that is all you meant by "backend services", since you haven't given any more details.
All this needs is a change to your settings file. Change DATABASE_HOST from localhost to the new IP of your database server.
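A sketch of that change using the flat DATABASE_* settings Django had at the time (modern versions nest these in a DATABASES dict; the engine, name and address below are placeholders):

# settings.py
DATABASE_ENGINE = 'postgresql_psycopg2'  # whichever engine you already use
DATABASE_NAME   = 'mydb'                 # unchanged
DATABASE_HOST   = '10.0.0.2'             # was '' or 'localhost'; now the database server's IP
DATABASE_PORT   = ''                     # empty means the engine's default port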
If your site is heavy on static content, creating a separate media server could help. You may even look into a CDN.
The first step usually is to separate the server running the actual Python code from the database server. Any background jobs that do processing would probably run on the database server. I assume that when you say front end server, you actually mean a server running Python code.
Now, as every request will involve a number of database queries, latency between the webserver and the database server is very important. I don't know if Slicehost has some feature that lets you create two virtual machines that are "close" in terms of network latency (a quick Google search did not find anything). They seem like nice guys, so maybe you could ask them whether they have such a service or could make an exception.
Anyway, when you do have two machines on Slicehost, you could check the latency between them by simply pinging between them. When you have the result you will probably know if this is at all feasible or not.
Further steps depends on your application. If it is media heavy, then maybe using a separate media server would make sense. Otherwise the normal step is to add more web servers.
--
As a side note, I personally think it makes more sense to invest in real dedicated servers with dedicated network equipment for this kind of setup. This of course depends on what budget you are on.
I would also suggest looking into Amazon EC2 where you can provision servers that are magically close to each other.