Are code-level changes required when moving to a load-balancing environment? - ColdFusion

My client wants to move to a ColdFusion load-balancing environment for better availability and scalability of the site. I know how to set up clusters and instances in the ColdFusion Administrator, and we should also use J2EE session management for sticky sessions.
But I am not sure what other code-level changes are required when moving from a single server to a load-balanced environment.
Does anyone with experience have suggestions, or any helpful links?

Skipping the session-scope issues you're bound to enjoy, I'll focus on less common code-level strategies.
You will have two or more isolated application scopes, which creates synchronization challenges. Examine the application code for writes to the application scope: should some condition require updating an application-scoped value, that change must be reflected in all sibling application scopes.
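The pattern is language-agnostic. A minimal sketch in Python (the store, key, and TTL are hypothetical stand-ins; in CFML you would typically persist the value to a database or other shared store and refresh each instance's application scope from it):

    import time

    class SharedValue:
        """Per-instance cache of a value whose source of truth lives in a
        store all instances can reach (database row, cache key, ...), so a
        write on one instance reaches its siblings within ttl seconds."""

        def __init__(self, store, key, ttl=30):
            self.store, self.key, self.ttl = store, key, ttl
            self._value, self._fetched = None, 0.0

        def get(self):
            # Refresh from the shared store when the local copy is stale.
            if time.time() - self._fetched > self.ttl:
                self._value = self.store.get(self.key)
                self._fetched = time.time()
            return self._value

        def set(self, value):
            # Write through to the shared store; sibling instances pick
            # the new value up on their next refresh.
            self.store[self.key] = value
            self._value, self._fetched = value, time.time()

    # Stand-in for a real shared store; each app instance would wrap the
    # same key.
    shared_store = {}
    instance_a = SharedValue(shared_store, "site_mode", ttl=5)
    instance_b = SharedValue(shared_store, "site_mode", ttl=5)
    instance_a.set("maintenance")  # instance_b sees it within 5 seconds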
Know that each instance will run its own onApplicationStart() and onApplicationEnd() events. Depending on what happens in that code, this can cause mischief.
Be aware of frameworks like FuseBox when load balancing: FuseBox generates files locally that are not replicated on the other server instances.
When logging, emailing errors, etc., include an instance identifier so you'll know which server you're working with.
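For example, a minimal sketch in Python (the hostname stands in for whatever per-instance identifier you configure):

    import logging
    import socket

    # Use the machine/instance name so log lines and error emails reveal
    # which node in the cluster produced them.
    INSTANCE_ID = socket.gethostname()

    logging.basicConfig(
        format=f"%(asctime)s [{INSTANCE_ID}] %(levelname)s %(message)s",
        level=logging.INFO,
    )
    logging.info("order 1234 failed")  # e.g. "... [web-02] INFO order 1234 failed"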
Should your app need the originating IP address of a request, you may need to enable X-Forwarded-For HTTP headers within your load balancer. Otherwise, you could get the IP of the load balancer on every request.
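The header handling is the same in any language. A minimal sketch in Python, assuming the common convention that the leftmost entry is the original client (check your load balancer's documentation):

    def client_ip(headers, remote_addr):
        """Return the originating client IP for a proxied request.

        X-Forwarded-For is a comma-separated list; each proxy appends the
        address it saw, so the leftmost entry is conventionally the client.
        Falls back to the socket address (the load balancer) if unset.
        """
        forwarded = headers.get("X-Forwarded-For", "")
        if forwarded:
            return forwarded.split(",")[0].strip()
        return remote_addr

    # Behind one load-balancer hop:
    print(client_ip({"X-Forwarded-For": "203.0.113.7, 10.0.0.5"}, "10.0.0.5"))
    # -> 203.0.113.7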
Verify Identical on EACH Instance:
Security implementations
ColdFusion & Java versions
Datasources
Mappings
Virtual directories
Shared resource locations, etc.
CF Admin settings: site-wide error handling, etc.
CF account privileges (important!)
Consider using the ColdFusion Server Manager to assist consistification. ;)

Related

Setting up Nginx as a reverse proxy for Apache vs just Apache Event MPM

In the Django docs for setting up mod_wsgi, the tutorial notes:
Django doesn’t serve files itself; it leaves that job to whichever Web
server you choose.
We recommend using a separate Web server – i.e., one that’s not also
running Django – for serving media. Here are some good choices:
Nginx
A stripped-down version of Apache
I understand this might be due to wasted resources when Apache spawns new processes to serve each static file, which Nginx avoids. However, Apache's (newish?) Event MPM seems to act similarly to an Nginx instance handing requests off to Apache workers. So I'd like to ask: instead of setting up Nginx as a reverse proxy for Apache, would using Apache's Event MPM be sufficient for serving static files?
Apache doesn't spawn a new process for each static file. Apache keeps persistent processes to handle concurrent and subsequent requests, just like Nginx. The difference is that Nginx uses a fully async model, whereas Apache relies on processes and/or threading for concurrency, although the Event MPM now uses an async model for initial request acceptance and keep-alive connections. For the majority of people, Apache alone is still a more than acceptable solution, so don't get ahead of yourself if you are just starting out and think you need a Google/Facebook-scale solution from the outset.
More important than a separate web server: if you're using Apache/mod_wsgi, serve the static files under a different host name. That way you avoid heavyweight cookie information being sent with every static file request. You can do this using virtual hosts in Apache. Also ensure you are using daemon mode of mod_wsgi to run the Django application, as that is a better architecture and provides many more options for setting timeouts, so your application can recover from various situations that might otherwise cause the server to lock up when overloaded.
For a system which provides a better out of the box configuration and experience than using Apache/mod_wsgi directly and configuring it yourself, look at using mod_wsgi-express.
https://pypi.python.org/pypi/mod_wsgi
http://blog.dscpl.com.au/2015/04/introducing-modwsgi-express.html
http://blog.dscpl.com.au/2015/04/using-modwsgi-express-with-django.html
http://blog.dscpl.com.au/2015/04/integrating-modwsgi-express-as-django.html
The advice about separating the web servers has two advantages. One is clearly outlined by Graham; the other is predictable resource consumption.
The number of resources per HTML page differs. Dedicating one web server to the application and another to static resources has the advantage that you know exactly how many concurrent visitors you can serve: the MaxClients setting of Apache.
Should this slow down the loading of images, note that those static-file web servers need very few modules and no measurable amount of CPU power, so a one-core machine with SSD disks is all you need, and scaling is cheap.
As Graham indicates, it starts with a STATIC_URL that uses a different hostname. Run it on the same server at the start; when scaling up, tie that hostname to a reverse proxy that serves from several image-server backend machines.
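A minimal sketch of the settings involved (hostnames are placeholders):

    # settings.py -- hypothetical hostnames. Serving static/media files
    # from a separate hostname keeps session cookies off those requests
    # and gives you a single knob to turn when you later move static
    # serving to its own machines.
    STATIC_URL = "http://static.example.com/static/"
    MEDIA_URL = "http://static.example.com/media/"

    # Initially, static.example.com can point at the same server; when
    # scaling up, point it at a reverse proxy in front of several
    # static-file backends -- no application code changes required.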

How to handle sessions on multiple Jettys on the same host & port - dynamic contexts

I have the following requirements:
Multiple JARs, each running an embedded Jetty.
Run them all on the same domain/port, using a reverse proxy (Apache).
A JAR can have multiple instances running on different machines (yet under the same host/port).
Complete session separation - absolutely no sharing, even between two instances of the same webapp.
Scale all this dynamically.
I do not know if this is relevant, but Spring Security is used in some of these web apps.
I got everything up and running by adding reverse proxy rules and restarting Apache.
Here is a simplified description of 2 instances of webapp-1 and 2 instances of webapp-2:
http://mydomain.com/app1 ==> 1.1.1.1:9099
http://mydomain.com/app2 ==> 1.1.1.1:9100
http://mydomain.com/app3 ==> 1.1.1.2:9099
http://mydomain.com/app4 ==> 1.1.1.2:9100
After setting this up successfully (almost), we see problems with the JSESSIONID cookie.
Every app overwrites the others' cookie, which means we have yet to achieve total session separation: one app still affects the others.
I read a lot about this issue online, but the solutions never really suffice in my scenario.
The IDEAL solution for me would be to configure Jetty to use some kind of UUID for the cookie name. I still cannot figure out why this is not the default.
I would even go for a JavaScript solution. JavaScript has the advantage that it can see the URL after the reverse-proxy manipulation, so for http://mydomain.com/XXX I could set the cookie name to XXX_JSESSIONID.
But I cannot find a how-to for either approach.
So how can I resolve this and get a total separation of sessions?
You should spend some time understanding which session manager you are using and what features/benefits it gives you. If you have no database available and no custom session manager, then I am inclined to believe you are using the HashSessionManager that we distribute, which handles session management on a single host only; there is no session sharing across JVMs in that case.
If you are running four separate JVM processes (and using the HashSessionManager), as the above seems to indicate, then no sessions are being shared across nodes.
You also seem to want to change the name of the session ID cookie for each application; Jetty lets you configure that per web application.
http://www.eclipse.org/jetty/documentation/current/session-management.html
You can set a new org.eclipse.jetty.servlet.SessionCookie name for each webapp context and that should address your immediate issue.

designing for failure when polling/retrying/queueing is not possible

I'm designing a website/web service to be hosted in the cloud (specifically AWS, although that's mostly irrelevant), and I'm spending a lot of time thinking about "designing for failure". I want my system to handle node failures seamlessly, i.e. without any significant user impact or engineer intervention.
In most cases, it's easy to see how to handle sudden node failure. If my app has an API served by four servers behind a load balancer and polled by AJAX or an iPhone app, the poller can simply detect the failed TCP/IP transmission and retry; assuming the load balancer behaves correctly, the retry will hit a healthy instance.
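A sketch of that retry logic, in Python with the requests library purely for illustration (URL and retry counts are arbitrary):

    import requests
    from requests.exceptions import ConnectionError, Timeout

    def poll(url, attempts=3, timeout=5):
        """Retry on failed transmissions; behind a well-behaved load
        balancer, each retry should land on a healthy instance."""
        for attempt in range(attempts):
            try:
                return requests.get(url, timeout=timeout)
            except (ConnectionError, Timeout):
                if attempt == attempts - 1:
                    raise  # all instances (or the balancer itself) are down

    # e.g. response = poll("https://api.example.com/status")  # hypothetical endpoint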
If the app is more processing-oriented, a queue service like SQS can be used to allow stateless nodes to pick up where the failed nodes left off.
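A minimal sketch of that worker pattern (using boto3 and a hypothetical queue URL). The key point is deleting a message only after the work succeeds, so a message held by a node that dies becomes visible again once its visibility timeout expires and another node retries it:

    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/work-queue"  # hypothetical

    def process(body):
        ...  # application-specific work

    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=1,
            WaitTimeSeconds=20,  # long polling
        )
        for msg in resp.get("Messages", []):
            process(msg["Body"])
            # Delete only after successful processing: if this node dies
            # first, the message reappears and a healthy node retries it.
            sqs.delete_message(QueueUrL=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]) if False else sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])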
The difficulty I see is with "points of entry", where no retry/polling is possible because the application hasn't been loaded yet, and where a failure means the app never starts. For example, the index.html of a web page: if a node fails while transmitting that file, the user's browser will likely hang and not automatically retry (the user will need to refresh).
The Load Balancer is also a single "point of entry/failure". However, in this case it appears we can solve the problem by creating multiple Load Balancers, and load balancing them using DNS Load Balancing as described here: http://blog.rightscale.com/2012/10/23/dns-load-balancing-and-using-multiple-load-balancers-in-the-cloud/
Is this a solution that would work for the simpler index.html case? Overall, how can we create redundancy where polling/retrying/queuing is not possible?
EDIT: Another idea is to host index.html statically on a CDN, S3, etc. (where resource availability is more dependable), although that rules out server-rendered dynamic content. Dynamic content could still be added if the page populates itself using JS, but that adds a dependency on JS as well as extra latency for the user.

Move to 2 Django physical servers (front and backend) from a single production server?

I currently have a growing Django production server that runs all of the front-end and back-end services. I could keep growing that server larger and larger, but instead I want to leave that main server as my back-end server and create multiple front-end servers that run Apache/Nginx and remotely connect to the main production back-end server.
I'm using Slicehost now, so I don't think I can benefit from having the multiple servers run on an intranet. How do I do this?
The first step in scaling your server is usually to separate out the database server. I'm assuming this is all you mean by "backend services", since you haven't given more details.
All this needs is a change to your settings file. Change DATABASE_HOST from localhost to the new IP of your database server.
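For example (the IP is a placeholder; DATABASE_HOST is the setting from Django versions of that era, and newer versions express the same thing in the DATABASES dict):

    # settings.py -- hypothetical IP of the separated database server.

    # Old-style setting, as referenced above:
    DATABASE_HOST = "10.0.0.2"  # was "localhost" (or empty for a local socket)

    # Newer Django versions use the DATABASES dict instead:
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": "mydb",
            "USER": "myuser",
            "PASSWORD": "secret",
            "HOST": "10.0.0.2",  # was "localhost"
            "PORT": "5432",
        }
    }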
If your site is heavy on static content, creating a separate media server could help. You may even look into a CDN.
The first step usually is to separate the server running the actual Python code from the database server. Any background jobs that do processing would probably also run on the database server. I assume that when you say front-end server, you actually mean a server running Python code.
Now, as every request has to make a number of database queries, latency between the web server and the database server is very important. I don't know if Slicehost has a feature that lets you create two virtual machines that are "close" in terms of network latency (a quick Google search did not find anything). They seem like nice guys, so maybe you could ask them if they have such a service or could make an exception.
Anyway, once you have two machines on Slicehost, you can check the latency between them by simply pinging; the result will probably tell you whether this is feasible at all.
Further steps depend on your application. If it is media-heavy, then using a separate media server might make sense; otherwise the normal step is to add more web servers.
--
As a side note, I personally think it makes more sense to invest in real dedicated servers with dedicated network equipment for this kind of setup. This of course depends on your budget.
I would also suggest looking into Amazon EC2 where you can provision servers that are magically close to each other.

BizTalkServerIsolatedHost disappeared from one server in multi-server group

Afternoon all,
We have a group of four BizTalk servers: two orchestration hosts and two adapter hosts. We have a number of orchestrations exposed as web services, and for the purposes of this question, it is important to note that these web services are hosted on the adapter servers, and run under the BizTalkServerIsolatedHost host instance.
This morning, we started seeing odd errors on both of the adapter servers when SOAP calls came into the web services, like this:
The Messaging Engine failed to register the adapter for "SOAP" for the receive location blahblahblah. Please verify that the receive location exists, and that the isolated adapter runs under an account that has access to the BizTalk databases.
We restarted IIS on both servers, which fixed the errors on ONE server, but the other server continued to fail. The errors continued after a reboot as well.
After chasing our tails for a while, we eventually discovered that the BizTalkServerIsolatedHost host instance on the still-failing server was gone. Just... gone. These applications have been in production for months. Everything had been working swimmingly through the morning, until this just happened.
I don't want to muddy the waters, because I think the problems are unrelated, but in the interest of providing enough information, this problem exactly coincided with a problem in our load-balancing network hardware. The load balancer, which provides a single URL to consumers, and round-robins between the two adapter servers, just stopped working. This problem has not been resolved, so I don't know what happened, but it certainly made troubleshooting more interesting...
So, I have two questions:
Has anyone seen this before, where a host instance disappears?
We cannot find anything in the event viewer or anywhere else that says the host instance was deleted. Is this logged somewhere?
Thanks,
Jason