How to carry out performance testing on a sticky-session-enabled, load-balanced web application? - cookies

Hi,
I have read a lot of blogs and tutorials, but I cannot figure out how to carry out performance testing on a cookie-based sticky web application that sits behind a reverse-proxy load balancer. I have three backend application servers serving the same instance of a shopping cart, and a load balancer sits in front of them and directs the traffic.
Problem: when I send HTTP requests for performance analysis, the load balancer (which tracks the client through a cookie) directs each request to the same backend server it was originally assigned to. I have the option of using IP spoofing, but it won't work when the backend servers are distributed across a WAN rather than a LAN. Moreover, each backend server has its own public IP address and sits behind a firewall.
Question: Is there a way JMeter can be configured to load test in this scenario, or is there another, better solution?
I'd much appreciate your thoughts and contributions.
Regards

Here are a few possible workarounds:
Point different JMeter instances directly at different backend hosts, bypassing the load balancer (see the sketch after this list).
Use Distributed Testing with JMeter nodes somewhere in the cloud, e.g. Amazon Micro Instances, which are free-tier eligible. You can use the JMeter ec2 Script to simplify the installation, configuration, and execution.
Try using the DNS Cache Manager; it enables individual DNS resolution for each JMeter thread.
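Not a JMeter artifact, but a minimal Python sketch of the first workaround, assuming three hypothetical backend IPs and a /cart path: each worker is pinned to one backend and keeps its own session (and thus its own cookies), so the load balancer never enters the picture.

```python
import threading
import requests

BACKEND_HOSTS = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]  # placeholder backend IPs
PATH = "/cart"  # placeholder application path

def worker(host, iterations=100):
    session = requests.Session()  # one session per simulated user keeps cookies isolated
    for _ in range(iterations):
        resp = session.get(f"http://{host}{PATH}", timeout=10)
        print(host, resp.status_code)

# One load-generating worker per backend, hitting the hosts directly.
threads = [threading.Thread(target=worker, args=(h,)) for h in BACKEND_HOSTS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```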

Related

Polyglot and Client Side Load Balancing

With Cloud Foundry's "Polyglot" feature for integrated service discovery and direct communication between service containers through internal routes, how does the load balancing work? Does Cloud Foundry take care of the load balancing? Is there a way to use client-side load balancing, something like Ribbon, on top of this polyglot-enabled communication?
When you are using container-to-container networking...
If you connect directly to IP addresses, no load balancing is done.
If you use the platform's DNS-based polyglot service discovery, then you will get limited load balancing via round-robin DNS.
With the polyglot service discovery feature, DNS responses are rotated so that IPs are listed in different orders in the response. You can observe/validate this by doing the following:
Map an internal route to an app
Scale the same app up to have two or more instances
Run cf ssh into any app container
Inside the container, run dig <internal-route>
Repeat the last step any number of times. You should see the DNS response come back with the IP addresses in a different order (they are rotated); the sketch after these steps shows the same check from Python.
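If you'd rather script that check than run dig by hand, here is a rough Python equivalent, runnable from inside an app container (internal routes only resolve there); the route name myapp.apps.internal is a placeholder.

```python
import socket

# Resolve the internal route a few times; the platform's DNS should return
# the instance IPs in a rotated order on successive lookups.
for _ in range(5):
    results = socket.getaddrinfo("myapp.apps.internal", 8080, type=socket.SOCK_STREAM)
    print([sockaddr[0] for _family, _type, _proto, _name, sockaddr in results])
```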
That said, there is nothing to stop you from using a different form of load balancing, be that a reverse-proxy app you have deployed or something client-side like Ribbon.

AWS Load Balancer - Remove cache elements on EC2

I'm currently scaling up from 1x EC2 server to:
1xLoad Balancer
2xEC2 servers
I have quite a lot of customers, each running our service on their own domain.
We have a web front end and an admin interface, and we use a lot of caching. When something is changed in the admin part, the server calls e.g. customer.net/cacheutil.ashx?f=delete&obj=objectname to remove the object across domains.
With the new setup, I don't know how to do this with multiple servers, ensuring that the cached objects are deleted on both servers (or more, if we choose to launch more).
I think it is a bit much to require our customers to point e.g. "web1.customer.net", "web2.customer.net", and "customer.net" at three different DNS CNAMEs, since they are not that IT-experienced.
How does anyone else do this?
When scaling horizontally, it is recommended to keep your web servers stateless. That is, do not store data on a specific server. Instead, store the information in a database or cache that can be accessed by all servers (e.g. DynamoDB, ElastiCache).
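As a minimal sketch of that shared-cache approach, assuming ElastiCache for Redis and the redis-py client (the endpoint and key naming are placeholders), a single delete against the shared cache invalidates the object for every web server at once:

```python
import redis

# All web servers talk to the same ElastiCache (Redis) endpoint.
cache = redis.Redis(host="my-cluster.abc123.use1.cache.amazonaws.com", port=6379)

def invalidate(object_name: str) -> None:
    # One delete here is immediately visible to every server in the fleet,
    # replacing the per-server cacheutil.ashx?f=delete&obj=... calls.
    cache.delete(f"page-cache:{object_name}")

invalidate("objectname")
```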
Alternatively, use the Sticky Sessions feature of Elastic Load Balancing, which uses a cookie to always route a user's connection back to the same server.
See the documentation: Configure Sticky Sessions for Your Load Balancer
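For completeness, a hedged sketch of switching on duration-based stickiness for a Classic ELB via boto3 (the load balancer name, policy name, and port are placeholders, and AWS credentials/region are assumed to be configured):

```python
import boto3

elb = boto3.client("elb")  # Classic ELB API

# Create a duration-based stickiness cookie policy...
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="my-load-balancer",
    PolicyName="my-sticky-policy",
    CookieExpirationPeriod=3600,  # seconds; omit to expire with the browser session
)
# ...and attach it to the HTTP listener.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-load-balancer",
    LoadBalancerPort=80,
    PolicyNames=["my-sticky-policy"],
)
```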

Clustering webservices over VPN

We have a number of web services exposed over VPN to our partners for their consumption. I was wondering what the best way would be to make those web services highly available and scalable for their usage. One option could be an Apache server sitting in front of our web services, acting as a reverse proxy, but that would introduce a single point of failure too. Can we use a physical load balancer? I was not able to find any useful resources for planning out this activity. Any thoughts/ideas?
I have not worked with physical load balancers, but Apache is a valid solution in most scenarios.
All of our clients (with critical back-end systems) use Apache as a load balancer without problems.
Most application servers also provide their own integration with Apache, like mod_jk for WebLogic or mod_cluster for JBoss.
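The answer above leaves the asker's single-point-of-failure worry open; one simple mitigation, not specific to Apache, is to run two reverse proxies and have clients fail over between them. A Python sketch with placeholder hostnames:

```python
import requests

# Two independent reverse-proxy endpoints (placeholders).
PROXIES = ["https://proxy-a.example.com", "https://proxy-b.example.com"]

def call_service(path: str) -> requests.Response:
    last_error = None
    for base in PROXIES:
        try:
            return requests.get(base + path, timeout=5)
        except requests.RequestException as err:
            last_error = err  # this proxy is down; try the next one
    raise last_error  # every proxy failed

resp = call_service("/ws/orders")  # placeholder web-service path
print(resp.status_code)
```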

Setting up a loadbalancer behind a proxy server on Google Cloud Compute engine

I am looking to build a scalable REST web service on Google Cloud Compute Engine, but I have a couple of requirements that I am not sure how best to implement.
Structure so far:
2 instances running a REST web service, connected to a MySQL cloud database
(the number of instances will scale up in the future)
A load balancer to split requests between the two or more instances.
This part is fine.
What I need next is for the traffic (POST requests from the instances to an external web service) to come from a single IP address. I assume these requests cannot route back out through the public IP of the load balancer?
I get the impression the solution is to route all requests from the instances through a third instance running Squid. Is this the best way to do it? (side question)
Now to my main question:
I have been reading about ApiAxle, which sounds like a nice proxy for web services, offering some good access-control, throttling, and reporting capabilities.
Can I have an instance running ApiAxle, followed by a Google Cloud load balancer that distributes the requests from the proxy to the backend instances that do the legwork and feed the responses back through the ApiAxle proxy, so that everything goes through a single IP visible to clients using the API? (This would let me add new instances to the pool to add capacity.)
And would the proxy be much of a bottleneck?
Thanks in advance.
/Dave
(I'm new to this, so sorry if it's a stupid question; I can't find anything like this on the web.)
Sounds like you need to NAT your outbound traffic so it appears to come from one IP address. You need to do that via a third instance, since the Google load-balancing stack doesn't provide this; GCLB works only with inbound connections on the load-balanced IP.
You can set up source NAT using advanced routing, or you can use a proxy as you suggested.
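For the proxy variant, the application-side change on each instance is small: send outbound requests through the Squid box, so the external service always sees Squid's single IP. A sketch with the requests library, where the Squid address and target URL are placeholders:

```python
import requests

SQUID = "http://10.240.0.100:3128"  # placeholder internal address of the Squid instance
proxies = {"http": SQUID, "https": SQUID}

# An outbound POST from a backend instance, routed through Squid so the
# external web service sees one source IP regardless of which instance sent it.
resp = requests.post(
    "https://external.example.com/api",  # placeholder external endpoint
    json={"event": "order.created"},
    proxies=proxies,
    timeout=10,
)
print(resp.status_code)
```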

Load Balancing Multiple Django Webservers

I was wondering if anyone had ever implemented multiple Django webservers pointing to a single database, essentially functioning as a single website via load balancing?
What software did you use for load balancing?
What additional setup/configuration did your Django webservers require?
Did you need to modify your Django code in any way?
On an Amazon EC2 setup, I found AWS's Elastic Load Balancer to be pretty cool (apart from only supporting a single IP address per ELB instance).
The front-end Django boxes just needed their database settings altered to point to the separate database (i.e., given the database box's IP, which was an internal IP in terms of our EC2 ecosystem), and once the database box was made to listen on that IP and the appropriate port, we were ready to rock.
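For reference, the whole Django-side change amounts to something like this in each web server's settings.py (the engine, credentials, and internal IP below are placeholders):

```python
# settings.py on every front-end Django box: point at the shared database
# host instead of localhost. All values below are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.mysql",
        "NAME": "mysite",
        "USER": "mysite",
        "PASSWORD": "change-me",
        "HOST": "10.0.1.50",   # internal IP of the database box
        "PORT": "3306",
    }
}
```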