Is it good to go ahead with web service load testing?

We have an application which targets Android, iOS and desktop, and all three clients consume the same web services.
What I would like to know is: should we record scenarios from Android, iOS and desktop separately and build scripts from them, or can we just hit the web services directly using JMeter?

A well-behaved load test should represent the real-life usage of the application under test as closely as possible. If the desktop, Android and iOS applications use the same endpoint and send the same requests in terms of parameters, it is enough to come up with a single Thread Group holding the request(s) representing the load.
If there are differences depending on the platform, e.g. in request parameters, headers, cookies, etc., it's better to go for different Thread Groups representing users on different platforms, or to consider using something like the Throughput Controller to mimic the real anticipated distribution of requests based on the source platform.
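For example, a Test Plan along these lines (the percentages are made-up placeholders; substitute your real traffic mix):

    Test Plan
      Thread Group (all virtual users)
        Throughput Controller (60% - desktop)
          HTTP Request (desktop parameters/headers)
        Throughput Controller (30% - Android)
          HTTP Request (Android parameters/headers)
        Throughput Controller (10% - iOS)
          HTTP Request (iOS parameters/headers)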

Related

Efficient C++ software stack for web development

What C++ software stack do developers use to create custom, fast, responsive and not overly resource-hungry web services?
I'd recommend taking a look at CppCMS:
http://cppcms.com
It fits exactly the situation you described:
performance-oriented (preferably web service) software stack
for C++ web development.
It should have a low memory footprint
work on UNIX (FreeBSD) and Linux systems
perform well under high server load and be able to handle many requests with great efficiency
[as I plan to use it in a virtual environment] where resources will be to some extent limited.
So far I have only come across the Staff WSF, Boost and POCO libraries. The latter two could be used to implement a custom web server...
The problem is that the web server is only about 2% of web development; there is so much other stuff to handle:
web templates
sessions
cache
forms
security, security, security - which is far from trivial
And much more; that is why you need a web framework.
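For a sense of what that buys you, here is the canonical CppCMS "hello world", adapted from the project's tutorial (the exact API can differ between CppCMS versions, and the service expects a JSON configuration file at startup):

    #include <cppcms/application.h>
    #include <cppcms/applications_pool.h>
    #include <cppcms/service.h>
    #include <cppcms/http_response.h>
    #include <iostream>

    // Minimal application: every URL gets the same HTML page back.
    class hello : public cppcms::application {
    public:
        hello(cppcms::service &srv) : cppcms::application(srv) {}
        virtual void main(std::string /*url*/) {
            response().out() << "<html><body><h1>Hello World</h1></body></html>";
        }
    };

    int main(int argc, char **argv) {
        try {
            cppcms::service srv(argc, argv);   // reads -c config.js from the command line
            srv.applications_pool().mount(cppcms::applications_factory<hello>());
            srv.run();                         // event loop: sessions, cache, etc. live here
        } catch (std::exception const &e) {
            std::cerr << e.what() << std::endl;
        }
    }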
You could write an Apache module and put all your processing code in there.
Or there's CppCMS or TreeFrog; or, for writing web services (not web sites), use gSOAP or Apache Axis.
But ultimately, there's no "easy to use framework", because C++ developers like to build apps from smaller components. There's no Ruby-style framework, but there is all manner of libraries for handling XML or whatever, and Apache offers the HTTP protocol bits in its module spec, so you can build up your app quite happily using whatever pieces make sense to you. Whether there's a market for bundling this up to make something easier to use is another matter.
Personally, the best web app system I wrote (for a company) used a very thin web layer in the web server (IIS and ASP, but this applies to any web server; use PHP, for example) that did nothing except act as a gateway, passing the data from the requests through to a C++ service. The C++ service could then be written as a normal C++ command-line server with well-defined entry points, using as thin an RPC system as possible (shared memory in our case, but you may want to check out ZeroMQ). This not only increased security but allowed us to scale easily, by shifting the services to app servers and running the web servers on different hardware. It was also really easy to test.
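As a rough sketch of the back-end half of that design, here is what the C++ service's request loop could look like using ZeroMQ's request/reply sockets (port 5555 and the "OK" reply are arbitrary placeholders):

    #include <zmq.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        void *ctx = zmq_ctx_new();
        void *rep = zmq_socket(ctx, ZMQ_REP);      // reply socket: one answer per request
        zmq_bind(rep, "tcp://*:5555");
        for (;;) {
            char buf[256];
            int n = zmq_recv(rep, buf, sizeof buf - 1, 0);
            if (n < 0) break;
            buf[n] = '\0';
            // ... well-defined entry point: dispatch on the request here ...
            const char *reply = "OK";
            zmq_send(rep, reply, strlen(reply), 0);
        }
        zmq_close(rep);
        zmq_ctx_destroy(ctx);
    }

The web layer then stays a dumb gateway: it forwards the request body to this socket and relays the reply.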

How to send lots of POST requests QUICKLY

I'm planning to develop a program for our university research that has to send lots of POST requests to different URLs. It must work as quickly as possible (we should process about 100kk, i.e. 100 million, URLs). What language should I use? (Currently I write in C++, Delphi and a bit of Perl.)
Also, I've heard that it's possible to write a multithreaded app in Perl using prefork that can process about 20-30k requests per minute. Is that true?
// Sorry for my bad English, but it seems to be the only place where I can get the right answer
Andrew
The 20-30k per minute figure is completely arbitrary. If you run this on an 8-core machine with a beefy network connection, you could probably surpass that.
However, I don't think your choice of programming language / library is going to matter much here. Instead, you're going to run into the number of concurrent TCP connections allowed by the machine, and also the bandwidth of the link itself.
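To make that concrete, here is a hedged sketch in C++ using libcurl with one easy handle per worker thread (the payload, URL list and thread count are placeholders; real code should check every return value and tune the concurrency against your connection limits):

    #include <curl/curl.h>
    #include <algorithm>
    #include <functional>
    #include <string>
    #include <thread>
    #include <vector>

    // Discard response bodies; we only care about firing the POSTs.
    static size_t discard(char *, size_t size, size_t nmemb, void *) {
        return size * nmemb;
    }

    static void worker(const std::vector<std::string> &urls, size_t begin, size_t end) {
        CURL *curl = curl_easy_init();          // one reused handle per thread
        for (size_t i = begin; i < end; ++i) {
            curl_easy_setopt(curl, CURLOPT_URL, urls[i].c_str());
            curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "key=value");  // placeholder body
            curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, discard);
            curl_easy_perform(curl);
        }
        curl_easy_cleanup(curl);
    }

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        std::vector<std::string> urls = { /* ... load the URL list ... */ };
        unsigned n = std::max(1u, std::thread::hardware_concurrency());
        size_t chunk = (urls.size() + n - 1) / n;
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < n; ++t) {
            size_t b = std::min(urls.size(), t * chunk);
            size_t e = std::min(urls.size(), b + chunk);
            pool.emplace_back(worker, std::cref(urls), b, e);
        }
        for (auto &th : pool) th.join();
        curl_global_cleanup();
    }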
Webserver Stress Tool claims to be capable of simulating the HTTP requests generated by up to 10,000 simultaneous users, and it has an entry on Torry's site, so presumably it's written in Delphi or C++ Builder.
My suggestion:
You can write your own custom stress tool (an HTTP(S) client) in Delphi (it happens to be my favorite language, so I advocate it), using a light HTTP(S) library such as the RTC SDK, with OmniThreadLibrary for multithreading.
See this page for a clue/hint.
Edit:
Excerpt from Demos\Readme_Demos.txt in RealThinClient_SDK331.zip
App Client, Server and ISAPI demos can be used to stress-test RTC
component using Remote Functions with strong encryption by opening
hundreds of connections from each client and flooding the
Server/ISAPI with requests.
App Client Demo is ideal for stress-testing RTC remote functions using
multiple connections in multi-threaded mode, visually showing activity
and stage for each connection in a live graph. Client can choose
between "Proxy" and standard connection components, to see the
difference in bandwidth usage and distribution.
I have heard Erlang is pretty good for such applications, as it is very efficient at spawning many processes quickly. But I think Python would be fine too; just use the subprocess module to spawn multiple processes.
After all, you are limited by how many you can run at the same time, which depends on how many processors your machine has. The choice of language may not matter as much, depending on what you are doing with the data downloaded from these URLs, as that may be more processing-intensive than the cost of spawning.

Socket Server vs. Standard Servers

I'm working on a project of which a large part is server-side software. I started programming it in C++ using the sockets library, but one of my partners suggested that we use a standard server like IIS, Apache or nginx.
Which one is better in the long run? When I program it in C++, I have direct access to the raw requests, whereas with a standard server I need to use a scripting language to handle the requests. Which is the better option, and why?
Also, when it comes to security against things like DDoS attacks, do the standard servers already have protection? If I wanted to implement it in my socket server, what is the best way?
"Server side software" could mean lots of different things, for example this could be a trivial app which "echoes" everything back on a specific port, to a telnet/ftp server to a webserver running lots of "services".
So where in this gamut of possibilities does your particular application lie? Without further information, it's difficult to make any suggestions, but let's see..
1. Web services, i.e. your "server-side" requirement is to handle individual requests and respond after running some business logic. Typically communication is via SOAP/XML, and this is ideal if you have web-based clients (though nothing prevents you from accessing these services via standalone clients). You typically host these on web servers, as you mentioned, and often they are easiest written in Java (I've yet to come across one that needed to be written in C++!).
2. A simple web site - slightly different to the above: it responds to HTML GET/POST requests and serves up static or dynamic content (I'm guessing this is not what you're after!).
3. A standalone server which responds to something specific. Here you'd have to implement your own "messaging"/protocols etc., and the server carries out a specific function on each incoming request and potentially sends responses back. The key thing is that the server does something specific and is not a generic container (at which point option 1 makes more sense!).
So where does your application lie? If it is 1 or 2, use Java or some scripting language (such as Perl/ASP/JSP etc.). If it is 3, you can certainly use C++, and if you do, use a suitable abstraction, such as boost::asio and Google Protocol Buffers, and save yourself a lot of headache...
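To give a flavour of option 3, here is a minimal blocking server sketch with boost::asio (port 9000 is arbitrary, and the echo stands in for your own protocol, e.g. protobuf-framed messages; a production server would accept connections asynchronously):

    #include <boost/asio.hpp>

    using boost::asio::ip::tcp;

    int main() {
        boost::asio::io_context io;    // io_service on older Boost versions
        tcp::acceptor acceptor(io, tcp::endpoint(tcp::v4(), 9000));
        for (;;) {
            tcp::socket socket(io);
            acceptor.accept(socket);                       // one client at a time
            char data[1024];
            boost::system::error_code ec;
            size_t n = socket.read_some(boost::asio::buffer(data), ec);
            if (!ec)                                       // echo stands in for real handling
                boost::asio::write(socket, boost::asio::buffer(data, n));
        }
    }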
With regards to security: of course bugs and security holes are found all the time, but the good thing with these open-source projects is that the community will tackle and fix them. Let's just say you'll be safer using them than your own custom hand-rolled implementation; the likelihood that you'll be able to address all the issues they have encountered in the years they've been around is very small (no disrespect to your abilities!).
EDIT: now that there's a little more info, here is one possible approach (this is what I've done in the past, and I've used Java most of the way...).
The client-facing server should be something reliable, especially if it's exposed to the internet, so here I would use a proven product, something like Apache or IIS (depending on which technologies you have available). IMHO I would go for JBoss AS - a really powerful and easily customisable piece of kit which integrates really nicely with lots of different things (all Java, of course!). You could then have a simple bit of Java which delegates to the actual server processes that do the work...
For the server processes, you can use C++ if that's what you are comfortable with.
There is one key bit which I left out, and that is how the client-facing server and the server processes talk to each other. This is where you should look at an open-source messaging product (at an even higher level than asio or protocol buffers); I would look at something like ZeroMQ or Red Hat Messaging (both are MQ-style messaging products). The great advantage of this type of "messaging bus" is that there is no tight coupling between your servers. With your own hand-rolled implementation you'll be writing lots of boilerplate to get the interaction to work just right; with something like MQ you get multi-platform communication without having to get into the details, and you will save yourself a lot of time and bother. (By the way, there are other messaging products out there, and some are easier to use - such as Tibco RV or EMS etc. - however they are commercial products and licenses will cost a lot of money!)
With a messaging solution your servers become trivial, as they simply handle incoming messages and send messages back out again, and you can focus on the business logic...
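For illustration, the web tier's side of such a bus could be as small as this, assuming ZeroMQ's request/reply pattern (the endpoint name and message format are made-up placeholders):

    #include <zmq.h>
    #include <cstdio>
    #include <cstring>

    int main() {
        void *ctx = zmq_ctx_new();
        void *req = zmq_socket(ctx, ZMQ_REQ);            // request socket
        zmq_connect(req, "tcp://backend-host:5555");     // hypothetical back-end endpoint
        const char *msg = "get_status:42";               // made-up message format
        zmq_send(req, msg, strlen(msg), 0);
        char reply[256];
        int n = zmq_recv(req, reply, sizeof reply - 1, 0);
        if (n >= 0) { reply[n] = '\0'; printf("%s\n", reply); }
        zmq_close(req);
        zmq_ctx_destroy(ctx);
    }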
my two pennies... :)
If you opt for the 1st option in Nim's list (web services), I would suggest you have a look at WSO2's Web Services Framework for C++, Axis C++, or the Axis2/C web services framework (if you are not restricted to C++). Web services might be the best solution for your requirement, as you can build them quickly and use them either as processing or proxy modules on the server side of your system.

Ideal way/architecture to deliver large data over Web Services

We are trying to design 6 web services which will serve another client component. The client component requires data from the web services we are implementing.
The problem is that there is not just one web service involved: there is one web service which the client component hits, and this initiates a series of (5 more) web services which gather data from their respective data stores and finally pass it back to the original web service, which then delivers the data back to the client component.
So, if the requested data becomes huge, this will be a serious problem for our internal communication channel.
What do you suggest? What can be done to avoid overloading the communication channel between the internal web services while still delivering the data to the client component?
Update 1
Using 5 web services, where each one knows nothing about the others except the next one in the chain, is a business requirement: the "small services" of 5 different companies are being integrated.
We use Java and Axis2.
We've had a similar problem. Apart from trying to avoid it (e.g. for internal communication, going directly to the database instead of through a web service), you can mitigate it by at least not performing the 5 or so tasks in series: spawn threads to collect the data in parallel and process it all at the end, which reduces latency (except where the calls contend for the same resource and bottleneck).
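The question is Java/Axis2, where the natural tool for this is an ExecutorService; the same fan-out idea is sketched here in C++ with std::async to match the other examples in this thread (the fetch function is a hypothetical stand-in for one downstream service call):

    #include <future>
    #include <iostream>
    #include <string>
    #include <vector>

    // Hypothetical stand-in for one downstream web service call.
    std::string fetch(const std::string &service) {
        // ... the real HTTP/SOAP request would happen here ...
        return "data from " + service;
    }

    int main() {
        std::vector<std::string> services = {"svc1", "svc2", "svc3", "svc4", "svc5"};
        std::vector<std::future<std::string>> futures;
        for (const auto &s : services)      // fan out: all calls in flight at once
            futures.push_back(std::async(std::launch::async, fetch, s));
        for (auto &f : futures)             // gather: total latency ~ the slowest call
            std::cout << f.get() << "\n";
    }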
But before doing anything, load test it to see whether it is even an issue, and get some baseline stats so you can see what improvement each change makes. Also, sometimes you might be better off tweaking network settings or the actual network rather than trying to optimise the code - but again, test and see.
Put all the data in a temporary compressed file and give back the FTP URL of the file.
The client then fetches the big data chunk, uncompresses it and reads it (perhaps with some authentication mechanism for the FTP server).

How to keep a C++ realtime server application with a modern web client interface?

I develop an industrial client/server application (C++) with strong real-time requirements.
I feel it is time to change the look of the client interface - which is currently developed in MFC - but I am wondering which would be the right choice.
If I go for a web client, is there any way to exchange data between C++ and JavaScript other than AJAX <-> web service <-> COM?
Requirements for the web client are: quick status refreshes, user commands, tables.
My team had to make that same decision a few months ago...
The cool thing about making it a web application would be that it would be very easy to modify later on. Even the user of the interface (with a little know-how) could modify it to suit his/her needs. Custom software becomes just that much easier.
We went with a web interface, and AJAX seems the way to go; it was quite responsive.
On the other hand, depending on how strong your real-time requirements are, it might prove difficult. We had the challenge of plotting real-time data through a browser, and we ended up going with a Firefox plugin to draw the plot. If you're simply trying to display real-time text data, it shouldn't be as big an issue.
Run some tests for your specific application and see what it looks like.
Something else to consider: if a web page is the interface to your server and you plan on allowing multiple simultaneous clients, you will need to figure out a way to update one client when another changes the state of the server.
I usually build my applications in two parts:
Have the real heavy-duty application CLI-only. The protocol used is usually text-only, composed of requests and answers.
Wrap a GUI around it as another process that talks to the CLI back-end.
The web interface is then just another GUI to wrap around it. It is also much easier to wrap a REST/JSON-based API around the CLI interface (just translate the messages automatically).
Debugging is also quite easy, since you can just dump the requests between the two elements and reproduce bugs much more easily.
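A minimal sketch of such a text-only back-end loop (the STATUS/QUIT verbs and the answer format are hypothetical; your protocol would define its own):

    #include <iostream>
    #include <string>

    int main() {
        std::string line;
        while (std::getline(std::cin, line)) {      // one request per line
            if (line == "STATUS")
                std::cout << "OK temp=42\n";        // made-up answer format
            else if (line == "QUIT")
                break;
            else
                std::cout << "ERR unknown request\n";
        }
    }

A GUI, a web wrapper, or a test script can then drive the same process over stdin/stdout (or a socket), and a REST layer only has to translate HTTP calls into these lines.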
Write an HTTP server into your server application to handle the AJAX requests. If you don't want to serve files, run it on a non-standard port (e.g. 8081) and use a regular web server for the actual web page delivery; then have your AJAX engine communicate with your server on the Bizarro port instead of port 80.
But it's not that hard to write the file-serving part either, and if you do, you also get to generate web pages on the fly with your data pre-filled, if you want.
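A bare-bones sketch of such a responder on the non-standard port, using POSIX sockets (the JSON body is a placeholder for your real status data; note the CORS header, which browsers require when the AJAX call targets a different port than the page):

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <string>

    int main() {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int yes = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(8081);               // the Bizarro port
        bind(srv, (sockaddr *)&addr, sizeof addr);
        listen(srv, 16);
        for (;;) {
            int client = accept(srv, nullptr, nullptr);
            char buf[4096];
            recv(client, buf, sizeof buf, 0);      // read (and here, ignore) the request
            std::string body = "{\"status\":\"ok\"}";   // placeholder for live data
            std::string resp =
                "HTTP/1.1 200 OK\r\n"
                "Content-Type: application/json\r\n"
                "Access-Control-Allow-Origin: *\r\n"
                "Content-Length: " + std::to_string(body.size()) + "\r\n\r\n" + body;
            send(client, resp.data(), resp.size(), 0);
            close(client);
        }
    }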
Google Desktop Search does this now. When I search my desktop for 'foobar', the URL that opens is this:
http://127.0.0.1:4664/search?q=foobar&flags=68&num=10
In this case, the 4664 is the Bizarro port. (Google Desktop serves all the data here; it only uses the Bizarro port to avoid conflicts with any web server I might be running.)
You may want to consider where your data lives. If your application feeds a back-end database, you could write a web app while leaving your C++ code intact: the web application would be independent, offering up pages to web users and talking directly to the database. In this case you have as many options as you have indicated, and more.