I want to stress test a web service method by calling it several thousand times in quick succession. The method has a single string parameter that I will vary on each call.
I'm planning on writing a PowerShell script to loop and call this method a number of times.
Is there a better way to do this?
If you run one call after another, it won't tell you much, as it won't show how the service behaves under the heavy load of many simultaneous connections.
Go with a multi-threaded solution (I do not know whether PowerShell has this).
Some open-source testing tools are listed here. Just set your web service to accept GET requests as well, not only SOAP (the default), so that you can form the URLs.
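If you do roll your own, the core idea is a pool of threads firing requests simultaneously. Here is a minimal sketch in Java (the service URL, method name and parameter name are placeholders for illustration, and it assumes the service accepts GET):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class LoadTest {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(50); // 50 simultaneous callers

        for (int i = 0; i < 5000; i++) {
            final int n = i;
            pool.submit(() -> {
                try {
                    // vary the single string parameter on each call
                    HttpRequest req = HttpRequest.newBuilder(URI.create(
                            "http://localhost/Service.asmx/MyMethod?input=value" + n))
                        .GET().build();
                    HttpResponse<String> resp =
                        client.send(req, HttpResponse.BodyHandlers.ofString());
                    System.out.println(n + " -> " + resp.statusCode());
                } catch (Exception e) {
                    System.err.println(n + " failed: " + e.getMessage());
                }
            });
        }
        pool.shutdown();
    }
}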
For these situations I would use JMeter. You need to play around with it first, but it is very flexible: it will run requests in different threads, display the results graphically, and let you script your jobs.
I would also recommend running it on a machine other than the server and, if possible, starting two or more instances on different machines to simulate the load.
Personally, I'd use something like OpenSTA.
This allows a session with a web site to be recorded and then played back via a relatively simple script language.
You can also easily test web services and write your own scripts.
It allows you to put scripts together in a test in any way you want and configure the number of iterations, the number of users in each iteration, the ramp up time to introduce each new user and the delay between each iteration. Tests can also be scheduled in the future.
It's open source and free.
It produces a number of reports which can be saved to a spreadsheet. We then use a pivot table to easily analyse and graph the results.
It really depends on whether there are other requirements, such as logging, history, etc.
If quick and dirty is all you need, then you're good.
If you need something more robust, then you can look at either custom-building a test harness in the language of your choice or using tools such as Mercury, MS Team Tester, NUnit or the like.
For those who stumble upon this and don't want to use the third-party options already mentioned: as the accepted answer says, a multi-threaded solution is best, and this can be achieved in PowerShell:
ForEach -Parallel ($item in $collection) { }
https://learn.microsoft.com/en-us/powershell/module/psworkflow/about/about_foreach-parallel
Note that the number of threads may be limited to 5 by default: Does powershell's parallel foreach use at most 5 thread?
We just got a new server at work that will be used only for running SAS code, and I've been asked to run some tests to make sure it's performing better than our other servers. I'm not an expert at this, so I want to make sure I avoid making naive mistakes that don't properly measure the performance of the server. My header looks like this:
options fullstimer;                             /* report full timing statistics in the log */
%LET BenchStartTime = %sysfunc(datetime(),22.); /* capture the wall-clock start time */
I use this as a check against the "real time" reported in the log. I have a vague understanding of the difference between "user cpu time" and "system cpu time", but if anyone wants to offer up additional information on that, it would be helpful.
Anyway, the main point of this post is that I want to know whether there are any standard benchmark tests I should use to see if this new server is better than the old ones. Currently I'm using something I found online that just appends a bunch of copies of sashelp.class, but I think this might be a bad idea: pulling from the C: drive and loading onto a different drive might perform the same across all servers, right? If the C: drive is the slowest, doesn't it become the bottleneck? I'm also using code that generates a bunch of random data of a fixed size and comparing runtimes. Is this the correct approach? Is there something else I should be doing? How many times should I run these benchmark tests to make sure the result isn't a fluke?
Thanks for your help!
I would test by doing the things that you normally do. If you run larger merges, then you're basically talking I/O; so just make a very large dataset, write it out, read it in, etc., and perform the same test on the other machine. Perform each test a few times, each in a fresh SAS session.
Further, it sounds like you need to make sure the new server can handle multiple concurrent sessions. You can simulate this in part by submitting many connections from one computer using SAS/CONNECT, which allows you to start multiple concurrent sessions. For example, I would create a script that starts a local SAS session, signs on to the server, and rsubmits a job of normal difficulty that takes maybe 5 to 10 minutes, then run that script 20 times concurrently (you can script this or just double-click a .bat file 20 times). See how the new server handles that compared to the old one.
On SAS 9.2 and later you can use the %SASHOME%/SASFoundation/9.x/sasiotest.exe utility. You will find further guidance in Support note 51659.
Edit:
A similar utility can be downloaded for Unix platforms; details are in Support note 51660.
I hope you are doing well, and I really appreciate your help with my query.
We have our system T3000 written in C++ (http://www.temcocontrols.com/ftp/software/9TstatSoftware.zip; the code is available here: https://github.com/temcocontrols/T3000_Building_Automation_System).
I am trying to integrate the 'BIRT' reporting tool into my C++ application. I want to create reports based on the data available in our T3000 system. I think BIRT is embeddable (??). We don't want to compile or change the BIRT project itself; we mainly just need to be able to call it from T3000.exe.
My thinking is that we could add a menu item to the existing T3000 and display the report on a single user click.
Can you please help me solve my issue with 'BIRT'? I really appreciate your answer.
Regards
Raju
Well, the answer depends on what your definition of "embeddable" is.
BIRT is written in pure Java.
I can think of four different ways:
1. Integrate Java code into the existing C/C++ program (see Embed Java into a C++ application?).
2. Use the BIRT runtime engine and generate the report as PDF or HTML from the command line (that means you basically call the java executable from your program with several arguments). See Birt - How to run report engine on the console? and http://eclipser-blog.blogspot.de/2008/02/automatic-generation-of-birt-reports.html for more information.
3. Run a Java web server like Tomcat in a second process and then start your report by calling an HTTP URL (e.g. you could use the included servlet example). See http://www.eclipse.org/birt/documentation/integrating/viewer-usage.php
4. Similar to 3 (see below).
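For options 1, 2 and 4, the heart of the work is the BIRT report engine API. A minimal run-and-render sketch in Java (the design file name and output path are placeholders; the BIRT runtime JARs must be on the classpath):

import org.eclipse.birt.core.framework.Platform;
import org.eclipse.birt.report.engine.api.EngineConfig;
import org.eclipse.birt.report.engine.api.IReportEngine;
import org.eclipse.birt.report.engine.api.IReportEngineFactory;
import org.eclipse.birt.report.engine.api.IReportRunnable;
import org.eclipse.birt.report.engine.api.IRunAndRenderTask;
import org.eclipse.birt.report.engine.api.PDFRenderOption;

public class RunReport {
    public static void main(String[] args) throws Exception {
        EngineConfig config = new EngineConfig();
        Platform.startup(config); // expensive: do this only once per process
        IReportEngineFactory factory = (IReportEngineFactory) Platform
            .createFactoryObject(IReportEngineFactory.EXTENSION_REPORT_ENGINE_FACTORY);
        IReportEngine engine = factory.createReportEngine(config);

        // open the report design and render it straight to PDF
        IReportRunnable design = engine.openReportDesign("myreport.rptdesign");
        IRunAndRenderTask task = engine.createRunAndRenderTask(design);
        PDFRenderOption options = new PDFRenderOption();
        options.setOutputFileName("myreport.pdf");
        options.setOutputFormat("pdf");
        task.setRenderOption(options);
        task.run();
        task.close();

        engine.destroy();
        Platform.shutdown();
    }
}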
Some notes:
The second option is slow, due to the Java and BIRT engine startup overhead (this may take several seconds). With the first and third options, the startup overhead is incurred only once, not for each report.
For the second and third options it may be necessary to modify the existing code of the example programs to suit your needs.
The first option is probably the best for an industry-quality solution, but it is also the most difficult to develop.
Anyway, Java skills are necessary IMHO.
If you plan to run this on a SOC instead of a PC, take performance into account.
Is a Java-based solution well-suited for this kind of hardware? BIRT needs quite a lot of RAM and CPU (for a SOC). Hardware like the Raspi 3 should handle this quite easily, I reckon.
I integrated the BIRT runtime into an existing Python application (all this running on an application server) in a fourth way: I wrote a listener program that listens on a TCP socket for BIRT tasks. It uses a pool of worker processes (written in Java) which in turn use the BIRT report engine to generate the output. The client program (here: written in Python) opens a TCP connection to the listener and uses this socket to tell it which report to generate (including report parameters and destination file name). The listener program then in turn chooses a worker process for the task and gives the task to the worker process.
So, basically, this fourth option is similar to the third one, with two differences:
The communication is socket-based (instead of HTTP), allowing bidirectional communication.
The architecture is multi-process instead of multi-threaded. We chose this because very large reports could otherwise cause out-of-memory errors for unrelated reports that just happen to run at the same time. It's the same basic architecture Oracle chose for their reports server.
However, developing the programs took months.
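For illustration, a stripped-down sketch of the listener side in Java (the one-line task protocol and all names are invented; and where the real system hands each task to a separate worker process, this sketch uses a thread pool for brevity):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BirtListener {
    public static void main(String[] args) throws Exception {
        ExecutorService workers = Executors.newFixedThreadPool(4);
        try (ServerSocket server = new ServerSocket(9090)) {
            while (true) {
                Socket client = server.accept(); // one connection per report task
                workers.submit(() -> handle(client));
            }
        }
    }

    // expects one line per task, e.g. "invoice.rptdesign|/tmp/invoice.pdf"
    static void handle(Socket client) {
        try (client;
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(client.getInputStream()))) {
            String task = in.readLine();
            // here the task would be parsed and handed to a worker that
            // drives the BIRT report engine (see the sketch above)
            client.getOutputStream().write(("DONE " + task + "\n").getBytes());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}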
HVB: I have to give you more than a simple thanks for the explanation above, this info will save us time I am sure. Raju will be sharing our experience after we get into the project a little deeper so others can benefit.
I am trying to build a test program in C++ to automate testing for a specific application. The testing will involve sending requests, which have a field 'CommandType' and some other fields, to a server.
The commandType can be 'NEW', 'CHANGE' or 'DELETE'
The tests can be
Send a bunch of random requests with no pattern
Send 100 'NEW' requests, then a huge amount of 'CHANGE' requests followed by 200 'DELETE' requests
Send 'DELETE' requests followed by 'CHANGE' requests
... and so on
How can I design my software (what kind of modules or layers) so that adding any new type of test case is easy and modular?
EDIT: To be more specific, this test will be to only test one specific application that gets requests of the type described above and handles them. This will be a client application that will send the requests to the server.
I would not create your own framework. There are many already written that follow a common pattern and can likely accommodate your needs elegantly.
The xUnit framework in all incarnations I have seen allows you to add new test cases without having to edit the code that runs the tests. For example, CppUnit provides a macro that when added to a test case will auto-register the test case with a global registry (through static initialization I assume). This allows you to add new test cases without cracking open and editing the thing that runs them.
And don't let the "unit" in xUnit and CppUnit make you think it is inappropriate. I've used the xUnit framework for all different kinds of testing.
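To show the idea in Java's JUnit (the same pattern CppUnit's macro gives you in C++), here is a sketch with an invented stand-in for the client under test; the point is that a new test case is just a new annotated method, with no central runner to edit:

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class CommandSequenceTest {
    // stand-in for the real client that would send requests to the server
    static class FakeClient {
        private int pending = 0;
        void send(String commandType) {
            if ("NEW".equals(commandType)) pending++;
            if ("DELETE".equals(commandType)) pending--;
        }
        int pendingCount() { return pending; }
    }

    // JUnit discovers any @Test method by reflection, so this case
    // registers itself, much like CppUnit's registration macro
    @Test
    public void newThenDeleteLeavesNothingPending() {
        FakeClient client = new FakeClient();
        client.send("NEW");
        client.send("DELETE");
        assertEquals(0, client.pendingCount());
    }
}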
I would separate out each individual test into its own procedure or, if it requires code beyond a function or two, its own source file. Then in my main routine I'd do something like:
int main()
{
    run_test_1();
    run_test_2();
    //...
    run_test_N();
    return 0;
}
Alternatively, I'd recommend leveraging the Boost Test Library and following their conventions.
I'm assuming you're not talking about creating unit tests.
IMHO, your question is too vague to provide useful answers. Is this to test a specific application, or are you trying to make something generic enough to test as many different applications as possible? Where do these applications live? Are they client-server apps, web apps, etc.?
If you want your tool to test more than one application, you'll need an architecture with a protocol between the testing tool and the applications, so that instructions your tool and its consumers understand can be translated into instructions the application being tested understands. I've done similar things in the past, but I've only ever had to worry about maybe 5 different "applications", so it was a pretty simple matter of summing up all the unique functionality of the apps and then creating an interface that supports them all.
I wouldn't presume that NEW, CHANGE, and DELETE would be your only command types either. A lot of testing involves data cleanup, test reporting, etc., and applications all handle these in their own special ways.
Use a C++ unit testing framework. Read this for details and examples.
Ok, maybe I'm missing something, but I really don't see the point of Selenium. What is the point of opening the browser using code, clicking buttons using code, and checking for text using code? I read the website and I see how in theory it would be good to automatically unit test your web applications, but in the end doesn't it just take much more time to write all this code rather than just clicking around and visually verifying things work?
I don't get it...
It allows you to write functional tests in your "unit" testing framework (the issue is the naming of the latter).
When you test your application through the browser, you are usually testing the system fully integrated. Considering you already have to test your changes before committing them (smoke tests), you don't want to run those tests manually over and over.
Something really nice is that you can automate your smoke tests, and QA can augment them. It's pretty effective, as it reduces duplication of effort and brings the whole team closer together.
PS: as with any practice you are using for the first time, there is a learning curve, so it usually takes longer the first few times. I also suggest you look at the Page Object pattern; it helps keep the tests clean.
Update 1: Notice that the tests will also run the JavaScript on the pages, which helps with testing highly dynamic pages. Also note that you can run them with different browsers, so you can check for cross-browser issues (at least on the functional side; you still need to check the visuals).
Also note that as the amount of pages covered by tests builds up, you can create tests with complete cycles of interactions quickly. Using the Page Object pattern they look like:
LastPage aPage = somePage
.SomeAction()
.AnotherActionWithParams("somevalue")
//... other actions
.AnotherOneThatKeepsYouOnthePage();
// add some asserts using methods that give you info
// on LastPage (or that check the info is there).
// you can of course break the statements to add additional
// asserts on the multi-steps story.
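To make that concrete, a small page object backing such a chain might look like this, using Selenium's Java bindings (the pages and element ids are invented):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// one class per page; each action returns the page object it lands on
public class CheckoutPage {
    private final WebDriver driver;

    public CheckoutPage(WebDriver driver) { this.driver = driver; }

    public CheckoutPage enterCoupon(String code) {
        driver.findElement(By.id("coupon")).sendKeys(code);
        return this; // stays on the same page
    }

    public ConfirmationPage placeOrder() {
        driver.findElement(By.id("place-order")).click();
        return new ConfirmationPage(driver); // navigation yields a new page object
    }
}

class ConfirmationPage {
    private final WebDriver driver;
    ConfirmationPage(WebDriver driver) { this.driver = driver; }
    String confirmationText() {
        return driver.findElement(By.id("confirmation")).getText();
    }
}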
It is important to understand that you go about this gradually. If it is an already-built system, you add tests for the features and changes you are working on, adding more and more coverage along the way. Testing manually instead tends to hide what you missed: if you make a change that affects every single page but only check a subset (as time doesn't allow more), with automated tests you know exactly which ones you actually tested, and QA can work from there (hopefully by adding even more tests).
This is a common thing that is said about unit testing in general. "I need to write twice as much code for testing?" The same principles apply here. The payoff is the ability to change your code and know that you aren't breaking anything.
Because you can repeat the SAME test over and over again.
If your application is even 50+ pages and you need to do frequent builds and test it against X number of major browsers it makes a lot of sense.
Imagine you have 50 pages, all with 10 links each, and some with multi-stage forms that require you to go through the forms, putting in about 100 different sets of information to verify that they work properly with all credit card numbers, all addresses in all countries, etc.
That's virtually impossible to test manually. It becomes so prone to human error that you can't guarantee the testing was done right, never mind what the testing proved about the thing being tested.
Moreover, if you follow a modern development model, with many developers all working on the same site in a disconnected, distributed fashion (some working on the site from their laptop while on a plane, for instance), then the human testers won't even be able to access it, much less have the patience to re-test every time a single developer tries something new.
On any decent size of website, tests HAVE to be automated.
The point is the same as for any kind of automated testing: writing the code may take more time than "just clicking around and visually verifying things work", maybe 10 or even 50 times more.
But any nontrivial application will have to be tested far more than 50 times eventually, and manual tests are an annoying chore that will likely be omitted or done shoddily under pressure, which results in bugs remaining undiscovered until just before (or after) important deadlines, which results in stressful all-night coding sessions or even outright monetary loss due to contract penalties.
Selenium (along with similar tools, like Watir) lets you run tests against the user interface of your Web app in ways that computers are good at: thousands of times overnight, or within seconds after every source checkin. (Note that there are plenty of other UI testing pieces that humans are much better at, such as noticing that some odd thing not directly related to the test is amiss.)
There are other ways to involve the whole stack of your app by looking at the generated HTML rather than launching a browser to render it, such as Webrat and Mechanize. Most of these don't have a way to interact with JavaScript-heavy UIs; Selenium has you somewhat covered here.
Selenium will record and re-run all of the manual clicking and typing you do to test your web application. Over and over.
Over time, I've observed that I tend to run fewer tests and start skipping some, or forgetting about them.
Selenium will instead take each test, run it, and if it doesn't return what you expect, it can let you know.
There is an upfront cost of time to record all these tests. I would recommend it like unit tests -- if you don't have it already, start using it with the most complex, touchy, or most updated parts of your code.
And if you save those tests as JUnit classes you can rerun them at your leisure, as part of your automated build, or in a poor man's load test using JMeter.
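As a rough sketch of such a JUnit class driving Selenium's WebDriver API (the URL and element names are made up for illustration):

import static org.junit.Assert.assertTrue;

import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginSmokeTest {
    @Test
    public void loginShowsWelcomeMessage() {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://localhost:8080/login");
            driver.findElement(By.name("username")).sendKeys("demo");
            driver.findElement(By.name("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();
            // the page after a successful login should greet the user
            assertTrue(driver.getPageSource().contains("Welcome"));
        } finally {
            driver.quit();
        }
    }
}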
In a past job we used to unit test our web app. If the web app changes its look, those tests don't need to be rewritten; record-and-replay type tests would all need to be redone.
Why do you need Selenium? Because testers are human beings. They go home every day, can't always work weekends, take sickies, take public holidays, go on vacation every now and then, get bored doing repetitive tasks and can't always rely on them being around when you need them.
I'm not saying you should get rid of testers, but an automated UI testing tool complements system testers.
The point is the ability to automate what was before a manual and time consuming test. Yes, it takes time to write the tests, but once written, they can be run as often as the team wishes. Each time they are run, they are verifying that behavior of the web application is consistent. Selenium is not a perfect product, but it is very good at automating realistic user interaction with a browser.
If you do not like the Selenium approach, you can try HtmlUnit, I find it more useful and easy to integrate into existing unit tests.
For applications with rich web interfaces (like many GWT projects), Selenium/Windmill/WebDriver/etc. is the way to create acceptance tests. In the case of GWT/GXT, the final user interface code is in JavaScript, so creating acceptance tests using normal JUnit test cases is basically out of the question. With Selenium you can create test scenarios matching real user actions and expected results.
Based on my experience with Selenium it can reveal bugs in the application logic and user interface (in case your test cases are well written). Dealing with AJAX front ends requires some extra effort but it is still feasible.
I use it to test multi-page forms, as this takes the burden out of typing the same thing over and over again. And having the ability to check whether certain elements are present is great. Again, using the form as an example, your final Selenium test could check whether something like "Thanks Mr. Rogers for ordering..." appears at the end of the ordering process.
We develop custom survey web sites, and I am looking for a way to automate the pattern testing of these sites. Surveys often contain many complex rules and branches which are triggered by how items are responded to. All surveys are rigorously tested before being released to clients, and this testing results in a lot of manual work. I would like to learn about some options I could use to automate these tests by responding to questions and verifying the results in the database. The survey sites are produced by an engine which creates and writes ASP pages and receives the responses to process into a database, so the only way I can see to test a site is to interact with the web pages themselves. I guess in a way I need to build some type of bot; I really don't know much about the design behind them.
Could someone please provide some suggestions on how to achieve this? Thank you for your time.
Brett
Check out selenium: http://selenium.openqa.org/
Also, check out the answers to this other question: https://stackoverflow.com/questions/484/how-do-you-test-layout-design-across-multiple-browsersoss
You could also check out WatiN.
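To give a flavour of what such a bot could look like, here is a hedged sketch using Selenium's Java bindings plus a JDBC check of the database (the URL, element ids, table name and credentials are all invented for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class SurveyBot {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new FirefoxDriver();
        try {
            // answer the first question and move to the next page
            driver.get("http://localhost/survey/page1.asp");
            driver.findElement(By.id("q1_yes")).click();
            driver.findElement(By.id("next")).click();

            // verify the response landed in the database
            try (Connection con = DriverManager.getConnection(
                     "jdbc:sqlserver://localhost;databaseName=Survey", "user", "pass");
                 PreparedStatement ps = con.prepareStatement(
                     "SELECT answer FROM responses WHERE question_id = ?")) {
                ps.setInt(1, 1);
                try (ResultSet rs = ps.executeQuery()) {
                    boolean ok = rs.next() && "yes".equals(rs.getString("answer"));
                    System.out.println(ok ? "q1 recorded correctly" : "q1 missing or wrong");
                }
            }
        } finally {
            driver.quit();
        }
    }
}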
Sounds like your engine could generate a test script using something like Test::WWW::Mechanize
The usual test methodologies apply: white box and black box.
White-box testing for you may mean instrumenting your application so you can force it into a particular state, and then you can predict the result you expect.
Black box may mean that you hit a page and then check which of the possible outcomes is valid. Rinse and repeat till you get sufficient coverage.
Another thing we use is monitoring statistics for our service: did we get the expected number of hits on this page? We routinely run a/b tests, and I have run a/b tests against refactored code to verify that nothing changed before rolling things out.
/Allan
I can think of a couple of good web application testing suites that should get the job done - one free/open source and one commercial:
Selenium (open source/cross platform)
TestComplete (commercial/Windows-based)
Both will let you create test suites by verifying database records based on interactions with the web app.
The fact that you're Windows/ASP based might mean that TestComplete will get you up and running faster, as it's native to Windows and .NET. You can download a free trial to see if it'll work for you before making the investment.
Check out the unit testing framework 'lime' that comes with the Symfony framework: http://www.symfony-project.org/book/1_0/15-Unit-and-Functional-Testing. You didn't mention your language; lime is PHP.
I would suggest the mechanize gem, available for Ruby. It's pretty intuitive to use.
I use QEngine (commercial) for the same purpose. I need to add data and then check for it in the UI, so I write one script which does this and call it in a loop; the data can be passed via either CSV or Excel.
Check out www.qengine.com; you can also try Watir.
My proposal is QA Agent (http://qaagent.com). It seems to be a new approach, because you do not need to install anything; you just develop your web tests in the browser-based IDE. By the way, you can develop your tests using jQuery and JavaScript. Really cool!