We are evaluating the CppUnitTest framework for performance testing of a particular product by consuming its API. The test is expected to log in to the application, perform some activities, and then log out. While I am aware of how to achieve this using a CppUnit suite, the part I am not completely clear on is how to set up the suite so that 100 different logins happen in parallel. Can you suggest anything suitable for this?
In short, we want to use the CppUnitTest framework for performance testing by having around 100 users log in to the application, do some activity, and log out.
Thanks,
Pavan.
I am using Cloud Firestore + Cloud Functions + Firebase Auth to support my game.
I developed the main part of the app with unit tests inside the app, plus TypeScript tests for the Cloud Functions. Now I want to add security rules to secure the data.
When I do so, requiring the calls to be authenticated, all my unit tests in Unity (naturally) fail, since I do not authenticate a user but mock them as a data representation of the user in the database.
I want to keep using my unit tests in Unity while still requiring the real database to demand authentication.
I have looked around for a mock auth or an auth test environment, but found nothing except the rules-unit-testing library.
Its contents include specialized logic for mocking users, which makes me think I am approaching this the wrong way by trying to do it in Unity. My question is: how can I continue to run game tests in Unity, which require interacting with the Firestore server, while keeping the security rules?
I am answering my own question after spending more time on it.
My analysis was that I ran into issues because I had coupled my code too tightly: server logic lived on the client side and broke when I introduced security rules. I decided to move that logic into Cloud Functions and keep only simple client-side calls (sets, gets, updates, HTTP functions).
So if someone runs into a similar problem (the architecture hampers the use of best practices), I suggest rethinking the architecture. It feels obvious now that I write it down...
Have fun coding all ^_^
I have a booking app that can deal with both local and remote API bookings. Our logic for things like pricing and availability follows two very different pathways. We obviously need to test both.
But running regular tests against a remote API is slow. The test environment they provide manages a response in 2-17 seconds. It's not feasible to use this in my pre-commit tests. Even if they sped it up, it's never going to be fast, and it will always require a connection to pass.
But I still need to test our internal logic for API bookings.
Is there some way that, within a test runner, I can spin up a little webserver (quite separate from the Django website) that serves a reference copy of their API? I could then plug that into the models we're dealing with and query against it locally, at speed.
What's the best way to handle this?
Again, I need to stress that this reference API should not be part of the actual website, unless there's a way of adding views that only apply at test time. I'm looking for clean solutions. The API calls are pretty simple. I'm not looking for verification or anything like that here, just that bookings made against an API are priced correctly internally, handle availability issues, etc.
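Roughly the shape I have in mind, as a sketch only (the /availability path and the JSON payload are invented placeholders, not the provider's real API):

    # A rough sketch: serve canned responses from a throwaway local HTTP server
    # during the test run. The endpoint and JSON are invented placeholders.
    import json
    import threading
    import unittest
    import urllib.request
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer


    class FakeBookingAPI(BaseHTTPRequestHandler):
        def do_GET(self):
            # Answer every GET with a fixed availability/pricing payload.
            body = json.dumps({"available": True, "nightly_price": "120.00"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def log_message(self, *args):
            pass  # keep the test output quiet


    class RemoteBookingTests(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            # Port 0 lets the OS pick a free port, so parallel runs don't collide.
            cls.server = ThreadingHTTPServer(("127.0.0.1", 0), FakeBookingAPI)
            cls.base_url = "http://127.0.0.1:%d" % cls.server.server_port
            threading.Thread(target=cls.server.serve_forever, daemon=True).start()

        @classmethod
        def tearDownClass(cls):
            cls.server.shutdown()

        def test_reference_api_answers_locally(self):
            # The real tests would point the booking models at self.base_url
            # (e.g. via a setting) and assert on the internal pricing logic.
            with urllib.request.urlopen(self.base_url + "/availability") as resp:
                self.assertTrue(json.load(resp)["available"])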
For your test purposes, you can mock the API call functions.
You can see more here:
https://williambert.online/2011/07/how-to-unit-testing-in-django-with-mocking-and-patching/
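A minimal sketch of that approach using the standard library's unittest.mock; the import path bookings.api_client.get_quote and the Booking model below are made-up placeholders for whatever wrapper and models the app actually uses:

    # A rough sketch: patch the remote API wrapper so the test never hits the network.
    from decimal import Decimal
    from unittest import mock

    from django.test import TestCase

    from bookings.models import Booking  # hypothetical model


    class RemoteBookingPricingTests(TestCase):
        @mock.patch("bookings.api_client.get_quote")  # hypothetical wrapper
        def test_remote_booking_priced_from_api_quote(self, get_quote):
            # Replace the slow remote call with a canned response.
            get_quote.return_value = {"nightly_price": "120.00", "available": True}

            booking = Booking.objects.create(source="remote", nights=2)

            # Internal pricing logic should use the mocked quote, not the network.
            self.assertEqual(booking.total_price(), Decimal("240.00"))
            get_quote.assert_called_once()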
I can get my Selenium tests running fine for one user (sequentially) on Django 1.4 using LiveServerTestCase, but I would like to emulate parallel multi-user testing. I don't think I need real load testing, since my apps are mostly moderate- to low-traffic websites and internal web apps, so I would prefer to avoid extra tools like JMeter.
I've started setting up Selenium Grid but am not sure how to keep my tests independent and still run multiple tests with multiple users. I assume the test cases should be run for different users on the same DB simultaneously, but each test drops and creates a new DB, so I don't understand how that is possible.
And I don't want to sign up for a service like BrowserMob.
I would suggest using a tool like JMeter anyway, for a couple of different reasons:
If you want to test that there are no bugs when multiple users hit the service at the same time, running two or more automated Selenium tests simultaneously won't guarantee that actually happens, since it can take quite a while to perform whatever actions occur before a request is actually sent to the server. You are much more likely to hit these kinds of bugs with a tool like JMeter, which can send multiple requests simultaneously with little to no lag between them. You can also easily run far more JMeter threads at once than Selenium instances.
If you actually want to test the performance of your site, or its behavior under higher-than-normal load, you can do this more easily with a tool like JMeter.
That said, if you really want to use Selenium for this, I know it is fairly simple with Selenium 2/WebDriver; however, I am not familiar enough with Selenium Grid to provide guidance on what would be required there.
I think I figured this out but welcome more (potentially more elegant) solutions.
I'm running both "clean" and "dirty" tests. The "clean" tests are just normal Selenium tests that set up and tear down the DB after every test. The "dirty" tests are run by passing options to my subclassed DjangoTestSuiteRunner which tell it whether or not to set up or tear down the DB, and also pass in a user id, like so:
python manage.py test myapp --testrunner=testrunner.MySeleniumTestRunner \
--no_setup_db --no_teardown_db --user=1234 --liveserver=localhost:8081
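A rough sketch of what such a runner can look like (this is not the exact code behind the command above; how the options reach the runner varies between Django versions, so here they are simply taken as keyword arguments):

    # testrunner.py -- a hedged sketch, not the exact runner used above.
    # It skips DB setup/teardown when asked, so several runs can share one DB.
    from django.test.simple import DjangoTestSuiteRunner


    class MySeleniumTestRunner(DjangoTestSuiteRunner):
        def __init__(self, no_setup_db=False, no_teardown_db=False, user=None, **kwargs):
            super(MySeleniumTestRunner, self).__init__(**kwargs)
            self.no_setup_db = no_setup_db
            self.no_teardown_db = no_teardown_db
            self.user = user  # exposed to the tests, e.g. through an env variable

        def setup_databases(self, **kwargs):
            if self.no_setup_db:
                return None  # reuse the database another run already created
            return super(MySeleniumTestRunner, self).setup_databases(**kwargs)

        def teardown_databases(self, old_config, **kwargs):
            if self.no_teardown_db or old_config is None:
                return  # leave the shared database in place for the other runs
            super(MySeleniumTestRunner, self).teardown_databases(old_config, **kwargs)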
I then string together about 10 of these commands in a shell script and log the output.
The only tricky part is writing your tests in a way that takes both kinds of test into account. So, for example, if in my clean test I'm simply adding a product to a shopping cart and checking that the item appears in the cart to indicate success, then I also need to add a condition that checks for things like product availability. When I run my dirty tests, if there are only four products available, the first four users pass because the product was available and I verified it was added to their cart, but the fifth user also passes the test, because when the product is not available I check for proper error handling, etc.
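A simplified sketch of that kind of test (the URLs, CSS selectors, and messages are invented placeholders):

    # A test that passes in both clean and dirty runs: either the item is
    # in the cart, or the out-of-stock path is handled correctly.
    from django.test import LiveServerTestCase
    from selenium import webdriver


    class CartTests(LiveServerTestCase):
        def setUp(self):
            self.browser = webdriver.Firefox()

        def tearDown(self):
            self.browser.quit()

        def test_add_product_to_cart(self):
            self.browser.get(self.live_server_url + "/products/42/")
            self.browser.find_element_by_css_selector("button.add-to-cart").click()

            page = self.browser.page_source
            if "Out of stock" in page:
                # Dirty run: earlier users took the last items, so correct
                # error handling counts as a pass.
                self.assertIn("This product is no longer available", page)
            else:
                # Clean run (or stock remaining): the item must be in the cart.
                self.browser.get(self.live_server_url + "/cart/")
                self.assertIn("Product 42", self.browser.page_source)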
I know this isn't very unit-test-like, and may even be rather non-standard for functional testing, but I think it emulates parallel multi-user testing pretty well without compromising test independence.
We have a considerable code base with relatively high test coverage for pages/forms, all via vanilla POST/GET.
Now we find ourselves moving more into the 'Ajax-y' space, and it's not really possible to test complete scenarios like user registration or item creation with GET/POST, as they involve lots of JavaScript/Ajax calls.
While things like that are the most likely candidates for Selenium testing, I wonder whether we should adopt Selenium testing across the board and drop the old-school POST/GET tests altogether.
The advantages of adopting Selenium seem almost too good: the ability to run pretty much the same GET/POST tests, but across a range of browsers.
Or am I missing something in my pursuit of cool and trendy tools while ditching the old, proven POST/GET tests?
There are advantages and disadvantages to both approaches, so my recommendation would be to use both.
Selenium launches an actual browser and simulates a user interacting with your web application, which can be great if you're testing Ajax features. It can verify that elements are visible and interact with them just as a user would. Another killer feature is the ability to take screenshots through Selenium, which can be incredibly useful when investigating failures.
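For example, with the Python bindings a test can save a screenshot when an assertion fails (a minimal sketch; the URL and expected title are placeholders):

    # Grab an image of what the browser was actually showing on failure.
    import unittest

    from selenium import webdriver


    class RegistrationPageTests(unittest.TestCase):
        def setUp(self):
            self.driver = webdriver.Firefox()

        def tearDown(self):
            self.driver.quit()

        def test_registration_page_title(self):
            self.driver.get("http://localhost:8000/register/")
            try:
                self.assertIn("Register", self.driver.title)
            except AssertionError:
                # Keep a screenshot for later investigation, then re-raise.
                self.driver.save_screenshot("registration_failure.png")
                raise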
Unfortunately launching a browser and navigating to a specific page/state in your application can be slow, and you'd need a lot of hardware if you wanted to test concurrent users (load testing) with Selenium.
If you just want to test that your server responds with an HTTP 200 for certain actions, that the response contains certain values, or to load test your application, then basic POST/GET tests would be more suitable.
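For example, checks like that need nothing more than a request and a couple of assertions (a sketch using Python's requests library; the base URL, paths, and form fields are placeholders):

    # Plain GET/POST checks against a running instance of the application.
    import unittest

    import requests

    BASE_URL = "http://localhost:8000"  # placeholder for the app under test


    class BasicHttpTests(unittest.TestCase):
        def test_homepage_returns_200(self):
            response = requests.get(BASE_URL + "/")
            self.assertEqual(response.status_code, 200)
            self.assertIn("Welcome", response.text)

        def test_login_post_redirects(self):
            response = requests.post(
                BASE_URL + "/login/",
                data={"username": "alice", "password": "s3cret"},
                allow_redirects=False,
            )
            self.assertEqual(response.status_code, 302)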
If you do decide to go with a pure Selenium approach to testing, I would recommend looking into Selenium Grid or a cloud-based service, as running lots of tests through Selenium can be quite time-consuming.
I think you should definitely use both Selenium and POST/GET (unit) tests, because the aim of your unit tests is to test the functionality of a specific section of code, while Selenium is doing integration testing on your web app.
Should we unit test the web service, or should we really be looking to unit test the code that the web service invokes for us and leave the web service alone, at least until integration testing?
EDIT: Further clarification / thought
My thought is that testing the web service is really integration testing, not unit testing. I ask because our web service at this point (it's under development) is coded in such a way that there is no way to unit test the code it invokes. So I am wondering whether it is worthwhile/smart to refactor it now in order to be able to unit test that code free of the web service. I would like to know the general consensus on whether it's important to separate the two, or whether it's really OK to unit test the web service and call it good.
If I separate them, I would look to test both, but I am just not sure the separation is worth it. My hunch is that I should.
Unit testing the code that the web service invokes is definitely a good idea, since it ensures the "inside" of your code is stable (and well designed). However, it's also a good idea to test the web service calls, especially if a few of them are called in succession to accomplish a certain task. This will ensure that the web services you've provided are usable and work properly when called along with other web service calls.
(Not sure if you're writing these tests before or after writing your code, but you should really consider writing your web service tests before implementing the actual calls so that you ensure that they are usable in advance of writing the code behind them.)
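A sketch of that kind of call-sequence test, using Python's requests against a placeholder REST endpoint (the base URL, paths, and JSON fields are invented):

    # Exercise a sequence of web service calls together.
    import unittest

    import requests

    BASE_URL = "http://localhost:8080/api"  # placeholder service URL


    class OrderWorkflowTests(unittest.TestCase):
        def test_create_then_fetch_order(self):
            # First call: create an order through the service.
            created = requests.post(BASE_URL + "/orders", json={"item": "widget", "qty": 2})
            self.assertEqual(created.status_code, 201)
            order_id = created.json()["id"]

            # Second call: the order created above must be retrievable.
            fetched = requests.get(BASE_URL + "/orders/" + str(order_id))
            self.assertEqual(fetched.status_code, 200)
            self.assertEqual(fetched.json()["qty"], 2)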
Why not do both? You can unit test the web service code, as well as unit test it from the point of view of a client of the web service.
In my view, the web service is just an encapsulation of a method of a central business-layer object; in other words, the web method is just a "gate" for accessing methods deeper in the model.
With that said, I do both operations:
Inside the server, I create a WinForms app that does load testing on the business-layer method.
Outside the server (namely, on a machine outside the LAN where the web app "lives"), I create a tester (WinForms or web) that consumes the web service, doing the load testing that way.
That way I can evaluate the performance of my solution both including and excluding the "web effect" (i.e. the time for the data to travel and reach the web service, the web service object creation, etc.).
All of the above is of course IMHO; at least it has worked well for me!
Haj.-
We do both.
We unit test the various code elements.
Plus, we use the unit test framework to execute tests against the web service as a whole. This is rather complex because we have to create (and load) a database, start a server, and then execute requests against that server.
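In outline, that shape of test can look something like this (a sketch only; the launch scripts, port, and /health endpoint are placeholders for whatever the real service uses):

    # "Load a database, start the server, then hit it with requests."
    import subprocess
    import time
    import unittest

    import requests

    SERVICE_URL = "http://localhost:8080"  # placeholder


    class WebServiceSystemTests(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            subprocess.check_call(["./scripts/load_test_db.sh"])         # placeholder script
            cls.server = subprocess.Popen(["./scripts/run_service.sh"])  # placeholder script
            time.sleep(5)  # crude; polling a health endpoint until it answers is nicer

        @classmethod
        def tearDownClass(cls):
            cls.server.terminate()
            cls.server.wait()

        def test_service_answers_requests(self):
            response = requests.get(SERVICE_URL + "/health")
            self.assertEqual(response.status_code, 200)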
Testing the web service API is easy (it's got an API) and valuable. It's not a unit test, though; it's an "integration", "sub-system", or "system" test (depending on who you ask).
There's no need to delay this testing until some magical period called "integration testing", though; just write some simple tests now and reap the benefit early.
I like the idea of writing unit tests that call your web service through one of its public interfaces. For instance, a given WCF web service may expose HTTP, TCP, and "web" bindings. Such a unit test proves that the web service can be called through a binding.
Integration testing would involve testing all of the bindings of the service, testing with particular client scenarios, and with particular client tools. For instance, it would be important to show that a Java client can be created with IBM's Rational Web Developer that can access the service when using WS-Security.
If you can, try consuming your web service using some of the development tools your customers will use (Delphi, C#, VB.NET, ColdFusion, hand-crafted XML, etc.). Within reason, of course.
1) Different tools may have problems consuming your web service. Better for you to run into this before your customers do.
2) If a customer runs into a problem, you can easily prove that your web service is working as expected. In the past year or so, this has stopped finger-pointing in its tracks at least a dozen times.
The worst was a developer in a different time zone who was hand-crafting the XML for SOAP calls and parsing the responses. Every time he ran into a problem, he would insist that it was on our end and demand (seriously) that we prove otherwise. I made a dead-simple Delphi app to consume the web service, demonstrated that the methods worked as expected, and even displayed the XML for each request and response.
Re: your updated question.
Integration testing and unit testing are only superficially similar, so yes, they should be done and thought of separately.
Refactoring existing code to make it testable can be risky. It's up to you to decide whether the benefits outweigh the time and effort it will take. In your case, I'd certainly try, even if you do it a little bit at a time.
On the bright side, a web service has a defined interface, so you don't really have to change anything to add integration testing. Go crazy. If you can, try to do this before you roll the web service out to customers. There's a good chance that using the web service will lead to changes in the interface, and you don't want that to disrupt customers too much.