Integration testing - mimic expected delays - unit-testing

In summary:
How do I create "integration" tests that mimic expected delays from external systems?
In detail:
I have an application "Main" that communicates with multiple external systems (via web services) which I'll call "Partners".
I am not interested in the inner workings of the Partners, but I need to fully test Main.
For Main I currently have:
Unit Tests for each individual testable block [so test every public method on every class at every "level" (n-tier) and stub every dependency]
Integration tests that test from the top-down (so test all public methods on the Presentation tier and stub only these partner web services)
What I would also like is to create some integration tests that ensure the code in Main delivers the correct performance.
What is "correct performance"? Well, the Product Manager can say "all Views must return data within 2 seconds". I know that (on average) a call to a partner takes (say) 1.5 seconds, so I could write my integration test with a stopwatch that passes if the Main code completes in 0.5 seconds (2 - 1.5). However, in a discussion with a colleague it was suggested that the stub for the partner should include the 1.5-second expected delay, so my test should be that the Main code plus the partner stub completes within the 2 seconds specified by the PM.
Questions:
What is the suggested behaviour?
If as suggested, how is this achieved using Rhino Mocks for stubbing?
Thanks everyone
Griff

"all Views must return data within 2 seconds" makes no sense. when limiting the response time, you should also consider the load. when there will be 1 request per 10 second then your views may return in 2 seconds. but when there will be 10k request per second then your response time will be "slightly" longer. better performance requirement would be "x% of responses will take not more then y seconds when load is below z requests per seconds" or something similar.
unit testing delays also makes no sense. you should run full scale performance tests. choose one of the existing tools, prepare a few client nodes (controlled with that tool), run recorded/scripted clients' requests against your tested server and check what are the response times, cpu and memory usage. then add one or mode nodes to your system and check how the system is scaling
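To make the suggested requirement testable, a tiny Python sketch of checking a percentile-style target against measured response times (the values below are placeholders, not real measurements):

import statistics

def meets_sla(response_times, percentile=95, limit_seconds=2.0):
    # statistics.quantiles with n=100 yields the 1st..99th percentile cut points
    cut_points = statistics.quantiles(response_times, n=100)
    return cut_points[percentile - 1] <= limit_seconds

response_times = [0.4, 0.6, 1.1, 1.8, 2.4]   # placeholder measurements from a load-test run
print(meets_sla(response_times))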

Related

Robot Framework: Set Timeout in Robot framework

I have created a framework in which I have used Set Browser Implicit Wait 30
I have 50 suites containing a total of 700 test cases. Some of the test cases (200 TCs) have steps to check whether an element is present or not present. My objective is not to wait the full 30 seconds for these checks. I tried using Wait Until Element Is Visible ${locator} timeout=10, expecting to wait only 10 seconds for the element, but it still waits for 30 seconds.
Question: Can somebody suggest the right approach to deal with such scenarios in my framework? If I accept the 30-second wait, these test cases take much longer to complete; I am currently trying to save roughly 20*200 seconds. Please advise.
The simplest solution is to change the implicit wait right before checking that an element does not exist, and then change it back afterwards. You can do this with the keyword Set Selenium Implicit Wait.
For example, your keyword might look something like this:
*** Keywords ***
Verify Element Is Not On Page
    [Arguments]    ${locator}
    ${old_wait}=    Set Selenium Implicit Wait    10
    Run Keyword And Continue On Failure
    ...    Page Should Not Contain Element    ${locator}
    Set Selenium Implicit Wait    ${old_wait}
You can simply add timeout=${Time} next to the keyword you want to execute (e.g., Wait Until Page Contains Element    ${locator}    timeout=50).
The problem you're running into is the issue of implicit wait vs. explicit wait. Searching the internet will give you a lot of good explanations of why mixing them is not recommended, but I think Jim Evans (creator of the IE WebDriver) explained it nicely in this Stack Overflow answer.
Improving the performance of your test run is typically done by utilizing one or both of these:
Shorten the duration of each individual test
Run tests in parallel.
Shortening the duration of a test typically means being in complete control of the application under test, so that the script knows the moment the application has successfully loaded. This means having a low implicit wait (or none at all) and working exclusively with fluent waits (waiting for a condition to occur). This will result in your tests running at the speed your application allows.
This may mean investing time understanding the application you test on a technical level. By using a custom locator you can still use all the regular SeleniumLibrary keywords and have a centralized waiting function.
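As an illustration of the underlying mechanism, here is a minimal Python Selenium sketch of the "no implicit wait, explicit waits only" idea (the URL and locator are placeholders); SeleniumLibrary waiting keywords such as Wait Until Element Is Visible give you the same behaviour at the keyword level.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.implicitly_wait(0)              # no implicit wait at all
driver.get("https://example.com")      # placeholder URL

# Poll every 0.5 s, for at most 10 s, until the element is visible; fail fast otherwise.
element = WebDriverWait(driver, timeout=10, poll_frequency=0.5).until(
    EC.visibility_of_element_located((By.ID, "status"))   # placeholder locator
)
driver.quit()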
Running tests in parallel starts with having tests that run standalone and have no dependencies on other tests. In Robot Framework this means having Test Suite Files that can run independently of each other. Most of us use Pabot to run our suites in parallel and merge the log file afterwards.
Running several browser application tests in parallel means running more than one browser at the same time. If you test in Chrome, this can be done on a single host - though it's not always recommended. When you run IE you require multiple boxes/sessions, and then you start to need a Selenium Grid-type solution to distribute the execution load across multiple machines.

Will performance be impacted when a database procedure is called from the application many times?

In my project, we call an Oracle procedure from our C++ application with the help of the Pro*C/C++ library provided by Oracle.
We have one big procedure, and my idea is to split it into two for modularity. But their advice is to call the procedure only once and perform all the jobs in one shot.
The reason I got from them is that it will hurt performance, since the application program would interact with the database multiple times.
I agree that this would be the case if the application connected to the database, called the procedure, and finally disconnected for each procedure call. But what we really do is create a pool of connections at startup and reuse those pre-established connections for interacting with the database.
Information about my application:
It is a multi-threaded application that handles about 1000 requests per second with a thread pool size of 20. Currently we communicate with the database 4 times for each request.
EDIT:
"the switch between PLSQL and SQL is much faster than the other way around".
Q1. How is this related to my actual question? My question is about splitting a procedure into two equal parts. Suppose I have 4 queries executed in the procedure; I just split it into two, procedure A and procedure B, and each procedure will have two queries.
"pro*c calls which call PLSQL is an performance hit".
Q2. Do you mean the communication between the application (Pro*C/C++) and the database (Oracle) here? If so, is that communication a big performance hit?
In the Ask Tom link you attached: "But don't be afraid at all to invoke SQL from PLSQL - that is what PLSQL does best".
Q4. Will a context switch happen when we call SQL from PL/SQL? Because, as per the above statement, there seems to be no performance impact.
The advice you received is correct: it would be better to perform all the database tasks at once. There are 2 major performance impacts in your scenario:
The context switching of Pro*C between the SQL engine and the PL/SQL engine when your threads make multiple calls. This is usually the biggest issue with many PL/SQL calls from a client application.
The network stack overhead (TNS) in the communication between your Pro*C app and the database engine - particularly if your app is on a different physical host.
Having said that, since you are creating a connection pool at the application end, the TNS listener should also have a pool of bequeathed server shadow processes waiting for each network connection (this is set up in listener.ora).
The OCI login/logoff when the shadow process is already waiting for the connection is very quick and not a huge factor in latency - I don't worry about this unless a new shadow process on the server has to start up, which can be a very expensive call. As you are using connection pooling on the client side this is usually not an issue, but it is something to consider because of the threading in your calls. Once you exhaust the pool of server shadow processes, you will notice a huge degradation if the TNS listener has to start up more of them.
Edit in answer to the new questions:
It is very much related. As pointed out previously, you should minimise the number of PL/SQL and SQL calls within your C++ app. Each PL/SQL call from your C++ app invokes the SQL engine, which then invokes the PL/SQL engine for the procedure call. So if you split your procedure into 2, you are doubling the SQL-to-PL/SQL context switches, which is the more expensive switch, as outlined in the Tom Kyte article and confirmed by my own experience.
This is answered in 1. But as I said, the communications overhead is secondary, unless your hosts are on different physical networks, and it depends on the types of data you are transferring. For instance, large C++ object parameters and large Oracle result sets over many calls will obviously affect communication latency through extra round trips. Remember that with more PL/SQL calls you are also adding more SQL*Net traffic for the setup of each connection and result set.
There is no question 3.
PL/SQL to SQL within the PL/SQL engine is negligible, so don't get hung up on it. Put all your SQL calls within one PL/SQL call for maximum throughput. Don't split the calls just to be more eloquent at the expense of performance.
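To illustrate the round-trip argument, here is a rough sketch in Python (python-oracledb; the procedure name, parameters and connection details are hypothetical) of doing all the work in a single procedure call per request instead of two:

import oracledb

# Connection pool created once at application startup, mirroring the C++ app
pool = oracledb.create_pool(user="app", password="secret",
                            dsn="dbhost/orclpdb", min=4, max=20)

def handle_request(request_id):
    with pool.acquire() as conn:
        with conn.cursor() as cur:
            # One client<->server round trip and one SQL->PL/SQL switch;
            # all four queries run inside the single procedure.
            result = cur.var(str)
            cur.callproc("process_request_all", [request_id, result])
            return result.getvalue()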

Concurrency problems in a Netty application

I implemented a simple HTTP server (link), but the result of the test (ab -n 10000 -c 100 http://localhost:8080/status) is very bad (see test.png at the previous link).
I don't understand why it doesn't work correctly with multiple threads.
I believe that Netty's default thread pool is configured with as many threads as there are cores on the machine, the idea being to handle requests asynchronously and in a non-blocking way (where possible).
Your /status test includes a database transaction, which blocks because of the intrinsic design of database drivers etc. So your performance - at a high level - is essentially a result of:
a.) you are running a pretty hefty test of 10,000 requests, attempting to run 100 requests in parallel
b.) you are calling into a database for each request, so this will not be quick (relatively speaking, compared to some non-blocking I/O operation)
A couple of questions/considerations for you:
Machine Spec.?
What is the spec. of the machine you are running your application and test on?
How many cores?
If you only have 8 cores available then you will only have 8 threads running in parallel at any time. That means those batches of 100 requests at a time will be queueing up.
Consider what is running on the machine during the test
It sounds like you are running the application AND Apache Bench on the same machine, so be aware that your application and the testing tool will both be contending for those cores (in addition to any background processes, such as the OS, also contending for them).
What will the load be?
Predicting load is difficult, right? If you do think you are likely to have 100 requests into the database at any one time then you may need to think about:
a. your production environment may need a couple of instances to handle the load
b. try changing the config of Netty's default thread pool to increase the number of threads
c. think about your application architecture - can you cache any of those results instead of going to the database for each request? (a small caching sketch follows this list)
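A small sketch of the caching idea in (c), memoising the /status lookup for a short TTL so bursts of requests don't each hit the database (load_status_from_db is a hypothetical stand-in for the real query):

import time

TTL_SECONDS = 1.0
_cache = {"value": None, "expires": 0.0}

def load_status_from_db():
    return {"status": "ok"}            # placeholder for the real (blocking) query

def get_status():
    now = time.monotonic()
    if now >= _cache["expires"]:       # refresh at most once per TTL window
        _cache["value"] = load_status_from_db()
        _cache["expires"] = now + TTL_SECONDS
    return _cache["value"]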
This may be linked to the use of database access (a synchronous task) within one of your handlers (at least in your TrafficShappingHandler)?
You might need to "make async" your database calls (other threads in a producer/consumer arrangement, for instance) - a rough sketch follows below.
If it is something else, I do not have enough information...
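A rough sketch of that producer/consumer idea in Python (in Java/Netty the equivalent is typically handing the blocking call to a separate EventExecutorGroup or thread pool; query_status is a hypothetical stand-in for the real database call):

import asyncio
from concurrent.futures import ThreadPoolExecutor

db_pool = ThreadPoolExecutor(max_workers=20)      # sized like the DB connection pool

def query_status():
    import time
    time.sleep(0.005)                             # placeholder for the blocking driver call
    return {"status": "ok"}

async def handle_request():
    loop = asyncio.get_running_loop()
    # The event-loop thread only schedules the work; the blocking query
    # runs on a worker thread and the result is awaited without blocking.
    return await loop.run_in_executor(db_pool, query_status)

async def main():
    results = await asyncio.gather(*(handle_request() for _ in range(100)))
    print(len(results), "responses")

asyncio.run(main())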

Locust.io: Controlling the request per second parameter

I have been trying to load test my API server using Locust.io on EC2 compute-optimized instances. It provides an easy-to-configure option for setting the consecutive request wait time and the number of concurrent users. In theory, rps = #_users / wait time. However, while testing, this rule breaks down beyond a fairly low threshold of #_users (around 1200 users in my experiment). The variables hatch_rate and #_of_slaves, including in a distributed test setting, had little to no effect on the rps.
Experiment info
The test was done on a C3.4x AWS EC2 compute node (AMI image) with 16 vCPUs, General SSD and 30GB RAM. During the test, CPU utilization peaked at 60% max (depending on the hatch rate, which controls the concurrent processes spawned), staying under 30% on average.
Locust.io
Setup: uses pyzmq, set up with each vCPU core as a slave. Single POST request setup with a request body of ~20 bytes and a response body of ~25 bytes. Request failure rate: < 1%, with a mean response time of 6 ms.
Variables: time between consecutive requests set to 450ms (min: 100ms and max: 1000ms), hatch rate at a comfy 30 per second, and RPS measured by varying #_users.
The RPS follows the equation as predicted for up to 1000 users. Increasing #_users beyond that has diminishing returns, with a cap reached at roughly 1200 users. #_users here isn't the only independent variable; changing the wait time affects the RPS as well. However, changing the experiment setup to a 32-core instance (c3.8x) or 56 cores (in a distributed setup) doesn't affect the RPS at all.
So really, what is the way to control the RPS? Is there something obvious I am missing here?
(one of the Locust authors here)
First, why do you want to control the RPS? One of the core ideas behind Locust is to describe user behavior and let that generate load (requests in your case). The question Locust is designed to answer is: How many concurrent users can my application support?
I know it is tempting to go after a certain RPS number and sometimes I "cheat" as well by striving for an arbitrary RPS number.
But to answer your question: are you sure your Locusts don't end up in a deadlock? As in, they complete a certain number of requests and then become idle because they have no other task to perform? It's hard to tell what's happening without seeing the test code.
Distributed mode is recommended for larger production setups and most real-world load tests I've run have been on multiple but smaller instances. But it shouldn't matter if you are not maxing out the CPU. Are you sure you are not saturating a single CPU core? Not sure what OS you are running but if Linux, what is your load value?
While there is no direct way of controlling RPS, you can try the constant_pacing and constant_throughput options for wait_time.
From the docs:
https://docs.locust.io/en/stable/api.html#locust.wait_time.constant_throughput
In the following example the task will always be executed once every 1 second, no matter the task execution time:
class MyUser(User):
    wait_time = constant_throughput(1)
constant_pacing is the inverse of this.
So if you run with 100 concurrent users, the test will run at 100 RPS (assuming each request takes less than 1 second in the first place).
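A self-contained sketch (the /status endpoint and host are placeholders): with constant_throughput(1) each simulated user aims for one request per second, so 100 users give roughly 100 RPS as long as individual requests finish well within a second.

from locust import HttpUser, task, constant_throughput

class StatusUser(HttpUser):
    wait_time = constant_throughput(1)   # target: one task iteration per user per second

    @task
    def get_status(self):
        self.client.get("/status")       # placeholder endpoint

You would run it with something like locust -f locustfile.py --host http://localhost:8080 and set the user count from the web UI or command line.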

Approach for REST request with long execution time?

We are building a REST service that will take about 5 minutes to execute. It will be only called a few times a day by an internal app. Is there an issue using a REST (ie: HTTP) request that takes 5 minutes to complete?
Do we have to worry about timeouts? Should we be starting the request in a separate thread on the server and have the client poll for the status?
This is one approach.
Create a new request to perform ProcessXYZ
POST /ProcessXYZRequests
201 Created
Location: /ProcessXYZRequest/987
If you want to see the current status of the request:
GET /ProcessXYZRequest/987
<ProcessXYZRequest Id="987">
<Status>In progress</Status>
<Cancel method="DELETE" href="/ProcessXYZRequest/987"/>
</ProcessXYZRequest>
when the request is finished you would see something like
GET /ProcessXYZRequest/987
<ProcessXYZRequest>
<Status>Completed</Status>
<Results href="/ProcessXYZRequest/Results"/>
</ProcessXYZRequest>
Using this approach you can easily imagine what the following requests would give
GET /ProcessXYZRequests/Pending
GET /ProcessXYZRequests/Completed
GET /ProcessXYZRequests/Failed
GET /ProcessXYZRequests/Today
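A rough server-side sketch of this factory pattern in Python/Flask (the in-memory job store, the background thread, and the 5-second stand-in for the real 5-minute job are all simplifications):

import threading
import time
import uuid

from flask import Flask, jsonify

app = Flask(__name__)
jobs = {}   # job id -> {"status": ..., "result": ...}

def process_xyz(job_id):
    time.sleep(5)                                  # stand-in for the real ~5-minute job
    jobs[job_id].update(status="Completed", result="done")

@app.route("/ProcessXYZRequests", methods=["POST"])
def create_request():
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"status": "In progress", "result": None}
    threading.Thread(target=process_xyz, args=(job_id,), daemon=True).start()
    response = jsonify(id=job_id, status="In progress")
    response.status_code = 201
    response.headers["Location"] = f"/ProcessXYZRequest/{job_id}"
    return response

@app.route("/ProcessXYZRequest/<job_id>", methods=["GET"])
def get_request(job_id):
    job = jobs.get(job_id)
    if job is None:
        return jsonify(error="unknown request"), 404
    return jsonify(job)

if __name__ == "__main__":
    app.run()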
Assuming that you can configure HTTP timeouts using whatever framework you choose, you could request via a GET and just hang for 5 minutes.
However, it may be more flexible to initiate an execution via a POST, get a receipt (a number/id, whatever), and then perform a GET using that receipt 5 minutes later (and perhaps retry, given that your procedure won't take exactly 5 minutes every time). If the request is still ongoing, return an appropriate HTTP error code (404 perhaps, but what would you return for a GET with a non-existent receipt?); otherwise return the results if available.
As Brian Agnew points out, 5 minutes is entirely manageable, if somewhat wasteful of resources, if one can control timeout settings. Otherwise, at least two requests must be made: The first to get the result-producing process rolling, and the second (and third, fourth, etc., if the result takes longer than expected to compile) to poll for the result.
Brian Agnew and Darrel Miller both suggest similar approaches for the two(+)-step approach: POST a request to a factory endpoint, starting a job on the server, and later GET the result from the returned result endpoint.
While the above is a very common solution, and indeed adheres to the letter of the REST constraints, it smells very much of RPC. That is, rather than saying, "provide me a representation of this resource", it says "run this job" (RPC) and then "provide me a representation of the resource that is the result of running the job" (REST). EDIT: I'm speaking very loosely here. To be clear, none of this explicitly defies the REST constraints, but it does very much resemble dressing up a non-RESTful approach in REST's clothing, losing out on its benefits (e.g. caching, idempotency) in the process.
As such, I would rather suggest that when the client first attempts to GET the resource, the server should respond with 202 "Accepted" (http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html#sec10.2.3), perhaps with "try back in 5 minutes" somewhere in the response entity. Thereafter, the client can poll the same endpoint to GET the result, if available (otherwise return another 202, and try again later).
Some additional benefits of this approach are that single-use resources (such as jobs) are not unnecessarily created, two separate endpoints need not be queried (factory and result), and likewise the second endpoint need not be determined from parsing the response from the first, thus simpler. Moreover, results can be cached, "for free" (code-wise). Set the cache expiration time in the result header according to how long the results are "valid", in some sense, for your problem domain.
I wish I could call this a textbook example of a "resource-oriented" approach, but, perhaps ironically, Chapter 8 of "RESTful Web Services" suggests the two-endpoint, factory approach. Go figure.
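A rough client-side sketch of the 202-then-poll approach described above, using the requests library (the URL and timings are placeholders):

import time
import requests

def fetch_result(url, poll_interval=30, max_wait=600):
    # Keep GETting the same resource until the server stops answering 202.
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        response = requests.get(url, timeout=10)
        if response.status_code == 202:            # still being produced; try again later
            time.sleep(poll_interval)
            continue
        response.raise_for_status()                # surface real errors
        return response.text                       # the result is ready
    raise TimeoutError(f"{url} not ready after {max_wait} seconds")

result = fetch_result("https://api.example.com/reports/daily")   # placeholder URL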
If you control both ends, then you can do whatever you want. E.g. browsers tend to launch HTTP requests with "connection close" headers so you are left with fewer options ;-)
Bear in mind that if you've got NAT/firewalls in between, you might see connections dropped if they are inactive for some time.
Could I suggest registering a "callback" procedure? The client issues the request with a "callback end-point" to the server and gets a "ticket". Once the server finishes, it calls back the client... or the client can check the request's status via the ticket identifier.