pfSense Enterprise Firewall Testing - concurrency

I am working on firewall performance testing and need to know about pfSense Enterprise Firewall performance. What are the maximum concurrent users and maximum sessions it supports?
I have used JMeter for concurrent users, but I could not find any tool to measure the maximum sessions of a firewall, or both the maximum concurrent users and maximum sessions together. Is there a tool to test the maximum concurrent users and maximum sessions of a firewall?

Depending on what type of traffic you want to simulate, the number of sessions may be equal to or higher than the number of threads.
In the majority of cases the number of users == the number of sessions.
For the HTTP protocol, if you have enabled embedded resources downloading, requests that include embedded resources can produce up to 6x more sessions, since browsers open several parallel connections per host. This can be visualized using, for example, the Server Hits Per Second chart (installable via the JMeter Plugins Manager).
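As a rough complement to JMeter, a minimal sketch of a session-counting probe (the target host and port are assumptions) could open TCP connections until the firewall, or local OS limits, refuse more:

```python
# Sketch: hold many concurrent TCP sessions open through the firewall to
# approximate its maximum-session limit. Host and port are assumptions.
import socket

def open_sessions(host, port, target, timeout=5):
    """Open up to `target` TCP connections; return how many succeeded."""
    conns = []
    try:
        for _ in range(target):
            s = socket.create_connection((host, port), timeout=timeout)
            conns.append(s)
    except OSError:
        pass  # the firewall (or a local limit) refused further sessions
    count = len(conns)
    for s in conns:
        s.close()
    return count
```

Note that ephemeral ports and file-descriptor limits on the testing machine will usually cap this long before an enterprise firewall's session table does, so a single probe host can only give a lower bound.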

Related

appConcurrentRequestLimit exceeded on IIS

I deployed a .NET Core MVC application on AWS Windows Server 2019 (32 GB RAM, 8 cores). It is an online exam application, so 100k concurrent requests should be handled. Which server should I use?
The concurrent connection behaviour depends on several settings: the maximum concurrent connections in the site's advanced settings, the queue length and maximum worker processes in the application pool's advanced settings, and the maximum threads in the thread pool. Besides, note that serverRuntime/httpRuntime has an appConcurrentRequestLimit of 5000 by default. So if you need to achieve high concurrency, you can go to:
IIS Manager -> site -> Configuration Editor -> system.webServer/serverRuntime -> appConcurrentRequestLimit.
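The same limit can also be set in configuration; a sketch of the relevant fragment (the value 100000 is just an illustrative target, not a recommendation):

```xml
<system.webServer>
  <!-- default appConcurrentRequestLimit is 5000; raise it for high concurrency -->
  <serverRuntime appConcurrentRequestLimit="100000" />
</system.webServer>
```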

Openshift roundrobin request across all pods

We want round-robin of requests across all pods deployed in OpenShift.
I have configured the annotations below in the Route config, but the sequence of calls to the pods is random:
haproxy.router.openshift.io/balance : roundrobin
haproxy.router.openshift.io/disable_cookies: 'true'
We have spun up 3 pods and want requests to follow the sequence
pod1, pod2, pod3, pod1, pod2, pod3, pod1, ...
But the real behaviour after setting the above annotations is random, e.g.:
pod1, pod1, pod2, pod2, pod3, pod1, pod2, pod2, ... which is incorrect.
Do we need to configure anything else in OpenShift to make it a perfect round-robin?
If you want to access pod1, pod2, pod3 in order, then you should use leastconn on the same pod group.
leastconn The server with the lowest number of connections receives the
connection. Round-robin is performed within groups of servers
of the same load to ensure that all servers will be used. Use
of this algorithm is recommended where very long sessions are
expected, such as LDAP, SQL, TSE, etc... but is not very well
suited for protocols using short sessions such as HTTP. This
algorithm is dynamic, which means that server weights may be
adjusted on the fly for slow starts for instance.
The roundrobin algorithm of HAProxy distributes requests equally, but it does not guarantee the order in which servers in the group are accessed.
roundrobin Each server is used in turns, according to their weights.
This is the smoothest and fairest algorithm when the server's
processing time remains equally distributed. This algorithm
is dynamic, which means that server weights may be adjusted
on the fly for slow starts for instance. It is limited by
design to 4095 active servers per backend. Note that in some
large farms, when a server becomes up after having been down
for a very short time, it may sometimes take a few hundreds
requests for it to be re-integrated into the farm and start
receiving traffic. This is normal, though very rare. It is
indicated here in case you would have the chance to observe
it, so that you don't worry.
Refer to the HAProxy balance (algorithm) documentation for details of the balance algorithm options.
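Putting the suggestion together, a sketch of the Route with the leastconn annotation (the metadata and service names are placeholders):

```yaml
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-app            # placeholder route name
  annotations:
    haproxy.router.openshift.io/balance: leastconn
    haproxy.router.openshift.io/disable_cookies: 'true'
spec:
  to:
    kind: Service
    name: my-app          # placeholder service name
```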

JMeter: how to produce a large number of service requests per second, e.g. 100,000 req/sec

I have been doing load tests in my company for a long time, but TPS has never passed 500 transactions per minute. I have a more challenging problem right now.
Problem:
My company will start a campaign and ask its customers a question, and the first correct answer will be rewarded. Analysts expect 100,000 requests per second at the global maximum (this doesn't seem realistic to me, but it is negotiable).
Resources:
JMeter,
2 different service requests,
5 slaves with 8 GB RAM each,
80 Mbps internet connection,
3.0 GHz CPU,
a master computer with the same capabilities as the slaves.
Question:
How can I simulate this scenario, and is it possible? What are the limitations? What should the load model be? Are there any alternatives?
Any comment is appreciated.
Your load test always needs to represent real usage of the application by real users, so first of all carefully implement your test scenario to mimic a real human using a real browser, with all its behaviour like:
cookies
headers
embedded resources (proper handling of images, scripts, styles, fonts, etc.)
cache
think times
etc.
Make sure your test is following JMeter Best Practices, i.e.:
being run in non-GUI mode
all listeners are disabled
JVM settings are optimised for maximum performance
etc.
Once done, you need to set up monitoring of your JMeter engines' health metrics (CPU, RAM, swap usage, network and disk IO, JVM stats, etc.) in order to see whether there is headroom to continue. The JMeter PerfMon Plugin is very handy, as its results can be correlated with the test metrics.
Start your test with 1 virtual user and gradually increase the load until you reach the target throughput, your application under test dies, or the JMeter engine(s) run out of resources, whichever comes first. Depending on the outcome you will either report success or a defect, or will need to request more hosts to use as JMeter engines / upgrade the existing hardware.
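To get a feel for whether the target is reachable with the given hardware, Little's Law relates concurrency, throughput, and response time. A back-of-the-envelope sketch (the 200 ms response time is an assumption, not a measurement):

```python
# Sketch: size the virtual-user count for a target throughput using
# Little's Law: threads = throughput * (response_time + think_time).
def required_threads(target_rps, response_time_s, think_time_s=0.0):
    """Minimum concurrent virtual users needed to sustain target_rps."""
    return int(target_rps * (response_time_s + think_time_s))

# Assumed example: 100,000 req/s at a 200 ms response time needs
# ~20,000 threads, i.e. ~4,000 threads on each of 5 JMeter slaves.
threads = required_threads(100_000, 0.2)
per_slave = threads // 5
```

Whether a single 8 GB slave can sustain thousands of threads depends heavily on the test plan, which is exactly what the gradual ramp-up above is meant to reveal.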

Max number for webservice to service

What is the maximum number of requests a web service can accept and process at the same time? Is it possible to set a limit, and what problems might arise if the web service gets too many requests? How can that situation be handled?
The number of requests handled by the web service at a given time depends on the architecture of your web server. If you want to improve the number of requests served at a given time, you should improve the architecture and the performance.
Please refer to the Microsoft article for more information on web service performance.
You can set limits by implementing a model on top of that which controls the rate at which user requests hit the server in a given time. The most recommended approach is to implement this in your middleware platform. That will be a security measure too: it can protect against threats like Denial-of-Service attacks.
In middleware solutions like WSO2 API Manager, throttling policies have been implemented as a solution for access control. You can check the documentation on Throttling Policies for more information on how the number of hits to a server in a given time window is controlled using middleware logic.
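As an illustration of the throttling idea (a generic sketch, not WSO2's actual implementation), a minimal token-bucket limiter might look like:

```python
# Sketch: a token-bucket rate limiter admitting at most `rate` requests
# per second, with bursts up to `capacity`. Names are illustrative only.
import time

class TokenBucket:
    def __init__(self, rate, capacity):
        self.rate = float(rate)          # tokens replenished per second
        self.capacity = float(capacity)  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        """Return True if a request may proceed, False if it is throttled."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Requests that return False would typically receive an HTTP 429 (Too Many Requests) response instead of reaching the backend.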

Is this an excessive number of active sessions on a ColdFusion server?

I have recently been having some ColdFusion (10) stability issues on one of my servers. The basic server details are as follows:
2.6GHz dual processor, 6GB RAM, Windows Server 2008 R2
The hosting company who manage this server have said that there are too many active sessions on the server and that this could be the cause of the issue.
I have been monitoring the sessions on the server, and at peak times there may be 1500-2000 active sessions taking up about 1.5-2 MB of memory in total. The sessions are stored in memory, and for the most part these are real user sessions, as we run some code to detect bots and issue them with very short sessions.
Given the above information, does this seem like an unreasonable number of active sessions or an unreasonable amount of session memory usage? Any advice would be appreciated; please let me know if any more information would be helpful.
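As a sanity check on the question's own figures, the per-session memory footprint is easy to work out (treating 2 MB across 2,000 sessions as averages):

```python
# Back-of-the-envelope check using the figures from the question.
sessions = 2000
total_bytes = 2 * 1024 * 1024        # ~2 MB of session memory at peak
per_session = total_bytes / sessions # works out to roughly 1 KB per session
```

Roughly 1 KB per session suggests the session memory itself is a very small fraction of the server's 6 GB of RAM.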