I have to do performance testing in JMeter with concurrent users. I am using the Synchronizing Timer and have set the ramp-up time to zero. When I check the report, it shows the start time, and the milliseconds vary across the users. Is this correct or not? If it's not correct, what should I do?
Looking at your screenshot, it appears that you have the Synchronizing Timer set to 600. If that's the case, then yes, it is correct. JMeter will release the threads "at the same time", but it will never be the exact same instant.
I would recommend monitoring the server side of your application to see how many concurrent users are in the system.
I'm calling ntp_gettime() and it is performing as expected; however, if I kill ntpd I still get the correct behaviour, with the return value showing no issues. This suggests that ntp_gettime() does not call through to ntpd, which is what I had believed was happening.
I'm trying to check that ntpd is still running correctly and that it still has a valid connection. I now assume that ntpd updates the system clock at its defined interval, and that the ntp_gettime() call only reads from the system.
My question is: can ntp_gettime() be used to determine whether ntpd is running and the server connection is still valid, or have I just made a mistake somewhere?
If not, is there a way to do this?
The answer to your question is "no". ntp_gettime() is not an interface to the NTP server and cannot be used to determine the status of the NTP server. It is something completely different. According to the glibc documentation:
The ntp_gettime and ntp_adjtime functions provide an interface to
monitor and manipulate the system clock to maintain high accuracy
time. For example, you can fine tune the speed of the clock or
synchronize it with another time source.
A typical use of these functions is by a server implementing the
Network Time Protocol to synchronize the clocks of multiple systems
and high precision clocks.
In other words, this is what the NTP server itself uses to talk to the kernel, to take care of business.
Checking the daemon operation
ntpctl is a program that displays information about the running ntpd daemon. The command below reports how many servers the daemon is syncing with, whether the system clock is synced or unsynced, and the clock offset. ntpctl ships with the OpenNTPD package; for further information see its manpage: https://man.openbsd.org/ntpctl
ntpctl -s all
The way I ended up doing it is by piping the output of an ntpstat call and processing it.
That is, I check the output for "unsynchronised", then for "synchronised", and treat the absence of both (after the "unsynchronised" check) as indeterminate, acting accordingly on each result.
If there is a better way to get the same result, please let me know, because I think this is messy.
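For what it's worth, here is a minimal Python sketch of that approach. Rather than scraping the text, it relies on ntpstat's documented exit codes (0 = synchronised, 1 = unsynchronised, 2 = indeterminate, e.g. ntpd not contactable):

import subprocess

def ntp_sync_status():
    # Map ntpstat's documented exit codes to a status string:
    #   0 = clock is synchronised
    #   1 = clock is not synchronised
    #   2 = state indeterminate (e.g. ntpd is not contactable)
    try:
        result = subprocess.run(["ntpstat"], capture_output=True, text=True)
    except FileNotFoundError:
        return "ntpstat not installed"
    return {0: "synchronised", 1: "unsynchronised", 2: "indeterminate"}.get(
        result.returncode, "unknown"
    )

print(ntp_sync_status())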
I have a certain program that receives input and returns an output, with a run time of about 2 seconds.
Now I want to run this program online, on a server that can handle multiple connections (let's say up to 100k).
On each client-server session the program will launch;
the client will hand the server the program's input and will wait for the program to finish in order to receive the server's response (the program's output).
Let's say the server's host is a very powerful machine, e.g. 16 cores.
Can this work, or is that too much runtime per client?
What is the maximum runtime this kind of program can have?
I'm posting this as an answer because it's too large to place as a comment.
Can this work? It depends. It depends because there are a lot of variables in this problem. Let's look at some of them:
you say it takes 2 seconds to compute a result. Where and how are those seconds spent? Is this pure computation, or are you accessing a database or the file system? Is it CPU bound or I/O bound? If you run computations for the full 2 seconds then you are consuming CPU, which means you can simultaneously serve only 16 clients, one per core (see the rough calculation after this list). Are you hitting a database? Is it on the powerful server or on some other machine? If the database is the bottleneck, move it to the powerful server and put SSD drives in it.
can you improve processing for one client? What's more efficient: doing the processing on one core, or spreading it across all the cores? If you can parallelize, can you limit thread contention?
is CPU all you need? How about memory? Any backend service you access? Are those over the network? Do you have enough bandwidth?
related to memory: what language/platform are you using? Does it have a garbage collector? Do you generate a lot of objects to compute a result? Does the GC kick in and pause your application while it cleans up and compacts the memory? Do you allocate enough memory for the application to run?
can you cache responses and serve them to other clients, or is each response custom to its client? Can you precompute the results and then just serve them to clients, or are the inputs unpredictable?
have you tried running performance tests and profiling the application to see where hotspots show up? Can you do something about them?
do you have any imposed performance criteria? How many clients do you want to support simultaneously? Is 2 seconds too much? Can clients live with more? How much more? At how many seconds does response time become unacceptable?
do you need one big server to run this setup, or would smaller ones work better (i.e. scale horizontally instead of vertically)?
etc.
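To make the CPU-bound case from the first point concrete, here is a rough back-of-the-envelope calculation; the numbers are illustrative assumptions, not measurements:

cores = 16
cpu_seconds_per_request = 2.0

# If the full 2 seconds are pure computation, each core completes one
# request every 2 s, so the whole machine tops out at:
max_throughput = cores / cpu_seconds_per_request   # 8 requests/second
print(f"max throughput: {max_throughput:.0f} req/s")

# Little's law: concurrent users = throughput * response time.
# At 8 req/s and 2 s per request, only ~16 users are served at once;
# 100k simultaneous clients would queue essentially forever on one box.
concurrent_users = max_throughput * cpu_seconds_per_request
print(f"sustainable concurrency: {concurrent_users:.0f} users")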
Nobody can answer this for you. You have to do an analysis of your application, run some tests, profile it, optimize it, then repeat until you are satisfied with the results.
I am using the JMeter tool for load testing. I want all threads to execute simultaneously (at the same time).
So I configured:
Number of Threads: 20
Ramp-up Period: empty
Loop Count: 1
Then I ran JMeter.
After getting the result, I looked at it in the View Results in Table listener.
The start times displayed there show that the threads executed one by one, not simultaneously. I have included the image as well.
Could you tell me how to run the threads concurrently, all at the same time?
Starting Time for threads.
Add a Synchronizing Timer to your test plan and set its "Number of Simultaneous Users to Group by" to 20, the same as your thread count, so that all threads are released together.
Just add a Synchronizing Timer to your Test Plan.
I implemented a simple HTTP server (link), but the result of the test (ab -n 10000 -c 100 http://localhost:8080/status) is very bad (see test.png in the previous link).
I don't understand why it doesn't work correctly with multiple threads.
I believe that, by default, Netty's thread pool is configured with as many threads as there are cores on the machine, the idea being to handle requests asynchronously and without blocking (where possible).
Your /status test includes a database transaction, which blocks because of the intrinsic design of database drivers etc. So your performance, at a high level, is essentially a result of:
a.) you are running a pretty hefty test of 10,000 requests, attempting to run 100 requests in parallel
b.) you are calling into a database for each request, so this will not be quick (relatively speaking, compared to some non-blocking I/O operation)
A couple of questions/considerations for you:
Machine Spec.?
What is the spec. of the machine you are running your application and test on?
How many cores?
If you only have 8 cores available then you will only have 8 threads running in parallel at any given time, which means those batches of 100 requests will be queuing up.
Consider what is running on the machine during the test
It sounds like you are running the application AND Apache Bench on the same machine, so be aware that your application and the testing tool will both be contending for those cores (in addition to any background processes also contending for them, such as the OS).
What will the load be?
Predicting load is difficult, right? If you do think you are likely to have 100 requests into the database at any one time then you may need to think about:
a. your production environment may need a couple of instances to handle the load
b. try changing the config of Netty's default thread pool to increase the number of threads
c. think about your application architecture: can you cache any of those results instead of going to the database for each request? (A small caching sketch follows below.)
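As a minimal illustration of point (c), memoising results when the same inputs recur; query_database here is a hypothetical stand-in for your real data-access call:

from functools import lru_cache

def query_database(key):
    # Hypothetical stand-in for the real (slow, blocking) database call.
    return f"result for {key}"

# Identical keys are served from memory instead of hitting the database.
@lru_cache(maxsize=1024)
def cached_lookup(key):
    return query_database(key)

print(cached_lookup("status"))   # first call hits the "database"
print(cached_lookup("status"))   # second call is served from the cache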
This may be linked to your use of database access (a synchronous task) within one of your handlers (at least in your TrafficShapingHandler)?
You might need to "make async" your database calls (hand them off to other threads in a producer/consumer arrangement, for instance); a sketch of that pattern follows below.
If it is something else, I do not have enough information...
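To illustrate the producer/consumer idea in a language-agnostic way, here is a Python sketch (in Netty itself you would typically hand the blocking work to a separate executor instead): the handler enqueues the job and returns immediately, while a small pool of workers absorbs the blocking calls.

import queue
import threading

def query_database(request_id):
    # Hypothetical stand-in for the blocking database call in the handler.
    return f"result for request {request_id}"

work = queue.Queue()

def worker():
    # Consumer: pulls jobs off the queue and performs the blocking work,
    # keeping the I/O (event-loop) threads free to accept new connections.
    while True:
        request_id, on_done = work.get()
        try:
            on_done(query_database(request_id))
        finally:
            work.task_done()

for _ in range(8):                     # a small pool of consumer threads
    threading.Thread(target=worker, daemon=True).start()

# Producer: an I/O handler enqueues the job and returns immediately;
# on_done stands in for writing the response back to the client.
work.put((42, lambda result: print(result)))
work.join()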
I'm building my first web application after many years of desktop application development (I'm using Django/Python but maybe this is a completely generic question, I'm not sure). So please beware - this may be an ultra-newbie question...
One of my user processes involves heavy processing on the server (i.e. the user inputs something, and the server needs ~10 minutes to process it). In a desktop application, what I would do is throw the user input onto a queue protected by a mutex, and have a dedicated low-priority background thread blocking on that queue.
However, in a web application everything seems to be oriented towards synchronous handling of HTTP requests.
Assuming I will use the database as my queue, what is best practice architecture for running a background process?
There are two schools of thought on this (at least).
Throw the work on a queue and have something else outside your web-stack handle it.
Throw the work on a queue and have something else in your web-stack handle it.
In either case, you create work units in a queue somewhere (e.g. a database table) and let some process take care of them.
I typically work with number 1, where I have a dedicated Windows service that takes care of these things. You could also do this with SQL jobs or something similar.
The advantage of option 2 is that you can more easily keep all your code in one place, in the web tier. You'd still need something that triggers the execution (e.g. loading the web page that processes work units, with a sufficiently high timeout), but that could easily be accomplished with various mechanisms.
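For the database-as-queue idea specifically, here is a minimal standalone sketch using sqlite3 so it runs as-is; in Django you would express the same thing with a model and the ORM. The table and column names are purely illustrative:

import sqlite3
import time

def process(payload):
    print("working on", payload)   # stand-in for the ~10 minute job

def run_worker(db_path="jobs.db", poll_interval=5):
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS jobs (
        id INTEGER PRIMARY KEY, payload TEXT, status TEXT DEFAULT 'queued')""")
    conn.commit()
    while True:
        row = conn.execute(
            "SELECT id, payload FROM jobs WHERE status='queued' ORDER BY id LIMIT 1"
        ).fetchone()
        if row is None:
            time.sleep(poll_interval)   # nothing queued; poll again later
            continue
        job_id, payload = row
        conn.execute("UPDATE jobs SET status='running' WHERE id=?", (job_id,))
        conn.commit()
        try:
            process(payload)
            conn.execute("UPDATE jobs SET status='done' WHERE id=?", (job_id,))
        except Exception:
            conn.execute("UPDATE jobs SET status='failed' WHERE id=?", (job_id,))
        conn.commit()

if __name__ == "__main__":
    run_worker()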
Since:
1) This is a common problem,
2) You're new to your platform
-- I suggest that you look in the contributed libraries for your platform to find a solution to handle the task. In addition to queuing and processing the jobs, you'll also want to consider:
1) status communications between the worker and the web-stack. This will enable web pages that show the percentage-complete number for the job, assure the human that the job is progressing, etc. (a small sketch follows this list).
2) How to ensure that the worker process does not die.
3) If a job has an error, will the worker process automatically retry it periodically?
Will you or an operations person be notified if a job fails?
4) As the number of jobs increases, can additional workers be added to gain parallelism?
Or, even better, can workers be added on other servers?
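As a small sketch of point 1, continuing the hypothetical jobs-table idea from the earlier answer (the percent_complete column is an illustrative addition, not an established schema):

import sqlite3

def report_progress(conn, job_id, pct):
    # Worker side: record how far along the job is.
    conn.execute("UPDATE jobs SET percent_complete=? WHERE id=?", (pct, job_id))
    conn.commit()

def read_progress(conn, job_id):
    # Web side: a view polls this value to render a progress bar.
    row = conn.execute(
        "SELECT percent_complete FROM jobs WHERE id=?", (job_id,)
    ).fetchone()
    return row[0] if row else None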
If you can't find a good solution in Django/Python, you can also consider porting a solution from another platform to yours. I use delayed_job for Ruby on Rails. The worker process is managed by runit.
Regards,
Larry
Speaking generally, I'd look at running background processes on a different server, especially if your web server has any kind of load.
Running long processes in Django: http://iraniweb.com/blog/?p=56