I'm creating a logistics project in C++ where I have to compare the execution time of a solver I wrote against an open-source solver.
What I need is to stop my solver if it runs longer than the open-source solver did.
The problem is that I couldn't find anything about a timer that stops the currently executing program.
Can someone help me?
You could just launch a future that sleeps for the given time limit and then calls std::exit.
Without further information about what you are solving, I would suggest running both solvers in a series of benchmarks, with multiple objectives if possible, since they might perform differently in different situations. Running both in rigorous benchmarks will help make sure your results are valid. Also, even if your solver takes longer, knowing the time difference will help you optimize it.
I have a Cilk program that uses the libpuzzle library. My task is to parallelize the sorting of images based on their resemblance, and I am using a parallel cilk_for loop to compare all the images with a reference image. What I noticed was that on the first run of the program the execution was slow, but on the second run it sped up and I could see all the logical cores working at 100%. I repeated this every time I built the project, always running the program twice, and this pattern held. I changed the image distributions as well and the pattern still holds. Any ideas what might cause a parallel program to run poorly on the first run and well on the second? If anyone has had similar experiences, could you please share what you did to fix the problem?
Thank you
Following up on my previous question, How can I find ROI and detect markers inside — thank you to the professionals who helped me. I have already finished that task. :)
My next question is related to the previous one.
Now I would like to track each blob individually by calling the detection function (named track(param)) with different parameters (the blob numbers) at the same time, so that the functions return each blob's position.
Which techniques would let me execute the same function several times concurrently?
I am confused about whether OpenMP, OpenCL, or some other parallel programming approach makes it possible to return the outputs at the same time.
Sorry for my poor English. Thank you to all helpers :)
It sounds to me like you just need multi-threading, which can be done with a library such as Boost.Thread. You can spawn one thread to track each blob, then use join at the end to ensure all threads have completed, and combine the results.
I want to check the performance of an application (I only have the exe, no source code) by running it multiple times and comparing the results. I didn't find much on the internet regarding this topic.
Since I have to do it with multiple inputs, I thought doing it through code (no restriction on the language used) could make things easier, as I may have to repeat the runs many times.
Can anyone help me get started?
Note: by performance I mean the memory usage, CPU usage, and possibly the time taken.
(I'm currently using perfmon on Windows with the necessary counters to check these parameters, and noting the values down manually.)
Thanks
It strongly depends upon your operating system. On Linux, you could use the time utility. And strace might help you understand the system calls that are used.
I have no idea of the equivalent on Windows systems.
I think that you could create a bash/batch script to call your program as many times as you need and with different inputs.
You could then have your script create a CSV file that contains the time it took to execute your program (start date and end date for example). CSV files are usually compatible with most spreadsheet programs like Excel, so I think that can make it easier for you to process your data, like creating means and standard deviations.
I don't have much to say regarding the memory and CPU usage, but if you are in Windows it wouldn't hurt to take a look at the Process Explorer and the Process Monitor (you can find them in this page). I think that they might help you in your task.
Finally if you are in Linux I think that you might be able to use grep with the top command to gather some statistics.
Regards,
Felipe
If you want exact results, Rational Purify (on Windows) or Valgrind (on Linux) are the best tools; these run your application in a virtual machine that can be instructed to do exact cycle counting.
In another post a utility named timethis.exe was mentioned for measuring time under Windows. Maybe it is useful for your purposes.
I ended up automating what I had been noting down manually with perfmon:
I used the PerformanceCounter class available in .NET, sampled the particular application's counters at regular intervals, and generated a graph from those values.
Thanks :)
I am trying to multiply 2 matrices together, using 1 thread for each cell of the output.
I am using C++/g++ on Unix.
How would I go about doing this?
Can I do this in a loop?
Here's my suggestion:
1. Write a function that will compute one output cell. Give it parameters indicating which cell to compute.
2. Write a single-threaded program that uses a loop to compute every cell (calling the function from step 1). Store all the results and don't write them out until you have finished computing all cells.
3. Modify the program so that instead of each loop iteration calling the function, each iteration creates a thread to execute the function.
4. Figure out how to make the "main" program wait until all threads have finished before writing out all the results.
I think that will give you a strategy to work out a solution, without me doing your homework for you.
If you have a go and it doesn't work, post your code on here and people will help you debug it. The important part is not for you to get a good answer, it is for you to learn how to solve this type of problem -- so it won't really help you if somebody just gives you the answer.
I am debugging a number-crunching C++ program with GDB. It takes 10 minutes until I reach the interesting function to be debugged. Then I inspect variables, understand parts of the program, recompile, and run GDB again until I reach that point once more.
This procedure is sometimes quite time consuming. I wonder if it can somehow be accelerated. Any ideas?
Thanks
You can have your compiler optimize the code so that it runs faster before you start GDB (for example -O2 together with -g, or GCC's -Og, which optimizes while preserving a reasonable debugging experience). Have you written good unit tests? Having a decent test suite might save you considerable time and prevent you from spending an undue amount in the debugger.
There are GDB canned command sequences (a sort of mini-language in which you can automate the debugging process), and there are also Python bindings that can help you automate GDB. That said, debugging should be a last resort: you should write tests instead, or think more about what you write. That would speed up the debugging process considerably, as you would probably not need to debug anymore, or only very seldom.
Write tests which run the interesting function with various inputs. Then you can debug the function without having to worry about the rest of the code.
Have you tried UndoDB: http://undo-software.com/
It allows you to step back and forth: reversible debugging. While GDB has its own reversible debugging these days, running in that mode causes a massive slowdown -- 20,000x or worse. UndoDB runs with about a 1.7x slowdown, so you can quickly get to the interesting part and then step back and forth to home in on your bug.
(Disclosure: I work for Undo Software)
Under GNU/Linux, you can also try GDB's checkpoints:
checkpoint
...
restart n
provided your program is not multithreaded (checkpoint uses fork() internally, so the same limitations apply).
It should save you the 10 minutes you need to reach the beginning of your debugging!