I have a C++ function that I want to profile, and only that function. One possible way is to use chrono: measure the time the function takes to run and print it out, run the program a few times, and then do stats on the samples.
I am wondering if I can skip having to explicitly code time measurements and just ask perf to focus on the time spent in a specified function.
Have a look at Google's benchmarking library to micro-benchmark the function of interest.
You can then profile the resulting executable as usual using perf.
For example, let's say that, following the basic usage, you generated an executable named mybenchmark. Then you can run perf on the binary as usual:
$ perf stat ./mybenchmark
You can build a flame graph of the whole application in SVG format. With a flame graph you can quickly see which functions take most of the CPU time. The SVG flame graph is interactive: you can click any function and see a detailed flame graph for just that function. From the description of flame graphs:
It is also interactive: mouse over the SVGs to reveal details, and click to zoom.
You can try it in action for sample bash flame graph:
http://www.brendangregg.com/FlameGraphs/cpu-bash-flamegraph.svg
I hope this makes sense, I am not sure how exactly I should word this...
Hi, I am trying to write a program that can monitor the audio output of certain processes, and I am having a hell of a time finding a way to actually do this. I have only found ways to get the current volume level, not the actual output level. I've been searching through Stack Overflow, but everything I've found only fetches the program's volume control level (like the slider in the Windows Volume Mixer), whereas I am basically looking to get the value of the fluctuating green bar in the mixer.
I basically want to check the output level of a program every x milliseconds and if it is above a certain threshold, run a method to do something. How can I do this?
Thanks!
Quick edit to be clear: Win 7+ with C++
You probably want IAudioPeakMeter
This sample app here looks promising.
I am interested in testing the speed of some function calls in code written in C/C++. I searched and was directed to the Valgrind platform with the Callgrind tool.
I have briefly read the manual, but I am still wondering how I can use the tool to measure my functions' runtime.
I was wondering if I could get some pointers on how to achieve my goal.
Any help would be appreciated.
Compile your program with debug symbols (e.g. GDB symbols work fine; they are enabled with the "-ggdb" flag).
If you are executing your program like this:
./program
Then run it with Valgrind+Callgrind with this command:
valgrind --tool=callgrind ./program
Callgrind will then produce a file called callgrind.out.1234 (1234 is the process ID and will probably be different when you run). Open this file with:
callgrind_annotate callgrind.out.1234
You may want to use grep to extract your function name. The left column shows the number of instructions executed in each function. Note that functions with a comparatively low instruction count are omitted from the listing.
If you want to see the output with some nice graphics, I recommend installing KCachegrind.
Is it possible to get back traces with the profiling output from Callgrind?
If it is, would you be able to explain how that's done?
[update] It could be my terminology. What is the backtrace/call stack called, and where does it live, when using KCachegrind to view Callgrind profiling results?
When you launch KCachegrind for the first time, you have three areas:
On the left there is a dock widget entitled "Flat profile": the list of functions sorted by their percentage of the application's cost, including all sub-calls (that's why main usually costs almost 100%).
Then in the bottom-right area there is another dock widget with a "Call Graph" tab; that is where you get the tree of all the calls, and maybe what you are looking for ;)
But if you want a backtrace at a specific point, with more information about the context, I advise you to use gdb with a breakpoint there, and continue execution until you reach the context you want.
Profiling is mainly used to locate which functions cost the most in your application, and then see if you can optimize them.
I'm looking for advice for a personal project.
I'm attempting to create software for making customized voice commands. The goal is to allow the user (me) to record some audio data (2-3 seconds) to define commands/macros. Then, when the user speaks (recording the same audio data), the command/macro will be executed.
The software must be able to detect a command in less than 1 second of processing time in a low-cost computer (RaspberryPi, for example).
I have already searched in two directions:
- Speech recognition (CMU Sphinx, Julius, Simon): there are good open-source solutions, but they often need large database files, and speech recognition is not really what I'm attempting to do. It could also consume too much power for such a small feature.
- Audio fingerprinting (Chromaprint -> http://acoustid.org/chromaprint): this seems to be almost what I'm looking for. The principle is to create a fingerprint from raw audio data, then compare fingerprints to determine whether they match. However, this kind of software/library seems to be designed for song identification (like the famous smartphone apps): I've been trying to configure a good "comparator", but I think I'm going down a bad path.
Do you know of any dedicated software or piece of code doing something similar?
Any suggestion would be appreciated.
I had a more or less similar project in which I intended to send voice commands to a robot. A speech recognition software is too complicated for such a task. I used FFT implementation in C++ to extract Fourier components of the sampled voice, and then I created a histogram of major frequencies (frequencies at which the target voice command has the highest amplitudes). I tried two approaches:
Comparing the similarities between histogram of the given voice command with those saved in the memory to identify the most probable command.
Using a Support Vector Machine (SVM) to train a classifier to distinguish voice commands. I used LibSVM and the results were considerably better than with the first approach. However, one problem with the SVM method is that you need a rather large data set for training. Another problem is that, when an unknown voice is given, the classifier will output a command anyway (which is obviously a wrong detection). This can be avoided with the first approach, where I had a threshold on the similarity measure.
I hope this helps you to implement your own voice activated software.
Song fingerprinting is not a good fit for this task because command timings can vary, and fingerprinting expects an exact time match. However, it is quite easy to implement matching with the DTW algorithm on time series of features extracted with Sphinxbase, the CMU Sphinx support library. See the Wikipedia entry on DTW for details.
http://en.wikipedia.org/wiki/Dynamic_time_warping
http://cmusphinx.sourceforge.net/wiki/download
I have an application that I want to profile wrt how much time is spent in various activities. Since this application is I/O intensive, I want to get a report that will summarize how much time is spent in every library/system call (wall time).
I've tried oprofile, but it seems to give time in terms of unhalted CPU cycles (that's CPU time, not real time).
I've tried strace -T, which gives wall time, but the data generated is huge and getting a summary report is difficult (do awk/py scripts exist for this?).
Now I'm looking at SystemTap, but I can't find any script that is close enough to modify, and the on-site tutorial didn't help much either. I am not sure whether what I am looking for can even be done.
I need someone to point me in the right direction.
Thanks a lot!
Judging from this commit, the recently released strace 4.9 supports this with:
strace -w -c
They call it "syscall latency" (and it's hard to see from the manpage alone that's what -w does).
Are you doing this just out of measurement curiosity, or because you want to find time-drains that you can fix to make it run faster?
If your goal is to make it run as fast as possible, then try random-pausing.
It doesn't measure anything, except very roughly.
It may be counter-intuitive, but what it does is pinpoint the code that will result in the greatest speed-up.
See the fntimes.stp systemtap sample script. https://sourceware.org/systemtap/examples/index.html#profiling/fntimes.stp
The fntimes.stp script monitors the execution time history of a given function family (assumed non-recursive). Each time (beyond a warmup interval) is then compared to the historical maximum. If it exceeds a certain threshold (250%), a message is printed.
# stap fntimes.stp 'kernel.function("sys_*")'
or
# stap fntimes.stp 'process("/path/to/your/binary").function("*")'
The last line of that .stp script demonstrates how to track the time consumed in a given family of functions:
probe $1.return { elapsed = gettimeofday_us()-#entry(gettimeofday_us()) }