As MachineFunctionPass::runOnMachineFunction runs on a MachineFunction in LLVM, what does MachineFunction mean here? Is the earliest point at which such a pass can run after the function has been generated in IR form?
A MachineFunction exists after instruction selection and "scheduling", i.e. after SelectionDAG.
See: http://llvm.org/docs/CodeGenerator.html#high-level-design-of-the-code-generator ; MachineInstrs are created in step 2, and a MachineFunctionPass can start to run at step 3.
I am interested in using the differential code coverage functionality in ifort. The documentation appears to address this thoroughly, but I have failed to apply it to my reduced example. Here's what I have:
program test
    integer :: userinput
    print *, 'enter 1 or 0'
    read *, userinput
    if (userinput.eq.1) then
        print *, 'You have entered ONE'
    else
        print *, 'You have not entered ONE'
    end if
end program test
A simple program that can take one of two paths: if the user enters 1 it goes into the if ... then branch, and if the user enters 0 it goes into the else branch.
The goal of differential code coverage (as stated by intel docs) is as follows:
compare the profiles from two runs of an application: a reference run
and a new run identifying the code that is covered by the new run but
not covered by the reference run
So if we take a reference run where the user enters 0 and a new run where the user enters 1, differential code coverage should identify that the new run covers the if branch whereas the reference run does not (the reference run goes into the else branch). I followed the docs as closely as possible. The source file is called test.f90. Here are the compile lines I'm using:
ifort test.f90 /Qcov-gen
Which generates PGOPTI.SPI, PGOPTI, test.exe and test.obj. I then run the executable and enter 0; I get the correct message "You have not entered ONE". This causes a .dyn file to be created (due to the /Qcov-gen option). I then do the following:
profmerge
Which generates additional files pgopti.dpi, pgopti.dpi.lock. At this point I think I have enough material to generate my reference data. This I attempt using the following:
codecov -prj Project_Name -dpi pgopti.dpi -ref pgopti.dpi
Which generates HTML files similar to the ones displayed when code coverage is run in Visual Studio for Intel Fortran. I also get 100% code coverage, which seems incorrect. The docs then show this command:
codecov -prj Project_Name -spi pgopti.spi -dpi pgopti.dpi
Which does not appear to provide an opportunity for a new run.
Could someone please explain how to do a simple differential code coverage on this particular example? I'm eventually trying to extrapolate this to a larger project but I'm trying to take baby steps to get there.
I have some code that calculates the price of a stock option using Monte Carlo and returns a discounted price. The final few lines of the relevant method look like this:
if(payoffType == pt.LongCall or payoffType == pt.LongPut):
    discountedPrice = discountedValue
elif(payoffType == pt.ShortCall or payoffType == pt.ShortPut):
    discountedPrice = (-1.0) * discountedValue
else:
    raise Exception
#endif
print "dv:", discountedValue, " px:", discountedPrice
return discountedPrice
At a higher level of the program, I create four pricers, which are passed to instances of a portfolio class that calls the price() method on the pricer it has received.
When I set the breakpoint on the if statement or the print statement, the breakpoints work as expected. When I set the breakpoint on the return statement, the breakpoint is interpreted correctly on the first pass through the pricing code, but then skipped on subsequent passes.
On occasion, if I have set a breakpoint somewhere in the flow of execution between the first pass through the pricing code and the second pass, the breakpoint will be picked up.
I obviously have a workaround, but I'm curious if anyone else has observed this behavior in the PyDev debugger, and if so, does anyone know the root cause?
The issues I know of are:
If a stack overflow occurs anywhere in the code, Python will disable the tracing that the debugger uses.
I know there are some issues with asynchronous code which could make the debugger break.
A workaround is a programmatic breakpoint (i.e. pydevd.settrace; the remote debugger page http://www.pydev.org/manual_adv_remote_debugger.html has more details on it). It resets the tracing even if Python disabled it after a stack overflow, and it will always be hit. (The issue with asynchronous code is that the debugger tries to run with untraced threads, but under some conditions it cannot restore the tracing when dealing with asynchronous code.)
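A minimal sketch of that programmatic-breakpoint workaround (the helper name get_settrace is mine; it assumes only that the pydevd module bundled with PyDev is importable, and degrades to a no-op otherwise):

```python
def get_settrace():
    """Return pydevd.settrace when the PyDev debugger is available, else None.

    Calling the returned function acts like a breakpoint at that line and
    re-installs the tracing that a stack overflow may have disabled.
    """
    try:
        import pydevd  # bundled with PyDev; also available on PyPI as "pydevd"
    except ImportError:
        return None
    return pydevd.settrace

# At the spot where the editor breakpoint is being skipped:
#     settrace = get_settrace()
#     if settrace is not None:
#         settrace()  # suspends execution here under the PyDev debugger
```

Because the call is in the code itself, it does not depend on the editor's breakpoint bookkeeping, which is why it survives the conditions that make normal breakpoints get skipped.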
BACKGROUND
We have testers for our embedded GUI product and when a tester declares "test failed", sometimes it's hard for us developers to reproduce the exact problem because we don't have the exact trace of what happened.
We currently have a logging framework, but we developers have to insert those logging statements into the code manually, which is fine . . . except when a hard-to-reproduce bug occurs, we didn't have a log statement at the 'right' location, and when we re-build and re-run the test with the same steps, we get a different result.
ISSUE
We would like a solution wherein the compiler produces extra instrumentation code that allows us to see the exact sequence of events including, at the minimum:
function enter/exit (already provided by -finstrument-functions)
control-flow statement entry, i.e. entering an if/else branch, or which case label we jumped to
The log would look like this:
int main() entered
if-line 5 entered
else-line 10 entered
void EventLoop() entered
. . .
Some additional nice-to-haves are:
Parameter values on function entry AND exit (for pass-by-reference types)
Function return value
QUESTION
Are there any gcc tools or even paid tools that can do this instrumentation automatically?
You can either use gdb for that, and automate it (I've got a tool for that in the works, you can find it here), or you can try to use gcov.
gcov works by loading the latest coverage data when you start the program. But you can manually dump and load the data: if you call __gcov_flush, it dumps the current counters and resets the current state. However, if you do that several times it will always merge the data, so you would also need to rename the gcov data file each time.
I am new to MapReduce. I want to process a log file that has data in the format below:
EXECUTED: 2016-05-19 07:11:15
.AAAAA
EXECUTED: 2016-05-19 07:11:27
EXECUTED: 2016-05-20 08:11:20
.BBBBB
EXECUTED: 2016-05-20 07:11:27
I need to calculate the execution time of a command, e.g. .AAAAA or .BBBBB.
The first line shows the time execution started and the last line shows the time of completion.
I want to write a MapReduce program to calculate the execution time. How can I preserve the time from the first line and use it later, when the second EXECUTED: line is encountered?
Is there any other way to process it?
Thanks,
Sanjay
When the map method reads the value from the first line, store the required value in a static variable.
When the map method reads the next line, you can compare it against that static variable, perform the necessary calculation and pass the result on to the reducer.
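A sketch of that idea in plain Python (the function name and variable names are illustrative; in a streaming mapper the two variables below would simply persist between input lines):

```python
from datetime import datetime

TIME_FORMAT = "%Y-%m-%d %H:%M:%S"  # matches "2016-05-19 07:11:15"

def execution_seconds(log_lines):
    """Yield (command, seconds) for each start/command/end triple.

    `start` and `cmd` play the role of the state the mapper keeps between
    lines: the first EXECUTED line is remembered as the start time, the
    command name is remembered next, and the second EXECUTED line closes
    the record.
    """
    start = cmd = None
    for line in (raw.strip() for raw in log_lines):
        if line.startswith("EXECUTED:"):
            stamp = datetime.strptime(line.split("EXECUTED:", 1)[1].strip(),
                                      TIME_FORMAT)
            if cmd is None:
                start = stamp                        # opening timestamp
            else:
                yield cmd, (stamp - start).total_seconds()
                cmd = None                           # record closed
        elif line:
            cmd = line                               # e.g. ".AAAAA"
```

Note that this only works if the three lines of a record reach the same mapper in order; with a splittable input you would instead emit (command, timestamp) pairs and let the reducer do the subtraction.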
I've written a C++ program and I would like to time how long it takes to complete. Is there some terminal command I could use?
You can use the "time" command available in most (maybe all) Linux distributions. It will print the time spent in the system, the time spent in user code, and the total time.
For example
bash-4.1$ time (sleep 1; sleep 1)
will output something like
real 0m2.020s
user 0m0.014s
sys 0m0.005s
As you can see with the parenthesis you can launch every command chain you wish.
It's called time in *nix
Iterate over the function several times (1000s, probably) so you get a large enough number. Then use time.h to create two variables of type time_t: one before execution, one after. Subtract the two and divide by the number of iterations.
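The same iterate-and-average technique, sketched in Python with time.perf_counter standing in for the time.h calls (the helper name is mine):

```python
import time

def average_runtime(fn, iterations=1000):
    """Call fn repeatedly and return the mean wall-clock seconds per call."""
    start = time.perf_counter()            # high-resolution timer, "before"
    for _ in range(iterations):
        fn()
    elapsed = time.perf_counter() - start  # "after" minus "before"
    return elapsed / iterations            # divide by the iteration count

# Example: average cost of summing a small range.
per_call = average_runtime(lambda: sum(range(100)))
```

Averaging over many iterations smooths out timer resolution and scheduler noise, which is exactly why thousands of runs are suggested.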
Or Measure-Command in PowerShell.
Let me explain in more detail :)
If you have compiled your code using g++, for example:
g++ -std=c++14 c++/dijkstra_shortest_reach_2.cpp -o dsq
In order to run it, you type:
./dsq
In order to run it with a file content as an input, you type:
./dsq < input07Dijkstra.txt
Now for the answer.
To get the program's running time printed to the screen, just type:
time(./dsq < input07Dijkstra.txt)
Or without an input:
time(./dsq)
For the first command my output is:
real 0m16.082s
user 0m15.968s
sys 0m0.089s
Hope it helps!