TRACE32 CMM script

I would like to run a T32 CMM script that accesses memory and performs a read/write test. I tried the code below.
&srcadd=0x50000000
&destadd=0x50700000
REPEAT 100.
//WHILE (&srcadd<&destadd)
(
D.S ND:0x10:&srcadd %LE %Long 0xdeadface
&srcadd=&srcadd+0x4
)
This code worked the first time, but on the second run I see different behavior: some of the locations are not written, and I get random failures when I run the script on an x86 core.
Can you please provide any insights on this?
Also, can you tell me how to store the data read back from memory so I can compare whether it was written properly? I am validating internal memory with a read/write test. It would be great if you have a sample T32 script for this.
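As a starting point, here is a rough, untested sketch of a write-then-verify loop in PRACTICE. The address range, the D: access class and the pattern are assumptions taken from the snippet above, so adjust them for your target; for large ranges this word-by-word loop is slow, and the built-in Data.Test command (if your TRACE32 version provides it) is much faster.
; sketch: fill a range with a pattern, then read it back and count mismatches
&addr=0x50000000
&end=0x50700000
&pattern=0xdeadface
&errors=0
WHILE &addr<&end
(
  Data.Set D:&addr %LE %Long &pattern    ; write the test pattern
  &addr=&addr+0x4
)
&addr=0x50000000
WHILE &addr<&end
(
  &readback=Data.Long(D:&addr)           ; read the value back through the debugger
  IF &readback!=&pattern
  (
    PRINT "mismatch at " &addr " read " &readback
    &errors=&errors+1
  )
  &addr=&addr+0x4
)
PRINT "memory test done, mismatches: " &errors
ENDDO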

Related

Can I use temporary breakpoints to implement line stepping of a remote target

I'm working on an implementation of a gdb server. The server talks to gdb using the RSP protocol and drives a CPU model. While developing the line-step function (with range-stepping mode enabled), I found that my CPU model needs a noticeable amount of time to finish just one instruction step. The reason is that the CPU model runs in a separate process, so I have to pass each packet (instruction step) through IPC. With thousands of instructions in the source line, this takes far too long compared with simply running that line.
Can I ask gdb to use temporary breakpoints (set on any instruction that could leave the range corresponding to the source line) to assist with the step functionality? Does gdb really know where to set the required breakpoints? If not, is there a good way to deal with this problem?
Thank you for your time!
Ted CH
You say that you are using range-stepping mode, which implies your gdb server already supports the vCont packet and the r action. See the GDB Remote Serial Protocol documentation for all the details.
The r action gives a range start and end, and you are free to step until the program counter leaves the specified range. For example, a request such as vCont;r4000,4010 asks the stub to keep stepping for as long as the program counter stays inside [0x4000, 0x4010).
Your server is free to implement this as you like, so you could place a temporary breakpoint, then resume execution until the breakpoint is hit. But you need to think about both control flow and traps, either of which might cause a thread to leave the range without passing through the end point.
If passing each single step through to your simulator is slow, could you not just pass the vCont range information directly through to the simulator, and have the simulator stop as soon as the program counter leaves the range? Surely this would be quicker than having multiple round trips between simulator and server.

Benchmarking Code Runtime with Trace32

I have an embedded system with code that I'd like to benchmark. In this case, there's a single line I want to know the time spent on (it's the creation of a new object that kicks off the rest of our application).
I'm able to open Trace->Chart->Symbols and see the time taken for the region selected with my cursor, but this is cumbersome and not as accurate as I'd like. I've also found Perf->Function Runtime, but I'm benchmarking the assignment of a new object, not of any particular function call (new is called in multiple places, not just the line of interest).
Is there a way to view the real-world time taken on a line of code with Trace32? Going further than a single line: would there be a way to easily benchmark the time between two breakpoints?
The solution by codehearts (below), which uses the RunTime commands, is just fine if you don't have a real-time trace. It works with any Lauterbach tool and any target CPU.
However, if you have a real-time trace (e.g. a CPU with ETM and Lauterbach PowerTrace hardware), I recommend using the command Trace.STATistic.AddressDURation <start-addr> <end-addr> instead. This command opens a window which shows the average time between the two addresses. You get the best results if you execute the code between the two addresses several times.
If you are using an ARM Cortex CPU which supports cycle-accurate timing information (usually all Cortex-A, Cortex-R and Cortex-M7 cores), you can improve the accuracy of the result dramatically by using the setting ETM.TImeMode.CycleAccurate (together with ETM.CLOCK <core-frequency>).
If you are using a Lauterbach CombiProbe or uTrace (and you can't use ETM.TImeMode.CycleAccurate), I recommend the setting Trace.PortFilter.ON. (By default the port filter is set to PACK, which allows recording more data and program flow, but with slightly worse timing accuracy.)
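As a rough, untested sketch of how these commands fit together in a script (the core clock, wait time and address range are placeholders for your own setup):
; sketch: cycle-accurate timing between two addresses using an off-chip trace
ETM.CLOCK 800.MHz                 ; assumed core frequency -- use your own
ETM.TImeMode.CycleAccurate        ; only on cores that support cycle-accurate trace
Go                                ; run the target so the trace records the code of interest
WAIT 10.s
Break
; average time between the first and last address of the line of interest
Trace.STATistic.AddressDURation 0x08001000 0x08001010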
Opening the Misc->Runtime window shows you the total time taken since "laststart." By setting a breakpoint on the first line of your code block and another after the last line, you can see the time taken from the first breakpoint to the second under the "actual" column.

Linux process allocated memory usage

How can I measure the memory consumed by a process? The process quits really quickly, so utilities like top are useless. I tried massif from Valgrind, but it only measures memory allocated via malloc/new plus the stack, not static variables for example. --pages-as-heap doesn't help either, because it also shows mapped memory.
Something that might work for you is a script that repeatedly runs 'ps' immediately after your program starts. I've written up the following script that should work for your purposes; just replace the variables at the top with your specific details. It currently runs 'netstat' in the background (note the & symbol) and samples the memory 10 times with 0.1-second intervals between samples, writing the results to a file as it goes. I've run this on Cygwin and it works (minus the -o rss,vsz parameters); I don't have access to a Linux machine at the moment, but it should be simple to adapt if for some reason it doesn't work immediately.
#! /bin/bash
saveFileName=saveFile.txt        # file the memory samples are appended to
userName=jacob                   # user the target process runs as
programName=netstat              # program to launch and sample
numberOfSamples="10"
delayBetweenSamples="0.1"
i="0"
$programName &                   # start the program in the background
while [ $i -lt $numberOfSamples ]
do
    # append RSS and VSZ (in KiB) for the program's processes to the save file;
    # the command name column is included so grep has something to match on
    ps -u $userName -o rss,vsz,comm | grep $programName >> $saveFileName
    i=$((i+1))
    sleep $delayBetweenSamples
done
If your program completes so quickly that the delay between starting it and running ps in the script is too long, you might consider launching your program with a delay and using a very high sample frequency to try to catch it. You can do that by using 'sleep' and two ampersands, like sleep 2 && netstat. That will wait 2 seconds and then run netstat.
If none of this sounds good to you, perhaps try running your program within a debugger. I believe gdb has some memory tracking options you could look into.

Memory profiling for a daemon process

I have a daemon process on which I want to perform memory profiling. I chose Valgrind and ran the process under the massif tool, but since the process never dies, massif never writes the output file. Even if I send a TERM signal to the process, I don't receive any output from massif.
So I then tried installing the Valgrind plugin in my Eclipse and ran the profiler on an already built binary of my daemon process, but when I start the profiler it reports two kinds of errors:
it fails saying it is not able to load a library, and I didn't find any way to set the library path in the profile configuration;
it fails with bad permissions to read a memory address.
So I am not even able to run the profiler in Eclipse.
I tried gdb and getting the memory info from it, but that only gives what /proc/<pid>/maps would give, so it's of no use.
Finally here is my use case:
I have a daemon process that never quits and I want to perform memory profiling on it.
I want to get snapshots of the number of memory allocations made, the maximum memory allocated, which instruction makes the most allocations, and so on.
It would be even better if I could get a visual interface for the memory profiling, so that I can share it with my manager.
So please suggest whether there is such a profiler, with any pointers to where to get the documentation, etc.
Thanks in Advance!
Vinay.
When running your program under Valgrind, various commands (depending on the tool) can be executed from the shell using vgdb in standalone mode.
When running with --tool=massif, you can take on-demand snapshots while your program is running.
See http://www.valgrind.org/docs/manual/manual-core-adv.html#manual-core-adv.valgrind-monitor-commands for more information.
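A rough sketch of how that could look from the shell (the daemon name and file paths are placeholders):
# start the daemon under massif; Valgrind's embedded gdbserver is enabled by default
valgrind --tool=massif ./mydaemon &
pid=$!

# later, while the daemon is still running, request an on-demand snapshot
# from another shell using vgdb in standalone mode
vgdb --pid=$pid detailed_snapshot /tmp/massif.snapshot

# the snapshot file can then be inspected with ms_print
ms_print /tmp/massif.snapshot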

How to use libumem to find heap corruption, without relying on a 'core' file?

I want to know how to use libumem on Solaris. If I follow http://www.unix.com/man-page/OpenSolaris/3malloc/umem_debug/ and start the process with all the options, how will I get the output?
Can I get a text file of the results?
I have used wdb on HP-UX for the same purpose. It generates a text file after the program exits, which I can analyze later. Can I do the same with libumem?
Note: this is remote debugging; I will not have access to the system until afterwards.
You can create a core file of the process before it exits and examine the core with mdb later. One way to generate that core file at the right moment could be a dtrace script that triggers gcore just when exit is called.
I think libumem will generate a core when things go wrong. You can analyze that core using mdb; commands like ::umem_status and ::umem_verify will help you find the corruption.
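A rough sketch of the whole flow from the shell (the program name, library name and output file are placeholders, and details may differ on your Solaris release):
# run the target with libumem preloaded and debugging/logging enabled
LD_PRELOAD=libumem.so.1 UMEM_DEBUG=default UMEM_LOGGING=transaction ./mydaemon &
pid=$!

# before the process exits (or from a dtrace/exit hook), capture a core image
gcore $pid                        # writes core.<pid>

# examine the core offline with mdb and the libumem dcmds,
# redirecting the output to a text file you can analyze later
mdb ./mydaemon core.$pid <<'EOF' > umem_report.txt
::umem_status
::umem_verify
::findleaks
EOF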