TRACE32 "access timeout, target running" error in the device

I get this error whenever I try to run the device to debug my C code. What does it mean? Does it indicate a stack or heap memory error? Initially it ran fine, but when I tried to run it again I got this error over and over.

"access timeout, target running" means usually, that you can't access memory, because your CPU (aka. "the target") is running.
To avoid that, either break target program execution or enable run-time memory access.
By default TRACE32 does not access memory while the CPU is running, because accessing memory from the debugger usually has some influence on the execution performance of the CPU. (Consider that any memory usually has only a single interface, which means that if the debugger and the CPU want to access it at the same time, one of them has to stall until the other has finished its access.) This influence can be very small and might not cause any problems, but to be on the safe side, run-time memory access is blocked by default.
To enable run-time memory access, use the command SYStem.MemAccess CPU (with ARM Cortex CPUs it is SYStem.MemAccess DAP instead) and open the memory dump window with the address access class E:. E.g.:
Data.dump E:0x1000
Data.dump E:myvariable
Var.AddWatch %E myvariable
With some CPUs (e.g. Cortex-M), TRACE32 offers the option SYStem.Option.DUALPORT ON, which causes all memory windows to be opened with the address access class E: automatically.

TRACE32 can sometimes report this error when the path from which you are fetching the executable to flash contains a space, because a folder name has a space in it.
E.g.: D:\Embedded training
Replace the spaces with underscores, as in D:\Embedded_training.

Related

Buffer Overflow into a different exe's memory? Or onto csrss.exe from a remote desktop prog?

Short, Question Form:
I did some googling but wasn't able to find the answer to this: is it possible for a buffer overflow to spill into another exe's memory? And/or, is it possible to overflow into csrss.exe's memory from an exe running in a remote desktop session?
Longer Story - Here's Our Situation:
We've got a server with an always-running remote desktop session that has a 24/7 program running - a C++ .exe. To make things worse, the C++ exe was programmed using all sorts of unsafe memory operations (raw strcpy, sprintf, etc.). You don't need to tell me how bad this is structurally - I completely agree.
Recently, our server has been getting Blue Screens of Death, and the dump file indicates that csrss.exe is being terminated by our C++ exe (which will cause a BSOD; csrss.exe is also responsible for managing remote desktop sessions).
So I wanted to know if anyone knew whether it was possible for one app to do a memory buffer overflow that overflowed onto another app's memory space, or whether it'd be possible for an app on a remote desktop session to do so onto csrss.exe?
Any help would be greatly appreciated!
Short answer: no, it is not.
Simplified explanation of why: each program runs in its own virtual address space. This virtual address space is controlled by the page table, which is essentially a lookup table mapping virtual addresses (the addresses in the pointers of the executable) onto physical memory addresses. When the OS switches to a task, it hands the correct table to the CPU/core running the task. Any physical address not mentioned in this table will not be accessible from the program. Physical addresses belonging to another application should not appear in this table, so it is impossible to access memory belonging to another application. When a program misbehaves and accesses an invalid memory location, it attempts to use virtual addresses not mentioned in the table. This triggers an exception/fault on the CPU, which is normally reported in Windows as an "Access violation".
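To illustrate the isolation, here is a minimal POSIX sketch (assuming fork() is available; on Windows the mechanism is the same even though the API differs): both processes see the very same virtual address, yet writing through it in one process leaves the other's data untouched.

#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>

int value = 42;  // ends up at the same *virtual* address in parent and child

int main() {
    if (fork() == 0) {       // child: gets its own copy of the address space
        value = 1000;
        std::printf("child:  &value=%p value=%d\n", (void*)&value, value);
        return 0;
    }
    wait(nullptr);           // let the child finish first
    // The same pointer value is printed, but the parent's data is untouched:
    // the page table maps that virtual address to different physical memory.
    std::printf("parent: &value=%p value=%d\n", (void*)&value, value);
}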
Of course the OS and the CPU can contain bugs, so it is impossible to guarantee that this never happens. But if your C++ program misbehaves, most of the time this would still be caught by the CPU and reported as an access violation, not result in a BSOD. If you do not see your C++ program generating access violations, I would expect it to be much more likely that the problem is caused by faulty memory or a buggy driver (drivers run at a higher privilege level and can do things normal programs can't).
I would say start by doing an extensive memory test with a program like MemTest86. BTW, if the server is a "real" server with ECC memory, faulty memory shouldn't be the problem, as that would have been reported by the system.
Update
It doesn't matter how the invalid memory access happens: underflow, overflow, or an uninitialized pointer. The virtual address used is either mapped to a physical memory location reserved for the program or it is not mapped at all. BTW, the checking is done by the CPU; the OS only maintains the tables used to do the lookups.
However, this doesn't mean every error by the program will be detected, because as long as it is accessing addresses for which it was assigned memory, the access is OK as far as the CPU is concerned. The heap manager in your program might think otherwise, but it has no way of detecting this. So even a buffer overflow past the end of an allocation doesn't always cause an access violation, because memory is assigned to the program in pages of at least 4 kB, and the heap manager subdivides those pages into the smaller chunks the program asks for. Your small 10-byte buffer can be at the start of such a page, and writing a thousand bytes to it will be perfectly fine as far as the CPU is concerned, because all of that memory was set up for use by the program. However, when your 10-byte buffer is at the end of the page and the next page is not assigned to a physical memory location, an access violation will occur.
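A minimal Linux sketch of that page-granularity effect (assuming mmap() and a 4 kB page size): an overflow that stays inside a mapped page is invisible to the CPU, while touching the first byte past the mapping raises the fault.

#include <sys/mman.h>
#include <cstddef>
#include <cstdio>

int main() {
    const size_t page = 4096;
    // Map exactly one page; the page right after it stays unmapped.
    char* buf = static_cast<char*>(mmap(nullptr, page,
                                        PROT_READ | PROT_WRITE,
                                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0));
    if (buf == MAP_FAILED) return 1;

    // Pretend buf is a "10-byte buffer": this overflow stays inside the
    // mapped page, so the CPU raises no fault - silent corruption.
    buf[1000] = 'x';
    std::puts("overflow inside the page went unnoticed");

    // First byte of the unmapped page: access violation / segfault.
    buf[page] = 'x';
    std::puts("never reached");
}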

Reading a crash report for the first time

We are using the Steam crash reporting service, and we have an error that reads:
Win32 StructuredException at 00C6A290 : Attempt to read from virtual address 5467 without appropriate access rights.
I think I understand the second part (the program read memory from an area that it should not have), but I don't understand the first part. Is 00C6A290 the same each time the program is executed (and can I backtrace it somehow), or is it assigned at runtime?
It looks like 00C6A290 is an address in memory (within the executable of your program or some code called from it). To my understanding, this address is the address of the instruction which caused the exception. In general it may be different each time you run your program, since the executable can be loaded into different memory regions by the OS (e.g. due to address space layout randomization).
Run your program in a debugger to see the backtrace. Do you have the source code?
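For intuition, here is a minimal sketch of where the two numbers in such a report come from, using the POSIX signal API (Linux/x86-64 assumed for reading the instruction pointer; on Windows the structured-exception record carries the same two addresses). Build with g++, which defines _GNU_SOURCE (needed for REG_RIP).

#include <csignal>
#include <cstdio>
#include <cstdlib>
#include <ucontext.h>

void handler(int, siginfo_t* info, void* uctx) {
    // si_addr is the inaccessible *data* address (the "5467" part).
    std::printf("fault accessing address %p\n", info->si_addr);
#ifdef __x86_64__
    // The *instruction* address (the "00C6A290" part), platform-specific.
    auto* uc = static_cast<ucontext_t*>(uctx);
    std::printf("faulting instruction at %p\n",
                (void*)uc->uc_mcontext.gregs[REG_RIP]);
#endif
    std::_Exit(1);
}

int main() {
    struct sigaction sa {};
    sa.sa_sigaction = handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, nullptr);
    volatile int x = *(volatile int*)0x5467;  // mimic the reported bad read
    (void)x;
}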

read memory outside a program without segmentation faults

Is it possible to read memory addresses (real, not virtual) without causing a segmentation fault? I wish to read all live, used memory addresses and log the findings.
It depends on the OS you are using.
It should be possible, but you will need to write a kernel driver to interface between the OS and the hardware, and this code will have to run in kernel mode (assuming Windows, since user-mode processes cannot directly access physical memory).
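On Linux, such a driver already exists: /dev/mem exposes physical memory to root (subject to CONFIG_STRICT_DEVMEM restrictions on most modern kernels). A minimal sketch, with a hypothetical physical address:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const off_t phys = 0x1000;  // hypothetical physical address to inspect
    int fd = open("/dev/mem", O_RDONLY);
    if (fd < 0) { std::perror("open /dev/mem"); return 1; }
    // Map one page of physical memory into our virtual address space.
    void* p = mmap(nullptr, 4096, PROT_READ, MAP_SHARED, fd, phys);
    if (p == MAP_FAILED) { std::perror("mmap"); return 1; }
    std::printf("byte at physical 0x%lx: 0x%02x\n",
                (unsigned long)phys, *(unsigned char*)p);
    munmap(p, 4096);
    close(fd);
}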

Program gets aborted before new throws bad_alloc

Below is a small C++ program that apparently gets aborted in a number of cases before "new" throws an exception:
int main() {
    try {
        while (true)
            new char[2];
    }
    catch (...) {
        while (true);
    }
}
The program was first compiled with MinGW/g++ 4.6.1 and then executed on a 32-bit Windows 7 system via the shell. No serious other programs (in terms of memory/CPU consumption) were running at the time. The program terminated before entering the catch-block. When compiling and running the program under Linux (Debian 7.3, gcc/c++ 4.7.2, 24GB memory) the program behaved similarly. (The reason for the infinite loop in the catch-block is to avoid anything there that might throw exceptions - particularly I/O.)
Something surprising (to me at least) happened when launching the program twice on the Windows system: If the program was launched in two different shells (almost) simultaneously, then neither of the two processes terminated before a new-exception was thrown.
Also unexpected to me was the observation that only a moderate enlargement of the size of the allocated chunks of memory (by replacing "2" in the fourth line with "9") made the premature termination disappear on the Windows system. On the Linux machine a much more drastic enlargement was needed to avoid the termination: approx. 40,000,000 bytes per block were necessary to prevent termination.
What am I missing here? Is this normal/intended behavior of the involved operating systems? And if so, doesn't this undermine the usefulness of exceptions - at least in the case of dynamic allocation failure? Can the OS settings be modified somehow (by the user) to prevent such premature terminations? And finally, regarding "serious" applications: At what point (w.r. to dynamic memory allocation) do I have to fear my application getting abruptly aborted by the OS?
Is this normal/intended behavior of the involved operating systems?
Yes, it's known as "overcommit" or "lazy allocation". Linux (and I think Windows, but I never program for that OS) will allocate virtual memory to the process when you request it, but won't try to allocate physical memory until you access it. That's the point where, if there is no available RAM or swap space, the program will fail. Or, in the case of Linux at least, other processes might be randomly killed so you can loot their memory.
Note that, when doing small allocations like this, the process will allocate larger lumps and place them in a heap; so the allocated memory will typically be accessed immediately. A large allocation will be allocated directly from the OS and so your test program won't access that memory - which is why you observed that the program didn't abort when you allocated large blocks.
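A minimal sketch of that behaviour on Linux (the exact outcome depends on the overcommit settings and on how much RAM + swap the machine has, so treat the size as hypothetical; assumes a 64-bit build):

#include <cstdio>
#include <cstring>
#include <new>

int main() {
    const size_t huge = 32ULL * 1024 * 1024 * 1024;  // 32 GiB
    try {
        // With lazy allocation this may "succeed" even on a machine with
        // far less memory: only virtual address space is handed out here.
        char* p = new char[huge];
        std::puts("new succeeded without committing physical memory");
        // WARNING: touching the pages is what makes them real; this is
        // where the process (or another one) can be killed by the OOM
        // killer instead of ever seeing std::bad_alloc.
        std::memset(p, 0, huge);
        delete[] p;
    } catch (const std::bad_alloc&) {
        std::puts("refused up front (e.g. with overcommit_memory = 2)");
    }
}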
And if so, doesn't this undermine the usefulness of exceptions - at least in the case of dynamic allocation failure?
Yes, it does rather.
Can the OS settings be modified somehow (by the user) to prevent such premature terminations?
On Linux, there's a system variable to control the overcommit policy:
echo 2 > /proc/sys/vm/overcommit_memory
The value 2 means never overcommit: allocations will fail if they ask for more than the currently uncommitted RAM plus swap. 1 means always overcommit, never refusing an allocation up front. 0 (the default) means the kernel heuristically guesses whether an allocation request is reasonable.
I've no idea whether Windows is similarly configurable.

getting std::bad_alloc error; how to cross-verify that the OS is really running out of memory

I have a C++ program on Linux which, within 2-3 seconds of running, starts spitting out std::bad_alloc errors on a machine with 32 GB of RAM (and gets restarted by a wrapper/caller). What I really care about is solving this problem, but I would like to go step by step and build up my confidence in my understanding of it.
It looks like the system is not able to allocate memory for a new request (this would happen when the OS has run out of memory). While the program is running, I run the sar command on another terminal with the smallest interval possible (1 second), and I see that kbcached is ~24 GB. Why is the OS not able to release the cache and make that memory available to the new request? Either 1 second is too coarse (compared to how fast the program runs) or I am doing something wrong here.
Basically, I would like to cross-verify and pinpoint that the OS is indeed running out of memory and thus is not able to allocate it, and then take things from that point on. How do I do that?
Ideally, I would like to have the system statistics right at the point when memory allocation fails, like how much is cached, total used memory, etc.
If you actually want to see how your process's memory is allocated, you could set a breakpoint with gdb for when the exception is thrown. When it is hit, inspect the process with a tool like pmap, which can show you additional information about how the process uses memory.
If that's too primitive (and it quickly will be, pmap is pretty primitive), valgrind includes Massif and many other utilities for diagnosing memory usage, CPU utilization, and other runtime problems.
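To get the statistics at the exact moment the allocation fails (as asked above), one option is a new-handler that dumps /proc/self/status and /proc/meminfo before letting std::bad_alloc propagate. A minimal Linux sketch:

#include <cstdio>
#include <new>

void dump(const char* path) {
    if (FILE* f = std::fopen(path, "r")) {
        char line[256];
        while (std::fgets(line, sizeof line, f))
            std::fputs(line, stderr);
        std::fclose(f);
    }
}

void on_alloc_failure() {
    dump("/proc/self/status");      // per-process: VmSize, VmRSS, ...
    dump("/proc/meminfo");          // system-wide: MemFree, Cached, ...
    std::set_new_handler(nullptr);  // so the retry throws std::bad_alloc
}

int main() {
    std::set_new_handler(on_alloc_failure);
    // ... run the real work; the first failed allocation prints the
    // statistics above and then throws std::bad_alloc as usual.
}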