Is there a way to capture the MMU context during a breakpoint (and eventually obtain a fancy picture to understand it) using the QEMU console and/or GDB connected to a QEMU instance?
We have an unmanaged C++ application (MFC framework, Windows CE) that closes on us at seemingly random moments.
There is no error message and no C++ exception, it just vanishes.
I presume something bad must have happened and the C run-time or OS decided to kill the program. But I'm not sure where to continue searching.
Question: is it possible to see somewhere in Windows CE why the application terminated in the first place?
Does Windows CE collect basic crash information? Perhaps then one can at least see if it was an Access Violation, an Out of Memory situation, kernel/driver panic or perhaps some other kind of internal or external event that forced the application to close?
On a regular x86 PC one would break out the debugger, application verifier, Windows Error Reporting tools, WinDbg etc. But how to (start) analyze a Windows CE application crash?
What I've tried so far:
Creating global C++ exception handlers at tactical locations in the application. These build and transmit a simple UDP packet containing minimal exception info, which is then viewed on another machine running Wireshark.
Adding the SEH exception compiler switch (/EHa), to be able to catch even non-C++ exceptions such as Access Violations.
Connecting the Visual Studio 2008 debugger via TCP/IP to the smart device (MSVS reports it connects to the smart device successfully, but the debugger doesn't see any remote processes; the Attach to Process window gives the following error: Unable to Connect to '').
Retargeting the application so it runs on a regular x86 PC (but there it runs fine, so no "luxury" of debugging the issue that way either).
I've tested the exception handler by forcing an Access Violation.
The expected UDP message then arrives perfectly at the machine running Wireshark. But when the real issue occurs, everything stays completely silent.
The platform: MS Windows Embedded Compact 7.02 running on a Texas Instruments processor (ARM Cortex-A8).
The application itself implements a rudimentary VNC viewer. It uses sockets and relies on a third party binary called zlib CE (ZLIBCE.DLL) for decompressing VNC data.
It hasn't been verified whether the zlib binary was built with the exact same compiler (and/or compiler settings).
I settled for a poor man's debugging solution: sending application state values to a memory-mapped file. The idea is that a specially crafted helper program runs alongside the main application. The main program writes state values to the memory-mapped file; the helper opens it and displays the values it reads. If anything fatal happens to the main app, the helper still has the latest state info. Since it is shared memory, it doesn't impact performance much. This way I found the section of the program where the fault occurs.
I'm new to CUDA, but I have spent some time on GPU computing; I have GeForces at home and a Tesla (of the same generation) at the office.
At home I have two GPUs installed in the same computer: one is a GK110 (compute capability 3.5), the other a GF110 (compute capability 2.0). I prefer to use the GK110 for computation tasks ONLY and the GF110 for display, unless I tell it to do computation. Is there a way to do this through a driver setting, or do I still need to rewrite some of my code?
Also, if I understand correctly: if no display is connected to the GK110, the annoying Windows timeout detection will not try to reset it even if the computation runs very long?
Btw, my CUDA code is compiled for both compute_35 and compute_20, so it can run on both GPUs. However, I plan to use features exclusive to the GK110, so in the future the code may not be able to run on the GF110 at all. The OS is Windows 7.
With a GeForce GTX Titan (or any GeForce product) on Windows, I don't believe there is a way to prevent the GPU from appearing in the system in WDDM mode, which means that Windows will build a display driver stack on it, even if the card has no physical display attached. So you may be stuck with the Windows TDR mechanism. You could try experimenting with it to confirm that. (The Windows TDR behavior can be modified via registry hacking.)
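For reference, the registry hacking mentioned above centers on the documented TDR values under the GraphicsDrivers key. A sketch (the values shown are examples, and changing them affects GPU recovery system-wide, so use with care):

```reg
[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\GraphicsDrivers]
; Seconds the GPU may stay unresponsive before a timeout is declared
; (default is 2); dword:0000003c = 60 seconds.
"TdrDelay"=dword:0000003c
; 0 disables timeout detection and recovery entirely.
"TdrLevel"=dword:00000000
```

A reboot is required for these values to take effect.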
Regarding steering CUDA tasks to the GTX Titan, the display driver control panel should have a selectable setting for this. It may be in the "Manage 3D settings" area or some other area depending on which driver you have. When you find the appropriate settings area, there will be a selection entitled something like CUDA - GPUs which will probably be set to "All". If you change the "Global Presets" selection to "Base Profile" you should be able to change this CUDA-GPUs setting. Clicking on it should give you a selection of "All" or a set of checkboxes for each GPU detected. If you uncheck the GF110 device and check the GK110 device, then CUDA programs that do not select a particular GPU via cudaSetDevice() should be steered to the GK110 device based on this checkbox selection. You may want to experiment with this as well to confirm.
Other than that, as mentioned in the comments, using a programmatic method, you can always query device properties and then select the device that reports itself as a cc3.5 device.
If we assume we have only the binary, we can use WinDbg to drop into assembly and see what's going on. Since Windows guests run in fully emulated mode, it should be straightforward to trace in the guest.
If we want to trace what is happening in the virtualization layer, i.e. the hypervisor, it will be a bit more difficult. It depends on what kind of machine we are running on. These days most machines are 64-bit with VMX enabled, which allows the hypervisor to intercept guest instructions on the fly, since processor virtualization is implemented in hardware.
Since entry to the hypervisor is just a trap, it is almost impossible to tell from inside the guest when it has entered the hypervisor and when it's back. However, we probably would not achieve much by tracing code in the hypervisor anyway.
On a VMX-enabled machine, only page table writes and IOPL changes go to the hypervisor. Everything else is handled in the guest itself.
For all practical application debugging, WinDbg should be fine.
Can we trace a running process (.exe) and its instructions in the guest OS using WinDbg?
Any help is appreciated. Thank you.
I'm not entirely sure what you're asking, but if you're asking whether you can run WinDbg on a virtualized machine, then yes, it works just like it does on a physical machine.
If you want to attach to the process that is running the virtual machine itself, and from there look at a process that is inside the guest OS, then the answer is no.
I'm working on debugging with GDB. I wanted to know how GDB works internally to set a breakpoint on an embedded processor through JTAG.
It either programs a hardware breakpoint register or places a software breakpoint by replacing the instruction at the breakpoint with an instruction that will cause an exception.
It is different for every processor; you have to look up the JTAG debugger details for the specific processor, not just the processor family. The datasheets/user's guides, if available, are normally published by the chip vendor or core vendor depending on the product.
The few times I've used GDB in conjunction with a JTAG unit, GDB communicated over the JTAG by using a gdbserver program that handled the details of the JTAG. For example, using an OpenOCD JTAG unit: http://openocd.sourceforge.net/doc/html/GDB-and-OpenOCD.html
This essentially means that GDB doesn't know much of anything about the JTAG unit - it relies on the gdbserver interface and that server then does whatever it needs to behind the scenes to do what GDB requests.
I'm developing an application that runs on a small Linux-based SBC (~32MB RAM). Sadly, my app recently became too large to run under GDB anymore. Does anyone know of any good, lightweight debugging methods that I can use in embedded Linux? Even being able to view a thread's stack trace would be extremely helpful.
I should mention that this application is written in C++ and runs multiple threads, so gdbserver is a no-go as it doesn't work with multithreaded apps.
Thanks in advance,
Maha
gdbserver definitely works with multi-threaded applications, I'm working on an embedded project right now with >25 threads and we use gdbserver all the time.
info threads
lists all the threads in the system
thread <thread number from info threads>
switches to that thread of execution.
thread apply XXX <command>
Runs <command> on the thread designated by XXX, which can also be 'all'. So if you want the backtrace from all running threads, do
thread apply all bt
Once you're in the execution flow of a given thread, all your typical commands work as they would in a single-threaded process.
I've heard of people doing hacks like running the application in an emulator like QEMU and then running GDB (or things like valgrind) on that. It sounds painful, but if it works....
Would you get anywhere with libunwind (to get stack traces) and printf-style logging?
Serial port printing is the most lightweight method I can think of. The output is easily seen on a host PC, and the code inside your app stays simple and light. If you do not have a serial port: we once used a GPIO port and bit-banged a simulated serial port over it. It worked perfectly well, but was a bit slow.
Is there a reason why you have built your own debugger? I am developing a Linux system using an ARM processor (AT91SAM926x) and we are using both compiler and debugger from CodeSourcery. I do not think that they have released a version with GDB 7 yet but I am debugging multithreaded C++ applications using the gdbserver tool without any problems.
Gdbserver does indeed work with multithreaded applications. However, you do need to build a cross-target GDB for your host so that it can talk to gdbserver on your target.
See this article for a detailed description of how to do it:
Remote cross-target debugging with GDB and GDBserver