TRACE32: Execute a script when the program stops

Is there a way to execute a script every time the program stops?
I need to do something like this:
Var.Set \myvar=list[Id]
where list and Id are variables from the program.
However, when Id changes, \myvar does not follow; it keeps the value corresponding to the old Id.
This is why I want to run a script that executes that command every time the program stops.

To run a script every time the program stops, use the command ON or GLOBALON. ON is suitable while an active PRACTICE script is controlling your test case. GLOBALON is suitable if you want a reaction to an event that stays active until you disable it again.
General syntax is:
ON|GLOBALON <event> <action>
where "event" can be PBREAK for "program-break" and "action" can be "DO <script.cmm>" to execute an individual script on the event.
Putting all together you get:
GLOBALON PBREAK DO syncmyvar.cmm
where script syncmyvar.cmm contains your command Var.Set \myvar=list[Id]
To disable the event handler on a program break use
GLOBALON PBREAK inherit
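For completeness, a minimal syncmyvar.cmm might look like the sketch below (PRACTICE comments start with ; and ENDDO terminates the script):
; syncmyvar.cmm - re-sync \myvar after every program break
Var.Set \myvar=list[Id]
ENDDO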
If I understand your use case correctly, you'd like to execute the command Var.Set \myvar=list[Id] every time Id changes. In that case I would consider using a write breakpoint with a command action, e.g. like this:
Var.Break.Set Id /Write /CMD "Var.Set \myvar=list[Id]" /RESUME
However, this gives quite different behavior: with the breakpoint, your application is stopped briefly on every write access to Id and \myvar stays in sync with Id at all times, whereas with GLOBALON PBREAK DO syncmyvar.cmm the variable \myvar is only updated when the application actually stops.

Related

Is there a way to execute a set of GDB commands when a break point is hit?

I'm trying to achieve a "semihosting"-like feature by registering a set of commands associated with a specific breakpoint, like:
print buffer[0]
cont
So the plan is to have analog values dumped into the GDB client console in real time, thus making the development process easier.
Is it possible for GDB to execute the above example commands when the breakpoint on line 38 (e.g.) is hit? (I will need to run a different set of commands on another breakpoint.)
Each breakpoint you add in gdb has a number. You can see the numbers with i b (short for info breakpoints). Suppose you want to add commands to breakpoint number 2, just type commands 2 and press ENTER. Now, type the commands you want gdb to run when breakpoint 2 is hit (one per line). When you want to finish entering the commands, type end.
Tip: You can add the continue command before end if you want gdb to continue execution instead of stopping, i.e. if you only added the breakpoint to attach commands to it and you don't want execution to stop there. For instance, if you just want to print the value of some variable, or even create another breakpoint, but only if some specific code path is reached first. The possibilities are endless.
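Putting that together for the question above, a session might look like this (main.c and breakpoint number 2 are assumptions for illustration):
(gdb) break main.c:38
(gdb) commands 2
> print buffer[0]
> continue
> end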

Gammu-smsd runonreceive returns 0 but no program output

I've written a C application that grabs some sensor data and puts it into a string. This string gets passed to gammu-smsd-inject for transmission by SMSD. For reference, my application launches gammu-smsd-inject using fork() & wait(). The program waits for gammu-smsd-inject to terminate and then exits itself.
My program works just fine: if I run it manually from a bash prompt it grabs the sensor data, calls gammu-smsd-inject and quits. The sms appears in the database outbox and shortly after I receive an sms on my phone.
I've added the absolute path to my program into the runonreceive directive of SMSD. When I send a text to SMSD, it is received in the inbox and from the log file I can see the daemon running my program. The logfile then states that the process (my program) successfully exited (0), but I never receive any sms and nothing is added to the database's outbox or sentitems tables.
Any idea what might be going on? I haven't posted a code listing as it's quite long, but it is available.
The only thing I could think of is that gammu-smsd-inject is perhaps being terminated (by a parent process somewhere up the tree) BEFORE it gets a chance to do any SQL work. Wouldn't that produce a non-zero exit code, though?
So the problem was which user was running the program. When I ran my application manually from bash, it was launched with my user ID, but when the SMSD daemon ran it, it was launched with a different ID, which was causing issues for some reason. I thought it was a problem with the user ID being used to access the MySQL database, but apparently not. In short, I don't actually know what the root cause was, but by assigning my login's UID to the child process, everything suddenly worked.
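A minimal C sketch of that fix, assuming a login UID of 1000 and placeholder gammu-smsd-inject arguments (both are assumptions, not from the original post):
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: drop to the interactive login's UID before exec'ing,
           since running under the daemon's UID caused the silent failure.
           1000 is an assumed UID; use the one that works from bash. */
        if (setuid(1000) != 0) {
            perror("setuid");
            _exit(EXIT_FAILURE);
        }
        execl("/usr/bin/gammu-smsd-inject", "gammu-smsd-inject",
              "TEXT", "+15551234567", "-text", "sensor data", (char *)NULL);
        perror("execl");                  /* reached only if exec fails */
        _exit(EXIT_FAILURE);
    }
    int status = 0;
    waitpid(pid, &status, 0);             /* parent waits, as in the question */
    return WIFEXITED(status) ? WEXITSTATUS(status) : EXIT_FAILURE;
}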

C++ executing a bash script which terminates and restarts the current process

So here is the situation: we have a C++ datafeed client program which we run ~30 instances of with different parameters, and there are 3 scripts written to run/stop them: start.sh, stop.sh, and restart.sh (which runs stop.sh and then start.sh).
When there is a high volume of data the client "falls behind" real time. We test this by comparing the system time to the most recent data entry times listed. If any of the clients falls behind more than 10 minutes or so, I want to call the restart script to start all the binaries fresh so our data is as close to real time as possible.
Normally I call a script using system("script.sh"); however, the restart script looks up and kills the process using kill, BUT calling system() also makes the current program ignore SIGQUIT and SIGINT until system() returns.
On top of this, if there are two concurrent executions with the same arguments they conflict and the program hangs (this stems from establishing database connections), so I cannot start the new instance until the old one is killed, and I cannot kill the current one while it ignores SIGQUIT.
Is there any way around this? The current state of the binary and missing some data do not matter at all once it has reached the threshold. I also cannot just have the program restart itself: if one of the instances falls behind, we want to restart all 30 instances (so gaps in the data are at uniform times). Is there a clean way to call a script from within C++ which hands over control and allows the script to restart the program from scratch?
FYI we are running on CentOS 6.3
Use exec() instead of system(). It will replace your process with the new one. Note there is a significant difference in how exec() is called and how it behaves: system() passes its string argument to the system shell to run, while exec() actually executes an executable file, and you need to supply the arguments to the process one at a time instead of letting the shell parse them apart for you.
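A minimal sketch of that, assuming the restart script lives at /opt/feed/restart.sh (a hypothetical path):
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

void restart_all_instances()
{
    // Replace this process image with the restart script. On success
    // execl() never returns, so nothing after it runs in the old binary.
    execl("/bin/bash", "bash", "/opt/feed/restart.sh", (char *)nullptr);

    // Reached only if exec itself failed (e.g. the script is missing).
    std::perror("execl");
    std::exit(EXIT_FAILURE);
}
Because the calling process is replaced in place rather than waited on, the signal-ignoring window that system() creates never arises.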
Here's my two cents.
Temporary solution: Use SIGKILL.
Long-term solution: Optimize your code or the general logic of your service tree, using other system calls like exec, or by rewriting it to use threads.
If you want better answers, maybe you should post some code and/or make the issue less general.

In GDB, how do I execute a command automatically when program stops? (like display)

I want some commands to be automatically executed each time the program stops, just like what display does with x. How do I do that?
Here's the easy way I found out:
define hook-stop
...commands to be executed when execution stops
end
Refer to this page for details: http://sourceware.org/gdb/current/onlinedocs/gdb/Hooks.html#Hooks
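For example, a hook that shows the program counter and a watched variable on every stop could look like this (myvar is a placeholder name):
define hook-stop
info registers pc
print myvar
end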
Another "new" way to do it is with the Python Event interface:
def stop_handler(event):
    print("event type: stop")
gdb.events.stop.connect(stop_handler)
which will trigger the stop_handler function each time the inferior stops.
There are two other similar event types:
events.cont
events.exited
respectively triggered when the inferior is continued or exits.
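As a minimal sketch (file and handler names are mine), the exited event can be wired up the same way; save it as hooks.py and load it with source hooks.py from the gdb prompt:
import gdb

def exit_handler(event):
    # Called when the inferior exits; the ExitedEvent may carry exit_code.
    print("inferior exited")

gdb.events.exited.connect(exit_handler)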

Kill Bash copy child process to simulate crash

I'm trying to test a Bash script which copies files individually and does some stuff to each file. It is meant to be resumable, so I'd like to make sure to test this properly. What is an elegant solution to kill or otherwise abort the script which does the copying from the test script, making sure it does not have time to copy and process all the files?
I have the PID of the child process, I can change the source code of both scripts, and I can create arbitrarily large files to test on.
Clarification: I start the script in the background with &, get the PID as $!, then I have a loop which checks that there is at least one file in the target directory (the test script copies three files). At that point I run kill -9 $PID, but the process is not interrupted; the files are copied successfully. This happens even if the files are big enough that creating them (with dd and /dev/urandom) takes a couple of seconds.
Could it be that the files only become visible to the shell once cp has finished? It would be a bit strange, but it would explain why the kill command comes too late.
Also, the idea is not to test resuming the same process, but cutting off the first process (simulate a system crash) and resuming with another invocation.
Send a KILL signal to the child process:
kill -KILL $childpid
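A sketch of the harness described in the clarification above (script and directory names are placeholders):
./copy.sh &                      # start the copier in the background
pid=$!
until compgen -G 'dest/*' > /dev/null; do
    sleep 0.1                    # wait for the first file to appear
done
kill -KILL "$pid"                # simulate the crash mid-copy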
You can try to play the timing game by using large files and sleeps, but you may have an issue with the repeatability of the test.
You can add throttling code to the script you're testing and then just throttle it all the way down. You can implement the throttling by passing in a value which is:
a sleep value for sleeping in the loop
the number of files to process
the number of seconds after which the script will die
a nice value to execute the script at
Some of these may work better or worse from a testing point of view. nice'ing may get you variable results, as will setting up a background process to kill your script after N seconds. You can also try more than one of these at the same time which may give you the control you want. For example, accepting both a sleep value and the kill seconds could give you fine grained throttling control.
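A minimal bash sketch of the first option, a sleep value passed as an argument (the file layout and variable names are mine):
#!/bin/bash
# copy.sh <throttle_seconds> - throttled copy loop for resumability tests
throttle=${1:-0}
for f in src/*; do
    cp "$f" dest/
    sleep "$throttle"            # delay so a test harness can kill us mid-run
done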