gdb step until source is available again

I'm running GNU grep under gdb on Linux and single-stepping it. After about 12 steps, control is transferred to setlocale.c, for which no source code is available; the list command just shows the file name.
Is there a way of getting gdb to keep stepping until a file with source code is available again? Alternatively, is there a way of telling gdb to keep stepping until control is transferred to a different file?
Example session, showing source code initially available and then unavailable for setlocale.c:
(gdb) start
Temporary breakpoint 1 at 0x402e50: file grep.c, line 2415.
Starting program: ~/ws/opt/grep/out/bin/grep --context=20 -r --line-number --byte-offset --include=\*.c int .
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Temporary breakpoint 1, main (argc=0x8, argv=0x7fffffffdaa8) at grep.c:2415
2415 {
(gdb) l
2410 return result;
2411 }
2412
2413 int
2414 main (int argc, char **argv)
2415 {
2416 char *keys = NULL;
2417 size_t keycc = 0, oldcc, keyalloc = 0;
2418 int matcher = -1;
2419 bool with_filenames = false;
(gdb) s 12
__GI_setlocale (category=category@entry=0x6, locale=locale@entry=0x420b7b "") at setlocale.c:220
220 setlocale.c: No such file or directory.
(gdb) l
215 in setlocale.c

You need gdb's finish command. It continues execution until the current stack frame (the one with no source code available) returns. You can use it as many times as you want until you are back in a stack frame with source code. See the documentation.
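For example, picking up from the session above, it would look something like this (output abbreviated and illustrative, not captured from a real run):
(gdb) finish
Run till exit from #0  __GI_setlocale (...) at setlocale.c:220
main (argc=0x8, argv=0x7fffffffdaa8) at grep.c:...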

I ended up writing a simple gdb script using the Python API to do this. It will keep stepping until control is transferred to the next file, regardless of whether that involves adding a new stack frame or leaving the current one.
The script can be loaded with source leave_this_file.py. It defines a command called leave_this_file that can be invoked with no arguments, or given a number of times to repeat.
The script is a little bit makeshift and ends up parsing the result of the gdb command frame 0 rather than using one of gdb's proper APIs for inspecting frames.
import gdb  # available automatically inside gdb's Python interpreter

MAX_STEPS = 10000


def get_file_name():
    """extract the file name for the bottommost frame"""
    # example string:
    #0  main (argc=0x7, argv=0x7fffffffdaa8) at grep.c:2415
    # <source fragment>
    where_str = gdb.execute("frame 0", from_tty=False, to_string=True)
    # the last word of the first line is file:line
    file_line = where_str.splitlines()[0].split()[-1]
    filename, _, line = file_line.rpartition(":")
    # confirm that the line number is an int, raise otherwise
    int(line)
    return filename


def step_out_of_file_once():
    orig_file_name = get_file_name()
    current_file_name = orig_file_name
    counter = 0
    for x in range(MAX_STEPS):
        gdb.execute("step", from_tty=False, to_string=True)
        counter += 1
        current_file_name = get_file_name()
        if orig_file_name != current_file_name:
            break
    print("%s: %30s, %s: %s" % ("new", current_file_name, "steps", counter))


class LeaveThisFile(gdb.Command):
    """step out of the current file"""

    def __init__(self):
        gdb.Command.__init__(
            self, "leave_this_file", gdb.COMMAND_DATA, gdb.COMPLETE_SYMBOL, True
        )

    def invoke(self, arg, from_tty):
        # interpret the arg as a number of times to execute the command,
        # 1 by default
        if arg:
            arg = int(arg)
        else:
            arg = 1
        for x in range(arg):
            step_out_of_file_once()


LeaveThisFile()
Here's some example output when running GNU grep under gdb:
(gdb) source leave_this_file.py
(gdb) leave_this_file 15
new: setlocale.c, steps: 18
new: pthread_rwlock_wrlock.c, steps: 8
new: ../sysdeps/unix/sysv/linux/x86/hle.h, steps: 3
new: pthread_rwlock_wrlock.c, steps: 1
new: setlocale.c, steps: 7
new: ../sysdeps/x86_64/multiarch/../strcmp.S, steps: 1
new: setlocale.c, steps: 48
new: getenv.c, steps: 4
new: ../sysdeps/x86_64/strlen.S, steps: 2
new: getenv.c, steps: 16
new: ../sysdeps/x86_64/multiarch/../strcmp.S, steps: 64
new: getenv.c, steps: 53
new: setlocale.c, steps: 16
new: ../sysdeps/x86_64/multiarch/../strchr.S, steps: 5
new: setlocale.c, steps: 23


Why does '$' for funcname in git log -L cause an infinite search?

You can search for a filename and function name in git log with git log -L :funcname:filename.
I ran into an issue where we had been running this search programmatically and the funcname was set to '$', which caused an endless search. (e.g. git log -L :$:somefile.py)
'$' means end of string in regex, but why does this cause an endless search loop when other regex characters like '^' or '?' don't? What unique effects does the '$' character have?
I was able to reproduce this locally and step through the code using gdb.
This looks like a bug in the find_funcname_matching_regexp function. It eventually ends up matching $ against the empty string, which matches successfully but causes no change in the pointers used to mark the position in the file, resulting in an infinite loop.
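The hazard is not specific to git. Here is a minimal Python sketch of the same failure mode (illustrative code, not git's actual implementation): a zero-width match like $ leaves the scan position unchanged, so a loop that advances by the match end never makes progress.

import re

def scan(text, pattern, max_iterations=10):
    # max_iterations caps the loop so the buggy case terminates in this demo
    pos = 0
    regex = re.compile(pattern)
    for _ in range(max_iterations):
        m = regex.search(text, pos)
        if m is None:
            return
        print("match at offset", m.start())
        # BUG (analogous to git's loop): '$' is a zero-width match, so
        # m.end() == m.start() and pos never advances.
        pos = m.end()
        # FIX: force progress on zero-width matches, e.g.
        # pos = m.end() + 1 if m.end() == m.start() else m.end()

scan("}\n", r"$")  # prints "match at offset 1" until the iteration cap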
Here's a walk-through of the reproducer. In this example, we're running git log against the file main.go, which has the following content:
package main

import "fmt"

func main() {
    fmt.Println("example repository for demonstrating git log bug")
}
Start gdb and set a breakpoint at the beginning of the while loop in line-range.c. Arrange to print the value of start after each break:
(gdb) break line-range.c:140
Breakpoint 1 at 0x5a7a89: file line-range.c, line 140.
(gdb) commands
Type commands for breakpoint(s) 1, one per line.
End with a line saying just "end".
>p start
>end
(gdb)
Run git log -L :$:main.go under the control of gdb:
(gdb) run log -L :$:main.go
Starting program: /home/lars/src/git/git log -L :$:main.go
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
[Detaching after fork from child process 2469232]
Breakpoint 1, find_funcname_matching_regexp (xecfg=0x0, start=0x8477c0 "package main\n\nimport \"fmt\"\n\nfunc main() {\n\tfmt.Println(\"example repository for demonstrating git log bug\")\n}\n", regexp=0x7fffffffc020) at line-range.c:140
140 reg_error = regexec(regexp, start, 1, match, 0);
$1 = 0x8477c0 "package main\n\nimport \"fmt\"\n\nfunc main() {\n\tfmt.Println(\"example repository for demonstrating git log bug\")\n}\n"
In this output, we can see that start is pointing at the beginning of the file:
$1 = 0x8477c0 "package main\n\nimport \"fmt\"\n\nfunc main() {\n\tfmt.Println(\"example repository for demonstrating git log bug\")\n}\n"
Skip a few iterations:
(gdb) c 7
Will ignore next 6 crossings of breakpoint 1. Continuing.
Breakpoint 1, find_funcname_matching_regexp (xecfg=0x0, start=0x84782b "}\n", regexp=0x7fffffffc020) at line-range.c:140
140 reg_error = regexec(regexp, start, 1, match, 0);
$2 = 0x84782b "}\n"
Here we see that start now points at the last line in the file.
Watch what happens if we iterate a few more times:
(gdb) c
Continuing.
Breakpoint 1, find_funcname_matching_regexp (xecfg=0x0, start=0x84782d "", regexp=0x7fffffffc020) at line-range.c:140
140 reg_error = regexec(regexp, start, 1, match, 0);
$3 = 0x84782d ""
(gdb) c
Continuing.
Breakpoint 1, find_funcname_matching_regexp (xecfg=0x0, start=0x84782d "", regexp=0x7fffffffc020) at line-range.c:140
140 reg_error = regexec(regexp, start, 1, match, 0);
$4 = 0x84782d ""
(gdb) c
Continuing.
Breakpoint 1, find_funcname_matching_regexp (xecfg=0x0, start=0x84782d "", regexp=0x7fffffffc020) at line-range.c:140
140 reg_error = regexec(regexp, start, 1, match, 0);
$5 = 0x84782d ""
After one more iteration of the loop, start now points at the empty string. It keeps this value in every subsequent iteration, and we never break out of the while loop.
I've submitted a patch to git that should correct this behavior.
You can follow the discussion there to see if they like my patch or if they decide there is a more appropriate way to resolve the problem.
With the patched version of the code, we see the following behavior instead:
$ git log -L :$:main.go
fatal: -L parameter '$' starting at line 1: no match

Execute groovy inside nextflow process using process inputs

I have a nextflow script with a channel for paired file inputs. I am trying to extract a substring from the file inputs to use as part of the shell call, using Groovy's regex matching, but since the substring is based on an input value, I am having trouble executing the matching. An alternative would be to perform the regex in bash as part of the process shell call, but I am interested in figuring out how to manipulate inputs within a process anyway, as I feel it would be useful for other things too. How can I run intermediate Groovy code on the process inputs prior to the shell call?
process alignment {
    input:
    val files

    output:
    stdout

    def matcher = "${files[1][0]}" =~ /.+\/bcl2fastq_out\/([^\/]+)\/.+/
    // this is the culprit; if I hardcode the first string it works
    def project = matcher.findAll()[0][1]
    """
    echo ${project}
    """
}
workflow {
    files = Channel
        .fromFilePairs("${params.out_dir}/**{_R1,_R2}_00?.fastq.gz", checkIfExists:true, size: 2)
    alignment(files)
}
When I execute this, I get the error:
No such variable: files
An example input string looks like extractions/test/bcl2fastq_out/project1/example_L001_R1_001.fastq.gz, where I'm trying to extract the project1 substring.
As you've already discovered, you can declare variables in the script block, before the command string. For example:
process alignment {
    input:
    tuple val(sample), path(fastq_files)

    output:
    tuple val(sample), path(output_file)

    script:
    output_file = "${sample}.bam"
    """
    >&2 echo "Aligning ${sample}..."
    touch "${output_file}"
    """
}
Note that these are global (within the process scope) unless you define them using the def keyword. If you don't need these elsewhere in your process definition, like in your example, a local variable (using def) is usually preferable. If, however, you need to access these in your output declaration, for example, then they will need to be global.
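For example, a value needed only inside the script block can be declared with def (a sketch; the report process and log_msg variable are hypothetical):

process report {
    input:
    val sample

    output:
    stdout

    script:
    def log_msg = "Aligning ${sample}..."  // local: def keeps it out of the process scope
    """
    >&2 echo "${log_msg}"
    """
}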
Note that the fromFilePairs factory method emits a tuple, where the first element is a group key and the second element is a list of files. The problem with just using val to declare the inputs is that the files in the second element will not be localized to the working directory when your script is run. To fix this, you can just change your input definition to something like:
input:
tuple val(sample), path(fastq_files)
The problem with this approach is that we're unable to extract the parent directory name from the localized filenames, so you will need to pass it in somehow. You can get the parent directory name from the first file in the tuple before the files are staged, using:
params.input_dir = './path/to/files'
params.pattern = '**_R{1,2}_00?.fastq.gz'

process alignment {
    debug true

    input:
    tuple val(sample), val(project), path(fastq_files)

    """
    echo "${sample}: ${project}: ${fastq_files}"
    """
}

workflow {
    Channel
        .fromFilePairs( "${params.input_dir}/${params.pattern}" )
        .map { sample, reads ->
            def project = reads[0].parent.name
            tuple( sample, project, reads )
        }
        .set { reads }

    alignment( reads )
}
But since the glob pattern has an additional wildcard, i.e. _00?, you may not necessarily get the results you expect. For example:
$ mkdir -p path/to/files/project{1,2,3}
$ touch path/to/files/project1/sample1_R{1,2}_00{1,2,3,4}.fastq.gz
$ touch path/to/files/project2/sample2_R{1,2}_00{1,2,3,4}.fastq.gz
$ touch path/to/files/project3/sample3_R{1,2}_00{1,2,3,4}.fastq.gz
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.0
Launching `main.nf` [determined_roentgen] DSL2 - revision: f80ab33ac8
executor > local (12)
[a8/9235cc] process > alignment (12) [100%] 12 of 12 ✔
sample2: project2: sample2_R1_001.fastq.gz sample2_R1_004.fastq.gz
sample1: project1: sample1_R1_003.fastq.gz sample1_R2_001.fastq.gz
sample1: project1: sample1_R1_004.fastq.gz sample1_R2_003.fastq.gz
sample3: project3: sample3_R1_001.fastq.gz sample3_R2_001.fastq.gz
sample1: project1: sample1_R1_001.fastq.gz sample1_R2_004.fastq.gz
sample1: project1: sample1_R1_002.fastq.gz sample1_R2_002.fastq.gz
sample2: project2: sample2_R1_002.fastq.gz sample2_R2_002.fastq.gz
sample2: project2: sample2_R2_001.fastq.gz sample2_R2_004.fastq.gz
sample2: project2: sample2_R1_003.fastq.gz sample2_R2_003.fastq.gz
sample3: project3: sample3_R2_002.fastq.gz sample3_R2_004.fastq.gz
sample3: project3: sample3_R1_003.fastq.gz sample3_R1_004.fastq.gz
sample3: project3: sample3_R1_002.fastq.gz sample3_R2_003.fastq.gz
Fortunately, we can supply a custom file pair grouping strategy using a closure. This uses the readPrefix helper function:
workflow {
    Channel
        .fromFilePairs( "${params.input_dir}/${params.pattern}" ) { file ->
            prefix = Channel.readPrefix(file, params.pattern)
            suffix = file.simpleName.tokenize('_').last()
            "${file.parent.name}/${prefix}_${suffix}"
        }
        .map { key, reads ->
            def (project, sample) = key.tokenize('/')
            tuple( sample, project, reads )
        }
        .set { reads }

    alignment( reads )
}
Results:
$ nextflow run main.nf
N E X T F L O W ~ version 22.04.0
Launching `main.nf` [loving_cantor] DSL2 - revision: 5a76ac712f
executor > local (12)
[f4/74edbc] process > alignment (12) [100%] 12 of 12 ✔
sample1_003: project1: sample1_R1_003.fastq.gz sample1_R2_003.fastq.gz
sample2_002: project2: sample2_R1_002.fastq.gz sample2_R2_002.fastq.gz
sample1_002: project1: sample1_R1_002.fastq.gz sample1_R2_002.fastq.gz
sample2_003: project2: sample2_R1_003.fastq.gz sample2_R2_003.fastq.gz
sample2_004: project2: sample2_R1_004.fastq.gz sample2_R2_004.fastq.gz
sample2_001: project2: sample2_R1_001.fastq.gz sample2_R2_001.fastq.gz
sample1_001: project1: sample1_R1_001.fastq.gz sample1_R2_001.fastq.gz
sample1_004: project1: sample1_R1_004.fastq.gz sample1_R2_004.fastq.gz
sample3_001: project3: sample3_R1_001.fastq.gz sample3_R2_001.fastq.gz
sample3_004: project3: sample3_R1_004.fastq.gz sample3_R2_004.fastq.gz
sample3_002: project3: sample3_R1_002.fastq.gz sample3_R2_002.fastq.gz
sample3_003: project3: sample3_R1_003.fastq.gz sample3_R2_003.fastq.gz
I figured it out: instead of jumping straight into the shell script with the triple quotes, you can introduce the process execution script with "script:" and then run Groovy code that uses the process inputs:
process alignment {
    input:
    val files

    output:
    stdout

    script:
    test = (files[1][0] =~ '.+/test/([^/]+)/.+').findAll()[0][1]
    """
    echo $test
    """
}

how to convert address to lines and file names on windows with PDB files?

I have PDB files and a call stack like this:
#1 - wseclient.dll+0x121170
#2 - wseclient.dll+0x120024
#3 - wseclient.dll+0x25fbf
I used to use add2line.exe to convert addresses to lines, but it doesn't work with VS2019. Does anybody know another way to convert the addresses in a call stack?
Setting up an example
Some sample code to be compiled with Visual Studio 2019:
#include <iostream>

class Test {
public:
    __declspec(noinline) static int one()
    {
        throw std::exception("maximize debugging fun");
    }
};

__declspec(noinline) int main()
{
    return Test::one();
}
This will create the following call stack when being run in WinDbg:
0:000> k
# ChildEBP RetAddr
...
02 00affa6c 00661316 CallStackDecodingExample!Test::one+0x1d [C:\...\CallStackDecodingExample.cpp # 6]
...
And we can use some calculations to get the numbers back and forth:
0:000> ? CallStackDecodingExample!Test::one+0x1d
Evaluate expression: 6689037 = 0066110d
0:000> ? CallStackDecodingExample
Evaluate expression: 6684672 = 00660000
0:000> ? CallStackDecodingExample+0x110d
Evaluate expression: 6689037 = 0066110d
0:000> ln 0066110d
[C:\...\CallStackDecodingExample.cpp # 6]
(006610f0) CallStackDecodingExample!Test::one+0x1d |
(00661110) CallStackDecodingExample!main
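In other words, the module base is 0x00660000 and the raw address 0x0066110d is module offset 0x110d (0x0066110d - 0x00660000 = 0x110d), which ln resolves back to Test::one+0x1d.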
Given the debugger was able to resolve the PDBs correctly, we now have the expected result. Let's try to get this without a debugging session, i.e. neither with live debugging nor with crash dump analysis, but by PDBs + text input.
Getting the address from a DLL + PDB
In WinDbg, use "Open dump file", although you don't have a crash dump file. Instead, open the DLL (wseclient.dll or CallStackDecodingExample.exe for this example).
Then use ln:
0:000> ln CallStackDecodingExample+0x110d
[C:\...\CallStackDecodingExample.cpp # 6]
(004010f0) CallStackDecodingExample!Test::one+0x1d
| (00401110) CallStackDecodingExample!main

name input/output files in snakemake according to variable (not wildcard) in config.yaml

I am trying to edit and run a snakemake pipeline. In a nutshell, the pipeline calls a default genome aligner (minimap) and produces output files whose names contain the aligner's name. I am trying to add an aligner variable to config.yaml to specify the aligner I want to call. Also (where I am actually stuck), the output files should be named after the aligner specified in config.yaml.
My config.yaml looks like this:
# this config.yaml is passed to Snakefile in pipeline-structural-variation subfolder.
# Snakemake is run from this pipeline-structural-variation folder; it is necessary to
# pass an appropriate path to the input-files (the ../ prefix is sufficient for this demo)
aligner: "ngmlr" # THIS IS THE VARIABLE I AM ADDING TO THIS FILE. VALUES COULD BE minimap or ngmlr
# FASTQ file or folder containing FASTQ files
# check if this has to be gzipped
input_fastq: "/nexusb/Gridion/20190917PGD2staal2/PD170815/PD170815_cat_all.fastq.gz" # original is ../RawData/GM24385_nf7_chr20_af.fastq.gz
# FASTA file containing the reference genome
# note that the original reference sequence contains only the sequence of chr20
reference_fasta: "/nexus/bhinckel/19/ONT_projects/PGD_breakpoint/ref_hg19_local/hg19_chr1-y.fasta" # original is ../ReferenceData/human_g1k_v37_chr20_50M.fasta
# Minimum SV length
min_sv_length: 300000 # original value was 40
# Maximum SV length
max_sv_length: 1000000 # original value was 1000000. Note that the value I used to run the pipeline for the sample PD170677 was 100000000000, which will be coerced to NA in the R script (/home/bhinckel/ont_tutorial_sv/ont_tutorial_sv.R)
# Min read length. Shorter reads will be discarded
min_read_length: 1000
# Min mapping quality. Reads will lower mapping quality will be discarded
min_read_mapping_quality: 20
# Minimum read support required to call a SV (auto for auto-detect)
min_read_support: 'auto'
# Sample name
sample_name: "PD170815" # original value was GM24385.nf7.chr20_af. Note that this can be a list
I am posting below the sections of my snakefile which generate output files with the extension _minimap2.bam, which I would like to replace with either _minimap2.bam or _ngmlr.bam, depending on the aligner in config.yaml:
# INPUT BAM folder
bam = None
if "bam" in config:
    bam = os.path.join(CONFDIR, config["bam"])

# INPUT FASTQ folder
FQ_INPUT_DIRECTORY = []
if not bam:
    if not "input_fastq" in config:
        print("\"input_fastq\" not specified in config file. Exiting...")
    FQ_INPUT_DIRECTORY = os.path.join(CONFDIR, config["input_fastq"])
    if not os.path.exists(FQ_INPUT_DIRECTORY):
        print("Could not find {}".format(FQ_INPUT_DIRECTORY))
    MAPPED_BAM = "{sample}/alignment/{sample}_minimap2.bam"  # Original
    #MAPPED_BAM = "{sample}/alignment/{sample}_{alignerName}.bam"  # this did not work
    #MAPPED_BAM = f"{sample}/alignment/{sample}_{config['aligner']}.bam"  # this did not work either
else:
    MAPPED_BAM = find_file_in_folder(bam, "*.bam", single=True)
...
if config['aligner'] == 'minimap':
    rule index_minimap2:
        input:
            REF = FA_REF
        output:
            "{sample}/index/minimap2.idx"
        threads: config['threads']
        conda: "env.yml"
        shell:
            "minimap2 -t {threads} -ax map-ont --MD -Y {input.REF} -d {output}"

    rule map_minimap2:
        input:
            FQ = FQ_INPUT_DIRECTORY,
            IDX = rules.index_minimap2.output,
            SETUP = "init"
        output:
            BAM = "{sample}/alignment/{sample}_minimap2.bam",
            BAI = "{sample}/alignment/{sample}_minimap2.bam.bai"
        conda: "env.yml"
        threads: config["threads"]
        shell:
            "cat_fastq {input.FQ} | minimap2 -t {threads} -K 500M -ax map-ont --MD -Y {input.IDX} - | samtools sort -@ {threads} -O BAM -o {output.BAM} - && samtools index -@ {threads} {output.BAM}"
else:
    print(f"Aligner is {config['aligner']} - skipping indexing step for minimap2")

    rule map_ngmlr:
        input:
            REF = FA_REF,
            FQ = FQ_INPUT_DIRECTORY,
            SETUP = "init"
        output:
            BAM = "{sample}/alignment/{sample}_minimap2.bam",
            BAI = "{sample}/alignment/{sample}_minimap2.bam.bai"
        conda: "env.yml"
        threads: config["threads"]
        shell:
            "cat_fastq {input.FQ} | ngmlr -r {input.REF} -t {threads} -x ont - | samtools sort -@ {threads} -O BAM -o {output.BAM} - && samtools index -@ {threads} {output.BAM}"
I initially tried to create an alignerName parameter, similar to the sample parameter, as shown below:
# Parameter: sample_name
sample = "sv_sample01"
if "sample_name" in config:
    sample = config['sample_name']

###############
#
# code below created by me
#
###############

# Parameter: aligner_name
alignerName = "defaultAligner"
if "aligner" in config:
    alignerName = config['aligner']
Then I tried to use {alignerName} wherever I have minimap2 in my input/output files (see the commented MAPPED_BAM variable definitions above), but this throws an error. I guess snakemake interprets {alignerName} as a wildcard, though what I want is simply to pass the value defined in config['aligner'] to the input/output files. I also tried an f-string (MAPPED_BAM = f"{sample}/alignment/{sample}_{config['aligner']}.bam"), but that did not work either.
You are close!
The way wildcards work in snakemake is that they get interpreted last, while f-strings get interpreted first. To keep a literal curly brace in an f-string, escape it with another curly brace, like so:
print(f"{{keep curly}}")
>>> {keep curly}
So all we need to do is
MAPPED_BAM = f"{{sample}}/alignment/{{sample}}_{config['aligner']}.bam"

How to log all commands run By system() System Call

I am trying to debug a C++ application which invokes many command-line applications, such as grep, through the system() call. I need to see all the commands the application is executing through system().
I tried to view these commands by enabling history and viewing the .history file, but these commands are not executed through a terminal; the history file has only the commands executed interactively.
Any idea how this can be done?
Define a new macro with the same name:
#define system(_x) std::cout << _x << std::endl; (system)(_x);
The system macro replaces the system function and:
It prints the command to the standard output (or elsewhere).
It calls the system function.
Thanks to Hasturkun's suggestion, the following is better:
#define system(_x) (std::cout << (_x) << std::endl, system(_x))
That returns the result of the system function call, too ;-)
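A minimal sketch of how this could be wired up (a hypothetical example; the macro must be defined after <cstdlib> is included and before any call sites, for instance in a common header or a forced include):

#include <cstdlib>
#include <iostream>

#define system(_x) (std::cout << (_x) << std::endl, system(_x))

int main()
{
    // prints "echo hello" to stdout, then executes it as before;
    // the inner system() is not re-expanded, so it calls the real function
    return system("echo hello");
}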
To trace every command executed by "yourProgram":
truss -s!all -daDf -t exec yourProgram
eg:
$ truss -s!all -daDf -t exec sh -c "/bin/echo hello world;/bin/date"
Base time stamp: 1282164973.7245 [ Wed Aug 18 22:56:13 CEST 2010 ]
5664: 0.0000 0.0000 execve("/usr/bin/i86/ksh93", 0x080471DC, 0x080471EC) argc = 3
5664: argv: sh -c /bin/echo hello world;/bin/date
5665: 0.0106 0.0106 execve("/bin/echo", 0x08067484, 0x080674F8) argc = 3
5665: argv: /bin/echo hello world
hello world
5664: 0.0126 0.0126 execve("/bin/date", 0x080674E0, 0x080674F8) argc = 1
5664: argv: /bin/date
Wed Aug 18 22:56:13 CEST 2010
If you want to correlate these execs to system() calls, you can use that command:
truss -t execve -f -u 'libc:system' yourProgram
eg:
$ cat a.c
main()
{
    system("echo a b c");
    system("pwd");
}
$ truss -t execve -f -u 'libc:system' ./a
20073: execve("a", 0x08047240, 0x08047248) argc = 1
20073/1#1: -> libc:system(0x8050a5c, 0x0)
20074/1: execve("/bin/sh", 0x080471BC, 0x08047248) argc = 3
a b c
20073/1#1: <- libc:system() = 0
20073/1#1: -> libc:system(0x8050a68, 0x0)
20076/1: execve("/bin/sh", 0x080471BC, 0x08047248) argc = 3
/tmp
20073/1#1: <- libc:system() = 0
Finally, if you are using Solaris 10 or newer, you can use Dtrace for this task like this:
dtrace -Z -q -c yourProgram -n ' pid$target:libc:system:entry { printf("system(\"%s\")\n", copyinstr(arg0)); } '
which will give that output with the same "a" code:
a b c
/tmp
system("echo a b c")
system("pwd")
PS: By the way, system() isn't a system call but a standard library function.
You can use truss or strace (not sure which one comes with Solaris) to run the program and trace the calls to system.
For truss the relevant command will be something like truss -caf program_name
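If you are on Linux and have strace rather than truss, a roughly equivalent invocation would be:
strace -f -e trace=execve ./yourProgram
Here -f follows forked children, so the /bin/sh processes that system() spawns are traced too.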