I have a client-server system, written entirely in C++. The server runs as a daemon controlled via /etc/init.d/serverd with start/stop options. The client executes any command as client.exe --options, and each client call hits the daemon.
I want to attach Valgrind to the daemon started by /etc/init.d/serverd to detect leaks.
I tried the options below but failed:
/usr/local/bin/valgrind --log-file=valgrind_1.log -v --trace-children=yes --leak-check=full --tool=memcheck --vgdb=yes --vgdb-error=0 /etc/init.d/serverd start
Each time it fails to attach to the daemon.
What I want is to attach Valgrind to the daemon at startup [to be exact: I will stop the daemon, attach Valgrind to it, and then start it again], so that each time client.exe --options is executed, logs for the daemon are generated in --log-file=valgrind_1.log.
Does anyone have any idea how to do this?
It does not seem possible to attach Valgrind to an existing process:
http://valgrind.org/docs/manual/faq.html#faq.attach
It seems to me the best approach is to kill the daemon process and run the executable under Valgrind yourself.
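For example, a minimal sketch, assuming the init script ultimately launches a binary at a hypothetical path such as /usr/local/bin/serverd (check the init script for the real path and arguments):

/etc/init.d/serverd stop
/usr/local/bin/valgrind --tool=memcheck --leak-check=full --trace-children=yes \
    --log-file=/tmp/valgrind_serverd.log /usr/local/bin/serverd    # plus the daemon's usual arguments

The leak summary is written to the log file when the daemon exits.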
For a systemd-managed daemon, you can change ExecStart= to run valgrind, like the following:
ExecStart={valgrind-command-with-flags} /usr/sbin/foo-daemon
Do make sure to redirect the output to a well defined location.
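For example, a minimal sketch of a drop-in override, assuming a hypothetical unit named foo-daemon.service (paths and flags are placeholders):

# /etc/systemd/system/foo-daemon.service.d/valgrind.conf
[Service]
ExecStart=
ExecStart=/usr/bin/valgrind --leak-check=full --log-file=/var/log/valgrind_foo-daemon.log /usr/sbin/foo-daemon

The empty ExecStart= clears the original command; run systemctl daemon-reload and restart the service for the override to take effect.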
Caution: a daemon running under Valgrind can be extremely slow and may not behave as expected.
I start a gdb session in the background with a command like this:
gdb --batch --command=/tmp/my_automated_breakpoints.gdb -p pid_of_process &> /tmp/gdb-results.log &
The & at the end lets it run in the background (and the shell is immediately closed afterwards as this command is issued by a single ssh command).
I can find out the pid of the gdb session with ps -aux | grep gdb.
However: How can I gracefully detach this gdb session from the running process just like I would if I had the terminal session in front of me with the (gdb) detach command?
When I kill the gdb session (and not the running process itself) with kill -9 gdb_pid, I get unwanted SIGABRTs afterwards in the running program.
A restart of the service is too time consuming for my purpose.
In case of a successful debugging session with this automated script I could use a detach command inside the batch script. This is however not my case: I want to detach/quit the running gdb session when there are some errors during the session, so I would like to gracefully detach gdb by hand from within another terminal session.
If you run the gdb command from terminal #1 in the background, you can always bring gdb back into foreground by running the command fg. Then, you can simply CTRL+C and detach as always to stop the debugging session gracefully.
Assuming that terminal #1 is now occupied by something else and you cannot use it, you can send a SIGHUP signal to the gdb process to detach it:
sudo kill -s SIGHUP $(pidof gdb)
(Replace the $(pidof gdb) with the actual PID if you have more than one gdb instance)
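If several gdb instances are running, a sketch like this (target_pid is a placeholder for the PID of the debugged process) picks the right one by matching gdb's full command line:

target_pid=12345                                          # the PID that was passed to gdb with -p
sudo kill -s SIGHUP $(pgrep -f "gdb.*-p *$target_pid")    # SIGHUP makes gdb detach and exit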
I have many EC2 instances that retain Celery jobs for processing. To efficiently start the overall task of completing the queue, I have tested AWS-RunBashScript in AWS SSM with a Bash script that calls a Python script. For example, for a single instance this begins with sh start_celery.sh.
When I run the command in SSM, this is the following output (compare to other output below, after reading on):
/home/ec2-user/dh2o-py/venv/local/lib/python2.7/dist-packages/celery/utils/imports.py:167:
UserWarning: Cannot load celery.commands extension u'flower.command:FlowerCommand':
ImportError('No module named compat',)
namespace, class_name, exc))
/home/ec2-user/dh2o-py/tasks/task_harness.py:49: YAMLLoadWarning: calling yaml.load() without
Loader=... is deprecated, as the default Loader is unsafe. Please read https://msg.pyyaml.org/load for full details.
task_configs = yaml.load(conf)
Running a worker with superuser privileges when the worker accepts messages serialized with pickle is a very bad idea!
If you really want to continue then you have to set the C_FORCE_ROOT
environment variable (but please think about this before you do).
User information: uid=0 euid=0 gid=0 egid=0
failed to run commands: exit status 1
Note that only warnings are thrown. When I SSH to the same instance and run the same command (i.e. sh start_celery.sh), the following (same) output results BUT the process runs:
I have verified that the process does NOT run when doing this via SSM, and I have no idea why. As a work-around, I tried running the sh start_celery.sh command with bootstrapping in user data for each EC2, but that failed too.
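For what it's worth, the output above shows the SSM run happening as root (uid=0), while my SSH session runs as ec2-user, so one guess (unconfirmed) is that running the script as the same non-root user from within the SSM command would behave differently, roughly:

sudo -u ec2-user bash -c 'cd /home/ec2-user/dh2o-py && sh start_celery.sh'    # guess: run as the SSH user instead of root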
So why does SSM fail to actually run the process that succeeds when I SSH to each instance and run identical commands? The details below relate to the machine and Python configuration:
I want to debug the very initial startup of a daemon started as a service under linux (centos 7).
My service is started as: "service mydaemon start"
I know about attaching gdb to a running process, but unfortunately that technique is too slow; the initial execution of mydaemon is important.
mydaemon is written in C++ and full debug info is available.
unfortunately that technique is too slow
There are two general solutions to this problem.
The first one is described here: you make your target executable wait for GDB to attach (this requires building a special version of the daemon).
The second is to "wrap" your daemon in gdbserver (as root):
mv mydaemon mydaemon.exe
cat > mydaemon <<'EOF'
#!/bin/sh
exec gdbserver :1234 /path/to/mydaemon.exe "$@"
EOF
chmod +x mydaemon
Now execute service mydaemon start, and your process will be stopped by gdbserver and will wait for connection from GDB.
gdb /path/to/mydaemon.exe
(gdb) target remote :1234
# You should now be looking at the mydaemon process stopped in `_start`.
At that point you can set breakpoints, and use continue or next or step as appropriate.
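For example, to stop at the daemon's own main:

(gdb) break main
(gdb) continue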
First, my ultimate goal is to cross-compile OpenCV for ARM, so I have tried two approaches, but no success so far.
This question is related to using distcc for compiling, using the target to run the make command but taking advantage of a beefy server to speed things up.
Basically, the target doesn't seem to be sending jobs to the slave server.
I installed distcc on both machines (apt-get install distcc)
As I understand it, the daemon only needs to run on the slave.
I set up hosts in /etc/distcc/hosts: in that file I have the IPs of both the target at 192.168.10.45 and the slave at 192.168.10.34.
I run the daemon with
distccd --daemon --allow 192.168.10.45
to allow the target
with ps aux | grep distcc
I can see 32 instances of distccd running.
If I use
netstat -pant | grep distcc
I see the daemon listening
Now, if I tail the log file at /var/log/distccd.log, there is nothing there, and nothing happening
When I run a job on the target with
make -j33 CC=distcc
it seems to run fine, but I see nothing happening on the slave
ufw is disabled, and the two machines can ping and talk to each other via ssh.
What am I missing here?
You must define the list of compilation hosts (through the /etc/distcc/hosts file or through the DISTCC_HOSTS environment variable) on the master (target) machine. Check the host list by running on the master distcc --show-hosts.
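A minimal sketch of that host list on the target, assuming the slave at 192.168.10.34 is the only helper (add localhost as well only if you also want jobs compiled locally):

# /etc/distcc/hosts on the target (192.168.10.45)
192.168.10.34

# or, equivalently, per shell:
export DISTCC_HOSTS="192.168.10.34"
distcc --show-hosts    # should print 192.168.10.34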
Specify distcc as a compiler for C++ as well:
make -j33 CC=distcc CXX=distcc
Did you run:
sudo update-distcc-symlinks
The official installation documentation currently omits this step. I had the same symptoms and had some trouble finding the log, but eventually saw that I had to specify logging in an environment variable:
DISTCCD_OPTS="${DISTCCD_OPTS} --log-file /dev/shm/distccd.log"
Which said:
(dcc_warn_masquerade_whitelist) CRITICAL! /usr/local/lib/distcc not found. You must see up masquerade (see distcc(1)) to list whitelisted compilers or pass --enable-tcp-insecure. To set up masquerade automatically run update-distcc-symlinks.
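If you want to watch jobs arrive interactively, a sketch like this (all standard distccd flags; the log path is just an example) runs the daemon in the foreground on the slave with verbose logging:

distccd --no-detach --daemon --allow 192.168.10.45 --log-file /dev/shm/distccd.log --log-level debug
# in another terminal:
tail -f /dev/shm/distccd.log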
I have a script that takes a lot of time to complete.
Instead of waiting for it to finish, I'd rather just log out and retrieve its output later on.
I've tried:
at -m -t 03030205 -f /path/to/./thescript.pl
nohup /path/to/./thescript.pl &
And I have also verified that the processes actually exist with ps and at -l, depending on which scheduling syntax I used.
Both these processes die when I exit out of the shell. Is there a way to keep a script from terminating when I close the connection?
We have cron jobs here, and they are set up and working properly, but I would like to use at or nohup for single-use scripts.
Is there something wrong with my syntax? Are there any other methods to producing the desired outcome?
EDIT:
I cannot use screen or disown - they aren't installed in my HP Unix setup, and I am not in a position to install them either.
Use screen. It creates a terminal which keeps going when you log out. When you log back in you can switch back to it.
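A quick sketch of that workflow:

screen -S longjob              # start a named session
/path/to/./thescript.pl        # run the script inside it
# press Ctrl-A then D to detach, then log out
screen -r longjob              # reattach after logging back in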
If you want to keep a process running after you log out:
disown -h <pid>
is a useful bash built-in. Unlike nohup, you can run disown on an already-running process.
First, stop your job with control-Z, get the pid from ps (or use echo $!), use bg to send it to the background, then use disown with the -h flag.
Don't forget to background your job or it will be killed when you logout.
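For example, the whole sequence looks roughly like this:

$ /path/to/./thescript.pl      # started in the foreground
^Z                             # Ctrl-Z stops it
$ bg %1                        # resume it in the background
$ disown -h %1                 # tell bash not to send it SIGHUP on logout
$ exit                         # the script keeps running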
This is just a guess, but something I've seen with some versions of ssh and nohup: if you've logged in with ssh, then you may need to redirect stdout, stderr and stdin to avoid having the session hang when you exit. (One of those may still be attached to the terminal.) I would try:
nohup /path/to/./thescript.pl > whatever.stdout 2> whatever.stderr < /dev/null &
(This is no longer the case with my current versions of ssh and nohup - the latter redirects them if it detects that any is attached to a terminal - but you may be using different versions.)
Syntax for nohup looks ok, but your account may not allow for processes to run after logout. Also, try redirecting the stdout/stderr to a log file or /dev/null.
Run your command in the background:
/path/to/./thescript.pl &
To get a list of your background jobs:
jobs
Now you can selectively disown any of the above jobs using its job ID:
disown <jobid>
All the disowned processes should keep running even after you log out.