How to run a C++, PortAudio application on startup on Angstrom Linux on a BeagleBoard?

I have a command-line application called xooky_nabox that was written in C++. It reads a Pure Data patch, processes signals from the audio-in jack of a BeagleBoard, and outputs signals through the audio-out jack.
I want the application to run when the BeagleBoard starts up and stay running until the board is shut down. There is no GUI and no keyboard or monitor attached to the board, just the audio in and out jacks.
If I run the application manually everything works fine:
xooky_nabox -audioindev 1 -audiooutdev 1 /var/xooky/patch.pd
And it also runs fine if I run it in the background:
xooky_nabox -audioindev 1 -audiooutdev 1 /var/xooky/patch.pd &
Now, let me show the code layout of two versions of the program (The full thing is at https://github.com/rvega/XookyNabox):
Version 1, main thread is kept alive:
void sighandler(int signum){
    time_t rawtime;
    time(&rawtime);
    std::ofstream myfile;
    myfile.open("log.txt", std::ios::app);
    myfile << ctime(&rawtime) << " Caught signal:" << signum << " " << strsignal(signum) << "\n";
    myfile.close();
    if(signum == SIGTERM || signum == SIGINT){ // 15 or 2
        exit(0);
    }
}
int main(int argc, char *argv[]) {
    // Subscribe to all system signals for debugging purposes.
    for(int i = 0; i < 64; i++){
        signal(i, sighandler);
    }

    // Sanity checks, error and help messages, etc.
    parseParameters(argc, argv);

    // Start signal processing and audio.
    initZenGarden();
    initAudioIO();

    // Keep the program alive.
    while(1){
        sleep(10);
    }

    // This is obviously never reached, so far no problems with that...
    stopAudioIO();
    stopZenGarden();
    return 0;
}
static int paCallback(const void *inputBuffer, void *outputBuffer, unsigned long framesPerBuffer, const PaStreamCallbackTimeInfo* timeInfo, PaStreamCallbackFlags statusFlags, void *userData){
    // This is called by PortAudio when the output buffer is about to run dry.
}
Version 2, execution is forked and detached from the terminal that launched it:
void go_daemon(){
    // Run the program as a daemon.
    pid_t pid, sid;

    pid = fork(); // Fork off the parent process
    if (pid < 0) {
        exit(EXIT_FAILURE);
    }
    if (pid > 0) {
        exit(EXIT_SUCCESS); // If the child process started ok, exit the parent process
    }

    umask(0); // Change the file mode mask

    sid = setsid(); // Create a new session ID for the child process
    if (sid < 0) {
        // TODO: Log failure
        exit(EXIT_FAILURE);
    }

    if((chdir("/")) < 0){ // Change the working directory to "/"
        // TODO: Log failure
        exit(EXIT_FAILURE);
    }

    close(STDIN_FILENO);
    close(STDOUT_FILENO);
    close(STDERR_FILENO);
}
int main(int argc, char *argv[]) {
    go_daemon();

    // Subscribe to all system signals for debugging purposes.
    for(int i = 0; i < 64; i++){
        signal(i, sighandler);
    }

    // Sanity checks, error and help messages, etc.
    parseParameters(argc, argv);

    // Start signal processing and audio.
    initZenGarden();
    initAudioIO();

    // Keep the program alive.
    while(1){
        sleep(10);
    }

    // This is obviously never reached, so far no problems with that...
    stopAudioIO();
    stopZenGarden();
    return 0;
}
Trying to run it at startup
I've tried running both versions of the program at startup using a few methods. The outcome is always the same. When the board starts up, I can hear sound being output for a fraction of a second; the sound then stops and the login screen is presented (I have a serial terminal attached to the board and minicom running on my computer). The weirdest thing to me is that the xooky_nabox process is actually kept running after login, but there is no sound output...
Here's what I've tried:
Adding an @reboot entry to crontab and launching the program with a trailing ampersand (version 1 of the program):
@reboot xooky_nabox <params> &
Adding a start-stop-daemon entry to crontab (version 1):
@reboot start-stop-daemon -S -b --user daemon -n xooky_nabox -a /usr/bin/xooky_nabox -- <params>
Created a script at /etc/init.d/xooky and did
$ chmod +x xooky
$ update-rc.d xooky defaults
And tried different versions of the startup script: start-stop-daemon with version 1, calling the program directly with a trailing ampersand (version 1), calling the program directly with no trailing ampersand (version 2).
Also, if I run the program manually from the serial terminal or from an ssh session (USB networking) and then run top, the program will run fine for a few seconds, consuming around 15% CPU. It will then stop outputting sound, and its CPU consumption will rise to around 30%. My log.txt file shows no signal sent to the program by the OS in this scenario.
When version 2 of the program is run at startup, the log will show something like:
Mon Jun 6 02:44:49 2011 Caught signal:18 Continued
Mon Jun 6 02:44:49 2011 Caught signal:15 Terminated
Does anyone have any ideas on how to debug this? Suggestions on how to launch my program at startup?

In version 2,
I think you should open (and dup2) /dev/null to STDIN/STDOUT/STDERR. Just closing the descriptors can cause problems.
Something like this:
int fd = open("/dev/null", O_RDWR);
dup2(fd, STDIN_FILENO);
dup2(fd, STDOUT_FILENO);
dup2(fd, STDERR_FILENO);
(I have no idea what start-stop-daemon does. Can't help with version 1, sorry.)

There is a C function to create a daemon:
#include <unistd.h>
int daemon(int nochdir, int noclose);
More information can be found in the man page for daemon(3).
Maybe it will help.
And if you want to launch your daemon when Linux starts, you should find out which init version your distro uses, but usually you can just add a command executing your daemon to /etc/init.d/rc (though that seems not to be such a good idea). This file is executed by init when Linux starts.
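For illustration, here is a minimal sketch of how version 2's main() could use daemon(3) instead of the hand-rolled go_daemon(); the parseParameters/initZenGarden/initAudioIO names are taken from the question:
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

int main(int argc, char *argv[]) {
    // daemon(0, 0): fork, detach from the controlling terminal, chdir
    // to "/", and redirect stdin/stdout/stderr to /dev/null.
    if (daemon(0, 0) < 0) {
        perror("daemon");
        return EXIT_FAILURE;
    }
    // The rest is unchanged from the question's version 1:
    // parseParameters(argc, argv);
    // initZenGarden();
    // initAudioIO();
    while (1) {
        sleep(10);
    }
    return 0;
}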

I ended up ditching PortAudio and implementing a JACK client which runs its own server, so this issue was no longer relevant for me.

Related

Is it possible to kill a command launched using the system API in C? If not, are there any alternatives?

I am launching a command using the system API (I am OK with using this API from C/C++). The command I pass may hang at times, and hence I would like to kill it after a certain timeout.
Currently I am using it as:
system("COMMAND");
I want to use it something like this:
Run the command using a system-independent API (I don't want to use CreateProcess since it is Windows-only), and kill the process if it does not exit after 'X' minutes.
system() itself is standard C, but there is no standard way to time out or kill the command it runs, so there cannot be a fully platform-independent solution. However, system() is also a POSIX call, so if it is supported on any given platform, the rest of the POSIX API should be as well. So, one way to solve your problem is to use fork() and kill().
There is a complication in that system() invokes a shell, which will probably spawn other processes, and I presume you want to kill all of them, so one way to do that is to use a process group. The basic idea is use fork() to create another process, place it in its own process group, and kill that group if it doesn't exit after a certain time.
A simple example - the program forks; the child process sets its own process group to be the same as its process ID, and uses system() to spawn an endless loop. The parent process waits 10 seconds then kills the process group, using the negative value of the child process PID. This will kill the forked process and any children of that process (unless they have changed their process group.)
Since the parent process is in a different group, the kill() has no effect on it.
#include <unistd.h>
#include <stdlib.h>
#include <signal.h>
#include <stdio.h>

int main() {
    pid_t pid;
    pid = fork();
    if(pid == 0) { // child process
        setpgid(getpid(), getpid());
        system("while true ; do echo xx ; sleep 5; done");
    } else { // parent process
        sleep(10);
        printf("Sleep returned\n");
        kill(-pid, SIGKILL);
        printf("killed process group %d\n", pid);
    }
    exit(0);
}
There is no standard, cross-platform system API. The hint is that they are system APIs! We're actually "lucky" that we get system, but we don't get anything other than that.
You could try to find some third-party abstraction.
Below is a C++ thread-based attempt for Linux (not tested).
#include <iostream>
#include <string>
#include <thread>
#include <stdio.h>
using namespace std;
// execute system command and get output
// http://stackoverflow.com/questions/478898/how-to-execute-a-command-and-get-output-of-command-within-c
std::string exec(const char* cmd) {
    FILE* pipe = popen(cmd, "r");
    if (!pipe) return "ERROR";
    char buffer[128];
    std::string result = "";
    // Read until fgets() fails (EOF or error); looping on feof() alone
    // can repeat the last chunk.
    while (fgets(buffer, sizeof(buffer), pipe) != NULL) {
        result += buffer;
    }
    pclose(pipe);
    return result;
}
void system_task(string& cmd){
    exec(cmd.c_str());
}

int main(){
    // system command that takes time
    string command = "find /";
    // run the command in a separate thread
    std::thread t1(system_task, std::ref(command));
    // give some time to the system task
    std::this_thread::sleep_for(chrono::milliseconds(200));
    // get the process id of the system task; pgrep matches on the
    // process name, so query with the command's first word only
    string process_name = command.substr(0, command.find(' '));
    string query_command = "pgrep -u $LOGNAME " + process_name;
    string process_id = exec(query_command.c_str());
    // kill the system task
    cout << "killing process " << process_id << "..." << endl;
    string kill_command = "kill " + process_id;
    exec(kill_command.c_str());
    if (t1.joinable())
        t1.join();
    cout << "continue work on main thread" << endl;
    return 0;
}
I had a similar problem in a Qt/QML development: I wanted to start a bash command while continuing to process events on the Qt event loop, and kill the bash command if it took too long.
I came up with the following class that I'm sharing here (see below), in the hope it may be of some use to people with a similar problem.
Instead of calling a 'kill' command, I call a cleanupCommand supplied by the developer. Example: if I'm to call myScript.sh and want to check that it won't run for more than 10 seconds, I'll call it the following way:
SystemWithTimeout systemWithTimeout("myScript.sh", 10, "killall myScript.sh");
systemWithTimeout.start();
Code:
class SystemWithTimeout {
private:
    bool m_childFinished = false;
    QString m_childCommand;
    int m_seconds;
    QString m_cleanupCmd;
    int m_period;

    void startChild(void) {
        int rc = system(m_childCommand.toUtf8().data());
        if (rc != 0) SYSLOG(LOG_NOTICE, "Error SystemWithTimeout startChild: system returned %d", rc);
        m_childFinished = true;
    }

public:
    SystemWithTimeout(QString cmd, int seconds, QString cleanupCmd)
        : m_childFinished {false}, m_childCommand {cmd}, m_seconds {seconds}, m_cleanupCmd {cleanupCmd}
    { m_period = 200; }

    void setPeriod(int period) { m_period = period; }
    void start(void);
};
void SystemWithTimeout::start(void)
{
    m_childFinished = false; // re-arm the boolean for 2nd and later calls to 'start'
    qDebug() << "systemWithTimeout" << m_childCommand << m_seconds;
    QTime dieTime = QTime::currentTime().addSecs(m_seconds);
    std::thread child(&SystemWithTimeout::startChild, this);
    child.detach();
    while (!m_childFinished && QTime::currentTime() < dieTime)
    {
        QTime then = QTime::currentTime();
        QCoreApplication::processEvents(QEventLoop::AllEvents, m_period); // Process events during up to m_period ms (default: 200 ms)
        QTime now = QTime::currentTime();
        int waitTime = m_period - then.msecsTo(now);
        if (waitTime > 0) // guard against a negative value; msleep() takes an unsigned argument
            QThread::msleep(waitTime); // wait out the remainder of the period before looping again
    }
    if (!m_childFinished)
    {
        SYSLOG(LOG_NOTICE, "Killing command <%s> after timeout reached (%d seconds)", m_childCommand.toUtf8().data(), m_seconds);
        int rc = system(m_cleanupCmd.toUtf8().data());
        if (rc != 0) SYSLOG(LOG_NOTICE, "Error SystemWithTimeout: system returned %d", rc);
        m_childFinished = true;
    }
}
I do not know any portable way to do that in C or C++. As you ask for alternatives, I know it is possible in other languages. For example in Python, it is possible using the subprocess module.
import subprocess
cmd = subprocess.Popen("COMMAND", shell = True)
You can then test if COMMAND has ended with
if cmd.poll() is not None:
    # cmd has finished
and you can kill it with :
cmd.terminate()
Even if you prefer to use the C language, you should read the documentation for the subprocess module, because it explains that internally it uses CreateProcess on Windows and os.execvp on POSIX systems to start the command, and TerminateProcess on Windows and SIGTERM on POSIX to stop it.

Launching a service from c++ with execv

I'm trying to launch a Linux service from a C++ program, and I do it successfully, but one of my processes is marked as "defunct" and I don't want my parent process to die.
My code is (testRip.cpp):
int main()
{
    char* zebraArg[3];
    zebraArg[0] = (char *)"zebra";
    zebraArg[1] = (char *)"restart";
    zebraArg[2] = NULL; // execv() requires a NULL-terminated argument list

    char* ripdArg[3];
    ripdArg[0] = (char *)"ripd";
    ripdArg[1] = (char *)"restart";
    ripdArg[2] = NULL;

    pid_t ripPid;
    pid_t zebraPid;

    zebraPid = fork();
    if(zebraPid == 0)
    {
        int32_t iExecvRes = 0;
        iExecvRes = execv("/etc/init.d/zebra", zebraArg);
        // execv() only returns on failure
        if(iExecvRes == -1)
        {
            ::syslog((LOG_LOCAL0 | LOG_ERR),
                "zebra process failed \n");
        }
        return 0;
    }
    else
    {
        while(1)
        {
            ::syslog((LOG_LOCAL0 | LOG_ERR),
                "running\n");
            sleep(2);
        }
    }
}
The output of the ps -e command is:
9411 pts/1 00:00:00 testRip
9412 pts/1 00:00:00 testRip <defunct>
9433 ? 00:00:00 zebra
/etc/init.d/zebra launches the service as a daemon or something like that, so I think that is the trick, but:
Why are there 3 processes, and why is one of them marked as defunct?
What is wrong in my code?
How can I fix it?
Thanks in advance.
To remove zombies, the parent process must wait() for its child or terminate itself. If you need a non-blocking wait, look at waitpid() with the WNOHANG flag.
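A minimal sketch of the non-blocking variant, reusing the zebraPid and syslog loop from the question:
#include <sys/types.h>
#include <sys/wait.h>
#include <syslog.h>
#include <unistd.h>

// In the parent branch, instead of the bare while(1) loop:
while (1) {
    ::syslog((LOG_LOCAL0 | LOG_ERR), "running\n");
    int status;
    pid_t reaped = waitpid(zebraPid, &status, WNOHANG); // returns immediately
    if (reaped == zebraPid) {
        // child exited and has been reaped; the <defunct> entry disappears
    }
    sleep(2);
}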
Correctly forking a daemon process is hard in Unix and Linux because there are a lot of details to get right, and order is also important. I would suspect a combination of open file descriptors and not detaching the controlling terminal, in this case.
I would strongly suggest using a well-debugged implementation from another program - one of the reduced-functionality command line shells such as rsh or ksh may be a good choice, rather than trying to bake your own version.

Linux Pipe replace stdio - issues with MPI

This question is the next step after resolving the issue discussed in:
Piping for input/output
I use pipes to pass a string via stdin to an external program called GULP, and receive the stdout of GULP as input for my program. This works fine on one processor, but on two or more processors there's a problem (let's say it's just 2 cores). The program GULP uses a temporary file and it seems that the two processors launch GULP simultaneously and then GULP tries to perform multiple operations on the same file at the same time (maybe simultaneous writes). GULP reports "error opening file".
I am testing this code on a laptop with multiple cores running Ubuntu, but the code is intended for a distributed-memory HPC (I'm using OpenMPI). Assume for the sake of this discussion that I cannot modify GULP.
I'm hoping that there's some straightforward way to get GULP to create two independent temporary files and continue functioning as normal. Am I asking for too much?
Hopefully this pseudo code will help (assume 2 processors):
int main()
{
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(…);
    MPI_Comm_size(…);
    int loopmin, loopmax; // distributes the loop among the processors
    for (int i = loopmin; i < loopmax; i++)
    {
        Launch_GULP(…); // launches the external program
    }
    return 0;
}
Launch_GULP(…)
{
    int fd_p2c[2], fd_c2p[2];
    pipe(fd_p2c);
    pipe(fd_c2p);
    childpid = fork();
    // the rest follows as in the accepted answer in the above link,
    // so I'll highlight the interesting stuff
    if (childpid < 0)
    {
        perror("bad");
        exit(-1);
    }
    else if (childpid == 0)
    {
        // call dup2, etc
        execl( …call the program… );
    }
    else
    {
        // the interesting stuff
        close(fd_p2c[0]);
        close(fd_c2p[1]);
        write(fd_p2c[1], …);
        close(fd_p2c[1]);
        while(1)
        {
            bytes_read = read(fd_c2p[0], …); // read GULP output
            if (bytes_read <= 0)
                break;
            // pass info to read buffer & append null terminator
        }
        close(fd_c2p[0]);
        if(kill(childpid, SIGTERM) != 0)
        {
            perror("Failed to kill child… tragic");
            exit(1);
        }
        waitpid(childpid, NULL, 0);
    }
    // end piping… GULP has reported an error via stdout;
    // that error is stored in the buffer string;
    // consequently an error is triggered in my code and the program exits
}
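(One common workaround for this kind of temp-file clash, offered here only as a sketch under the assumption that GULP writes its scratch file to the current working directory: give each MPI rank its own scratch directory and chdir into it in the child branch before the exec, so each GULP instance creates its temporary file in a separate place. The rank variable would come from MPI_Comm_rank in main; mkdir needs <sys/stat.h> and snprintf needs <stdio.h>.)
else if (childpid == 0)
{
    // Hypothetical per-rank working directory so each GULP instance
    // gets its own temporary files:
    char workdir[64];
    snprintf(workdir, sizeof(workdir), "/tmp/gulp_rank_%d", rank);
    mkdir(workdir, 0700);
    chdir(workdir);
    // call dup2, etc., then exec GULP as before
    execl( …call the program… );
}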

Subprocess icon bounces in dock

My application starts a subprocess program to read video using the QuickTime framework, via fork() and pipes. The subprocess goes into a wait loop when it is not busy, i.e. it usleeps until there is input. The subprocess is not a GUI application and it is written in C++.
When opening AVI video coded using the MSVC codec, a second copy of the application icon shows in the Dock and bounces. After about 30 seconds, Activity Monitor shows the subprocess as "not responding", even though its CPU usage appears to be ~0%. The subprocess is still running and responding; it's just that Activity Monitor says otherwise.
If I look at the state of the subprocess, via gdb attach or by checking its output, everything looks fine. I can tell the subprocess to close the file and open another one and continue using it, at which point the bouncing Dock icon disappears and the process is no longer marked as not responding.
It's as if OS X thinks my subprocess has crashed (?), but I cannot detect an exception.
How can I stop the subprocess from showing an icon in the Dock, bouncing, and being marked as not responding?
This is how I set up communication with the subprocess:
#include <unistd.h>
#define READ 0
#define WRITE 1
// Start process
pid_t popen2(const char *command, char * const argv[], int *infp, int *outfp)
{
    int p_stdin[2], p_stdout[2];
    pid_t pid;

    // Set up pipes
    if(pipe(p_stdin) != 0 || pipe(p_stdout) != 0)
        return(-1);

    pid = fork();
    if(pid < 0)
        return(pid);
    else if(pid == 0)
    {
        // Set up communication via stdin/out
        close(p_stdin[WRITE]);
        dup2(p_stdin[READ], READ);
        close(p_stdout[READ]);
        dup2(p_stdout[WRITE], WRITE);
        execvp(command, argv); // run subprocess
        perror("execvp");
        exit(1);
    }

    // Provide pointers to the file descriptors to the caller
    if(infp == NULL)
        close(p_stdin[WRITE]);
    else
        *infp = p_stdin[WRITE];
    if(outfp == NULL)
        close(p_stdout[READ]);
    else
        *outfp = p_stdout[READ];

    return(pid);
}
See this SO question for more discussion of popen2().
Note: this code may or may not be the cause of my problem. As a first step, I would really like to prove what is the cause.
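For reference, a minimal sketch of how a caller might drive this popen2() (using cat as a stand-in child, purely for illustration):
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    // 'cat' just echoes its stdin; a convenient stand-in for the real child.
    char * const args[] = { (char *)"cat", NULL };
    int in_fd, out_fd;
    pid_t pid = popen2("cat", args, &in_fd, &out_fd);
    if (pid < 0)
        return 1;

    write(in_fd, "hello\n", 6); // send a request to the child's stdin
    close(in_fd);               // signal EOF so the child can finish

    char buf[64];
    ssize_t n = read(out_fd, buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("child said: %s", buf);
    }
    close(out_fd);
    waitpid(pid, NULL, 0); // reap the child to avoid a zombie
    return 0;
}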
The “not responding” part is simple: your subprocess is not running a run loop of any type, so from the point of view of the system, it's not handling events.
I'm a bit hazier on why your new process is getting a Dock icon, but it basically boils down to fork() creating a process that inherits the attributes of the parent process (in this case, of its being a foreground application). OS X has a number of mechanisms to launch subprocesses in more sensible ways than fork(). If your app is in Cocoa, use NSTask; otherwise, take a look at posix_spawn(2).
This should be a drop in replacement for your routine:
#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <spawn.h>
#include <signal.h>
#include <crt_externs.h>

#define READ 0
#define WRITE 1
#define environ (*_NSGetEnviron())

pid_t popen2(const char *command, char * const argv[], int *infp, int *outfp)
{
    int p_stdin[2], p_stdout[2];
    pid_t pid;

    if(pipe(p_stdin) != 0 || pipe(p_stdout) != 0)
        return(-1);

    // Map the pipe ends onto the child's stdin/stdout; keep stderr as-is
    posix_spawn_file_actions_t file_actions;
    posix_spawn_file_actions_init(&file_actions);
    posix_spawn_file_actions_adddup2(&file_actions, p_stdin[READ], 0);
    posix_spawn_file_actions_adddup2(&file_actions, p_stdout[WRITE], 1);
    posix_spawn_file_actions_adddup2(&file_actions, 2, 2);

    // Give the child a clean signal state
    posix_spawnattr_t spawnAttributes;
    posix_spawnattr_init(&spawnAttributes);
    sigset_t no_signals;
    sigset_t all_signals;
    sigemptyset(&no_signals);
    sigfillset(&all_signals);
    posix_spawnattr_setsigmask(&spawnAttributes, &no_signals);
    posix_spawnattr_setsigdefault(&spawnAttributes, &all_signals);
    short flags = POSIX_SPAWN_CLOEXEC_DEFAULT | POSIX_SPAWN_SETSIGMASK | POSIX_SPAWN_SETSIGDEF;
    posix_spawnattr_setflags(&spawnAttributes, flags);

    if (posix_spawn(&pid, command, &file_actions, &spawnAttributes, argv, environ)) {
        perror("posix_spawn");
        exit(1);
    }

    close(p_stdin[READ]);
    if(infp == NULL)
        close(p_stdin[WRITE]);
    else
        *infp = p_stdin[WRITE];

    close(p_stdout[WRITE]);
    if(outfp == NULL)
        close(p_stdout[READ]);
    else
        *outfp = p_stdout[READ];

    return(pid);
}
If you're using a compiler that supports C++11, I'd recommend checking out std::packaged_task.
Alternatively, you could also use condition_variables, but I'd try to get it working with packaged tasks first. Condition variables are more primitive than the higher-level packaged tasks. Either way, it's a lot easier (and standards-compliant) to use these mechanisms than traditional IPC techniques.
That is, of course, if you don't have to have a separate process.
For further information, I'd highly recommend checking out The C++ Programming Language, 4th Edition. It has a simple example of a producer/consumer mechanism with a simple vector that may suit you well. If you don't spring for the book, I'm sure you can find similar examples online for using condition_variables, futures or promises.
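For what it's worth, a minimal in-process sketch of the packaged_task idea; decodeFrame is a hypothetical stand-in for whatever work the subprocess was doing:
#include <future>
#include <iostream>
#include <thread>

// Hypothetical stand-in for the work the subprocess performs.
int decodeFrame() { return 42; }

int main() {
    std::packaged_task<int()> task(decodeFrame); // wrap the callable
    std::future<int> result = task.get_future(); // handle to the eventual result
    std::thread worker(std::move(task));         // run it on another thread
    std::cout << result.get() << std::endl;      // blocks until decodeFrame() returns
    worker.join();
    return 0;
}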
HTH
I ended up rewriting the subprocess as a Mac OS X XPC service. In the XPC service's property list I added LSBackgroundOnly to get Launch Services to keep it in the background, out of the Dock and system event handling.

What's so special about file descriptor 3 on linux?

I'm working on a server application that's going to work on Linux and Mac OS X. It goes like this:
start main application
fork off the controller process
call lock_down() in the controller process
terminate main application
the controller process then forks again, creating a worker process
eventually the controller keeps forking more worker processes
I can log using several methods (e.g. syslog or a file), but right now I'm pondering syslog. The "funny" thing is that no syslog output is ever seen in the controller process unless I include the #ifdef section below.
The worker processes log flawlessly on Mac OS X and Linux, with or without the #ifdef'ed section below. The controller also logs flawlessly on Mac OS X without the #ifdef'ed section, but on Linux the #ifdef is needed if I want to see any output in syslog (or in the log file, for that matter) from the controller process.
So, why is that?
static int
lock_down(void)
{
    struct rlimit rl;
    unsigned int n;
    int fd0;
    int fd1;
    int fd2;

    // Reset the file mode mask
    umask(0);

    // change the working directory
    if ((chdir("/")) < 0)
        return EXIT_FAILURE;

    // close any and all open file descriptors
    if (getrlimit(RLIMIT_NOFILE, &rl))
        return EXIT_FAILURE;
    if (RLIM_INFINITY == rl.rlim_max)
        rl.rlim_max = 1024;
    for (n = 0; n < rl.rlim_max; n++) {
#ifdef __linux__
        if (3 == n) // deep magic...
            continue;
#endif
        if (close(n) && (EBADF != errno))
            return EXIT_FAILURE;
    }

    // attach file descriptors 0, 1 and 2 to /dev/null
    fd0 = open("/dev/null", O_RDWR);
    fd1 = dup2(fd0, 1);
    fd2 = dup2(fd0, 2);
    if (0 != fd0)
        return EXIT_FAILURE;

    return EXIT_SUCCESS;
}
camh was close, but using closelog() was the idea that did the trick, so the honor goes to jilles. Something else, aside from closing a file descriptor out from under syslog's feet, must be going on though. To make the code work I added a call to closelog() just before the loop:
closelog();
for (n = 0; n < rl.rlim_max; n++) {
    if (close(n) && (EBADF != errno))
        return EXIT_FAILURE;
}
I was relying on a verbatim understanding of the manual page, saying:
The use of openlog() is optional; it will automatically be called by syslog() if necessary...
I interpreted this as saying that syslog would detect if the file descriptor was closed under it. Apparently it does not: an explicit closelog() on Linux was needed to tell syslog that the descriptor was closed.
One more thing that still perplexes me is that not using closelog() prevented the first forked process (the controller) from even opening and using a log file, while the subsequently forked processes could use syslog or a log file with no problems. Maybe there is some caching effect in the filesystem that gives the first forked process an unreliable "idea" of which file descriptors are available, while the next set of forked processes is sufficiently delayed to not be affected by it?
The special aspect of file descriptor 3 is that it will usually be the first file descriptor returned from a system call that allocates a new file descriptor, given that 0, 1 and 2 are usually set up for stdin, stdout and stderr.
This means that if any library function you have called allocates a file descriptor for its own internal purposes in order to perform its functions, it will get fd 3.
The openlog(3) library call will need to open /dev/log to communicate with the syslog daemon. If you subsequently close all file descriptors, you may break the syslog library functions if they are not written in a way to handle that.
The way to debug this on Linux is to use strace to trace the actual system calls that are being made; the use of a file descriptor for syslog then becomes obvious:
$ cat syslog_test.c
#include <stdio.h>
#include <syslog.h>

int main(void)
{
    openlog("test", LOG_PID, LOG_LOCAL0);
    syslog(LOG_ERR, "waaaaaah");
    closelog();
    return 0;
}
$ gcc -W -Wall -o syslog_test syslog_test.c
$ strace ./syslog_test
...
socket(PF_FILE, SOCK_DGRAM, 0) = 3
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
connect(3, {sa_family=AF_FILE, path="/dev/log"}, 16) = 0
send(3, "<131>Aug 21 00:47:52 test[24264]"..., 42, MSG_NOSIGNAL) = 42
close(3) = 0
exit_group(0) = ?
Process 24264 detached
syslog(3) may keep a file descriptor to syslogd's socket open; closing this under its feet is likely to cause problems. A closelog(3) call may help.
Syslog binds to a given descriptor at startup, most of the time descriptor 3. If you close it, no logs.
syslog-ng -d -v
Gives you more info about what it's doing behind the scenes.
The output should look like something like this:
binding fd 3, inetaddr: 0.0.0.0, port: 514
io.c: Preparing fd 3 for reading
io.c: Preparing fd 4 for reading
binding fd 5, unixaddr: /dev/log
io.c: listening on fd 5