select() from another thread does not handle incoming data - C++

I have a multithreaded application. In one thread I want to wait for data, and if the data does not appear within a certain time, I just close the whole application. The select() function runs in another thread, and that thread is detached.
A piece of source from the class:
#define STDIN 0

A::A(){
    runFunction1();
    runFunction2();
    thread listen(&A::isData, this);
    listen.detach();
}

// and the function with select
void A::isData(){
    struct timeval tv;
    fd_set readfds;
    tv.tv_sec = 50;        // wait up to 50 seconds
    tv.tv_usec = 500000;
    FD_ZERO(&readfds);
    FD_SET(STDIN, &readfds);
    for(;;) {
        select(STDIN+1, &readfds, NULL, NULL, &tv);
        if (FD_ISSET(STDIN, &readfds)){
            cout << "I something catch :)" << endl;
            // reset time
        }else{
            cout << "I nothing catch ! By By :(" << endl;
            exit(1);
        }
    }
}
When my program is running I get its PID, so I tried to write some data to its file descriptor in the following way:
$ cd /proc/somePID/fd
$ echo 1 >> 0
Then I should get the message I something catch :), but in the IDE console I only get 1. However, if the time runs out, I do get I nothing catch ! By By :( in the IDE console.
EDIT: SOLVED
@Vladimir Kunschikov gave me the correct tip. Based on it, here is how I did it in C/C++.
To make it possible to send something to the process using the echo command, we must create a pipe, using the pipe() function:
A::A(){
    int fd[2];   // two file descriptors for the pipe: read and write ends
    pipe(fd);    // C function to create the pipe
    runFunction1();
    runFunction2();
    thread listen(&A::isData, this);
    listen.detach();
}
Then our process will have two new file descriptors, as shown below:
lrwx------ 1 user user 64 gru 15 14:02 0 -> /dev/pts/0
lrwx------ 1 user user 64 gru 15 14:02 1 -> /dev/pts/0
lrwx------ 1 user user 64 gru 15 14:02 2 -> /dev/pts/1
lr-x------ 1 user user 64 gru 15 14:02 3 -> pipe:[1335197]
lrwx------ 1 user user 64 gru 15 14:02 4 -> socket:[1340788]
l-wx------ 1 user user 64 gru 15 14:02 5 -> pipe:[1335197]
pipe:[1335197] on descriptor 3 (the read-only entry) is the read end in this process. Now the echo command will work if we write to descriptor 3. Usage is simple:
$ cd /proc/PID/fd
$ echo 1 > 3
Then we can also use this descriptor in the select() function, but since the descriptor number is 3, the define should look like this:
#define STDIN 3
And it works.

You have the wrong assumption that writing to /proc/$(pidof application)/fd/0 will put data into the stdin stream of the application.
Just read the answers to this question: Sending command to java -jar using stdin via /proc/{pid}/fd/0

Related

Understanding dup2 and closing file descriptors

I'm posting my code simply for context. I'm not explicitly looking for you to fix it; I'm more looking to understand the dup2 system call, which I'm just not picking up from the man page and the numerous other Stack Overflow questions.
pid = fork();
if (pid == 0) {
    if (strcmp("STDOUT", outfile)) {
        if (command->getOutputFD() == REDIRECT) {
            if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_TRUNC)) == -1)
                return false;
            command->setOutputFD(outfd);
            if (dup2(command->getOutputFD(), STDOUT_FILENO) == -1)
                return false;
            pipeIndex++;
        }
        else if (command->getOutputFD() == REDIRECTAPPEND) {
            if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_APPEND)) == -1)
                return false;
            command->setOutputFD(outfd);
            if (dup2(command->getOutputFD(), STDOUT_FILENO) == -1)
                return false;
            pipeIndex++;
        }
        else {
            if (dup2(pipefd[++pipeIndex], STDOUT_FILENO) == -1)
                return false;
            command->setOutputFD(pipefd[pipeIndex]);
        }
    }
    if (strcmp("STDIN", infile)) {
        if (dup2(pipefd[pipeIndex - 1], STDIN_FILENO) == -1)
            return false;
        command->setOutputFD(pipefd[pipeIndex - 1]);
        pipeIndex++;
    }
    if (execvp(arguments[0], arguments) == -1) {
        std::cerr << "Error!" << std::endl;
        _Exit(0);
    }
}
else if (pid == -1) {
    return false;
}
For context, that code represents the execution step of a basic Linux shell. The command object contains the command's arguments, IO "names", and IO descriptors (I think I might get rid of the file descriptors as fields).
What I'm having the most difficulty understanding is when, and which, file descriptors to close. I guess I'll just ask some questions to try and improve my understanding of the concept.
1) With my array of file descriptors used for handling pipes, the parent has a copy of all those descriptors. When are the descriptors held by the parent closed? And even more so, which descriptors? Is it all of them? All of the ones left unused by the executing commands?
2) When handling pipes within the children, which descriptors are left open by which processes? Say if I execute the command: ls -l | grep
"[username]", Which descriptors should be left open for the ls process? Just the write end of the pipe? And if so when? The same question applies to the grep command.
3) When I handle redirection of IO to a file, a new file must be opened and duped to STDOUT (I do not support input redirection). When does this descriptor get closed? I've seen in examples that it gets closed immediately after the call to dup2, but then how does anything get written to the file if the file has been closed?
Thanks ahead of time. I've been stuck on this problem for days and I'd really like to be done with this project.
EDIT I've updated this with modified code and sample output for anyone interested in offering specific help with my issue. First is the entire loop that handles execution, updated with my calls to close() on various file descriptors.
while (currCommand != NULL) {
    command = currCommand->getData();
    infile = command->getInFileName();
    outfile = command->getOutFileName();
    arguments = command->getArgList();
    pid = fork();
    if (pid == 0) {
        if (strcmp("STDOUT", outfile)) {
            if (command->getOutputFD() == REDIRECT) {
                if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_TRUNC)) == -1)
                    return false;
                if (dup2(outfd, STDOUT_FILENO) == -1)
                    return false;
                close(STDOUT_FILENO);
            }
            else if (command->getOutputFD() == REDIRECTAPPEND) {
                if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_APPEND)) == -1)
                    return false;
                if (dup2(outfd, STDOUT_FILENO) == -1)
                    return false;
                close(STDOUT_FILENO);
            }
            else {
                if (dup2(pipefd[pipeIndex + 1], STDOUT_FILENO) == -1)
                    return false;
                close(pipefd[pipeIndex]);
            }
        }
        pipeIndex++;
        if (strcmp("STDIN", infile)) {
            if (dup2(pipefd[pipeIndex - 1], STDIN_FILENO) == -1)
                return false;
            close(pipefd[pipeIndex]);
            pipeIndex++;
        }
        if (execvp(arguments[0], arguments) == -1) {
            std::cerr << "Error!" << std::endl;
            _Exit(0);
        }
    }
    else if (pid == -1) {
        return false;
    }
    currCommand = currCommand->getNext();
}
for (int i = 0; i < numPipes * 2; i++)
    close(pipefd[i]);
for (int i = 0; i < commands->size(); i++) {
    if (wait(status) == -1)
        return false;
}
When executing this code I receive the following output:
ᕕ( ᐛ )ᕗ ls -l
total 68
-rwxrwxrwx 1 cook cook 242 May 31 18:31 CMakeLists.txt
-rwxrwxrwx 1 cook cook 617 Jun 1 22:40 Command.cpp
-rwxrwxrwx 1 cook cook 9430 Jun 8 18:02 ExecuteExternalCommand.cpp
-rwxrwxrwx 1 cook cook 682 May 31 18:35 ExecuteInternalCommand.cpp
drwxrwxrwx 2 cook cook 4096 Jun 8 17:16 headers
drwxrwxrwx 2 cook cook 4096 May 31 18:32 implementation files
-rwxr-xr-x 1 cook cook 25772 Jun 8 18:12 LeShell
-rwxrwxrwx 1 cook cook 243 Jun 5 13:02 Makefile
-rwxrwxrwx 1 cook cook 831 Jun 3 12:10 Shell.cpp
ᕕ( ᐛ )ᕗ ls -l > output.txt
ls: write error: Bad file descriptor
ᕕ( ᐛ )ᕗ ls -l | grep "cook"
ᕕ( ᐛ )ᕗ
The output of ls -l > output.txt implies that I'm closing the wrong descriptor, but closing the other related descriptor, while producing no error, writes no output to the file. And ls -l | grep "cook" should generate output to the console, as ls -l above demonstrates.
With my array of file descriptors used for handling pipes, the parent
has a copy of all those descriptors. When are the descriptors held by
the parent closed? And even more so, which descriptors? Is it all of
them? All of the ones left unused by the executing commands?
A file descriptor may be closed in one of 3 ways:
You explicitly call close() on it.
The process terminates, and the operating system automatically closes every file descriptor that was still open.
When the process calls one of the seven exec() functions and the file descriptor has the O_CLOEXEC flag.
As you can see, most of the time file descriptors remain open until you manually close them. This is what happens in your code too: since you didn't specify O_CLOEXEC, file descriptors are not closed when the child process calls execvp(); they are closed only when the child terminates. The same goes for the parent. If you want a descriptor closed any earlier than termination, you have to call close() manually.
When handling pipes within the children, which descriptors are left
open by which processes? Say if I execute the command: ls -l | grep
"[username]", Which descriptors should be left open for the ls
process? Just the write end of the pipe? And if so when? The same
question applies to the grep command.
Here's a (rough) idea of what the shell does when you type ls -l | grep "username":
The shell calls pipe() to create a new pipe. The pipe file descriptors are inherited by the children in the next step.
The shell forks twice, let's call these processes c1 and c2. Let's assume c1 will run ls and c2 will run grep.
In c1, the pipe's read channel is closed with close(), and then it calls dup2() with the pipe write channel and STDOUT_FILENO, so as to make writing to stdout equivalent to writing to the pipe. Then, one of the seven exec() functions is called to start executing ls. ls writes to stdout, but since we duplicated stdout to the pipe's write channel, ls will be writing to the pipe.
In c2, the reverse happens: the pipe's write channel is closed, and then dup2() is called to make stdin point to the pipe's read channel. Then, one of the seven exec() functions is called to start executing grep. grep reads from stdin, but since we dup2()'d standard input to the pipe's read channel, grep will be reading from the pipe.
When I handle redirection of IO to a file, a new file must be opened
and duped to STDOUT (I do not support input redirection). When does
this descriptor get closed? I've seen in examples that it gets closed
immediately after the call to dup2, but then how does anything get
written to the file if the file has been closed?
So, when you call dup2(a, b), either one of these is true:
a == b. In this case nothing happens: dup2() simply returns b, and no file descriptor is closed.
a != b. In this case, b is closed if necessary, and then b is made to refer to the same file table entry as a. The file table entry is a structure that contains the current file offset and file status flags; multiple file descriptors can point to the same file table entry, and that's exactly what happens when you duplicate a file descriptor. So, dup2(a, b) has the effect of making a and b share the same file table entry. As a consequence, writing to a or b will end up writing to the same file. So the file that is closed is b, not a. If you dup2(a, STDOUT_FILENO), you close stdout and you make stdout's file descriptor point to the same file table entry as a. Any program that writes to stdout will then be writing to the file instead, since stdout's file descriptor is pointing to the file you dupped.
UPDATE:
So, for your specific problem, here's what I have to say after briefly looking through the code:
You shouldn't be calling close(STDOUT_FILENO) in here:
if (command->getOutputFD() == REDIRECT) {
    if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_TRUNC)) == -1)
        return false;
    if (dup2(outfd, STDOUT_FILENO) == -1)
        return false;
    close(STDOUT_FILENO);
}
If you close stdout, you will get an error in the future when you try to write to stdout. This is why you get ls: write error: Bad file descriptor. After all, ls is writing to stdout, but you closed it. Oops!
You're doing it backwards: you want to close outfd instead. You opened outfd so that you could redirect STDOUT_FILENO to outfd, once the redirection is done, you don't really need outfd anymore and you can close it. But you most definitely don't want to close stdout because the idea is to have stdout write to the file that was referenced by outfd.
So, go ahead and do that:
if (command->getOutputFD() == REDIRECT) {
    if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_TRUNC)) == -1)
        return false;
    if (dup2(outfd, STDOUT_FILENO) == -1)
        return false;
    if (outfd != STDOUT_FILENO)
        close(outfd);
}
Note the final if is necessary: If outfd by any chance happens to be equal to STDOUT_FILENO, you don't want to close it for the reasons I just mentioned.
The same applies to the code inside else if (command->getOutputFD() == REDIRECTAPPEND): you want to close outfd rather than STDOUT_FILENO:
else if (command->getOutputFD() == REDIRECTAPPEND) {
    if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_APPEND)) == -1)
        return false;
    if (dup2(outfd, STDOUT_FILENO) == -1)
        return false;
    if (outfd != STDOUT_FILENO)
        close(outfd);   // close outfd, not STDOUT_FILENO
}
This should at least get you ls -l to work as expected.
As for the problem with the pipes: your pipe management is not really correct. It's not clear from the code you showed where and how pipefd is allocated, and how many pipes you create, but notice that:
A process will never be able to read from a pipe and write to another pipe. For example, if outfile is not STDOUT and infile is not STDIN, you end up closing both the read and the write channels (and worse yet, after closing the read channel, you attempt to duplicate it). There is no way this will ever work.
The parent process is closing every pipe before waiting for the termination of the children. This provokes a race condition.
I suggest redesigning the way you manage pipes. You can see an example of a working bare-bones shell working with pipes in this answer: https://stackoverflow.com/a/30415995/2793118

Cleaning up children processes asynchronously

This is an example from Advanced Linux Programming, chapter 3.4.4. The program fork()s and exec()s a child process. Instead of waiting for the child to terminate, I want the parent process to clean up the child process asynchronously (otherwise the child becomes a zombie process). This can be done using the SIGCHLD signal: by setting up a signal handler, we can have the clean-up work done when the child process ends. The code is the following:
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <unistd.h>
#include <sys/wait.h>
#include <signal.h>
#include <string.h>

int spawn(char *program, char **arg_list){
    pid_t child_pid;
    child_pid = fork();
    if(child_pid == 0){ // it is the child process
        execvp(program, arg_list);
        fprintf(stderr, "An error occurred in execvp\n");
        return 0;
    }
    else{
        return child_pid;
    }
}

int child_exit_status;

void clean_up_child_process (int signal_number){
    int status;
    wait(&status);
    child_exit_status = status; // store the exit status in a global variable
    printf("Cleaning child process is taken care of by SIGCHLD.\n");
}

int main()
{
    /* Handle SIGCHLD by calling clean_up_child_process; */
    struct sigaction sigchld_action;
    memset(&sigchld_action, 0, sizeof(sigchld_action));
    sigchld_action.sa_handler = &clean_up_child_process;
    sigaction(SIGCHLD, &sigchld_action, NULL);

    int child_status;
    char *arg_list[] = { // deprecated conversion from string constant to char*
        "ls",
        "-la",
        ".",
        NULL
    };
    spawn("ls", arg_list);
    return 0;
}
However, when I run the program in the terminal, the parent process never seems to end, and it apparently never executes clean_up_child_process (it doesn't print "Cleaning child process is taken care of by SIGCHLD."). What's the problem with this snippet of code?
The parent process returns from main() immediately after fork() returns the child's pid; it never has the opportunity to wait for the child to terminate.
for GNU/Linux users
I already read this book. Although the book talked about this mechanism as a:
quote from 3.4.4 page 59 of the book:
A more elegant solution is to notify the parent process when a child terminates.
but it just said that you can use sigaction to handle this situation.
Here is a complete example of how to handle processes in this way.
First, why would we ever use this mechanism? Because we do not want to synchronize all the processes together.
real example
Imagine that you have 10 .mp4 files and you want to convert them to .mp3 files. Well, a junior user does this:
ffmpeg -i 01.mp4 01.mp3
and repeats this command 10 times. A slightly more advanced user does this:
ls *.mp4 | xargs -I xxx ffmpeg -i xxx xxx.mp3
This time the command feeds all 10 mp4 file names, one per line, to xargs, and they are converted to mp3 one by one.
But a senior user does this:
ls *.mp4 | xargs -I xxx -P 0 ffmpeg -i xxx xxx.mp3
which means: if I have 10 files, create 10 processes and run them simultaneously. And there is a BIG difference. With the two previous commands we had only 1 process at a time: it was created, it terminated, and then we continued to the next one. With the -P 0 option we create 10 processes at the same time, and in fact 10 ffmpeg commands are running.
Now the purpose of cleaning up children asynchronously becomes clearer: we want to run some new processes, but the order of those processes, and maybe their exit statuses, do not matter to us. This way we can run them as fast as possible and reduce the total time.
First, see man sigaction for any further details you want.
Second, you can look up the signal's number with:
T ❱ kill -l | grep SIGCHLD
16) SIGSTKFLT 17) SIGCHLD 18) SIGCONT 19) SIGSTOP 20) SIGTSTP
sample code
objective: using the SIGCHLD to clean up child process
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <string.h>
#include <wait.h>
#include <unistd.h>

sig_atomic_t signal_counter;

void signal_handler( int signal_number )
{
    ++signal_counter;
    int wait_status;
    pid_t return_pid = wait( &wait_status );
    if( return_pid == -1 )
    {
        perror( "wait()" );
    }
    if( WIFEXITED( wait_status ) )
    {
        printf( "job [ %d ] | pid: %d | exit status: %d\n", signal_counter, return_pid, WEXITSTATUS( wait_status ) );
    }
    else
    {
        printf( "exited abnormally\n" );
    }
    fprintf( stderr, "the signal %d was received\n", signal_number );
}

int main()
{
    // now, instead of the signal function, we use sigaction
    struct sigaction siac;
    // zero it
    memset( &siac, 0, sizeof( struct sigaction ) );
    siac.sa_handler = signal_handler;
    sigaction( SIGCHLD, &siac, NULL );

    pid_t child_pid;
    char* sleep_argument[ 5 ] = { "3", "4", "5", "7", "9" };
    int counter = 0;

    while( counter <= 5 )
    {
        if( counter == 5 )
        {
            while( counter-- )
            {
                pause();
            }
            break;
        }
        child_pid = fork();
        // on failure fork() returns -1
        if( child_pid == -1 )
        {
            perror( "fork()" );
            exit( 1 );
        }
        // in the child process fork() returns 0
        if( child_pid == 0 ){
            execlp( "sleep", "sleep", sleep_argument[ counter ], NULL );
        }
        ++counter;
    }

    fprintf( stderr, "signal counter %d\n", signal_counter );
    return 0;
}
This is what the sample code does:
create 5 child processes
then go to the inner while loop and pause(), waiting for a signal (see man pause)
then, whenever a child terminates, the parent process wakes up and calls the signal_handler function
this continues up to the last child: sleep 9
output: (17 means SIGCHLD)
ALP ❱ ./a.out
job [ 1 ] | pid: 14864 | exit status: 0
the signal 17 was received
job [ 2 ] | pid: 14865 | exit status: 0
the signal 17 was received
job [ 3 ] | pid: 14866 | exit status: 0
the signal 17 was received
job [ 4 ] | pid: 14867 | exit status: 0
the signal 17 was received
job [ 5 ] | pid: 14868 | exit status: 0
the signal 17 was received
signal counter 5
While this sample code runs, try this in another terminal:
ALP ❱ ps -o time,pid,ppid,cmd --forest -g $(pgrep -x bash)
TIME PID PPID CMD
00:00:00 5204 2738 /bin/bash
00:00:00 2742 2738 /bin/bash
00:00:00 4696 2742 \_ redshift
00:00:00 14863 2742 \_ ./a.out
00:00:00 14864 14863 \_ sleep 3
00:00:00 14865 14863 \_ sleep 4
00:00:00 14866 14863 \_ sleep 5
00:00:00 14867 14863 \_ sleep 7
00:00:00 14868 14863 \_ sleep 9
As you can see, the a.out process has 5 children, and they are running simultaneously. Whenever one of them terminates, the kernel sends the SIGCHLD signal to its parent, which is a.out.
NOTE
If we do not use pause() or some other mechanism that lets the parent wait for its children, we will abandon the created processes, and upstart (on Ubuntu; init elsewhere) becomes their parent. You can try it if you remove pause().
I'm using a Mac, so my answer may not be quite relevant, but still. I compile without any options, so the executable name is a.out.
I have the same experience with the console (the process doesn't seem to terminate), but I noticed it's just a terminal glitch: you can simply press Enter and your command line comes back, and ps executed from another terminal window shows neither a.out nor the ls it launched.
Also, if I run ./a.out >/dev/null it finishes immediately.
So the point is that everything actually terminates; the terminal just freezes for some reason.
Next, why it never prints Cleaning child process is taken care of by SIGCHLD.: simply because the parent process terminates before the child. The SIGCHLD signal can't be delivered to an already terminated process, so the handler is never invoked.
The book says the parent process continues to do some other things, and if it really does, then everything works fine; for example, if you add sleep(1) after spawn().

ACE C++ Log in multiple files

I'm diving into ACE, and I'm logging messages to a file using the ACE_ERROR macro.
AFAIK, ACE_ERROR logs all the messages to the same file, regardless of their error level.
However, I actually need to write messages to different files according to their error level.
I did see the ACE_LOG_MSG->open() function; however, as I understand it, when you call this function a second time, it closes the file you opened with the first call.
Suppose I have a list that I want to log, and in this list two adjacent items don't have the same error level. Then I would be constantly opening and closing files; wouldn't that affect my app's performance?
So, is there a way to keep those files open?
Thanks!
Not closing the files you log to is particularly bad for debugging. If the application crashes with a file still open, its contents may get corrupted (and that happens rather often), leaving you with absolutely no information.
If you close the file properly, though, you're guaranteed to find at least some info there, possibly close to the real issue. If you are concerned about performance, you should simply reduce the log level; if that's not feasible, you could offload the logging to another process via (for example) a TCP connection.
Anyway, don't optimize until you've measured! It may well be that there is no impact; performance is a complicated problem that depends on a lot of factors.
Here is another example that redirects log statements to different files according to their logging priority, using a simple wrapper class.
Hope this is useful to someone.
Example program
#include "ace/Log_Msg.h"
#include "ace/streams.h"
// #Author: Gaurav A
// #Date: 2019OCT11
//
// Log each logging statement
// in file based on its priority
//
// eg: INFO logs goes to INFO.log
// DEBUG logs goes to DEBUG.log
class Logger
{
private:
    ACE_OSTREAM_TYPE* m_infolog = nullptr;
    ACE_OSTREAM_TYPE* m_debuglog = nullptr;

public:
    Logger (void)
      : m_infolog (new std::ofstream ("INFO.log")),
        m_debuglog (new std::ofstream ("DEBUG.log"))
    {
    }

    ~Logger (void)
    {
        delete m_infolog;
        delete m_debuglog;
    }

    int log (ACE_Log_Priority p, const ACE_TCHAR* fmt, ...)
    {
        ssize_t final_result = 0;
        if (p == LM_DEBUG)
        {
            va_list argp;
            va_start (argp, fmt);
            ACE_LOG_MSG->msg_ostream (m_debuglog);
            ACE_LOG_MSG->set_flags (ACE_Log_Msg::OSTREAM);
            final_result = ACE_LOG_MSG->log (fmt, LM_DEBUG, argp);
            va_end (argp);
        }
        else if (p == LM_INFO)
        {
            va_list argp;
            va_start (argp, fmt);
            ACE_LOG_MSG->msg_ostream (m_infolog);
            ACE_LOG_MSG->set_flags (ACE_Log_Msg::OSTREAM);
            final_result = ACE_LOG_MSG->log (fmt, LM_INFO, argp);
            va_end (argp);
        }
        return final_result;
    }
};

int
ACE_TMAIN (void)
{
    Logger logger;
    logger.log (LM_DEBUG, "I am a debug message no %d\n", 1);
    logger.log (LM_INFO, "I am a info message no %d\n", 2);
    logger.log (LM_DEBUG, "I am a debug message no %d\n", 3);
    logger.log (LM_INFO, "I am a info message no %d\n", 4);
    return 0;
}
Sample Output
[07:59:10]Host#User:~/acedir
$: ./logging_each_priority_in_its_own_file
I am a debug message no 1
I am a info message no 2
I am a debug message no 3
I am a info message no 4
[07:59:10]Host#User:~/acedir
$: ls -lrth
total 464K
-rw-r--r-- 1 aryaaur devusers 231 Oct 11 07:09 logging_each_priority_in_its_own_file.mpc
-rw-r--r-- 1 aryaaur devusers 5.6K Oct 11 07:29 GNUmakefile.logging_each_priority_in_its_own_file
-rw-r--r-- 1 aryaaur devusers 1.5K Oct 11 07:47 main_logging_each_priority_in_its_own_file_20191011.cpp
-rwxr-xr-x 1 aryaaur devusers 65K Oct 11 07:47 logging_each_priority_in_its_own_file
-rw-r--r-- 1 aryaaur devusers 50 Oct 11 07:59 INFO.log
-rw-r--r-- 1 aryaaur devusers 52 Oct 11 07:59 DEBUG.log
[07:59:10]Host#User:~/acedir
$: cat INFO.log
I am a info message no 2
I am a info message no 4
[07:59:10]Host#User:~/acedir
$: cat DEBUG.log
I am a debug message no 1
I am a debug message no 3
[07:59:10]Host#User:~/acedir
$:

How to run a C++, PortAudio application on startup on Angstrom Linux on a BeagleBoard?

I have a command-line application called xooky_nabox that was programmed in C++. It reads a Pure Data patch, processes signals from the audio-in jack of a BeagleBoard, and outputs signals through the audio-out jack.
I want the application to run when the BeagleBoard starts up and to stay running until the board is shut down. There is no GUI and no keyboard or monitor attached, just the audio in and out jacks.
If I run the application manually, everything works fine:
xooky_nabox -audioindev 1 -audiooutdev 1 /var/xooky/patch.pd
And it also runs fine if I run it in the background:
xooky_nabox -audioindev 1 -audiooutdev 1 /var/xooky/patch.pd &
Now, let me show the layout of two versions of the program (the full thing is at https://github.com/rvega/XookyNabox):
Version 1, main thread is kept alive:
void sighandler(int signum){
    time_t rawtime;
    time(&rawtime);
    std::ofstream myfile;
    myfile.open("log.txt", std::ios::app);
    myfile << ctime(&rawtime) << " Caught signal:" << signum << " " << strsignal(signum) << "\n";
    myfile.close();
    if(signum == 15 || signum == 2){
        exit(0);
    }
}

int main (int argc, char *argv[]) {
    // Subscribe to all system signals for debugging purposes.
    for(int i=0; i<64; i++){
        signal(i, sighandler);
    }

    // Sanity checks, error and help messages, etc.
    parseParameters(argc, argv);

    // Start signal processing and audio
    initZenGarden();
    initAudioIO();

    // Keep the program alive.
    while(1){
        sleep(10);
    }

    // This is obviously never reached; so far no problems with that...
    stopAudioIO();
    stopZengarden();
    return 0;
}

static int paCallback( const void *inputBuffer, void *outputBuffer, unsigned long framesPerBuffer, const PaStreamCallbackTimeInfo* timeInfo, PaStreamCallbackFlags statusFlags, void *userData ){
    // This is called by PortAudio when the output buffer is about to run dry.
}
Version 2, execution is forked and detached from the terminal that launched it:
void go_daemon(){
    // Run the program as a daemon.
    pid_t pid, sid;
    pid = fork(); // Fork off the parent process
    if (pid < 0) {
        exit(EXIT_FAILURE);
    }
    if (pid > 0) {
        exit(EXIT_SUCCESS); // If the child process started ok, exit the parent process
    }
    umask(0); // Change the file mode mask
    sid = setsid(); // Create a new session ID for the child process
    if (sid < 0) {
        // TODO: Log failure
        exit(EXIT_FAILURE);
    }
    if((chdir("/")) < 0){ // Change the working directory to "/"
        // TODO: Log failure
        exit(EXIT_FAILURE);
    }
    close(STDIN_FILENO);
    close(STDOUT_FILENO);
    close(STDERR_FILENO);
}

int main (int argc, char *argv[]) {
    go_daemon();

    // Subscribe to all system signals for debugging purposes.
    for(int i=0; i<64; i++){
        signal(i, sighandler);
    }

    // Sanity checks, error and help messages, etc.
    parseParameters(argc, argv);

    // Start signal processing and audio
    initZenGarden();
    initAudioIO();

    // Keep the program alive.
    while(1){
        sleep(10);
    }

    // This is obviously never reached; so far no problems with that...
    stopAudioIO();
    stopZengarden();
    return 0;
}
Trying to run it at startup
I've tried running both versions of the program at startup using a few methods; the outcome is always the same. When the board starts up, I can hear sound being output for a fraction of a second, the sound then stops, and the login screen is presented (I have a serial terminal attached to the board and minicom running on my computer). The weirdest thing to me is that the xooky_nabox process is actually kept running after login, but there is no sound output...
Here's what I've tried:
Adding a @reboot entry to crontab and launching the program with a trailing ampersand (version 1 of the program):
@reboot xooky_nabox <params> &
Adding a start-stop-daemon entry to crontab (version 1):
@reboot start-stop-daemon -S -b --user daemon -n xooky_nabox -a /usr/bin/xooky_nabox -- <params>
Created a script at /etc/init.d/xooky and ran:
$ chmod +x xooky
$ update-rc.d xooky defaults
I tried different versions of the startup script: start-stop-daemon with version 1, calling the program directly with a trailing ampersand (version 1), and calling the program directly with no trailing ampersand (version 2).
Also, if I run the program manually from the serial terminal or from an SSH session (USB networking) and then run top, the program runs fine for a few seconds, consuming around 15% CPU. It then stops outputting sound, and its CPU consumption rises to around 30%. My log.txt file shows no signal sent to the program by the OS in this scenario.
When version 2 of the program is run at startup, the log shows something like:
Mon Jun 6 02:44:49 2011 Caught signal:18 Continued
Mon Jun 6 02:44:49 2011 Caught signal:15 Terminated
Does anyone have any ideas on how to debug this? Suggestions on how to launch my program at startup?
In version 2,
I think you should open (and dup2) /dev/null to STDIN/STDOUT/STDERR. Just closing the handles can cause problems.
Something like this:
int fd = open("/dev/null", O_RDWR); // becomes fd 0 (stdin), since 0-2 were just closed
dup2(fd, STDOUT_FILENO);
dup2(fd, STDERR_FILENO);
(I have no idea what start-stop-daemon does. Can't help with version 1, sorry.)
There is a C function to create a daemon:
#include <unistd.h>
int daemon(int nochdir, int noclose);
More information can be found in the man page for daemon(3).
Maybe it will help.
And if you want to launch your daemon when Linux starts, you should find out which init version your distro is using; usually you can just add the command that executes your daemon to /etc/init.d/rc (though that seems not to be such a good idea). This file is executed by init when Linux starts.
I ended up ditching PortAudio and implementing a JACK client which runs it's own server so this issue was not relevant for me anymore.

What's so special about file descriptor 3 on Linux?

I'm working on a server application that's going to work on Linux and Mac OS X. It goes like this:
start main application
fork off the controller process
call lock_down() in the controller process
terminate main application
the controller process then forks again, creating a worker process
eventually the controller keeps forking more worker processes
I can log using several methods (e.g. syslog or a file), but right now I'm pondering syslog. The "funny" thing is that no syslog output is ever seen in the controller process unless I include the #ifdef section below.
The worker processes log flawlessly on Mac OS X and Linux, with or without the #ifdef'ed section below. The controller also logs flawlessly on Mac OS X without the #ifdef'ed section, but on Linux the #ifdef is needed if I want to see any output in syslog (or the log file, for that matter) from the controller process.
So, why is that?
static int
lock_down(void)
{
    struct rlimit rl;
    unsigned int n;
    int fd0;
    int fd1;
    int fd2;

    // Reset the file mode mask
    umask(0);

    // change the working directory
    if ((chdir("/")) < 0)
        return EXIT_FAILURE;

    // close any and all open file descriptors
    if (getrlimit(RLIMIT_NOFILE, &rl))
        return EXIT_FAILURE;
    if (RLIM_INFINITY == rl.rlim_max)
        rl.rlim_max = 1024;
    for (n = 0; n < rl.rlim_max; n++) {
#ifdef __linux__
        if (3 == n) // deep magic...
            continue;
#endif
        if (close(n) && (EBADF != errno))
            return EXIT_FAILURE;
    }

    // attach file descriptors 0, 1 and 2 to /dev/null
    fd0 = open("/dev/null", O_RDWR);
    fd1 = dup2(fd0, 1);
    fd2 = dup2(fd0, 2);
    if (0 != fd0)
        return EXIT_FAILURE;

    return EXIT_SUCCESS;
}
camh was close, but using closelog() was the idea that did the trick, so the honor goes to jilles. Something more than just closing a file descriptor out from under syslog's feet must be going on, though. To make the code work I added a call to closelog() just before the loop:
closelog();
for (n = 0; n < rl.rlim_max; n++) {
    if (close(n) && (EBADF != errno))
        return EXIT_FAILURE;
}
I was relying on a verbatim understanding of the manual page, saying:
The use of openlog() is optional; it will automatically be called by syslog() if necessary...
I interpreted this as saying that syslog would detect that the file descriptor had been closed underneath it. Apparently it does not: an explicit closelog() on Linux is needed to tell syslog that the descriptor was closed.
One more thing that still perplexes me is that not calling closelog() prevented the first forked process (the controller) from even opening and using a log file, while the subsequently forked processes could use syslog or a log file with no problems. Maybe some caching effect gives the first forked process an unreliable idea of which file descriptors are available, while the later forked processes are delayed enough not to be affected?
The special aspect of file descriptor 3 is that it will usually be the first file descriptor returned from a system call that allocates a new file descriptor, given that 0, 1 and 2 are usually set up for stdin, stdout and stderr.
This means that if any library function you have called allocates a file descriptor for its own internal purposes in order to perform its functions, it will get fd 3.
The openlog(3) library call will need to open /dev/log to communicate with the syslog daemon. If you subsequently close all file descriptors, you may break the syslog library functions if they are not written in a way to handle that.
The way to debug this on Linux is to use strace to trace the actual system calls that are being made; the use of a file descriptor for syslog then becomes obvious:
$ cat syslog_test.c
#include <stdio.h>
#include <syslog.h>

int main(void)
{
    openlog("test", LOG_PID, LOG_LOCAL0);
    syslog(LOG_ERR, "waaaaaah");
    closelog();
    return 0;
}
$ gcc -W -Wall -o syslog_test syslog_test.c
$ strace ./syslog_test
...
socket(PF_FILE, SOCK_DGRAM, 0) = 3
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
connect(3, {sa_family=AF_FILE, path="/dev/log"}, 16) = 0
send(3, "<131>Aug 21 00:47:52 test[24264]"..., 42, MSG_NOSIGNAL) = 42
close(3) = 0
exit_group(0) = ?
Process 24264 detached
syslog(3) may keep a file descriptor to syslogd's socket open; closing this under its feet is likely to cause problems. A closelog(3) call may help.
The syslog daemon binds to a given descriptor at startup, most of the time descriptor 3. If you close it: no logs.
syslog-ng -d -v
Gives you more info about what it's doing behind the scenes.
The output should look something like this:
binding fd 3, inetaddr: 0.0.0.0, port: 514
io.c: Preparing fd 3 for reading
io.c: Preparing fd 4 for reading
binding fd 5, unixaddr: /dev/log
io.c: listening on fd 5