I'm posting my code simply for context for my question. I'm not explicitly looking for you to fix it; I'm more looking to understand the dup2 system call, which I'm just not picking up from the man page and the numerous other Stack Overflow questions.
pid = fork();
if (pid == 0) {
    if (strcmp("STDOUT", outfile)) {
        if (command->getOutputFD() == REDIRECT) {
            if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_TRUNC)) == -1)
                return false;
            command->setOutputFD(outfd);
            if (dup2(command->getOutputFD(), STDOUT_FILENO) == -1)
                return false;
            pipeIndex++;
        }
        else if (command->getOutputFD() == REDIRECTAPPEND) {
            if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_APPEND)) == -1)
                return false;
            command->setOutputFD(outfd);
            if (dup2(command->getOutputFD(), STDOUT_FILENO) == -1)
                return false;
            pipeIndex++;
        }
        else {
            if (dup2(pipefd[++pipeIndex], STDOUT_FILENO) == -1)
                return false;
            command->setOutputFD(pipefd[pipeIndex]);
        }
    }
    if (strcmp("STDIN", infile)) {
        if (dup2(pipefd[pipeIndex - 1], STDIN_FILENO) == -1)
            return false;
        command->setOutputFD(pipefd[pipeIndex - 1]);
        pipeIndex++;
    }
    if (execvp(arguments[0], arguments) == -1) {
        std::cerr << "Error!" << std::endl;
        _Exit(0);
    }
}
else if (pid == -1) {
    return false;
}
For context: that code represents the execution step of a basic Linux shell. The command object contains the command's arguments, IO "names", and IO descriptors (I think I might get rid of the file descriptors as fields).
What I'm having the most difficulty understanding is when and which file descriptors to close. I'll just ask some questions to try to improve my understanding of the concept.
1) With my array of file descriptors used for handling pipes, the parent has a copy of all those descriptors. When are the descriptors held by the parent closed? And even more so, which descriptors? Is it all of them? All of the ones left unused by the executing commands?
2) When handling pipes within the children, which descriptors are left open by which processes? Say I execute the command: ls -l | grep "[username]". Which descriptors should be left open for the ls process? Just the write end of the pipe? And if so, when? The same question applies to the grep command.
3) When I handle redirection of IO to a file, a new file must be opened and duped to STDOUT (I do not support input redirection). When does this descriptor get closed? I've seen in examples that it gets closed immediately after the call to dup2, but then how does anything get written to the file if the file has been closed?
Thanks ahead of time. I've been stuck on this problem for days and I'd really like to be done with this project.
EDIT I've updated this with modified code and sample output for anyone interested in offering specific help with my issue. First I have the entire loop that handles execution; it has been updated with my calls to close() on various file descriptors.
while (currCommand != NULL) {
    command = currCommand->getData();
    infile = command->getInFileName();
    outfile = command->getOutFileName();
    arguments = command->getArgList();
    pid = fork();
    if (pid == 0) {
        if (strcmp("STDOUT", outfile)) {
            if (command->getOutputFD() == REDIRECT) {
                if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_TRUNC)) == -1)
                    return false;
                if (dup2(outfd, STDOUT_FILENO) == -1)
                    return false;
                close(STDOUT_FILENO);
            }
            else if (command->getOutputFD() == REDIRECTAPPEND) {
                if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_APPEND)) == -1)
                    return false;
                if (dup2(outfd, STDOUT_FILENO) == -1)
                    return false;
                close(STDOUT_FILENO);
            }
            else {
                if (dup2(pipefd[pipeIndex + 1], STDOUT_FILENO) == -1)
                    return false;
                close(pipefd[pipeIndex]);
            }
        }
        pipeIndex++;
        if (strcmp("STDIN", infile)) {
            if (dup2(pipefd[pipeIndex - 1], STDIN_FILENO) == -1)
                return false;
            close(pipefd[pipeIndex]);
            pipeIndex++;
        }
        if (execvp(arguments[0], arguments) == -1) {
            std::cerr << "Error!" << std::endl;
            _Exit(0);
        }
    }
    else if (pid == -1) {
        return false;
    }
    currCommand = currCommand->getNext();
}
for (int i = 0; i < numPipes * 2; i++)
    close(pipefd[i]);
for (int i = 0; i < commands->size(); i++) {
    if (wait(status) == -1)
        return false;
}
When executing this code I receive the following output:
ᕕ( ᐛ )ᕗ ls -l
total 68
-rwxrwxrwx 1 cook cook 242 May 31 18:31 CMakeLists.txt
-rwxrwxrwx 1 cook cook 617 Jun 1 22:40 Command.cpp
-rwxrwxrwx 1 cook cook 9430 Jun 8 18:02 ExecuteExternalCommand.cpp
-rwxrwxrwx 1 cook cook 682 May 31 18:35 ExecuteInternalCommand.cpp
drwxrwxrwx 2 cook cook 4096 Jun 8 17:16 headers
drwxrwxrwx 2 cook cook 4096 May 31 18:32 implementation files
-rwxr-xr-x 1 cook cook 25772 Jun 8 18:12 LeShell
-rwxrwxrwx 1 cook cook 243 Jun 5 13:02 Makefile
-rwxrwxrwx 1 cook cook 831 Jun 3 12:10 Shell.cpp
ᕕ( ᐛ )ᕗ ls -l > output.txt
ls: write error: Bad file descriptor
ᕕ( ᐛ )ᕗ ls -l | grep "cook"
ᕕ( ᐛ )ᕗ
The output of ls -l > output.txt implies that I'm closing the wrong descriptor, but closing the other related descriptor, while producing no error, writes nothing to the file. And as the plain ls -l output demonstrates, ls -l | grep "cook" should generate output to the console.
With my array of file descriptors used for handling pipes, the parent has a copy of all those descriptors. When are the descriptors held by the parent closed? And even more so, which descriptors? Is it all of them? All of the ones left unused by the executing commands?
A file descriptor may be closed in one of three ways:
You explicitly call close() on it.
The process terminates, and the operating system automatically closes every file descriptor that was still open.
The process calls one of the seven exec() functions and the file descriptor has the close-on-exec flag set (O_CLOEXEC or FD_CLOEXEC).
As you can see, most of the time file descriptors remain open until you manually close them. This is what happens in your code too: since you didn't specify O_CLOEXEC, file descriptors are not closed when the child process calls execvp(). In the child, they are closed only when the child terminates. The same goes for the parent. If you want that to happen any earlier, you have to call close() manually.
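To make the close-on-exec case concrete, here is a small sketch of my own (the filename and command are placeholders): a descriptor opened with O_CLOEXEC disappears across exec(), while an ordinary one survives into the new program.
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    /* placeholder file and command, just to show the flag's effect */
    int fd1 = open("log.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    int fd2 = open("log.txt", O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644);

    char *args[] = { "ls", "-l", NULL };
    execvp(args[0], args); /* fd1 leaks into ls; the kernel closes fd2 at exec time */
    return 1;              /* reached only if execvp() fails */
}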
When handling pipes within the children, which descriptors are left open by which processes? Say I execute the command: ls -l | grep "[username]". Which descriptors should be left open for the ls process? Just the write end of the pipe? And if so, when? The same question applies to the grep command.
Here's a (rough) idea of what the shell does when you type ls -l | grep "username":
The shell calls pipe() to create a new pipe. The pipe file descriptors are inherited by the children in the next step.
The shell forks twice; let's call these processes c1 and c2, and assume c1 will run ls and c2 will run grep.
In c1, the pipe's read channel is closed with close(), and then it calls dup2() with the pipe write channel and STDOUT_FILENO, so as to make writing to stdout equivalent to writing to the pipe. Then, one of the seven exec() functions is called to start executing ls. ls writes to stdout, but since we duplicated stdout to the pipe's write channel, ls will be writing to the pipe.
In c2, the reverse happens: the pipe's write channel is closed, and then dup2() is called to make stdin point to the pipe's read channel. Then, one of the seven exec() functions is called to start executing grep. grep reads from stdin, but since we dup2()'d standard input to the pipe's read channel, grep will be reading from the pipe.
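Putting those steps together, a minimal self-contained version of that sequence might look like this (my own sketch, not your shell's code; error handling kept to a bare minimum):
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int fd[2];
    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {              /* c1: runs ls */
        close(fd[0]);               /* c1 never reads from the pipe */
        dup2(fd[1], STDOUT_FILENO); /* stdout now feeds the pipe */
        close(fd[1]);               /* the dup made this copy redundant */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp"); _exit(1);
    }
    if (fork() == 0) {              /* c2: runs grep */
        close(fd[1]);               /* c2 never writes to the pipe */
        dup2(fd[0], STDIN_FILENO);  /* stdin now drains the pipe */
        close(fd[0]);
        execlp("grep", "grep", "username", (char *)NULL);
        perror("execlp"); _exit(1);
    }
    close(fd[0]);                   /* the parent must close both ends, */
    close(fd[1]);                   /* or grep never sees end-of-file   */
    while (wait(NULL) > 0)
        ;
    return 0;
}
Note how every process, parent included, closes the pipe ends it does not use; that is essentially the answer to your question 2.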
When I handle redirection of IO to a file, a new file must be opened and duped to STDOUT (I do not support input redirection). When does this descriptor get closed? I've seen in examples that it gets closed immediately after the call to dup2, but then how does anything get written to the file if the file has been closed?
So, when you call dup2(a, b), either one of these is true:
a == b. In this case, nothing happens and dup2() returns immediately. No file descriptors are closed.
a != b. In this case, b is closed if necessary, and then b is made to refer to the same file table entry as a. The file table entry is a structure that contains the current file offset and file status flags; multiple file descriptors can point to the same file table entry, and that's exactly what happens when you duplicate a file descriptor. So dup2(a, b) has the effect of making a and b share the same file table entry. As a consequence, writing to a or b will end up writing to the same file.
So the descriptor that is closed is b, not a. If you dup2(a, STDOUT_FILENO), you close stdout and make stdout's file descriptor point to the same file table entry as a. Any program that writes to stdout will then be writing to the file instead, since stdout's file descriptor points to the file you dupped.
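You can watch the shared file table entry in action with a tiny experiment (my own sketch; the filename is a placeholder). After dup2(), writes through either descriptor advance the same offset, so the two strings land one after the other instead of overwriting each other:
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int a = open("demo.txt", O_CREAT | O_WRONLY | O_TRUNC, 0644);
    dup2(a, STDOUT_FILENO);               /* stdout now shares a's file table entry */

    write(a, "first ", 6);                /* advances the shared offset */
    write(STDOUT_FILENO, "second\n", 7);  /* continues where "first " left off */

    close(a);                             /* the entry stays open through stdout */
    return 0;                             /* demo.txt ends up containing "first second" */
}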
UPDATE:
So, for your specific problem, here's what I have to say after briefly looking through the code:
You shouldn't be calling close(STDOUT_FILENO) here:
if (command->getOutputFD() == REDIRECT) {
    if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_TRUNC)) == -1)
        return false;
    if (dup2(outfd, STDOUT_FILENO) == -1)
        return false;
    close(STDOUT_FILENO);
}
If you close stdout, you will get an error in the future when you try to write to stdout. This is why you get ls: write error: Bad file descriptor. After all, ls is writing to stdout, but you closed it. Oops!
You're doing it backwards: you want to close outfd instead. You opened outfd so that you could redirect STDOUT_FILENO to outfd; once the redirection is done, you don't really need outfd anymore and you can close it. But you most definitely don't want to close stdout, because the idea is to have stdout write to the file that was referenced by outfd.
So, go ahead and do that:
if (command->getOutputFD() == REDIRECT) {
    if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_TRUNC, 0644)) == -1)
        return false;
    if (dup2(outfd, STDOUT_FILENO) == -1)
        return false;
    if (outfd != STDOUT_FILENO)
        close(outfd);
}
Note the final if is necessary: If outfd by any chance happens to be equal to STDOUT_FILENO, you don't want to close it for the reasons I just mentioned.
The same applies to the code inside else if (command->getOutputFD() == REDIRECTAPPEND): you want to close outfd rather than STDOUT_FILENO:
else if (command->getOutputFD() == REDIRECTAPPEND) {
    if ((outfd = open(outfile, O_CREAT | O_WRONLY | O_APPEND, 0644)) == -1)
        return false;
    if (dup2(outfd, STDOUT_FILENO) == -1)
        return false;
    if (outfd != STDOUT_FILENO)
        close(outfd);
}
This should at least get ls -l > output.txt working as expected.
As for the problem with the pipes: your pipe management is not really correct. It's not clear from the code you showed where and how pipefd is allocated, and how many pipes you create, but notice that:
With your code, a process will never be able to read from one pipe and write to another pipe. For example, if outfile is not STDOUT and infile is not STDIN, you end up closing both the read and the write ends (and worse yet, after closing the read end, you attempt to duplicate it). There is no way this will ever work.
The parent process is closing every pipe before waiting for the termination of the children. This provokes a race condition.
I suggest redesigning the way you manage pipes. You can see an example of a working bare-bones shell that handles pipes in this answer: https://stackoverflow.com/a/30415995/2793118
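For reference, the general pattern usually looks something like this (a rough sketch of the idea, not the code from the linked answer; run_pipeline and cmds are made-up names): create one pipe per pair of adjacent commands, give each child only the two ends it needs, and close everything else in both child and parent.
#include <unistd.h>
#include <sys/wait.h>

/* Sketch: run cmds[0] | cmds[1] | ... | cmds[n-1].
   Assumes each cmds[i] is a NULL-terminated argv array. */
void run_pipeline(char **cmds[], int n)
{
    int in = STDIN_FILENO;              /* read end inherited from the previous stage */
    for (int i = 0; i < n; i++) {
        int fd[2] = { -1, -1 };
        if (i < n - 1)
            pipe(fd);                   /* pipe to the next stage */
        if (fork() == 0) {
            if (in != STDIN_FILENO) {   /* read from the previous pipe */
                dup2(in, STDIN_FILENO);
                close(in);
            }
            if (i < n - 1) {            /* write into the next pipe */
                close(fd[0]);
                dup2(fd[1], STDOUT_FILENO);
                close(fd[1]);
            }
            execvp(cmds[i][0], cmds[i]);
            _exit(127);
        }
        if (in != STDIN_FILENO)
            close(in);                  /* the parent no longer needs this end */
        if (i < n - 1) {
            close(fd[1]);               /* the parent never writes */
            in = fd[0];                 /* the next child reads from here */
        }
    }
    while (wait(NULL) > 0)              /* reap all the children */
        ;
}
Note that the parent closes each pipe end as soon as the relevant children have been forked, and only then waits.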
Related
To expand on this, by chain piping I am referring to when I have 3 separate processes:
process 1 writes to process 2,
process 2 reads from process 1 and writes to process 3,
process 3 reads from process 2 and then finishes.
I am specifically trying to handle complex commands in a bash-style shell written in C++. So I would be using this to execute a set of commands like the following, which all communicate with each other:
ls | sort | grep "exit"
where process 1 is executing ls and its stdout is written to process 2 through a pipe, etc.
I am already writing code to solve this for a project and was just wondering if my approach is correct. Right now, when doing just a two-command call of ls | grep "exit", I am getting the error "grep: (standard input): Bad file descriptor".
//Block for when the userInput is a complex command
else {
    if (debug)
        printf("Complex command: %s\n", userInput.c_str());
    vector<char*> commandsVect = splitCString(const_cast<char*>(userInput.c_str()), const_cast<char*>(delimVertPipe.c_str()));
    if (debug)
        printVect(commandsVect);
    if (pipe(fileDescriptor) == -1) {
        fprintf(stderr, "Pipe failed for command %s\n", userInput.c_str());
        return 1;
    }
    for (int i = 0; i < commandsVect.size(); ++i) {
        vector<char*> tokens = splitCString(const_cast<char*>(commandsVect[i]), const_cast<char*>(delimSpace.c_str()));
        printf("Commands vect size is %ld\n", commandsVect.size());
        printf("Parsing command \'%s\'\n", commandsVect[i]);
        if (debug) {
            printVect(tokens);
        }
        procID = fork();
        //Block for the first command
        if (i == 0) {
            if (procID < 0) {
                fprintf(stderr, "Fork number %d in the complex command \'%s\' failed\n", i+1, userInput.c_str());
                return 1;
            }
            //Child process
            else if (procID == 0) {
                //close(fileDescriptor[READ_END]);
                close(STDOUT_FILENO);
                //Links the write end of the pipe to the STDOUT
                dup2(fileDescriptor[WRITE_END], 1);
                close(fileDescriptor[READ_END]);
                close(fileDescriptor[WRITE_END]);
                tokens.push_back(nullptr); //execvp() arg array needs a NULL pointer at the end
                if (execvp(tokens[0], tokens.data()) < 0) {
                    fprintf(stderr, "execvp() call failed for the command \'%s\' inside the input string \'%s\'\n", commandsVect[i], userInput.c_str());
                    return 1;
                }
                exit(1);
            }
            //Parent process
            else {
                close(fileDescriptor[READ_END]);
                close(fileDescriptor[WRITE_END]);
                wait(NULL);
            }
        }
        //Block for the very last command, which will pipe input from the previous
        else if (i == commandsVect.size() - 1) {
            if (procID < 0) {
                fprintf(stderr, "Fork number %d in the complex command \'%s\' failed\n", i+1, userInput.c_str());
                return 1;
            }
            //Child process
            else if (procID == 0) {
                //close(fileDescriptor[WRITE_END]);
                close(STDIN_FILENO);
                //Links the read end of the pipe to the STDIN
                dup2(fileDescriptor[READ_END], 0);
                close(fileDescriptor[WRITE_END]);
                close(fileDescriptor[READ_END]);
                tokens.push_back(nullptr); //execvp() arg array needs a NULL pointer at the end
                if (execvp(tokens[0], tokens.data()) < 0) {
                    fprintf(stderr, "execvp() call failed for the command \'%s\' inside the input string \'%s\'\n", commandsVect[i], userInput.c_str());
                    return 1;
                }
                exit(1);
            }
            //Parent process
            else {
                close(fileDescriptor[READ_END]);
                close(fileDescriptor[WRITE_END]);
                wait(NULL);
            }
        }
        //To note for StackOverflow, this block of code is never executed since I am only ever calling a 2 chained command like ls|grep "exit"
        //Block for the middle commands. (Will pipe input from previous, and output to the next)
        else {
            printf("GOING THROUGH BAD CODE");
            continue;
            if (procID < 0) {
                fprintf(stderr, "Fork number %d in the complex command \'%s\' failed\n", i+1, userInput.c_str());
                return 1;
            }
            //Child process
            else if (procID == 0) {
                exit(1);
            }
            //Parent process
            else {
                wait(NULL);
            }
        }
    }
    close(fileDescriptor[READ_END]);
    close(fileDescriptor[WRITE_END]);
}
This might not be possible with your larger application, but you could simplify things by letting the shell manage the pipes. Write P1 (process one), P2, and P3 as three separate executables. Instead of doing IO on pipes, each program could read from stdin and write to stdout. Simple. To execute, let bash or whatever shell you use glue the three together by calling them as...
$ P1 | P2 | P3
Under the hood, your shell is doing pretty much what you're doing in C++ (only successfully 😉). It creates a pipe for P1 and, after forking, binds P1's stdout to its write end before the exec that launches P1. It creates an input and an output pipe for P2, and binds its stdin and stdout as appropriate in the same way before exec'ing P2 after the fork. P3 gets only a stdin pipe, and its stdout stream goes right to the console as normal. It's not quite as sexy as doing it all in C++, but it's very robust - pretty much guaranteed to work.
I am trying to invoke an external program with some input and retrieve its output from within my program.
It would look like:
(some input) | (external program) | (retrieve output)
I first thought about using popen(), but it seems that is not possible because the pipe is not bidirectional.
Is there an easy way to handle this kind of thing on Linux?
I could make a temp file, but it would be great if this could be handled cleanly without touching the disk.
Any solution? Thanks.
On Linux you can use the pipe function: open two new pipes, one for each direction, then create a child process using fork. Afterwards, you typically close the file descriptors not in use (for the pipe the parent uses to send to the child: the read end in the parent and the write end in the child, and vice versa for the other pipe) and then start your application using execve or one of its front ends.
If you dup2 the pipes' file descriptors onto the standard stream handles (STDIN_FILENO/STDOUT_FILENO; in each process separately), you should even be able to use std::cin/std::cout for communicating with the other process (you might want to do so only in the child, as you probably want to keep your console in the parent). I have not tested this, though, so that's left to you.
When done, you'd still wait or waitpid for your child process to terminate. It might look similar to the following piece of code:
int pipeP2C[2], pipeC2P[2];
// (names: short for pipe for X (writing) to Y with P == parent, C == child)
if (pipe(pipeP2C) != 0 || pipe(pipeC2P) != 0)
{
    // error
    // TODO: appropriate handling
}
else
{
    int pid = fork();
    if (pid < 0)
    {
        // error
        // TODO: appropriate handling
    }
    else if (pid > 0)
    {
        // parent
        // close unused ends:
        close(pipeP2C[0]); // read end
        close(pipeC2P[1]); // write end
        // use pipes to communicate with child...
        int status;
        waitpid(pid, &status, 0);
        // cleanup or do whatever you want to do afterwards...
    }
    else
    {
        // child
        close(pipeP2C[1]); // write end
        close(pipeC2P[0]); // read end
        dup2(pipeP2C[0], STDIN_FILENO);
        dup2(pipeC2P[1], STDOUT_FILENO);
        // you should be able now to close the two remaining
        // pipe file descriptors as well, as you dup'ed them already
        // (confirmed that it is working)
        close(pipeP2C[0]);
        close(pipeC2P[1]);
        execve(/*...*/); // won't return - but you should now be able to
                         // use stdin/stdout to communicate with parent
    }
}
I'm using the following piece of code to perform input redirection in my custom C++ shell.
While output redirection done the same way works well, the child process for input redirection stays open and doesn't return, as if it keeps waiting for new input.
What is the best way to 'ask' or 'force' a child process like this to return immediately after reading input?
Code for input redirection
int in_file = open(in, O_CREAT | O_RDONLY, S_IREAD | S_IWRITE);
pid_t pid = fork();
if (!pid) {
    dup2(in_file, STDIN_FILENO);
    if (execvp(argv[0], argv) < 0) {
        cerr << "*** ERROR: exec failed: " << argv[0] << endl;
        exit(EXIT_FAILURE);
    }
}
close(in_file);
Code for output redirection
out_file = open(out, O_CREAT | O_WRONLY | O_TRUNC, S_IRWXU);
pid_t pid = fork();
if (!pid) {
    dup2(out_file, STDOUT_FILENO);
    if (execvp(argv[0], argv) < 0) {
        cerr << "*** ERROR: exec failed: " << argv[0] << endl;
        exit(EXIT_FAILURE);
    }
}
close(out_file);
I used the following commands to test:
ps aux > out.txt
grep root < out.txt
The first command returns to the shell after successfully writing to out.txt. The second command reads successfully from out.txt, but doesn't return or stop.
The child still has in_file open. You must close it before exec:
dup2(in_file, STDIN_FILENO);
close(in_file);
(Error checking omitted for brevity.) Because the child still has an open file descriptor, it never sees the file as being closed, so it blocks on read() waiting for someone to write more data. The child process does not realize that it is the one holding the file descriptor open. Another option is to set the close-on-exec flag on the file descriptor, so that it is closed automatically on exec (search for FD_CLOEXEC).
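If you go the close-on-exec route instead, it is a single fcntl() call on the descriptor right after opening it (a small sketch reusing the in_file from your code; fcntl.h, which you already include for open(), declares it):
/* Mark in_file close-on-exec: the kernel closes it automatically when the
   child calls execvp(). The copy created by dup2() does not inherit the
   flag, so the redirected stdin survives the exec. */
fcntl(in_file, F_SETFD, FD_CLOEXEC);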
First, check the arguments to execvp(). If this code is part of main(int argc, char* argv[]), then argv[0] is your own program, not grep. This means that your program is recursively re-executing itself for ever.
Then, make sure that there is no error when opening in with:
if (in_file < 0) { perror(in); ... }
If in_file is an invalid descriptor, dup2() will fail and grep will run reading from the terminal, and hence never terminate. BTW, using O_CREAT | O_RDONLY looks fishy. Why read from a file that does not previously exist?
I'd like to redirect output from stdout to a file separately for each thread. The following code redirects all thread output to a single file:
int fd = open(<filename_threadid.txt>, <flags>)
_dup2(fd, 1)
How should I restore the original stdout so the next thread can reliably map its stdout to the filename_threadid?
On all platforms the standard streams (stdin, stdout, stderr) are per process, so they cannot be redirected per thread. You should modify your code so that each thread outputs to a specific file instead of stdout.
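For example, something along these lines (a hypothetical sketch; the naming scheme is invented to match your filename_threadid idea): each thread builds its own filename and writes to its own stream rather than to stdout.
#include <pthread.h>
#include <stdio.h>

/* Each worker writes to its own file instead of touching the shared stdout. */
static void *worker(void *arg)
{
    int id = *(int *)arg;
    char name[64];
    snprintf(name, sizeof name, "filename_%d.txt", id); /* hypothetical naming scheme */
    FILE *out = fopen(name, "w");
    if (out) {
        fprintf(out, "output from thread %d\n", id);
        fclose(out);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    int ids[4];
    for (int i = 0; i < 4; i++) {
        ids[i] = i;
        pthread_create(&t[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}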
I use fork() inside the thread to redirect the stdout of the forked process while the "true" thread waits in waitpid().
The problem is how to pass in the file to which you want stdout redirected.
I use a global thread pool, and the thread finds itself through pthread_equal(pthread_self(), iterator); then in the global thread pool structure there is the outfile to which the program should redirect stdout.
In my case I create a name with tmpnam() and write it to the thread struct, but you can use it however you wish.
Here is some example code (written on the fly):
pthread_t *t_cur = NULL;
int i, pid, newout;
char *outfile = NULL;
for (i = 0; i < MAX_THREADS; i++)
    if (pthread_equal(pthread_self(), globals.tpool[i]->thread))
        break;
if (i == MAX_THREADS)
{
    printf("cannot find myself in the global threads pool.\n");
    pthread_exit(&i);
}
if (globals.tpool[i]->outfile == NULL) // redirect stdout only if outfile is not set ( this is specific for my purposes )
{
    outfile = globals.tpool[i]->outfile = malloc(L_tmpnam * sizeof(char));
    tmpnam(outfile);
}
if ((pid = fork()) == 0)
{
    if (outfile != NULL)
    {
        newout = open(outfile, O_CREAT | O_WRONLY | O_APPEND, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP);
        dup2(newout, STDOUT_FILENO);
        close(newout);
    }
    /* your code here */
}
else
    waitpid(pid, NULL, 0);
pthread_exit(&i);
I really wrote it on the fly and haven't tested this code, so take care to fix any errors. I didn't post my real code because of calls to my own library. Here I didn't check the return values from tmpnam(), fork(), open() and malloc(), which you should do.
I'm working on a server application that's going to work on Linux and Mac OS X. It goes like this:
start main application
fork of the controller process
call lock_down() in the controller process
terminate main application
the controller process then forks again, creating a worker process
over time the controller forks more worker processes
I can log using several methods (e.g. syslog or a file), but right now I'm pondering syslog. The "funny" thing is that no syslog output is ever seen from the controller process unless I include the #ifdef section below.
The worker processes log flawlessly on Mac OS X and Linux, with or without the #ifdef'ed section below. The controller also logs flawlessly on Mac OS X without the #ifdef'ed section, but on Linux the #ifdef is needed if I want to see any output in syslog (or the log file, for that matter) from the controller process.
So, why is that?
static int
lock_down(void)
{
    struct rlimit rl;
    unsigned int n;
    int fd0;
    int fd1;
    int fd2;

    // Reset file mode mask
    umask(0);

    // change the working directory
    if ((chdir("/")) < 0)
        return EXIT_FAILURE;

    // close any and all open file descriptors
    if (getrlimit(RLIMIT_NOFILE, &rl))
        return EXIT_FAILURE;
    if (RLIM_INFINITY == rl.rlim_max)
        rl.rlim_max = 1024;
    for (n = 0; n < rl.rlim_max; n++) {
#ifdef __linux__
        if (3 == n) // deep magic...
            continue;
#endif
        if (close(n) && (EBADF != errno))
            return EXIT_FAILURE;
    }

    // attach file descriptors 0, 1 and 2 to /dev/null
    fd0 = open("/dev/null", O_RDWR);
    fd1 = dup2(fd0, 1);
    fd2 = dup2(fd0, 2);
    if (0 != fd0)
        return EXIT_FAILURE;

    return EXIT_SUCCESS;
}
camh was close, but using closelog() was the idea that did the trick, so the honor goes to jilles. Something else, aside from closing a file descriptor out from under syslog's feet, must be going on though. To make the code work I added a call to closelog() just before the loop:
closelog();
for (n = 0; n < rl.rlim_max; n++) {
    if (close(n) && (EBADF != errno))
        return EXIT_FAILURE;
}
I was relying on a literal reading of the manual page, which says:
The use of openlog() is optional; it will automatically be called by syslog() if necessary...
I interpreted this as saying that syslog would detect if the file descriptor was closed under it. Apparently it does not. An explicit closelog() on Linux was needed to tell syslog that the descriptor was closed.
One more thing that still perplexes me is that not using closelog() prevented the first forked process (the controller) from even opening and using a log file. The subsequently forked processes could use syslog or a log file with no problems. Maybe there is some caching effect in the filesystem that gives the first forked process an unreliable "idea" of which file descriptors are available, while the later forked processes are sufficiently delayed not to be affected?
The special aspect of file descriptor 3 is that it will usually be the first file descriptor returned from a system call that allocates a new file descriptor, given that 0, 1 and 2 are usually set up for stdin, stdout and stderr.
This means that if any library function you have called allocates a file descriptor for its own internal purposes in order to perform its functions, it will get fd 3.
The openlog(3) library call will need to open /dev/log to communicate with the syslog daemon. If you subsequently close all file descriptors, you may break the syslog library functions if they are not written in a way to handle that.
The way to debug this on Linux is to use strace to trace the actual system calls that are being made; the use of a file descriptor for syslog then becomes obvious:
$ cat syslog_test.c
#include <stdio.h>
#include <syslog.h>
int main(void)
{
openlog("test", LOG_PID, LOG_LOCAL0);
syslog(LOG_ERR, "waaaaaah");
closelog();
return 0;
}
$ gcc -W -Wall -o syslog_test syslog_test.c
$ strace ./syslog_test
...
socket(PF_FILE, SOCK_DGRAM, 0) = 3
fcntl64(3, F_SETFD, FD_CLOEXEC) = 0
connect(3, {sa_family=AF_FILE, path="/dev/log"}, 16) = 0
send(3, "<131>Aug 21 00:47:52 test[24264]"..., 42, MSG_NOSIGNAL) = 42
close(3) = 0
exit_group(0) = ?
Process 24264 detached
syslog(3) may keep a file descriptor to syslogd's socket open; closing this under its feet is likely to cause problems. A closelog(3) call may help.
Syslog binds to a given descriptor at startup, most of the time descriptor 3. If you close it, no logs.
syslog-ng -d -v
gives you more info about what it's doing behind the scenes. The output should look something like this:
binding fd 3, inetaddr: 0.0.0.0, port: 514
io.c: Preparing fd 3 for reading
io.c: Preparing fd 4 for reading
binding fd 5, unixaddr: /dev/log
io.c: listening on fd 5