I'm following this TutorialsPoint guide to Linux piping, and I specifically need to use FIFOs.
However, the code doesn't work at all for the server side.
The server either hangs indefinitely or reads nothing, while the client writes to the FIFO and then immediately reads back what it has just written.
Here's the full code for both files in case you don't want to go through TutorialsPoint:
fifoserver_twoway.cpp
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#define FIFO_FILE "/tmp/fifo_twoway"
void reverse_string(char *);
int main() {
    int fd;
    char readbuf[80];
    char end[10];
    int to_end;
    int read_bytes;

    /* Create the FIFO if it does not exist */
    mkfifo(FIFO_FILE, S_IFIFO|0640);
    strcpy(end, "end");
    fd = open(FIFO_FILE, O_RDWR);

    while(1) {
        read_bytes = read(fd, readbuf, sizeof(readbuf));
        readbuf[read_bytes] = '\0';
        printf("FIFOSERVER: Received string: \"%s\" and length is %d\n", readbuf, (int)strlen(readbuf));
        to_end = strcmp(readbuf, end);

        if (to_end == 0) {
            close(fd);
            break;
        }

        reverse_string(readbuf);
        printf("FIFOSERVER: Sending Reversed String: \"%s\" and length is %d\n", readbuf, (int) strlen(readbuf));
        write(fd, readbuf, strlen(readbuf));

        /*
         * sleep - This is to make sure other process reads this, otherwise this
         * process would retrieve the message
         */
        sleep(2);
    }
    return 0;
}

void reverse_string(char *str) {
    int last, limit, first;
    char temp;

    last = strlen(str) - 1;
    limit = last/2;
    first = 0;

    while (first < last) {
        temp = str[first];
        str[first] = str[last];
        str[last] = temp;
        first++;
        last--;
    }
    return;
}
fifoclient_twoway.cpp
#include <stdio.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#define FIFO_FILE "/tmp/fifo_twoway"
int main() {
    int fd;
    int end_process;
    int stringlen;
    int read_bytes;
    char readbuf[80];
    char end_str[5];

    printf("FIFO_CLIENT: Send messages, infinitely, to end enter \"end\"\n");
    fd = open(FIFO_FILE, O_CREAT|O_RDWR);
    strcpy(end_str, "end");

    while (1) {
        printf("Enter string: ");
        fgets(readbuf, sizeof(readbuf), stdin);
        stringlen = strlen(readbuf);
        readbuf[stringlen - 1] = '\0';
        end_process = strcmp(readbuf, end_str);
        //printf("end_process is %d\n", end_process);

        if (end_process != 0) {
            write(fd, readbuf, strlen(readbuf));
            printf("FIFOCLIENT: Sent string: \"%s\" and string length is %d\n", readbuf, (int)strlen(readbuf));
            read_bytes = read(fd, readbuf, sizeof(readbuf));
            readbuf[read_bytes] = '\0';
            printf("FIFOCLIENT: Received string: \"%s\" and length is %d\n", readbuf, (int)strlen(readbuf));
        } else {
            write(fd, readbuf, strlen(readbuf));
            printf("FIFOCLIENT: Sent string: \"%s\" and string length is %d\n", readbuf, (int)strlen(readbuf));
            close(fd);
            break;
        }
    }
    return 0;
}
When I run both processes, this is what I get:
./fifoserver_twoway
FIFOSERVER: Received string: "" and length is 0
FIFOSERVER: Sending Reversed String: "" and length is 0
FIFOSERVER: Received string: "" and length is 0
FIFOSERVER: Sending Reversed String: "" and length is 0
./fifoclient_twoway
FIFOCLIENT: Sent string: "ciao" and string length is 4
FIFOCLIENT: Received string: "ciao" and length is 4
Enter string: why won't you reverse?
FIFOCLIENT: Sent string: "why won't you reverse?" and string length is 29
FIFOCLIENT: Received string: "why won't you reverse?" and length is 29
It's also worth noting that before starting to write this question, the server behaviour was completely different: instead of receiving nothing and printing like you see here, it would hang indefinitely after the "read" (and I haven't changed the code one bit, except for changing the FIFO_FILE path)
You let the server sleep after writing, but not the client. That way, the client might still read its own output back before the server can fetch it. So at the very least you should add a sleep after both writes, letting the server sleep a bit longer to make sure the client wakes up first to read the server's output.
Accessing the same end of an unnamed pipe (created via the pipe function) concurrently from several processes is undefined behaviour. While I'm not sure about named pipes, I'd assume pretty much the same holds there as well. Synchronising concurrent access to such an end via simple delays (sleep, usleep) might do the trick, but it is a pretty unsafe method.
I'd rather recommend two separate FIFOs instead (as Tony Tannous proposed already), one for each direction, opening the respective ends O_RDONLY or O_WRONLY as needed. That way you get full-duplex communication instead of half-duplex, and you don't need any further synchronisation (delays, in the simplest variant) either; a slightly fuller sketch of the server side follows this outline:
// server
int fd_cs = open(FIFO_FILE_CS, O_RDONLY);
int fd_sc = open(FIFO_FILE_SC, O_WRONLY);
read(fd_cs, ...);
write(fd_sc, ...);
// client
int fd_cs = open(FIFO_FILE_CS, O_WRONLY);
int fd_sc = open(FIFO_FILE_SC, O_RDONLY);
write(fd_cs, ...);
read(fd_sc, ...);
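To make that more concrete, here is a minimal sketch of the server side with two FIFOs. The paths /tmp/fifo_cs (client to server) and /tmp/fifo_sc (server to client) are just placeholder names, and the open order matters because open() on a FIFO blocks until the other end has been opened; the client mirrors this with the flags swapped (O_WRONLY on the first FIFO, O_RDONLY on the second):
/* Sketch of the two-FIFO server, assuming the placeholder paths below.
 * The client opens FIFO_CS with O_WRONLY and FIFO_SC with O_RDONLY,
 * in that same order, so the two blocking open() calls pair up. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define FIFO_CS "/tmp/fifo_cs"   /* client -> server */
#define FIFO_SC "/tmp/fifo_sc"   /* server -> client */

int main(void) {
    mkfifo(FIFO_CS, 0640);
    mkfifo(FIFO_SC, 0640);

    int fd_cs = open(FIFO_CS, O_RDONLY);   /* blocks until the client opens it for writing */
    int fd_sc = open(FIFO_SC, O_WRONLY);   /* blocks until the client opens it for reading */
    if (fd_cs == -1 || fd_sc == -1) {
        perror("open");
        return 1;
    }

    char buf[80];
    ssize_t n;
    while ((n = read(fd_cs, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        if (strcmp(buf, "end") == 0)        /* same termination protocol as the tutorial */
            break;
        /* reverse_string(buf); */
        write(fd_sc, buf, strlen(buf));     /* the reply goes out on the other FIFO */
    }

    close(fd_cs);
    close(fd_sc);
    return 0;
}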
I tried to read file contents using libaio, but I found that even if I don't call the io_getevents method, the expected contents are obtained.
Is it necessary to call io_getevents after I call io_submit?
If yes, why did this happen?
If no, when should I call io_getevents to get the read result? Can I call it multiple times?
Here is the demo code:
#include <stdio.h>
#include <libaio.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <stdlib.h>

#define error() printf("error [%s : %d]\n", __FILE__, __LINE__)
#define BUFF_SIZE 51
#define BUFF_CNT 50
#define READLEN 4194304

int main(int argc, char *argv[])
{
    int fd = open(__FILE__, O_RDONLY);
    if (fd == -1) {
        error();
        return -1;
    }

    io_context_t ctx = 0;
    int err = io_setup(BUFF_CNT, &ctx);
    if (err != 0) {
        error();
        return -1;
    }

    struct iocb *io = (struct iocb *)malloc(sizeof(struct iocb));
    if (NULL == io) {
        printf("alloc struct iocb failed");
        return -1;
    }
    memset(io, 0x00, sizeof(*io));

    char double_check_m_buf[READLEN];
    io_prep_pread(io, fd, double_check_m_buf, READLEN, 0);

    int rc = io_submit(ctx, 1, &io);
    if (rc < 0) {
        printf("aio send read one block failed");
        return -1;
    }

    printf("aio send read one block success, len: %zu \n content: %s",
           strlen(double_check_m_buf), double_check_m_buf);

    return 0;
}
There are two things that may be going on here.
There are a bunch of cases where io_submit can't do asynchronous I/O. When those happen, it falls back to synchronous I/O. It's up to the caller to realize that the requested I/O happened inline, within the call itself.
In your case, you need to open the file with O_RDONLY|O_DIRECT. That will likely do the trick, presuming your filesystem supports AIO.
If you open __FILE__ with O_DIRECT, io_submit will bypass the kernel's buffer cache. It may still complete rapidly. Depending on what the underlying storage is, it may very well complete before you can inspect the buffer. It's an off-chance, but still, it's possible.
The only way to be sure your I/O has completed is to call io_getevents. That's the only way to retrieve the error if it fails. Here, __FILE__ is likely far shorter than 4 MiB, so you'll need the length of the read that's returned in that structure as well.
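For what it's worth, here is a rough sketch of submitting one read and then waiting for it with io_getevents. The file name, queue depth and 4096-byte length are placeholders, and it assumes a filesystem that supports O_DIRECT (which in turn requires an aligned buffer):
#define _GNU_SOURCE            /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define READLEN 4096           /* placeholder size, aligned for O_DIRECT */

int main(void)
{
    /* "some_file" is a placeholder; O_DIRECT bypasses the page cache so the
       request really is queued asynchronously (if the filesystem allows it). */
    int fd = open("some_file", O_RDONLY | O_DIRECT);
    if (fd == -1) { perror("open"); return 1; }

    io_context_t ctx = 0;
    if (io_setup(8, &ctx) != 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

    void *buf;                 /* O_DIRECT needs an aligned buffer */
    if (posix_memalign(&buf, 4096, READLEN) != 0) return 1;

    struct iocb cb;
    struct iocb *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, READLEN, 0);

    if (io_submit(ctx, 1, cbs) != 1) { fprintf(stderr, "io_submit failed\n"); return 1; }

    /* Block until the request completes; ev.res is the number of bytes
       actually read, or a negative error code on failure. */
    struct io_event ev;
    if (io_getevents(ctx, 1, 1, &ev, NULL) == 1)
        printf("read completed, res = %ld\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}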
While I was playing with pipes in C++ I stumbled across something rather interesting.
#include <cstdio>
#include <iostream>
#include <string>
int main()
{
    FILE *pystream = popen("python","w"); // Calling the python console
    fprintf(pystream,"print(2+3)");       // Making it do something
    pclose(pystream);                     // Closing the pipe
    return 0;
}
This code outputs 5, but why? And can the "output" be read or stored somewhere?
I'm fairly new to C buffers and pipes, so I don't know if I'm using the right terminology.
When you write like this, you're effectively writing to the stdin of the process you just started, in this case the Python REPL. On Linux the Python REPL gets the expression directly, i.e. it's not being typed in. This is the system call (as strace shows it):
read(0, "print(2+3)", 4096) = 10
If you were doing this in a terminal, each character would be read in one at a time, and when the terminal gets a carriage return it writes a newline \n, i.e.
read(0, "\r", 1) = 1
write(1, "\n", 1) = 1
It then performs the calculation and writes the result out:
write(1, "5\n", 2) = 2
You're bypassing the terminal and writing the data directly to the stdin of the Python interpreter. If you want to see how this can easily break, try this code.
#include <cstdio>
#include <iostream>
#include <string>
int main()
{
    FILE *pystream = popen("python","w"); // Calling the python console
    fprintf(pystream,"print(2+3)");       // Making it do something
    fprintf(pystream,"print(2+3)");       // Making it do something
    pclose(pystream);                     // Closing the pipe
    return 0;
}
You will get a syntax error; to make it work, the stdin needs to be fed a carriage return or a newline to separate the two lines, i.e. add a carriage return...
fprintf(pystream,"print(2+3)\r");
The standard output of the command you're executing is connected to the standard output of your program, so when Python writes to its standard output, it appears on the standard output of your process too.
If you had pending output before you ran Python, that won't be flushed and will appear after Python returns. For example,
std::cout << "Hello";
(no endl, no \n in the string) before popen() and
std::cout << " World\n";
after pclose() means that you'll see the Python output before Hello World.
If you want to write to Python and read the results back in your program, you can no longer use popen() and pclose(). Instead, you need to use pipe() twice (one pipe to talk to Python, one pipe to read from Python), and you need to use fork(), exec(), dup2() — probably; dup() otherwise — and close() to make the operations work. You'll be using file descriptors and hence read() and write() system calls in the parent process, too.
Those are all C functions (system calls) more than C++ functions.
This code works:
#include <unistd.h>
#include <cstdio>
#include <cstring>
int main()
{
int p1[2];
int p2[2];
if (pipe(p1) != 0 || pipe(p2) != 0)
return 1;
int pid;
if ((pid = fork()) < 0)
return 1;
if (pid == 0)
{
dup2(p1[0], STDIN_FILENO);
dup2(p2[1], STDOUT_FILENO);
close(p1[0]);
close(p1[1]);
close(p2[0]);
close(p2[1]);
execlp("python", "python", (char *)0);
fprintf(stderr, "failed to exec python\n");
return 1;
}
else
{
close(p1[0]);
close(p2[1]);
const char command[] = "print(2+3)\n";
int len = strlen(command);
if (write(p1[1], command, len) != len)
{
fprintf(stderr, "failed to write command to python\n");
return 1;
}
close(p1[1]);
char buffer[256];
int nbytes;
if ((nbytes = read(p2[0], buffer, sizeof(buffer))) <= 0)
{
fprintf(stderr, "failed to read response from python\n");
return 1;
}
printf("Python said: (%d) [%.*s]\n", nbytes, nbytes, buffer);
close(p2[0]);
printf("Finished\n");
}
return 0;
}
The bad news is that changing this code to write more than one command while synchronously reading a response from Python does not work. Python does not process each line separately as it does when its input is a terminal; it reads all the data before it responds at all. You can work around that with python -i, but then the prompts from Python appear on stderr. So, you can redirect that to /dev/null to lose it:
#include <unistd.h>
#include <fcntl.h>
#include <cstdio>
#include <cstring>
int main()
{
int p1[2];
int p2[2];
if (pipe(p1) != 0 || pipe(p2) != 0)
return 1;
int pid;
if ((pid = fork()) < 0)
return 1;
if (pid == 0)
{
dup2(p1[0], STDIN_FILENO);
dup2(p2[1], STDOUT_FILENO);
close(p1[0]);
close(p1[1]);
close(p2[0]);
close(p2[1]);
int dn = open("/dev/null", O_WRONLY);
if (dn >= 0)
{
dup2(dn, STDERR_FILENO);
close(dn);
}
execlp("python", "python", "-i", (char *)0);
fprintf(stderr, "failed to exec python\n");
return 1;
}
else
{
close(p1[0]);
close(p2[1]);
const char *commands[] =
{
"print(2+3)\n",
"print(3+4)\n",
};
enum { NUM_COMMANDS = sizeof(commands) / sizeof(commands[0]) };
for (int i = 0; i < NUM_COMMANDS; i++)
{
int len = strlen(commands[i]);
if (write(p1[1], commands[i], len) != len)
{
fprintf(stderr, "failed to write command to python\n");
return 1;
}
char buffer[256];
int nbytes;
if ((nbytes = read(p2[0], buffer, sizeof(buffer))) <= 0)
{
fprintf(stderr, "failed to read response from python\n");
return 1;
}
printf("Python said: (%d) [%.*s]\n", nbytes, nbytes, buffer);
}
close(p1[1]);
close(p2[0]);
printf("Finished\n");
}
return 0;
}
Without redirection of stderr:
Python 2.7.10 (default, Oct 23 2015, 19:19:21)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.59.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> >>> Python said: (2) [5
]
>>> Python said: (2) [7
]
Finished
With redirection of stderr:
Python said: (2) [5
]
Python said: (2) [7
]
Finished
The disadvantage of losing the standard error output to /dev/null is that you won't get any notice when Python objects to what you send it to execute — the code will hang. Working around that is fun (a third pipe, and using poll() or epoll() or — perish the thought — select() would be one way around the problem).
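A rough sketch of what that watch could look like with poll(), assuming a hypothetical third pipe p3 whose read end carries Python's stderr (set up in the child with dup2(), just like the stdout pipe):
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch only: wait on Python's stdout (p2_read) and a hypothetical
 * stderr pipe (p3_read) at the same time, so an error reply doesn't
 * leave the parent blocked forever in read(). Returns 0 on success,
 * -1 on timeout, error or EOF. */
int drain_python(int p2_read, int p3_read)
{
    struct pollfd fds[2] = {
        { .fd = p2_read, .events = POLLIN },
        { .fd = p3_read, .events = POLLIN },
    };

    if (poll(fds, 2, 5000) <= 0)   /* 5 second timeout, or poll() error */
        return -1;

    char buffer[256];
    for (int i = 0; i < 2; i++) {
        if (fds[i].revents & POLLIN) {
            ssize_t n = read(fds[i].fd, buffer, sizeof(buffer));
            if (n <= 0)
                return -1;
            /* forward Python's stdout to our stdout, its stderr to our stderr */
            fprintf(i == 0 ? stdout : stderr, "%.*s", (int)n, buffer);
        }
    }
    return 0;
}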
This is the first time I'm communicating with an Arduino from my computer. I use Ubuntu 14.04. Below is the C program for writing to the device file; the Arduino shows up as ttyACM0.
When I compile it with gcc and run it, it crashes with:
Segmentation fault (core dumped)
How do I rectify this error?
#include<unistd.h>
#include<stdio.h>
int main() {
    char data[] = {'f','b','r'}; //Random data we want to send
    FILE *file;
    file = fopen("/dev/ttyACM0","w"); //Opening device file
    int i = 0;
    for(i = 0 ; i < 3 ; i++) {
        fprintf(file,"%c",data[i]); //Writing to the file
        fprintf(file,"%c",','); //To separate digits
        sleep(1);
    }
    fclose(file);
}
Pardon my ignorance. I tried researching this but couldn't make it work. Thanks in advance for your help.
You're getting a NULL return from fopen(); that NULL is then passed to fprintf(), which expects a valid FILE* and blows up, causing the SEGV.
If you use fopen you should check what it returns, so you can give the user something more useful than "segmentation fault".
The probable cause of the fopen() failure is you don't have permission to play with the serial port.
Normally you need the group dialout to be able to access the serial port.
As root do:
usermod -a -G dialout yourusername
Then log out and back in so you get the new group.
Consider using minicom or microcom (or any of the several other serial terminal programs) to access the serial port instead of writing your own.
I also suggest you have the Arduino send a hello message when it boots up so you can be sure you have the right baud rate etc...
You did not put any success check on the return value of fopen("/dev/ttyACM0","w");. In case fopen() fails, using file any further is undefined behavior, causing the segmentation fault. Do something like
file = fopen("/dev/ttyACM0","w"); //Opening device file
if (file)
{
    //do something with file
}
else
    return 0;
Also, add a return 0 before ending main().
// the following code:
// compiles cleanly
// performs appropriate error checking
// has proper return statement
#include <unistd.h> // sleep()
#include <stdio.h> // fopen(), fclose(), fprintf(), perror()
#include <stdlib.h> // exit() and EXIT_FAILURE
int main()
{
char data[] = {'f','b','r'}; //Random data we want to send
FILE *file;
if( NULL == (file = fopen("/dev/ttyACM0","w") ) ) //Opening device file
{ // then fopen failed
perror("fopen failed for ttyACM0" );
exit( EXIT_FAILURE );
}
// implied else, fopen successful
int i = 0;
for(i = 0 ; i < 3 ; i++)
{
if( 0 >= fprintf(file,"%c",data[i]) ) //Writing to the file
{ // fprintf failed
perror("fprintf data failed" );
exit( EXIT_FAILURE );
}
// implied else, fprintf successful for data
if( 0 >= fprintf(file,"%c",',') ) //To separate digits
{ // then, fprintf failed
perror( "fprintf for comma failed");
exit( EXIT_FAILURE );
}
// implied else, fprintf successful for comma
sleep(1);
} // end for
fclose(file);
return(0);
} // end function: main
On failure fopen returns NULL, so you are potentially dereferencing a NULL pointer; the correct way of doing this is to check the result of fopen. I would, however, suggest low-level I/O for this kind of thing, something like
#include <unistd.h>
#include <stdio.h>
#include <fcntl.h>
int main()
{
char data[] = {'f','b','r'}; //Random data we want to send
int fd;
int i;
fd = open("/dev/ttyACM0", O_WRONLY); //Opening device file
if (fd == -1)
{
perror("cannot open /dev/ttyACM0");
return -1;
}
for(i = 0 ; i < 3 ; i++)
{
write(fd, &(data[i]), 1);
write(fd, ",", 1);
sleep(1);
}
close(fd);
return 0;
}
On error open returns the special value -1, so you should abort instead of writing to it.
I'm pretty sure that in your case there will be a permission-denied error: normally the /dev/tty* devices belong to the group dialout and have group write permission by default, but since your user probably doesn't belong to that group, you don't have write access to /dev/ttyACM0.
Is that possible? I'd like easy access to the executable's memory so I can edit it. Alternatively, when I'm not the administrator, is it possible to edit the executable's memory from another process? I've tried the ptrace library and it fails if I'm not the administrator. I'm on Linux.
I'm not entirely sure what you are asking, but this is possible with shared memory.
See here: http://www.kernel.org/doc/man-pages/online/pages/man7/shm_overview.7.html
This is what a debugger does. You could look at the code of an open source debugger, e.g. gdb, to see how it works.
The answer:
Yes - it works: you don't have to be administrator / root, but of course you need the rights to access the process' memory, i.e. same user.
No - it is not easy
The possibility to write to /proc/pid/mem was added to the Linux kernel some time ago, so it depends on the kernel you are using. The small programs below were checked with kernel 3.2, where this works, and 2.6.32, where it fails.
The solution consists of two programs:
A 'server' which, when started, allocates some memory, writes some pattern into it, and every three seconds prints the memory contents placed right after the pattern.
A 'client' which connects to the server via /proc/pid/maps and /proc/pid/mem, searches for the pattern, and writes some other string into the server's memory.
The implementation uses the heap, but, as long as the permissions allow it, it is also possible to change other portions of the other process' memory.
This is implemented in C because it is very 'low level', but it should work from C++ as well. It is a proof of concept, not production code: some error checks are missing and it has some fixed-size buffers.
memholder.c
/*
* Alloc memory - write in some pattern and print out the some bytes
* after the pattern.
*
 * Compile: gcc -Wall -Werror memholder.c -o memholder
*/
#include <sys/types.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <unistd.h>
int main() {
    char * m = (char*) malloc(2048);
    memset(m, '\xAA', 1024);
    strcpy(m + 1024, "Some local data.");

    printf("PID: %d\n", getpid());

    while(1) {
        printf("%s\n", m + 1024);
        sleep(3);
    }
    return 0;
}
memwriter.c
/*
* Searches for a pattern in the given PIDs memory
* and changes some bytes after them.
*
* Compile: gcc -Wall -std=c99 -Werror memwriter.c -o memwriter
*/
#include <sys/types.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/ptrace.h>
#include <sys/wait.h>
int open_proc_file(pid_t other_pid, char const * const sn,
int flags) {
char fname[1024];
snprintf(fname, 1023, "/proc/%d/%s", other_pid, sn);
// Open file for reading and writing
int const fd = open(fname, flags );
if(fd==-1) {
perror("Open file");
exit(1);
}
return fd;
}
void get_heap(int fd_maps, size_t * heap_start, size_t * heap_end) {
char buf[65536];
ssize_t const r = read(fd_maps, buf, 65535);
if(r==-1) {
perror("Reading maps file");
exit(1);
}
buf[r] = '\0';
char * const heap = strstr(buf, "[heap]");
if(heap==NULL) {
printf("[heap] not found in maps file");
exit(1);
}
// Look backward to the latest newline
char const * hl_start;
for(hl_start = heap; hl_start > buf && *hl_start != '\n';
--hl_start) {}
// skip \n
++hl_start;
// Convert to beginning and end address
char * lhe;
*heap_start = strtol(hl_start, &lhe, 16);
++lhe;
*heap_end = strtol(lhe, &lhe, 16);
}
int main(int argc, char *argv[]) {
if(argc!=2) {
printf("Usage: memwriter <pid>\n");
return 1;
}
pid_t const other_pid = atoi(argv[1]);
int fd_mem = open_proc_file(other_pid, "mem", O_RDWR);
int fd_maps = open_proc_file(other_pid, "maps", O_RDONLY);
size_t other_mem_start;
size_t other_mem_end;
get_heap(fd_maps, &other_mem_start, &other_mem_end);
ptrace(PTRACE_ATTACH, other_pid, NULL, NULL);
waitpid(other_pid, NULL, 0);
if( lseek(fd_mem, other_mem_start, SEEK_SET) == -1 ) {
perror("lseek");
return 1;
}
char buf[512];
do {
ssize_t const r = read(fd_mem, buf, 512);
if(r!=512) {
perror("read?");
break;
}
// Check for pattern
int pat_found = 1;
for(int i = 0; i < 512; ++i) {
    if( buf[i] != '\xAA' ) {
        pat_found = 0;
        break;
    }
}
if( ! pat_found ) continue;
// Write about one k of strings
char const * const wbuf = "REMOTE DATA - ";
for(int i = 0; i < 70; ++i) {
ssize_t const w = write(fd_mem, wbuf, strlen(wbuf));
if( w == -1) {
perror("Write");
return 1;
}
}
// Append a \0
write(fd_mem, "\0", 1);
break;
} while(1);
ptrace(PTRACE_DETACH, other_pid, NULL, NULL);
close(fd_mem);
close(fd_maps);
return 0;
}
Example output
$ ./memholder
PID: 2621
Some local data.
Some local data.
MOTE DATA - REMOTE DA...
Other interpretation
There is also another interpretation of your question (reading the headline rather than the question body): that you want to replace the executable of one process with another one. That can be easily handled by exec() (and friends):
From man exec:
The exec() family of functions replaces the current process image with a new process image.
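For illustration only, here is a minimal (hypothetical) example; on success the calling program's image is gone and ls runs in its place, so the perror line is only reached if the exec fails:
#include <stdio.h>
#include <unistd.h>

int main(void) {
    execlp("ls", "ls", "-l", (char *)0);   /* replaces this process image on success */
    perror("execlp failed");               /* only reached if exec fails */
    return 1;
}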
On Windows, the functions used for this are named ReadProcessMemory / WriteProcessMemory; you will, however, need administrative rights for this. The same goes for Linux: as I've said in my comment, no sane system would allow a user process to modify memory it doesn't own.
For Linux, the only function is ptrace. You will need to be administrator.
http://cboard.cprogramming.com/cplusplus-programming/92093-readprocessmemory-writeprocessmemory-linux-equivalent.html contains more detailed discussion.
Can you imagine the consequences of allowing a process to modify another process's memory without being administrator?
I'm using inotify to monitor a directory for any new file created in it, by adding a watch on the directory:
fd = inotify_init();
wd = inotify_add_watch(fd, "filename_with_path", IN_CLOSE_WRITE);
inotify_add_watch(fd, directory_name, IN_CLOSE_WRITE);
const int event_size = sizeof(struct inotify_event);
const int buf_len = 1024 * (event_size + FILENAME_MAX);
while(true) {
    char buf[buf_len];
    int no_of_events, count = 0;
    no_of_events = read(fd, buf, buf_len);
    while(count < no_of_events) {
        struct inotify_event *event = (struct inotify_event *) &buf[count];
        if (event->len) {
            if (event->mask & IN_CLOSE_WRITE) {
                if (!(event->mask & IN_ISDIR)) {
                    //It's here multiple times
                }
            }
        }
        count += event_size + event->len;
    }
}
When I scp a file to the directory, this loops infinitely. What is the problem with this code? It shows the same event name and event mask too, so it reports the same event an infinite number of times.
There are no break statements: if I find an event, I just print it and carry on waiting for another event on read(), which should be a blocking call. Instead, it starts looping infinitely. This means read doesn't block but returns the same value for one file over and over.
This entire operation runs on a separate boost::thread.
EDIT:
Sorry all. The error I was getting was not because of inotify but because of sqlite, which was tricky to detect at first. I think I jumped the gun here. With further investigation, I did find that inotify works perfectly well. The error actually came from the sqlite command ATTACH.
That command was not a read-only command as it was supposed to be; it was writing some metadata to the file, so inotify kept getting notifications again and again. Since they were happening so fast, it screwed up the application. I finally had to break up the code to understand why.
Thanks everyone.
I don't see anything wrong with your code... I'm running basically the same thing and it's working fine. I'm wondering if there's a problem with the test, or some part of the code that's omitted. If you don't mind, let's see if we can remove any ambiguity.
Can you try this out (I know it's almost the same thing, but just humor me) and let me know the results of the exact test?
1) Put the following code into test.c
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>
#include <errno.h>
#include <sys/inotify.h>
int main (int argc, char *argv[])
{
char target[FILENAME_MAX];
int result;
int fd;
int wd; /* watch descriptor */
const int event_size = sizeof(struct inotify_event);
const int buf_len = 1024 * (event_size + FILENAME_MAX);
strcpy (target, ".");
fd = inotify_init();
if (fd < 0) {
printf ("Error: %s\n", strerror(errno));
return 1;
}
wd = inotify_add_watch (fd, target, IN_CLOSE_WRITE);
if (wd < 0) {
printf ("Error: %s\n", strerror(errno));
return 1;
}
while (1) {
char buff[buf_len];
int no_of_events, count = 0;
no_of_events = read (fd, buff, buf_len);
while (count < no_of_events) {
struct inotify_event *event = (struct inotify_event *)&buff[count];
if (event->len){
if (event->mask & IN_CLOSE_WRITE)
if(!(event->mask & IN_ISDIR)){
printf("%s opened for writing was closed\n", target);
fflush(stdout);
}
}
count += event_size + event->len;
}
}
return 0;
}
2) Compile it with gcc:
gcc test.c
3) kick it off in one window:
./a.out
4) in a second window from the same directory try this:
echo "hi" > blah.txt
Let me know if that works correctly, showing output every time the file is written to, and does not loop as your code does. If so, there's something important you're omitting from your code. If not, then there's some difference in the systems.
Sorry for putting this in the "answer" section, but it was too much for a comment.
My guess is that read is returning -1, and since you don't ever try to handle the error, you get another error on the next call to read, which also returns -1.
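If that's the case, a check like the following sketch (the helper name is just for illustration) would make the failure visible instead of silently looping:
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

/* Sketch: read from the inotify fd, but treat a negative return as an
 * error instead of using it as a byte count. Returns the number of bytes
 * read, 0 if interrupted, or -1 after reporting the failure. */
ssize_t read_events(int fd, char *buf, size_t buf_len)
{
    ssize_t n = read(fd, buf, buf_len);
    if (n < 0) {
        if (errno == EINTR)   /* interrupted by a signal: caller may retry */
            return 0;
        perror("read on inotify fd");
        return -1;
    }
    return n;
}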