How to close a file in C that is already opened

I am writing an application to read DICOM files and I must use another library to do it. I found that the library opens the file but does not close it when it finishes, and the library is not open source. I know the open-file limit on Linux is 1024 and I can change that number, but I don't want to do it that way. I would like to close the file that was opened by the library. How can I close a file in C if I know it is open? I am using the DICOM2NII library from http://cbi.nyu.edu/software/dinifti.php.
This is the code that opens the file but never closes it:
bool DICOMImage::OpenFile(const char *path)
{
    bool retValue = true;
    DCM_OBJECT *handle_;
    unsigned long options = DCM_ORDERLITTLEENDIAN | DCM_FORMATCONVERSION | DCM_VRMASK;

    // Try opening as PART10; if that fails, it might be because the file
    // has no preamble, so then try it without DCM_PART10FILE.
    if ( DCM_OpenFile(path, options | DCM_PART10FILE, &handle_) != DCM_NORMAL )
    {
        DCM_CloseObject(&handle_);
        COND_PopCondition(TRUE);
        if ( DCM_OpenFile(path, options, &handle_) != DCM_NORMAL )
            retValue = false;
        else
            retValue = true;
    }

    return retValue;
}

In your DICOMImage class, add a member

DCM_OBJECT *handle_;

and close the file in your destructor:

DICOMImage::DICOMImage() : handle_(0) { ... }

DICOMImage::~DICOMImage()
{
    if (handle_ != 0)
        DCM_CloseObject(&handle_);
}

and use this member handle_ in DICOMImage::OpenFile() as well, of course.
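For instance, OpenFile might then look something like this; a sketch that reuses only the DCM_* calls already shown in the question (whether handle_ must be reset after DCM_CloseObject depends on the library):

bool DICOMImage::OpenFile(const char *path)
{
    unsigned long options = DCM_ORDERLITTLEENDIAN | DCM_FORMATCONVERSION | DCM_VRMASK;

    // Try PART10 first; fall back to opening without a preamble.
    if (DCM_OpenFile(path, options | DCM_PART10FILE, &handle_) == DCM_NORMAL)
        return true;

    DCM_CloseObject(&handle_);
    COND_PopCondition(TRUE);
    return DCM_OpenFile(path, options, &handle_) == DCM_NORMAL;
}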

You could first test all file descriptors to see which ones are in use, by doing a dummy fcntl(fd, F_GETFD) on every fd from 0 up to getdtablesize(). When the library function returns, there will be one more open fd, which you can then close with close(). You could also just call close(fd) on everything that wasn't open before; exactly one of those calls will succeed, and you can stop your search at that point.
It's likely you can do the initial probe only up to the first unused fd, and the library will end up using that fd, provided it doesn't do anything more complex than open one file. If it opens multiple files or uses dup(), the descriptor could end up elsewhere.
To spell this out:

#include <iostream>
#include <vector>
#include <unistd.h>
#include <fcntl.h>

std::vector<bool> getOpenFileMap()
{
    int limit = getdtablesize();
    std::vector<bool> result(limit);
    for (int fd = 0; fd < limit; ++fd)
        result[fd] = fcntl(fd, F_GETFD) != -1;
    return result;
}

void closeOpenedFiles(const std::vector<bool> &existing)
{
    int limit = existing.size();
    for (int fd = 0; fd < limit; ++fd)
        if (!existing[fd])
            close(fd);
}

int getLikelyFd()
{
    int limit = getdtablesize();
    for (int fd = 0; fd < limit; ++fd)
        if (fcntl(fd, F_GETFD) == -1)  // first descriptor not in use
            return fd;
    return -1;
}

int main()
{
    std::vector<bool> existing = getOpenFileMap();
    int fd = open("/dev/null", O_RDONLY);
    closeOpenedFiles(existing);
    bool closed = write(fd, "test", 4) == -1;
    std::cout << "complex pass " << std::boolalpha << closed << std::endl;

    int guess = getLikelyFd();
    fd = open("/dev/null", O_RDONLY);
    bool match = fd == guess;
    std::cout << "simple pass " << std::boolalpha << match << std::endl;
}

Related

Write/Read a stream of data (double) using named pipes in C++

I am trying to develop a little application in C++, in a Linux environment, which does the following:
1) gets a data stream (a series of arrays of doubles) from the output of a 'black box' and writes it to a pipe (the black box can be thought of as an ADC);
2) reads the data stream from the pipe and feeds it to another application which requires these data on stdin.
Unfortunately, I was not able to find tutorials or examples. The best way I found to realize this is summarized in the following test-bench example:
#include <iostream>
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>

#define FIFO "/tmp/data"

using namespace std;

int main() {
    int fd;
    int res = mkfifo(FIFO, 0777);
    float *writer = new float[10];
    float *buffer = new float[10];
    if( res == 0 ) {
        cout<<"FIFO created"<<endl;
        int fres = fork();
        if( fres == -1 ) {
            // throw an error
        }
        if( fres == 0 )
        {
            fd = open(FIFO, O_WRONLY);
            int idx = 1;
            while( idx <= 10 ) {
                for(int i=0; i<10; i++) writer[i]=1*idx;
                write(fd, writer, sizeof(writer)*10);
            }
            close(fd);
        }
        else
        {
            fd = open(FIFO, O_RDONLY);
            while(1) {
                read(fd, buffer, sizeof(buffer)*10);
                for(int i=0; i<10; i++) printf("buf: %f",buffer[i]);
                cout<<"\n"<<endl;
            }
            close(fd);
        }
    }
    delete[] writer;
    delete[] buffer;
}
The problem is that, when running this example, I do not get a printout of all 10 arrays I am feeding into the pipe; I keep getting only the first array (filled with 1s).
Any suggestion/correction/reference is very welcome, to make it work and to learn more about the behavior of pipes.
EDIT:
Sorry guys! I found a very trivial error in my code: in the while loop of the writer part, I am not incrementing the index idx. Once I correct that, I get the printout of all the arrays.
But now I am facing another problem: when using many large arrays, only some of them are printed out (the whole sequence never appears), as if the reader cannot keep up with the speed of the writer. Here is the new sample code:
#include <iostream>
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h>

#define FIFO "/tmp/data"

using namespace std;

int main(int argc, char** argv) {
    int fd;
    int res = mkfifo(FIFO, 0777);
    int N(1000);
    float writer[N];
    float buffer[N];
    if( res == 0 ) {
        cout<<"FIFO created"<<endl;
        int fres = fork();
        if( fres == -1 ) {
            // throw an error
        }
        if( fres == 0 )
        {
            fd = open(FIFO, O_WRONLY | O_NONBLOCK);
            int idx = 1;
            while( idx <= 1000 ) {
                for(int i=0; i<N; i++) writer[i]=1*idx;
                write(fd, &writer, sizeof(float)*N);
                idx++;
            }
            close(fd);
            unlink(FIFO);
        }
        else
        {
            fd = open(FIFO, O_RDONLY);
            while(1) {
                int res = read(fd, &buffer, sizeof(float)*N);
                if( res == 0 ) break;
                for(int i=0; i<N; i++) printf(" buf: %f",buffer[i]);
                cout<<"\n"<<endl;
            }
            close(fd);
        }
    }
}
Is there some mechanism I should implement to make write() wait while read() is still reading data from the FIFO, or am I missing something trivial in this case as well?
Thanks to those who have already answered the previous version of my question; I have implemented your suggestions.
The arguments to read and write are incorrect. Correct ones:
write(fd, writer, 10 * sizeof *writer);
read(fd, buffer, 10 * sizeof *buffer);
Also, these functions may do partial reads/writes, so that the code needs to check the return values to determine whether the operation must be continued.
It is also not clear why the writer's while( idx <= 10 ) loop never increments idx; that loop never ends, even on a 5 GHz CPU. The same comment applies to the reader.
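To illustrate the return-value checking mentioned above, here is a minimal write-all loop; it is a sketch, not from the original answer, and the helper name is mine:

#include <unistd.h>
#include <cerrno>
#include <cstddef>

// Keep calling write() until every byte is sent or a real error occurs.
ssize_t write_all(int fd, const void *buf, size_t count)
{
    const char *p = static_cast<const char *>(buf);
    size_t left = count;
    while (left > 0) {
        ssize_t n = write(fd, p, left);
        if (n < 0) {
            if (errno == EINTR)
                continue;      // interrupted by a signal, just retry
            return -1;         // genuine error
        }
        p += n;                // partial write: advance and continue
        left -= n;
    }
    return static_cast<ssize_t>(count);
}

A matching read loop works the same way, except that a return value of 0 means end of file.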

File Locks not Preventing File Overwrite

I have the following C++ code which writes "Line from #" to a file while managing a file lock. I am running this code on two different computers which share at least part of their file system; that is, I can access my files by logging onto either of these computers.
On the first computer I run the program as ./test 1 (so it will print "Line from 1" 20,000 times) and on the second computer as ./test 17. I am starting these programs close enough in time that the writes to file.txt should be interleaved and controlled by the file locks.
The problem is that I am losing output: the file has 22,770 newlines, but it should have exactly 40,000.
wc file.txt
22770 68310 276008 file.txt
Also,
cat -n file.txt | grep 18667
18667 ne from 17
My question is: why are my file locks not preventing the file from being overwritten, and how can I fix my code so that multiple processes can write to the same file without losing data?
#include <unistd.h>
#include <fcntl.h>
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <iostream>

using namespace std;

void inline Set_Lck(struct flock &flck, const int fd)
{
    flck.l_type = F_WRLCK;
    if (fcntl(fd, F_SETLKW, &flck) == -1) {
        perror("fcntl");
        exit(1);
    }
}

void inline Release_Lck(struct flock &flck, const int fd)
{
    flck.l_type = F_UNLCK;
    if (fcntl(fd, F_SETLK, &flck) == -1) {
        perror("fcntl");
        exit(1);
    }
}

void Print_Spec(fstream &fout, ostringstream &oss, struct flock &flck, const int fd)
{
    Set_Lck(flck, fd);
    fout.seekp(0, ios_base::end);
    fout << oss.str() << endl;
    flush(fout);
    Release_Lck(flck, fd);
}

int main(int argc, char **argv)
{
    int fd_cd;
    struct flock flock_cd;
    ostringstream oss;
    fstream comp_data;
    const string s_cd_lck = "file_lock.txt";
    const string s_cd = "file.txt";
    int my_id;

    if (argc == 1) {
        my_id = 0;
    } else if (argc == 2) {
        my_id = atoi(argv[1]);
    } else {
        fprintf(stderr, "error -- usage ./test [my_id]\n");
        exit(1);
    }

    /* Open file.txt for writing; create it if non-existent. */
    comp_data.open(s_cd.c_str(), ios::app | ios::out);
    if (comp_data.fail()) {
        perror("comp_data.open");
        exit(1);
    }

    /* Open the file that we will be locking. */
    fd_cd = open(s_cd_lck.c_str(), O_CREAT | O_WRONLY, 0777);
    if (fd_cd == -1) {
        perror("fd_cd = open");
        exit(1);
    }

    /* Set up the lock. */
    flock_cd.l_type = F_WRLCK;
    flock_cd.l_whence = SEEK_SET;
    flock_cd.l_start = 0;
    flock_cd.l_len = 0;
    flock_cd.l_pid = getpid();

    for (int i = 0; i < 20000; ++i) {
        oss.str(""); /* Yes, this can be moved outside the loop. */
        oss << "Line from " << my_id << endl;
        Print_Spec(comp_data, oss, flock_cd, fd_cd);
    }
    return 0;
}
I am using C++ and this program runs on Red Hat Enterprise Linux Server release 7.2 (Maipo).
My Research
I am not sure if part of the answer comes from the following Stack Overflow post (https://stackoverflow.com/a/2059059/6417898), which states that "locks are bound to processes."
At this website (http://perl.plover.com/yak/flock/samples/slide005.html), the author advises against using LOCK_UN with flock and suggests closing the file each time and reopening it as needed, so as to flush the file buffer. I don't know whether this carries over to fcntl, or whether it is even necessary if I flush the file buffer manually.
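For illustration, here is a minimal sketch of that close-and-reopen pattern, adapted from flock to fcntl and locking the data file itself, so nothing lingers in a userspace buffer. The helper name and the adaptation are mine, and whether the lock is actually honored across two machines depends on the shared file system (e.g. NFS lock support):

#include <fcntl.h>
#include <unistd.h>
#include <cstring>

// Append one record under an fcntl() write lock, opening and closing
// the data file every time so the write bypasses stdio/fstream buffers.
bool append_record(const char *path, const char *line)
{
    int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0666);
    if (fd == -1)
        return false;

    struct flock fl{};
    fl.l_type = F_WRLCK;     // exclusive write lock
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;            // 0 means "lock the whole file"
    if (fcntl(fd, F_SETLKW, &fl) == -1) { close(fd); return false; }

    ssize_t len = static_cast<ssize_t>(strlen(line));
    bool ok = write(fd, line, len) == len;

    close(fd);               // closing the descriptor also releases the lock
    return ok;
}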

Possible to retrieve input from user while running another process?

Is it possible to use getline(cin, buffer); at the top of my program, then have an "animated menu" still running below it?
For example (very basic):
#include <iostream>
#include <cstdio>
#include <string>

using namespace std;

const int kMaxWait = 100;  // was left undefined in the original snippet

int main()
{
    string buffer;
    getline(cin, buffer);
    for (int i = 0; i < kMaxWait; i++)
    {
        printf("counter waiting for user input %d", i);
        if (1 >= buffer.length())
            break;
    }
}
Would I have to fork that loop somehow so it keeps counting and displaying the counter until the user enters something?
One possible answer, given in the comments, is to use threads. But that's not necessary; there is a way to do this without threads:
1) Make stdin a non-blocking file descriptor.
2) Wait for stdin to become readable via poll()/select(); in the meantime do your animation, etc.
3) Make stdin a blocking file descriptor again.
4) Use std::getline().
There are also some ancillary issues to consider, such as the buffering that comes from std::streambuf, so before doing all of that, check first whether there is already something to read from std::cin. A sketch of steps 1 and 3 follows.
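Here is a minimal sketch of the non-blocking toggle; the helper name is mine, not from the answer:

#include <fcntl.h>

// Switch O_NONBLOCK on or off for a descriptor (steps 1 and 3 above).
bool set_nonblocking(int fd, bool enable)
{
    int flags = fcntl(fd, F_GETFL);
    if (flags == -1)
        return false;
    flags = enable ? (flags | O_NONBLOCK) : (flags & ~O_NONBLOCK);
    return fcntl(fd, F_SETFL, flags) != -1;
}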
This is something I used some time ago. It's quite rudimentary, but you can get the gist of the process, which uses poll. It returns true if there is input, and puts it in str; false otherwise. So you can put this in your loop somewhere and take action when there is input.
#include <poll.h>
#include <unistd.h>
#include <string>

const int MAX_BUFF_SIZE = 1024;  // was left undefined in the original

bool polled_input(std::string& str)
{
    struct pollfd fd_user_in;
    fd_user_in.fd = STDIN_FILENO;
    fd_user_in.events = POLLIN;
    fd_user_in.revents = 0;

    int rv = poll(&fd_user_in, 1, 0);  // timeout 0: just check, don't wait
    if (rv == -1) { /* error */ }
    else if (rv == 0) return false;    // nothing to read yet
    else if (fd_user_in.revents & POLLIN)
    {
        char buffer[MAX_BUFF_SIZE];
        int rc = read(STDIN_FILENO, buffer, MAX_BUFF_SIZE - 1);
        if (rc >= 0)
        {
            buffer[rc] = '\0';
            str = std::string(buffer);
            return true;
        }
        else { /* error */ }
    }
    else { /* error */ }
    return false;  // treat all error paths as "no input"
}
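And a hypothetical way to drive it from the animation loop (my sketch, assuming polled_input() from the snippet above is in scope):

#include <iostream>
#include <string>
#include <unistd.h>

int main()
{
    std::string input;
    int frame = 0;
    // Animate until polled_input() reports a line of user input.
    while (!polled_input(input)) {
        std::cout << "\rcounter waiting for user input " << frame++ << std::flush;
        usleep(100000);  // ~10 animation frames per second
    }
    std::cout << "\nyou typed: " << input << "\n";
}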
select is meant for this: multiplexed, blocking I/O. It can be done without poll, I think:
#include <iostream>
#include <cstdlib>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char **arg)
{
    const int time_in_secs = 10;
    const int buffer_size = 1024;

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(STDIN_FILENO, &readfds);

    struct timeval tv;
    tv.tv_sec = time_in_secs;
    tv.tv_usec = 0;

    int ret = select(STDIN_FILENO + 1, &readfds, NULL, NULL, &tv);
    if (!ret)
    {
        std::cout << "Timeout\n";
        exit(1);
    }

    char buf[buffer_size] = "";
    if (FD_ISSET(STDIN_FILENO, &readfds))
    {
        int len = read(STDIN_FILENO, buf, buffer_size - 1);
        if (len > 0)
            buf[len] = '\0';
    }
    std::cout << "You typed: " << buf << "\n";
    return 0;
}

Number of open files in a C++ program

Is there a simple way to get the number of files opened by a C++ program?
I would like to do it from my code, ideally in C++.
I found this blog article, which loops through all the available file descriptors and tests the result of fstat, but I am wondering whether there is a simpler way to do it.
Edit
It seems that there is no solution other than keeping a count of the files opened. Thanks to everybody for your help.
Kevin
Since the files are FILE *, we could do something like this:
In a header file that gets included everywhere:

#define fopen(x, y) debug_fopen(x, y, __FILE__, __LINE__)
#define fclose(x) debug_fclose(x)

and in "debugfile.cpp" (which obviously must NOT use the above #defines):

#include <cstdio>
#include <iostream>
#include <map>
#include <string>

struct FileInfo
{
    FileInfo(const char *nm, const char *fl, int ln) :
        name(nm), file(fl), line(ln) {}

    std::string name;
    const char *file;
    int line;
};

std::map<FILE*, FileInfo> filemap;

FILE *debug_fopen(const char *fname, const char *mode, const char *file, int line)
{
    FILE *f = fopen(fname, mode);
    if (f)
    {
        FileInfo inf(fname, file, line);
        filemap.insert(std::make_pair(f, inf));
    }
    return f;
}

int debug_fclose(FILE *f)
{
    int res = fclose(f);
    filemap.erase(f);
    return res;
}

// Called at some points.
void debug_list_openfiles()
{
    for (auto &i : filemap)
    {
        std::cerr << "File " << (void *) i.first << " opened as " << i.second.name
                  << " at " << i.second.file << ":" << i.second.line << std::endl;
    }
}

(I haven't compiled this code; it's meant to show the concept. It may have minor bugs, but the concept should hold, as long as it is your code, and not some third-party library, that is leaking.)
This is a legitimate question: I count open file descriptors in unit tests to verify none has leaked. On Linux systems there is one entry in /proc/self/fd for each open file descriptor, so you just have to count them. In C++17 it looks like this:

#include <filesystem>

long file_descriptor_count() {
    return std::distance(std::filesystem::directory_iterator("/proc/self/fd"),
                         std::filesystem::directory_iterator{});
}
It is good practice to keep the scope of an open file as small as possible: open the file, dump all the information you want (or read it into a buffer), then close it. In the usual case this means you have 3 fds (stdin/stdout/stderr) plus whatever files are currently open.
If you do keep files open, tracking them manually is best: keep a global fdCounter variable, increment it after a successful file open, and decrement it after closing.
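A minimal sketch of that counter idea (the wrapper names counted_open/counted_close are hypothetical, not from any library):

#include <atomic>
#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

std::atomic<int> fdCounter{0};  // global count of descriptors we opened

int counted_open(const char *path, int flags, mode_t mode = 0)
{
    int fd = ::open(path, flags, mode);
    if (fd != -1)
        ++fdCounter;  // only count successful opens
    return fd;
}

int counted_close(int fd)
{
    int rc = ::close(fd);
    if (rc == 0)
        --fdCounter;
    return rc;
}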
If you are under Linux, this information is available under /proc/your_pid/fd.
Then use lstat on each entry to keep only regular files.
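A sketch of that approach, assuming Linux (note: the answer says lstat, but to classify what a descriptor points at you must follow the /proc symlink, so this sketch uses stat on the link):

#include <dirent.h>
#include <sys/stat.h>
#include <cstdio>

// Count open descriptors that refer to regular files, via /proc/self/fd.
int count_regular_files()
{
    DIR *dp = opendir("/proc/self/fd");
    if (!dp)
        return -1;

    int count = 0;
    struct dirent *ep;
    while ((ep = readdir(dp)) != nullptr) {
        char path[64];
        snprintf(path, sizeof path, "/proc/self/fd/%s", ep->d_name);
        struct stat st;
        // stat() follows the /proc symlink to the real target.
        if (stat(path, &st) == 0 && S_ISREG(st.st_mode))
            ++count;
    }
    closedir(dp);
    return count;
}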
If you encapsulated it properly, it should be simple to add reference counters or logging to it and print them to the console.
One approach to debugging it is to override the open calls with your own implementation and from there call the real thing. Then you can also put some logging in, to see whether you lose file descriptors. How do you open the files? With open(), or are you using fopen()?
Something like this, maybe:

#include <iostream>
#include <fcntl.h>
#include <unistd.h>

inline int log_open(const char *p, int flags, int mode)
{
    int fd = ::open(p, flags, mode);
    std::cout << "OPEN: " << fd << std::endl;
    return fd;
}

inline int log_close(int fd)
{
    int rc = ::close(fd);
    std::cout << "CLOSE: " << fd << std::endl;
    return rc;
}

#define open(p, f, m) log_open(p, f, m)
#define close(fd) log_close(fd)

int main(int argc, char *argv[])
{
    int fd = open("tmp.txt", O_RDWR | O_CREAT | O_TRUNC, 0666);
    std::cout << "FD: " << fd << std::endl;
    if (fd != -1)
        close(fd);
    return 0;
}
In my experience, by the time you need to count the number of file descriptors, you don't know where they were opened, by what submodule or library. Thus, wrapping open/close is not a viable strategy. Brute-force counting seems to be the only way.
The domain hosting the original blog post no longer resolves in DNS. I copy two proposals from "Find current number of open filehandles (NOT lsof)":
#include <stdio.h>
#include <unistd.h>

/* FDMAX should be retrieved from the process limits, but a constant
   value of >= 4K should be adequate for most systems. */
#define FDMAX 4096

int j, n = 0;

/* count open file descriptors */
for (j = 0; j < FDMAX; ++j)
{
    int fd = dup (j);
    if (fd < 0)
        continue;
    ++n;
    close (fd);
}
printf ("%d file descriptors open\n", n);
and also this:
#include <stdio.h>
#include <sys/types.h>
#include <dirent.h>

int main (void)
{
    DIR *dp;
    struct dirent *ep;

    dp = opendir ("/proc/MYPID/fd/");
    if (dp != NULL)
    {
        while ((ep = readdir (dp)))
            puts (ep->d_name);
        (void) closedir (dp);
    }
    else
        perror ("Couldn't open the directory");

    return 0;
}

program that reacts to inotify and prints the events

I am working on Ubuntu. I want to monitor a folder and print every event that pops up in its subfolders (print the files).
I have the following code, but it doesn't work: when executed, there is no printout of the events.
In the second code I only see the events from the folder itself; the events from the subfolders do not show up.
#include <string>
#include <iostream>
#include <stdio.h>

using namespace std;

std::string exec(char* cmd) {
    FILE* pipe = popen(cmd, "r");
    if (!pipe) return "ERROR";
    char buffer[256];
    std::string result = "";
    while(!feof(pipe)) {
        if(fgets(buffer, 256, pipe) != NULL)
            result += buffer;
    }
    pclose(pipe);
    cout<<"result is: "<<result<<endl;
    return result;
}

int main()
{
    //while(1)
    //{
    string s=exec((char*)"inotifywait -rme create /home/folder/");
    cout << s << endl;
    //}
    return 0;
}
This code only prints the events from the folder I'm monitoring. It doesn't print the events from each subfolder. I don't know how to improve it for my needs.
#include <sys/inotify.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <iostream>

void processNewFiles(int fd, int wd);

int main(int argc, char** argv)
{
    const char* dirPath = "/home/folder/"; //argv[1];
    int fd = inotify_init();
    int wd = inotify_add_watch(fd, dirPath, IN_CREATE);
    if (wd >= 0)  // inotify_add_watch returns -1 on error
    {
        processNewFiles(fd, wd);
        inotify_rm_watch(fd, wd);
    }
}

void processNewFiles(int fd, int wd)
{
    bool done = false;
    do
    {
        int qLen = 0;
        ioctl(fd, FIONREAD, &qLen);  // how many bytes are queued?
        char* buf = new char[qLen];
        int num = read(fd, buf, qLen);
        if (num == qLen)
        {
            inotify_event* iev = reinterpret_cast<inotify_event*>(buf);
            if (iev->wd == wd && iev->mask & IN_CREATE)
            {
                std::cout << "New file created: " << iev->name << std::endl;
            }
        }
        delete [] buf;
    } while (!done);
}
Your second solution does not work because inotify_add_watch does not work recursively: you would have to add watches for the subdirectories manually. As this can be tedious, it is also possible to use the inotifywait utility, as you do in your first example. A sketch of adding the watches manually is shown below.
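For completeness, a sketch of adding watches recursively with nftw(); the callback name and the global are mine, and newly created subdirectories would still need watches added when their own IN_CREATE events arrive:

#include <sys/inotify.h>
#include <sys/stat.h>
#include <ftw.h>
#include <cstdio>

static int g_fd;  // the inotify instance; set before walking the tree

// nftw() callback: add a watch for every directory encountered.
static int add_watch_cb(const char *path, const struct stat *,
                        int typeflag, struct FTW *)
{
    if (typeflag == FTW_D && inotify_add_watch(g_fd, path, IN_CREATE) == -1)
        perror(path);
    return 0;  // keep walking
}

// Usage:
//   g_fd = inotify_init();
//   nftw("/home/folder", add_watch_cb, 16, 0);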
Your first example is not working because you read from the pipe forever. If you kill the inotifywait process (e.g., if you are the only person on the machine and this is the only inotifywait process, with "killall inotifywait"), you will get your output, because killing it breaks you out of the loop reading from the pipe. If you print something inside the loop instead, it will work too.
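To make the first example print as events arrive, here is a minimal variant that echoes each line immediately instead of accumulating the whole stream (it assumes inotifywait is installed and the path exists):

#include <cstdio>
#include <iostream>

int main()
{
    // -r: recursive, -m: monitor forever, -e create: creation events only
    FILE *pipe = popen("inotifywait -rme create /home/folder/", "r");
    if (!pipe) {
        std::cerr << "popen failed\n";
        return 1;
    }

    char line[512];
    // Print each event line as it arrives; the stream never ends
    // as long as inotifywait keeps monitoring.
    while (fgets(line, sizeof line, pipe) != nullptr)
        std::cout << line << std::flush;

    pclose(pipe);
}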