Is there a simple way to get the number of files opened by a C++ program? I would like to do it from my own code, ideally in C++.
I found this blog article, which loops through all the available file descriptors and tests the result of fstat on each, but I am wondering if there is any simpler way to do that.
Edit
It seems that there is no other solution than keeping a count of the files opened. Thanks to everybody for your help.
Kevin
Since the files are FILE *, we could do something like this:
In a header file that gets included everywhere:
#define fopen(x, y) debug_fopen(x, y, __FILE__, __LINE__)
#define fclose(x) debug_fclose(x)
in "debugfile.cpp" (must obviously NOT use the above #define's)
#include <cstdio>
#include <iostream>
#include <map>
#include <string>
#include <utility>

struct FileInfo
{
    FileInfo(const char *nm, const char *fl, int ln) :
        name(nm), file(fl), line(ln) {}

    std::string name;
    const char *file;
    int line;
};
std::map<FILE*, FileInfo> filemap;
FILE *debug_fopen(const char *fname, const char *mode, const char *file, int line)
{
    FILE *f = fopen(fname, mode);
    if (f)
    {
        // insert() rather than operator[], since FileInfo has no default constructor
        filemap.insert(std::make_pair(f, FileInfo(fname, file, line)));
    }
    return f;
}
int debug_fclose(FILE *f)
{
    int res = fclose(f);
    filemap.erase(f);
    return res;
}
// Call this at some points to list the files still open.
void debug_list_openfiles()
{
    for (const auto &i : filemap)
    {
        std::cerr << "File " << (void *) i.first << " opened as " << i.second.name
                  << " at " << i.second.file << ":" << i.second.line << std::endl;
    }
}
(I haven't compiled this code; it's meant to show the concept. It may have minor bugs, but I think the concept holds, as long as it is your code, and not some third-party library, that is leaking.)
This is a legitimate question: I count open file descriptors in unit tests to verify that none have leaked. On Linux systems there is one entry in /proc/self/fd for each open file descriptor, so you just have to count them. In C++17 it looks like this:
#include <filesystem>

// Note: the directory iterator itself holds an open descriptor,
// so the count includes one extra entry while iterating.
long file_descriptor_count() {
    return std::distance(std::filesystem::directory_iterator("/proc/self/fd"),
                         std::filesystem::directory_iterator{});
}
It is good practice to keep the scope of an open file as small as possible: open it, dump all the information you want (or buffer it before writing to the fd), then close it.
So in the usual case you will have three fds for std in/out/err, plus all the currently opened files.
If you do keep files open, tracking them yourself is best:
put a global fdCounter variable,
increment it after a successful file open, and decrement it after closing.
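A minimal sketch of that counter idea (the wrapper names counted_open and counted_close are my own, purely for illustration):

#include <cstdio>
#include <atomic>

std::atomic<int> fdCounter{0};   // global count of files we have opened

FILE *counted_open(const char *name, const char *mode)
{
    FILE *f = std::fopen(name, mode);
    if (f)
        ++fdCounter;             // only count successful opens
    return f;
}

int counted_close(FILE *f)
{
    int rc = std::fclose(f);
    --fdCounter;
    return rc;
}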
If you are under Linux, this information is available under /proc/<your_pid>/fd.
Then use lstat on each file descriptor entry to keep only the regular files.
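A rough sketch of that approach, assuming the process inspects itself via /proc/self/fd (I use stat rather than lstat so that the /proc symlink is followed to the underlying file, which is what the regular-file test needs):

#include <dirent.h>
#include <sys/stat.h>
#include <stdio.h>

// Count the regular files among this process's open descriptors.
int count_open_regular_files(void)
{
    DIR *dp = opendir("/proc/self/fd");
    if (!dp)
        return -1;

    int count = 0;
    struct dirent *ep;
    while ((ep = readdir(dp)) != NULL) {
        if (ep->d_name[0] == '.')
            continue;                      // skip "." and ".."
        char path[256];
        snprintf(path, sizeof path, "/proc/self/fd/%s", ep->d_name);
        struct stat st;
        if (stat(path, &st) == 0 && S_ISREG(st.st_mode))
            ++count;
    }
    closedir(dp);
    return count;
}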
If you encapsulated your file handling properly, it should be simple to add reference counting or logging to it and print the open files to the console.
One approach to debugging this is to override the open calls with your own implementation and from there call the real thing. Then you can also put some logging in, to see whether you lose file descriptors. How do you open the files? With open(), or are you using fopen()?
Something like this maybe:
#include <fstream>
#include <iostream>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>

inline int log_open(const char *p, int flags, int mode)
{
    int fd = ::open(p, flags, mode);
    std::cout << "OPEN: " << fd << std::endl;
    return fd;
}

inline int log_close(int fd)
{
    int rc = ::close(fd);
    std::cout << "CLOSE: " << fd << std::endl;
    return rc;
}

#define open(p, f, m) log_open(p, f, m)
#define close(fd) log_close(fd)
int main(int argc, char *argv[])
{
    int fd = open("tmp.txt", O_RDWR | O_CREAT | O_TRUNC, 0666);
    std::cout << "FD: " << fd << std::endl;
    if (fd != -1)
        close(fd);
    return 0;
}
In my experience, by the time you need to count the number of file descriptors, you don't know where they were opened, by what submodule or library. Thus, wrapping open/close is not a viable strategy. Brute-force counting seems to be the only way.
The domain of the original blog post no longer resolves in DNS. I copy two proposals from Find current number of open filehandle ( NOT lsof ):
int j, n = 0;

// count open file descriptors
for (j = 0; j < FDMAX; ++j)   // FDMAX should be retrieved from process limits,
                              // but a constant value of >= 4K should be
                              // adequate for most systems
{
    int fd = dup(j);
    if (fd < 0)
        continue;
    ++n;
    close(fd);
}
printf("%d file descriptors open\n", n);
and also this:
#include <stdio.h>
#include <sys/types.h>
#include <dirent.h>

int main (void)
{
    DIR *dp;
    struct dirent *ep;

    dp = opendir ("/proc/MYPID/fd/");
    if (dp != NULL)
    {
        while ((ep = readdir (dp)) != NULL)
            puts (ep->d_name);
        (void) closedir (dp);
    }
    else
        perror ("Couldn't open the directory");
    return 0;
}
Related
I have the following C++ code which writes "Line from #" to a file while managing a file lock. I am running this code on two different computers, which share at least some of their filesystem; that is, I can access my files by logging onto either of these computers.
On the first computer I run the program as ./test 1 (so it will print "Line from 1" 20,000 times) and on the second computer I run the program as ./test 17. I am starting these programs close enough in time that the writes to file.txt should be interleaved and controlled by the file locks.
The problem is that I am losing output: the file has 22,770 newlines, but it should have exactly 40,000 newlines.
wc file.txt
22770 68310 276008 file.txt
Also,
cat -n file.txt | grep 18667
18667 ne from 17
My question is: why are my file locks not preventing the file overwriting, and how can I fix my code so that multiple processes can write to the same file without losing lines?
#include <unistd.h>
#include <fcntl.h>
#include <cstdio>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <iostream>

using namespace std;

void inline Set_Lck(struct flock &flck, const int fd)
{
    flck.l_type = F_WRLCK;
    if (fcntl(fd, F_SETLKW, &flck) == -1) {
        perror("fcntl");
        exit(1);
    }
}

void inline Release_Lck(struct flock &flck, const int fd)
{
    flck.l_type = F_UNLCK;
    if (fcntl(fd, F_SETLK, &flck) == -1) {
        perror("fcntl");
        exit(1);
    }
}

void Print_Spec(fstream &fout, ostringstream &oss, struct flock &flck, const int fd)
{
    Set_Lck(flck, fd);
    fout.seekp(0, ios_base::end);
    fout << oss.str() << endl;
    flush(fout);
    Release_Lck(flck, fd);
}

int main(int argc, char **argv)
{
    int fd_cd;
    struct flock flock_cd;
    ostringstream oss;
    fstream comp_data;
    const string s_cd_lck = "file_lock.txt";
    const string s_cd = "file.txt";
    int my_id;

    if (argc == 1) {
        my_id = 0;
    } else if (argc == 2) {
        my_id = atoi(argv[1]);
    } else {
        fprintf(stderr, "error -- usage ./test [my_id]\n");
        exit(1);
    }

    /* Open file.txt for writing; create it if non-existent. */
    comp_data.open(s_cd.c_str(), ios::app | ios::out);
    if (comp_data.fail()) {
        perror("comp_data.open");
        exit(1);
    }

    /* Open the file that we will be locking. */
    fd_cd = open(s_cd_lck.c_str(), O_CREAT | O_WRONLY, 0777);
    if (fd_cd == -1) {
        perror("fd_cd = open");
        exit(1);
    }

    /* Set up the lock. */
    flock_cd.l_type = F_WRLCK;
    flock_cd.l_whence = SEEK_SET;
    flock_cd.l_start = 0;
    flock_cd.l_len = 0;
    flock_cd.l_pid = getpid();

    for (int i = 0; i < 20000; ++i) {
        oss.str("");  /* Yes, this could be moved outside the loop. */
        oss << "Line from " << my_id << endl;
        Print_Spec(comp_data, oss, flock_cd, fd_cd);
    }
    return 0;
}
I am using C++, and this program is running on Red Hat Enterprise Linux Server release 7.2 (Maipo).
My Research
I am not sure if part of the answer comes from the following Stack Overflow post (https://stackoverflow.com/a/2059059/6417898), where they state that "locks are bound to processes."
At this website (http://perl.plover.com/yak/flock/samples/slide005.html), the author advises against using LOCK_UN with flock and suggests closing the file each time and reopening it as needed, so as to flush the file buffer. I don't know whether this carries over to fcntl, or whether it is even necessary if I flush the file buffer manually.
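For what it's worth, here is a sketch of one thing to try (my own idea, not from the links above): bypass the stream buffer and append with write() on a descriptor opened with O_WRONLY | O_APPEND, taking the fcntl lock around each write. With O_APPEND the kernel repositions to end-of-file atomically, which removes the seek-to-end race; note, though, that O_APPEND atomicity is not guaranteed over NFS, which may matter if these machines share a network filesystem:

#include <fcntl.h>
#include <unistd.h>
#include <string>

// Hypothetical helper: append one line under an fcntl write lock.
// fd must be opened with O_WRONLY | O_APPEND.
bool locked_append(int fd, const std::string &line)
{
    struct flock lk = {};
    lk.l_type = F_WRLCK;
    lk.l_whence = SEEK_SET;
    lk.l_start = 0;
    lk.l_len = 0;                         // lock the whole file
    if (fcntl(fd, F_SETLKW, &lk) == -1)
        return false;

    // O_APPEND: the kernel atomically positions the write at end-of-file.
    ssize_t n = write(fd, line.data(), line.size());

    lk.l_type = F_UNLCK;
    fcntl(fd, F_SETLK, &lk);
    return n == (ssize_t) line.size();
}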
I am writing an application to read DICOM files, and I must use another library to do it. I found that the library opens files but does not close them when it finishes, and the library is not open source. I know that the open-file limit on Linux is 1024 and that I can raise that number, but I don't want to go that way. I would like to close the files that were opened by the library. How can I close a file in C if I know it is open? I am using the DICOM2NII library obtained from http://cbi.nyu.edu/software/dinifti.php.
This is the code that opens a file but never closes it:
bool DICOMImage::OpenFile(const char *path)
{
    bool retValue = true;
    DCM_OBJECT *handle_;
    unsigned long options = DCM_ORDERLITTLEENDIAN | DCM_FORMATCONVERSION | DCM_VRMASK;

    // Try opening as PART10; if that fails it might be because the file
    // has no preamble, so then try it the other way.
    if ( DCM_OpenFile(path, options | DCM_PART10FILE, &handle_) != DCM_NORMAL )
    {
        DCM_CloseObject(&handle_);
        COND_PopCondition(TRUE);
        if ( DCM_OpenFile(path, options, &handle_) != DCM_NORMAL )
            retValue = false;
        else
            retValue = true;
    }
    return retValue;
}
In your DICOMImage class, add a member
DCM_OBJECT *handle_;
and in your destructor close the file
DICOMImage::DICOMImage() : handle_(0) { ... }

DICOMImage::~DICOMImage()
{
    if (handle_ != 0)
        DCM_CloseObject(&handle_);
}
and use this member handle_ in DICOMImage::OpenFile() as well, of course.
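For illustration, OpenFile would then stop declaring its own local handle and use the member instead (a sketch under the same assumptions about the DCM_* API as the snippets above):

bool DICOMImage::OpenFile(const char *path)
{
    unsigned long options = DCM_ORDERLITTLEENDIAN | DCM_FORMATCONVERSION | DCM_VRMASK;

    // Use the member handle_ so the destructor can close the file later.
    if (DCM_OpenFile(path, options | DCM_PART10FILE, &handle_) != DCM_NORMAL)
    {
        DCM_CloseObject(&handle_);
        COND_PopCondition(TRUE);
        if (DCM_OpenFile(path, options, &handle_) != DCM_NORMAL)
        {
            handle_ = 0;   // keep the object in a consistent state
            return false;
        }
    }
    return true;
}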
You could first test all file descriptors to see which ones are in use by doing a dummy fcntl(fd, F_GETFD) on every fd from 0 up to getdtablesize(). When the library function returns, there will be one more open fd, which you can then close with close(). You could also just call close(fd) on everything that wasn't open before; one of those calls will succeed (and you can stop your search at that point).
It's likely you can do the initial probe only up to the first unused fd, and the library will end up using that fd, provided it doesn't do anything more complex than open one file. If it opens multiple files or uses dup(), the fd could end up elsewhere.
To spell this out:
#include <iostream>
#include <vector>
#include <unistd.h>
#include <fcntl.h>
std::vector<bool> getOpenFileMap()
{
    int limit = getdtablesize();
    std::vector<bool> result(limit);
    for (int fd = 0; fd < limit; ++fd)
        result[fd] = fcntl(fd, F_GETFD) != -1;
    return result;
}

void closeOpenedFiles(const std::vector<bool> &existing)
{
    int limit = existing.size();
    for (int fd = 0; fd < limit; ++fd)
        if (!existing[fd])
            close(fd);
}

int getLikelyFd()
{
    int limit = getdtablesize();
    for (int fd = 0; fd < limit; ++fd)
        if (fcntl(fd, F_GETFD) == -1)   // first fd not currently in use
            return fd;
    return -1;
}
int main()
{
    std::vector<bool> existing = getOpenFileMap();
    int fd = open("/dev/null", O_RDONLY);
    closeOpenedFiles(existing);
    bool closed = write(fd, "test", 4) == -1;
    std::cout << "complex pass " << std::boolalpha << closed << std::endl;

    int guess = getLikelyFd();
    fd = open("/dev/null", O_RDONLY);
    bool match = fd == guess;
    std::cout << "simple pass " << std::boolalpha << match << std::endl;
}
I'm trying to call a shell script from C++ with custom input. What I could do is:
void dostuff(string s) {
    system(("echo " + s + " | myscript.sh").c_str());
    ...
}
Of course, escaping s is quite difficult. Is there a way that I can use s as stdin for myscript.sh? I.e., something like this:
void dostuff(string s) {
    FILE *out = stringToFile(s);
    system("myscript.sh", out);
}
A simple test to reassign stdin and restore it after the system call:
#include <cstdlib> // system
#include <cstdio> // perror
#include <unistd.h> // dup2
#include <sys/types.h> // rest for open/close
#include <sys/stat.h>
#include <fcntl.h>
#include <errno.h>
#include <iostream>
int redirect_input(const char* fname)
{
    int save_stdin = dup(0);

    int input = open(fname, O_RDONLY);
    if (!errno) dup2(input, 0);
    if (!errno) close(input);

    return save_stdin;
}

void restore_input(int saved_fd)
{
    close(0);
    if (!errno) dup2(saved_fd, 0);
    if (!errno) close(saved_fd);
}
int main()
{
    int save_stdin = redirect_input("test.cpp");
    if (errno)
    {
        perror("redirect_input");
    }
    else
    {
        system("./dummy.sh");
        restore_input(save_stdin);
        if (errno) perror("system/restore_input");
    }

    // proof that we can still copy the original stdin to stdout now
    std::cout << std::cin.rdbuf() << std::flush;
}
This works out nicely. I tested it with a simple dummy.sh script like this:
#!/bin/sh
/usr/bin/tail -n 3 | /usr/bin/rev
Note that the last line of main dumps the remaining standard input to standard output, so you can test it like
./test <<< "hello world"
and expect the following output:
won tuodts ot nidts lanigiro ypoc llits nac ew taht foorp //
;hsulf::dts << )(fubdr.nic::dts << tuoc::dts
}
hello world
Use popen:
#include <stdio.h>

void dostuff(const char* s) {
    FILE* f = fopen(s, "r");
    FILE* p = popen("myscript.sh", "w");
    char buf[4096];
    while (size_t n = fread(buf, 1, sizeof(buf), f))
        if (fwrite(buf, 1, n, p) < n)
            break;
    pclose(p);
    fclose(f);
}
You'll need to add error checking to make this robust.
Note that I prefer a const char*, since it is more flexible (works with things other than std::string) and matches what's going on inside. If you really prefer std::string, do it like so:
void dostuff(const std::string& s) {
FILE* f = fopen(s.c_str(), "r");
⋮
Also note that the 4096-byte buffer was chosen because it matches the page size on most systems. This isn't necessarily the most efficient approach, but it'll be fine for most purposes. I've found 32 KiB to be a sweet spot in my own unscientific tests on a laptop, so you might want to play around, but if you are serious about efficiency, you'll want to switch to asynchronous I/O, and start read n+1 immediately after initiating write n.
I'm calling a Linux command from within a C++ program which creates the following output. I need to copy the first column of the output into a C++ variable (say, a long int). How can I do it? If that is not possible, how can I copy this result into a .txt file which I can work with?
Edit
0 +0
2361294848 +2361294848
2411626496 +50331648
2545844224 +134217728
2713616384 +167772160
I have this stored as a file, file.txt, and I'm using the following code to extract the left column (without the 0) and store the values as integers:
string stringy="";
int can_can=0;
for(i=begin;i<length;i++)
{
if (buffer[i]==' ' && can_can ==1) //**buffer** is the whole text file read in char*
{
num=atoi(stringy.c_str());
array[univ]=num; // This where I store the values.
univ+=1;
can_can=1;
}
else if (buffer[i]==' ' && can_can ==0)
{
stringy="";
}
else if (buffer[i]=='+')
{can_can=0;}
else{stringy.append(buffer[i]);}
}
I'm getting a segmentation fault with this. What can be done?
Thanks in advance.
Just create a simple streambuf wrapper around popen()
#include <iostream>
#include <string>
#include <stdio.h>

struct SimpleBuffer: public std::streambuf
{
    typedef std::streambuf::traits_type traits;
    typedef traits::int_type int_type;

    SimpleBuffer(std::string const& command)
        : stream(popen(command.c_str(), "r"))
    {
        this->setg(&c[0], &c[0], &c[0]);
        this->setp(0, 0);
    }

    ~SimpleBuffer()
    {
        if (stream != NULL)
        {
            pclose(stream);   // streams from popen() must be closed with pclose()
        }
    }

    virtual int_type underflow()
    {
        std::size_t size = fread(c, 1, sizeof(c), stream);
        this->setg(&c[0], &c[0], &c[size]);

        return size == 0 ? traits::eof() : traits::to_int_type(*c);
    }

private:
    FILE* stream;
    char  c[100];
};
Usage:
int main()
{
    SimpleBuffer buffer("echo 55 hi there Loki");
    std::istream command(&buffer);

    int value;
    command >> value;

    std::string line;
    std::getline(command, line);

    std::cout << "Got int(" << value << ") String (" << line << ")\n";
}
Result:
> ./a.out
Got int(55) String ( hi there Loki)
It is popen you're probably looking for. Try man popen, or see this little example:
#include <iostream>
#include <stdio.h>

using namespace std;

int main()
{
    FILE *in;
    char buff[512];

    if (!(in = popen("my_script_from_command_line", "r"))) {
        return 1;
    }

    while (fgets(buff, sizeof(buff), in) != NULL) {
        cout << buff;   // here you have each line
                        // of the output of your script in buff
    }
    pclose(in);
    return 0;
}
Unfortunately, it’s not easy since the platform API is written for C. The following is a simple working example:
#include <cstdio>
#include <cstdlib>
#include <iostream>

int main() {
    char const* command = "ls -l";
    FILE* fpipe = popen(command, "r");
    if (not fpipe) {
        std::cerr << "Unable to execute command\n";
        return EXIT_FAILURE;
    }

    char buffer[256];
    while (std::fgets(buffer, sizeof buffer, fpipe)) {
        std::cout << buffer;
    }
    pclose(fpipe);
}
However, I’d suggest wrapping the FILE* handle in a RAII class to take care of resource management.
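A minimal sketch of such a RAII wrapper, using std::unique_ptr with pclose as its deleter (my own illustration of the suggestion, with a hypothetical open_pipe helper):

#include <stdio.h>
#include <memory>
#include <stdexcept>
#include <string>

// RAII handle for a popen()ed pipe: pclose() runs automatically on scope exit.
using pipe_ptr = std::unique_ptr<FILE, int (*)(FILE*)>;

pipe_ptr open_pipe(const char* command)
{
    pipe_ptr p(popen(command, "r"), pclose);
    if (!p)
        throw std::runtime_error(std::string("popen failed: ") + command);
    return p;
}

The reading loop is then the same as above, with fpipe replaced by p.get().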
You probably want to use popen to execute the command. This will give you a FILE * that you can read its output from. From there, you can parse out the first number (assuming first_num is a long) with something like:

fscanf(inpipe, "%ld %*d", &first_num);

which, just like when reading from a file, you'll normally repeat until you receive an end-of-file indication, such as:

long first_num, total = 0;
while (1 == fscanf(inpipe, "%ld %*d", &first_num))
    total = first_num;
printf("%ld\n", total);
I am working in Ubuntu. I want to monitor a folder and print every event that pops up in its subfolders (print the new files).
I have the following code, but it doesn't work: when executed, there is no printing of the events.
With the second piece of code I only see the events from the top folder; the events from the subfolders do not pop up.
#include <string>
#include <iostream>
#include <stdio.h>

using namespace std;

std::string exec(char* cmd) {
    FILE* pipe = popen(cmd, "r");
    if (!pipe) return "ERROR";
    char buffer[256];
    std::string result = "";
    while (!feof(pipe)) {
        if (fgets(buffer, 256, pipe) != NULL)
            result += buffer;
    }
    pclose(pipe);
    cout << "result is: " << result << endl;
    return result;
}

int main()
{
    //while(1)
    //{
    string s = exec((char*)"inotifywait -rme create /home/folder/");
    cout << s << endl;
    //}
    return 0;
}
This code only prints the events from the folder I'm monitoring; it doesn't print the events from the subfolders. I don't know how to improve it for my needs.
#include <sys/inotify.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <iostream>

void processNewFiles(int fd, int wd);

int main(int argc, char** argv)
{
    const char* dirPath = "/home/folder/"; //argv[1];
    int fd = inotify_init();
    int wd = inotify_add_watch(fd, dirPath, IN_CREATE);
    if (wd >= 0)   // inotify_add_watch returns -1 on error
    {
        processNewFiles(fd, wd);
        inotify_rm_watch(fd, wd);
    }
}

void processNewFiles(int fd, int wd)
{
    bool done = false;
    do
    {
        int qLen = 0;
        ioctl(fd, FIONREAD, &qLen);
        char* buf = new char[qLen];
        int num = read(fd, buf, qLen);
        if (num == qLen)
        {
            inotify_event* iev = reinterpret_cast<inotify_event*>(buf);
            if (iev->wd == wd && iev->mask & IN_CREATE)
            {
                std::cout << "New file created: " << iev->name << std::endl;
            }
        }
        delete [] buf;
    } while (!done);
}
Your second solution does not work because inotify_add_watch does not work recursively: you would have to add watches for the subdirectories manually, as in the sketch below. As this might be annoying, it is also possible to use the inotifywait utility, as you do in your first example.
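A minimal sketch of adding the watches manually, assuming C++17 <filesystem> is available (a complete solution would also have to add watches for directories created after startup, from inside the event loop):

#include <sys/inotify.h>
#include <filesystem>
#include <map>
#include <string>

// Watch dirPath and every existing subdirectory; returns wd -> path.
std::map<int, std::string> addWatchesRecursively(int fd, const char* dirPath)
{
    std::map<int, std::string> watches;

    int wd = inotify_add_watch(fd, dirPath, IN_CREATE);
    if (wd >= 0)
        watches[wd] = dirPath;

    for (const auto& entry : std::filesystem::recursive_directory_iterator(dirPath))
    {
        if (!entry.is_directory())
            continue;
        std::string p = entry.path().string();
        wd = inotify_add_watch(fd, p.c_str(), IN_CREATE);
        if (wd >= 0)
            watches[wd] = p;
    }
    return watches;
}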
Your first example is not working because you read from the pipe forever. If you kill the inotifywait process (e.g., if you're the only person on the machine and this is the only inotifywait process, just use "killall inotifywait"), you will get your output, because killing it breaks you out of the loop reading from the pipe. If you print something inside the loop instead, it will work, too.