Context: I'm using a memory mapped file in my code, created using ACE_Mem_Map. The memory mapped file is consuming more disk space than expected.
Scenario:
I have a structure containing a char array of 15 KB. I have created a memory mapped file for an array of this struct, with file size ~2 GB.
If I access only a few bytes of the char array (say 256), the file size is reported as 521 MB, but the actual disk usage reported by the filesystem (df -h) is more than 3 GB.
If I access all bytes of the array, both the file size and the disk usage are reported as 2 GB.
Environment:
OS: Oracle Linux 7.3
Kernel version: 3.10.0/4.1.12
Code:
#include <ace/Mem_Map.h>
#include <stdio.h>
#include <stdlib.h>   /* strtoull */
#include <string.h>   /* memset */
#define TEST_BUFF_SIZE (15*1024)
typedef struct _test_struct_ {
    char test[TEST_BUFF_SIZE];
    _test_struct_() {
        reset();
    }
    void reset() {
        /* Enable one of the two memsets depending on the scenario tested. */
        /* Scenario 1 -- issue reproduces: touch only the first 256 bytes */
        memset(test, '\0', 256);
        /* Scenario 2 -- issue does not reproduce: touch the whole array */
        /* memset(test, '\0', TEST_BUFF_SIZE); */
    }
} TestStruct_t;
int main(int argc, char *argv[]) {
    if(3 != argc) {
        printf("Usage: %s <num of blocks> <filename>\n", argv[0]);
        return -1;
    }
    ACE_Mem_Map map_buf_;
    size_t num_of_blocks = strtoull(argv[1], NULL, 10);
    size_t MAX_SIZE = num_of_blocks * sizeof(TestStruct_t);
    char* mmap_file_name = argv[2];
    printf("num_of_blocks[%zu], sizeof(TestStruct_t)[%zu], MAX_SIZE[%zu], mmap_file_name[%s]\n",
           num_of_blocks,
           sizeof(TestStruct_t),
           MAX_SIZE,
           mmap_file_name);
    TestStruct_t *base_addr_;
    ACE_HANDLE fp_ = ACE_OS::open(mmap_file_name, O_RDWR|O_CREAT,
                                  ACE_DEFAULT_OPEN_PERMS, 0);
    if (fp_ == ACE_INVALID_HANDLE)
    {
        printf("Error opening file\n");
        return -1;
    }
    map_buf_.map(fp_, MAX_SIZE, PROT_WRITE, MAP_SHARED);
    base_addr_ = (TestStruct_t*)map_buf_.addr();
    if (base_addr_ == MAP_FAILED)
    {
        printf("Map init failure\n");
        ACE_OS::close(fp_);
        return -1;
    }
    printf("map_buf_ size[%zu]\n", map_buf_.size());
    for(size_t i = 0; i < num_of_blocks; i++) {
        base_addr_[i].reset();
    }
    return 0;
}
Can anyone explain why scenario 1 is happening?
Note: In scenario 1, if I make a copy of the generated mmap file and then delete that copy, the additional 2.5 GB of disk space gets freed. I don't know the reason for this either.
I 'upgraded' your program to nearly-C, minus whatever ACE is, and got this:
$ ./a.out 32 fred
num_of_blocks[32], sizeof(TestStruct_t)[15360], MAX_SIZE[491520], mmap_file_name[fred]
Bus error: 10
Which is pretty much expected: mmap(2) does not extend the size of the mapped file, so you get an address error (the SIGBUS above) when you touch a part of the mapping that has no file behind it.
So, the answer is that whatever ACE's map() does, it likely invokes something like ftruncate(2) to extend the file to the size you give as a parameter. @John Bollinger hints at this by asking how you are measuring the size: ls or du. You should use the latter.
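If you want to check this from inside a program rather than with ls/du, stat(2) reports both numbers: st_size is the apparent length (what ls shows), while st_blocks counts the blocks actually allocated (what du counts). A minimal sketch, assuming a file name passed as the first argument:
#include <sys/stat.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
    struct stat sb;
    if (argc < 2 || stat(argv[1], &sb) != 0) {
        perror("stat");
        return 1;
    }
    /* st_size is the apparent size; st_blocks is in 512-byte units */
    printf("apparent size: %lld bytes\n", (long long)sb.st_size);
    printf("allocated:     %lld bytes\n", (long long)sb.st_blocks * 512LL);
    return 0;
}
Run it on the generated mmap file in scenario 1 and the two numbers should diverge exactly the way ls and du do.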
Anyway, almost C version:
#include <sys/mman.h>
#include <sys/types.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <fcntl.h>
#include <stdio.h>
#define TEST_BUFF_SIZE (15*1024)
typedef struct _test_struct_ {
    char test[TEST_BUFF_SIZE];
    _test_struct_() {
        reset();
    }
    void reset() {
        /* Enable one of the two memsets depending on the scenario tested. */
        /* Scenario 1 -- issue reproduces: touch only the first 256 bytes */
        memset(test, '\0', 256);
        /* Scenario 2 -- issue does not reproduce: touch the whole array */
        /* memset(test, '\0', TEST_BUFF_SIZE); */
    }
} TestStruct_t;
int main(int argc, char *argv[]) {
    if(argc < 3) {
        printf("Usage: %s <num of blocks> <filename>\n", argv[0]);
        return 1;
    }
    size_t num_of_blocks = strtoull(argv[1], NULL, 10);
    size_t MAX_SIZE = num_of_blocks * sizeof(TestStruct_t);
    char* mmap_file_name = argv[2];
    printf("num_of_blocks[%zu], sizeof(TestStruct_t)[%zu], MAX_SIZE[%zu], mmap_file_name[%s]\n",
           num_of_blocks,
           sizeof(TestStruct_t),
           MAX_SIZE,
           mmap_file_name);
    int fp = open(mmap_file_name, O_RDWR|O_CREAT, 0666);
    if (fp == -1)
    {
        perror("Error opening file");
        return 1;
    }
    /* SOMETHING CLEVER */
    switch (argc) {
    case 3:
        break;
    case 4:
        if (ftruncate(fp, MAX_SIZE) != 0) {
            perror("ftruncate");
            return 1;
        }
        break;
    case 5:
        if (lseek(fp, MAX_SIZE-1, SEEK_SET) != (off_t)(MAX_SIZE-1) ||
            write(fp, "", 1) != 1) {
            perror("seek,write");
            return 1;
        }
    }
    void *b = mmap(0, MAX_SIZE, PROT_READ|PROT_WRITE, MAP_SHARED, fp, 0);
    if (b == MAP_FAILED)
    {
        perror("Map init failure");
        return 1;
    }
    TestStruct_t *base_addr = (TestStruct_t *)b;
    for(size_t i = 0; i < num_of_blocks; i++) {
        base_addr[i].reset();
    }
    return 0;
}
The SOMETHING CLEVER bit allows you to either work with an empty file (argc == 3), grow it with ftruncate (argc == 4), or grow it with lseek && write (argc == 5).
On UNIX-y systems, ftruncate may or may not reserve space for your file; a lengthened file without reserved space is called sparse. Almost universally, the lseek && write approach will create a sparse file, unless your system doesn't support that.
A sparse file allocates actual disk blocks as you write to it; however, if that allocation fails, you get a signal, whereas the pre-allocated file will not fail that way.
Your loop at the bottom walks the whole extent, so the file will always be grown; reduce that loop and you can see if the options make a difference on your system.
Related
I am currently writing a program in C++ which involves reading lots of large text files. Each has ~400,000 lines, with, in extreme cases, 4000 or more characters per line. Just for testing, I read one of the files using ifstream and the implementation offered by cplusplus.com. It took around 60 seconds, which is way too long. Now I was wondering: is there a straightforward way to improve reading speed?
edit:
The code I am using is more or less this:
string tmpString;
ifstream txtFile(path);
if(txtFile.is_open())
{
    // testing getline's result directly is the idiomatic loop;
    // while(txtFile.good()) over-counts by one at end of file
    while(getline(txtFile, tmpString))
    {
        m_numLines++;
    }
    txtFile.close();
}
edit 2: The file I read is only 82 MB. I mainly mentioned that lines could reach 4000 characters because I thought that might matter for buffering.
edit 3: Thank you all for your answers, but it seems there is not much room for improvement given my problem. I have to use getline, since I want to count the number of lines. Opening the ifstream in binary mode didn't make reading any faster either. I will try to parallelize it as much as I can; that should work at least.
edit 4: So apparently there are some things I can do. Big thank you to sehe for putting so much time into this, I appreciate it a lot! =)
Updates: Be sure to check the (surprising) updates below the initial answer
Memory mapped files have served me well¹:
#include <boost/iostreams/device/mapped_file.hpp> // for mmap
#include <algorithm> // for std::find
#include <iostream>  // for std::cout
#include <cstring>
#include <cstdint>   // for uintmax_t
int main()
{
    boost::iostreams::mapped_file mmap("input.txt", boost::iostreams::mapped_file::readonly);
    auto f = mmap.const_data();
    auto l = f + mmap.size();
    uintmax_t m_numLines = 0;
    while (f && f != l)
        if ((f = static_cast<const char*>(memchr(f, '\n', l - f))))
            m_numLines++, f++;
    std::cout << "m_numLines = " << m_numLines << "\n";
}
This should be rather quick.
Update
In case it helps you test this approach, here's a version using mmap directly instead of using Boost: see it live on Coliru
#include <algorithm>
#include <iostream>
#include <cstring>
#include <cstdint> // for uintmax_t
#include <cstdio>  // for perror
#include <cstdlib> // for exit
// for mmap:
#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
const char* map_file(const char* fname, size_t& length);
int main()
{
    size_t length;
    auto f = map_file("test.cpp", length);
    auto l = f + length;
    uintmax_t m_numLines = 0;
    while (f && f != l)
        if ((f = static_cast<const char*>(memchr(f, '\n', l - f))))
            m_numLines++, f++;
    std::cout << "m_numLines = " << m_numLines << "\n";
}
void handle_error(const char* msg) {
    perror(msg);
    exit(255);
}
const char* map_file(const char* fname, size_t& length)
{
    int fd = open(fname, O_RDONLY);
    if (fd == -1)
        handle_error("open");
    // obtain file size
    struct stat sb;
    if (fstat(fd, &sb) == -1)
        handle_error("fstat");
    length = sb.st_size;
    const char* addr = static_cast<const char*>(mmap(NULL, length, PROT_READ, MAP_PRIVATE, fd, 0u));
    if (addr == MAP_FAILED)
        handle_error("mmap");
    // TODO close fd at some point in time, call munmap(...)
    return addr;
}
Update
The last bit of performance I could squeeze out of this I found by looking at the source of GNU coreutils wc. To my surprise using the following (greatly simplified) code adapted from wc runs in about 84% of the time taken with the memory mapped file above:
static uintmax_t wc(char const *fname)
{
    static const auto BUFFER_SIZE = 16*1024;
    int fd = open(fname, O_RDONLY);
    if(fd == -1)
        handle_error("open");
    /* Advise the kernel of our access pattern. */
    posix_fadvise(fd, 0, 0, POSIX_FADV_SEQUENTIAL);
    char buf[BUFFER_SIZE + 1];
    uintmax_t lines = 0;
    while(size_t bytes_read = read(fd, buf, BUFFER_SIZE))
    {
        if(bytes_read == (size_t)-1)
            handle_error("read failed");
        if (!bytes_read)
            break;
        for(char *p = buf; (p = (char*) memchr(p, '\n', (buf + bytes_read) - p)); ++p)
            ++lines;
    }
    return lines;
}
¹ see e.g. the benchmark here: How to parse space-separated floats in C++ quickly?
4000 * 400,000 = 1.6 GB. If your hard drive isn't an SSD, you're likely getting ~100 MB/s sequential read. That's 16 seconds just in I/O.
Since you don't elaborate on the specific code you're using, or on how you need to parse these files (do you need to read line by line? does the system have a lot of RAM, so that you could read the whole file into a large RAM buffer and then parse it?), there's little you can do to speed up the process.
Memory mapped files won't offer any performance improvement when reading a file sequentially. Perhaps manually parsing large chunks for new lines rather than using "getline" would offer an improvement.
EDIT: After doing some learning (thanks @sehe), here's the memory mapped solution I would likely use.
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <errno.h>
int main() {
    const char* fName = "big.txt";
    //
    struct stat sb;
    long cntr = 0;
    int fd, lineLen;
    char *data;
    char *line;
    // map the file
    fd = open(fName, O_RDONLY);
    fstat(fd, &sb);
    //// int pageSize;
    //// pageSize = getpagesize();
    //// data = mmap((caddr_t)0, pageSize, PROT_READ, MAP_PRIVATE, fd, pageSize);
    data = mmap((caddr_t)0, sb.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    line = data;
    // get lines
    while(cntr < sb.st_size) {
        lineLen = 0;
        line = data;
        // find the next line
        while(*data != '\n' && cntr < sb.st_size) {
            data++;
            cntr++;
            lineLen++;
        }
        // step past the newline, or the outer loop never advances
        if(cntr < sb.st_size) {
            data++;
            cntr++;
        }
        /***** PROCESS LINE *****/
        // ... processLine(line, lineLen);
    }
    return 0;
}
Neil Kirk, unfortunately I cannot reply to your comment (not enough reputation), but I did a performance test on ifstream and stringstream, and the performance, reading a text file line by line, is exactly the same.
std::stringstream stream;
std::string line;
while(std::getline(stream, line)) {
}
This takes 1426ms on a 106MB file.
std::ifstream stream;
std::string line;
while(stream.good()) {
    getline(stream, line);
}
This takes 1433ms on the same file.
The following code is faster instead:
const int MAX_LENGTH = 524288;
char* line = new char[MAX_LENGTH];
while (iStream.getline(line, MAX_LENGTH) && strlen(line) > 0) {
}
This takes 884ms on the same file.
It is just a little tricky since you have to set the maximum size of your buffer (i.e. maximum length for each line in the input file).
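If a line can exceed the buffer, istream::getline stops after MAX_LENGTH-1 characters and sets failbit, so you can detect and recover from over-long lines instead of guessing a safe maximum. A sketch along the lines of the snippet above (the file name is a placeholder):
#include <cstdint>
#include <fstream>
#include <iostream>
int main() {
    const int MAX_LENGTH = 524288;
    char* line = new char[MAX_LENGTH];
    std::ifstream iStream("input.txt"); // placeholder file name
    std::uintmax_t numLines = 0;
    while (iStream.getline(line, MAX_LENGTH) || iStream.gcount() > 0) {
        if (iStream.fail() && !iStream.eof())
            iStream.clear(); // line longer than the buffer: keep reading the same line
        else
            ++numLines;      // a complete line was consumed
    }
    std::cout << numLines << "\n";
    delete[] line;
}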
As someone with a little background in competitive programming, I can tell you: at least for simple things like integer parsing, the main cost in C is locking the file streams (which is done by default for multi-threading). Use the unlocked_stdio versions instead (fgetc_unlocked(), fread_unlocked()). For C++, the common lore is to use std::ios::sync_with_stdio(false), but I don't know if it's as fast as unlocked_stdio.
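For completeness, the usual C++ setup looks like this (a minimal sketch; whether it matches unlocked_stdio speed depends on your standard library):
#include <iostream>
int main() {
    std::ios::sync_with_stdio(false); // detach C++ streams from C stdio
    std::cin.tie(nullptr);            // don't flush cout before every cin read
    long long n, sum = 0;
    while (std::cin >> n) sum += n;
    std::cout << sum << "\n";
}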
For reference, here is my standard integer parsing code. It's a lot faster than scanf, as I said, mainly because it does not lock the stream. For me it was as fast as the best hand-coded mmap or custom buffered versions I'd used previously, without the insane maintenance debt.
int readint(void)
{
int n, c;
n = getchar_unlocked() - '0';
while ((c = getchar_unlocked()) > ' ')
n = 10*n + c-'0';
return n;
}
(Note: This one only works if there is precisely one non-digit character between any two integers).
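A hypothetical usage sketch, assuming readint() from above is in scope and the input is a count followed by that many integers:
#include <stdio.h>
int main(void)
{
    int n = readint();   /* first token: how many integers follow */
    long long sum = 0;
    for (int i = 0; i < n; i++)
        sum += readint();
    printf("%lld\n", sum);
    return 0;
}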
And of course avoid memory allocation if possible...
Do you have to read all the files at the same time (at the start of your application, for example)?
If you do, consider parallelizing the operation (see the sketch below).
Either way, consider using binary streams, or unbuffered read() for blocks of data.
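A minimal sketch of the parallel version, one thread per file (the file names are placeholders):
#include <cstdint>
#include <fstream>
#include <functional>
#include <iostream>
#include <string>
#include <thread>
#include <vector>
// Count the lines of one file; each call runs in its own thread.
static void count_lines(const std::string& path, std::uintmax_t& out) {
    std::ifstream in(path, std::ios::binary);
    std::uintmax_t n = 0;
    std::string line;
    while (std::getline(in, line)) ++n;
    out = n;
}
int main() {
    std::vector<std::string> files = {"a.txt", "b.txt", "c.txt"}; // placeholders
    std::vector<std::uintmax_t> counts(files.size(), 0);
    std::vector<std::thread> threads;
    for (size_t i = 0; i < files.size(); ++i)
        threads.emplace_back(count_lines, files[i], std::ref(counts[i]));
    for (auto& t : threads) t.join();
    for (size_t i = 0; i < files.size(); ++i)
        std::cout << files[i] << ": " << counts[i] << "\n";
}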
Use random file access, or use binary mode. For sequential reading this helps a lot, but it still depends on what you are reading.
I am trying to develop a little application in C++, within a Linux environment, which does the following:
1) gets a data stream (a series of arrays of doubles) from the output of a 'black box' and writes it to a pipe. The black box can be thought of as an ADC;
2) reads the data stream from the pipe and feeds it to another application which requires these data on stdin.
Unfortunately, I was not able to find tutorials or examples. The best way I found to realize this is summarized in the following test-bench example:
#include <iostream>
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h> // fork, open, read, write, close
#define FIFO "/tmp/data"
using namespace std;
int main() {
int fd;
int res = mkfifo(FIFO,0777);
float *writer = new float[10];
float *buffer = new float[10];
if( res == 0 ) {
cout<<"FIFO created"<<endl;
int fres = fork();
if( fres == -1 ) {
// throw an error
}
if( fres == 0 )
{
fd = open(FIFO, O_WRONLY);
int idx = 1;
while( idx <= 10) {
for(int i=0; i<10; i++) writer[i]=1*idx;
write(fd, writer, sizeof(writer)*10);
}
close(fd);
}
else
{
fd = open(FIFO, O_RDONLY);
while(1) {
read(fd, buffer, sizeof(buffer)*10);
for(int i=0; i<10; i++) printf("buf: %f",buffer[i]);
cout<<"\n"<<endl;
}
close(fd);
}
}
delete[] writer;
delete[] buffer;
}
The problem is that, when running this example, I do not get a printout of all 10 arrays that I feed into the pipe; I always get the first array (filled with 1s).
Any suggestion/correction/reference is very welcome to make it work and learn more about the behavior of pipes.
EDIT:
Sorry guys! I found a very trivial error in my code: in the while loop in the writer part, I was not incrementing the index idx. Once I corrected that, I got the printout of all the arrays.
But now I am facing another problem: when using a lot of large arrays, only some of them are printed out (the whole sequence is not printed), as if the reader cannot keep up with the writer. Here is the new sample code:
#include <iostream>
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <unistd.h> // fork, open, read, write, close, unlink
#define FIFO "/tmp/data"
using namespace std;
int main(int argc, char** argv) {
int fd;
int res = mkfifo(FIFO,0777);
int N(1000);
float writer[N];
float buffer[N];
if( res == 0 ) {
cout<<"FIFO created"<<endl;
int fres = fork();
if( fres == -1 ) {
// throw an error
}
if( fres == 0 )
{
fd = open(FIFO, O_WRONLY | O_NONBLOCK);
int idx = 1;
while( idx <= 1000 ) {
for(int i=0; i<N; i++) writer[i]=1*idx;
write(fd, &writer, sizeof(float)*N);
idx++;
}
close(fd);
unlink(FIFO);
}
else
{
fd = open(FIFO, O_RDONLY);
while(1) {
int res = read(fd, &buffer, sizeof(float)*N);
if( res == 0 ) break;
for(int i=0; i<N; i++) printf(" buf: %f",buffer[i]);
cout<<"\n"<<endl;
}
close(fd);
}
}
}
Is there some mechanism I can implement to make write() wait while read() is still reading data from the FIFO, or am I missing something trivial in this case too?
Thank you for those who have already given answers to the previous version of my question, I have implemented the suggestions.
The arguments to read and write are incorrect. Correct ones:
write(fd, writer, 10 * sizeof *writer);
read(fd, buffer, 10 * sizeof *buffer);
Also, these functions may do partial reads/writes, so the code needs to check the return values to determine whether the operation must be continued.
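For example, a helper that keeps writing until the whole buffer is out might look like this (a sketch, not part of the original code):
#include <errno.h>
#include <unistd.h>
/* Write exactly len bytes or fail; handles short writes and EINTR. */
static int write_all(int fd, const void *buf, size_t len)
{
    const char *p = (const char *)buf;
    while (len > 0) {
        ssize_t n = write(fd, p, len);
        if (n < 0) {
            if (errno == EINTR) continue; /* interrupted: retry */
            return -1;                    /* real error */
        }
        p += n;
        len -= (size_t)n;
    }
    return 0;
}
The reading side needs the symmetric loop, accumulating until sizeof(float)*N bytes have arrived.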
Not sure why the while( idx <= 10 ) loop is in the writer; since idx is never incremented, it never ends, even on a 5 GHz CPU. The same comment applies to the reader.
Given a process id, what is the best way to find the process's creation date-time using C/C++ ?
I'd suggest looking at the top and ps source code (in particular, libtop.c).
I think the following call should be what you need:
int proc_pidbsdinfo(proc_t p, struct proc_bsdinfo *pbsd, int zombie);
From <sys/proc_info.h>:
struct proc_bsdinfo {
...
struct timeval pbi_start;
...
}
Unfortunately there is no public interface for process inspection, so the calls are not only version-dependent but also likely to change in future releases.
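For what it's worth, the call is reachable through the libproc wrapper. A hedged sketch follows; the field names (pbi_start_tvsec/pbi_start_tvusec) are what current SDK headers expose, while older headers use the struct timeval pbi_start quoted above, so verify against your SDK:
#include <libproc.h>
#include <stdio.h>
#include <unistd.h>
int main(void)
{
    struct proc_bsdinfo info;
    int rc = proc_pidinfo(getpid(), PROC_PIDTBSDINFO, 0, &info, sizeof(info));
    if (rc != (int)sizeof(info)) {
        perror("proc_pidinfo");
        return 1;
    }
    /* start time split into seconds/microseconds on current SDKs */
    printf("started at %llu.%06llu\n",
           (unsigned long long)info.pbi_start_tvsec,
           (unsigned long long)info.pbi_start_tvusec);
    return 0;
}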
Here is a simple utility demonstrating how to do this:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>      // ctime
#include <sys/errno.h>
#include <sys/sysctl.h>
int main(int argc, char **argv) {
// Check arguments
if (argc != 2) {
fprintf(stderr,"usage: %s PID\n", argv[0]);
return 1;
}
// Parse and validate PID argument
errno = 0;
char *end = NULL;
long pid = strtol(argv[1], &end, 10);
if (errno != 0 || end == argv[1] || end == NULL || *end != 0 || ((pid_t)pid) != pid) {
fprintf(stderr,"%s: bad PID argument: %s\n", argv[0], argv[1]);
return 1;
}
// Get process info from kernel
struct kinfo_proc info;
int mib[] = { CTL_KERN, KERN_PROC, KERN_PROC_PID, (int)pid };
size_t len = sizeof info;
memset(&info,0,len);
int rc = sysctl(mib, (sizeof(mib)/sizeof(int)), &info, &len, NULL, 0);
if (rc != 0) {
fprintf(stderr,"%s: sysctl failed with errno=%d\n", argv[0], errno);
return 1;
}
// Extract start time and confirm process exists
struct timeval tv = info.kp_proc.p_starttime;
if (tv.tv_sec == 0) {
fprintf(stderr,"%s: no such PID %d\n", argv[0], (int)pid);
return 1;
}
// Convert to string - no \n because ctime() result is already \n-terminated
printf("PID %d started at %s", (int)pid, ctime(&(tv.tv_sec)));
return 0;
}
Compile it: clang -o procstart procstart.c
Example of running it:
$ ./procstart $$
PID 37675 started at Sat Apr 9 20:01:55 2022
Note the sysctl API this is calling is undocumented and as such (at least in theory) subject to change at any time. In practice, however, it has been rather stable across releases, so while nobody can predict what Apple might do in the future, I think the risk of them breaking this is low.
I've been studying hashing in C/C++ and tried to replicate the md5sum command in Linux. After analysing the source code, it seems that md5sum relies on the md5 library's md5_stream function. I've approximated md5_stream from the md5.h library in the code below, and it runs in ~13-14 seconds. Calling md5_stream directly also takes ~13-14 seconds. md5sum itself runs in 4 seconds. What have the GNU people done to get that speed out of the code?
The md5.h/md5.c code is available in the CoreUtils source code.
#include <QtCore/QCoreApplication>
#include <QtCore/QDebug>
#include <iostream>
#include <iomanip>
#include <fstream>
#include <stdexcept> // std::runtime_error
#include <cstdio>    // FILE, fopen, fread
#include <cstdlib>   // malloc, free
#include "md5.h"
#define BLOCKSIZE 32784
int main()
{
FILE *fpinput;
if ((fpinput = fopen("/dev/sdb", "rb")) == 0) {
throw std::runtime_error("input file doesn't exist");
}
struct md5_ctx ctx;
size_t sum;
char *buffer = (char*)malloc (BLOCKSIZE + 72);
unsigned char *resblock = (unsigned char*)malloc (16);
if (!buffer)
return 1;
md5_init_ctx (&ctx);
size_t n;
sum = 0;
while (!ferror(fpinput) && !feof(fpinput)) {
n = fread (buffer + sum, 1, BLOCKSIZE - sum, fpinput);
if (n == 0){
break;
}
sum += n;
if (sum == BLOCKSIZE) {
md5_process_block (buffer, BLOCKSIZE, &ctx);
sum = 0;
}
}
if (n == 0 && ferror (fpinput)) {
free (buffer);
return 1;
}
/* Process any remaining bytes. */
if (sum > 0){
md5_process_bytes (buffer, sum, &ctx);
}
/* Construct result in desired memory. */
md5_finish_ctx (&ctx, resblock);
free (buffer);
for (int x = 0; x < 16; ++x){
std::cout << std::setfill('0') << std::setw(2) << std::hex << static_cast<uint16_t>(resblock[x]);
std::cout << " ";
}
std::cout << std::endl;
free(resblock);
return 0;
}
EDIT: It was a default mkspec problem in Fedora 19 64-bit.
fread() is convenient, but don't use fread() if you care about performance. fread() copies from the OS into a libc buffer, and then into your buffer. This extra copying costs CPU cycles and cache.
For better performance, use open() and then read() to avoid the extra copy. Make sure your read() calls are multiples of the block size, but smaller than your CPU cache.
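A minimal sketch of that middle option (the 64 KB chunk size is an assumption; tune it to your hardware):
#include <fcntl.h>
#include <unistd.h>
#define CHUNK (64 * 1024)
/* Read a whole file in fixed-size chunks with no stdio buffering. */
static long long slurp(const char *path)
{
    char buf[CHUNK];
    long long total = 0;
    int fd = open(path, O_RDONLY);
    if (fd == -1)
        return -1;
    for (;;) {
        ssize_t n = read(fd, buf, sizeof buf);
        if (n <= 0)
            break;  /* 0 = EOF, negative = error */
        total += n; /* hash/process buf[0..n) here, e.g. md5_process_block */
    }
    close(fd);
    return total;
}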
For best performance, use mmap() to map the file directly into RAM.
If you try something like the below code, it should go faster.
// compile gcc mmap_md5.c -lgcrypt
#include <sys/mman.h>
#include <sys/stat.h>
#include <sys/ioctl.h> // ioctl
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <gcrypt.h>
#include <linux/fs.h>  // BLKGETSIZE64
#define handle_error(msg) \
do { perror(msg); exit(EXIT_FAILURE); } while (0)
int main(int argc, char *argv[])
{
char *addr;
int fd;
struct stat sb;
off_t offset, pa_offset;
size_t length;
ssize_t s;
unsigned char digest[16];
char digest_ascii[32+1] = {0,};
int digest_length = gcry_md_get_algo_dlen (GCRY_MD_MD5);
int i;
if (argc < 3 || argc > 4) {
fprintf(stderr, "%s file offset [length]\n", argv[0]);
exit(EXIT_FAILURE);
}
fd = open(argv[1], O_RDONLY);
if (fd == -1)
handle_error("open");
if (fstat(fd, &sb) == -1) /* To obtain file size */
handle_error("fstat");
offset = atoi(argv[2]);
pa_offset = offset & ~(sysconf(_SC_PAGE_SIZE) - 1);
if (S_ISBLK(sb.st_mode)) {
// block device. use ioctl to find length
ioctl(fd, BLKGETSIZE64, &length);
} else {
/* offset for mmap() must be page aligned */
if (offset >= sb.st_size) {
    fprintf(stderr, "offset is past end of file: size=%lld, offset=%lld\n",
            (long long) sb.st_size, (long long) offset);
    exit(EXIT_FAILURE);
}
if (argc == 4) {
length = atoi(argv[3]);
if (offset + length > sb.st_size)
length = sb.st_size - offset;
/* Can't display bytes past end of file */
} else { /* No length arg ==> display to end of file */
length = sb.st_size - offset;
}
}
printf("length= %zd\n", length);
addr = mmap(NULL, length + offset - pa_offset, PROT_READ,
MAP_PRIVATE, fd, pa_offset);
if (addr == MAP_FAILED)
handle_error("mmap");
gcry_md_hash_buffer(GCRY_MD_MD5, digest, addr + offset - pa_offset, length);
for (i=0; i < digest_length; i++) {
sprintf(digest_ascii+(i*2), "%02x", digest[i]);
}
printf("hash=%s\n", digest_ascii);
exit(EXIT_SUCCESS);
}
It turned out to be an error in the Qt mkspecs regarding an optimization flag not being set properly.
Using the readlink function, suggested as a solution to How do I find the location of the executable in C?, how would I get the path into a char array? Also, what do the variables buf and bufsize represent, and how do I initialize them?
EDIT: I am trying to get the path of the currently running program, just like the question linked above. The answer to that question said to use readlink("/proc/self/exe"). I do not know how to implement that in my program. I tried:
char buf[1024];
string var = readlink("/proc/self/exe", buf, bufsize);
This is obviously incorrect.
See Use the readlink() function properly for the correct use of the readlink function.
If you have your path in a std::string, you could do something like this:
#include <unistd.h>
#include <limits.h>
#include <string>
std::string do_readlink(std::string const& path) {
    char buff[PATH_MAX];
    ssize_t len = ::readlink(path.c_str(), buff, sizeof(buff)-1);
    if (len != -1) {
        buff[len] = '\0';
        return std::string(buff);
    }
    /* handle error condition */
    return "";
}
If you're only after a fixed path:
std::string get_selfpath() {
    char buff[PATH_MAX];
    ssize_t len = ::readlink("/proc/self/exe", buff, sizeof(buff)-1);
    if (len != -1) {
        buff[len] = '\0';
        return std::string(buff);
    }
    /* handle error condition */
    return "";
}
To use it:
int main()
{
std::string selfpath = get_selfpath();
std::cout << selfpath << std::endl;
return 0;
}
The accepted answer is almost correct, except that you can't rely on PATH_MAX, because it is
not guaranteed to be defined per POSIX if the system does not have such
limit.
(From readlink(2) manpage)
Also, when it's defined it doesn't always represent the "true" limit. (See http://insanecoding.blogspot.fr/2007/11/pathmax-simply-isnt.html )
The readlink manpage also gives a way to do this for symlinks:
Using a statically sized buffer might not provide enough room for the
symbolic link contents. The required size for the buffer can be
obtained from the stat.st_size value returned by a call to lstat(2) on
the link. However, the number of bytes written by readlink() and
readlinkat() should be checked to make sure that the size of the
symbolic link did not increase between the calls.
However, in the case of /proc/self/exe, as for most /proc files, stat.st_size would be 0. The only remaining solution I see is to grow the buffer until the result fits.
I suggest the use of vector<char> as follows for this purpose:
#include <unistd.h>
#include <string>
#include <vector>
std::string get_selfpath()
{
    std::vector<char> buf(400);
    ssize_t len;
    do
    {
        buf.resize(buf.size() + 100);
        len = ::readlink("/proc/self/exe", &(buf[0]), buf.size());
    } while (len == static_cast<ssize_t>(buf.size())); // result may be truncated: grow and retry
    if (len > 0)
    {
        buf[len] = '\0';
        return (std::string(&(buf[0])));
    }
    /* handle error */
    return "";
}
Let's look at what the manpage says:
readlink() places the contents of the symbolic link path in the buffer
buf, which has size bufsiz. readlink does not append a NUL character to
buf.
OK. Should be simple enough. Given your buffer of 1024 chars:
char buf[1024];
/* The manpage says it won't null terminate. Let's zero the buffer. */
memset(buf, 0, sizeof(buf));
/* Note we use sizeof(buf)-1 since we may need an extra char for NUL. */
if (readlink("/proc/self/exe", buf, sizeof(buf)-1) < 0)
{
/* There was an error... Perhaps the path does not exist
* or the buffer is not big enough. errno has the details. */
perror("readlink");
return -1;
}
char *
readlink_malloc (const char *filename)
{
int size = 100;
char *buffer = NULL;
while (1)
{
buffer = (char *) xrealloc (buffer, size);
int nchars = readlink (filename, buffer, size);
if (nchars < 0)
{
free (buffer);
return NULL;
}
if (nchars < size)
return buffer;
size *= 2;
}
}
Taken from: http://www.delorie.com/gnu/docs/glibc/libc_279.html
#include <stdlib.h>
#include <unistd.h>
static char *exename(void)
{
char *buf;
char *newbuf;
size_t cap;
ssize_t len;
buf = NULL;
for (cap = 64; cap <= 16384; cap *= 2) {
newbuf = realloc(buf, cap);
if (newbuf == NULL) {
break;
}
buf = newbuf;
len = readlink("/proc/self/exe", buf, cap);
if (len < 0) {
break;
}
if ((size_t)len < cap) {
buf[len] = 0;
return buf;
}
}
free(buf);
return NULL;
}
#include <stdio.h>
int main(void)
{
char *e = exename();
printf("%s\n", e ? e : "unknown");
free(e);
return 0;
}
This uses the traditional "when you don't know the right buffer size, reallocate increasing powers of two" trick. We assume that allocating less than 64 bytes for a pathname is not worth the effort. We also assume that an executable pathname as long as 16384 (2**14) bytes has to indicate some kind of anomaly in how the program was installed, and it's not useful to know the pathname as we'll soon encounter bigger problems to worry about.
There is no need to bother with constants like PATH_MAX. Reserving so much memory is overkill for almost all pathnames, and as noted in another answer, it's not guaranteed to be the actual upper limit anyway. For this application, we can pick a common-sense upper limit such as 16384. Even for applications with no common-sense upper limit, reallocating increasing powers of two is a good approach. You only need log n calls for a n-byte result, and the amount of memory capacity you waste is proportional to the length of the result. It also avoids race conditions where the length of the string changes between the realloc() and the readlink().