statfs returns strange values - c++

I'm currently trying to get the disk size in bytes and the used space in bytes with statfs().
I wrote a small function, but I get really strange values.
(I'm working on a 32-bit Ubuntu system.)
Here is the code:
#include <sys/vfs.h>   // statfs
#include <cstdio>
#include <cstring>
#include <cerrno>

bool
CheckDiskSpace( const CLString &devPath, ulonglong &diskSize, ulonglong &totalFreeBytes )
{
    bool retVal = false;
    struct statfs fs;
    if( ( statfs( devPath.c_str(), &fs ) ) < 0 ) {
        printf( "Failed to stat %s: %s\n", devPath.c_str(), strerror( errno ) );
        return false;
    } else {
        diskSize = fs.f_blocks * fs.f_bsize;
        totalFreeBytes = fs.f_bfree * fs.f_bsize;
        retVal = true;
    }
    return retVal;
}

int main()
{
    ulonglong diskSize, totalFreeBytes;
    CheckDiskSpace( "/dev/sda5", diskSize, totalFreeBytes );
    printf( "Disk size: %llu Byte\n", diskSize );
    printf( "Free size: %llu Byte\n", totalFreeBytes );
}
And I get:
Disk size: 1798447104 Byte
Free size: 1798443008 Byte
I really don't understand this result, because with the df command I get:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda5 111148848 47454952 58047832 45% /
udev 1756296 4 1756292 1% /dev
tmpfs 705324 912 704412 1% /run
none 5120 0 5120 0% /run/lock
none 1763300 1460 1761840 1% /run/shm
Any help is very much appreciated!
PS: I have a 120 GB SSD and my partition is ext4.
EDIT: ulonglong is a predefined type:
typedef unsigned long long ulonglong;

You are getting the result for the udev tmpfs mounted on /dev, because the path /dev/sda5 is a device node that lives on that filesystem: statfs() reports on the filesystem containing the path you pass, not on the device the node refers to. (Check your df output: 1756296 1K-blocks for /dev × 1024 = 1798447104 bytes, exactly the "disk size" you got.) If you want the numbers for your root filesystem, just use / or any path not under /dev or /run.
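For example, a minimal sketch of the same query pointed at the mount point instead of the device node (the 64-bit casts are an extra precaution I've added: on a 32-bit system the block-count fields can overflow when multiplied by the block size):

#include <sys/vfs.h>
#include <cstdio>

int main()
{
    struct statfs fs;
    // Query the filesystem mounted at /, not the /dev/sda5 device node.
    if( statfs( "/", &fs ) < 0 ) {
        perror( "statfs" );
        return 1;
    }
    // Promote to 64 bits before multiplying to avoid overflow.
    unsigned long long diskSize  = (unsigned long long)fs.f_blocks * fs.f_bsize;
    unsigned long long freeBytes = (unsigned long long)fs.f_bfree  * fs.f_bsize;
    printf( "Disk size: %llu Byte\n", diskSize );
    printf( "Free size: %llu Byte\n", freeBytes );
    return 0;
}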

I think what you want is:
CheckDiskSpace( "/dev/sda5", &diskSize, &totalFreeBytes );
because diskSize and totalFreeBytes are not changed by CheckDiskSpace in your code.

Related

c++ application fails allocating more hugepages than a certain limit

Overview
I have a C++ application that reads a large amount of data (~1 TB). I run it using huge pages (614400 pages at 2 MB each) and this works - until it hits 128 GB.
For testing I created a simple application in C++ that allocates chunks of 2 MB until it can't.
The application is run using:
LD_PRELOAD=/usr/lib64/libhugetlbfs.so HUGETLB_MORECORE=yes ./a.out
While it runs I monitor the number of free huge pages (from /proc/meminfo).
I can see that it consumes huge pages at the expected rate.
However, the application crashes with a std::bad_alloc exception at 128 GB allocated (or 65536 pages).
If I run two or more instances at the same time, they each crash at 128 GB.
If I decrease the cgroup limit to something small, say 16 GB, the app correctly crashes at that point with a 'bus error'.
Am I missing something trivial? Please look below for details.
I'm running out of ideas...
Details
Machine, OS and software:
CPU : Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Memory : 1.5T
Kernel : 3.10.0-693.5.2.el7.x86_64 #1 SMP Fri Oct 20 20:32:50 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
OS : CentOS Linux release 7.4.1708 (Core)
hugetlbfs : 2.16-12.el7
gcc : 7.2.1 20170829
Simple test code I used (allocates chunks of 2 MB until the number of free huge pages drops below a limit):
#include <iostream>
#include <fstream>
#include <vector>
#include <array>
#include <string>

#define MEM512K 512*1024ul
#define MEM2M   4*MEM512K

// data block
template <size_t N>
struct DataBlock {
    char data[N];
};

// Hugepage info
struct HugePageInfo {
    size_t memfree;
    size_t total;
    size_t free;
    size_t size;
    size_t used;
    double used_size;
};

// dump hugepage info
void dumpHPI(const HugePageInfo & hpi) {
    std::cout << "HugePages total : " << hpi.total << std::endl;
    std::cout << "HugePages free  : " << hpi.free << std::endl;
    std::cout << "HugePages size  : " << hpi.size << std::endl;
}

// dump hugepage info in one line
void dumpHPIline(const size_t i, const HugePageInfo & hpi) {
    std::cout << i << " "
              << hpi.memfree << " "
              << hpi.total-hpi.free << " "
              << hpi.free << " "
              << hpi.used_size
              << std::endl;
}

// get hugepage info from /proc/meminfo
void getHugePageInfo( HugePageInfo & hpi ) {
    std::ifstream fmeminfo;
    fmeminfo.open("/proc/meminfo",std::ifstream::in);
    std::string line;
    size_t n=0;
    while (fmeminfo.good()) {
        std::getline(fmeminfo,line);
        const size_t sep = line.find_first_of(':');
        if (sep==std::string::npos) continue;
        const std::string lblstr = line.substr(0,sep);
        const size_t endpos = line.find(" kB");
        const std::string trmstr = line.substr(sep+1,(endpos==std::string::npos ? line.size() : endpos-sep-1));
        const size_t startpos = trmstr.find_first_not_of(' ');
        const std::string valstr = (startpos==std::string::npos ? trmstr : trmstr.substr(startpos) );
        if (lblstr=="HugePages_Total") {
            hpi.total = std::stoi(valstr);
        } else if (lblstr=="HugePages_Free") {
            hpi.free = std::stoi(valstr);
        } else if (lblstr=="Hugepagesize") {
            hpi.size = std::stoi(valstr);
        } else if (lblstr=="MemFree") {
            hpi.memfree = std::stoi(valstr);
        }
    }
    hpi.used = hpi.total - hpi.free;
    hpi.used_size = double(hpi.used*hpi.size)/1024.0/1024.0;
}

// allocate data
void test_rnd_data() {
    typedef DataBlock<MEM2M> elem_t;

    HugePageInfo hpi;
    getHugePageInfo(hpi);
    dumpHPIline(0,hpi);

    std::array<elem_t *,MEM512K> memmap;
    for (size_t i=0; i<memmap.size(); i++) memmap[i]=nullptr;

    for (size_t i=0; i<memmap.size(); i++) {
        // allocate a new 2M block
        memmap[i] = new elem_t();
        // output progress
        if (i%1000==0) {
            getHugePageInfo(hpi);
            dumpHPIline(i,hpi);
            if (hpi.free<1000) break;
        }
    }

    std::cout << "Cleaning up...." << std::endl;
    for (size_t i=0; i<memmap.size(); i++) {
        if (memmap[i]==nullptr) continue;
        delete memmap[i];
    }
}

int main(int argc, const char** argv) {
    test_rnd_data();
}
Huge pages are set up at boot time to provide 614400 pages at 2 MB each.
From /proc/meminfo:
MemTotal: 1584978368 kB
MemFree: 311062332 kB
MemAvailable: 309934096 kB
Buffers: 3220 kB
Cached: 613396 kB
SwapCached: 0 kB
Active: 556884 kB
Inactive: 281648 kB
Active(anon): 224604 kB
Inactive(anon): 15660 kB
Active(file): 332280 kB
Inactive(file): 265988 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 2097148 kB
SwapFree: 2097148 kB
Dirty: 0 kB
Writeback: 0 kB
AnonPages: 222280 kB
Mapped: 89784 kB
Shmem: 18348 kB
Slab: 482556 kB
SReclaimable: 189720 kB
SUnreclaim: 292836 kB
KernelStack: 11248 kB
PageTables: 14628 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 165440732 kB
Committed_AS: 1636296 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 7789100 kB
VmallocChunk: 33546287092 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
HugePages_Total: 614400
HugePages_Free: 614400
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 341900 kB
DirectMap2M: 59328512 kB
DirectMap1G: 1552941056 kB
Limits from ulimit:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 6191203
max locked memory (kbytes, -l) 1258291200
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
cgroup limit:
> cat /sys/fs/cgroup/hugetlb/hugetlb.2MB.limit_in_bytes
9223372036854771712
Tests
Output when running the test code using HUGETLB_DEBUG=1:
...
libhugetlbfs [abc:185885]: INFO: Attempting to map 2097152 bytes
libhugetlbfs [abc:185885]: INFO: ... = 0x1ffb200000
libhugetlbfs [abc:185885]: INFO: hugetlbfs_morecore(2097152) = ...
libhugetlbfs [abc:185885]: INFO: heapbase = 0xa00000, heaptop = 0x1ffb400000, mapsize = 1ffaa00000, delta=2097152
libhugetlbfs [abc:185885]: INFO: Attempting to map 2097152 bytes
libhugetlbfs [abc:185885]: WARNING: New heap segment map at 0x1ffb400000 failed: Cannot allocate memory
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)
Using strace:
...
mmap(0x1ffb400000, 2097152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_HUGETLB, -1, 0x1ffa200000) = 0x1ffb400000
mmap(0x1ffb600000, 2097152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_HUGETLB, -1, 0x1ffa400000) = 0x1ffb600000
mmap(0x1ffb800000, 2097152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_HUGETLB, -1, 0x1ffa600000) = 0x1ffb800000
mmap(0x1ffba00000, 2097152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_HUGETLB, -1, 0x1ffa800000) = 0x1ffba00000
mmap(0x1ffbc00000, 2097152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_HUGETLB, -1, 0x1ffaa00000) = -1 ENOMEM (Cannot allocate memory)
write(2, "libhugetlbfs", 12) = 12
write(2, ": WARNING: New heap segment map "..., 79) = 79
mmap(NULL, 3149824, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 134217728, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 67108864, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 134217728, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 67108864, PROT_NONE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_NORESERVE, -1, 0) = -1 ENOMEM (Cannot allocate memory)
mmap(NULL, 2101248, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = -1 ENOMEM (Cannot allocate memory)
write(2, "terminate called after throwing "..., 48) = 48
write(2, "std::bad_alloc", 14) = 14
write(2, "'\n", 2) = 2
write(2, " what(): ", 11) = 11
write(2, "std::bad_alloc", 14) = 14
write(2, "\n", 1) = 1
rt_sigprocmask(SIG_UNBLOCK, [ABRT], NULL, 8) = 0
gettid() = 188617
tgkill(188617, 188617, SIGABRT) = 0
--- SIGABRT {si_signo=SIGABRT, si_code=SI_TKILL, si_pid=188617, si_uid=1001} ---
Finally in /proc/pid/numa_maps:
...
1ffb000000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 kernelpagesize_kB=2048
1ffb200000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 kernelpagesize_kB=2048
1ffb400000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 kernelpagesize_kB=2048
1ffb600000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 kernelpagesize_kB=2048
1ffb800000 default file=/anon_hugepage\040(deleted) huge anon=1 dirty=1 N1=1 kernelpagesize_kB=2048
...
However the application crashes with a std::bad_alloc exception at 128G allocated (or 65536 pages).
You are allocating too many small mappings: there is a per-process limit on the number of memory map segments, controlled by:
sysctl -n vm.max_map_count
Your test tries to create up to 512 * 1024 == 524288 mappings (one 2 MB heap-segment map per allocated block, as your strace output shows), plus one more for the array, but the default value of vm.max_map_count is only 65530 - which matches the roughly 65536 pages you see mapped before the crash.
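As a sanity check, you can count the mappings the process accumulates while it runs (each line of /proc/<pid>/maps is one mapping) and watch the count hit the ceiling:

wc -l /proc/$(pidof a.out)/maps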
You can change it with:
sysctl -w vm.max_map_count=3000000
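Note that sysctl -w does not survive a reboot; to make the change permanent, add vm.max_map_count=3000000 to /etc/sysctl.conf (or a file under /etc/sysctl.d/).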
Or you could allocate bigger segments in your code, so the same amount of memory needs far fewer mappings.
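For instance, a minimal sketch of that second option (not the original code; it assumes huge pages are reserved at boot as above): one large MAP_HUGETLB region costs a single map entry, where growing the heap 2 MB at a time cost one entry per block:

#include <sys/mman.h>
#include <cstdio>

int main()
{
    // One 1 GB region backed by 2 MB huge pages: a single mapping,
    // where 512 separate 2 MB heap segments would use 512 map entries.
    const size_t sz = 1024ul * 1024 * 1024;
    void *p = mmap(nullptr, sz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    // ... use the memory ...
    munmap(p, sz);
    return 0;
}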

can't configure hardware parameters on ALSA Raspberry Pi C application

I am trying to write an ALSA application for recording audio. When I set some parameters and then print them back to the screen, I get default numbers that I can't change:
#include <alsa/asoundlib.h>

using namespace std;

typedef struct {
    int audio;
    int recording;
    void *cons;
    snd_pcm_t *inhandle;
    snd_pcm_t *outhandle;
    unsigned long sampleIndex;
    unsigned long inlen;
    unsigned long sampleRate;
} audio_t;

static audio_t aud;

void aboutAlsa(snd_pcm_t *handle, snd_pcm_hw_params_t *params) {
    unsigned int val, val2;
    snd_pcm_format_t val3;
    int dir;
    snd_pcm_uframes_t frames;
    printf("ALSA library version: %s\n", SND_LIB_VERSION_STR);
    printf("PCM handle name = '%s'\n", snd_pcm_name(handle));
    printf("PCM state = %s\n", snd_pcm_state_name(snd_pcm_state(handle)));
    snd_pcm_hw_params_get_access(params, (snd_pcm_access_t *) &val);
    printf("access type = %s\n", snd_pcm_access_name((snd_pcm_access_t)val));
    snd_pcm_hw_params_get_format(params, &val3);
    printf("format = '%s' (%s)\n", snd_pcm_format_name(val3),
           snd_pcm_format_description(val3));
    snd_pcm_hw_params_get_subformat(params, (snd_pcm_subformat_t *)&val);
    printf("subformat = '%s' (%s)\n", snd_pcm_subformat_name((snd_pcm_subformat_t)val),
           snd_pcm_subformat_description((snd_pcm_subformat_t)val));
    snd_pcm_hw_params_get_channels(params, &val);
    printf("channels = %d\n", val);
    snd_pcm_hw_params_get_rate(params, &val, &dir);
    printf("rate = %d bps\n", val);
    snd_pcm_hw_params_get_period_time(params, &val, &dir);
    printf("period time = %d us\n", val);
    snd_pcm_hw_params_get_period_size(params, &frames, &dir);
    printf("period size = %d frames\n", (int)frames);
    snd_pcm_hw_params_get_buffer_time(params, &val, &dir);
    printf("buffer time = %d us\n", val);
    snd_pcm_hw_params_get_buffer_size(params, (snd_pcm_uframes_t *) &val);
    printf("buffer size = %d frames\n", val);
    snd_pcm_hw_params_get_periods(params, &val, &dir);
    printf("periods per buffer = %d frames\n", val);
    snd_pcm_hw_params_get_rate_numden(params, &val, &val2);
    printf("exact rate = %d/%d bps\n", val, val2);
    val = snd_pcm_hw_params_get_sbits(params);
    printf("significant bits = %d\n", val);
    return;
}

static int openKnownAudio(int record) {
    int rc;
    int SAMPLERATE = 16000;
    unsigned int val;
    int dir = 0;
    snd_pcm_t *handle;
    snd_pcm_hw_params_t *hw_params = NULL;
    snd_pcm_uframes_t frames;
    size_t esz = 256;
    char err[esz];

    /* Open PCM device for recording (capture). */
    if (record) {
        if ((rc = snd_pcm_open(&aud.inhandle, "default", SND_PCM_STREAM_CAPTURE, 0)) < 0) {
            snprintf(err, esz, "unable to open pcm device for recording: %s\n", snd_strerror(rc));
        }
        handle = aud.inhandle;
    } else {
        if ((rc = snd_pcm_open(&aud.outhandle, "default", SND_PCM_STREAM_PLAYBACK, 0)) < 0) {
            snprintf(err, esz, "unable to open pcm device for playback: %s\n", snd_strerror(rc));
        }
        handle = aud.outhandle;
    }

    /* Configure hardware parameters */
    if ((rc = snd_pcm_hw_params_malloc(&hw_params)) < 0) {
        snprintf(err, esz, "unable to malloc hw_params: %s\n", snd_strerror(rc));
    }
    if ((rc = snd_pcm_hw_params_any(handle, hw_params)) < 0) {
        snprintf(err, esz, "unable to setup hw_params: %s\n", snd_strerror(rc));
    }
    if ((rc = snd_pcm_hw_params_set_access(handle, hw_params, SND_PCM_ACCESS_RW_INTERLEAVED)) < 0) {
        snprintf(err, esz, "unable to set access mode: %s\n", snd_strerror(rc));
    }
    if ((rc = snd_pcm_hw_params_set_format(handle, hw_params, SND_PCM_FORMAT_S16_LE)) < 0) {
        snprintf(err, esz, "unable to set format: %s\n", snd_strerror(rc));
    }
    if ((rc = snd_pcm_hw_params_set_channels(handle, hw_params, 1)) < 0) {
        snprintf(err, esz, "unable to set channels: %s\n", snd_strerror(rc));
    }
    val = SAMPLERATE;
    dir = 0;
    if ((rc = snd_pcm_hw_params_set_rate(handle, hw_params, SAMPLERATE, 0)) < 0) {
        snprintf(err, esz, "unable to set samplerate: %s\n", snd_strerror(rc));
    }
    if (val != SAMPLERATE) {
        snprintf(err, esz, "unable to set requested samplerate: requested=%i got=%i\n", SAMPLERATE, val);
    }
    frames = 64;
    if ((rc = snd_pcm_hw_params_set_period_size_near(handle, hw_params, &frames, &dir)) < 0) {
        snprintf(err, esz, "unable to set period size: %s\n", snd_strerror(rc));
    }
    frames = 4096;
    if ((rc = snd_pcm_hw_params_set_buffer_size_near(handle, hw_params, &frames)) < 0) {
        snprintf(err, esz, "unable to set buffer size: %s\n", snd_strerror(rc));
    }
    if ((rc = snd_pcm_hw_params(handle, hw_params)) < 0) {
        snprintf(err, esz, "unable to set hw parameters: %s\n", snd_strerror(rc));
    }
    aboutAlsa(handle, hw_params);
    snd_pcm_hw_params_free(hw_params);
    aud.recording = (record) ? 1 : 0;
    aud.audio = 1;
    return 1;
}
This is what I get on the Raspberry Pi when I run it:
ALSA library version: 1.0.28
PCM handle name = 'default'
PCM state = PREPARED
access type = RW_INTERLEAVED
format = 'S16_LE' (Signed 16 bit Little Endian)
subformat = 'STD' (Standard)
channels = 1
rate = 16000 bps
period time = 21333 us
period size = 341 frames
buffer time = 256000 us
buffer size = 4096 frames
periods per buffer = 4096 frames
exact rate = 16000/1 bps
significant bits = 16
And this is what I get when I run it on a desktop PC:
ALSA library version: 1.0.28
PCM handle name = 'default'
PCM state = PREPARED
access type = RW_INTERLEAVED
format = 'S16_LE' (Signed 16 bit Little Endian)
subformat = 'STD' (Standard)
channels = 1
rate = 16000 bps
period time = 4000 us
period size = 64 frames
buffer time = 256000 us
buffer size = 4096 frames
periods per buffer = 64 frames
exact rate = 16000/1 bps
significant bits = 16
As you can see, I'm trying to set the period size to 64 and getting back 341. This value only changes when I change the rate; say I set the rate to 44100, this is what I get back:
rate = 44100 bps
period time = 21333 us
period size = 940 frames
buffer time = 85328 us
buffer size = 3763 frames
periods per buffer = 3763 frames
On the desktop PC this doesn't happen. I tried to trace these functions down in alsa-lib but got lost there; I also tried different sound cards and still get the same result.
In the case of PulseAudio you set the parameters on the PulseAudio device, not on the real device.
The real hardware can have limitations that you must react to correctly.
If you'd like to see the min/max boundary of some parameter, you can do the following.
Using the snd_pcm_hw_params_dump function:
snd_pcm_hw_params_t *params;
snd_pcm_t *pcm_handle;
snd_output_t *log;   /* output object needed by snd_pcm_hw_params_dump */
int pcm;

/* Attach the output object to stdout so the dump is printed there */
snd_output_stdio_attach(&log, stdout, 0);

/* Open the PCM device in playback mode (PCM_DEVICE is e.g. "default") */
pcm = snd_pcm_open(&pcm_handle, PCM_DEVICE, SND_PCM_STREAM_PLAYBACK, 0);
if (pcm < 0) {
    printf("ERROR: Can't open \"%s\" PCM device. %s\n", PCM_DEVICE, snd_strerror(pcm));
    goto error_handling;
}

/* Allocate the parameters object and fill it with default values */
snd_pcm_hw_params_alloca(&params);
pcm = snd_pcm_hw_params_any(pcm_handle, params);
if (pcm < 0) {
    printf("Broken configuration for this PCM: no configurations available\n");
    goto error_handling;
}

printf("hw boundary params ***********************\n");
snd_pcm_hw_params_dump(params, log);
printf("*******************************************\n");
The same thing using the min/max functions:
snd_pcm_t* pcm;
snd_pcm_hw_params_t* hw_parameters;
int parameter;

// ... open device and allocate hw params here

/* Fill params with a full configuration space for a PCM.
   The configuration space will be filled with all possible
   ranges for the PCM device. */
snd_pcm_hw_params_any(pcm, hw_parameters);

/* please substitute <parameter name> with a real parameter name,
   for example buffer_size, buffer_time, rate, etc. */
snd_pcm_hw_params_get_<parameter name>_min(hw_parameters, &parameter);
printf("<parameter name> min : %d\n", parameter);
snd_pcm_hw_params_get_<parameter name>_max(hw_parameters, &parameter);
printf("<parameter name> max : %d\n", parameter);
I faced the same issue when I tried to set the period size.
Here are my boundaries (two different PCM devices):
log #1
hw boundary params ***********************
ACCESS: RW_INTERLEAVED
FORMAT: U8 S16_LE S16_BE S24_LE S24_BE S32_LE S32_BE FLOAT_LE FLOAT_BE MU_LAW A_LAW S24_3LE S24_3BE
SUBFORMAT: STD
SAMPLE_BITS: [8 32]
FRAME_BITS: [8 1024]
CHANNELS: [1 32]
RATE: [1 192000]
PERIOD_TIME: (5 4294967295)
PERIOD_SIZE: [1 1398102)
PERIOD_BYTES: [128 1398102)
PERIODS: [3 1024]
BUFFER_TIME: (15 4294967295]
BUFFER_SIZE: [3 4194304]
BUFFER_BYTES: [384 4194304]
TICK_TIME: ALL
*******************************************
log#2
**********************************DEBUG
period time min : 21333
period time max : 21334
buffer time min : 1
buffer time max : -1
channels min : 1
channels max : 10000
rate min : 4000
rate max : -1
period size min : 85
period size max : 91628833
buffer size min : 170
buffer size max : 274877906
**********************************DEBUG_END
Here we can't change the period size freely because of the period time limitation: the period time is pinned at 21333 us, so the period size in frames is dictated by the rate (16000 Hz × 0.021333 s ≈ 341 frames, 44100 Hz × 0.021333 s ≈ 940 frames - exactly the values you got back).

Writing data bigger than page size into shared memory

My processor has a page size of 4096 bytes. I need to write data into shared memory, and this data has a size of 7168 bytes (7 KB).
I used ftruncate and allocated 8192 bytes (2 * page_size) so that there would be sufficient memory.
shmem_fd = shm_open( TRIAL_SHMEM_FILE, O_RDWR, S_IRUSR | S_IWUSR);
if( shmem_fd == -1 )
{
    printf("Create_shmem, open failed:%s",strerror( errno)); PASLOG return false;
}
if( ftruncate( shmem_fd, 8192) == -1 )
{
    printf("Create_shmem, ftruncate failed:%s",strerror( errno)); PASLOG return false;
}
I am writing the structure as below. [767*10] bytes is less than [2*page_size], but the code below causes a segmentation fault.
If I try to write [767*5], which is within [1*page_size], there is no crash. I can't work out the actual cause of the crash. Is there a different way to proceed?
// data to be written into shared memory
list_data item[10]; // struct size is 767 bytes
for (uiCounter=DEFAULT_VALUE_ZERO; uiCounter < 10; ++uiCounter)
{
    memset(&item[uiCounter], 0, sizeof(list_data));
}

list_data* list_shmem;
list_shmem = (list_data *) mmap(NULL, sizeof(list_data) * 10, PROT_READ | PROT_WRITE, MAP_SHARED, shmem_fd, 0 );
if(list_shmem == MAP_FAILED)
{
    printf("mmap failed: %s", strerror(errno));
    return false;
}

// write to shared mem
for (uiCounter = DEFAULT_VALUE_ZERO; uiCounter < 10; ++uiCounter)
{
    memcpy ( list_shmem, &item[uiCounter], sizeof(person) );
    ++list_shmem;
}

munmap(list_shmem, sizeof(list_data) * 10);
There are a couple of issues with your code:
You pass the wrong address to munmap in:
list_data* list_shmem;
list_shmem = (list_data *) mmap(...);
for (uiCounter = DEFAULT_VALUE_ZERO; uiCounter < 10; ++uiCounter)
{
    memcpy ( list_shmem, &item[uiCounter], sizeof(person) );
    ++list_shmem; // <---- invalidates list_shmem original value
}
munmap(list_shmem, sizeof(list_data) * 10);
You specify the wrong size to memcpy in:
memcpy ( list_shmem, &item[uiCounter], sizeof(person) );
A fix is:
memcpy ( list_shmem, &item[uiCounter], sizeof(item[uiCounter]) );
One fix for both issues would be to use the standard algorithm std::copy instead of the hand-coded loop:
std::copy(item + DEFAULT_VALUE_ZERO, item + 10, list_shmem);
Bonus point:
list_data item[10]; // struct size is 767 bytes
for (uiCounter=DEFAULT_VALUE_ZERO; uiCounter < 10; ++uiCounter)
{
memset(&item[uiCounter], 0, sizeof(list_data));
}
Is the same as:
list_data item[10] = {};
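Putting both fixes together, a minimal sketch of the corrected mapping-and-copy sequence (WriteItems is a hypothetical wrapper name; it assumes the shmem_fd and list_data type from the question, and leaves the base pointer untouched so munmap receives the address mmap returned):

#include <sys/mman.h>
#include <algorithm>
#include <cstdio>
#include <cstring>
#include <cerrno>

bool WriteItems(int shmem_fd)
{
    list_data item[10] = {};   // zero-initialized, replaces the memset loop

    list_data *base = (list_data *) mmap(NULL, sizeof(list_data) * 10,
                                         PROT_READ | PROT_WRITE, MAP_SHARED, shmem_fd, 0);
    if (base == MAP_FAILED) {
        printf("mmap failed: %s", strerror(errno));
        return false;
    }

    std::copy(item, item + 10, base);      // copies whole list_data elements
    munmap(base, sizeof(list_data) * 10);  // unmap with the original address
    return true;
}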

shmget fails with ENOMEM even though enough pages available

We're getting odd behaviour when trying to allocate an approximately 10 MB block of memory from huge pages. The system is SL6.4 64-bit, with a recent Intel CPU and 64 GB RAM.
Initially we allocated 20 huge pages, which should be enough.
$ cat /proc/meminfo | grep Huge
AnonHugePages: 0 kB
HugePages_Total: 20
HugePages_Free: 20
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Other huge page settings:
/proc/sys/kernel/shmall = 4294967296
/proc/sys/kernel/shmmax = 68719476736
/proc/sys/kernel/shmmni = 4096
/proc/sys/kernel/shm_rmid_forced = 0
shmget fails with ENOMEM. The only explanation I can find for this is in the man page, which states "No memory could be allocated for segment overhead.", but I haven't been able to discover what "segment overhead" is.
On another server with the same number of pages configured shmget returns successfully.
On the problem server we increased the number of huge pages to 100. The allocation then succeeds, but it also creates 64 additional 2 MB segments:
$ ipcs -m
------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x0091efab 10223638 rsprod 600 2097152 1
0x0092efab 10256407 rsprod 600 2097152 1
0x0093efab 10289176 rsprod 600 2097152 1
0x0094efab 10321945 rsprod 600 2097152 1
0x0095efab 10354714 rsprod 600 2097152 1
0x0096efab 10387483 rsprod 600 2097152 1
...
0x00cdefab 12189778 rsprod 600 2097152 1
0x00ceefab 12222547 rsprod 600 2097152 1
0x00cfefab 12255316 rsprod 600 2097152 1
0x00d0efab 12288085 rsprod 600 2097152 1
0x00000000 12320854 rsprod 600 10485760 1
The code calling shmget is below. This is only being called once in the application.
uint64_t GetHugePageSize()
{
    FILE *meminfo = fopen("/proc/meminfo", "r");
    if(meminfo == NULL) {
        return 0;
    }
    char line[256];
    while(fgets(line, sizeof(line), meminfo)) {
        uint64_t zHugePageSize = 0;
        if(sscanf(line, "Hugepagesize: %lu kB", &zHugePageSize) == 1) {
            fclose(meminfo);
            return zHugePageSize*1024;
        }
    }
    fclose(meminfo);
    return 0;
}

char* HugeTableNew(size_t aSize, int& aSharedMemID)
{
    static const uint64_t sHugePageSize = GetHugePageSize();
    uint64_t zSize = aSize;
    // round up to next page size, otherwise shmat fails with EINVAL (22)
    const uint64_t HUGE_PAGE_MASK = sHugePageSize-1;
    if(aSize & HUGE_PAGE_MASK) {
        zSize = (aSize&~HUGE_PAGE_MASK) + sHugePageSize;
    }
    aSharedMemID = shmget(IPC_PRIVATE, zSize, IPC_CREAT|SHM_HUGETLB|SHM_R|SHM_W);
    if(aSharedMemID < 0) {
        perror("shmget");
        return nullptr;
    }
    ...
Does anyone know:
What causes the allocation to fail when there are enough free huge pages available?
What causes the extra 2MB pages to be allocated?
What "segment overhead" is?

Open Raw Disk and Get Size OS X

Using the following code, I'm able to successfully open a raw disk on my machine, but when I get the disk length I get 0 each time...
// Where "Path" is /dev/rdisk1 -- is rdisk1 versus disk1 the proper way to open a raw disk?
Device = open(Path, O_RDWR);
if (Device == -1)
{
    throw xException("Error opening device");
}
And getting size with both of these methods returns 0:
struct stat st;
if (stat(Path, &st) == 0)
    _Length = st.st_size;

// or

_Length = (INT64)lseek(Device, 0, SEEK_END);
lseek(Device, 0, SEEK_SET);
I'm not totally familiar with programming on non-Windows platforms, so please forgive anything that seems odd. My questions here are:
Is this the proper way to open a raw disk under OS X?
What might be causing the disk size to be returned as 0?
The disk in question is an unformatted disk, but for those wanting the info from Disk Utility (with non-important stuff removed):
Name : ST920217 AS Media
Type : Disk
Partition Map Scheme : Unformatted
Disk Identifier : disk1
Media Name : ST920217 AS Media
Media Type : Generic
Writable : Yes
Total Capacity : 20 GB (20,003,880,960 Bytes)
Disk Number : 1
Partition Number : 0
After a little bit of searching through ioctl request codes, I found something that actually works. (For what it's worth, stat() on a device node reports the size of the node itself rather than the media behind it, which is why the earlier attempts returned 0.)
#include <sys/disk.h>
#include <sys/ioctl.h>
#include <fcntl.h>
#include <stdio.h>
#include <inttypes.h>

int main()
{
    // Open disk (open returns an int file descriptor, -1 on failure)
    int dev = open("/dev/disk1", O_RDONLY);
    if (dev == -1) {
        perror("Failed to open disk");
        return -1;
    }
    uint64_t sector_count = 0;
    // Query the number of sectors on the disk
    ioctl(dev, DKIOCGETBLOCKCOUNT, &sector_count);
    uint32_t sector_size = 0;
    // Query the size of each sector
    ioctl(dev, DKIOCGETBLOCKSIZE, &sector_size);
    uint64_t disk_size = sector_count * sector_size;
    printf("%" PRIu64 "\n", disk_size);
    return 0;
}
Something like that should do the trick. I just copied the code I had into this answer, so I'm not sure it compiles as-is, but it should be close.
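One caveat worth adding (an assumption about the setup, not something from the original post): on OS X the /dev/disk* and /dev/rdisk* nodes are owned by root with restrictive permissions, so the program generally needs to run under sudo (or as a user in the operator group); otherwise open() fails with EACCES before you ever reach the ioctl calls.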