Trying to allocate more than 2 GB on Windows 7 - C++

I'm using Windows 7 64-bit with 8 GB of RAM.
I need to allocate more than 2 GB, but I'm getting a runtime error. Here is my piece of code:
#define MAX_PESSOAS 30000000
int i;
double ** totalPessoas = new double *[MAX_PESSOAS];
for(i = 0; i < MAX_PESSOAS; i++)
    totalPessoas[i] = new double[5];
MAX_PESSOAS is set to 30 million here, but I'll need at least 1 billion. (Yes, I know that will take more than 8 GB, but never mind that; I can get the memory, I just need to know how to do the allocation.)
I'm using Visual Studio 2012.

If your application is built as a 64-bit binary, it can address far more than 8 GB without any special steps.
If it is built as a 32-bit binary, you can address up to 3 GB (or 4 GB if you're running 64-bit Windows) by enabling 4-gigabyte tuning and linking with /LARGEADDRESSAWARE, as long as the system supports it.
Your best bet is probably to compile your application as a 64-bit binary, if you know that the operating system it will be running on is 64-bit.
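For illustration, a rough 64-bit sketch (the names ROW_SIZE and pessoas are just placeholders, not from the original code): allocating one contiguous block instead of 30 million separate 5-element rows also avoids most of the per-allocation overhead.

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    const std::size_t MAX_PESSOAS = 30000000;  // 30 million rows
    const std::size_t ROW_SIZE    = 5;         // 5 doubles per row

    // One contiguous allocation of MAX_PESSOAS * ROW_SIZE doubles (~1.2 GB).
    // Build with the x64 platform in Visual Studio so the address space is large enough.
    std::vector<double> pessoas(MAX_PESSOAS * ROW_SIZE);

    // Access row i, column j as pessoas[i * ROW_SIZE + j].
    pessoas[10 * ROW_SIZE + 3] = 42.0;
    std::cout << pessoas[10 * ROW_SIZE + 3] << std::endl;
}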

Related

Is the maximum size of an item in a type 19 file configurable?

A WRITEBLK command fails when the item reaches 2GB in size (item is truncated to 2147483647 bytes).
Using cat I was able to create an item larger than 2GB in the same directory, but opening it in UV gave a corrupt (negative) value for STATUS<4> (Number of bytes available to read).
uv 11.1.4
64bit Linux on a VM
64BIT_FILES = 1
You can make UniVerse files 32-bit or 64-bit regardless of the OS, so you can do a FILEINFO call to check whether the file itself is actually 64-bit (even if the account is 64-bit).
My guess is that there is a file-system limitation on the file size. The Rocket UniVerse documentation (page 927) says:
If the device runs out of disk space, WRITEBLK takes the ELSE clause and returns –4 to the STATUS function.
Generally only 32-bit systems would have a hard limit at 2 GB, but maybe there is some kind of 32-bit process running in your 64-bit virtual machine that is producing the same effect. See here for a few leads: https://unix.stackexchange.com/questions/274380/file-size-limit

Windows 10 memory fragmentation on long application run

We are facing a memory fragmentation issue in our 32-bit app (C++ and WPF based) when we run it for 100 hours as part of an automated test (AST). The application crashes after ~14 hours of the AST run.
We use the CRT heap with the LFH (Low-Fragmentation Heap) policy explicitly enabled in Main(). The problem occurs on Windows 10; there is no issue on Windows 8 with the same set of application binaries, where we completed the full 100-hour test run.
We also create a large-block heap in Main(); we use this heap for specific purposes when we need a large amount of memory, and we manage it ourselves in code. From the Virtual Memory Statistics logs we can see that the initial virtual memory allocation is 1.79 GB.
After 14 hours of the automated test run on Windows 10:
Combined Available = 1590176752( 1516.511 MB)
Combined Max Available = 3989504( 3.805 MB)
Combined Frag Percent = 99.75%
CRT:sum_alloc = 2737569144(98.50%, 2610.749 MB)
CRT:max_alloc = 4458496( 4.252 MB)
CRT:allocAverageSize = 9043
CRT:num_free_blocks = 37813
CRT:sum_free = 22620888( 0.81%, 21.573 MB)
CRT:max_free = 514104( 0.490 MB)
VM:sum_free = 1581957120(36.83%,1508.672 MB)
VM:max_free = 10321920( 9.844 MB)
On Windows 8 after 100 hours:
Combined Available = 1881204960( 1794.057 MB)
Combined Max Available = 1734127616( 1653.793 MB)
Combined Frag Percent = 7.82%
VM:sum_free = 1845817344(42.98%,1760.309 MB)
VM:max_free = 1734127616( 1653.793 MB)
We are using ADPlus, WinDbg, and DebugDiag (Debugging Tools for Windows) to collect memory dumps at 3-hour intervals.
Is there any setting or flag I need to enable, or anything I have to do in my code? We are building with VS2010.
The application runs on Windows 10 LTSB 64-bit (a specific Enterprise edition of Windows 10 aimed at stability and security).
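For reference, a rough sketch of the kind of setup described above: explicitly opting a heap into the LFH with HeapSetInformation and creating a separate private heap with HeapCreate. The sizes and names here are illustrative, not the application's actual values.

#include <windows.h>
#include <iostream>

int main()
{
    // Opt the process heap into the Low-Fragmentation Heap.
    // (On Vista and later the LFH is usually on by default, but it can be requested explicitly.)
    ULONG heapInfo = 2;  // 2 == enable LFH
    if (!HeapSetInformation(GetProcessHeap(), HeapCompatibilityInformation,
                            &heapInfo, sizeof(heapInfo)))
        std::cerr << "HeapSetInformation failed: " << GetLastError() << std::endl;

    // Separate growable private heap for large blocks.
    HANDLE largeBlockHeap = HeapCreate(0, 64 * 1024 * 1024 /*initial*/, 0 /*growable*/);
    if (largeBlockHeap)
    {
        void* block = HeapAlloc(largeBlockHeap, 0, 16 * 1024 * 1024);
        // ... use the block ...
        HeapFree(largeBlockHeap, 0, block);
        HeapDestroy(largeBlockHeap);
    }
}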

GlobalMemoryStatusEx() reports total virtual memory as 127 terabytes

Why does GlobalMemoryStatusEx() report such a huge total virtual memory? Does it take into account all the page files that could be created?
System details:
Windows 8.1, 64-bit process, x64 processor
#include <windows.h>
#include <iostream>

int main()
{
    MEMORYSTATUSEX mex = {};
    mex.dwLength = sizeof(mex);   // must be set before the call
    GlobalMemoryStatusEx(&mex);
    std::cout<<mex.ullTotalVirtual<<" "<<mex.ullAvailVirtual;
}
140737488224256 140737478111232
EDIT:
I got the same result on Windows 10. I am interested in knowing where this 127 TB figure comes from. Why doesn't the system take into account that I don't have 127 TB of space on my disk?
A 32-bit process (on the same x64 system) shows only 2 GB, which is the user-mode address limit for a 32-bit process. Why doesn't it take page files into account in the 32-bit case?
Yes. From MSDN:
You can use the GlobalMemoryStatusEx function to determine how much memory your application can allocate without severely impacting other applications.
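For illustration, a small sketch that prints the other MEMORYSTATUSEX fields side by side, which helps separate the address-space figure (ullTotalVirtual) from the commit limit (ullTotalPageFile) and physical RAM (ullTotalPhys):

#include <windows.h>
#include <iostream>

int main()
{
    MEMORYSTATUSEX mex = {};
    mex.dwLength = sizeof(mex);
    if (GlobalMemoryStatusEx(&mex))
    {
        std::cout << "Total virtual (address space):  " << mex.ullTotalVirtual  << '\n'
                  << "Total page file (commit limit): " << mex.ullTotalPageFile << '\n'
                  << "Total physical RAM:             " << mex.ullTotalPhys     << '\n';
    }
}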

RAM detection issue on 32-bit OS

I want to detect memory using the GetPhysicallyInstalledSystemMemory function. The problem is that it only reports RAM correctly on a 64-bit OS; on a 32-bit OS I get wrong values, for example on a virtual machine running Windows Vista SP2 x32.
Code:
ULONGLONG ramSize;   // filled in by the API, in kilobytes
BOOL result = GetPhysicallyInstalledSystemMemory(&ramSize);
if (result == TRUE) {
    QString ramMB = QString::number(ramSize / 1024.0);                  // KB -> MB
    QString ramGB = QString::number(ceil(ramSize / (1024.0 * 1024.0))); // KB -> GB, rounded up
    QMessageBox::information(this, "Test_MB", ramMB.append(" MB"));     // RAM in MB
    QMessageBox::information(this, "Test_GB", ramGB.append(" GB"));     // RAM in GB
}
Why doesn't it work on a 32-bit OS? Thanks.
I have installed Windows 7 x32 and tested it, and now the values are correct, so the bug is with VMware Workstation. I will report it to VMware later. Thanks all for the help.
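As a side note, GetPhysicallyInstalledSystemMemory derives its value from the SMBIOS firmware tables, which are sometimes malformed in virtual machines; a rough sketch of a fallback to GlobalMemoryStatusEx (the function name physicalMemoryKB is just a placeholder):

#include <windows.h>
#include <iostream>

// Returns physical memory in kilobytes; falls back to GlobalMemoryStatusEx
// if the SMBIOS-based API fails.
ULONGLONG physicalMemoryKB()
{
    ULONGLONG kb = 0;
    if (GetPhysicallyInstalledSystemMemory(&kb))
        return kb;                          // value from SMBIOS, already in KB

    MEMORYSTATUSEX mex = {};
    mex.dwLength = sizeof(mex);
    if (GlobalMemoryStatusEx(&mex))
        return mex.ullTotalPhys / 1024;     // OS view of physical RAM, bytes -> KB
    return 0;
}

int main()
{
    std::cout << physicalMemoryKB() / 1024 << " MB" << std::endl;
}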

pthread multithreading on Mac OS vs Windows multithreading

I've developed a multi-platform program (using the FLTK toolkit) and implemented multithreading to do intensive background tasks.
I have followed the FLTK tutorials/examples on multithreading, which use pthreads on Mac (i.e. pthread_create) and native Windows threading on Windows (i.e. _beginthread).
What I have noticed is that performance is much higher on Windows: the background threads execute 3 to 4 times faster.
Why might this be? Is it the threading libraries I'm using? Surely there shouldn't be such a difference? Or could it be the runtime libraries underneath it all?
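For reference, a minimal sketch of the two thread-creation calls in question (the worker function and its body are just placeholders, not my actual code):

#include <cstddef>   // for NULL

#ifdef _WIN32
#include <process.h>

void __cdecl worker(void*)
{
    // ... intensive background work ...
}

int main()
{
    _beginthread(worker, 0 /*default stack size*/, NULL);
    // A real program would signal/wait for completion before exiting.
}
#else
#include <pthread.h>

void* worker(void*)
{
    // ... intensive background work ...
    return NULL;
}

int main()
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    pthread_join(tid, NULL);   // wait for the background work to finish
}
#endif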
Here are my machine details
Mac:
Intel(R) Core(TM) i7-3820QM CPU @ 2.70GHz
16 GB DDR3 1600 MHz
Model MacBookPro9,1
OS: Mac OSX 10.8.5
Windows:
Intel(R) Core(TM) i7-3520M CPU @ 2.90GHz
16 GB DDR3 1600 MHz
Model: Dell Latitude E5530
OS: Windows 7 Service Pack 1
EDIT
To do a basic speed comparison, I compiled this on both machines and ran it from the command line:
#include <cmath>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <sstream>

int main(int argc, char **argv)
{
    // Print a "start" timestamp.
    time_t t = time(NULL);
    tm* tt=localtime(&t);
    std::stringstream s;
    s<< std::setfill('0')<<std::setw(2)<<tt->tm_mday<<"/"<<std::setw(2)<<tt->tm_mon+1<<"/"<< std::setw(4)<<tt->tm_year+1900<<" "<< std::setw(2)<<tt->tm_hour<<":"<<std::setw(2)<<tt->tm_min<<":"<<std::setw(2)<<tt->tm_sec;
    std::cout<<"1: "<<s.str()<<std::endl;

    // Busy loop: ~100 million sin/cos evaluations.
    double sum=0;
    for (int i=0;i<100000000;i++){
        double ii=i*0.123456789;
        sum=sum+sin(ii)*cos(ii);
    }

    // Print an "end" timestamp.
    t = time(NULL);
    tt=localtime(&t);
    s.str("");
    s<< std::setfill('0')<<std::setw(2)<<tt->tm_mday<<"/"<<std::setw(2)<<tt->tm_mon+1<<"/"<< std::setw(4)<<tt->tm_year+1900<<" "<< std::setw(2)<<tt->tm_hour<<":"<<std::setw(2)<<tt->tm_min<<":"<<std::setw(2)<<tt->tm_sec;
    std::cout<<"2: "<<s.str()<<std::endl;
}
Windows takes less than a second; the Mac takes 4 to 5 seconds. Any ideas?
On the Mac I'm compiling with g++; on Windows, with Visual Studio 2013.
SECOND EDIT
if I change the line
std::cout<<"2: "<<s.str()<<std::endl;
to
std::cout<<"2: "<<s.str()<<" "<<sum<<std::endl;
then all of a sudden Windows takes a little bit longer...
This makes me think that the whole thing might come down to compiler optimisation. So the question is: is g++ (4.2 is the version I have) worse at optimisation, or do I need to provide additional flags?
THIRD(!) AND FINAL EDIT
I can report that I achieve comparable performance by ensuring the g++ optimisation flag -O (e.g. -O2) is provided at compile time. One of those annoying things that happens so often:
A: I'm tearing my hair out over problem x.
B: Are you sure you're not doing y?
A: That works, why is this information not plastered all over the place and in every tutorial on problem x on the web?
B: Did you read the manual?
A: No, if I completely read the manual for every single bit of code/program I used I would never actually get round to doing anything...
Meh.