I am running Windows 10 64 bit. My compiler is Visual Studio 2015.
What I want is:
unsigned char prime[UINT_MAX];
(and larger arrays).
That example gives compiler error C2148 because the application is a "Win32 console application". Likewise I can't use new to create the array; same problem. I am building it as an "x64 Release", but I guess the WIN32 console part is winning!
I want to unleash the power of my 64-bit operating system and break free of this tiresome INT_MAX limitation on array indexes, i.e. proper 64-bit operation. My application is a simple C/C++ thing which neither needs nor wants anything other than a command line interface.
I did install the (free) Visual Studio 2017 application, but it didn't give me the simple console apps that I like (so I uninstalled it).
Is there some other application type I can build in Visual Studio that gives access to more than 4GB of memory? Is there some other (free) compiler I can use to get full 64-bit access under Windows?
Let me first answer the question properly so the information is readily available to others. All three actions were necessary. Any one or two alone would not work.
Change project type from “Win32 Console” to “C++/CLR console”
Change the array definition as kindly indicated by WhozCraig
Change the project properties, Linker | System | EnableLargeAddresses YES (/LARGEADDRESSAWARE)
Now let’s mention some of the comments:
“Compile the program for x64 architecture, not x32”
I explicitly stated that it was compiled as x64 release, and that the Win32 aspect was probably winning.
It won’t work if allocated on the stack.
It was allocated as a global variable (so not on the stack), and I also said I tried allocating it with new, which allocates on the heap.
How much memory does your machine have?
Really? My 8GB of RAM is a bit weak for the application, but it won’t cause a compiler error, and it is enough to run the program with 4GB allocated to it.
Possible duplicate of …
No, there are some very old questions which are not very relevant.
Memory mapped files (Thomas Matthews)
A very good idea. Thank you.
As for the 6 down votes on the question: seriously? Most commenters don’t even seem to have understood the problem, let alone the solution. Standard C arrays seem to be indexed by signed ints (32-bit) regardless of the /LARGEADDRESSAWARE switch and the x64 compilation.
Thanks again to WhozCraig and Thomas Matthews for helping me to solve the problem.
#include <vector>
#include <climits>   // for INT_MAX

typedef unsigned long long U64;
const U64 MAX_SIZE = 3 * ((U64)INT_MAX);
std::vector<unsigned char> prime(MAX_SIZE);
// The prime vector is then accessed in the usual way, prime[bigAddress]
I also turned off Unicode support in the project settings, as that might have made the chars 2 bytes long.
The program is now running on a Xeon workstation with 32GB ECC RAM.
6GB is allocated to the process according to the task manager.
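For completeness, here is a minimal, self-contained version of the code above showing an index beyond INT_MAX (the index value is made up for illustration; an x64 build and enough RAM or pagefile for the ~6GB vector are assumed):

#include <vector>
#include <climits>

typedef unsigned long long U64;

int main()
{
    const U64 MAX_SIZE = 3 * ((U64)INT_MAX);      // about 6GB of chars
    std::vector<unsigned char> prime(MAX_SIZE);   // throws std::bad_alloc on failure

    U64 bigAddress = (U64)INT_MAX + 12345ULL;     // well past the 32-bit limit
    prime[bigAddress] = 1;                        // indexed with a 64-bit value

    return 0;
}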
I'm assuming this is a Windows console program built in x64 mode with the linker System option Enable Large Addresses set to Yes (/LARGEADDRESSAWARE). Don't declare an array that large; instead allocate it using malloc(), and later use free() to deallocate it. Using the C++ new operator will result in a compiler error "array too large", but malloc() doesn't have this issue. I was able to allocate an 8GB array on a 16GB laptop using malloc() and use it without issue. You can use size_t, int64_t, or uint64_t as the index type.
I've tested this with VS2015 and VS2019.
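Here is a minimal sketch of that approach (an x64 build is assumed; the 8GB size and the touch loop are just for illustration):

#include <cstdlib>
#include <cstdint>
#include <cstdio>

int main()
{
    const size_t kSize = 8ULL * 1024 * 1024 * 1024;    // 8GB
    unsigned char* big = (unsigned char*)malloc(kSize);
    if (!big)
    {
        printf("allocation failed\n");                 // e.g. not enough commit, or a 32-bit build
        return 1;
    }

    for (uint64_t i = 0; i < kSize; i += 1024 * 1024)  // touch one byte per MB
        big[i] = 1;

    free(big);
    return 0;
}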
Related
I am trying to allocate 1 GiB of memory using malloc() on Windows and it fails. I know malloc is not guaranteed to succeed. What is the best way to allocate 1 GiB of memory?
If you are using a 32-bit (x86) application, you are unlikely to be able to allocate a 1 GB contiguous chunk of memory (and certainly can't allocate 2GB). As to why this happens, you should see the venerable presentation "Why Your Windows Game Won't Run In 2,147,352,576 Bytes" (Gamefest 2007) attached to this blog post.
You should build your application as an x64 native (x64) application instead.
You could enable /LARGEADDRESSAWARE and stick with a 32-bit application on Windows x64, but it has a number of quirks and may limit what kinds of 3rd party support libraries you can use. A better solution is to use x64 native if possible.
Use the /LARGEADDRESSAWARE flag to tell Windows that you're not doing funny things with addresses. This unlocks an extra 2GB of address space on Win64.
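To see the difference for yourself, here is a rough probe (illustrative only, not from any of the answers) that binary-searches for the largest single malloc() that succeeds in the current process; build it as x86, x86 with /LARGEADDRESSAWARE, and x64 to compare the results:

#include <cstdlib>
#include <cstdint>
#include <cstdio>

int main()
{
    size_t lo = 0;                     // known to succeed (trivially)
    size_t hi = SIZE_MAX;              // known to fail
    while (hi - lo > 1024 * 1024)      // stop at 1MB resolution
    {
        size_t mid = lo + (hi - lo) / 2;
        void* p = malloc(mid);
        if (p) { free(p); lo = mid; }
        else   { hi = mid; }
    }
    printf("largest single malloc: about %zu MB\n", lo / (1024 * 1024));
    return 0;
}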
I need to create a matrix whose size is 10000x100000. My RAM is 4GB. It works until the 25th iteration (debug build), but after the 25th iteration I get a "bad allocation" error, even though only 25% of RAM is used, which suggests the problem is not related to memory. So what can I do?
EDIT:
int **arr;
arr = new int*[10000];
for (int i = 0; i < 10000; i++)
    arr[i] = new int[100000];
My allocation is above.
If you're compiling for x64, you shouldn't have any problems.
If you're compiling for x86 (most likely), you can enable the /LARGEADDRESSAWARE linker flag if you're using Visual C++, or something similar for other compilers. For Visual C++, the option can also be found in the Linker -> System -> Enable Large Addresses property in the IDE.
This sets a flag in the resulting EXE file telling the OS that the code can handle addresses over 2 GB. When running such an executable on x64 Windows (your case), the OS gives it 4 GB of address space to play with, as opposed to just 2 GB normally.
I tested your code on my system, Windows 7 x64, 8 GB, compiled with Visual C++ Express 2013 (x86, of course) with the linker flag, and the code ran fine - allocated almost 4 GB with no error.
Anyway, the 25th iteration is far too quick for it to fail, regardless of where it runs and how it's compiled (it's roughly 10 MB), so there's something else going wrong in there.
By the way, the HEAP linker option doesn't help in this case, as it doesn't increase the maximum heap size, it just specifies how much address space to reserve initially and in what chunks to increase the amount of committed RAM. In short, it's mostly for optimization purposes.
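If you are unsure which way your EXE was actually linked, a small check like this (the helper name is mine) reads the large-address-aware bit from the running module's own PE header:

#include <windows.h>
#include <cstdio>

bool IsLargeAddressAware()
{
    const BYTE* base = (const BYTE*)GetModuleHandle(NULL);
    const IMAGE_DOS_HEADER* dos = (const IMAGE_DOS_HEADER*)base;
    const IMAGE_NT_HEADERS* nt = (const IMAGE_NT_HEADERS*)(base + dos->e_lfanew);
    return (nt->FileHeader.Characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE) != 0;
}

int main()
{
    printf("large address aware: %s\n", IsLargeAddressAware() ? "yes" : "no");
    return 0;
}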
A possible solution would be to use your hard drive.
just open a file and store the data you need.
then just copy the data you need to a buffer.
Even if you succeed in allocating this amount of data on the heap, you will load it with data you most likely won't be using most of the time. Eventually you might run out of space, which will lead to either decreased performance or unexpected behavior.
If you are worried about the performance hit of using the hard drive, then a procedural solution might fit your problem: if you can produce the data you need at any given moment instead of storing it, that would solve your problem as well.
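One way to implement the file-backed idea is a memory-mapped file, as also suggested earlier in the thread. A minimal sketch (the file name is made up; a 64-bit build is assumed so the whole ~4GB view fits in the address space):

#include <windows.h>
#include <cstdint>

int main()
{
    const uint64_t kBytes = 10000ULL * 100000ULL * sizeof(int);   // ~4GB, as in the question

    HANDLE file = CreateFileA("matrix.bin", GENERIC_READ | GENERIC_WRITE,
                              0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READWRITE,
                                        (DWORD)(kBytes >> 32), (DWORD)kBytes, NULL);
    if (!mapping) { CloseHandle(file); return 1; }

    int* data = (int*)MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, 0);
    if (!data) { CloseHandle(mapping); CloseHandle(file); return 1; }

    data[9999ULL * 100000ULL + 99999ULL] = 42;    // index as one flat array, paged in from disk

    UnmapViewOfFile(data);
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}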
If you are using VS, you'll probably want to try out the HEAP linker option and make sure you compile for an x64 target, because otherwise you'll run out of address space. The size of your physical memory should not be a limiting factor, as Windows can use the pagefile to provide additional memory.
However, from a performance point of view it is probably a horrible idea to just allocate a matrix of this size. Maybe you should consider using a sparse matrix, or (as suggested by LifePhilPsyPro) generate the data on demand.
For allocating extremely large buffers you are best off using the operating system services for mapping pages to the address space rather than new/malloc.
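A minimal sketch of that approach with VirtualAlloc, assuming an x64 build (the matrix size matches the question):

#include <windows.h>
#include <cstdint>

int main()
{
    const SIZE_T kBytes = 10000ULL * 100000ULL * sizeof(int);   // ~4GB

    // Ask the OS for the pages directly instead of going through new/malloc.
    int* matrix = (int*)VirtualAlloc(NULL, kBytes, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!matrix)
        return 1;   // out of address space or over the commit limit (RAM + pagefile)

    matrix[123ULL * 100000ULL + 456ULL] = 7;      // flat 64-bit indexing

    VirtualFree(matrix, 0, MEM_RELEASE);
    return 0;
}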
You are trying to allocate roughly 4GB (10000 × 100000 × 4-byte ints), which does not fit in the 2GB of address space a default 32-bit build gets.
I'm hesitant to ask this question because of the vagueness of the situation, but I'd like to understand how this is possible. I have a C++ application developed using Visual Studio 2008. When I compile the application on Windows 7 64-bit (or Vista 32-bit), the application runs fine. When I compile the application on 32-bit Windows XP SP3, I receive a buffer overrun warning and the process terminates. This is using the same version of the Visual Studio 2008 C++ compiler. How is it that I receive a buffer overrun on XP, but not on other Windows platforms?
Write code so you don't have buffer overruns and you won't have this problem on any platform. Namely, make sure you check the bounds for the buffer you are accessing to make sure you aren't trying to read/write outside of the proper bounds.
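As a tiny illustration (the function and names are made up):

#include <vector>
#include <stdexcept>

int readSample(const std::vector<int>& buffer, size_t i)
{
    if (i >= buffer.size())                          // explicit bounds check
        throw std::out_of_range("index past end of buffer");
    return buffer[i];                                // or simply: buffer.at(i)
}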
Luck, the fundamental indeterminacy of the Universe, or (more likely than either of those) an implementation detail that changed in msvcrt.dll between XP and 7.
Bottom line is you have a bug in your application, and you should fix it.
You probably have a buffer overrun in both cases; in the first it isn't detected and doesn't (apparently) do any harm, while in the second it is detected. (If it is in dynamically allocated memory, be aware that allocators often allocate more than what was asked for, so a plausible explanation is that in the first case the overrun stays within that extra zone and in the second it doesn't.)
Sizes of data types might change from one compiler to another (thanks @AndreyT). Using hardcoded numbers like 4 to represent the size of a data type in your code can introduce a bug in your application. You should use sizeof(int) instead, or whatever type you are interested in.
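A hypothetical example of the point (the struct is made up):

#include <cstring>
#include <cstddef>

struct Sample { int id; float value; };

void copySamples(Sample* dst, const Sample* src, size_t count)
{
    // Fragile: memcpy(dst, src, count * 8); breaks if the struct layout
    // or the platform changes the type sizes.
    memcpy(dst, src, count * sizeof(Sample));        // portable
}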
Windows 7 has a feature called the Fault Tolerant Heap which, as the name says, is tolerant of some faulty buffer accesses. Windows XP doesn't have this feature (Vista, I don't know). There is a video about it by Mark Russinovich on channel9.msdn.com or sysinternals.com (I forget exactly where).
I have been using Visual Studio 2005 under Windows XP Pro 64-bit for C and C++ projects for a while. One of the popular tricks I have been using from time to time in the debugger was to remember a numeric pointer value from the previous debugging run of the program (say 0x00000000FFAB8938), add it to watch window with a proper typecast (say, ((MyObject *) 0x00000000FFAB8938)->data_field) and then watch the memory occupied by the object during the next debugging run. In many cases this is quite a convenient and useful thing to do, since as long as the code remains unchanged, it is reasonable to expect that the allocated memory layout will remain unchanged as well. In short, it works.
However, relatively recently I started using the same version of Visual Studio on a laptop with Windows Vista (Home Premium) 64-bit. Strangely enough, it is much more difficult to use this trick in that setup. The actual memory address seems to change rather often from run to run for no apparent reason, i.e. even when the code of the program was not changed at all. It appears that the actual address is not changing entirely randomly, it just selects one value from a fixed more-or-less stable set of values, but in any case it makes it much more difficult to do this type of memory watching.
Does anyone know the reason of this behavior in Windows Vista? What is causing the change in memory layout? Is that some external intrusion into the process address space from other [system] processes? Or is it some quirk/feature of Heap API implementation under Vista? Is there any way to prevent this from happening?
Windows Vista implements address space layout randomization, heap randomization, and stack randomization. This is a security mechanism, trying to prevent buffer overflow attacks that rely on the knowledge of where each piece of code and data is in memory.
It's possible to turn off ASLR by setting the MoveImages registry value. I couldn't find a way to disable heap randomization, but some Microsoft guy recommends computing addresses relative to _crtheap. Even if the heap moves around, the relative address may remain stable.
Working on porting a 32-bit Windows C++ app to 64-bit. Unfortunately, the code uses frequent casting in both directions between DWORD and pointer values.
One of the ideas is to reserve the first 4GB of virtual process space as early as possible during process startup, so that all subsequent calls to reserve memory get virtual addresses greater than 4GB. This would cause an access violation error on any unsafe cast from pointer to DWORD and back to pointer, and would help catch errors early.
When I look at the memory map of a very simple one-line C++ program, there are many libraries loaded within the bottom 4GB. Is there a way to make sure that all libraries, etc. get loaded only above 4GB?
Thanks
Compile your project with /Wp64 switch (Detect 64-bit Portability Issues) and fix all warnings.
As a programmer, what do I need to worry about when moving to 64-bit windows?
You could insert calls to VirtualAlloc() as early as possible in your application, to allocate memory in the lower 4GB. If you use the MEM_RESERVE parameter, then only virtual memory space is allocated and so this will only use a very small amount of actual RAM.
However, this will only help you for memory allocated from the heap - any static data in your program will have already been allocated before WinMain(), and so you won't be able to change its location.
(As an aside, even if you could reserve memory before your main binary was loaded, I think the main binary needs to be loaded at a specific address - unless it is built as a position-independent executable.)
Bruce Dawson posted code for a technique to reserve the bottom 4 GB of VM:
https://randomascii.wordpress.com/2012/02/14/64-bit-made-easy/
It reserves most of the address space (not actual memory) using VirtualAlloc, then goes after the process heap with HeapAlloc, and finishes off the CRT heap with malloc. It is straightforward, fast, and works great. On my machine it does about 3.8 GB of virtual allocations and only 1 MB of actual allocations.
The first time I tried it, I immediately found a longstanding bug in the project I was working on. Highly recommended.
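For reference, a simplified sketch of that reservation idea (this is not Bruce Dawson's exact code; the chunk sizes are arbitrary and the HeapAlloc/malloc passes are only hinted at in the final comment):

#include <windows.h>
#include <cstdint>

// Reserve (not commit) as much address space below 4GB as possible, so that
// any pointer truncated to 32 bits refers to inaccessible memory and faults
// immediately. Call this as early as possible in the process.
static void ReserveBottomMemory()
{
#ifdef _WIN64
    const uintptr_t kLowLimit = 0x100000000ULL;     // the 4GB boundary
    SIZE_T blockSize = 256 * 1024 * 1024;           // start with 256MB chunks

    while (blockSize >= 64 * 1024)                  // down to the allocation granularity
    {
        void* p = VirtualAlloc(NULL, blockSize, MEM_RESERVE, PAGE_NOACCESS);
        if (!p)
        {
            blockSize /= 2;                         // no room at this size, try smaller
            continue;
        }
        if ((uintptr_t)p >= kLowLimit)
        {
            VirtualFree(p, 0, MEM_RELEASE);         // landed above 4GB: give it back
            blockSize /= 2;                         // and try to fill smaller low gaps
            continue;
        }
        // Below 4GB: keep the reservation (deliberately "leaked") and repeat.
    }

    // The full technique then does the same kind of loop with
    // HeapAlloc(GetProcessHeap(), ...) and finally with malloc(), so the
    // Win32 and CRT heaps also stop handing out low addresses.
#endif
}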
The best solution is to fix these casts ...
You may get away with truncating the pointer regardless (same as casting to a POINTER_32) because I believe Windows favours the lower 4GB for your application anyway. This is in no way guaranteed, though. You really are best off fixing these problems.
Search the code for "(DWORD)" and fix any you find. There is no better solution ...
What you are asking for is, essentially, to run 64-bit code in a 32-bit memory mode with AWE enabled (i.e. lose all the real advantages of 64-bit). I don't think Microsoft could be bothered providing this for so little gain ... and who can blame them?
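To make the "fix the casts" advice concrete, here is a small illustrative example of the kind of change involved (the names are made up):

#include <windows.h>

void example(void* p)
{
    // 32-bit-only habit: truncates the pointer on x64 builds.
    // DWORD id = (DWORD)p;

    // Portable: DWORD_PTR (or uintptr_t) is pointer-sized on both x86 and x64.
    DWORD_PTR id = (DWORD_PTR)p;

    void* back = (void*)id;      // round-trips safely on both platforms
    (void)back;
}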