I am running a heavy, memory-intensive job on Windows with 12 GB of RAM. By my computations, 4 GB of memory should be enough to run the program. The program uses dynamic memory allocation (I have two versions, in C and C++, using malloc/free and new/delete respectively) and I run it from Code::Blocks.
When I pull up Task Manager, I see that the program only uses about 2 GB of RAM even though a lot more is available, and the pagefile size is currently set to 30 GB. Is there any way I can get Code::Blocks to use more memory? I also tried Dev-C++ and I get the same bad_alloc error in the C++ code.
Any ideas? Thanks in advance.
Oh and I am using a 64-bit Windows 7.
Look at this page for the memory limits by architecture (x86, 64-bit) and Windows version. Some workarounds are mentioned:
https://learn.microsoft.com/en-us/windows/win32/memory/memory-limits-for-windows-releases#memory_limits
First you have to make sure you are building a 64-bit executable and not 32-bit.
If using g++, make sure you use option -m64.
As for the large address awareness mentioned on the MSDN page, 64-bit executables are linked with it by default.
Still, the Visual C++ linker has an option to explicitly ask for it: /LARGEADDRESSAWARE
Now if you don't use the Visual C++ linker, it appears you can always use this as an extra step if you want to activate large address awareness for your executable:
editbin /LARGEADDRESSAWARE your_executable
(editbin being an M$ Visual Studio tool)
Thanks for all the help so far. There was a simple workaround: I installed the 64-bit MinGW compiler, pointed Code::Blocks at that compiler, and everything worked like a charm. Yay.
I have built Clang with MinGW on Windows with the target triple x86_64-w64-windows-gnu. The executables clang.exe and clang++.exe work as expected if I build them in release mode (they compile programs without error), however when building in debug mode I cannot run them and get this error - "This app can't run on your PC". Other executables from the same build such as clang-check.exe do not display this error and run correctly.
It seems as though this could be an issue with the file size, as both clang.exe and clang++.exe are over 2 GB while the other executables are smaller, but I was under the impression that the file size limit on 64-bit Windows is 4 GB.
Has anyone else run into a similar issue? If the file size is the problem, is it possible to get LLVM to put the debug symbols in a separate file to reduce the size of the executable?
EDIT: I've tried to reduce the executable size by dumping the debug symbols to a separate file using the -gsplit-dwarf flag when building LLVM but it doesn't have any effect.
Yes, file size is a pretty good hint that you are hitting this problem. The IMAGE_OPTIONAL_HEADER64.SizeOfImage field is a DWORD, suggesting a 4 GB maximum size. But there is another limit that kicks in first: the OS loader maps the entire .exe or .dll file into memory with a memory-mapped file, and a view on an MMF can never be larger than 2 GB. This is a hard technical limitation, and it applies even to x64. More about this problem in this post.
Debug info is no doubt the reason why the image file blows up so badly. For comparison, the clang build that is included with VS2017 needs 27MB for the front end and 32MB for the x64 back end. Why -gsplit-dwarf cannot solve your problem is visible from this project page:
Fission is implemented in GCC 4.7, and requires support from recent versions of objcopy and the gold linker.
MinGW cannot provide you with the gold linker. They made no attempt at porting it since it can only generate ELF images. Cold hard facts, as long as you depend on MinGW then you're up a creek without a good paddle.
Something drastic is needed. I'll hesitantly mention Cygwin. Do consider another compiler, like using Clang to build Clang :) Or MSVC++. The community edition is a free download. Also be sure to take a look at the Clang port, they did a lot of work to make it ABI compatible.
I wrote some code in this environment:
a) my laptop, with an i7 processor;
b) the Visual Studio IDE, in C/C++.
Now I want to transfer the code to AWS, with a Xeon E5-2670.
1) Is it possible?
2) Must I change the configuration in Visual Studio, or can I take the code and run it directly on the Xeon processor?
3) Do you have some references I could follow?
Thanks for your help and recommendations,
Alvaro
It depends on how you have set up the compilation options. If you have not enabled any options that allow the compiler to use instructions not present on the target processor, the executable will run. You can use Dependency Walker to determine which DLLs your executable requires.
The default options in VS C++ projects produce executables that run on practically any modern x86 processor. The CPU of the machine you compile on doesn't matter by itself; only the compiler options do.
It should run directly, but it might not be as efficient as if it were tuned for the AWS system. For example, I wrote a program optimised for a 4-core, 8-thread computer, but when I ran it on my laptop with a 2-core, 4-thread processor it nearly crashed it. I can also guess that running the program on a 6-core, 12-thread processor would not achieve full efficiency.
If you're talking about the runtime environment (I just remembered that) there is a chance that Visual Studio provides non-standard libraries which you would need to download and/or compile before being able to run the program. E.g. I sent my program to my friend, who was missing a required DLL to run the program.
EDIT (I'm new here, so not enough rep to comment): usually I just search for the missing DLLs on dll-files.com. I'm not sure about Linux, though; it could be that you have to compile the libraries yourself, which I'm not that familiar with.
After trying this, execution results in two errors:
MSVCP144.dll missing
MSVCP100.dll missing
As the title states, I am running a 32-bit application under win 7 64-bit. The application is made in C++ in Embarcadero XE2. I need more than 2GB of memory.
Steps:
I enabled the 3GB switch and I rebooted the pc.
I tried adding -GF: LARGEADDRESSAWARE to Project Options / C++ Linker / Output Flags, but then linking failed with something like "Failed command GF:".
I then found on a forum that you should do it manually in the .bpr file, under the FLAGS section. I added the flag there and the project linked. However, the available-memory indicator in my app tells me I'm still getting under 2 GB.
Questions:
how to properly make this work ?
how to tell if I got 3GB of memory or not?
The /3GB switch is for 32 bit systems only. Your system is a 64 bit system. That means that a 32 bit executable with the LARGEADDRESSAWARE PE flag will have a 4GB address space. Don't attempt to use the /3GB boot option.
You can check whether or not your executable has the LARGEADDRESSAWARE PE flag set by using any PE viewing tool. With the MS toolchain you would use dumpbin. The Embarcadero toolchain equivalent is tdump. In addition, there are countless GUI PE viewers. Find a tool that works and make sure that you have properly set this PE flag.
I have a program that I made in Visual Studio 2010. I built the program in release mode and Win32 solution platform. I then made an executable by following this guide step by step. I then copied the setup.exe that was created onto a new 32-bit computer. I then get this error message when I try to run the setup on the new computer:
Why is the setup not working? I built the program for Win32, so it should work on a 32-bit computer? Am I missing something? Any help would be appreciated.
There are three major reasons for this to happen. Windows executables contain three fields that must be matched by the OS: a minimum OS version number, the correct CPU type, and the correct CPU bitness. You're probably not running into a Windows version issue (I think that error message is different), and you're quite unlikely to have the wrong CPU type (ARM builds are pretty hard to make by accident), so the most likely scenario is that you actually made a 64-bit build.
"Win32" is a rather deceiving term here; it doesn't always exclude 64-bit builds. E.g. the macro WIN32 is defined for 64-bit builds as well.
@Mailerdaimon and @MSalters, you were correct. Even though I was building the program as Win32, the target machine was x64. After changing it to x86 the program ran. Thanks for everyone's help!
The solution to this was found in the question "Executable runs faster on Wine than Windows -- why?": glibc's floor() is probably implemented in terms of system libraries.
I have a very small C++ program (~100 lines) for a physics simulation. I have compiled it with gcc 4.6.1 on both Ubuntu Oneiric and Windows XP on the same computer. I used precisely the same command line options (same makefile).
Strangely, on Ubuntu, the program finishes much faster than on Windows (~7.5 s vs 13.5 s). At this point I thought it's a compiler difference (despite using the same version).
But even more strangely, if I run the Windows executable under wine, it's still faster than on Windows (I get 11 s "real" and 7.7 s "user" time -- and this includes wine startup.)
I'm confused. Surely if the same code is run on the same CPU, there shouldn't be a difference in the timing.
What can be the reason for this? What could I be doing wrong?
The program does minimal I/O (outputs a single line), and only uses a fixed-length vector from the STL (i.e. no system libraries should be involved). On Ubuntu I used the default gcc and on Windows the Nuwen distribution. I verified that the CPU usage is close to zero when doing the benchmarking (I closed most programs). On Linux I used time for timing. On Windows I used timethis.exe.
UPDATE
I did some more precise timings, comparing the running time for different inputs (run-time must be proportional to the input) of the gcc and msvc-compiled programs on Windows XP, Wine and Linux. All numbers are in seconds and are the minimum of at least 3 runs.
On Windows I used timethis.exe (wall time); on Linux and Wine I used time (CPU time), since timethis.exe is broken on Wine. I made sure no other programs were using the CPU, and I disabled the virus scanner.
The command line options to gcc were -march=pentium-m -Wall -O3 -fno-exceptions -fno-rtti (i.e. exceptions were disabled).
What we see from this data:
the difference is not due to process startup time, as run-times are proportional to the input
The difference between running on Wine and Windows exists only for the gcc-compiled program, not the msvc-compiled one, so it can't be caused by other programs hogging the CPU on Windows or by timethis.exe being broken.
You'd be surprised what system libraries are involved. Just do ldd on your app, and see which are used (ok, not that much, but certainly glibc).
In order to completely trust your findings about execution speed, you would need to run your app a couple of times sequentially and take the mean execution time. It might be that the OS loader is just slower (although 4s is a long loading time).
Other very possible reasons are:
Different malloc implementation
Exception handling, if used heavily, can cause a slowdown (GCC on Windows via MinGW may not have the most efficient exception-handling implementation)
OS-dependent initialization: stuff that needs to be done at program startup on Windows, but not on Linux.
Most of these are easily benchmarkable ;-)
An update to your update: the only thing you can now do is profile. Stop guessing, and let a profiler tell you where time is being spent. Use gprof and the Visual Studio built-in profiler and compare time spent in different functions.
Do the benchmarking in code. Also try compiling with Visual Studio. On Windows, applications like Yahoo Messenger that install hooks can very easily slow down your application's loading time.
On Windows you have QueryPerformanceCounter.
On Linux you have clock_gettime.
Apparently the difference is system related.
You might use strace to understand what system calls are done, eg
strace -o /tmp/yourprog.tr yourprog
and then look into /tmp/yourprog.tr
(If an equivalent of strace existed on Windows, try to use it)
Perhaps your program is allocating memory (using the mmap system call), and perhaps the memory-related system calls are faster on Linux (or even on Wine) than on Windows? Or some other syscalls provide faster functionality on Linux than on Windows.
NB: I know nothing about Windows, since I've been using Unix systems since 1986 and Linux since 1993.