I have a program that I made in Visual Studio 2010. I built the program in Release mode with the Win32 solution platform. I then made an executable by following this guide step by step and copied the setup.exe that was created onto a new 32-bit computer. I then get this error message when I try to run the setup on the new computer:
Why is the setup not working? I built the program as Win32, so it should work on a 32-bit computer, shouldn't it? Am I missing something? Any help would be appreciated.
There are three major reasons for this to happen. Windows executables contain three fields that must be matched by the OS: a minimal OS version number, the correct CPU type, and the correct CPU bitness. Now, you're probably not running into a Windows version issue (I think that error message is different), and you're quite unlikely to have the wrong CPU type (ARM builds are pretty hard to make by accident), so the most likely scenario is that you actually made a 64-bit build.
"Win32" is a rather deceiving term here, it doesn't always exclude 64 bits builds. E.g. the macro WIN32 is defined for 64 bits builds as well.
@Mailerdaimon and @MSalters, you were correct. Even though I was building the program in Win32, the target machine was x64. After changing it to x86, the program ran. Thanks for everyone's help!
Related
I am compiling a DLL in Visual Studio 2017 (C++).
SDK: 10.0.17134.0
This project uses a template that automatically creates two DLLs, one for 32-bit and one for 64-bit. I have two machines that run the same software but have different hardware and OSes.
The first machine has an Intel i7 and runs Windows Embedded Standard 64-bit.
The second machine has an Intel Atom and runs Windows Embedded Standard 32-bit.
On the 64-bit machine, both DLLs work (32-bit and 64-bit); on the Atom, the 32-bit one does not work, though. I do not get any error messages; the only thing the software tells me is that the DLL is not compatible, without any additional clues. The software is the same on both systems, so I assume the problem is related to the OS or the processor.
The software I am developing for is a vision system by Omron, so it is nothing that is available online or that can be shared here.
What could be the cause for this? If you need additional information just ask.
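One way to rule out a plain bitness mix-up is to inspect the PE header of each generated DLL and check its Machine field. Here is a minimal sketch (the 0x3C offset and the Machine values come from the PE file format; it assumes a little-endian host and keeps error handling brief):

#include <cstdio>
#include <cstdint>

int main(int argc, char* argv[]) {
    if (argc < 2) { std::fprintf(stderr, "usage: %s <pe-file>\n", argv[0]); return 1; }
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) { std::perror("fopen"); return 1; }

    // The DOS header stores the offset of the PE header at byte 0x3C.
    std::fseek(f, 0x3C, SEEK_SET);
    uint32_t peOffset = 0;
    std::fread(&peOffset, sizeof(peOffset), 1, f);

    // The 2-byte Machine field follows the 4-byte "PE\0\0" signature.
    std::fseek(f, peOffset + 4, SEEK_SET);
    uint16_t machine = 0;
    std::fread(&machine, sizeof(machine), 1, f);
    std::fclose(f);

    switch (machine) {
        case 0x014C: std::puts("IMAGE_FILE_MACHINE_I386 (32-bit x86)"); break;
        case 0x8664: std::puts("IMAGE_FILE_MACHINE_AMD64 (64-bit x64)"); break;
        default:     std::printf("Machine = 0x%04X\n", machine);
    }
    return 0;
}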
Generally, in order for an executable file (either an .EXE program or a .DLL support module) built with the MSVC C/C++ compiler in Visual Studio 2015 or later to work on a target PC, you need to have the latest VC++ Redistributable run-time libraries installed on that PC.
See also this discussion on Stack Overflow.
I've run into an issue in VS2013 this morning. It's one of those things that I'm sure to encounter again in 1-3 years (either myself or a customer) and say, "Gee, I've seen that before, but I don't know what causes it."
We build for Windows Embedded platforms and are migrating customers from VS6.0 to VS2013. Since the target hardware is still Windows Embedded, we use the v120_xp platform toolset. I've built hundreds of projects and never encountered the error I saw this morning. This is a customer project with lots of DLLs; I successfully rebuilt all of them with v120_xp and turned on some optimization for performance (not sure this is relevant): SSE2, /Qpar, and optimization for fast execution. All the DLLs build fine. The customer application builds.
But at runtime I get this warning when ntdll.dll is loaded:
ntdll.dll
%hs is either not designed to run on Windows or it contains an error.
Try installing the program again using the original installation media
or contact your system administrator or the software vendor for support.
Error status 0x.
From this Stack Overflow link it appears the runtime is choosing the x64 version of ntdll.dll and possibly finding the 32-bit version. I'm pretty sure it's something like that, but I'm trying to understand.
1) What could trigger this behavior? I haven't seen this before with other v120_xp projects in VS2013.
2) How can I tell which ntdll.dll it is seeing?
I tried the x64 and x86 versions of Dependency Walker, and they seem to default to different behavior.
I also played with setting compatibility mode on the EXE to force Windows 7 to run it in WinXP SP3 mode. That didn't work. We usually run our EXEs inside a GUI, which might be forcing that...
I'm researching this today and will post my findings if I ever get to the bottom of what triggered this.
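In the meantime, one thing I can check programmatically is which ntdll.dll the process actually loaded, via GetModuleHandle and GetModuleFileName. A minimal sketch:

#include <windows.h>
#include <cstdio>

// Print the full path of the ntdll.dll loaded into this process.
// ntdll.dll is always loaded, so GetModuleHandle should not fail here.
int main() {
    HMODULE h = GetModuleHandleW(L"ntdll.dll");
    if (h == NULL) {
        std::puts("ntdll.dll not found in this process?");
        return 1;
    }
    wchar_t path[MAX_PATH];
    if (GetModuleFileNameW(h, path, MAX_PATH) > 0)
        std::wprintf(L"Loaded from: %ls\n", path);
    return 0;
}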
Two things that are unique about this project.
1) I turned on lots of CPU performance optimizations in the C++ modules.
2) There are a lot of customer DLL modules (also built optimized).
So one working theory I have is that the optimization or one of the DLLs is tricking the runtime into thinking this is an x64 module.
I will also be checking whether an errant project is missing the v120_xp toolset or something like that.
I wrote some code in this environment:
a) my laptop, with an i7 processor;
b) the Visual Studio IDE, in C/C++.
Now I want to transfer the code to AWS, to a machine with a Xeon E5-2670.
1) Is it possible?
2) Must I change the configuration in Visual Studio, or can I take the code and run it directly on the Xeon processor?
3) Do you have some references I could follow?
Thanks for your help and recommendations.
Alvaro
It depends on how you have set up the compilation options. If you have not enabled any options that allow the compiler to use instructions not present on the target processor, the executable will run. You can use Dependency Walker to determine what DLLs your executable requires.
The default options in VS C++ projects produce executables that run on practically any modern x86 processor. By itself, your machine's CPU doesn't matter when compiling; only the compiler options do.
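If you do turn on options such as /arch:AVX, the program can check at runtime whether the host CPU actually supports that extension before relying on it. A sketch using the MSVC __cpuid intrinsic (the AVX flag is ECX bit 28 of CPUID leaf 1; note that a complete check would also verify OS support via the OSXSAVE bit and _xgetbv):

#include <intrin.h>
#include <cstdio>

int main() {
    int regs[4];      // EAX, EBX, ECX, EDX returned by CPUID
    __cpuid(regs, 1); // leaf 1: processor info and feature bits
    bool hasAVX = (regs[2] & (1 << 28)) != 0; // ECX bit 28 = AVX
    // Note: a complete check also tests the OSXSAVE bit and _xgetbv(0).
    std::printf("AVX supported by CPU: %s\n", hasAVX ? "yes" : "no");
    return 0;
}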
It should run directly, but it might not be as efficient as if it were tuned for the AWS system. E.g. I coded a program optimised for a 4-core/8-thread computer, but when I ran it on my laptop with a 2-core/4-thread processor, it nearly crashed the machine. I can also guess that running the program on a 6-core/12-thread processor would not achieve full efficiency.
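As a minimal sketch of how to avoid hard-coding a thread count for one machine (plain C++11, nothing else assumed):

#include <cstdio>
#include <thread>

int main() {
    // hardware_concurrency() may return 0 if the count is unknown.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0)
        n = 2; // conservative fallback
    std::printf("Sizing the thread pool for %u hardware threads.\n", n);
    return 0;
}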
If you're talking about the runtime environment (I just remembered that), there is a chance that Visual Studio uses non-standard libraries which you would need to download and/or install before being able to run the program. E.g. I sent my program to my friend, who was missing a DLL required to run it.
EDIT (I'm new here, so not enough rep to comment): Usually I just search for the missing DLLs on dll-files.com. I'm not sure about Linux, though; it could be that you have to compile the libraries yourself, which I'm not that familiar with.
After trying it, the execution fails with 2 errors:
MSVCP144.dll missing
MSVCP100.dll missing
Google gave me a clue that it is possible to compile code into a single executable that will run as 32-bit on a 32-bit OS and as 64-bit on a 64-bit OS. Is it really possible for the executable to determine its bitness at runtime?
In my case the target systems would be Windows 7 Professional x64 and Windows XP SP3 x86.
So what I read in various articles (I think there were even answers to similar topics on SO) is that one has to go to the solution's Configuration Manager (right-click the solution -> Configuration Manager) and set the Platform to Any CPU.
Now, all of these articles described the setup for older MSVS or MSVC++ versions, but I think there are no major changes to the Configuration Manager in the 2013 RC version (which I have just recently installed).
In the Active Solution dropdown I do not have the option Any CPU, so I followed this recipe that I found on SO. Following that little guide fails in my case; I still do not have the option to select Any CPU when I get to step 5:
5) Make sure "Any CPU" is selected under New Platform. If there was no Any CPU solution platform in step 3, then make sure the "Create new solutions platform" checkbox is checked. Then click OK.
The dropdown items that are available to me are x64 and ARM (Win32 too, but that is already added by default); I cannot choose Any CPU.
Adding the x64 target platform and compiling the executable works fine; the program runs as 64-bit on Windows 7 x64, but of course it cannot be run on the 32-bit Windows XP machine.
How do I set the target platform to Any CPU in Microsoft Visual Studio Professional 2013 RC?
No, it absolutely is not. You need to build separate executables.
The "Any CPU" dropdown is there to allow you to set compiler settings for more than one platform (e.g. a _DEBUG processor for x64 and Win32) You cannot actually build to that target.
Any CPU refers to .NET programs, not C++. C++ must compile down to native code, either x86 or x64. You could build the 64-bit program and bundle it into your 32-bit program, extracting it at runtime. This technique is used by Process Explorer.
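As a sketch of the decision point such a 32-bit launcher would need: it can detect whether it is running under WOW64 (i.e. on 64-bit Windows) with the IsWow64Process API, available since Windows XP SP2. The extraction and launching of the bundled x64 build is omitted here:

#include <windows.h>
#include <cstdio>

int main() {
    BOOL isWow64 = FALSE;
    // A 32-bit process runs under WOW64 exactly when the OS is 64-bit.
    if (IsWow64Process(GetCurrentProcess(), &isWow64) && isWow64)
        std::puts("64-bit Windows: extract and launch the bundled x64 build.");
    else
        std::puts("32-bit Windows: keep running the 32-bit build.");
    return 0;
}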
Perhaps not a true answer, but for MOST things, running 32-bit code on a 64-bit system works absolutely fine. Of course, it may run a little slower, but compared to having to deal with (and thoroughly test) two different binaries, unless there are significant performance benefits in 64-bit mode (or your application uses more than around 2 GB of memory space), I'd be very tempted to just use 32-bit mode.
MacOS supports something called "fat binaries", where the program is compiled twice and both versions are packed into a single file. The OS decides which version to actually launch.
Windows doesn't have anything like it.
You can compile the .NET wrapper as Any CPU and build the native parts of the program as yourprogram32.dll and yourprogram64.dll, for example:
[DllImport("yourprogram32.dll", CallingConvention = CallingConvention.Cdecl, EntryPoint = "open")]
public static extern void open32();
[DllImport("yourprogram64.dll", CallingConvention = CallingConvention.Cdecl, EntryPoint = "open")]
public static extern void open64();
static bool Is64()
{
//.Net 4.0
//Environment.Is64BitOperatingSystem
//.Net 2.0
return IntPtr.Size == 8;
}
static void open()
{
if (Is64())
open64();
else
open32();
}
I am running a heavy, memory-intensive job on Windows with 12 GB of RAM. By my computations, 4 GB of memory should be enough to run the program. I am running the program I've written with dynamic memory allocation (I have two versions of the program, in C and C++, using malloc/free and new/delete respectively) using Code::Blocks.
When I pull up Task Manager, I see that the program only seems to use about 2 GB of RAM even when a lot more is available, and the pagefile size is currently set to 30 GB. Is there any way I can get Code::Blocks to use more memory? I also tried Dev-C++, and I get the same bad_alloc error in the C++ code.
Any ideas? Thanks in advance.
Oh, and I am using 64-bit Windows 7.
Look at this page for memory limits based on architecture (x86, 64-bit) and Windows version; some workarounds are mentioned:
https://learn.microsoft.com/en-us/windows/win32/memory/memory-limits-for-windows-releases#memory_limits
First, you have to make sure you are building a 64-bit executable and not a 32-bit one.
If you are using g++, make sure you use the -m64 option.
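A quick way to verify which kind of binary you actually built is to print the pointer size: 4 bytes for a 32-bit build, 8 bytes for a 64-bit build.

#include <cstdio>

int main() {
    // sizeof(void*) is 4 in a 32-bit build and 8 in a 64-bit build.
    std::printf("Pointer size: %zu bytes\n", sizeof(void*));
    return 0;
}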
As for the large address awareness mentioned in the MSDN page, it should be enabled by default for 64-bit executables.
Still, the Visual C++ linker has an option to explicitly ask for it: /LARGEADDRESSAWARE
Now, if you don't use the Visual C++ linker, you can still activate large address awareness for your executable as an extra post-build step:
editbin /LARGEADDRESSAWARE your_executable
(editbin being a Microsoft Visual Studio tool)
Thanks for all the help so far. There was a simple workaround: I installed the MinGW 64-bit compiler, pointed Code::Blocks to that compiler, and everything worked like a charm. Yay!