I have a 32-bit application which I plan to run on 64-bit Windows 7.
At this stage I cannot convert the entire application to 64-bit due to dependencies on third-party functionality.
However, I would like to have access to the xmm8-xmm15 registers in my SSE optimizations, and also to use the additional registers that 64-bit CPUs provide in general, while executing my application.
Is this possible to achieve with some compiler flag?
It seems to me that the best way would be to divide your program into more than one executable. The EXE compiled as 64-bit can communicate with another, 32-bit EXE that uses the 32-bit third-party DLL you need. You will have some overhead in the communication and will have to implement starting/stopping of the dependent process, but you will have a clear program architecture.
If you develop a native C++ application, you can implement the second EXE, for example, as a COM out-of-process object (LocalServer or even LocalService). You can also consider implementing the COM server in C# (see here). Sometimes that approach can simplify the implementation, and you can use the advantages of both .NET and native programming.
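If COM feels too heavy, a minimal sketch of the two-EXE split over a Windows named pipe could look like the following. The pipe name and do_thirdparty_work() are hypothetical stand-ins for whatever the 32-bit DLL actually exports:

```cpp
// Hypothetical 32-bit helper EXE: wraps the third-party DLL and answers
// fixed-size requests from the 64-bit main EXE over a named pipe.
#include <windows.h>
#include <cstdint>

extern "C" int32_t do_thirdparty_work(int32_t input);  // from the 32-bit DLL (assumed)

int main() {
    HANDLE pipe = CreateNamedPipeA(
        "\\\\.\\pipe\\legacy_bridge",                    // arbitrary pipe name
        PIPE_ACCESS_DUPLEX,
        PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
        1, sizeof(int32_t), sizeof(int32_t), 0, nullptr);
    if (pipe == INVALID_HANDLE_VALUE) return 1;
    if (!ConnectNamedPipe(pipe, nullptr)) return 1;      // wait for the 64-bit side

    int32_t request = 0;
    DWORD n = 0;
    while (ReadFile(pipe, &request, sizeof(request), &n, nullptr) && n == sizeof(request)) {
        int32_t reply = do_thirdparty_work(request);     // the call stays in 32-bit land
        WriteFile(pipe, &reply, sizeof(reply), &n, nullptr);
    }
    CloseHandle(pipe);
    return 0;
}
```

The 64-bit EXE would open the same pipe with CreateFile and exchange the same fixed-size messages.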
Yes, you can. See this example for how.
As Christopher pointed out, this technique will not always work, but for this express purpose, to jump into a few lines of handcrafted 64-bit assembly, make use of the extra registers, store the result somewhere, and then jump back to 32-bit mode, it should work out just fine.
I am wondering whether there exists a way to use a 64-bit library in a 32-bit executable. I can design the 64-bit library accordingly, and hence expose specific APIs for that purpose with explicitly 32-bit-aligned variables in them, if that will help.
Basically we have a large piece of software which is compiled 32-bit, and for the time being we do not want to recompile it as 64-bit, since that would cause a lot of headaches.
Now, within that existing code, we'd like to access a new API that we are building. That new API hides a 64-bit implementation, but the API itself is 32-bit on the "outside".
We've been toying with ideas such as remote execution or passing the data via sockets, but that would simply be too slow for our purposes, and using shared memory as IPC between the 32- and 64-bit sides would be quite challenging to develop, hence not worth the development effort.
Any ideas?
I plan on using Necessitas to port Qt code to the Android platform. At first sight I noticed that, despite being native code, everything still passes through the Dalvik VM.
My question is: does this introduce overhead? Java is less efficient than native C++ to begin with, and Dalvik is rather immature compared to vanilla Java, which is the cause of my concern.
In the Android documentation you can find the following tip:
Native code isn't necessarily more efficient than Java. For one thing, there's a cost associated with the Java-native transition, and the JIT can't optimize across these boundaries. If you're allocating native resources (memory on the native heap, file descriptors, or whatever), it can be significantly more difficult to arrange timely collection of these resources. You also need to compile your code for each architecture you wish to run on (rather than rely on it having a JIT). You may even have to compile multiple versions for what you consider the same architecture: native code compiled for the ARM processor in the G1 can't take full advantage of the ARM in the Nexus One, and code compiled for the ARM in the Nexus One won't run on the ARM in the G1.
Of course, Dalvik code is slower than pure C/C++ optimized for the platform. But communication between native code and Java code happens through JNI, which is the main source of the overhead.
So the answer to your question is yes, JNI introduces additional overhead. But if you want to port existing C/C++ code, the NDK is the best choice in your case.
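To make that boundary concrete, here is a minimal JNI sketch (the class and method names are made up); each such call crosses the Java/native transition, and array access may copy data, which is exactly where the overhead comes from:

```cpp
// Minimal JNI sketch. Every call into this function crosses the Java<->native
// boundary; GetIntArrayElements may copy the whole array in and out.
#include <jni.h>

extern "C" JNIEXPORT jint JNICALL
Java_com_example_NativeMath_sum(JNIEnv* env, jclass, jintArray arr) {
    jsize len = env->GetArrayLength(arr);
    jint* data = env->GetIntArrayElements(arr, nullptr);  // possible copy in
    jint total = 0;
    for (jsize i = 0; i < len; ++i)
        total += data[i];
    env->ReleaseIntArrayElements(arr, data, JNI_ABORT);   // release, no copy-back
    return total;
}
```

The lesson is to do a meaningful amount of work per crossing, rather than chatter back and forth across JNI in a tight loop.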
Is there any way to use an old 32-bit static library (*.a) on a 64-bit system?
There is no chance of obtaining the source code of this old library to compile it again.
I also do not want to use -m32 in gcc, because the program uses many 64-bit libraries.
Thanks.
That depends entirely on the platform on which you're running. On OS X on PowerPC, for example, it would "Just Work".
On x86 platforms, you can't link a 32-bit library into a 64-bit executable. If you really need to use that library, you'll need to start a separate 32-bit process to handle your calls to the library, and use some form of IPC to pass those calls between your 64-bit application and that helper process. Be forewarned: this is a lot of hassle. Make sure that you really need that library before starting down this road.
On the x86/x86_64 platform, you can't do this. I mean, maybe you could if you wrote custom assembly-language wrappers for each and every 32-bit function you wanted to call. But that's the only way it's even possible, and even if you were willing to do that work, I'm not sure it would work.
The reason for this is that the calling conventions are completely different. The x86_64 platform has many more registers to play with, and the 64-bit ABI (Application Binary Interface: basically how parameters are passed, how a stack frame is set up, and things like that) standards for all of the OSes make use of these extra registers for parameter passing and the like.
This makes the ABI of 32-bit and 64-bit x86/x86_64 systems completely incompatible. You'd have to write a translation layer. And it's possible the 32-bit ABI allows 32-bit code to fiddle around with CPU stuff that 64-bit code is not allowed to fiddle with, and that would make your job even harder since you'd be required to restore the possibly modified state before returning to the 64-bit code.
And that's not even talking about this issue of pointers. How do you pass a pointer to a data structure that's sitting at a 64-bit address to 32-bit code?
Simple answer: You can't.
You need to use -m32 in order to load a 32-bit library.
Probably your best approach is to create a server wrapping the library. Then a 64-bit application can use IPC (various methods, e.g. sockets, fifos) in order to communicate to and from the process hosting the library.
On Windows this would be called out-of-process COM. I don't know that there's a similar framework on unix, but the same approach will work.
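As a rough sketch of that approach on unix, the helper below would be compiled with -m32 and linked against the old 32-bit .a, while the 64-bit application connects over loopback; legacy_compute() and the port number are hypothetical:

```cpp
// Hypothetical 32-bit helper process hosting the legacy library.
// Build: g++ -m32 helper.cpp libold.a -o helper32   (assumed library name)
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

extern "C" int32_t legacy_compute(int32_t x);    // exported by the old .a (assumed)

int main() {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr = {};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(5555);                 // arbitrary local port
    bind(srv, (sockaddr*)&addr, sizeof(addr));
    listen(srv, 1);

    int cli = accept(srv, nullptr, nullptr);
    int32_t request;
    while (read(cli, &request, sizeof(request)) == (ssize_t)sizeof(request)) {
        int32_t reply = legacy_compute(request); // the call stays in 32-bit code
        write(cli, &reply, sizeof(reply));       // result goes back over IPC
    }
    close(cli);
    close(srv);
    return 0;
}
```

Note the fixed-width int32_t in the wire protocol: since long and pointers differ in size between the two processes, only explicitly sized types should cross the boundary.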
Is there a way to compile a C/C++ source file to output a .exe file that can be run on other processors on different computers?
I am asking this for the Windows platform.
I know it can be done with Java or C#, but they use a virtual machine.
PS: For those who said that it can only be done with virtual machines, or that the source code must be compiled on every machine: are all viruses written in Java or C#, so that you need a VM to be infected? Or do you need to compile the worm's source code on your machine to be infected? (I am not trying to make a virus, but it is a good example :) )
Different computers use different instruction sets, OS system calls, etc., which is why machine code is platform specific. This is why various technologies like byte code, virtual machines, etc., have been developed to allow portable executables. However, generally C/C++ compiles directly to platform-specific machine code.
So a Windows exe simply won't run on another platform without some kind of emulation layer.
However, you can make your C/C++ source code portable. This means all you need to do to make your application run on another platform is to compile it for that platform.
Yes, you can, but it's not necessarily a good idea.
Apple introduced the idea of a fat binary when they were migrating from the Motorola 68000 to the PowerPC chips back in the early 90s (I'm not saying they invented it, that's just the earliest incarnation I know of). That link also describes the FatELF Linux universal binaries but, given how little we hear about them, they don't seem to have taken off.
This was a single file which basically contained both the 68000 and the PowerPC executable bundled together, and it required some smarts from the operating system so it could load and execute the relevant one.
You could, if you were so inclined, write a compiler which produced a fat binary that would run on a great many platforms but:
it would be hideously large; and
it would almost certainly require special loaders on each target system.
Since gcc has such a huge amount of support for different platforms and cross-compiling, it would be where I would concentrate the effort, were I mad enough to try :-)
The short answer is: you can't. The longer answer is: write in portable (standard) C/C++ and compile it on the platforms you need.
You can, however, do it in a different language. If you need something to run on multiple platforms, I suggest you investigate Java. It is a similar language to C/C++, and you can "compile" (sort of) programs to run on pretty much any computer.
Do not confuse processor platforms with OS platforms.
For different OS platforms, machine binaries are altogether different; and across different processor platforms it is not even possible to build a one-to-one instruction mapper, because the whole instruction set architecture may be different: different instruction groups may have totally different instruction formats, and some instructions may simply be missing on the target platform.
Only an emulator or virtual machine can do this.
Actually, some operating systems support this; it is usually called a "fat binary".
In particular, Mac OS uses (used) it to support PowerPC and x86 in one binary.
On MS Windows however, this is not possible as the OS does not support it.
Windows can run 32-bit executables in 64-bit mode with no problem, so your exe will be portable if you compile it in 32-bit mode; otherwise you must release two versions of your executable. If you use Java or C# (or any bytecode-compiled language), the JIT/interpreter can optimize your code for the current OS's mode, so it's fully portable. But with C++, since it produces native code, I'm afraid this can't be done except by shipping two versions of your binary.
The trick to doing this is to create a binary whose machine instructions will be emulated in a virtual machine on the operating systems and processors you want to support.
The most widespread such virtual machines are variants of the Java Virtual Machine, so my suggestion would be to look at a compiler which compiles C code to Java byte code.
Also, Windows once upon a time treated x86 as a virtual machine on other (Alpha) architectures.
To summarize the other answers, if you want to create a single executable file that can be loaded and run on multiple platforms, you basically have two options:
Create a "fat binary", which contains the machine code for multiple platforms. This is not normally supported by most development tools and may require special loaders on the target platform;
Compile to a byte code for the JVM or for .Net. I've heard of one C compiler that generates Java byte code (can't remember the name offhand), but never used it, nor do I have any idea what the quality of the implementation would be.
Normally, the procedure for supporting multiple platforms in C is to generate different executables for each target, either by using a cross compiler or running a compiler on each platform. That requires you to put some thought into how you write and organize your source code so that the platform-specific bits can be easily swapped out without affecting the overall program logic, for varying degrees of "easily".
The short answer is you can't. The long answer is that there are several options:
Fat binary. The downside is that this requires OS support. The only user-level OS I know of that supports it is OS X, for their PowerPC-to-Intel migration.
On-the-fly cross-translation, as used by Transmeta and Apple. Again, no general solution provider that I know of.
A C/C++ interpreter. There is at least one I am aware of: Ch. It runs on Windows, Linux, and OS X. Note that Ch is not fully C++ compatible.
This question is like asking "Is there a way to travel from Canada to any other city in the world?"
And the answer is: "Yes, there is."
To compile C/C++ source code to an executable file on the Windows platform without any virtual machine, you can use the legacy Windows API or MFC (especially MFC in a static library instead of a DLL). This executable will run on approximately all PCs that have Windows, because Windows runs on only 3 platforms (x86, x64, and IA64; except Windows 8 and 8.1, which also support ARM). Of course, you should compile your source to x86 code to run on 32-bit and x86-64 platforms, and Itaniums can run your exe in emulation. But for the other processors that run Windows as their OS (like the ARMs in mobile devices), you should compile for Windows Phone or Windows CE.
You can write a mid-library, such as:
[ Library ]
[ mid-library ]
[linux part] [windows part]
Then you can use the library, and your app will be portable.
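A minimal sketch of that mid-library idea, assuming a made-up portable_sleep_ms() as the library API (nothing here is a real library):

```cpp
// mid_library.h -- the portable interface the application sees
void portable_sleep_ms(unsigned ms);

// mid_library.cpp -- the platform-specific part, chosen at compile time
#ifdef _WIN32
  #include <windows.h>
  void portable_sleep_ms(unsigned ms) { Sleep(ms); }           // Windows part
#else
  #include <unistd.h>
  void portable_sleep_ms(unsigned ms) { usleep(ms * 1000u); }  // Linux part
#endif
```

The application code stays identical on every platform; only the mid-library is rebuilt per target.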
There are a ton of drivers and famous applications that are not available in 64-bit. Adobe, for instance, does not provide a 64-bit Flash player plugin for Internet Explorer. Because of that, even though I am running 64-bit Vista, I have to run 32-bit IE. Microsoft Office and Visual Studio also don't ship in 64-bit, AFAIK.
Now personally, I haven't had many problems building my applications in 64-bit. I just have to remember a few rules of thumb, e.g. always use SIZE_T instead of UINT32 for string lengths, etc.
So my question is, what is preventing people from building for 64-bit?
If you are starting from scratch, 64-bit programming is not that hard. However, all the programs you mention are not new.
It's a whole lot easier to build a 64-bit application from scratch than to port one from an existing code base. There are many gotchas when porting, especially when you get into applications where some level of optimization has been done. Programmers use lots of little assumptions to gain speed, and these are not always easy to port quickly to 64-bit. A few examples I've had to deal with (the first two are sketched in code after this list):
Proper alignment of elements within a struct: as data sizes change, assumptions that certain fields in a struct will be aligned on an optimal memory boundary may fail.
The length of long integers changes, so if you pass values over a socket to another program that may not be 64-bit, you need to refactor your code.
Pointer lengths change, and so hard-to-decipher code written by a guru who has left the company becomes a little trickier to debug.
Underlying libraries will also need to have 64-bit support to properly link. This is a large part of the problem of porting code if you rely on any libraries that are not open source.
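Here is a small illustration of the first two gotchas, assuming an LP64 64-bit target such as Linux (on 64-bit Windows, LLP64, long stays 4 bytes, so the numbers differ again):

```cpp
// Struct layout and the size of long both change between ILP32 and LP64.
#include <cstdio>

struct Packet {
    char tag;    // on LP64 the long below is 8-byte aligned, adding padding,
    long value;  // so sizeof(Packet) is 8 on ILP32 but 16 on LP64: sending
};               // this struct raw over a socket breaks between the two

int main() {
    std::printf("sizeof(long)=%zu sizeof(void*)=%zu sizeof(Packet)=%zu\n",
                sizeof(long), sizeof(void*), sizeof(Packet));
    return 0;
}
```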
In addition to the things in #jvasak's post, the major thing that can cause bugs:
pointers are larger than ints - a huge amount of code makes the assumption that the sizes are the same.
Remember that Windows will not even allow an application (whether 32-bit or 64-bit) to handle pointers with an address above 0x7FFFFFFF (2 GB or above) unless it has been specially marked as "LARGE_ADDRESS_AWARE", because so many applications will treat the pointer as a negative value at some point and fall over.
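A tiny illustration of that pointer/int assumption; the cast chain makes the truncation explicit rather than accidental:

```cpp
// Storing a pointer in an int "works" on 32-bit but truncates on 64-bit, and
// above 0x7FFFFFFF the int value also goes negative, which is exactly why
// un-aware applications fall over with high addresses.
#include <cstdint>
#include <cstdio>

int main() {
    int x = 42;
    int* p = &x;
    int bad = (int)(intptr_t)p;     // classic bug: drops the upper 32 bits
    intptr_t good = (intptr_t)p;    // intptr_t is guaranteed to fit a pointer
    std::printf("truncated=0x%x full=0x%llx\n",
                (unsigned)bad, (unsigned long long)good);
    return 0;
}
```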
The biggest issue that I've run into porting our C/C++ code to 64-bit is support from third-party libraries. E.g., there are currently only 32-bit versions of the Lotus Notes API and also MAPI, so you can't even link against them.
Also, since you can't load a 32-bit DLL into your 64-bit process, you get burnt again when trying to load things dynamically. We ran into this problem again when trying to support Microsoft Access under 64-bit. From Wikipedia:
The Jet Database Engine will remain 32-bit for the foreseeable future. Microsoft has no plans to natively support Jet under 64-bit versions of Windows.
Just a guess, but I would think a large part of it is support: if Adobe compiles the 64-bit version, they have to support it. Even though it may be a simple compile switch, they'd still have to run through a lot of testing, etc., followed by training their support staff to respond correctly. When they do run into issues, fixing them results either in a new version of the 32-bit binary or in a branch in the code, etc. So while it seems simple, for a large application it can still end up costing a lot.
Another reason that a lot of companies have not gone to the effort of creating 64-bit versions is simply that they don't need to.
Windows has WoW64 (Windows on Windows 64-bit), and Linux can have the 32-bit libraries available alongside the 64-bit ones. Both of these allow us to run 32-bit applications in 64-bit environments.
As long as the software is able to run in this way, there is not a major incentive to convert to 64 bit.
Exceptions to this are things such as device drivers, as they are tied in more deeply with the operating system and cannot run in the 32-bit layer that x86-64/AMD64-based 64-bit operating systems offer (IA64 is unable to do this, from what I understand).
I agree with you on Flash player, though; I am very disappointed that Adobe has not updated this product. As you have pointed out, it does not work properly in 64-bit, requiring you to run the 32-bit version of Internet Explorer.
I think it is a strategic mistake on Adobe's part. Having to run the 32 bit browser for flash player is an inconvenience for users, and many will not understand this solution. This could lead to developers being apprehensive about using flash. The most important thing for a web site is to make sure everyone can view it, solutions that alienate users are typically not popular ones. Flash's popularity was fed by its own popularity, the more sites that used it, the more users had it on their systems, the more users that had it on their systems, the more sites were willing to use it.
The retail market pushes these things forward: when general consumers go to buy a new computer, they aren't going to know they don't need a 64-bit OS; they are going to get it either because they hear it is the latest and greatest thing, the future of computing, or just because they don't know the difference.
Vista has been out for about 2 years now, and Windows XP 64-bit was out before that. In my mind, that is too long for a major technology such as Flash to go without an upgrade if they want to hold on to their market. It may have to do with Adobe taking over Macromedia, and it could be a sign that Adobe does not feel Flash is part of their future. I find that hard to believe, as I think Flash and Dreamweaver were the top parts of what they got out of Macromedia; but then why have they not updated it yet?
It is not as simple as just flipping a switch on your compiler. At least, not if you want to do it right. The most obvious example is that you need to declare all your pointers using 64-bit datatypes. If you have any code which makes assumptions about the size of those pointers (e.g. a datatype which allocates 4 bytes of memory per pointer), you'll need to change it. All this needs to have been done in any libraries you use, too. Further, if you miss just a few, then you'll end up with pointers being down-cast and landing at the wrong location. Pointers are not the only sticky point, but they are certainly the most obvious.
Primarily a support and QA issue. The engineering work to build for 64-bit is fairly trivial for most code, but the testing effort, and the support cost, don't scale down the same way.
On the testing side, you'll still have to run all the same tests, even though you "know" they should pass.
For a lot of applications, converting to a 64-bit memory model doesn't actually give any benefit (since they never need more than a few GB of RAM), and can actually make things slower, due to the larger pointer size (which makes every pointer field twice as large).
Add to that the lack of demand (due to the chicken/egg problem), and you can see why it wouldn't be worth it for most developers.
Their Linux/Flash blog goes some way toward explaining why there isn't a 64-bit Flash Player as yet. Some of it is Linux-specific, some is not.