I am trying to find out what the difference is between the moc.exe (Qt meta-object compiler) in the 32-bit and the 64-bit subfolders of Qt5.
Does it make any difference whether my application, targeting a 64-bit architecture, is built (and processed) by the 32-bit or the 64-bit version of moc.exe?
I couldn't find any info on this. If anybody has a clue or an idea what the difference is (besides being compiled for the corresponding architecture and having a different file size, of course), or whether this makes a difference at all (as it only generates .cpp files), I'd be very interested to know.
Thanks in advance
Samir
Does it make any difference whether my application, targeting a 64-bit architecture, is built (and processed) by the 32-bit or the 64-bit version of moc.exe?
Yes. Such mixing-and-matching is not supported. It might work, but it might break since nobody tests it to ensure that it works. Qt has an extensive test suite that runs in the continuous integration process. Something like this being untested is a big hint that you're on your own if you depend on it. Don't complain if you run into strange runtime bugs. Don't do it.
what the difference is
Anything and everything. All that Qt guarantees as a contract with you is that the moc output from a previous binary-compatible Qt version retains forward binary compatibility. E.g. if you have moc output from Qt 5.7, and you build a shared binary of your application, then replace Qt with binary-compatible 5.8, the old moc output is valid and Qt 5.8 knows how to use it safely. That's all.
Since, obviously, 32- and 64-bit Qt versions are not binary-compatible, you should not expect the moc output to be either. If it happens to be, it's a coincidence and nothing in the design guarantees it. It can break at any time.
You shouldn't be facing this problem since qmake/qbs/cmake will use the correct moc for you. It seems like perhaps you're trying to preprocess the sources using moc so that further use of moc won't be necessary while building the project. This strategy will not work. You need to learn how to leverage the build tools to build your project using the code generators it needs.
moc is not the only build tool provided by Qt you might end up using in a Qt-based project. Furthermore, projects of any significant size should be using many other code generators as well to make you more productive. Avoiding code generation as a policy is counter-productive. As in: it will cost you more.
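As a concrete sketch of "let the build tools do it": with CMake you never invoke moc by hand; you let the build system run the moc that belongs to whichever Qt installation it finds. A minimal CMakeLists.txt fragment (the target and file names are placeholders, not from the question):

```cmake
cmake_minimum_required(VERSION 3.5)
project(myapp)

# Let CMake run moc automatically, using the moc binary that belongs to
# the Qt installation located by find_package -- so 32/64-bit always match.
set(CMAKE_AUTOMOC ON)

find_package(Qt5 COMPONENTS Widgets REQUIRED)

add_executable(myapp main.cpp mainwindow.cpp mainwindow.h)
target_link_libraries(myapp Qt5::Widgets)
```

qmake and qbs do the equivalent automatically for any header containing Q_OBJECT.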
Related
I need to build a Qt app for ARM. So many pages like this one show that I have to build Qt from source using a lengthy process. Why is this necessary? Can't I just change the target platform inside Qt Creator, which I find under Tools > Options > Build & Run > Compilers?
I am a bit surprised at this, because Eclipse CDT does not require such a process. It seems to suggest that I must have a different Qt installation for each specific platform. This seems like a bad design. Could someone enlighten me on this?
For the sake of the explanation the difference should be pointed out between:
the LIBRARY, i.e. the code provided by Qt
and
the CODE that uses the library, i.e. the code written by you.
In order to spare the time of compiling the LIBRARY over and over again, as it does not change so frequently (amongst other reasons), it is compiled once and then the compiled binaries are linked into the CODE.
Tools > Options > Build & Run > Compilers allows you to add tools to compile the CODE for the target platform, provided you have the binaries compiled by someone, e.g. Qt, for the same platform.
If no one has done that and (most importantly) made the binaries available to you, then you have to do it yourself, meaning that you have to build the LIBRARY from its sources.
To see if there are official precompiled binaries for your target platform, please check the offline installers page.
I am a C++ developer and I am interested in the Cocos2d-x framework. I know that you can write C++ code using the framework, compile it for different platforms, and that's it: you have your 2D game on Windows, Android, iOS. This is amazing, but I don't understand how it is done and, consequently, I worry that something I have done for one platform will not work on another. To go into detail, let me spell out my concerns. In order to do that, let's clarify what compiling and running code actually mean.
What does it mean to compile C++ code for a platform (a platform being OS + CPU architecture)? It means that the C++ source code is mapped to instructions understood by a concrete CPU architecture, and the final set of instructions is packaged into an executable file whose format is understood by a concrete OS, so that an OS (or OSes) which knows how to handle that executable format can run it. We should also not forget that among the instructions the executable contains there can be system calls, which are likewise specific to the OS.
What does it mean to run the executable? It means that the OS knows the format of the particular executable file. When you give the run command, the OS loads it into virtual memory and starts executing its set of CPU instructions step by step. (Very rough, but in general it is like that, I guess.)
Now, returning to Cocos2d-x: how is it possible to compile C++ code so that it can be recognized and loaded by different OSes and run on different CPUs? What mechanisms do we use to get the appropriate .apk, .ipa or .exe files? Is there a trap we can fall into when using system calls or processor-specific calls? In general, how are all these problems solved? Please explain the process, for example, for Android, and it would be great for iOS too. :)
Cocos2d-x has 95% of its code shared across all target OSes and platforms, and 5% written for each concrete platform. For example, there are some Java sources for Android, and some Obj-C files for iOS. There is also some C++ code that differs per platform; #define (conditional compilation) is used to separate this code. An example of such code is file handling, which is written in C++ but differs from platform to platform.
Generating the appropriate output file is the responsibility of the compiler and SDK used for the target platform. For example, Xcode with the clang compiler will generate the iOS build, while the Android NDK with gcc inside will build the .apk.
I have to write a Qt-based application which will use the CTK library and some widgets from Slicer, all compiled in Debug mode with VS2008; it also needs Qt 4.8.4.
Question: is it possible to develop and debug my application on another machine with Qt 4.8.4 and VS2010 installed, without any problems?
It depends on what the interfaces of the libraries are. In particular, Microsoft states that Visual Studio breaks binary compatibility of the C++ standard library between versions, for debugging and optimization purposes.
If the interfaces are pure Qt, you might get along (I would check with the Qt people), but beware: if this fails, you are going to have a miserable time debugging. Binary incompatibilities are among the harder things to figure out, as the view the debugger gives you of an object does not necessarily represent what the code is using it as.
I'd recommend against this, and suggest that you install the same version of the compiler (and compile with the same flags).
How can I run a program which has already been built and compiled in the Qt IDE, so that I can take that program and run it on any computer I want without recompiling it on that computer? I am a beginner, so please bear with me when answering this question. :)
Thanks
There are a few parts to your problem.
1) You need to compile it for each architecture you want it to be used on.
2) Each architecture will have a set of Qt dynamic libraries associated with it too that need to be available.
3) Some architectures have an easy-to-deploy mechanism. E.g., on a Mac you can run "macdeployqt" to get the libraries into the application directory. For Nokia phones (Symbian, Harmattan (N9), etc.) Qt Creator has a deploy step that will build a package for the phone and even include an icon.
4) For systems without such a feature, like Linux and Windows, you'll either need to distribute the binary and require the user to have Qt available, or package up a directory/zip containing the needed Qt libraries and distribute that.
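For steps 3 and 4, Qt ships deployment helpers on macOS and (since Qt 5) on Windows too; a sketch of their use, with placeholder paths and application names:

```shell
# macOS: copies the Qt frameworks and plugins into the .app bundle
macdeployqt myapp.app

# Windows: collects the Qt DLLs and plugins next to the executable
# (run from a Qt command prompt so the tool is on PATH)
windeployqt release/myapp.exe
```

On Linux there is no official equivalent; shipping the Qt shared libraries alongside the binary with a launcher script that sets LD_LIBRARY_PATH is a common approach.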
It doesn't launch because it cannot find its dependencies. As you are on Windows, these libraries can be placed in the same directory as your application. To find which library is missing, use Dependency Walker.
I am pretty sure these libraries are not found:
The Qt dynamic libraries (can be found in the Qt bin directory; take the .dll files)
The C runtime dynamic libraries used for compilation. If you are on Creator and used the default settings, it will be mingw-xxx (can be found somewhere in the Qt installation directory; I don't know exactly where)
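If you have the MSVC toolchain installed, you can also list an executable's load-time DLL dependencies from a Visual Studio command prompt, without a GUI tool (the executable name is a placeholder):

```shell
# Lists the DLLs the executable imports at load time;
# any name not resolvable from PATH or the app directory is your missing dependency.
dumpbin /DEPENDENTS myapp.exe
```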
Every architecture has its own set of CPU instructions.
It's like hearing a language you don't understand, like when I speak Arabic to someone who doesn't know the language.
The compiler converts your code into instructions understood only by the architecture your CPU belongs to. That's why Python and most high-level languages use an interpreter instead of a compiler.
There are, however, cross compilers, like MinGW, that support cross-compiling to Windows (.exe files).
Finally, Qt simply needs some libraries to be present in the working directory of your project.
I've got a C++ project where we have loads and loads of dependencies. The project should work on Linux and Windows, so we've ported it to CMake. Most dependencies are now included right into the source tree and build alongside the project, so there are no problems with those.
However, we have one binary which depends on Fortran code etc. and is really complicated to build. For Linux, it's also not available as a package, but only as precompiled binaries or as full source (which needs a BLAS library installed and several other dependencies). For Windows, the same library is available as a binary; building it for Windows seems even more complicated.
The question is, how do you handle such dependencies? Just check in the binaries for the supported platforms, and require the user to set up his build environment otherwise (that is, manually point to the binary location), or would you really try to get them compiled along (even if it requires installing like 10 libraries -- BLAS libraries are the biggest pain here), or is there some other recommended way to handle that?
If the binary is independent of the rest of your build process, you should definitely check it in. But since you cannot include every version of the binary (meaning one for every platform and set of compile flags a user might use), building from source seems mandatory.
I have done something similar. I checked in the source code archives of the libraries/binaries I needed. Then I wrote makefiles/scripts to build them according to the targeted platform/flags into a specific location (not a standard OS location) and made my main build process point to that location. I did that to be able to control the exact versions and options of the libraries/binaries I needed. It's quite hard work to make things work across different platforms, but it's worth the time!
Oh, and of course it's easier if you use cross-platform build tools :)
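A hedged sketch of that approach using CMake's ExternalProject module (the dependency name, archive path, and autotools-style build commands are placeholders; your library's build steps will differ):

```cmake
include(ExternalProject)

# Build the checked-in source archive of the dependency into a
# project-local prefix, so the main build never touches system paths.
ExternalProject_Add(thirdparty_dep
  URL               ${CMAKE_SOURCE_DIR}/thirdparty/dep-src.tar.gz
  PREFIX            ${CMAKE_BINARY_DIR}/thirdparty
  CONFIGURE_COMMAND <SOURCE_DIR>/configure --prefix=<INSTALL_DIR>
  BUILD_COMMAND     make
  INSTALL_COMMAND   make install
)
```

The main targets then link against the libraries under the install prefix, so the whole build works the same on every platform where the dependency's own build system runs.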
One question for you: do the users need to modify this binary, or are they just happy it's there so they can use/access it? If they don't need to modify it, check in the binaries.
I would agree, check in the binaries for each platform if they are not going to be modified very often. Not only will this reduce build times, but it will also reduce frustration from unnecessary compilations.