I've downloaded the HD Photo Device Porting Kit 1.0 and successfully compiled and run it on an x86 PC.
I want to port the image viewer program to an ARM-based Windows Mobile smartphone, but some of the ARM code is missing.
First, there is no ARM equivalent of the "/image/x86/x86.h" header file. But the file is very simple, so I copied it, renamed it to "arm.h", and successfully compiled and linked the source code.
At runtime, however, a DWORD alignment exception occurs. It seems that for an ARM build ARMOPT_BITIO should be defined so that reads and writes are properly aligned. But with ARMOPT_BITIO defined, some I/O functions are missing, e.g. peekBits, getBits, flushToByte, flushBits.
I copied the x86 versions of these functions (peekBit16, flushBit16, etc.), but no luck: it does not work (I get a stack overflow error).
I can't debug the complex HD Photo source files myself. Please let me know where I can find the missing ARM code.
Any help would be much appreciated. Thanks!
Based on my experience porting some Microsoft code to ARM Linux, I do not think there is an easy way around it unless someone has already ported it. You'll have to dive into this sort of low-level debugging.
The bugs I encountered were mainly related to unaligned access and missing platform API calls. Incorrect preprocessor checks also resulted in the code thinking it was running on a big-endian platform.
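For the unaligned-access class of bugs in particular, the usual fix is to stop dereferencing misaligned pointers directly and to assemble the value through memcpy instead. A minimal sketch (not taken from the HD Photo sources) of what that typically looks like:

```cpp
#include <cstdio>
#include <cstdint>
#include <cstring>

// Risky on ARM: if p is not 4-byte aligned, this dereference can raise the
// kind of alignment (data abort) exception described above, instead of just
// being slow as it would be on x86.
inline std::uint32_t load_u32_unsafe(const unsigned char* p) {
    return *reinterpret_cast<const std::uint32_t*>(p);
}

// Safe everywhere: memcpy has no alignment requirement, and compilers lower
// it to an efficient load on targets that allow unaligned access.
inline std::uint32_t load_u32(const unsigned char* p) {
    std::uint32_t v;
    std::memcpy(&v, p, sizeof v);
    return v;
}

int main() {
    unsigned char buf[8] = {0x00, 0x44, 0x33, 0x22, 0x11, 0x00, 0x00, 0x00};
    // buf + 1 is deliberately misaligned; only the memcpy version is safe here.
    std::printf("%08lx\n", static_cast<unsigned long>(load_u32(buf + 1)));
}
```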
The method I found useful for debugging in such a scenario is to build the code both for the target platform and for a platform where it's known to work, then debug/trace the two builds in parallel over a number of use cases. This will catch the most severe bugs.
Related
Suppose we take a compiled language, for example C++, and an example framework, say Qt. Qt's source code is publicly available, and users also have the option of downloading the binary files and using its API. My question is this: when they compiled their code, it was compiled for their specific hardware, operating system, and so on. I understand that much software requires recompilation for different operating systems (including 32- vs 64-bit) and offers multiple downloads on its website, but why doesn't this go even further, making binaries hardware specific as well and the redistribution of compiled executables extremely frustrating to produce?
Code gets compiled for a target base CPU (e.g. 32-bit x86, x86_64, or ARM), but not necessarily a specific processor like the Core i9-10900K. By default, the compiler typically generates code that runs on the widest range of processors, and Intel and AMD guarantee forward compatibility for running that code on newer processors. Compilers often offer switches for optimizing for newer processors with new instruction sets, but you rarely use them, since not all your customers have that configuration. Or perhaps you build your code twice (once for older processors, and an optimized build for newer processors); a run-time dispatch alternative is sketched below.
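Here is a rough, hedged sketch of that run-time dispatch idea using the GCC/Clang builtin __builtin_cpu_supports (x86 only); the process_* functions are placeholders, and in a real project the optimized one would live in a translation unit compiled with the extra instruction-set flags:

```cpp
#include <cstdio>

// Placeholder implementations; in a real build the AVX2 one would be compiled
// with the newer instruction set enabled (e.g. -mavx2 on GCC/Clang).
void process_baseline() { std::puts("baseline x86-64 path"); }
void process_avx2()     { std::puts("AVX2-optimized path"); }

int main() {
    // Queries the CPU the program is actually running on, so one binary can
    // serve both old and new processors.
    if (__builtin_cpu_supports("avx2"))
        process_avx2();
    else
        process_baseline();
}
```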
There's also a concept called cross-compiling. That's where the compiler generates code for a completely different processor than the one it runs on. Such is the case when you build your iOS app on a Mac: the compiler itself is an x86_64 program, but it generates ARM instructions to run on the iPhone.
Code gets compiled and linked against a certain set of OS APIs and external runtime libraries (including the C/C++ runtime). If you want your code to run on Windows 7 or Mac OS X Mavericks, you wouldn't statically link to an API that only exists on Windows 10 or macOS Big Sur. The code would compile, but it wouldn't run on the older operating systems. Instead, you'd do a workaround or conditionally load the API if it is available. Microsoft and Apple provide forward compatibility by keeping those same runtime library APIs available in later OS releases.
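On Windows, "conditionally load the API if it is available" usually means resolving the function at run time with GetProcAddress instead of importing it, so the executable still loads on older OS versions. A hedged sketch of that pattern (SetThreadDescription is just a convenient example of an API that only exists on newer Windows releases):

```cpp
#include <windows.h>
#include <cstdio>

int main() {
    // SetThreadDescription only exists on Windows 10 (1607) and later, so we
    // look it up at run time rather than linking against it; on an older OS
    // the lookup fails and we simply skip the call.
    using SetThreadDescriptionFn = HRESULT (WINAPI*)(HANDLE, PCWSTR);

    HMODULE kernel32 = GetModuleHandleW(L"kernel32.dll");
    auto setDesc = kernel32
        ? reinterpret_cast<SetThreadDescriptionFn>(
              GetProcAddress(kernel32, "SetThreadDescription"))
        : nullptr;

    if (setDesc) {
        setDesc(GetCurrentThread(), L"worker");
        std::puts("SetThreadDescription available; thread named");
    } else {
        std::puts("SetThreadDescription not available; continuing without it");
    }
}
```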
Additionally, Windows supports running 32-bit processes on a 64-bit OS and CPU, and Macs can even emulate x86_64 on their new ARM-based devices coming out later this year. But I digress.
As for Qt, they actually offer several pre-built configurations for their reference binary downloads, because, at least on Windows, the MSVCRT (the C runtime APIs from Visual Studio) is closely tied to the specific compiler version of Visual Studio. So they offer various downloads to match the configuration you want to build your code for (32-bit, 64-bit, VS2017, VS2019, etc.). So when you put together a complete application with 3rd-party dependencies, these build, linkage, and CPU/OS configurations have to be accounted for.
This is a follow-on to "how-to-build-and-use-google-tensorflow-c-api": can anyone explain how to build a TensorFlow C++ program on an ARM processor? I'm thinking specifically of Nvidia's Jetson family of GPU devices. Nvidia has lots and lots of documentation for these, but it all seems to be for Python (like this), or for toy examples, and there is nothing for anyone who wants to write a C++ program using the full TensorFlow API (if one even exists) for their own machine learning models. I'd like to be able to build programs like this one, which does deep learning inference, exactly what the Jetson is supposedly made for.
I've found Web sites that offer links to installers too, but they all seem to be for the x86 architecture instead of ARM.
I have the same question about Bazel. I gather from all the unsatisfactory documentation I've been looking at that Bazel is mandatory for anyone who wants to build TensorFlow programs using a GPU, but all of the installation instructions I can find are either incomplete or for a different architecture such as x86 (for example, https://www.osetc.com/en/how-to-install-bazel-on-ubuntu-14-04-16-04-18-04-linux.html).
I'll add that any link or GitHub repository that dumps a load of code in my lap without making clear the prerequisites (since my little Jetson may not have the stuff installed that you assume) or the commands needed to actually build it (especially if it includes a project file for a compiler I've never heard of) isn't much help.
I am required to write a C++ application to run on an embedded Linux setup (a DMP Vortex86DX processor). The vendor provides a minimal Linux installation image that can be installed on the board and contains the appropriate hardware drivers. My question is motivated by the answer to my previous question about writing Linux software on one kernel to run on a different kernel. I don't really know where to start when it comes to writing the software with regard to ensuring compatibility.
My instinctive approach would be to install the same version of g++ on the embedded device and on my desktop development machine, write the application on the dev machine, copy it to the board, and compile it there. This seems like madness, though, and I find it hard to believe that this is how embedded software is developed. With regard to the answer to my previous question, is there a way I can simply build on my desktop but use the version of glibc that exists on the embedded device, and if so, how can I enforce linkage against a specific version? Or is it possible to build everything statically so that the application doesn't link to anything dynamically (I doubt this is possible)?
I am a total novice at embedded development and foresee months of frustration unless I can get hold of some good advice or resources. Any pointers or suggestions about where to start will be very gratefully received, no matter how simple or trivial they seem; I really am starting at the very bottom with regard to embedded work.
OK, given that the Vortex86SX/DX/MX claims to be x86 compatible, a small set of compiler switches should enable you to compile code for your target machine: -m32 to ensure 32-bit code, and no -march switch targeting a specific CPU.
Then you'll need to link your code. As long as you don't use anything fancy, only simple, established glibc functions, I'd expect the ABI to be the same on your development machine and the embedded system. In other words, you compile against your host libraries, copy the binary to the embedded system, and it should simply run using the libraries available there.
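Before committing to that approach, it may be worth trying it with a trivial probe first. A minimal sketch, assuming an x86-compatible target and glibc on both sides (gnu_get_libc_version is glibc-specific):

```cpp
// Build on the desktop with something like:
//   g++ -m32 -O2 -o probe probe.cpp
// then copy "probe" to the board and run it there. If it starts and prints,
// the basic toolchain/glibc assumptions hold; running `ldd probe` on the
// board shows which shared libraries it actually resolves against.
#include <cstdio>
#include <gnu/libc-version.h>   // glibc-specific header

int main() {
    std::printf("probe running; glibc reports version %s\n",
                gnu_get_libc_version());
    return 0;
}
```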
If X-Linux were to use some other libc, like uClibc or similar, then you'd need a cross-compiler on your host. I have little experience with Ubuntu in that regard, but I know that the sys-devel/crossdev package for Gentoo Linux makes generating cross-compilers very easy. This can be for different architectures (not needed in your case) as well as for different libraries (e.g. uClibc).
I'd say simply give copying the binaries a try, and report back if you encounter any problems there.
What is the command for selecting the processor (MIPS R2000) in g++? Thanks
You'll probably need a cross-compilation environment for your target platform. You might find an existing one or you may need to build your own cross-compiler using the gcc toolchain. There's no single way to do this - it will depend on the specifics of the target architecture. Specifically, is there already an operating system (e.g. Linux, BSD, etc.) running on your target system? What kind of userland does it use - your build chain will need the relevant C and C++ library as well as any other libraries you need to build and run your software. Or are you coding straight against the metal? In this case, you'll want to find existing bootstrap code for getting the system into a sensible state for running your code - rolling your own will not be easy.
Generally, you're probably best off finding an existing developer community centred around the platform in question and asking for advice there. They may have step-by-step instructions for getting started.
Note that the CPU alone is only part of the picture - for example, the ARM architecture is very popular, but compiling code for Android devices (Linux kernel with Android userland), iOS devices (xnu kernel with BSD- and OSX-derived iOS userland), a Nintendo DS or a Playstation Vita (probably no multitasking OS at all) will be extremely different, even though they all use ARM chips, in many cases even the same instruction set generation.
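To tie this back to the original question: if the R2000 target happens to run Linux with glibc and a MIPS cross toolchain is installed on the host, the build would look roughly like the comment in this sketch (the toolchain prefix is an assumption about your setup; -march=mips1 selects the ISA generation the R2000 implements):

```cpp
// Hypothetical cross-compilation of this file for a MIPS R2000-class target:
//   mips-linux-gnu-g++ -march=mips1 -o hello hello.cpp
// The "mips-linux-gnu-" prefix assumes a Linux/glibc userland on the target;
// a bare-metal board or a different libc needs a different toolchain entirely.
#include <cstdio>

int main() {
    std::printf("hello from a MIPS R2000-class target\n");
    return 0;
}
```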
Does anyone know of a C++ code coverage tool usable under the following conditions:
The target platform is a PowerPC CPU inside a Nintendo Wii dev kit that runs a custom embedded OS. The only way to exchange data with the PC is through a custom proprietary API (sorry, NDA).
The compiler is not Microsoft, not GCC, and not even command-line; it's the Metrowerks IDE (running on Windows, of course).
Thanks in advance!
Do you know about BullseyeCoverage? It is a commercial tool that supports a really large number of platforms and compilers. I did not find the Metrowerks compiler in the list, but if you don't see your compiler you can write them an inquiry.
Hope that helps,
Ovanes
See Cpp Test Coverage. This tool can be configured to collect data in embedded systems; you have to figure out how to export an array of bits from inside that system to an external file system, and if you can do that, it can show you precise test coverage.
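The export step is usually just a bit of glue code on your side: the instrumentation leaves a bit/counter array in memory, and you push it off the device through whatever channel you have. A generic sketch of that glue, not tied to any particular tool's API (send_to_pc and coverage_bits are stand-ins for the proprietary transfer call and the tool's data):

```cpp
#include <cstdio>
#include <cstddef>
#include <cstdint>

// Stand-in for the NDA'd transfer API: on the real dev kit this would push a
// buffer to the PC; here it just reports what it would send.
void send_to_pc(const void* data, std::size_t size) {
    std::printf("sending %zu bytes to PC\n", size);
    (void)data;
}

// Stand-in for the bit array the coverage instrumentation maintains in memory;
// the real symbol name and layout come from whatever tool you end up using.
unsigned char coverage_bits[4096];

// Called at a quiescent point (e.g. end of a test run) so the raw coverage
// data ends up on the PC, where the tool's reporting side can decode it.
void dump_coverage() {
    std::uint32_t size = sizeof coverage_bits;
    send_to_pc(&size, sizeof size);              // simple length prefix
    send_to_pc(coverage_bits, sizeof coverage_bits);
}

int main() {
    dump_coverage();
}
```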
Does the Metrowerks compiler have special syntax that is not ANSI standard?
My shop has been using a customized version of Covtool. Perhaps that could be ported to your environment.
I have used Cantata. It works with Metrowerks. It instruments your code, so your application will not run at full speed. You just need to rewrite the I/O functions so that output happens through the custom proprietary API.