Efficient C++ for ARM [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I will give internal training about C++ on ARM, focusing on programming tips and hints, and I searched some webpages like:
Embedded C interview Questions for Embedded Systems Engineers
Efficient C for ARM
All of the above are mainly about C on ARM, so I am wondering whether they apply to C++ as well, for instance struct padding and so on.
Can you help me with that? Thanks.

I didn't look at the first link. The second link, Efficient C for ARM, is very good; thanks for finding and sharing it. I am going to refer people to that link.
In the same way, the Zen of Assembly Language is still as relevant today as when it came out, not because the modern x86 is related to the 8088/86 and its "cycle eaters", but because the thought process and analysis it teaches don't change over time. The cycle eaters might change from language to language or target to target, but how you find them doesn't. That book was reportedly outdated and irrelevant as 8088/86 tuning advice even when it came out, but I read it then and have used what I learned every day since.
The same goes here: Efficient C for ARM applies well to similar items in C++, but more importantly look at the early slides, before any specific structures or code are shown. You have to analyze by inspection and by using a profiler (no different from what the Zen of Assembly Language says: look at it and time it). The Efficient C for ARM page then goes on to inspect some examples. Take your C++ code, compile it, then disassemble it, and see what is really happening. The catch is that a compiler has many tuning knobs, compilers are constantly evolving, and different compilers, say gcc, llvm (clang) and Visual C/C++, are completely different. The same C++ source code presented to different compilers, to different versions of the same compiler, or to the same compiler with different optimization settings is going to produce different results.
When you want to micro-optimize, you have to learn a lot about how compilers work by getting a considerable amount of experience disassembling and analyzing what the compiler does with your code (FOR EACH TARGET YOU CARE ABOUT). Only then can you begin to do some real optimization without having to resort to writing assembler. Despite what folks may tell you, you can do this: in some situations you can significantly improve execution performance simply by rearranging structures, functions, lines of code, etc. You can also make code that is more portable to other processors, and code that is generally faster on a number of platforms, not just one. The naysayers are right in that you need a good reason for it; sometimes it doesn't hurt readability, but often taking it too far makes your code unreadable, unmaintainable, fragile, etc. Always arrange your structures in a sensible way: larger, more strictly aligned members first, then progressively smaller ones. Other things you may not want to do as a habit, but only for special occasions.

Related

Difference between developing C/C++ code for different target MCUs [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I see on quite a few job descriptions for embedded developer positions that a requirement is, for example, knowledge of embedded C programming on ARM microcontrollers.
What's not very clear to me is what the difference is between having knowledge of developing C for a particular ARM processor and developing C for a PowerPC or another type of processor. Developing in C, we are abstracted from the actual instructions that will get compiled and from the architecture, so why does it really matter?
I would guess that it is important mainly to have knowledge of embedded development, knowing memory constraints etc., and that the actual target processor wouldn't matter too much. For example, won't a developer who has been writing code for PowerPC MCUs for many years be just as good at developing code for an ARM-based or other type of MCU?
Perhaps I'm missing something obvious here. Could someone please give an example of some major differences between writing C/C++ code for one type of processor and another? Thanks.
You all write about Linux and kernels, but 99% of embedded work is purely bare metal; therefore, knowledge of the core and peripherals is essential.
In most cases, until Linux programmers learn how to program bare metal, they are of little use.
And the time that takes costs your employer money.
As usual, it depends.
If you are going to develop at a deep C level, it is a good idea to know the architecture in order to optimize your code for speed and/or memory (data bus size, instructions per clock, etc.).
It is a must if you are going to develop at a deeper level (and if we are talking about ARM with its Thumb, Thumb-2 and ARM instruction sets... it is madness).
Furthermore, each architecture, whether for legacy reasons or to make our lives better, adds some tricks. For instance, on ARM the PC points to the instruction after the next one to be executed, which is quite different from PowerPC or RH850 or whatever.
Above that (I mean at or close to the OS level), the compiler tends to be better than us, but it is still nice to know how it works (and it will definitely make debugging easier).
The answer is yes and no.
If a job is more application related, then MCU-specific knowledge will be of little help.
If the software development is at the kernel/BSP/driver level or is hardware related, then knowledge of the MCU/CPU hardware is critical.
Check the job description in detail to know the kind of work expected.
Well, I haven't written any ARM code myself, but I would think the ARM-specific experience would be (in no particular order):
Experience with ARM-specific toolchains (particularly non-standard parts of that toolchain, pieces of proprietary software etc)
Knowledge of which relevant libraries are available on ARM rather than on Intel machines, and whether their ARM version is performant enough, stable enough etc.
Knowledge of what certain coding patterns look like when compiled to ARM machine code, and consequently good judgment regarding certain low-level implementation choices.
Ability to write inline ARM assembly when and if necessary
Knowledge of ARM systems' cache layout
Ability to estimate/forecast the power economy of code rather than its mere performance (as ARM processors are sometimes more concerned with that).
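The point about knowing how coding patterns compile can be made concrete with a classic tip from the Efficient C for ARM material: count loops down to zero, because on ARM the subtraction that updates the counter also sets the condition flags, so no separate compare against n is needed. A hedged sketch (an optimizing compiler may of course transform either form):

```cpp
// Counting up: the compiled loop typically needs an ADD for the counter
// plus a CMP against n before the branch.
int sum_up(const int* a, unsigned n) {
    int s = 0;
    for (unsigned i = 0; i < n; ++i)
        s += a[i];
    return s;
}

// Counting down to zero: on ARM a SUBS instruction updates the flags
// as a side effect, so the loop can end with just SUBS + BNE.
int sum_down(const int* a, unsigned n) {
    int s = 0;
    for (unsigned i = n; i != 0; --i)
        s += a[i - 1];
    return s;
}
```

Both functions compute the same sum; the only difference is the loop shape the compiler is handed, which is exactly the kind of thing you verify by disassembling the output for your target.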

Do all programming languages have assemblers? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
I've found that the Go language has its own assembler, and various C compilers have their own assemblers, like Turbo C, LCC, OpenWatcom. But some programs are only assemblers, like nasm, gas, etc. Why do compilers for high-level languages come with an assembler? If the compiler has to convert the source code into assembly code, then all programming languages must have their own assemblers. Could someone please explain this to me?
Do all programming languages have assemblers?
No.
Modern C++ compilers can output machine code (or intermediate object code, in the case of LLVM) directly, without the need to generate assembly. The assembler is optional and, if it exists, is provided as a convenient tool for the programmer for code inspection.
Can only one driver drive a particular car? No. The car is the assembler and can be made for multiple drivers.
It is in the best interest of the company that invents a processor to document both the instruction set's machine code and an assembly language. Assembly language is not a universal thing; it is very specific to each assembler, the program that parses it and turns it into machine code. So the processor vendor's documentation should use some assembly language to make the documentation readable. To promote the new processor, it is in their best interest to create, or hire someone to create, an assembler and other tools; and you always have to create an assembler anyway, if for no other reason than to validate the processor. So a generic assembler will exist.
Compiler vendors may then of course choose to support a generic one, a competitor's, or make their own, or any combination, and the reasons for those choices can be many. For various reasons, some compilers may choose not to generate assembly language as an intermediate step but to go straight to machine code.
Your typical generic retargetable compiler will tend to support the compiler-assembler-linker model. They might do more than that, but they will tend toward that model. Again, which assembler, which linker? There are many reasons why they might make their own or support a generic one, but if they use that model they have to pick at least one to support. Someone in the business of selling that compiler will likely want to control not just the compiler but the whole toolchain, including a GUI. They may take from open source or do their own, but it is in their best interest to control their own destiny.

Writing pure code without using third-party header files [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
As I try to learn C/C++, I always find it frustrating that I need to use header files. It makes it seem like it's not my code, but that I am using some other person's code instead. I simply want it to be pure, written by myself without using the header files.
I know for certain that C/C++ includes libraries that give the developer some functions in order to, for example, create a vector. The Boost libraries are similar to that, but again, I want to write my own code, and maybe create my own library for my work.
But is this possible? If I wrote my own header files for C/C++ that acted almost like the iostream header, for example, only that I've made it my own and optimized it, would it be beneficial for my applications/projects, or should I just use the standard library that is included with the language?
My answer comes, at least partially, in the form of a rhetorical question:
Are you also going to write your own compiler?
You're always using something that someone else wrote, and for general-purpose use this is a very, very good thing. Why? Because they are the experts in their field, because they are multiple people, and because their code has gone through decades of rigorous peer review, thorough testing by millions upon millions of people, and many iterations of improved versions.
Shying away from that as an instinct is one thing, but refusing to use standard headers because of it is quite another, especially when you draw the line so arbitrarily†.
In short, there is a damned good reason why the C++ standard defines the standard library and why your compiler vendor ships an implementation of it. My strong recommendation is that you work to that principle.
† …which is why mine is not a "slippery slope" argument!
Of course you should use the standard library. The only reasons not to do so are:
The class you want does not exist.
You are a pro C++ programmer and something about its implementation really annoys you.
You as a beginner want to learn something by trying to build your own simple data-storage types (like, for instance, a vector type).
Your thought that "all should be made by yourself" is not that uncommon, but once you've implemented one of the standard types and spent hours on it while your actual project hasn't progressed a single line, and your new "own" type still misses half the functionality, then you'll realize that using an existing library (especially the standard library, or well-known ones like Boost) might actually be the clever thing to do.
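To see why, try the exercise: even a toy growable array (a hypothetical sketch below, nowhere near std::vector) already needs growth logic, manual memory management, and deleted copy operations to avoid double frees:

```cpp
#include <cstddef>
#include <algorithm>

// A deliberately minimal growable array for default-constructible types.
// std::vector additionally handles copies, moves, exception safety,
// allocators, iterators, insertion/erasure in the middle, and much more.
template <typename T>
class TinyVec {
    T* data_ = nullptr;
    std::size_t size_ = 0, cap_ = 0;
public:
    TinyVec() = default;
    TinyVec(const TinyVec&) = delete;             // copying is the hard part,
    TinyVec& operator=(const TinyVec&) = delete;  // so this toy forbids it
    ~TinyVec() { delete[] data_; }

    void push_back(const T& v) {
        if (size_ == cap_) {                      // grow geometrically
            std::size_t ncap = cap_ ? cap_ * 2 : 4;
            T* nd = new T[ncap];
            std::copy(data_, data_ + size_, nd);
            delete[] data_;
            data_ = nd;
            cap_ = ncap;
        }
        data_[size_++] = v;
    }
    std::size_t size() const { return size_; }
    T& operator[](std::size_t i) { return data_[i]; }
};
```

Writing this is a genuinely useful learning exercise; shipping it instead of std::vector is not.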
It makes it seem like it's not my code, but I am using some other person's code instead.
How would you write the <fstream> library? Opening files is not something that can be done in the pure C++ language. A library that provides that capability is needed. At base, opening files has to be done by the operating system and the OS exposes that to your code. The OS itself has to deal with other software that enables it to do these things.
Or what about this: Addition doesn't happen by magic, so somebody had to spell out exactly how to do it for your program to be able to do a + b. Does writing a + b make you feel like you're using other people's code, the code which describes how the add instruction is implemented on the CPU?
No single piece of software is going to do everything. Every piece of software will have to interact with other components, and virtually always some of those other components will be the results of someone else's work. You should just get used to the idea that your software has its own area of responsibility and it will rely on others to handle other things.
Re-inventing the wheel is a bad idea. Especially if that wheel has been designed and built by people smarter and more knowledgeable than you, and is known to everyone else who is trying to build cars (program C++).
Use the headers; don't be daft.
By the time one re-implements most standard routines, one might as well make a new language. That's why we have a wide selection of languages to choose from: folks have dreamed up a better idea. Re-inventing the wheel is good; we don't drive on chariot tires much these days.
C and C++ may not be the greatest, but with 40 years of history they have staying power (and lots of baggage). So if you are going to program in a language, use its resources, with their strengths and weaknesses. Chances are far greater that your solution will not be better than the existing libraries, improved by thousands of others.
But who knows - your code may be better.

Kernel development and C++ [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
From what I know, even though the common OS have parts written in other languages, the kernel is entirely written in C.
I want to know if it's feasible to write a Kernel in C++ and if not, what would be the drawbacks.
There are plenty of examples of well-used operating systems (or parts of them) implemented in C++. IOKit, the device-driver subsystem of Mac OS X and iOS, is implemented in EC++ (Embedded C++). Then there's the eCos RTOS, where the kernel is implemented in C++, even making use of templates.
Operating systems are traditionally awash with examples of OO concepts implemented the hard way in C. In the Linux device model, kobject is effectively the base class for driver and device objects, complete with DIY v-tables and some funky arrangements, implemented in macros, for up- and down-casting.
The Windows NT kernel has an even more deeply rooted inheritance hierarchy of kernel objects. And for all the naysayers complaining about the suitability of exception handling in kernel code, exactly such a mechanism is provided.
Traditionally, the arguments against using C++ in kernel code have been:
Portability: the availability of C++ compilers for all intended target platforms. This is not really an issue any more.
The cost of C++ language mechanisms such as RTTI and exceptions. Clearly, if they are to be used, the standard implementation isn't suitable and a kernel-specific variant is needed. This is generally the driver behind the use of EC++.
The robustness of C++ APIs, and particularly the fragile base class problem.
Undoubtedly, the use of exceptions and the RAII paradigm would vastly improve kernel code quality; you only have to look at the source code for BSD or Linux to see the alternative: enormous amounts of error-handling code implemented with gotos.
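For contrast, here is roughly what the RAII style buys you in ordinary user-space C++ (an illustrative sketch, not actual kernel code): every exit path, including the throw, releases the lock and frees the buffer automatically, where C would need a goto-based cleanup ladder.

```cpp
#include <mutex>
#include <stdexcept>
#include <vector>

std::mutex table_lock;
std::vector<int> table;

void insert_value(int value) {
    std::lock_guard<std::mutex> guard(table_lock);  // unlocked on every path
    std::vector<int> scratch(64);                   // freed on every path
    if (value < 0)
        throw std::invalid_argument("negative");    // cleanup still happens
    scratch[0] = value;
    table.push_back(scratch[0]);
}
```

The equivalent C function would need explicit unlock/free calls before every return, which is exactly where the goto ladders (and the bugs) come from.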
This is covered explicitly in the OSDev Wiki.
Basically, you either have to implement runtime support for certain things (like RTTI, exceptions), or refrain from using them (leaving only a subset of C++ to be used).
Other than that, C++ is the more complex language, so you need somewhat more competent developers who won't screw it up. Linus Torvalds hating C++ is purely coincidental, of course.
To address Torvalds' concerns and others mentioned elsewhere here:
In hard real-time systems written in C++, STL/RTTI/exceptions are not used, and the same principle can be applied to the much more lenient Linux kernel. Other concerns about the "OOP memory model" or "polymorphism overhead" basically expose programmers who never really checked what happens at the assembly level or in the memory layout. C++ is as efficient as C and, thanks to optimizing compilers, often more efficient than a C programmer writing lookup tables badly because he doesn't have virtual functions at hand.
In the hands of an average programmer, C++ doesn't add any extra assembly code versus a C-written piece of code. Having read the assembly translation of most C++ constructs and mechanisms, I'd say that the compiler even has more room to optimize than in C, and can produce leaner code at times. So as far as performance goes, it's pretty easy to use C++ as efficiently as C, while still utilizing the power of OOP.
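The lookup-table remark can be made concrete: a hand-rolled C-style function-pointer table and a C++ virtual call are both an indirect call through a pointer loaded from a table, so there is no inherent overhead in the C++ version. A sketch with hypothetical types:

```cpp
// C style: a struct of function pointers, i.e. a v-table by hand.
struct ShapeOps {
    int (*area)(int dim);
};
int square_area(int side) { return side * side; }
const ShapeOps square_ops = { square_area };

// C++ style: the compiler generates and indexes the v-table for you,
// and the type system guarantees every entry is filled in correctly.
struct Shape {
    virtual int area(int dim) const = 0;
    virtual ~Shape() = default;
};
struct Square : Shape {
    int area(int side) const override { return side * side; }
};
```

Disassembling a call through `square_ops.area` and a call through a `Shape*` on your target is a quick way to confirm they cost about the same.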
So the answer is that the objection is not related to facts; it basically revolves around prejudice and not really knowing what code C++ creates. I personally enjoy C almost as much as C++ and I don't mind it, but there is no rational argument against layering an object-oriented design above Linux, or using one in the kernel itself; it would have done Linux a lot of good.
You can write an OS kernel in more or less any language you like.
There are a few reasons to prefer C, however.
It is a simple language! There's very little magic. You can reason about the machine code the compiler will generate from your source code without too much difficulty.
It tends to be quite fast.
There's not much of a required runtime; there's minimal effort needed to port that to a new system.
There are lots of decent compilers available that target many many different CPU and system architectures.
By contrast, C++ is potentially a very complex language, which involves an awful lot of magic to translate your increasingly high-level OOP code into machine code. It is harder to reason about the generated machine code, and when you need to start debugging your panicky kernel or flaky device driver, the complexities of your OOP abstractions will start becoming extremely irritating... especially if you have to do it via user-unfriendly debug ports into the target system.
Incidentally, Linus is not the only OS developer to have strong opinions on systems programming languages; Theo de Raadt of OpenBSD has made a few choice quotes on the matter too.
The feasibility of writing a kernel in C++ can be easily established: it has already been done. EKA2 is the kernel of Symbian OS, which has been written in C++.
However, some restrictions to the usage of certain C++ features apply in the Symbian environment.
While there is something "honest" about (ANSI) C, there is also something "honest", in a different way, about C++.
C++'s syntactic support for abstracting objects is very worthwhile, no matter what the application space. The more tools available for misnomer mitigation, the better ... and classes are such a tool.
If some part of an existing C++ compiler does not play well with kernel-level realities, then whittle up a modified version of the compiler that does it the "right" way, and use that.
As far as programmer caliber and code quality, one can write either hideous or sublime code in either C or C++. I don't think it is right to discriminate against people who can actually code OOP well by disallowing it at the kernel level.
That said, and even as a seasoned programmer, I miss the old days of writing in assembler. I like 'em both ... C++ and ASM ... as long as I can use Emacs and source level debuggers (:-).
Revision after many years:
Looking back, I'd say the biggest problem is actually the tons of high-level features in C++ that are either hidden from or outside the control of the programmer. The standard doesn't enforce any particular way of implementing things, and even if most implementations follow common sanity, there are many good reasons to be 100% explicit and have full control over how things are implemented in an OS kernel.
This allows you (as long as you know what you are doing) to reduce the memory footprint, to optimize data layout based on access patterns rather than OOP paradigms, thus improving cache-friendliness and performance, and to avoid potential bugs that might come hidden in the tons of high-level features of C++.
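One concrete form of "layout by access pattern": if a hot loop only ever reads one field, a struct-of-arrays layout streams that field through the cache instead of dragging the rest of each object along. An illustrative sketch with hypothetical types:

```cpp
#include <vector>

// Array-of-structs: a loop over mass alone still pulls x, y, z
// into the cache, since all four fields share each cache line.
struct ParticleAoS { float x, y, z, mass; };

// Struct-of-arrays: each field is contiguous, so a mass-only loop
// touches one tight array and wastes no cache space.
struct ParticlesSoA {
    std::vector<float> x, y, z, mass;
};

float total_mass(const ParticlesSoA& p) {
    float sum = 0.0f;
    for (float m : p.mass)   // streams a single contiguous array
        sum += m;
    return sum;
}
```

The SoA form cuts against the object-oriented instinct of one struct per entity, which is exactly the kind of explicit trade-off the answer is describing.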
Note that, even though it is far simpler, even C is too unpredictable in some cases, which is one of the reasons there is also a lot of platform-specific assembly in kernel code.
Google's upcoming operating system Fuchsia is based on a kernel called Zircon, which is written mostly in C++, with some parts in assembly language[1] [2]. The rest of the OS is also written mostly in C++[3]. I think modern C++ gives programmers many reasons to use it as a general programming environment for huge codebases. It has lots of great features, and new features are added regularly. I think this is the main motive behind Google's decision, and C++ could easily be the future of systems programming.
One of the big benefits of C is its readability. If you have a lot of code, which is more readable:
foo.do_something();
or:
my_class_do_something(&foo);
The C version is explicit about which type foo is every time foo is used. In C++ there is a lot of ambiguous "magic" going on behind the scenes, so readability is much worse if you are just looking at some small piece of code.

recommended guides/books to read assembly [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 6 years ago.
So lately I've been interested in reading the assembly displayed by a disassembler like OllyDbg. The reason I want to read this assembly is to learn how other developers build their applications, or things like the file formats of the binary files a program uses.
It's not like I'm a complete newbie at programming, since I've been using C++ and C# for a while now, and I have a solid understanding of C++, so the whole pointer concept is clear to me.
I know that there are tons of assembly guides out there on the internet, but I have no idea how reliable they are. This tutorial: http://jakash3.wordpress.com/2010/04/24/x86-assembly-a-crash-course-tutorial-i/ was very useful to me, and it is the kind of tutorial with just a short explanation of each instruction. It was very clear, but it doesn't cover all of the assembly instructions.
I hope someone could give a good guide/tutorial.
I think the standard guide to assembly is The Art of Assembly. It's online and free.
If you are interested in x86 assembly and the opcodes for all instructions, try the Intel manuals (a free download). If you want to know how to program in assembler, use the recommendation by Seth Carnegie. Another download would be the 32-bit edition.
I learned much of what I know about assembly language from Matt Pietrek's Just Enough Assembly Language to Get By and its sequel. This is especially made for C or C++ programmers wanting to read what the compiler emits in their debugger.
Pietrek's material is beyond doubt, and he writes clearly and entertainingly. It's tailored to the Windows platform, though.
This is a different approach. I recently released a first draft of learning assembly by example. It is not x86 assembly, though, on purpose. Learning a better and easier-to-understand instruction set first will get you through the basic concepts; from there, other instruction sets, x86 included, are often a matter of downloading an instruction set reference for the desired processor. https://github.com/dwelch67/lsasim
Normally googling "xyz instruction set" will get you a number of hits (instead of xyz, choose the one you are interested in: arm, avr, 6502, etc.). Ideally you want the vendor's documentation, which is usually free. There have been so many variations on the x86 by different companies that it adds to the mess, but there are a lot of good online references.
For other families, msp430, avr, arm, mips, pic, etc., you can often go to the (core processor) vendor's site to find a good reference. msp430, arm and thumb are also good first-time instruction sets if you are not interested in the lsa thing; mips or dlx as well, so I am told, though unfortunately I have not learned those yet. Save avr and x86 until after you have learned something else first. Pic has a few flavors; the non-mips traditional pic instruction set is certainly educational in its simplicity and approach, and might be a stepping stone to x86 (I don't necessarily recommend pic as a first instruction set either).
So I recommend learning a couple of non-x86 instruction sets first, and when you do get to x86, learning the 8088/86 instructions first. I can give you an ISBN for the original Intel manuals, which can probably be found for a few bucks at a used book store (online); lots of websites have the x86 instruction set defined as well. I highly recommend a visible simulator first, before trying real hardware; it will make life easier. qemu, for example, is not very visible nor easy to make visible; gdb's simulators might be, as you can at least step and dump things out.
I really liked Programming from the Ground Up, a free book which aims to teach you the basics of ASM programming in a pretty easy-to-understand way. You can check it out here:
Programming from the Ground up