Is C++ suited for tiny embedded targets? [closed] - c++

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
We are currently redesigning our embedded software and are going from 8 bit to 32 bit Cortex-M microcontrollers. Memory is pretty limited (128 kByte Flash and 32 kByte RAM).
In another thread an embedded software library (www.redblocks.de) was recommended. It seems to cover my needs very well, but requires the use of C++.
Does anybody have experience with C++ on embedded platforms like ours? I am wondering what kind of overhead I am dealing with, compared to C.

Depending on the C++ features you are using, there is little to no overhead compared to C.
Here are some features compared:
Using classes without virtual methods results in the same binary code as C functions working on a data structure that is passed along as a pointer.
When classes with virtual methods are used, a vptr is added to the object’s data section and a vtable is introduced in the text memory segment. Similar functionality can be implemented in C with function pointers (which occupy memory as well). As soon as you have more than one virtual method in a class, you typically end up with more efficient binary code when using C++ instead of manually introducing multiple function pointers per object in C.
The efficiency of exception handling differs from compiler to compiler.
RTTI adds overhead and should not be used on tiny embedded targets.
Non-deterministic dynamic memory usage (malloc / free in C and new / delete in C++) should be avoided with both programming languages on platforms without virtual memory management.
Templates have much in common with C preprocessor macros, as they are evaluated at compile time and are a kind of compile-time source code generation. Thus, they do not add any runtime overhead. However, using them carelessly will result in bloated code. Used in the right places, they can even help reduce runtime overhead.
I think the most challenging issue is the developers’ knowledge. C++, especially when using templates a lot, is a much more sophisticated language than C. So you need a bunch of pretty good developers.
However, if you want to go for a clean and reusable object oriented design, C++ is certainly the better choice than C.

I am not an embedded developer myself, but I have several colleagues using C++ on the kind of microcontrollers you are targeting.
The language by itself does not add a lot of overhead but the use of the standard library (containers, algorithms...) is not recommended if you are limited in Flash/RAM.
If performance is an issue, you might also want to avoid RTTI and exceptions.
More information on this paper or on this page.
The book Effective C++ in an Embedded Environment from Scott Meyers is also a very good source of information.

Related

is converting Lua to spir-v possible? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 5 years ago.
I want to write shader code in Lua, but it needs to be converted to SPIR-V. I have not come across a non-GLSL compiler for it yet. Is this possible to do?
It certainly is possible (to make a converter from Lua to SPIR-V), but it is a lot of work (certainly several years, if you want the generated SPIR-V code to be efficient). You'll need to write a Lua to SPIR-V compiler.
If you want to go down that route, read several books about compilation, starting with the Dragon Book. Of course, optimization (in your Lua -> SPIR-V compiler) is really important.
You have tagged your question as C++. If you want something related to C++ about SPIR-V, consider using OpenACC.
My recommendation is to stay reasonable: if you want to code for a GPGPU, use a dedicated low-level GPGPU language like OpenCL (or CUDA). You are likely to need to write only short routines (compute kernels) in that (OpenCL or CUDA) language (and more glue code, e.g. to be able to use them from a Lua program, or even a C++ or a C one).
Lua, as a high-level scripting language, has a number of features that SPIR-V, as a low-level shading language, either 1) cannot handle at all, 2) can only handle inefficiently, or 3) cannot handle transparently (i.e., the code invoking the shader operation needs to do different things).
For example, Lua makes functions first-class objects with lexical scoping. That might be possible in SPIR-V, but it would require memory allocation. And SPIR-V can't do that; it can only work with the memory objects that the external system provides. So this means that the code invoking the shader operation needs to provide storage of some sort to the shader process. How much would depend on what the script does.
Also, SPIR-V has no concept of strings, yet pretty much everything in Lua relies on that. So you'll have to manufacture strings out of whole cloth. And since it cannot allocate memory, the external system would again be required to provide storage for strings.
Lua tables would also be incredibly expensive to implement. Not only do they require dynamic allocation, they also require lots of memory access indirection. Accessing a table with a runtime string value would hurt shader performance. And since Vulkan-flavored SPIR-V requires logical addressing, implementing tables will require not having actual pointers. So you'll have to use some sort of array with an index.
Oh, and Lua has none of the data structures or syntax that SPIR-V and shaders work with. SPIR-V needs to communicate with the outside world through a very well defined interface, consisting of typed and decorated variable declarations. Lua has no way of explicitly defining the type of variables, let alone decorating them, so you'll have to invent such a grammar.
Lua (pre-5.3) also lacks a formal notion of the distinction between a float and an integer. This decision is very important for SPIR-V, as it helps define the interface between a shader stage and the outside world.
Oh, and SPIR-V outright forbids recursion. So even if you managed to make all of the above work, there's no way to guarantee that all Lua shaders could work as a SPIR-V shader.
Is it possible? Probably. Is it at all reasonable? No.

Why is there a networking library proposal for C++14/17? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 8 years ago.
Even though TCP/UDP/IP are commonly used protocols, I do not understand why they want it to be part of the ISO C++ Standard. These have nothing to do with the core of the language. Data structures are universally required tools hence STL makes sense but these protocols are too specific IMO.
There has been a long-standing sentiment that the C++ library's tiny focus area is something bad that's holding the language back. Most "modern" languages come with large framework libraries which include networking, graphics and JSON. By contrast, if you want to do any of these in C++, you a) don't get anything by default, and b) are overwhelmed with a choice of third-party libraries which you are usually unable to appraise properly and select from.
This is how that side of the opinion goes.
Of course there are other people who think that that's just the way it should be. Nonetheless, standardization is hard work, and whereas languages like Java and C# and Go have large companies behind them that can put energy into developing a huge library, C++ doesn't have that kind of manpower, and most people who spend time on C++ standardization are more interested in core aspects of programming: data structures, concurrency, language evolution (concepts, ranges, modules...).
So it isn't so much that people are generally opposed to a larger library; it's just not a priority for many. But if good ideas come around, they have a good chance of being considered. And large library components like networking won't be going into the standard library directly anyway, but rather into a free-standing Technical Specification, which is a way to see whether the idea is useful, popular and correct. Only if a TS gets widely used and receives lots of positive feedback will there be a possible future effort to include it in the IS.
(You may have noticed similar efforts to create Technical Specifications for file systems and for graphics.)
C++11 includes threading in the standard. Now programmers need not write pthreads code on Linux and Windows threads code on Windows separately. The same could happen if a networking library gets standardized.

Writing pure code without using third-party header files [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 8 years ago.
As I try to learn C/C++, I always find it frustrating that I need to use the header files. It makes it seem like it's not my code, but that I am using some other person's code instead. I simply want it to be pure, written by myself without using the header files.
I know for certain that C/C++ include libraries that give the developer some functions in order to, for example, create a vector. The Boost libraries are similar to that, but again, I want to write my own code, and maybe create my own library for my work.
But is this possible? If I wrote my own header files for C/C++ that almost acted like the iostream.h file for example, only that I've made it my own, and optimized it, will it be beneficial for my applications/projects, or should I just use the standard library that is included with the programming languages?
My answer comes, at least partially, in the form of a rhetorical question:
Are you also going to write your own compiler?
You're always using something that someone else wrote, and for general-purpose use this is a very, very good thing. Why? Because they are the experts in their field, because they are multiple people, and because their code has gone through decades of rigorous peer review, thorough testing by millions upon millions of people, and many iterations of improved versions.
Shying away from that as an instinct is one thing, but refusing to use standard headers due to it is quite another, especially when you draw the line so arbitrarily†.
In short, there is a damned good reason why the C++ standard defines the standard library and why your compiler vendor ships an implementation of it. My strong recommendation is that you work to that principle.
† …which is why mine is not a "slippery slope" argument!
Of course you should use the standard library. The only reasons not to do so are:
The class you want does not exist.
You are a pro C++ programmer and something about its implementation really annoys you.
You, as a beginner, want to learn something by trying to build your own simple data storage types (like, for instance, a vector type).
Your thought that "all should be made by yourself" is not that uncommon. But once you've implemented one of the standard types and spent hours on it while your actual project hasn't progressed a single line, and your new "own" type still misses half of the functionality, you'll realize that using an existing library (especially the standard library, or well-known ones like Boost) might actually be the clever thing to do.
It makes it seem like it's not my code, but I am using some other person's code instead.
How would you write the <fstream> library? Opening files is not something that can be done in the pure C++ language. A library that provides that capability is needed. At base, opening files has to be done by the operating system and the OS exposes that to your code. The OS itself has to deal with other software that enables it to do these things.
Or what about this: Addition doesn't happen by magic, so somebody had to spell out exactly how to do it for your program to be able to do a + b. Does writing a + b make you feel like you're using other people's code, the code which describes how the add instruction is implemented on the CPU?
No single piece of software is going to do everything. Every piece of software will have to interact with other components, and virtually always some of those other components will be the results of someone else's work. You should just get used to the idea that your software has its own area of responsibility and it will rely on others to handle other things.
Re-inventing the wheel is a bad idea. Especially if that wheel has been designed and built by people smarter and more knowledgeable than you, and is known to everyone else who is trying to build cars (program C++).
Use the headers; don't be daft.
By the time one re-implements most standard routines, one might as well make a new language. That's why we have a wide selection of languages from which to choose: folks have dreamed up a better idea. Re-inventing the wheel is good - we don't drive on chariot tires much these days.
C and C++ may not be the greatest, but with 40 years of history they do have holding power (and lots of baggage). So if you are going to program in a language, use its resources, with its strengths and weaknesses. Chances are far greater that your solution will not be better than the existing libraries, improved by thousands of others.
But who knows - your code may be better.

C vs C++ - advantages with c-language [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
C++, as the name suggests, is a superset of C. As a matter of fact, C++ can compile most C code while C cannot compile C++ code.
There are several advantages of C++ compared with C - for instance:
data cannot be hidden in the C language
C is more low-level (which means harder to understand and code - and that means more bugs)
C does not allow function overloading
C does not support exception handling
you can use functions inside structures in C++ but not in C
This list could certainly be much longer - but here comes my question: Is there ANY advantage of the C language compared with C++? Is there anything whatsoever that is better in C than in C++? Does C have anything that C++ lacks?
I do not know about this at all - but could C possibly be slightly faster than C++ because it compiles to fewer instructions? A lower-level language would possibly require fewer instructions from the processor.
Simply put, C and C++ are two different languages.
C++, as the name suggests, is a superset of C
No. This is not true. C++ is not a superset of C.
Is there ANY advantage with c-language compared with c++? Is there anything whatsoever that is better with c than with c++?
Static initialization is safe in C but not in C++, because in C++ static initialization can cause code to run, which may depend on other variables having been statically initialized. It can also cause cleanup code to run at shutdown whose sequence you can't control (destructors).
C gives you better control over what happens when your code is executed. When reading C code it is fairly straightforward to see when code is getting executed and when memory is just read, or primitive operations are performed.
C supports variable-length arrays on the stack (a C99 feature), which are much faster to allocate than heap memory.
No name mangling. If you intend to read generated assembly code, this makes that much easier. It can be useful when trying to optimize code.
De facto standard application binary interface (ABI). Code produced by different compilers can easily be combined.
Much easier to interface with other languages. A lot of languages will let you call C functions directly. Binding to a C++ library is usually a much more elaborate job.
Compiling C programs is faster than compiling C++ programs, because parsing C is much easier than parsing C++.
Varargs cannot safely be used in C++. They're not entirely safe in C either, but they're far less safe in C++ - to the point that they are prohibited by the C++ coding standards (Sutter, Alexandrescu).
C requires less runtime support, which makes it more suitable for low-level environments such as embedded systems or OS components.
The standard way in C to do encapsulation is to forward declare a struct and only allow access to its data through functions. This method also creates compile-time encapsulation: it allows us to change the data structure's members without recompiling client code (other code using our interface). The standard way of doing encapsulation in C++, on the other hand (using classes), requires recompilation of client code when adding or removing private member variables.

Kernel development and C++ [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
From what I know, even though the common OS have parts written in other languages, the kernel is entirely written in C.
I want to know if it's feasible to write a Kernel in C++ and if not, what would be the drawbacks.
There are plenty of examples of well-used operating systems (or parts of them) implemented in C++. IOKit, the device driver subsystem of Mac OS X and iOS, is implemented in EC++. Then there's the eCos RTOS, where the kernel is implemented in C++, even making use of templates.
Operating systems are traditionally awash with examples of OO concepts implemented the hard way in C. In the Linux device model, kobject is effectively the base class for driver and device objects, complete with DIY vtables and some funky arrangements implemented in macros for up- and down-casting.
The Windows NT kernel has an even more deeply rooted inheritance hierarchy of kernel objects. And for all of the naysayers complaining about the suitability of exception handling in kernel code, exactly such a mechanism is provided.
Traditionally, the arguments against using C++ in kernel code have been:
Portability: availability of C++ compilers for all intended target platforms. This is not really an issue any more.
Cost of C++ language mechanisms such as RTTI and exceptions. Clearly, if they were to be used, the standard implementation isn't suitable and a kernel-specific variant is needed. This is generally the driver behind the use of EC++.
Robustness of C++ APIs, and particularly the Fragile base-class problem
Undoubtedly, the use of exceptions and the RAII paradigm would vastly improve kernel code quality - you only have to look at the source code for BSD or Linux to see the alternative: enormous amounts of error handling code implemented with gotos.
This is covered explicitly in the OSDev Wiki.
Basically, you either have to implement runtime support for certain things (like RTTI, exceptions), or refrain from using them (leaving only a subset of C++ to be used).
Other than that, C++ is the more complex language, so you need somewhat more competent developers who won't screw it up. Linus Torvalds hating C++ being purely coincidental, of course.
To address Torvalds' concerns and others mentioned elsewhere here:
In hard real-time systems written in C++, STL/RTTI/exceptions are not used, and that same principle can be applied to the much more lenient Linux kernel. Other concerns about the "OOP memory model" or "polymorphism overhead" basically come from programmers who never really checked what happens at the assembly level or in the memory layout. C++ is as efficient and, thanks to optimizing compilers, often more efficient than a C programmer writing lookup tables badly because he doesn't have virtual functions at hand.
In the hands of an average programmer, C++ doesn't add any additional assembly code versus a C-written piece of code. Having read the assembly translation of most C++ constructs and mechanisms, I'd say that the compiler even has more room to optimize than with C and can create even leaner code at times. So as far as performance goes, it's pretty easy to use C++ as efficiently as C, while still utilizing the power of OOP.
So the answer is that it's not related to facts; it basically revolves around prejudice and not really knowing what code C++ creates. I personally enjoy C almost as much as C++ and I don't mind it, but there is no rational argument against layering an object-oriented design above Linux, or using one in the kernel itself - it would have done Linux a lot of good.
You can write an OS kernel in more or less any language you like.
There are a few reasons to prefer C, however.
It is a simple language! There's very little magic. You can reason about the machine code the compiler will generate from your source code without too much difficulty.
It tends to be quite fast.
There's not much of a required runtime; there's minimal effort needed to port that to a new system.
There are lots of decent compilers available that target many many different CPU and system architectures.
By contrast, C++ is potentially a very complex language which involves an awful lot of magic being done to translate your increasingly high-level OOP code into machine code. It is harder to reason about the generated machine code, and when you need to start debugging your panicky kernel or flaky device driver the complexities of your OOP abstractions will start becoming extremely irritating... especially if you have to do it via user-unfriendly debug ports into the target system.
Incidentally, Linus is not the only OS developer to have strong opinions on systems programming languages; Theo de Raadt of OpenBSD has made a few choice quotes on the matter too.
The feasibility of writing a kernel in C++ can be easily established: it has already been done. EKA2 is the kernel of Symbian OS, which has been written in C++.
However, some restrictions to the usage of certain C++ features apply in the Symbian environment.
While there is something "honest" about (ANSI) C, there is also something "honest", in a different way, about C++.
C++'s syntactic support for abstracting objects is very worthwhile, no matter what the application space. The more tools available for misnomer mitigation, the better ... and classes are such a tool.
If some part of an existing C++ compiler does not play well with kernel-level realities, then whittle up a modified version of the compiler that does it the "right" way, and use that.
As far as programmer caliber and code quality, one can write either hideous or sublime code in either C or C++. I don't think it is right to discriminate against people who can actually code OOP well by disallowing it at the kernel level.
That said, and even as a seasoned programmer, I miss the old days of writing in assembler. I like 'em both ... C++ and ASM ... as long as I can use Emacs and source level debuggers (:-).
Revision after many years:
Looking back, I'd say the biggest problem is actually the tons of high-level features in C++ that are either hidden or outside the control of the programmer. The standard doesn't enforce any particular way of implementing things, and even if most implementations follow common sanity, there are many good reasons to be 100% explicit and have full control over how things are implemented in an OS kernel.
This allows you (as long as you know what you are doing) to reduce memory footprint, to optimize data layout based on access patterns rather than OOP paradigms - improving cache-friendliness and performance - and to avoid potential bugs that might come hidden in the tons of high-level features of C++.
Note that even though it is far simpler, even C is too unpredictable in some cases, which is one of the reasons there is also a lot of platform-specific assembly in the kernel code.
Google's new-coming operating system Fuchsia is based on the kernel called Zircon, which is written mostly in C++, with some parts in assembly language[1] [2]. Plus, the rest of the OS is also written mostly in C++[3]. I think modern C++ gives programmers many reasons to use it as a general programming environment for huge codebases. It has lots of great features, and new features are added regularly. I think this is the main motive behind Google's decision. I think C++ could easily be the future of system programming.
One of the big benefits of C is its readability. If you have a lot of code, which is more readable:
foo.do_something();
or:
my_class_do_something(&foo);
The C version is explicit about which type foo is every time foo is used. In C++ you have lots and lots of ambiguous "magic" going on behind the scenes, so readability is much worse if you are just looking at some small piece of code.