Many embedded engineers use c++, but some argue it's bad because it's "object oriented"?
Is it true that being object oriented makes it bad for embedded systems, and if so, why is that really the case?
Edit: Here's a quick reference for those who asked:
"so we prefer people not to use divide ..., malloc ..., or other object oriented practice that carry large penalty."
I guess the question is are objects considered heavyweight in the context of an embedded system? Some of the answers here suggest they are and some suggest they're not.
Whilst I'm not sure it answers your question, I can summarise the reasons my previous company's source code was pure C.
It's firstly worth summarising the situation:
we wanted to write a large amount of "core" code that would be highly portable across a large number of ARM embedded systems (mostly mid-range mobile phones; both smart phones and ones running RTOSs of various ages)
the platforms generally had a workable C compiler, though some for example didn't support floating point "double"s.
in some cases the platform had a reasonable implementation of the standard library, but in many cases it didn't.
a C++ compiler was not available on most platforms, and where it was available support for the C++ standard library, STL or exceptions was highly variable.
debuggers often weren't available (a serial port you could send debug printfs to was considered a luxury)
we always had access to a reasonable amount of memory, but often not to a reasonable malloc() implementation
Given that, we worked entirely in C, and even then only a restricted subset of C89. The resulting code was highly portable. We often used object oriented concepts though.
These days "embedded" is a very wide definition. It covers everything from 8-bit microprocessors with no RAM or C compilers up to what are essentially high end PCs (albeit not running Microsoft Windows) - I don't know where your project/company sits in that range.
Taking your quote at face value, dynamic memory allocation is a completely separate concept from object-oriented software design, so the claim is outright false. You can have object-oriented design and not use dynamic memory allocation.
In fact, you can do OO in C to an extent (that's what Linux kernel does). The real reason that many embedded developers don't like C++ is that it's very complex and it is hard to write straight-forward and predictable code in it. Linus has a good recent rant on why he does not like C++ (it's better and more reasoned than his old one, I promise). Probably most folks just don't articulate it very well.
What makes you say that C++ is Object Oriented? C++ is multiparadigm, and not all of the features that C++ provides are useful for the embedded market due to their overheads. (So... Just don't use those features! Problem solved!)
Object Oriented is great for embedded systems. It focuses a lot on encapsulation, data hiding, and code sharing. One can have Object Oriented embedded systems without division or dynamic memory allocation.
Division and dynamic memory allocation are enemies of embedded systems regardless of Object Oriented, Data Oriented or procedural programming. These concepts may or may not be used in the implementation of Object Oriented designs.
Object Oriented allows for a UART class to transmit instances of Message objects without knowing the content of the Message objects. A Message could be the base class and have several descendant classes.
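For illustration, a minimal sketch of that kind of design might look like the following (the class names, members, and register write are invented; note that no dynamic memory allocation is needed):

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical Message hierarchy: the UART driver only sees the base class.
class Message {
public:
    virtual ~Message() {}
    virtual const std::uint8_t* payload() const = 0;
    virtual std::size_t length() const = 0;
};

class HeartbeatMessage : public Message {
    std::uint8_t data_[2] = {0xAA, 0x55};
public:
    const std::uint8_t* payload() const override { return data_; }
    std::size_t length() const override { return sizeof data_; }
};

class Uart {
public:
    // Transmits any Message descendant without knowing its concrete type.
    void transmit(const Message& msg) {
        const std::uint8_t* p = msg.payload();
        for (std::size_t i = 0; i < msg.length(); ++i)
            write_byte(p[i]);
    }
private:
    void write_byte(std::uint8_t b) { (void)b; /* write to the TX register here */ }
};
```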
The C++ language helps promote safe coding in embedded systems by providing constructors, copy constructors, and destructors, things that would only be remembered in the most disciplined C language embedded systems.
Exception handling is also a pain to get working in the C language. C++ provides better facilities built into the language.
The C++ language provides templates for writing common code to handle different data types. A classic example is a ring buffer or circular queue. In the C language, one would have to use "pointers to void" so that any object could be passed. C++ offers a template so one can write a Circular_Queue class that works with different data types and has better compile-time type checking.
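A minimal sketch of such a Circular_Queue (the interface here is invented; the point is that it is statically sized, type-checked at compile time, and allocation-free):

```cpp
#include <cstddef>

// A fixed-capacity circular queue: no dynamic allocation, and the element
// type is checked at compile time instead of going through void pointers.
template <typename T, std::size_t N>
class Circular_Queue {
public:
    bool push(const T& item) {
        if (count_ == N) return false;          // full
        buf_[head_] = item;
        head_ = (head_ + 1) % N;
        ++count_;
        return true;
    }
    bool pop(T& item) {
        if (count_ == 0) return false;          // empty
        item = buf_[tail_];
        tail_ = (tail_ + 1) % N;
        --count_;
        return true;
    }
    std::size_t size() const { return count_; }
private:
    T buf_[N] = {};
    std::size_t head_ = 0, tail_ = 0, count_ = 0;
};

// Usage: the same template serves any payload type.
// Circular_Queue<int, 16> q; q.push(42); int v; q.pop(v);
```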
Inheritance allows for better code sharing: the shared code is factored into a base class, and child classes can be created that share the same functionality through inheritance.
The C language provides function pointers. The C++ language provides facilities for function objects (function pointers with attributes).
Sorry, I just don't like it when people limit embedded systems to the C language based on rumor and little knowledge of, or experience with, C++.
Object-oriented design by itself isn't bad. The answer lies in your quote. Especially in real-time embedded systems, you want to make your code as light and efficient as possible. The things mentioned in your quote (objects, division, dynamic memory allocation) are relatively heavyweight and can usually be replaced with simpler alternatives (for eg. using bit-manipulation to approximate division, allocating memory on the stack or with static pools) to improve performance in time-critical systems.
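For example, both substitutions might look like the following in C++ (the shift trick requires a power-of-two divisor, and StaticPool is an invented illustration of a static pool, not a library class):

```cpp
#include <cstddef>
#include <cstdint>

// Replacing division: when the divisor is a power of two, a shift does the
// same job in a single cycle on cores without a hardware divider.
inline std::uint32_t div_by_8(std::uint32_t x) { return x >> 3; }  // x / 8

// Replacing malloc: a static pool hands out fixed-size slots, so allocation
// time is constant and heap fragmentation is impossible.
template <typename T, std::size_t N>
class StaticPool {
public:
    T* alloc() { return used_ < N ? &slots_[used_++] : nullptr; }
    void reset() { used_ = 0; }   // release everything at once
private:
    T slots_[N];
    std::size_t used_ = 0;
};
```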
Nothing about 'object-oriented' is bad for embedded systems. OO is just a way of thinking about software.
What's bad for embedded systems is that, in general, they have less sophisticated debuggers, and C++ does a lot of crazy stuff 'behind your back', so to speak. Those pieces of hard-to-get-access-to code will drive you nuts.
C++ was designed with the philosophy of don't pay for what you don't use. So apart from the lack of good embedded compilers, there's no real reason.
Maybe CFront could have compiled C++ into C, which has a myriad of compilers...
Edit: The Comeau compiler transforms C++ into plain C, so the no-compiler argument doesn't hold.
As others have noted, 'embedded' encompasses a broad and varied range of hardware/software options. But...
The quote you give will give microcontroller embedded types shivers. Dynamic allocation is a no-no: if you have an error, you crash the system in unpredictable ways. Divides are heavily discouraged since they take forever in execution time. Objects are only discouraged insofar as they tend to carry lots of 'stuff' around with them; all that 'stuff' takes up space, and microcontrollers don't have any.
I think of embedded as being projects that are small and specific; you don't worry much about extensibility or portability. You write clean code in C that does only and exactly what you want your device to do, reliably. You choose one chip family so you can move (almost) the same code among different hardware options with minor tweaks to the port you're writing to or the initialization of configuration fuses.
So, you don't need to define
4 wheeled Transportation
Car
Toyota
Since you're only working on Toyotas. And the difference in acceleration between a Camry and a Corolla is stored as constants in a register.
As said above, it's what object-oriented / malloc / math do behind your back that carries a penalty - both in code size and CPU cycles which are usually in short supply in embedded.
As an example, including the sqrt() function in a loop added so much overhead in recursive calculations that we had to remove it and work a fast approximation around it, using a lookup table if I remember correctly.
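The original code isn't shown, but the general shape of such a lookup-table workaround is something like this (the table size and input range are illustrative):

```cpp
#include <cstdint>

// Coarse sqrt via lookup table: precompute floor(sqrt(i)) for i in [0, 255]
// once at startup, then every call is a single array read instead of an
// expensive sqrt() computation.
static std::uint8_t sqrt_lut[256];

void init_sqrt_lut() {
    for (unsigned i = 0; i < 256; ++i) {
        unsigned r = 0;
        while ((r + 1) * (r + 1) <= i) ++r;   // integer sqrt, rounded down
        sqrt_lut[i] = static_cast<std::uint8_t>(r);
    }
}

inline std::uint8_t fast_sqrt8(std::uint8_t x) { return sqrt_lut[x]; }
```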
By all means use any tools/languages you like, but you need to at least be able to lift the lid and check just how much extra code is being generated behind your back.
Programming is always about using the right tool for the job. There are no pat answers, and that is especially true in the embedded world. If you want to become skilled in embedded development you will be just as intimately familiar with C as you are with C++.
First, let's pick this apart:
so we prefer people not to use divide ..., malloc ..., or other object oriented practice that carry large penalty.
Division and malloc are not unique to object oriented programming, they are present in procedural languages too (and presumably functional, and whatever other paradigms you might think of).
Division and malloc can be a problem on an embedded system if the system has sufficiently limited resources, that much is true, but they will be a problem no matter what programming paradigm you use.
Onto the main issue "Is object orientation bad for embedded systems?".
Firstly, 'object orientation' is quite a broad spectrum. Some people have conflicting ideas about what it actually means. There's a minimalist definition where an object is essentially just a bundle of functions (or 'methods') and data, and there's a more 'purist' definition that also includes features like inheritance and polymorphism.
If you take the minimalist definition of OOP then no - OOP is not bad for embedded systems. It does depend on the language, but it's entirely possible for using objects to be just as cheap as not using objects (and possibly cheaper in some situations). In C++, an object (without virtual methods, which I'll get to in a moment) takes up no more memory than its individual fields would if they weren't part of an object. In C++, (the size of) an object is equal to the sum of (the size of) its parts.
However, if you take the 'purist' view of OOP and insist on including inheritance and polymorphism, then the answer is 'yes, it is bad'. Inheritance is perhaps less of a concern (depending on the language), but polymorphism via virtual methods is a definite memory chewer. Virtual functions are usually implemented by maintaining a 'vtable' (a virtual method table) that stores pointers to the correct functions, and each object must store a pointer to its vtable to enable dynamic dispatch (the process of calling virtual functions) to work properly. There are circumstances in which inheritance can use more memory than solutions that don't require inheritance - typically when comparing inheritance to composition, composition sometimes uses less memory.
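To make the vtable-pointer cost concrete, a quick sizeof check (typical 64-bit numbers are shown; the exact sizes are ABI-dependent):

```cpp
#include <cstdio>
#include <cstdint>

struct PlainPoint {            // no virtual methods: size == sum of fields
    std::int32_t x, y;
};

struct VirtualPoint {          // one virtual method: adds a vtable pointer
    std::int32_t x, y;
    virtual void draw() {}
};

int main() {
    // Typical 64-bit output: 8 and 16 (4+4 bytes of fields, plus an 8-byte
    // vptr and padding in the virtual case).
    std::printf("%zu %zu\n", sizeof(PlainPoint), sizeof(VirtualPoint));
}
```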
One last point, particularly for the case of C++ (since that's usually what people mean when they talk about using OOP on embedded systems). People often say that "C++ does strange things behind your back". It does do some things without them being immediately obvious when looking at the code, such as producing the vtables I mentioned earlier, but it doesn't do these things "behind your back" in an attempt to thwart you; it does them because they are simply needed to implement the features being used. Overall there are very few detrimental things that happen 'behind the scenes', and the things it does aren't exactly arcane or mysterious. They're generally quite well known, and they're things the programmer ought to be aware of when programming for an embedded system. If you don't know about them then you don't know your tools properly and you should research them some more.
Having said all that, remember that people are free to be selective with which language features they do and don't use. It makes absolute sense to avoid the more expensive features like virtual functions, but it doesn't make sense to forgo an entire language (like C++) simply because it has a few expensive entirely optional features (like virtual functions) - that's just throwing the baby out with the bathwater.
Just about everyone uses them, but many, including me simply take it for granted that they just work.
I am looking for high-quality material. Languages I use are: Java, C, C#, Python, C++, so these are of most interest to me.
Now, C++ is probably a good place to start since you can throw anything in that language.
Also, C is close to assembly. How would one emulate exceptions using pure C constructs and no assembly?
Finally, I heard a rumor that Google employees do not use exceptions for some projects due to speed considerations. Is this just a rumor? How can anything substantial be accomplished without them?
Thank you.
Exceptions are just a specific example of a more general case of advanced non-local flow control constructs. Other examples are:
notifications (a generalization of exceptions, originally from some old Lisp object system, now implemented in e.g. CommonLisp and Ioke),
continuations (a more structured form of GOTO, popular in high-level, higher-order languages),
coroutines (a generalization of subroutines, popular especially in Lua),
generators à la Python (essentially a restricted form of coroutines),
fibers (cooperative light-weight threads) and of course the already mentioned
GOTO.
(I'm sure there's many others I missed.)
An interesting property of these constructs is that they are all roughly equivalent in expressive power: if you have one, you can pretty easily build all the others.
So, how you best implement exceptions depends on what other constructs you have available:
Every CPU has GOTO, therefore you can always fall back to that, if you must.
C has setjmp/longjmp which are basically MacGyver continuations (built out of duct-tape and toothpicks, not quite the real thing, but will at least get you out of the immediate trouble if you don't have something better available).
The JVM and CLI have exceptions of their own, which means that if the exception semantics of your language match Java's/C#'s, you are home free (but if not, then you are screwed).
The Parrot VM has both exceptions and continuations.
Windows has its own framework for exception handling, which language implementors can use to build their own exceptions on top.
A very interesting use case, both of the usage of exceptions and the implementation of exceptions is Microsoft Live Lab's Volta Project. (Now defunct.) The goal of Volta was to provide architectural refactoring for Web applications at the push of a button. So, you could turn your one-tier web application into a two- or three-tier application just by putting some [Browser] or [DB] attributes on your .NET code and the code would then automagically run on the client or in the DB. In order to do that, the .NET code had to be translated to JavaScript source code, obviously.
Now, you could just write an entire VM in JavaScript and run the bytecode unmodified. (Basically, port the CLR from C++ to JavaScript.) There are actually projects that do this (e.g. the HotRuby VM), but this is both inefficient and not very interoperable with other JavaScript code.
So, instead, they wrote a compiler which compiles CIL bytecode to JavaScript sourcecode. However, JavaScript lacks certain features that .NET has (generators, threads, also the two exception models aren't 100% compatible), and more importantly it lacks certain features that compiler writers love (either GOTO or continuations) and that could be used to implement the above-mentioned missing features.
However, JavaScript does have exceptions. So, they used JavaScript Exceptions to implement Volta Continuations and then they used Volta Continuations to implement .NET Exceptions, .NET Generators and even .NET Managed Threads(!!!)
So, to answer your original question:
How are exceptions implemented under the hood?
With Exceptions, ironically! At least in this very specific case, anyway.
Another great example is some of the exception proposals on the Go mailing list, which implement exceptions using Goroutines (something like a mixture of concurrent coroutines and CSP processes). Yet another example is Haskell, which uses monads, lazy evaluation, tail call optimization and higher-order functions to implement exceptions. Some modern CPUs also support basic building blocks for exceptions (for example the Vega-3 CPUs that were specifically designed for the Azul Systems Java Compute Accelerators).
Here is a common way C++ exceptions are implemented:
http://www.codesourcery.com/public/cxx-abi/abi-eh.html
It is for the Itanium architecture, but the implementation described here is used in other architectures as well. Note that it is a long document, since C++ exceptions are complicated.
Here is a good description on how LLVM implements exceptions:
http://llvm.org/docs/ExceptionHandling.html
Since LLVM is meant to be a common intermediate representation for many runtimes, the mechanisms described can be applied to many languages.
In his book C Interfaces and Implementations: Techniques for Creating Reusable Software, D. R. Hanson provides a nice implementation of exceptions in pure C using a set of macros and setjmp/longjmp. He provides TRY/RAISE/EXCEPT/FINALLY macros that can emulate pretty much everything C++ exceptions do and more.
The code can be perused here (look at except.h/except.c).
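In the same spirit (much simplified; Hanson's real macros support nesting, exception values, and FINALLY), a sketch of the setjmp/longjmp technique looks like this. Note this is really a C idiom: longjmp skips C++ destructors, so the example deliberately keeps all locals trivial:

```cpp
#include <csetjmp>
#include <cstdio>

// A single, global exception context; a real implementation (like Hanson's)
// keeps a stack of these so TRY blocks can nest.
static std::jmp_buf ctx;

#define TRY      if (setjmp(ctx) == 0)
#define CATCH    else
#define RAISE(e) std::longjmp(ctx, (e))

enum { E_OVERFLOW = 1 };

void risky(int n) {
    if (n > 100) RAISE(E_OVERFLOW);   // jumps straight back to the TRY site
    std::printf("ok: %d\n", n);
}

int main() {
    TRY {
        risky(7);
        risky(7000);                  // triggers the "exception"
    } CATCH {
        std::printf("caught an exception\n");
    }
}
```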
P.S. re your question about Google. Their employees are actually allowed to use exceptions in new code, and the official reason for the ban in old code is because it was already written that way and it doesn't make sense to mix styles.
Personally, I also think that C++ without exceptions isn't the best idea.
C/C++ compilers use the underlying OS facilities for exception handling. Frameworks like .Net or Java also rely, in the VM, on the OS facilities. In Windows for instance, the real heavy lifting is done by SEH, the Structured Exception Handling infrastructure. You should absolutely read the old reference article: A Crash Course on the Depths of Win32™ Structured Exception Handling.
As for the cost of using exceptions: they are expensive, but compared to what? Compared to return error codes? After you factor in the cost of correctness and the quality of code, exceptions will always win for commercial applications. Short of a few very critical OS-level functions, exceptions are always better overall.
And last but not least, there is the anti-pattern of using exceptions for flow control. Exceptions should be exceptional, and code that abuses exceptions for flow control will pay the price in performance.
The best paper ever written on the implementation of exceptions (under the hood) is Exception Handling in CLU by Barbara Liskov and Alan Snyder. I have referred to it every time I've started a new compiler.
For a somewhat higher-level view of an implementation in C using setjmp and longjmp, I recommend Dave Hanson's C Interfaces and Implementations (like Eli Bendersky).
setjmp() and longjmp() usually.
Exception catching does have a non-trivial cost, but for most purposes it's not a big deal.
The key thing an exception implementation needs to handle is how to return to the exception handler once an exception has been thrown. Since you may have made an arbitrary number of nested function calls since the try statement in C++, it must unwind the call stack searching for the handler. However implemented, this must incur the code size cost of maintaining sufficient information in order to perform this operation (and generally means a table of data for calls that can take exceptions). It also means that the dynamic code execution path will be longer than simply returning from functions calls (which is a fairly inexpensive operation on most platforms). There may be other costs as well depending on the implementation.
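To see that unwinding in action, the C++ snippet below throws two frames deep; the destructors of the locals in those frames run as the stack unwinds back to the handler:

```cpp
#include <cstdio>
#include <stdexcept>

struct Tracer {
    const char* name;
    explicit Tracer(const char* n) : name(n) {}
    ~Tracer() { std::printf("unwound: %s\n", name); }  // runs during unwinding
};

void inner() { Tracer t("inner"); throw std::runtime_error("boom"); }
void outer() { Tracer t("outer"); inner(); }

int main() {
    try {
        outer();
    } catch (const std::exception& e) {
        // Prints "unwound: inner", "unwound: outer", then the message.
        std::printf("caught: %s\n", e.what());
    }
}
```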
The relative cost will vary depending on the language used. The higher-level language used, the less likely the code size cost will matter, and the information may be retained regardless of whether exceptions are used.
An application where the use of exceptions (and C++ in general) is often avoided for good reasons is embedded firmware. In typical small bare metal or RTOS platforms, you might have 1MB of code space, or 64K, or even smaller. Some platforms are so small, even C is not practical to use. In this kind of environment, the size impact is relevant because of the cost mentioned above. It also impacts the standard library itself. Embedded toolchain vendors will often produce a library without exception capability, which has a huge impact on code size. Highly optimizing compilers may also analyze the callgraph and optimize away needed call frame information for the unwind operation for considerable space reduction. Exceptions also make it more difficult to analyze hard real-time requirements.
In more typical environments, the code size cost is almost certainly irrelevant and the performance factor is likely key. Whether you use them will depend on your performance requirements and how you want to use them. Using exceptions in non-exceptional cases can make an elegant design, but at a performance cost that may be unacceptable for high performance systems. Implementations and relative cost will vary by platform and compiler, so the best way to truly understand if exceptions are a problem is to analyze your own code's performance.
C++ code at Google (save for some Windows-specific cases) doesn't use exceptions: cf. the guidelines, short form: "We do not use C++ exceptions". Quoting from the discussion (hit the arrow to expand on the URL):
Our advice against using exceptions is not predicated on philosophical or moral grounds, but practical ones. Because we'd like to use our open-source projects at Google and it's difficult to do so if those projects use exceptions, we need to advise against exceptions in Google open-source projects as well. Things would probably be different if we had to do it all over again from scratch.
This rule does not apply to Google code in other languages, such as Java and Python.
Regarding performance - sparse use of exceptions will probably have negligible effects, but do not abuse them.
I have personally seen Java code which performed two orders of magnitude worse than it could have (took about x100 the time) because exceptions were used in an important loop instead of more standard if/returns.
Some runtimes like the Objective-C runtime have zero-cost 64-bit exceptions. What that means is that it doesn't cost anything to enter a try block. However, this is quite costly when the exception is thrown. This follows the paradigm of "optimize for the average case" - exceptions are meant to be exceptional, so it is better to make the case when there are no exceptions really fast, even if it comes at the cost of significantly slower exceptions.
When programming a CPU intensive or GPU intensive application on the iPhone or other portable hardware, you have to make wise algorithmic decisions to make your code fast.
But even great algorithm choices can be slow if the language you're using performs more poorly than another.
Is there any hard data comparing Objective-C to C++, specifically on the iPhone but maybe just on the Mac desktop, for performance of various similar language aspects? I am very familiar with this article comparing C and Objective-C, but this is a larger question of comparing two object oriented languages to each other.
For example, is a C++ vtable lookup really faster than an Obj-C message? How much faster? Threading, polymorphism, sorting, etc. Before I go on a quest to build a project with duplicate object models and various test code, I want to know if anybody has already done this and what the results were. This type of testing and comparison is a project in and of itself and can take a considerable amount of time. Maybe this isn't one project, but two, and only the outputs can be compared.
I'm looking for hard data, not evangelism. Like many of you I love and hate both languages for various reasons. Furthermore, if there is someone out there actively pursuing this same thing I'd be interested in pitching in some code to see the end results, and I'm sure others would help out too. My guess is that they both have strengths and weaknesses; my goal is to find out precisely what they are so that they can be avoided/exploited in real-world scenarios.
Mike Ash has some hard numbers for performance of various Objective-C method calls versus C and C++ in his post "Performance Comparisons of Common Operations". Also, this post by Savoy Software is an interesting read when it comes to tuning the performance of an iPhone application by using Objective-C++.
I tend to prefer the clean, descriptive syntax of Objective-C over Objective-C++, and have not found the language itself to be the source of my performance bottlenecks. I even tend to do things that I know sacrifice a little bit of performance if they make my code much more maintainable.
Yes, well written C++ is considerably faster. If you're writing performance critical programs and your C++ is not as fast as C (or within a few percent), something's wrong. If your ObjC implementation is as fast as C, then something's usually wrong -- i.e. the program is likely a bad example of ObjC OOD because it probably uses some 'dirty' tricks to step below the abstraction layer it is operating within, such as direct ivar accesses.
The Mike Ash 'comparison' is very misleading -- I would never recommend the approach to compare execution times of programs you have written, or recommend it to compare C vs C++ vs ObjC. The results presented are provided from a test with compiler optimizations disabled. A program compiled with optimizations disabled is rarely relevant when you are measuring execution times. To view it as a benchmark which compares C++ against Objective-C is flawed. The test also compares individual features, rather than entire, real world optimized implementations -- individual features are combined in very different ways with both languages. This is far from a realistic performance benchmark for optimized implementations.

Examples: With optimizations enabled, IMP cache is as slow as virtual function calls. Static dispatch (as opposed to dynamic dispatch, e.g. using virtual) and calls to known C++ types (where dynamic dispatch may be bypassed) may be optimized aggressively. This process is called devirtualization, and when it is used, a member function which is declared virtual may even be inlined. In the case of the Mike Ash test where many calls are made to member functions which have been declared virtual and have empty bodies: these calls are optimized away entirely when the type is known because the compiler sees the implementation and is able to determine dynamic dispatch is unnecessary. The compiler can also eliminate calls to malloc in optimized builds (favoring stack storage).

So, enabling compiler optimizations in any of C, C++, or Objective-C can produce dramatic differences in execution times.
That's not to say the presented results are entirely useless. You could get some useful information about external APIs if you want to determine if there are measurable differences between the times they spend in pthread_create or +[NSObject alloc] on one platform or architecture versus another. Of course, these two examples will be using optimized implementations in your test (unless you happen to be developing them). But for comparing one language to another in programs you compile… the presented results are useless with optimizations disabled.
Object Creation
Consider also object creation in ObjC - every object is allocated dynamically (e.g. on the heap). With C++, objects may be created on the stack (e.g. approximately as fast as creating a C struct and calling a simple function in many cases), on the heap, or as elements of abstract data types. Each time you allocate and free (e.g. via malloc/free), you may introduce a lock. When you create a C struct or C++ object on the stack, no lock is required (although interior members may use heap allocations) and it often costs just a few instructions or a few instructions plus a function call.
As well, ObjC objects are reference counted instances. The actual need for an object to be a std::shared_ptr in performance critical C++ is very rare. It's not necessary or desirable in C++ to make every instance a shared, reference counted instance. You have much more control over ownership and lifetime with C++.
Arrays and Collections
Arrays and many collections in C and C++ also use strongly typed containers and contiguous memory. Since the address of the next element's members are often known, the optimizer can do much more, and you have great cache and memory locality. With ObjC, that's far from reality for standard objects (e.g. NSObject).
Dispatch
Regarding methods, many C++ implementations use few virtual/dynamic calls, particularly in highly optimized programs. These are static method calls and fodder for the optimizers.
With ObjC methods, each method call (objc message send) is dynamic, and is consequently a firewall for the optimizer. Ultimately, that results in many restrictions or inconveniences regarding what you can and cannot do to keep performance up when writing performance critical ObjC. This may result in larger methods, IMP caching, and frequent use of C.
Some realtime applications cannot use any ObjC messaging in their render paths. None -- audio rendering is a good example of this. ObjC dispatch is simply not designed for realtime purposes; Allocations and locks may happen behind the scenes when messaging objects, making the complexity/time of objc messaging unpredictable enough that the audio rendering may miss its deadline.
Other Features
C++ also provides generics/template implementations for many of its libraries. These optimize very well. They are typesafe, and a lot of inlining and optimizations may be made with templates (consider it polymorphism, optimization, and specialization which takes place at compilation). C++ adds several features which just are not available or comparable in strict ObjC. Trying to directly compare langs, objects, and libraries which are very different is not so useful -- it's a very small subset of actual realizations. It's better to expand the question to a library/framework or real program, considering many aspects of design and implementation.
Other Points
C and C++ symbols can be more easily removed and optimized away in various stages of the build (stripping, dead code elimination, inlining and early inlining, as well as Link Time Optimization). The benefits of this include reduced binary sizes, reduced launch/load times, reduced memory consumption, etc. For a single app, that may not be such a big deal; but if you reuse a lot of code, and you should, then your shared libraries could add a lot of unnecessary weight to the program if implemented in ObjC -- unless you are prepared to jump through some flaming hoops. So scalability and reuse are also factors in medium/large projects, and in groups where reuse is high.
Included Libraries
ObjC library implementors also optimize for the environment: they can make use of language and environment features to offer optimized implementations. Although there are some pretty significant restrictions when writing an optimized program in pure ObjC, some highly optimized implementations exist in Cocoa. This is one of Cocoa's strong points, although the C++ standard library (what some people call the STL) is no slouch either. Cocoa operates at a much higher level of abstraction than C++ -- if you don't know well what you're doing (or should be doing), operating closer to the metal can really cost you. Falling back on a good library implementation if you are not an expert in some domain is a good thing, unless you are really prepared to learn. As well, Cocoa's environments are limited; you can find implementations/optimizations which make better use of the OS.
If you're writing optimized programs and have experience doing so in both C++ and ObjC, clean C++ implementations will often be twice as fast or faster than clean ObjC (yes, you can compare against Cocoa). If you know how to optimize, you can often do better than higher level, general purpose abstractions. Although, some optimized C++ implementations will be as fast as or slower than Cocoa's (e.g. my initial attempt at file I/O was slower than Cocoa's -- primarily because the C++ implementation initializes its memory).
A lot of it comes down to the language features you are familiar with. I use both langs, they both have different strengths and models/patterns. They complement each other quite well, and there are great libraries for both. If you're implementing a complex, performance critical program, correct use of C++'s features and libraries will give you much more control and provide significant advantages for optimization, such that in the right hands, "several times faster" is a good default expectation (don't expect to win every time, or without some work, however). Remember, it takes years to understand C++ well enough to really reach that point.
I keep the majority of my performance critical paths as C++, but also recognize that ObjC is also a very good solution for some problems, and that there are some very good libraries available.
It's very hard to collect "hard data" for this that's not misleading.
The biggest problem with doing a feature-to-feature comparison like you suggest is that the two languages encourage very different coding styles. Objective-C is a dynamic language with duck typing, where typical C++ usage is static. The same object-oriented architecture problem would likely have very different ideal solutions using C++ or Objective-C.
My feeling (as I have programmed much in both languages, mostly on huge projects): To maximize Objective-C performance, it has to be written very close to C. Whereas with C++, it's possible to make much more use of the language without any performance penalty compared to C.
Which one is better? I don't know. For pure performance, C++ will always have the edge. But the OOP style of Objective-C definitely has its merits. I definitely think it is easier to keep a sane architecture with it.
This really isn't something that can be answered in general as it really depends on how you use the language features. Both languages will have things that they are fast at, things that they are slow at, and things that are sometimes fast and sometimes slow. It really depends on what you use and how you use it. The only way to be certain is to profile your code.
In Objective-C you can also write C++ code, so it might be easier to code in Objective-C for the most part, and if you find something that doesn't perform well in it, then you can have a go at writing a C++ version of it and seeing if that helps (C++ tends to optimize better at compile time). Objective-C will be easier to use if the APIs you are interfacing with are also written in it, plus you might find its style of OOP easier or more flexible.
In the end, you should go with what you know you can write safe, robust code in, and if you find an area that needs special attention from the other language, then you can swap to that. Xcode does allow you to compile both in the same project.
I have a couple of tests I did on an iPhone 3G almost 2 years ago; there was no documentation or hard numbers around in those days. Not sure how valid they still are, but the source code is posted and attached.
This isn't a very extensive test, I was mainly interested in NSArray vs C Array for iterating a large number of objects.
http://memo.tv/nsarray_vs_c_array_performance_comparison
http://memo.tv/nsarray_vs_c_array_performance_comparison_part_ii_makeobjectsperformselector
You can see the C Array is much faster at high iterations. Since then I've realized that the bottleneck is probably not the iteration of the NSArray but the sending of the message. I wanted to try methodForSelector and calling the methods directly to see how big the difference would be but never got round to it. According to Mike Ash's benchmarks it's just over 5x faster.
I don't have hard data for Objective C, but I do have a good place to look for C++.
C++ started as C with Classes according to Bjarne Stroustrup in his reflection on the early years of C++ (http://www2.research.att.com/~bs/hopl2.pdf), so C++ can be thought of (like Objective C) as pushing C to its limits for object orientation.
What are those limits? In the 1994-1997 time frame, a lot of researchers figured out that object-orientation came at a cost due to dynamic binding, e.g. when C++ functions are marked virtual and there may/may not be child classes that override these functions. (In Java and C#, all functions except ctors are inherently virtual, and there isn't much you can do about it.) In "A Study of Devirtualization Techniques for a Java Just-In-Time Compiler" from researchers at IBM Research Tokyo, they contrast the techniques used to deal with this, including one from Urs Hölzle and Gerald Aigner. Urs Hölzle, in a separate paper with Karel Driesen, had shown that on average 5.7% of time in C++ programs (and up to ~50%) was spent in calling virtual functions (e.g. vtables + thunks). He later worked with some Smalltalk researchers in what ended up the Java HotSpot VM to solve these problems in OO. Some of these features are being backported to C++ (e.g. 'protected' and exception handling).
As I mentioned, C++ is statically typed where Objective C is duck typed. The performance difference in execution (but not in lines of code) is probably a result of this difference.
This study says to really get the performance in a CPU intensive game, you have to use C. The linked article is complete with a XCode project that you can run.
I believe the bottom line is: Use Objective-C where you must interact with the iPhone's functions (after all, putting trampolines everywhere can't be good for anyone), but when it comes to loops, things like vector object classes, or intensive array access, stick with C++ STL or C arrays to get good performance.
I mean it would be totally silly to see position = [[Vector3 alloc] init];. You're just asking for a performance hit if you use reference counts on basic objects like a position vector.
Yes. C++ reigns supreme in the performance/expressiveness/resource tradeoff.
"I'm looking for hard data, not evangelism" - Google is your best friend.
Apple engineers swapped Obj-C's NSString for a C++ implementation for performance. On resource-constrained devices, only C++ cuts it as a MAINSTREAM OOP language.
NSString stringWithFormat is slow
Obj-C's OOP abstraction gets deconstructed into procedural C structs for performance; otherwise it is an order of magnitude slower than Java! The author is also aware of message caching - still a no-go. So modeling lots of small player/enemy objects is done either in OOP with C++, or as lots of procedural structs with a simple OOP wrapper around them in Obj-C. There can be one paradigm that equates Procedural + Object-Oriented Programming = Obj-C.
http://ejourneyman.wordpress.com/2008/04/23/writing-a-ray-tracer-for-cocoa-objective-c/
A theoretical question. After reading Armstrong's 'Programming Erlang' book I was wondering the following:
It will take some time to learn Erlang. Let alone master it. It really is fundamentally different in a lot of respects.
So my question: is it possible to write 'like Erlang', or with some 'Erlang-like framework', such that, given that you take care not to create functions with side effects, you can create scalable, reliable apps as well as in Erlang? Maybe with the same message-sending, loads-of-mini-processes paradigm.
The advantage would be to not throw all your accumulated C/C++ knowledge over the fence.
Any thoughts about this would be welcome
Yes, it is possible, but...
Probably the best answer for this question is given by Robert Virding’s First Rule:
“Any sufficiently complicated concurrent program in another language contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Erlang.”
A very good rule is to use the right tool for the task. Erlang excels at concurrency and reliability; C/C++ was not designed with these properties in mind.
If you don't want to throw away your C/C++ knowledge and experience, and your project allows this kind of division, a good approach is to create a mixed solution: write the concurrency, communication and error handling code in Erlang, then add C/C++ parts to do the CPU- and IO-bound work.
You clearly can - the Erlang/OTP system is largely written in C (and Erlang). The question is 'why would you want to?'
In 'ye olde days' people used to write their own operating system - but why would you want to?
If you elect to use an operating system your unwritten software has certain properties - it can persist to hard disk, it can speak to a network, it can draw on screens, it can run from the command line, it can be invoked in batch mode, etc, etc...
The Erlang/OTP system is 1.5M lines of code which has been demonstrated to give 99.9999999% uptime in large systems (the UK phone system) - that's 31ms downtime a year.
With Erlang/OTP your unwritten software has high reliability, it can hot-swap itself, your unwritten application can failover when a physical computer dies.
Why would you want to rewrite that functionality?
I would break this into 2 questions
Can you write concurrent, scalable C++ applications
Yes. It's certainly possible to create the low level constructs needed in order to achieve this.
Would you want to write concurrent, scalable, C++ applications
Perhaps. But if I was going for a highly concurrent application, I would choose a language that was either designed to fill that void or easily lent itself to doing so (Erlang, F# and possibly C#).
C++ was not designed to build highly concurrent applications. But it can certainly be tweaked into doing so. The cost might be higher than you expect though once you factor in memory management.
Yes, but you will be doing some extra work.
Regarding side effects, consider how the .NET/PLINQ team is approaching it. PLINQ can't enforce that you hand it code with no side effects, but it assumes you do and plays by its rules, so we get to use a simpler API. Even if the language doesn't have built-in support for it, it will still simplify things, as you can break up the operations more easily.
What I can do in one Turing complete language I can do in any other Turing complete language.
So I interpret your question to read, is it as easy to write a reliable and scalable application in C++ as it is in Erlang?
The answer to that is highly subjective. For me it is easier to write it in C++ for the following reasons:
I have already done it in C++ (at least three times).
I don't know Erlang.
I have read a great deal about Stackless Python, which feels to me like a highly concurrent message based cooperative multitasking system in python, but of course python is written on top of C.
Having said that. If you already know both languages, and you have the problem well defined, you can then make the best choice based on all the information you have at hand.
The main 'problem' with C (or C++) for writing reliable and easy-to-extend programs is that in C you can do anything. So the first step would be to write a simple framework that restricts things just a bit. Most good programmers do that anyway.
In this case, the restrictions would mostly be to make it easy to define a 'process' within whatever level of isolation you want. fork() has a reputation for being slow, and threads also need significant time to spawn, so you might want cooperative multitasking, which can be far more efficient, and you could even make it preemptive (I think that's what Erlang does). To get multi-core efficiency, set up a pool of threads and have all of them compete to run the tasks.
Another important part would be to create an appropriate library of immutable data structures, so that by using them (instead of the standard library) your functions would be (mostly) side-effect-free.
Then it's just a matter of settling on a good API for message passing and futures... not easy, but at least it doesn't seem to require changing the language itself.
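As a taste of what the message-passing core of such a framework might look like, here is a minimal mailbox sketch (the Mailbox name and interface are invented; a real system would add timeouts, selective receive, and a scheduler for the lightweight processes):

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <cstdio>

// One mailbox per lightweight "process": senders never share state with the
// receiver, they only enqueue messages - roughly the Erlang model.
template <typename Msg>
class Mailbox {
public:
    void send(Msg m) {
        { std::lock_guard<std::mutex> lock(mtx_); q_.push(std::move(m)); }
        cv_.notify_one();
    }
    Msg receive() {                         // blocks until a message arrives
        std::unique_lock<std::mutex> lock(mtx_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        Msg m = std::move(q_.front());
        q_.pop();
        return m;
    }
private:
    std::mutex mtx_;
    std::condition_variable cv_;
    std::queue<Msg> q_;
};

int main() {
    Mailbox<std::string> box;
    std::thread worker([&] { std::printf("got: %s\n", box.receive().c_str()); });
    box.send("hello");
    worker.join();
}
```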
I still feel C++ offers some things that can't be beaten. It's not my intention to start a flame war here, please, if you have strong opinions about not liking C++ don't vent them here. I'm interested in hearing from C++ gurus about why they stick with it.
I'm particularly interested in aspects of C++ that are little known, or underutilised.
RAII / deterministic finalization. No, garbage collection is not just as good when you're dealing with a scarce, shared resource.
Unfettered access to OS APIs.
I have stayed with C++ as it is still the highest performing general purpose language for applications that need to combine efficiency and complexity. As an example, I write real time surface modelling software for hand-held devices for the surveying industry. Given the limited resources, Java, C#, etc... just don't provide the necessary performance characteristics, whereas lower level languages like C are much slower to develop in given the weaker abstraction characteristics. The range of levels of abstraction available to a C++ developer is huge, at one extreme I can be overloading arithmetic operators such that I can say something like MaterialVolume = DesignSurface - GroundSurface while at the same time running a number of different heaps to manage the memory most efficiently for my app on a specific device. Combine this with a wealth of freely available source for solving pretty much any common problem, and you have one heck of a powerful development language.
Is C++ still the optimal development solution for most problems in most domains? Probably not, though at a pinch it can still be used for most of them. Is it still the best solution for efficient development of high performance applications? IMHO without a doubt.
Shooting oneself in the foot.
No other language offers such a creative array of tools. Pointers, multiple inheritance, templates, operator overloading and a preprocessor.
A wonderfully powerful language that also provides abundant opportunities for foot shooting.
Edit: I apologize if my lame attempt at humor has offended some. I consider C++ to be the most powerful language that I have ever used -- with abilities to code at the assembly language level when desired, and at a high level of abstraction when desired. C++ has been my primary language since the early '90s.
My answer was based on years of experience of shooting myself in the foot. At least C++ allows me to do so elegantly.
Deterministic object destruction leads to some magnificent design patterns. For instance, while RAII is not as general a technique as garbage collection, it leads to some impressive capabilities which you cannot get with GC.
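For instance, a minimal RAII wrapper (illustrative, not a library class) ties a resource's lifetime to a scope, so the release happens at a deterministic point - the closing brace - no matter how the scope is exited:

```cpp
#include <cstdio>

// RAII: the constructor acquires, the destructor releases, and the release
// runs at scope exit even if an exception passes through.
class File {
public:
    explicit File(const char* path) : f_(std::fopen(path, "w")) {}
    ~File() { if (f_) std::fclose(f_); }     // deterministic finalization
    File(const File&) = delete;              // one owner, no double-close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

int main() {
    File log("out.txt");
    if (log.get()) std::fprintf(log.get(), "hello\n");
}   // log's destructor closes the file here, exactly once
```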
C++ is also unique in that it has Turing-complete compile-time machinery (the template system). This allows you to prefer (as in the opposite of defer) a lot of code tasks to compile time instead of run time. For instance, in real code you might have an assert() statement to test for a never-happen. The reality is that it will sooner or later happen... and happen at 3:00am when you're on vacation. A C++ compile-time assert does the same test at compile time. Compile-time asserts fail between 8:00am and 5:00pm while you're sitting in front of the computer watching the code build; run-time asserts fail at 3:00am when you're asleep in Hawai'i. It's pretty easy to see the win there.
In most languages, strategy patterns are done at run-time and throw exceptions in the event of a type mismatch. In C++, strategies can be done at compile-time through templates and can be guaranteed typesafe.
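A minimal sketch of such a compile-time strategy (the policy names and toy checksum math are invented; the point is that a mismatch is a compile error, not a runtime exception):

```cpp
#include <cstdio>

// Two interchangeable strategies with the same compile-time interface.
// The math is a toy placeholder, not a real checksum algorithm.
struct Mul31Checksum { static unsigned sum(unsigned x) { return x * 31u; } };
struct XorChecksum   { static unsigned sum(unsigned x) { return x ^ 0xFFu; } };

// The strategy is chosen at compile time; using a policy without a suitable
// sum() simply fails to compile, so a mismatch never reaches run time.
template <typename Policy>
unsigned frame_checksum(unsigned payload) {
    return Policy::sum(payload);
}

int main() {
    std::printf("%u\n", frame_checksum<Mul31Checksum>(42));  // strategy #1
    std::printf("%u\n", frame_checksum<XorChecksum>(42));    // strategy #2
}
```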
Write inline assembly (MMX, SSE, etc.).
Deterministic object destruction. I.e. real destructors. Makes managing scarce resources easier. Allows for RAII.
Easier access to structured binary data. It's easier to cast a memory region as a struct than to parse it and copy each value into a struct (see the sketch after this list).
Multiple inheritance. Not everything can be done with interfaces. Sometimes you want to inherit actual functionality too.
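On the structured-binary-data point above, a sketch (the layout and byte values are invented; memcpy is the aliasing-safe way to spell the cast, and real code would also pin down endianness and packing):

```cpp
#include <cstdint>
#include <cstring>
#include <cstdio>

// A fixed-layout header matching the wire format. Real code would control
// packing (e.g. with compiler-specific pragmas) and byte order explicitly.
struct PacketHeader {
    std::uint8_t  version;
    std::uint8_t  type;
    std::uint16_t length;
};

int main() {
    const unsigned char wire[] = {0x01, 0x07, 0x10, 0x00};  // raw bytes, e.g. from a socket
    PacketHeader h;
    std::memcpy(&h, wire, sizeof h);   // overlay the struct on the bytes
    std::printf("version=%u type=%u\n", h.version, h.type);
}
```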
I think I'm just going to praise C++ for its ability to use templates to capture expressions and execute them lazily when needed. For those not knowing what this is about, here is an example.
Template mixins provide reuse that I haven't seen elsewhere. With them you can build up a large object with lots of behaviour as though you had written the whole thing by hand. But all these small aspects of its functionality can be reused, it's particularly great for implementing parts of an interface (or the whole thing), where you are implementing a number of interfaces. The resulting object is lightning-fast because it's all inlined.
Speed may not matter in many cases, but when you're writing component software, and users may combine components in unthought-of complicated ways to do things, the speed of inlining and C++ seems to allow much more complex structures to be created.
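A minimal sketch of such a mixin, using the curiously recurring template pattern (all names invented): each fragment of behaviour is written once, composed statically, and resolved without virtual dispatch, so it can all be inlined.

```cpp
#include <cstdio>

// Mixins via templated base classes: each small behaviour is written once
// and composed into a concrete class as though written by hand.
template <typename Derived>
struct Printable {
    void print() const {
        std::printf("%d\n", static_cast<const Derived*>(this)->value());
    }
};

template <typename Derived>
struct Doubling {
    int doubled() const {
        return 2 * static_cast<const Derived*>(this)->value();
    }
};

// The concrete type picks up both behaviours; all calls resolve statically.
struct Counter : Printable<Counter>, Doubling<Counter> {
    int value() const { return n; }
    int n = 21;
};

int main() {
    Counter c;
    c.print();                          // from the Printable mixin
    std::printf("%d\n", c.doubled());   // from the Doubling mixin
}
```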
Absolute control over the memory layout, alignment, and access when you need it. If you're careful enough you can write some very cache-friendly programs. For multi-processor programs, you can also eliminate a lot of slow downs from cache coherence mechanisms.
(Okay, you can do this in C, assembly, and probably Fortran too. But C++ lets you write the rest of your program at a higher level.)
This will probably not be a popular answer, but I think what sets C++ apart are its compile-time capabilities, e.g. templates and #define. You can do all sorts of text manipulation on your program using these features, much of which has been abandoned in later languages in the name of simplicity. To me that's way more important than any low-level bit fiddling that's supposedly easier or faster in C++.
C#, for instance, doesn't have a real macro facility. You can't #include another file directly into the source, or use #define to manipulate the program as text. Think about any time you had to mechanically type repetitive code and you knew there was a better way. You may even have written a program to generate code for you. Well, the C++ preprocessor automates all of these things.
The "generics" facility in C# is similarly limited compared to C++ templates. C++ lets you apply the dot operator to a template type T blindly, calling (for example) methods that may not exist, and checks-for-correctness are only applied once the template is actually applied to a specific class. When that happens, if all the assumptions you made about T actually hold, then your code will compile. C# doesn't allow this... type "T" basically has to be dealt with as an Object, i.e. using only the lowest common denominator of operations available to everything (assignment, GetHashCode(), Equals()).
C# has done away with the preprocessor, and real generics, in the name of simplicity. Unfortunately, when I use C#, I find myself reaching for substitutes for these C++ constructs, which are inevitably more bloated and layered than the C++ approach. For example, I have seen programmers work around the absence of #include in several bloated ways: dynamically linking to external assemblies, re-defining constants in several locations (one file per project) or selecting constants from a database, etc.
As Mrs. Krabappel from The Simpsons once said, this is "pretty lame, Milhouse."
In terms of Computer Science, these compile-time features of C++ enable things like call-by-name parameter passing, which is known to be more powerful than call-by-value and call-by-reference.
Again, this is perhaps not the popular answer- any introductory C++ text will warn you off of #define, for example. But having worked with a wide variety of languages over many years, and having given consideration to the theory behind all of this, I think that many people are giving bad advice. This seems especially to be the case in the diluted sub-field known as "IT."
Passing POD structures across processes with minimum overhead. In other words, it allows us to easily handle blobs of binary data.
C# and Java force you to put your 'main()' function in a class. I find that weird, because it dilutes the meaning of a class.
To me, a class is a category of objects in your problem domain. A program is not such an object. So there should never be a class called 'Program' in your program. This would be equivalent to a mathematical proof using a symbol to notate itself -- the proof -- alongside symbols representing mathematical objects. It'll be just weird and inconsistent.
Fortunately, unlike C# and Java, C++ allows global functions. That lets your main() function exist outside any class. Therefore C++ offers a simpler, more consistent and perhaps truer implementation of the object-oriented idiom. Hence, this is one thing C++ can do, but C# and Java cannot.
I think that operator overloading is a quite nice feature. Of course it can be very much abused (like in Boost lambda).
Tight control over system resources (esp. memory) while offering powerful abstraction mechanisms optionally. The only language I know of that can come close to C++ in this regard is Ada.
C++ provides complete control over memory and as a result makes the flow of program execution much more predictable.
Not only can you say precisely at what time allocations and deallocations of memory occur, you can define your own heaps, have multiple heaps for different purposes, and say precisely where in memory data is allocated. This is frequently useful when programming on embedded/real time systems, such as games consoles, cell phones, mp3 players, etc..., which:
have strict upper limits on memory that are easy to reach (contrast with a PC, which just gets slower as you run out of physical memory)
frequently have a non-homogeneous memory layout. You may want to allocate objects of one type in one piece of physical memory, and objects of another type in another piece.
have real time programming constraints. Unexpectedly calling the garbage collector at the wrong time can be disastrous.
AFAIK, C and C++ are the only sensible option for doing this kind of thing.
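A sketch of that kind of control, using placement new to construct an object in a statically reserved region (the names, sizes and values are invented; on real hardware the region might be a linker-placed section of fast RAM):

```cpp
#include <new>
#include <cstdint>
#include <cstdio>

// A statically reserved arena standing in for a specific physical memory
// region; no heap, no malloc, and the address is known up front.
alignas(8) static std::uint8_t fast_ram[1024];

struct DmaDescriptor {
    std::uint32_t src, dst, len;
};

int main() {
    // Construct the object at an exact location: allocation time is zero
    // and there are no surprises about where the data lives.
    DmaDescriptor* d = new (fast_ram) DmaDescriptor{0x2000u, 0x3000u, 64u};
    std::printf("len=%u\n", static_cast<unsigned>(d->len));
    d->~DmaDescriptor();   // explicit destruction; the heap is never involved
}
```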
Well, to be quite honest, you can do just about anything if you're willing to write enough code.
So to answer your question: no, there is nothing C++ can do that you can't do in another language. It's just a question of how much patience you have, and whether you're willing to devote the long sleepless nights to getting it to work.
There are things that C++ wrappers make it easy to do (because they can read the header files), like Office development. But again, it's because someone wrote lots of code to "wrap" it for you in an RCW or "Runtime Callable Wrapper"
EDIT: You also realize this is a loaded question.