I've always considered C++ to be one of the most strongly typed languages out there.
So I was quite shocked to see Table 3 of this paper state that C++ is weakly typed.
Apparently,
C and C++ are considered weakly typed since, due to type-casting, one can interpret a field of a structure that was an integer as a pointer.
Is the existence of type casting all that matters? Does the explicit-ness of such casts not matter?
More generally, is it really generally accepted that C++ is weakly typed? Why?
That paper first claims:
In contrast, a language is weakly-typed if type-confusion can occur silently (undetected), and eventually cause errors that are difficult to localize.
And then claims:
Also, C and C++ are considered weakly typed since, due to type-casting, one can interpret a field of a structure that was an integer as a pointer.
This seems like a contradiction to me. In C and C++, the type-confusion that can occur as a result of casts will not occur silently -- there's a cast! This does not demonstrate that either of those languages is weakly-typed, at least not by the definition in that paper.
That said, by the definition in the paper, C and C++ may still be considered weakly-typed. There are, as noted in the comments on the question already, cases where the language supports implicit type conversions. Many types can be implicitly converted to bool, a literal zero of type int can be silently converted to any pointer type, there are conversions between integers of varying sizes, etc, so this seems like a good reason to consider C and C++ weakly-typed for the purposes of the paper.
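Here's a minimal sketch of those conversions (the take_bool name is made up for illustration); every line compiles without a cast:
#include <iostream>

void take_bool(bool b) { std::cout << b << '\n'; }

int main() {
    take_bool(3.14);                 // double converts to bool (true) silently
    int *p = 0;                      // literal zero converts to a null pointer
    long long big = 1234567890123LL;
    int narrowed = big;              // silent conversion between integer sizes
    std::cout << (p == 0) << ' ' << narrowed << '\n';
}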
For C (but not C++), there are also more dangerous implicit conversions that are worth mentioning:
int main() {
    int i = 0;
    void *v = &i;   /* int* converts to void* implicitly -- fine in both languages */
    char *c = v;    /* void* converts back to char* implicitly -- C only; C++ rejects this */
    return *c;      /* the int's first byte reinterpreted as a char */
}
For the purposes of the paper, that must definitely be considered weakly-typed. The reinterpretation of bits happens silently, and can be made far worse by modifying it to use completely unrelated types, which has silent undefined behaviour that typically has the same effect as reinterpreting bits, but blows up in mysterious yet sometimes amusing ways when optimisations are enabled.
In general, though, I think there isn't a fixed definition of "strongly-typed" and "weakly-typed". There are various grades, a language that is strongly-typed compared to assembly may be weakly-typed compared to Pascal. To determine whether C or C++ is weakly-typed, you first have to ask what you want weakly-typed to mean.
"weakly typed" is a quite subjective term. I prefer the terms "strictly typed" and "statically typed" vs. "loosely typed" and "dynamically typed", because they are more objective and more precise words.
From what I can tell, people generally use "weakly typed" as a diminutive-pejorative term which means "I don't like the notion of types in this language". It's sort of an argumentum ad hominem (or rather, argumentum ad linguam) for those who can't bring up professional or technical arguments against a particular language.
The term "strictly typed" also has slightly different interpretations; the generally accepted meaning, in my experience, is "the compiler generates errors if types don't match up". Another interpretation is that "there are no or few implicit conversions". Based on this, C++ can actually be considered a strictly typed language, and most often it is considered as such. I would say that the general consensus on C++ is that it is a strictly typed language.
Of course we could try a more nuanced approach to the question and say that parts of the language are strictly typed (this is the majority of the cases), other parts are loosely typed (a few implicit conversions, e.g. arithmetic conversions and the four types of explicit conversion).
Furthermore, there are some programmers, especially beginners who are not familiar with more than a few languages, who don't intend to or can't make the distinction between "strict" and "static", "loose" and "dynamic", and conflate the two - otherwise orthogonal - concepts based on their limited experience (usually the correlation of dynamism and loose typing in popular scripting languages, for example).
In reality, parts of C++ (virtual calls) impose the requirement that the type system be partially dynamic, but other things in the standard require that it be strict. Again, this is not a problem, since these are orthogonal concepts.
To sum up: probably no language fits completely, perfectly into one category or another, but we can say which particular property of a given language dominates. In C++, strictness definitely does dominate.
In contrast, a language is weakly-typed if type-confusion can occur silently (undetected), and eventually cause errors that are difficult to localize.
Well, that can happen in C++, for example:
#define _USE_MATH_DEFINES
#include <iostream>
#include <cmath>
#include <limits>

void f(char n) { std::cout << "f(char)\n"; }
void f(int n) { std::cout << "f(int)\n"; }
void g(int n) { std::cout << "g(int)\n"; }

int main()
{
    float fl = M_PI;  // silent conversion to float may lose precision
    f(8 + '0');       // potentially unintended overload: the sum is an int, so f(int)
    unsigned n = std::numeric_limits<unsigned>::max();
    g(n);             // unsigned silently converted to int, changing the value
}
Also, C and C++ are considered weakly typed since, due to type-casting, one can interpret a field of a structure that was an integer as a pointer.
Ummmm... not via any implicit conversion, so that's a silly argument. C++ allows explicit casting between types, but that's hardly "weak" - it doesn't happen accidentally/silently, as required by the paper's own definition above.
Is the existence of type casting all that matters? Does the explicit-ness of such casts not matter?
Explicitness is a crucial consideration IMHO. Letting a programmer override the compiler's knowledge of types is one of the "power" features of C++, not some weakness. It's not prone to accidental use.
More generally, is it really generally accepted that C++ is weakly typed? Why?
No - I don't think it is accepted. C++ is reasonably strongly typed, and the ways in which it has been lenient that have historically caused trouble have been pruned back, such as implicit casts from void* to other pointer types, and finer grained control with explicit casting operators and constructors.
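To illustrate that finer-grained control, here's a hedged sketch (the Metres type is made up) of how explicit constructors and conversion operators force intent to be visible at the call site:
struct Metres {
    explicit Metres(double v) : value(v) {}            // no implicit double -> Metres
    explicit operator double() const { return value; }  // no implicit Metres -> double
    double value;
};

int main() {
    Metres m(5.0);                       // fine: explicit construction
    // Metres bad = 5.0;                 // error: the constructor is explicit
    // double d = m;                     // error: the conversion operator is explicit
    double d = static_cast<double>(m);   // the conversion is visible at the call site
    return d > 0 ? 0 : 1;
}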
Well, since the creator of C++, Bjarne Stroustrup, says in The C++ Programming Language (4th edition) that the language is strongly typed, I would take his word for it:
C++ programming is based on strong static type checking, and most techniques aim at achieving a high level of abstraction and a direct representation of the programmer’s ideas. This can usually be done without compromising run-time and space efficiency compared to lower-level techniques. To gain the benefits of C++, programmers coming to it from a different language must learn and internalize idiomatic C++ programming style and technique. The same applies to programmers used to earlier and less expressive versions of C++.
In this video lecture from 1994 he also states that the weak type system of C really bothered him, and that's why he made C++ strongly typed: The Design of C++ , lecture by Bjarne Stroustrup
In General:
There is a confusion around the subject. Some terms differ from book to book (not considering the internet here), and some may have changed over the years.
Below is what I've understood from the book "Engineering a Compiler" (2nd Edition).
1. Untyped Languages
Languages that have no types at all - assembly, for example.
2. Weakly Typed Languages:
Languages that have a poor type system.
The definition here is intentionally ambiguous.
3. Strongly Typed Languages:
Languages where each expression has an unambiguous type. These can be further categorised as:
A. Statically Typed: when every expression is assigned a type at compile time.
B. Dynamically Typed: when some expressions can only be typed at runtime.
What is C++ then?
Well, it's strongly typed for sure. And mostly it is statically typed.
But as some expressions can only be typed at runtime, I guess it falls under the 3.B category.
PS1: A note from the book:
A strongly typed language that is statically checkable might, for some reason, be implemented with runtime checking only.
PS2: A Third Edition was recently released.
I don't own it, so I don't know whether anything has changed in this regard.
In general, though, the "Semantic Analysis" chapter has changed both its title and its position in the Table of Contents.
Let me give you a simple example:
if ( a + b )
C/C++ performs silent conversions here: b is promoted for the arithmetic, and the float result is then converted implicitly to a Boolean, all without a cast.
A strongly-typed language would not allow such an implicit conversion.
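To make that concrete, here's a complete version of the snippet; both conversions happen with no cast and, typically, no error:
#include <iostream>

int main() {
    float a = 0.5f;
    int b = 0;
    if (a + b) {              // float result converted straight to bool (true here)
        std::cout << "taken\n";
    }
    int truncated = a + b;    // float silently truncated to int (0 here)
    std::cout << truncated << '\n';
}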
Related
Implementations of the C++ standard typedef the (u)int_fastX types as one of their built-in types. This requires research into which type is the fastest, but there cannot be one fastest type for every case.
Wouldn't it increase performance to resolve such types at compile time, choosing the optimal type for the actual use? The compiler would analyze the use of a _fast variable and then choose the optimal type. Factors coming into play could be alignment and the kind of operations used with the variable.
This would effectively make those types a language feature.
This could introduce bugs when the compiler suddenly decides to choose another width for such a variable. But one shouldn't use a _fast type in such use cases, where the behaviour depends on the width, anyways.
Is such compile-time resolution permitted by the standard?
If yes, why isn't it implemented as of today?
If no, why isn't it in the standard?
No, this is not permitted by the standard. Keep in mind the C++ standard defers to C for this particular area, for example, C++11 defers to C99, as per C++11 1.1 /2. Specifically, C++11 18.4.1 Header <cstdint> synopsis /2 states:
The header defines all functions, types, and macros the same as 7.18 in the C standard.
So let's get your first contention out of the way, you state:
Implementations of the C++ standard typedef the (u)int_fastX types as one of their built-in types. This requires research into which type is the fastest, but there cannot be one fastest type for every case.
The C standard has this to say, in c99 7.18.1.3 Fastest minimum-width integer types (my italics):
Each of the following types designates an integer type that is usually fastest to operate with among all integer types that have at least the specified width.
The designated type is not guaranteed to be fastest for all purposes; if the implementation has no clear grounds for choosing one type over another, it will simply pick some integer type satisfying the signedness and width requirements.
So you're indeed correct that a type cannot be fastest for all possible uses but this seems to not be what the authors had in mind in defining these aspects.
The introduction of the fixed-width types was (in my opinion) to solve the problem all those developers had in having different int widths across the various implementations.
Similarly, once a developer knows the range of values they want, the fast minimum-width types give them a way to do arithmetic on those values at the maximum possible speed.
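For illustration, a small sketch of how the fast minimum-width types are meant to be used; the printed width is whatever the implementation picked, not something the program controls:
#include <cstdint>
#include <cstdio>

int main() {
    std::uint_fast16_t counter = 0;   // "at least 16 bits, favour speed"
    for (int i = 0; i < 1000; ++i)
        ++counter;
    // On many 64-bit platforms this typedef is 32 or 64 bits wide.
    std::printf("%zu bits\n", sizeof counter * 8);
    return 0;
}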
Covering your three specific questions in your final paragraph:
(1) Is such compile time resolution permitted by the standard?
I don't believe so. The relevant part of the C standard has this little piece of text:
For each type described herein that the implementation provides, <stdint.h> shall declare that typedef name and define the associated macros.
That seems to indicate that it must be a typedef provided by the implementation and, since there are no "variable" typedefs, it has to be fixed.
There may be wiggle room because it could be possible to provide a different typedef depending on certain environmental considerations but the difficulty in actually implementing this seems very high (see my answer to your third question below).
Chief amongst these is that these adaptable types, should they have external linkage, would require agreement amongst all the compiled translation units when linked together. Having one unit with a 16-bit type and another with a 32-bit type is going to cause all sorts of problems.
(2) If yes, why isn't it implemented as of today?
I'm pushing "no" as an answer to your first question so I'm not going to speculate on this other than by referring you to the answer to the third question below (it's probably not implemented because it's very hard, with dubious benefits).
(3) If no, why isn't it in the standard?
A standard is a contract between the implementor and the user and describes what the implementor will provide. It's usual that the standards committees tend to be more populated by the former (who aren't that keen on making too much extra work for themselves) than the latter.
For example, I would love to have all the you-beaut C++ data structures in C but this would have the consequence that standards versions would be decades apart rather than years :-)
For my projects, I usually define a lot of aliases for types like unsigned int, char and double as well as std::string and others.
I also aliased and to &&, or to ||, not to !, etc.
Is this considered bad practice or okay to do?
Defining types to add context within your code is acceptable, and even encouraged. Screwing around with operators will only encourage the next person that has to maintain your code to bring a gun to your house.
Well, consider the newcomers who are accustomed to C++. They will have difficulties maintaining your project.
Mind that there are many cases where aliasing is more legitimate. A good example is complicated nested STL containers.
Example:
#include <map>
using std::map;

// assumes a Date type defined elsewhere in the project
typedef int ClientID;
typedef double Price;
typedef map<ClientID, map<Date, Price> > ClientPurchases;
Now, instead of
map<int, map<Date, double> >::iterator it = clientPurchases.find(clientId);
you can write
ClientPurchases::iterator it = clientPurchases.find(clientId);
which seems to be more clear and readable.
If you're only using it for pointlessly renaming language features (as opposed to the example @Vlad gives), then it's the Wrong Thing.
It definitely makes the code less readable - anyone proficient in C++ will see (x ? y : z) and know that it's a ternary conditional operator. Although ORLY x YARLY y NOWAI z KTHX could be the same thing, it will confuse the viewer: "is this YARLY NOWAI the exact same thing as ? :, renamed for the author's convenience, or does it have subtle differences?" If these "aliases" are the same thing as the standard language elements, they will only slow down the next person to maintain your code.
TLDR: Reading code, any code, is hard enough without having to look up your private alternate syntax all the time.
That’s horrible. Don’t do it, please. Write idiomatic C++, not some macro-riddled monstrosity. In general, it’s extremely bad practice to define such macros, except in very specific cases (such as the BOOST_FOREACH macro).
That said, and, or and not are actually already valid aliases for &&, || and ! in C++!
It’s just that Visual Studio only knows them if you first include the standard header <ciso646>. Other compilers don’t need this.
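For example, this compiles as-is on a conforming compiler (the include is only needed by older Visual Studio and is a no-op elsewhere):
#include <ciso646>   // only needed by older MSVC
#include <iostream>

bool in_range(int x) {
    return x >= 0 and x < 100;   // same as: x >= 0 && x < 100
}

int main() {
    std::cout << (not in_range(-1) or in_range(50)) << '\n';   // prints 1
}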
Types are something else. Using typedef to create type aliases depending on context makes sense if it augments the expressiveness of the code. Then it’s encouraged. However, even better would be to create own types instead of aliases. For example, I can’t imagine it ever being beneficial to create an alias for std::string – why not just use std::string directly? (An exception are of course generic data structures and algorithms.)
"and", "or", "not" are OK because they're part of the language, but it's probably better to write C++ in a style that other C++ programmers use, and very few people bother using them. Don't alias them yourself: they're reserved names and it's not valid in general to use reserved names even in the preprocessor. If your compiler doesn't provide them in its default mode (i.e. it's not standard-compliant), you could fake them up with #define, but you may be setting yourself up for trouble in future if you change compiler, or change compiler options.
typedefs for builtin types might make sense in certain circumstances. For example in C99 (but not in C++03), there are extended integer types such as int32_t, which specifies a 32 bit integer, and on a particular system that might be a typedef for int. They come from stdint.h (<cstdint> in C++0x), and if your C++ compiler doesn't provide that as an extension, you can generally hunt down or write a version of it that will work for your system.

If you have some purpose in mind for which you might in future want to use a different integer type (on a different system perhaps), then by all means hide the "real" type behind a name that describes the important properties that are the reason you chose that type for the purpose.

If you just think "int" is unnecessarily brief, and it should be "integer", then you're not really helping anyone, even yourself, by trying to tweak the language in such a superficial way. It's an extra indirection for no gain, and in the long run you're better off learning C++ than changing C++ to "make more sense" to you.
I can't think of a good reason to use any other name for string, except in a case similar to the extended integer types, where your name will perhaps be a typedef for string on some builds, and wstring on others.
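As a hypothetical sketch of that one case (the USE_WIDE_STRINGS flag is made up), the alias lets the rest of the code stay agnostic about which it gets:
#include <string>
#include <iostream>

#if defined(USE_WIDE_STRINGS)   // hypothetical build flag
typedef std::wstring String;
#else
typedef std::string String;
#endif

String greet() { return String(5, 'x'); }   // works under either typedef

int main() {
    std::cout << greet().size() << '\n';    // 5 either way
}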
If you're not a native English-speaker, and are trying to "translate" C++ to another language, then I sort of see your point but I don't think it's a good idea. Even other speakers of your language will know C++ better than they know the translations you happen to have picked. But I am a native English speaker, so I don't really know how much difference it makes. Given how many C++ programmers there are in the world who don't translate languages, I suspect it's not a huge deal compared with the fact that all the documentation is in English...
If every C++ developer was familiar with your aliases then why not, but you are with these aliases essentially introducing a new language to whoever needs to maintain your code.
Why add this extra mental step that for the most part does not add any clarity (&& and || are pretty obvious what they are doing for any C/C++ programmer, and any way in C++ you can use the and and or keywords)
Why are reference variables not present/used in C?
Why are they designed for C++?
Because C was invented first. I don't know if they hadn't thought about references at the time (being mostly unnecessary), or if there was some particular reason not to include them (perhaps compiler complexity). They're certainly much more useful for object-oriented and generic constructs than the procedural style of C.
Reference arguments were originally invented, AFAIK, for one thing: operator overloading semantics. For example, operator[] just must return a reference.
It was then a subject of great debate whether the 'concealed pointer' should be used for anything else ever.
Many development convention documents of many firms said "never use references. If you need a pointer, say so".
However, it was then discovered that references have one major advantage (no, not the syntax sugar). It is this: a reference is guaranteed to be valid, unless you work really hard to break it.
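To make the operator[] point above concrete, here's a minimal sketch: assignment through the operator only works because it returns a reference to the stored element, not a copy:
struct IntArray {
    int data[10];
    int &operator[](int i) { return data[i]; }   // reference, so a[i] = v works
};

int main() {
    IntArray a = {};
    a[3] = 7;            // only possible because operator[] returns int&
    return a[3] == 7 ? 0 : 1;
}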
Personally, I still don't understand why I cannot do this in C++:
int a1, a2;
int &b = a1;
&b = a2; // Error: address of reference is not an lvalue. Why?!
They're not present in C because they're not required. C has very few 'extraneous' features. You can write any program without using references, so they're just not included. C++ was developed much later than C was, so its designers threw in all kinds of stuff that wasn't originally present in C.
As you may know, C predates C++ by approximately a decade. References were a feature introduced in the C++ language. Some features of the C++ language have been adopted by subsequent versions of the C standard (such as const and // comments). The concept of references has not been adopted so far.
One can hypothesize that their usefulness in object-oriented programming does not translate as well to the procedural programming of C.
I think I agree with Pavel's idea that they were invented to make overloaded operators work properly. It's pretty clear that the first versions of C++ (C with classes) did not have references as if they did, this would be a reference instead of a pointer.
I guess C was born with a minimalist hat, and references are just syntactic sugar for pointers.
I'm reading a very silly paper and it keeps on talking about how Giotto defines a "formal semantics".
Giotto has a formal semantics that specifies the meaning of mode switches, of intertask communication, and of communication with the program environment.
I'm on the edge of getting it, but just cannot quite grasp what it means by "formal semantics."
To expand on Michael Madsen's answer a little, an example might be the behaviour of the ++ operator. Informally, we describe the operator using plain English. For instance:
If x is a variable of type int, ++x causes x to be incremented by one.
(I'm assuming no integer overflows, and that ++x doesn't return anything)
In a formal semantics (and I'm going to use operational semantics), we'd have a bit of work to do. Firstly, we need to define a notion of types. In this case, I'm going to assume that all variables are of type int. In this simple language, the current state of the program can be described by a store, which is a mapping from variables to values. For instance, at some point in the program, x might be equal to 42, while y is equal to -5351. The store can be used as a function -- so, for instance, if the store s has the variable x with the value 42, then s(x) = 42.
Also included in the current state of the program is the remaining statements of the program we have to execute. We can bundle this up as <C, s>, where C is the remaining program, and s is the store.
So, if we have the state <++x, {x -> 42, y -> -5351}>, this is informally a state where the only remaining command to execute is ++x, the variable x has value 42, and the variable y has value -5351.
We can then define transitions from one state of the program to another -- we describe what happens when we take the next step in the program. So, for ++, we could define the following semantics:
<++x, s> --> <skip, s{x -> (s(x) + 1)}>
Somewhat informally, by executing ++x, the next command is skip, which has no effect, and the variables in the store are unchanged, except for x, which now has the value that it originally had plus one. There's still some work to be done, such as defining the notation I used for updating the store (which I've not done to stop this answer getting even longer!). So, a specific instance of the general rule might be:
<++x, {x -> 42, y -> -5351}> --> <skip, {x -> 43, y -> -5351}>
Hopefully that gives you the idea. Note that this is just one example of formal semantics -- along with operational semantics, there's axiomatic semantics (which often uses Hoare logic) and denotational semantics, and probably plenty more that I'm not familiar with.
As I mentioned in a comment to another answer, an advantage of formal semantics is that you can use them to prove certain properties of your program, for instance that it terminates. As well as showing your program doesn't exhibit bad behaviour (such as non-termination), you can also prove that your program behaves as required by proving your program matches a given specification. Having said that, I've never found the idea of specifying and verifying a program all that convincing, since I've found the specification usually just being the program rewritten in logic, and so the specification is just as likely to be buggy.
Formal semantics describe semantics in - well, a formal way - using notation which expresses the meaning of things in an unambiguous way.
It is the opposite of informal semantics, which is essentially just describing everything in plain English. This may be easier to read and understand, but it creates the potential for misinterpretation, which could lead to bugs because someone didn't read a paragraph the way you intended them to read it.
A programming language can have both formal and informal semantics - the informal semantics would then serve as a "plain-text" explanation of the formal semantics, and the formal semantics would be the place to look if you're not sure what the informal explanation really means.
Just like the syntax of a language can be described by a formal grammar (e.g. BNF), it's possible to use different kinds of formalisms to map that syntax to mathematical objects (i.e. the meaning of the syntax).
This page from A Practical Introduction to Denotational Semantics is a nice introduction to how [denotational] semantics relates to syntax. The beginning of the book also gives a brief history of other, non-denotational approaches to formal semantics (although the wikipedia link Michael gave goes into even more detail, and is probably more up-to-date).
From the author's site:
Models for semantics have not caught on to the same extent that BNF and its descendants have in syntax. This may be because semantics does seem to be just plain harder than syntax. The most successful system is denotational semantics which describes all the features found in imperative programming languages and has a sound mathematical basis. (There is still active research in type systems and parallel programming.) Many denotational definitions can be executed as interpreters or translated into "compilers" but this has not yet led to generators of efficient compilers, which may be another reason that denotational semantics is less popular than BNF.
What is meant in the context of a programming language like Giotto is that a language with formal semantics has a mathematically rigorous interpretation of its individual language constructs.
Most programming languages today are not rigorously defined. They may adhere to standard documents that are fairly detailed, but it's ultimately the compiler's responsibility to emit code that somehow adheres to those standard documents.
A formally specified language on the other hand is normally used when it is necessary to do reasoning about program code using, e.g., model checking or theorem proving. Languages that lend themselves to these techniques tend to be functional ones, such as ML or Haskell, since these are defined using mathematical functions and transformations between them; that is, the foundations of mathematics.
C and C++, on the other hand, are informally defined by technical descriptions. There are academic papers which formalise aspects of these languages (e.g., Michael Norrish: A formal semantics for C++, https://publications.csiro.au/rpr/pub?pid=nicta:1203), but they often do not find their way into the official standards (possibly due to a lack of practicality, especially the difficulty of maintaining them).
I have read that C++ is a superset of C and provides a real-life implementation by creating objects. Also, C++ is closer to the real world, as it is enriched with object-oriented concepts.
What concepts are there in C++ that cannot be implemented in C?
Some say that we cannot overload methods in C, so how can we have different flavours of printf()?
For example, printf("sachin"); will print sachin and printf("%d, %s", count, name); will print 1, sachin, assuming count is an integer whose value is 1 and name is a character array initialised with "sachin".
Some say data abstraction is achieved in C++, so what about structures?
Some responders here argue that most things that can be produced with C++ code can also be produced with C with enough ambition. This is true in part, but some things are inherently impossible to achieve unless you modify the C compiler to deviate from the standard.
Fakeable:
Inheritance (pointer to parent-struct in the child-struct; see the sketch after this list)
Polymorphism (Faking vtable by using a group of function pointers)
Data encapsulation (opaque sub structures with an implementation not exposed in public interface)
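Here's a rough sketch of the first two items, written so that it compiles as both C and C++ (the Shape/Circle names are made up):
#include <stdio.h>

struct Shape;
typedef double (*area_fn)(const struct Shape *);

struct Shape {
    area_fn area;                /* one-slot hand-rolled "vtable" */
};

struct Circle {
    struct Shape base;           /* "inheritance": parent as first member */
    double radius;
};

static double circle_area(const struct Shape *s) {
    const struct Circle *c = (const struct Circle *)s;   /* manual downcast */
    return 3.14159265358979 * c->radius * c->radius;
}

int main(void) {
    struct Circle c = { { circle_area }, 2.0 };
    struct Shape *s = &c.base;           /* "upcast" to the parent */
    printf("area = %f\n", s->area(s));   /* "virtual" dispatch by hand */
    return 0;
}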
Impossible:
Templates (which might as well be called preprocessor step 2)
Function/method overloading by arguments (some try to emulate this with ellipses, but it never really comes close)
RAII (Constructors and destructors are automatically invoked in C++, so your stack resources are safely handled within their scope)
Complex cast operators (in C you can cast almost anything)
Exceptions
Worth checking out:
GLib (a C library) has a rather elaborate OO emulation
I posted a question once about what people miss the most when using C instead of C++.
Clarification on RAII:
This concept is usually misinterpreted when it comes to its most important aspect: implicit resource management, i.e. the guarantee (usually at the language level) that resources are handled properly. Some believe that RAII can be achieved by leaving this responsibility to the programmer (e.g. explicit destructor calls at goto labels), which unfortunately doesn't come close to providing the safety principles of RAII as a design concept.
A quote from a wikipedia article which clarifies this aspect of RAII:
"Resources therefore need to be tied to the lifespan of suitable objects. They are acquired during initialization, when there is no chance of them being used before they are available, and released with the destruction of the same objects, which is guaranteed to take place even in case of errors."
How about RAII and templates?
It is less about what features can't be implemented, and more about what features are directly supported in the language, and therefore allow clear and succinct expression of the design.
Sure you can implement, simulate, fake, or emulate most C++ features in C, but the resulting code will likely be less readable, or maintainable. Language support for OOP features allows code based on an Object Oriented Design to be expressed far more easily than the same design in a non-OOP language. If C were your language of choice, then often OOD may not be the best design methodology to use - or at least extensive use of advanced OOD idioms may not be advisable.
Of course if you have no design, then you are likely to end up with a mess in any language! ;)
Well, if you aren't going to implement a C++ compiler using C, there are thousands of things you can do with C++, but not with C. To name just a few:
C++ has classes. Classes have constructors and destructors, which run code automatically when the object is initialized or destroyed (by going out of scope or via the delete keyword).
Classes define a hierarchy. You can extend a class (inheritance).
C++ supports polymorphism. This means that you can define virtual methods. The compiler will choose which method to call based on the type of the object.
C++ supports Run-Time Type Information (RTTI).
You can use exceptions with C++.
Although you can emulate most of the above in C, you need to rely on conventions and do the work manually, whereas the C++ compiler does the job for you.
There is only one printf() in the C standard library. Other varieties are implemented by changing the name, for instance sprintf(), fprintf() and so on.
Structures can't hide implementation, there is no private data in C. Of course you can hide data by not showing what e.g. pointers point to, as is done for FILE * by the standard library. So there is data abstraction, but not as a direct feature of the struct construct.
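Here's a sketch of that opaque-pointer idiom (the Widget names are made up); in real code the first half would live in a header, the second in one source file:
#include <stdlib.h>

/* --- public interface (would be widget.h): layout is hidden --- */
struct Widget;                               /* incomplete type */
struct Widget *widget_create(int v);
int widget_value(const struct Widget *w);
void widget_destroy(struct Widget *w);

/* --- implementation (would be widget.c): the only place members exist --- */
struct Widget { int value; };

struct Widget *widget_create(int v) {
    struct Widget *w = (struct Widget *)malloc(sizeof *w);
    if (w) w->value = v;
    return w;
}
int widget_value(const struct Widget *w) { return w->value; }
void widget_destroy(struct Widget *w) { free(w); }

int main(void) {
    struct Widget *w = widget_create(42);
    int v = w ? widget_value(w) : -1;    /* callers never see the layout */
    widget_destroy(w);
    return v == 42 ? 0 : 1;
}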
Also, you can't overload operators in C, so a + b always means that some kind of addition is taking place. In C++, depending on the type of the objects involved, anything could happen.
Note that this implies (subtly) that + in C actually is overridden; int + int is not the same code as float + int for instance. But you can't do that kind of override yourself, it's something for the compiler only.
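As a sketch of how far user-defined overloading can go (the Money type is made up), a + b below runs ordinary user code rather than a built-in addition:
#include <iostream>

struct Money {
    long cents;
};

Money operator+(Money a, Money b) {    // user-defined meaning for '+'
    return Money{a.cents + b.cents};
}

int main() {
    Money a{150}, b{275};
    Money c = a + b;                   // calls operator+(Money, Money)
    std::cout << c.cents << '\n';      // 425
}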
You can implement C++ fully in C... The original C++ compiler from AT&T was in fact a preprocessor called Cfront, which translated C++ code into C and compiled that.
This approach is still used today by Comeau Computing, who produce one of the most C++ standards-compliant compilers there is; e.g., it supports all of C++'s features.
namespace
All the rest is "easily" faked :)
printf uses a variable-length argument list, not an overloaded version of the function.
C structures do not have constructors and cannot inherit from other structures; they are simply a convenient way to address grouped variables.
C is not an OO language and has none of the features of an OO language.
Having said that, you are able to imitate C++ functionality with C code, but with C++ the compiler will do all the work for you at compile time.
What concepts are there in C++ that cannot be implemented in C?
This is somewhat of an odd question, because really any concept that can be expressed in C++ can be expressed in C. Even functionality similar to C++ templates can be implemented in C using various horrifying macro tricks and other crimes against humanity.
The real difference comes down to 2 things: what the compiler will agree to enforce, and what syntactic conveniences the language offers.
Regarding compiler enforcement, in C++ the compiler will not allow you to directly access private data members from outside of a class or friends of the class. In C, the compiler won't enforce this; you'll have to rely on API documentation to separate "private" data from "publicly accessible" data.
And regarding syntactic convenience, C++ offers all sorts of conveniences not found in C, such as operator overloading, references, automated object initialization and destruction (in the form of constructors/destructors), exceptions and automated stack-unwinding, built-in support for polymorphism, etc.
So basically, any concept expressed in C++ can be expressed in C; it's simply a matter of how far the compiler will go to help you express a certain concept and how much syntactic convenience the compiler offers. Since C++ is a newer language, it comes with a lot more bells and whistles than you would find in C, thus making the expression of certain concepts easier.
One feature that isn't really OOP-related is default arguments, which can be a real keystroke-saver when used correctly.
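A quick sketch of that feature (the log function is made up): callers may omit trailing arguments and the declared defaults fill in:
#include <iostream>

void log(const char *msg, const char *level = "INFO", bool newline = true) {
    std::cout << '[' << level << "] " << msg;
    if (newline) std::cout << '\n';
}

int main() {
    log("starting");                 // [INFO] starting
    log("disk full", "ERROR");       // [ERROR] disk full
    log("no break", "WARN", false);  // no trailing newline
}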
Function overloading
I suppose there are many such things - namespaces, templates - that could not be implemented in C.
There shouldn't be too many such things, though, because early C++ compilers produced C source code from C++ source code. Basically you can do everything in assembler - but you don't WANT to do that.
Quoting Joel, I'd say a powerful "feature" of C++ is operator overloading. That for me means having a language that will drive you insane unless you maintain your own code. For example,
i = j * 5;
… in C you know, at least, that j is being multiplied by five and the results stored in i.

But if you see that same snippet of code in C++, you don’t know anything. Nothing. The only way to know what’s really happening in C++ is to find out what types i and j are, something which might be declared somewhere altogether else. That’s because j might be of a type that has operator* overloaded and it does something terribly witty when you try to multiply it. And i might be of a type that has operator= overloaded, and the types might not be compatible so an automatic type coercion function might end up being called. And the only way to find out is not only to check the type of the variables, but to find the code that implements that type, and God help you if there’s inheritance somewhere, because now you have to traipse all the way up the class hierarchy all by yourself trying to find where that code really is, and if there’s polymorphism somewhere, you’re really in trouble because it’s not enough to know what type i and j are declared, you have to know what type they are right now, which might involve inspecting an arbitrary amount of code and you can never really be sure if you’ve looked everywhere thanks to the halting problem (phew!).

When you see i=j*5 in C++ you are really on your own, bubby, and that, in my mind, reduces the ability to detect possible problems just by looking at code.
But again, this is a feature. (I know I will be modded down, but at the time of writing only a handful of posts talked about downsides of operator overloading)