What is "formal semantics"? - formal-semantics

I'm reading a very silly paper and it keeps on talking about how Giotto defines a "formal semantics".
Giotto has a formal semantics that specifies the meaning of mode switches, of intertask communication, and of communication with the program environment.
I'm on the edge of understanding, but just cannot quite grasp what it means by "formal semantics."

To expand on Michael Madsen's answer a little, an example might be the behaviour of the ++ operator. Informally, we describe the operator using plain English. For instance:
If x is a variable of type int, ++x causes x to be incremented by one.
(I'm assuming no integer overflows, and that ++x doesn't return anything)
In a formal semantics (and I'm going to use operational semantics), we'd have a bit of work to do. Firstly, we need to define a notion of types. In this case, I'm going to assume that all variables are of type int. In this simple language, the current state of the program can be described by a store, which is a mapping from variables to values. For instance, at some point in the program, x might be equal to 42, while y is equal to -5351. The store can be used as a function -- so, for instance, if the store s has the variable x with the value 42, then s(x) = 42.
Also included in the current state of the program are the remaining statements of the program we have to execute. We can bundle this up as <C, s>, where C is the remaining program, and s is the store.
So, if we have the state <++x, {x -> 42, y -> -5351}>, this is informally a state where the only remaining command to execute is ++x, the variable x has value 42, and the variable y has value -5351.
We can then define transitions from one state of the program to another -- we describe what happens when we take the next step in the program. So, for ++, we could define the following semantics:
<++x, s> --> <skip, s{x -> (s(x) + 1)}>
Somewhat informally, by executing ++x, the next command is skip, which has no effect, and the variables in the store are unchanged, except for x, which now has the value that it originally had plus one. There's still some work to be done, such as defining the notation I used for updating the store (which I've not done to stop this answer getting even longer!). So, a specific instance of the general rule might be:
<++x, {x -> 42, y -> -5351}> --> <skip, {x -> 43, y -> -5351}>
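To make the rule concrete, here is a minimal sketch of my own (not standard notation from any text) of this small-step semantics as a Haskell interpreter; the names Cmd, Store and step are illustrative assumptions:

import qualified Data.Map as Map

type Var   = String
type Store = Map.Map Var Int    -- the store: a mapping from variables to values

data Cmd = Incr Var             -- ++x
         | Skip                 -- the command with no effect
  deriving Show

-- One transition: <++x, s> --> <skip, s{x -> s(x) + 1}>
step :: (Cmd, Store) -> (Cmd, Store)
step (Incr x, s) = (Skip, Map.adjust (+ 1) x s)
step (Skip,   s) = (Skip, s)

main :: IO ()
main = print (step (Incr "x", Map.fromList [("x", 42), ("y", -5351)]))
-- prints (Skip,fromList [("x",43),("y",-5351)])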
Hopefully that gives you the idea. Note that this is just one example of formal semantics -- along with operational semantics, there's axiomatic semantics (which often uses Hoare logic) and denotational semantics, and probably plenty more that I'm not familiar with.
As I mentioned in a comment to another answer, an advantage of formal semantics is that you can use them to prove certain properties of your program, for instance that it terminates. As well as showing your program doesn't exhibit bad behaviour (such as non-termination), you can also prove that your program behaves as required, by proving that it matches a given specification. Having said that, I've never found the idea of specifying and verifying a program all that convincing, since the specification usually ends up being just the program rewritten in logic, and so it is just as likely to be buggy.

Formal semantics describe semantics in - well, a formal way - using notation which expresses the meaning of things in an unambiguous way.
It is the opposite of informal semantics, which is essentially just describing everything in plain English. This may be easier to read and understand, but it creates the potential for misinterpretation, which could lead to bugs because someone didn't read a paragraph the way you intended them to read it.
A programming language can have both formal and informal semantics - the informal semantics would then serve as a "plain-text" explanation of the formal semantics, and the formal semantics would be the place to look if you're not sure what the informal explanation really means.

Just like the syntax of a language can be described by a formal grammar (e.g. BNF), it's possible to use different kinds of formalisms to map that syntax to mathematical objects (i.e. the meaning of the syntax).
This page from A Practical Introduction to Denotational Semantics is a nice introduction to how [denotational] semantics relates to syntax. The beginning of the book also gives a brief history of other, non-denotational approaches to formal semantics (although the wikipedia link Michael gave goes into even more detail, and is probably more up-to-date).
From the author's site:
Models for semantics have not caught on to the same extent that BNF and its descendants have in syntax. This may be because semantics does seem to be just plain harder than syntax. The most successful system is denotational semantics, which describes all the features found in imperative programming languages and has a sound mathematical basis. (There is still active research in type systems and parallel programming.) Many denotational definitions can be executed as interpreters or translated into "compilers", but this has not yet led to generators of efficient compilers, which may be another reason that denotational semantics is less popular than BNF.

What is meant in the context of a programming language like Giotto is that a language with a formal semantics has a mathematically rigorous interpretation of its individual language constructs.
Most programming languages today are not rigorously defined. They may adhere to standard documents that are fairly detailed, but it's ultimately the compiler's responsibility to emit code that somehow adheres to those standard documents.
A formally specified language, on the other hand, is normally used when it is necessary to reason about program code using, e.g., model checking or theorem proving. Languages that lend themselves to these techniques tend to be functional ones, such as ML or Haskell, since they are defined in terms of mathematical functions and transformations between them, i.e., the foundations of mathematics.
C and C++, by contrast, are informally defined by technical descriptions. There exist academic papers which formalise aspects of these languages (e.g., Michael Norrish: A formal semantics for C++, https://publications.csiro.au/rpr/pub?pid=nicta:1203), but these results often do not find their way into the official standards (possibly due to a lack of practicality, especially the difficulty of maintaining them).

Related

How is structure sharing broken in Standard ML?

In a 2013 presentation about the future of Standard ML, Bob Harper says, on slide 9, that "structure sharing is broken". Can someone give more detail on that? I don't have enough experience with sharing to understand what he meant.
It is broken because, as specified, it cannot be applied to structures that have transparent type components. For example:
signature S = sig type t; type u = int end
signature T =
sig
  structure A : S
  structure B : S
  sharing A = B
end
would already be illegal, although you would naturally expect this to be fine.
The history here is that structure sharing was introduced in SML'90, where no transparent type components existed. With SML'97, those were added. At that point, the whole business with sharing constraints became somewhat obsolete because they were (to some extent) superseded by "where type" constraints. So, the semantics of sharing was vastly simplified, and structure sharing degraded from a primitive to syntactic sugar. But this sugar was defined such that it only worked with SML'90 programs -- which makes sense if you only view it as a backwards compatibility hack, but not if you consider structure sharing a central feature of SML'97.
People in the SML community disagree about the relevance of sharing constraints. Some consider them obsolete, some still important. Unfortunately, SML'97 failed to add a "where structure" constraint, which could have properly replaced structure sharing.
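For illustration, here is a sketch of my own (the signature names are made up) of how a "where type" constraint can express what the sharing specification above intended:

signature S' = sig type t end
signature T' =
sig
  structure A : S'
  structure B : S' where type t = A.t  (* ties B.t to A.t directly *)
end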
Andreas Rossberg's answer has already clarified matters, but I had written to Professor Harper before Andreas' answer was posted. I'm posting his reply here for the curious:
It's a purely technical problem with the definition. In SML 90 there was a notion of structure sharing above and beyond the constituent type sharing. In SML 97 structure sharing was redefined to mean sharing of the constituent types, but the formulation is incorrect (there are several candidate replacements, so compilers differ in their behavior). It's a dark corner anyway, so in the scheme of things it is very minor, but the mistake and the associated incompatibilities render it useless any more.

Is C++ considered weakly typed? Why?

I've always considered C++ to be one of the most strongly typed languages out there.
So I was quite shocked to see Table 3 of this paper state that C++ is weakly typed.
Apparently,
C and C++ are considered weakly typed since, due to type-casting, one can interpret a field of a structure that was an integer as a pointer.
Is the existence of type casting all that matters? Does the explicit-ness of such casts not matter?
More generally, is it really generally accepted that C++ is weakly typed? Why?
That paper first claims:
In contrast, a language is weakly-typed if type-confusion can occur silently (undetected), and eventually cause errors that are difficult to localize.
And then claims:
Also, C and C++ are considered weakly typed since, due to type-casting, one can interpret a field of a structure that was an integer as a pointer.
This seems like a contradiction to me. In C and C++, the type-confusion that can occur as a result of casts will not occur silently -- there's a cast! This does not demonstrate that either of those languages is weakly-typed, at least not by the definition in that paper.
That said, by the definition in the paper, C and C++ may still be considered weakly-typed. There are, as noted in the comments on the question already, cases where the language supports implicit type conversions. Many types can be implicitly converted to bool, a literal zero of type int can be silently converted to any pointer type, there are conversions between integers of varying sizes, etc, so this seems like a good reason to consider C and C++ weakly-typed for the purposes of the paper.
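For instance, a small sketch of my own (not taken from the paper) showing those silent conversions:

#include <iostream>

void take(bool) { std::cout << "got a bool\n"; }

int main() {
    double d = 3.9;
    int i = d;      // implicit floating-to-integer conversion silently drops the .9
    int *p = 0;     // a literal zero silently becomes a null pointer
    take(d);        // double implicitly converted to bool (prints "got a bool")
    std::cout << i << " " << (p == nullptr) << "\n";  // prints "3 1"
}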
For C (but not C++), there are also more dangerous implicit conversions that are worth mentioning:
int main() {
    int i = 0;
    void *v = &i;
    char *c = v;
    return *c;
}
For the purposes of the paper, that must definitely be considered weakly-typed. The reinterpretation of bits happens silently, and can be made far worse by modifying it to use completely unrelated types, which has silent undefined behaviour that typically has the same effect as reinterpreting bits, but blows up in mysterious yet sometimes amusing ways when optimisations are enabled.
In general, though, I think there isn't a fixed definition of "strongly-typed" and "weakly-typed". There are various grades, a language that is strongly-typed compared to assembly may be weakly-typed compared to Pascal. To determine whether C or C++ is weakly-typed, you first have to ask what you want weakly-typed to mean.
"weakly typed" is a quite subjective term. I prefer the terms "strictly typed" and "statically typed" vs. "loosely typed" and "dynamically typed", because they are more objective and more precise words.
From what I can tell, people generally use "weakly typed" as a diminutive-pejorative term which means "I don't like the notion of types in this language". It's sort of an argumentum ad hominem (or rather, argumentum ad linguam) for those who can't bring up professional or technical arguments against a particular language.
The term "strictly typed" also has slightly different interpretations; the generally accepted meaning, in my experience, is "the compiler generates errors if types don't match up". Another interpretation is that "there are no or few implicit conversions". Based on this, C++ can actually be considered a strictly typed language, and most often it is considered as such. I would say that the general consensus on C++ is that it is a strictly typed language.
Of course we could try a more nuanced approach to the question and say that parts of the language are strictly typed (this is the majority of cases), while other parts are loosely typed (a few implicit conversions, e.g. arithmetic conversions and the four types of explicit conversion).
Furthermore, there are some programmers, especially beginners who are not familiar with more than a few languages, who don't intend to or can't make the distinction between "strict" and "static", "loose" and "dynamic", and conflate the two - otherwise orthogonal - concepts based on their limited experience (usually the correlation of dynamism and loose typing in popular scripting languages, for example).
In reality, parts of C++ (virtual calls) impose the requirement that the type system be partially dynamic, but other things in the standard require that it be strict. Again, this is not a problem, since these are orthogonal concepts.
To sum up: probably no language fits completely, perfectly into one category or another, but we can say which particular property of a given language dominates. In C++, strictness definitely does dominate.
In contrast, a language is weakly-typed if type-confusion can occur silently (undetected), and eventually cause errors that are difficult to localize.
Well, that can happen in C++, for example:
#define _USE_MATH_DEFINES
#include <iostream>
#include <cmath>
#include <limits>

void f(char n) { std::cout << "f(char)\n"; }
void f(int n) { std::cout << "f(int)\n"; }
void g(int n) { std::cout << "g(int)\n"; }

int main()
{
    float fl = M_PI; // silent conversion to float may lose precision
    f(8 + '0');      // potentially unintended treatment as int
    unsigned n = std::numeric_limits<unsigned>::max();
    g(n);            // potentially unintended treatment as int
}
Also, C and C++ are considered weakly typed since, due to type-casting, one can interpret a field of a structure that was an integer as a pointer.
Ummmm... not via any implicit conversion, so that's a silly argument. C++ allows explicit casting between types, but that's hardly "weak" - it doesn't happen accidentally/silently as required by the paper's own definition above.
Is the existence of type casting all that matters? Does the explicit-ness of such casts not matter?
Explicitness is a crucial consideration IMHO. Letting a programmer override the compiler's knowledge of types is one of the "power" features of C++, not some weakness. It's not prone to accidental use.
More generally, is it really generally accepted that C++ is weakly typed? Why?
No - I don't think it is generally accepted. C++ is reasonably strongly typed, and the ways in which it was lenient that historically caused trouble have been pruned back, such as implicit casts from void* to other pointer types, and finer-grained control was added with explicit casting operators and constructors.
Well, since the creator of C++, Bjarne Stroustrup, says in The C++ Programming Language (4th edition) that the language is strongly typed, I would take his word for it:
C++ programming is based on strong static type checking, and most techniques aim at achieving a high level of abstraction and a direct representation of the programmer's ideas. This can usually be done without compromising run-time and space efficiency compared to lower-level techniques. To gain the benefits of C++, programmers coming to it from a different language must learn and internalize idiomatic C++ programming style and technique. The same applies to programmers used to earlier and less expressive versions of C++.
In this video lecture from 1994 he also states that the weak type system of C really bothered him, and that's why he made C++ strongly typed: The Design of C++ , lecture by Bjarne Stroustrup
In General:
There is a confusion around the subject. Some terms differ from book to book (not considering the internet here), and some may have changed over the years.
Below is what I've understood from the book "Engineering a Compiler" (2nd Edition).
1. Untyped Languages:
Languages that have no types at all, for example assembly.
2. Weakly Typed Languages:
Languages that have a poor type system.
The definition here is intentionally ambiguous.
3. Strongly Typed Languages:
Languages where each expression has an unambiguous type. These can be further categorised into:
A. Statically Typed: every expression is assigned a type at compile time.
B. Dynamically Typed: some expressions can only be typed at runtime.
What is C++ then?
Well, it's strongly typed for sure. And mostly it is statically typed.
But as some expressions can only be typed at runtime, I guess it falls under the 3.B category.
PS1: A note from the book:
A strongly typed language that could be statically checked might (for some reason) be implemented with runtime checking only.
PS2: A Third Edition was recently released.
I don't own it, so I don't know whether anything has changed in this regard.
But in general, the "Semantic Analysis" chapter has changed both title and order in the table of contents.
Let me give you a simple example:
if ( a + b )
C and C++ allow the floating-point result of a + b to be implicitly converted to a Boolean for the test.
A strongly-typed language would not allow such an implicit conversion.

Should I avoid using Monad fail?

I'm fairly new to Haskell and have been slowly getting the idea that there's something wrong with the existence of Monad fail. Real World Haskell warns against its use ("Once again, we recommend that you almost always avoid using fail!"). I just noticed today that Ross Paterson called it "a wart, not a design pattern" back in 2008 (and seemed to get quite some agreement in that thread).
While watching Dr. Ralf Lämmel's talk on the essence of functional programming, I started to understand a possible tension which may have led to Monad fail. In the lecture, Ralf talks about adding various monadic effects to a base monadic parser (logging, state etc.). Many of the effects required changes to the base parser and sometimes the data types used. I figured that the addition of 'fail' to all monads might have been a compromise because 'fail' is so common and you want to avoid changes to the 'base' parser (or whatever) as much as possible. Of course, some kind of 'fail' makes sense for parsers but not always for, say, put/get of State or ask/local of Reader.
Let me know if I could be off on the wrong track.
Should I avoid using Monad fail?
What are the alternatives to Monad fail?
Are there any alternative monad libraries that do not include this "design wart"?
Where can I read more about the history around this design decision?
Some monads have a sensible failure mechanism, e.g. the terminal monad:
data Fail x = Fail
Some monads don't have a sensible failure mechanism (undefined is not sensible), e.g. the initial monad:
data Return x = Return x
In that sense, it's clearly a wart to require all monads to have a fail method. If you're writing programs that abstract over monads (Monad m) =>, it's not very healthy to make use of that generic m's fail method. That would result in a function you can instantiate with a monad where fail shouldn't really exist.
I see fewer objections to using fail (especially indirectly, by matching Pat <- computation) when working in a specific monad for which a good fail behaviour has been clearly specified. Such programs would hopefully survive a return to the old discipline where nontrivial pattern matching created a demand for MonadZero instead of just Monad.
One might argue that the better discipline is always to treat failure-cases explicitly. I object to this position on two counts: (1) that the point of monadic programming is to avoid such clutter, and (2) that the current notation for case analysis on the result of a monadic computation is so awful. The next release of SHE will support the notation (also found in other variants)
case <- computation of
  Pat_1 -> computation_1
  ...
  Pat_n -> computation_n
which might help a little.
But this whole situation is a sorry mess. It's often helpful to characterize monads by the operations which they support. You can see fail, throw, etc as operations supported by some monads but not others. Haskell makes it quite clumsy and expensive to support small localized changes in the set of operations available, introducing new operations by explaining how to handle them in terms of the old ones. If we seriously want to do a neater job here, we need to rethink how catch works, to make it a translator between different local error-handling mechanisms. I often want to bracket a computation which can fail uninformatively (e.g. by pattern match failure) with a handler that adds more contextual information before passing on the error. I can't help feeling that it's sometimes more difficult to do that than it should be.
So, this is a could-do-better issue, but at the very least, use fail only for specific monads which offer a sensible implementation, and handle the 'exceptions' properly.
In Haskell 1.4 (1997) there was no fail. Instead, there was a MonadZero type class which contained a zero method. Now, the do notation used zero to indicate pattern match failure; this caused surprises to people: whether their function needed Monad or MonadZero depended on how they used the do notation in it.
When Haskell 98 was designed a bit later, several changes were made to make programming simpler for the novice. For example, monad comprehensions were turned into list comprehensions. Similarly, to remove the do type class issue, the MonadZero class was removed; for the use of do, the method fail was added to Monad; and for other uses of zero, an mzero method was added to MonadPlus.
There is, I think, a good argument to be made that fail should not be used for anything explicitly; its only intended use is in the translation of the do notation. Nevertheless, I myself am often naughty and use fail explicitly, too.
You can access the original 1.4 and 98 reports here. I'm sure the discussion leading to the change can be found in some email list archives, but I don't have links handy.
I try to avoid Monad fail wherever possible, and there are a whole variety of ways to capture fail depending on your circumstance. Edward Yang has written a good overview on his blog in the article titled 8 ways to report errors in Haskell revisited.
In summary, the different ways to report errors that he identifies are:
Use error
Use Maybe a
Use Either String a
Use Monad and fail to generalize 1-3
Use MonadError and a custom error type
Use throw in the IO monad
Use ioError and catch
Go nuts with monad transformers
Checked exceptions
Failure
Of these, I would be tempted to use option 3, with Either e b if I know how to handle the error, but need a little more context.
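As a minimal sketch of option 3 (my own example; safeDiv is a made-up name):

safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"  -- the error, with a little context
safeDiv x y = Right (x `div` y)

main :: IO ()
main = do
  print (safeDiv 10 2)  -- Right 5
  print (safeDiv 10 0)  -- Left "division by zero"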
If you use GHC 8.6.1 or above, there is a new class called MonadFail and a language extension called MonadFailDesugaring. If you have access to these, use them: you can then use fail without the problem that some monad actually has no fail implementation.
The extension makes the desugaring of fail require the more restrictive MonadFail instead of any Monad.
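A short sketch of my own showing what the extension buys you (assumes GHC 8.6; firstTwo is a made-up name):

{-# LANGUAGE MonadFailDesugaring #-}
import Control.Monad.Fail (MonadFail)

-- The failable pattern below makes desugaring demand MonadFail m rather than
-- Monad m, so this cannot be instantiated at a monad with no sensible fail.
firstTwo :: MonadFail m => m [a] -> m (a, a)
firstTwo action = do
  (x : y : _) <- action  -- a pattern-match failure calls MonadFail's fail
  return (x, y)

main :: IO ()
main = firstTwo (return [1, 2, 3 :: Int]) >>= print  -- prints (1,2)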

condition execution using if or logic

When using objects, I sometimes test for their existence, e.g.:
if (object)
    object->Use();
Could I just use
(object && object->Use());
and what differences are there, if any?
They're the same assuming object->Use() returns something that's valid in a boolean context; if it returns void the compiler will complain that a void return isn't being ignored like it should be, and other return types that don't fit will give you something like no match for 'operator&&'
One enormous difference is that the two function very differently if operator&& has been overloaded. Short circuit evaluation is only provided for the built in operators. In the case of an overloaded operator, both sides will be evaluated [in an unspecified order; operator&& also does not define a sequence point in this case], and the results passed to the actual function call.
If object and the return type of object->Use() are both primitive types, then you're okay. But if either are of class type, then it is possible object->Use() will be called even if object evaluates to false.
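A contrived sketch of my own to illustrate the difference:

#include <iostream>

struct Tracer { bool value; };

// User-defined operator&&: both operands are evaluated before this is called.
bool operator&&(Tracer a, Tracer b) { return a.value && b.value; }

Tracer make(bool v, const char *name) {
    std::cout << "evaluated " << name << "\n";
    return Tracer{v};
}

int main() {
    // Prints "evaluated lhs" AND "evaluated rhs", even though lhs is false:
    bool r = make(false, "lhs") && make(true, "rhs");
    std::cout << r << "\n";  // 0
}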
They are effectively the same thing but the second is not as clear as your first version, whose intent is obvious. Execution speed is probably no different, either.
Functionally they are the same, and a decent compiler should be able to optimize both equally well. However, writing an expression with operators like this and not checking the result is very odd. Perhaps if this style were common, it would be considered concise and easy to read, but it's not - right now it's just weird. You may get used to it and it could make perfect sense to you, but to others who read your code, their first impression will be, "What the heck is this?" Thus, I recommend going with the first, commonly used version if only to avoid making your fellow programmers insane.
When I was younger I think I would have found that appealing. I always wanted to trim down lines of code, but I realized later on that when you deviate too far from the norm, it'll bite you in the long run when you start working with a team. If you want to achieve zen-programming with minimum lines of code, focus on the logic more than the syntax.
I wouldn't do that. If you overloaded operator&& for the pointer type pointing to object and the class type returned by object->Use(), all bets are off and there is no short-circuit evaluation.
Yes, you can. You see, C language, as well as C++, is a mix of two fairly independent worlds, or realms, if you will. There's the realm of statements and the realm of expressions. Each one can be seen as a separate sub-language in itself, with its own implementations of basic programming constructs.
In the realm of statements, the sequencing is achieved by the ; at the end of the single statement or by the } at the end of compound statement. In the realm of expressions the sequencing is provided by the , operator.
Branching in the realm of statements is implemented by if statement, while in the realm of expressions it can be implemented by either ?: operator or by use of the short-circuit evaluation properties of && and || operators (which is what you just did, assuming your expression is valid).
The realm of expressions has no loops, but it has recursion that can replace them (this requires function calls though, which inevitably forces us to switch to statements).
Obviously these realms are far from being equivalent in their power. C and C++ are languages dominated by statements. However, often one can implement fairly complex constructs using the language of expressions alone.
What you did above does implement the equivalent branching in the language of expressions. Keep in mind that many people will find it hard to read in normal code (mostly because, once again, they are used to statement-dominated C and C++ code). But it often comes in very handy in some specific contexts, like template metaprogramming, for one example.
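To make that concrete, a tiny sketch of my own: the if statement from the question, rebuilt in the expression realm with ?: for branching and the comma operator for sequencing:

#include <cstdio>

int main() {
    int x = 5;
    // No if statement: ?: branches, the comma operator sequences.
    x > 0 ? (std::printf("positive\n"), 0)
          : (std::printf("non-positive\n"), 0);
    return 0;
}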

Features of C++ that can't be implemented in C?

I have read that C++ is a superset of C and provides a real-time implementation by creating objects. Also, C++ is closer to the real world as it is enriched with object-oriented concepts.
What all concepts are there in C++ that can not be implemented in C?
Some say that we cannot overload functions in C, but then how can we have different flavours of printf()?
For example, printf("sachin"); will print sachin and printf("%d, %s", count, name); will print 1, sachin, assuming count is an integer whose value is 1 and name is a character array initialised with "sachin".
Some say data abstraction is achieved in C++, so what about structures?
Some responders here argue that most things that can be produced with C++ code can also be produced with C with enough ambition. This is true in part, but some things are inherently impossible to achieve unless you modify the C compiler to deviate from the standard.
Fakeable (a minimal C sketch of the first two follows after these lists):
Inheritance (pointer to parent-struct in the child-struct)
Polymorphism (Faking vtable by using a group of function pointers)
Data encapsulation (opaque sub structures with an implementation not exposed in public interface)
Impossible:
Templates (which might as well be called preprocessor step 2)
Function/method overloading by arguments (some try to emulate this with ellipses, but it never really comes close)
RAII (Constructors and destructors are automatically invoked in C++, so your stack resources are safely handled within their scope)
Complex cast operators (in C you can cast almost anything)
Exceptions
Worth checking out:
GLib (a C library) has a rather elaborate OO emulation
I posted a question once about what people miss the most when using C instead of C++.
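As promised above, here is a minimal C sketch of my own faking the first two items: the parent struct is embedded as the first member of the child, and a function pointer stands in for a one-entry vtable.

#include <stdio.h>

struct Animal {
    void (*speak)(struct Animal *self);  /* the "virtual" method */
};

struct Dog {
    struct Animal base;  /* inheritance: parent embedded as first member */
    const char *name;
};

void dog_speak(struct Animal *self) {
    struct Dog *dog = (struct Dog *)self;  /* valid: base is the first member */
    printf("%s says woof\n", dog->name);
}

int main(void) {
    struct Dog d = { { dog_speak }, "Rex" };
    struct Animal *a = (struct Animal *)&d;  /* "upcast" */
    a->speak(a);                             /* dynamic dispatch by hand */
    return 0;
}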
Clarification on RAII:
This concept is usually misinterpreted when it comes to its most important aspect: implicit resource management, i.e. the concept of guaranteeing (usually at the language level) that resources are handled properly. Some believe that RAII can be achieved by leaving this responsibility to the programmer (e.g. explicit destructor calls at goto labels), which unfortunately doesn't come close to providing the safety guarantees of RAII as a design concept.
A quote from a wikipedia article which clarifies this aspect of RAII:
"Resources therefore need to be tied to the lifespan of suitable objects. They are acquired during initialization, when there is no chance of them being used before they are available, and released with the destruction of the same objects, which is guaranteed to take place even in case of errors."
How about RAII and templates?
It is less about what features can't be implemented, and more about what features are directly supported in the language, and therefore allow clear and succinct expression of the design.
Sure you can implement, simulate, fake, or emulate most C++ features in C, but the resulting code will likely be less readable, or maintainable. Language support for OOP features allows code based on an Object Oriented Design to be expressed far more easily than the same design in a non-OOP language. If C were your language of choice, then often OOD may not be the best design methodology to use - or at least extensive use of advanced OOD idioms may not be advisable.
Of course if you have no design, then you are likely to end up with a mess in any language! ;)
Well, if you aren't going to implement a C++ compiler using C, there are thousands of things you can do with C++, but not with C. To name just a few:
C++ has classes. Classes have constructors and destructors which call code automatically when the object is initialized or destroyed (going out of scope or with the delete keyword).
Classes define a hierarchy. You can extend a class (inheritance).
C++ supports polymorphism. This means that you can define virtual methods. The compiler will choose which method to call based on the type of the object.
C++ supports Run-Time Type Information (RTTI).
You can use exceptions with C++.
Although you can emulate most of the above in C, you need to rely on conventions and do the work manually, whereas the C++ compiler does the job for you.
There is only one printf() in the C standard library. Other varieties are implemented by changing the name, for instance sprintf(), fprintf() and so on.
Structures can't hide implementation, there is no private data in C. Of course you can hide data by not showing what e.g. pointers point to, as is done for FILE * by the standard library. So there is data abstraction, but not as a direct feature of the struct construct.
Also, you can't overload operators in C, so a + b always means that some kind of addition is taking place. In C++, depending on the type of the objects involved, anything could happen.
Note that this implies (subtly) that + in C actually is overridden; int + int is not the same code as float + int for instance. But you can't do that kind of override yourself, it's something for the compiler only.
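A sketch of my own of that FILE*-style abstraction, using an opaque struct behind a pointer (the counter names are made up):

/* counter.h -- the public interface; the layout stays hidden */
struct counter;                        /* incomplete ("opaque") type */
struct counter *counter_new(void);
void counter_increment(struct counter *c);
int counter_value(const struct counter *c);

/* counter.c -- the only place that knows the representation */
#include <stdlib.h>
struct counter { int value; };         /* effectively private data */

struct counter *counter_new(void) {
    return calloc(1, sizeof(struct counter));
}
void counter_increment(struct counter *c) { c->value++; }
int counter_value(const struct counter *c) { return c->value; }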
You can implement C++ fully in C... The original C++ compiler from AT&T was in fact a preprocessor called Cfront which just translated C++ code into C and compiled that.
This approach is still used today by Comeau Computing, who produce one of the most standards-compliant C++ compilers there is; it supports all C++ features.
namespace
All the rest is "easily" faked :)
printf uses a variable-length argument list, not an overloaded version of the function.
C structures do not have constructors and are unable to inherit from other structures; they are simply a convenient way to address grouped variables.
C is not an OO language and has none of the features of an OO language.
Having said that, you are able to imitate C++ functionality with C code, but with C++ the compiler will do all the work for you at compile time.
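For instance, the mechanism behind printf is the <stdarg.h> variable-argument facility, not overloading; a sketch of my own (sum_ints is a made-up name):

#include <stdarg.h>
#include <stdio.h>

/* One function accepting a variable number of int arguments. */
int sum_ints(int count, ...) {
    va_list args;
    va_start(args, count);
    int total = 0;
    for (int i = 0; i < count; i++)
        total += va_arg(args, int);
    va_end(args);
    return total;
}

int main(void) {
    printf("%d\n", sum_ints(3, 1, 2, 3));  /* prints 6 */
    return 0;
}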
What all concepts are there in C++ that can not be implemented in C?
This is somewhat of an odd question, because really any concept that can be expressed in C++ can be expressed in C. Even functionality similar to C++ templates can be implemented in C using various horrifying macro tricks and other crimes against humanity.
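For a taste of those macro tricks, a sketch of my own (DEFINE_PAIR is made up), stamping out a "generic" type per element type:

#include <stdio.h>

#define DEFINE_PAIR(T)                                  \
    struct pair_##T { T first; T second; };             \
    static T pair_##T##_max(struct pair_##T p) {        \
        return p.first > p.second ? p.first : p.second; \
    }

DEFINE_PAIR(int)     /* defines struct pair_int and pair_int_max */
DEFINE_PAIR(double)  /* defines struct pair_double and pair_double_max */

int main(void) {
    struct pair_int p = { 3, 7 };
    printf("%d\n", pair_int_max(p));  /* prints 7 */
    return 0;
}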
The real difference comes down to 2 things: what the compiler will agree to enforce, and what syntactic conveniences the language offers.
Regarding compiler enforcement, in C++ the compiler will not allow you to directly access private data members from outside of a class or friends of the class. In C, the compiler won't enforce this; you'll have to rely on API documentation to separate "private" data from "publicly accessible" data.
And regarding syntactic convenience, C++ offers all sorts of conveniences not found in C, such as operator overloading, references, automated object initialization and destruction (in the form of constructors/destructors), exceptions and automated stack-unwinding, built-in support for polymorphism, etc.
So basically, any concept expressed in C++ can be expressed in C; it's simply a matter of how far the compiler will go to help you express a certain concept and how much syntactic convenience the compiler offers. Since C++ is a newer language, it comes with a lot more bells and whistles than you would find in C, thus making the expression of certain concepts easier.
One feature that isn't really OOP-related is default arguments, which can be a real keystroke-saver when used correctly.
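A tiny sketch of my own:

#include <iostream>
#include <string>

std::string greet(const std::string &name, const std::string &greeting = "Hello") {
    return greeting + ", " + name;  // greeting defaults to "Hello" when omitted
}

int main() {
    std::cout << greet("Ada") << "\n";        // Hello, Ada
    std::cout << greet("Ada", "Hi") << "\n";  // Hi, Ada
}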
Function overloading
I suppose there are many things, such as namespaces and templates, that could not be implemented in C.
There shouldn't be too many such things, because early C++ compilers did produce C source code from C++ source code. Basically you can do everything in Assembler -- but you don't WANT to do this.
Quoting Joel, I'd say a powerful "feature" of C++ is operator overloading. That for me means having a language that will drive you insane unless you maintain your own code. For example,
i = j * 5;
… in C you know, at least, that j is being multiplied by five and the results stored in i. But if you see that same snippet of code in C++, you don't know anything. Nothing. The only way to know what's really happening in C++ is to find out what types i and j are, something which might be declared somewhere altogether else. That's because j might be of a type that has operator* overloaded and it does something terribly witty when you try to multiply it. And i might be of a type that has operator= overloaded, and the types might not be compatible so an automatic type coercion function might end up being called. And the only way to find out is not only to check the type of the variables, but to find the code that implements that type, and God help you if there's inheritance somewhere, because now you have to traipse all the way up the class hierarchy all by yourself trying to find where that code really is, and if there's polymorphism somewhere, you're really in trouble because it's not enough to know what type i and j are declared, you have to know what type they are right now, which might involve inspecting an arbitrary amount of code and you can never really be sure if you've looked everywhere thanks to the halting problem (phew!). When you see i=j*5 in C++ you are really on your own, bubby, and that, in my mind, reduces the ability to detect possible problems just by looking at code.
But again, this is a feature. (I know I will be modded down, but at the time of writing only a handful of posts talked about downsides of operator overloading)