Should I use C99 booleans? (And C++ booleans in C++?)

I haven't done much C programming, but when I do, I write 0 when I need a false and 1 when I want a true (e.g. while(1)); in other cases I use things like while(ptr) or if(x).
Should I use C99 booleans, and should I recommend them when helping people new to programming learn C basics (thinking of CS 1?? students)?
I'm pretty sure the Visual Studio compiler supports C99 bools, but do a lot of projects (open source and C apps in industry) compile for C89? If I don't use C bools, should I at least do something like #define TRUE 1 and #define FALSE 0?
Also, what about C++ booleans (for C++)?

In C++ there is no reason not to use it. In C, I just use int for this task, without any #define or anything like that. Variable names like isDefinition are clear enough to indicate what's going on, in my opinion.
Of course, there is nothing wrong with defining your own bool or using the one from <stdbool.h>.
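For reference, here is a minimal sketch of what C99's <stdbool.h> gives you (the header and the bool/true/false names are standard C99; is_even is just an invented example):

#include <stdbool.h>  /* bool, true, false (C99) */
#include <stdio.h>

/* invented example: comparison results already behave like booleans */
static bool is_even(int n)
{
    return n % 2 == 0;
}

int main(void)
{
    bool flag = true;
    if (flag && is_even(4))
        puts("bool works much like the C++ one");
    return 0;
}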

Yes, you should use the language abstractions when they are available. When I use an older C compiler I still create some abstraction for a bool. Using raw literals like 0 and 1 for truth values is a very poor practice.

C++ booleans are fine as they are part of the language and are supported by basically any C++ compiler these days.
C99 booleans seem like a good idea but just keep in mind whether the code you write today will ever need to be used in a C89 project...

The compiler can do better optimization when it knows a variable is boolean. Also, with ints it's easier to introduce bugs in a bitwise context, since an int can inadvertently hold a "true" value other than 1.
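To illustrate that bitwise pitfall, a small sketch (the TRUE define and the flag values are invented for the example):

#include <stdio.h>

#define TRUE 1

int main(void)
{
    unsigned flags = 0x04;     /* bit 2 set */
    int ready = flags & 0x04;  /* logically "true", but its value is 4 */

    if (ready == TRUE)
        puts("never printed: 4 != 1");
    if (ready)
        puts("printed: testing for nonzero is safe");
    return 0;
}

A real bool (C99 _Bool or C++ bool) normalizes any nonzero value to 1, so the first comparison would then behave as expected.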

Use bool in C++. It has been there for years and every C++ compiler supports it.
Use it in C if your code requires other C99 features.
Don't use it in pre-C99 code, since any non-zero value will be interpreted as true, and using defines may lead to bugs that are hard to track down. Some C library functions are documented to return any non-zero int value, and even though it's generally poor practice to write something like
if (var==TRUE) { ... }
things like this may break, and might even behave differently under different compilers/operating systems.
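As a concrete sketch of that library-function case (isalpha is only specified to return nonzero for letters; the TRUE define mirrors the question):

#include <ctype.h>
#include <stdio.h>

#define TRUE 1

int main(void)
{
    int c = 'A';
    /* Some implementations return a value like 1024 here, so this
       comparison can silently fail: */
    if (isalpha(c) == TRUE)
        puts("not guaranteed to print");
    /* The portable test: */
    if (isalpha(c))
        puts("always printed for a letter");
    return 0;
}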

Related

Any reason to define types in a DLL? Like #define MODULE_NAME_INT8 __int8

So I am going through some code at work, and while I am not an expert in C++ yet, I think it is safe to say the code is a bit of a mess. Still, I would like confirmation on something. In the code every type is defined like so:
#define MODULE_NAME_INT8 __int8
The same goes for the unsigned, 16-, 32- and 64-bit variants, as well as bool, null, true and false. Is there possibly some reason for this? It is in a DLL, and other parts of the application are written in C#, so maybe there is something here I have not thought of?
It really seems like a complete waste and a misuse of #define to me. I mean, the type name is in the define; it is basically just renaming the type. Why clutter the code like this? Am I right, or am I missing something?
I think those defines make sense if this code is a bit old: MSVC 2008 and earlier have neither stdint.h nor cstdint, so the standard type int8_t could not be used, and the programmer decided to hide the Windows-specific __int8 behind a define, so that only the define has to change if the code is ported to another platform (or simply to a different compiler).
A double underscore before a type name is compiler-specific. You typically see defines like this one when the code is supposed to be built with different compilers or on different platforms, so that you can build from the same source code and the types remain the same.
Although this clutters the code, using plain int, for example, may be dangerous if you want to build for multiple platforms, because in C++ int has no standard-defined size (it may be 2, 4 or 8 bytes).
C++11 introduced fixed-width types for this purpose: int8_t, int16_t, and so on, but since your code was most probably written long before that, it does not take advantage of them and goes the old-fashioned way.
Having a header file with all the data types used by the application is very common practice in commercial software.
It's more a thinking-ahead measure than one for maintaining compatibility. The purpose is to make the code easily portable and easy to maintain.
Since there is no guarantee that the code will never have to run on a different architecture that uses different sizes for the common data types, the practice is a very good one.
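As a sketch of what such a header might look like (the MODULE_NAME_* names come from the question; the branch layout and the non-MSVC fallback are my assumptions):

/* module_types.h (hypothetical): all platform-specific type names in one place */
#if defined(_MSC_VER)
    #define MODULE_NAME_INT8   __int8
    #define MODULE_NAME_UINT8  unsigned __int8
    #define MODULE_NAME_INT16  __int16
#else
    #include <stdint.h>  /* any compiler with the C99 headers */
    #define MODULE_NAME_INT8   int8_t
    #define MODULE_NAME_UINT8  uint8_t
    #define MODULE_NAME_INT16  int16_t
#endif

Porting to a new compiler then means touching only this header, not every file that mentions a type.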

What is the cost of compiling a C program with a C++ compiler?

I want to use C with templates in an embedded environment, and I wanted to know what the cost of compiling a C program with a C++ compiler is.
I'm interested in knowing whether there will be more code than what the C compiler would generate.
Note that since the program is a C program, I expect to invoke the C++ compiler with exception and RTTI support disabled.
Thanks,
Vicente
The C++ compiler may take longer to compile the code (since it has to build data structures for overload resolution, it can't know ahead of time that the program doesn't use overloads), but the resulting binary should be quite similar.
Actually, one notable optimization difference is aliasing: C99 provides the restrict keyword to enable aliasing optimizations, and standard C++ has no equivalent (though many compilers offer __restrict as an extension). This isn't likely to affect code size much, but it can affect performance significantly.
There's probably no 'cost', assuming that the two compilers are of equivalent quality. The traditional objection to this is that C++ is much more complex and so it's more likely that a C++ compiler will have bugs in it.
Realistically, this is much less of a problem than it used to be, and I tend to do most of my embedded stuff now as a sort of horrible C/C++ hybrid - taking advantage of stronger typing and easier variable declaration rules, without incurring RTTI or exception handling overheads. If you're taking a given compiler (GCC, etc.) and switching it from C to C++ mode, then much of what you have to worry about is common to the two languages anyway.
The only way to really know is for you to try it with the compilers you care about. A quick experiment here on a trivial program shows that the output is the same.
Your program will be linked against the C++ runtime library, not the C one, and the C++ runtime is larger as well.
Also, there are a couple of differences between C and C++ (aliasing was already pointed out), so it may happen that your C code simply does not compile as C++.
If it's C, then you can expect it will be exactly the same.
To elaborate: both C and C++ will forward their parse tree into the same backend that generates code (possibly via another intermediate representation), which means that if the code is functionally identical, the output will look the same (or nearly so).
Templates do "inflate" code, but you would otherwise have to write the same code or use macros to the same effect, so this is no "extra cost". Contrarily, the compiler may be able to optimize templates better in some cases.
A C++ compiler cannot compile C code. It can only compile C++, including a very ugly dialect which is the intersection of C and C++ and the worst of both worlds. Some C code will fail to compile at all with a C++ compiler, for example:
char *s = malloc(len+1);  /* C++ forbids the implicit conversion from void* */
While other C code will compile but mean something different, for example:
sizeof 'a'  /* sizeof(int) in C, but sizeof(char) == 1 in C++ */
I have found this extraordinary document, the Technical Report on C++ Performance. I found there all the answers I was looking for.
Thanks to all who have answered this question.
There will be more code because that is what templates do. They are a stencil for generating (more) code.
Otherwise, you should see no differences between compiling a C program with a C compiler versus compiling with a C++ compiler.
If you don't use any of the extra "features" there should be no difference in size or behavior of the end result.
Although the C code will likely compile to something very similar (assuming there's no exception support enabled), using templates can very rapidly result in large binaries - you have to be careful, because every template instantiation can recursively result in other templates being implicitly instantiated as well.
There was a time when the C++ compiler linked in a bunch of C++ stuff even if the program didn't use it, and you would see binaries that were 10 to 100 times larger than what the C compiler would produce. I think a lot of that has gone away.
Since this is tagged "embedded", I assume it's for embedded systems?
In that case, the major difference between C and C++ is the way C++ treats structs. All structs are treated like classes, meaning they can have constructors.
All instances of structs/classes declared at file scope or as static will then have their constructors called before main() is executed, in a similar manner to the static initialization you already have, no matter whether it's C or C++.
All these constructor calls at boot-up are a major disadvantage in efficiency for embedded systems, where the code resides in NVM and not in RAM. Just like static initialization, they create an ugly, undesired workload peak at the start of the program, where values from NVM are copied into RAM.
There are ways around static initialization in C/C++: most embedded compilers have an option to disable it. But since that is a non-standard setup, all code using statics would then have to be written so that it never relies on initialization values and instead sets all static variables at runtime.
But as far as I know, there is no way around calling constructors without violating the standard.
EDIT:
Here is source code executed in one such C++ system, Freescale HCS08 CodeWarrior 6.3. This code is injected into the user program after static initialization but before main() is executed:
static void Call_Constructors(void) {
    int i;
    ...
    i = (int)(_startupData.nofInitBodies - 1);
    while (i >= 0) {
        (&_startupData.initBodies->initFunc)[i](); /* call C++ constructors */
        i--;
    }
    ...
}
At the very least, this overhead code must be executed at program startup, no matter how efficient the compiler is at converting constructors into static initialization.
C++ runtime start-up differs slightly from C start-up because it must invoke the constructors for global static objects before main() is called. This call loop is trivial and should not add much.
In the case of C++ code that is also entirely C-compilable, no static constructors will be present, so the loop will not iterate.
In most cases apart from that, you will normally see no significant difference; in C++ you only pay for what you use.

Moving from C++ to C

After a few years coding in C++, I was recently offered a job coding in C, in the embedded field.
Putting aside the question of whether it's right or wrong to dismiss C++ in the embedded field, there are some features/idioms in C++ I would miss a lot. Just to name a few:
Generic, type-safe data structures (using templates).
RAII. Especially in functions with multiple return points, e.g. not having to remember to release the mutex on each return point.
Destructors in general. I.e. you write a d'tor once for MyClass, then if a MyClass instance is a member of MyOtherClass, MyOtherClass doesn't have to explicitly deinitialize the MyClass instance - its d'tor is called automatically.
Namespaces.
What are your experiences moving from C++ to C?
What C substitutes did you find for your favorite C++ features/idioms? Did you discover any C features you wish C++ had?
Working on an embedded project, I tried working in all C once, and just couldn't stand it. It was just so verbose that it made it hard to read anything. Also, I liked the optimized-for-embedded containers I had written, which had to turn into much less safe and harder to fix #define blocks.
Code that in C++ looked like:
if(uart[0]->Send(pktQueue.Top(), sizeof(Packet)))
    pktQueue.Dequeue(1);
turns into:
if(UART_uchar_SendBlock(uart[0], Queue_Packet_Top(pktQueue), sizeof(Packet)))
    Queue_Packet_Dequeue(pktQueue, 1);
which many people will probably say is fine but gets ridiculous if you have to do more than a couple "method" calls in a line. Two lines of C++ would turn into five of C (due to 80-char line length limits). Both would generate the same code, so it's not like the target processor cared!
One time (back in 1995), I tried writing a lot of C for a multiprocessor data-processing program. The kind where each processor has its own memory and program. The vendor-supplied compiler was a C compiler (some kind of HighC derivative), their libraries were closed source so I couldn't use GCC to build, and their APIs were designed with the mindset that your programs would primarily be the initialize/process/terminate variety, so inter-processor communication was rudimentary at best.
I got about a month in before I gave up, found a copy of cfront, and hacked it into the makefiles so I could use C++. Cfront didn't even support templates, but the C++ code was much, much clearer.
Generic, type-safe data structures (using templates).
The closest thing C has to templates is to declare a header file with a lot of code that looks like:
/* Note: ## only pastes tokens inside a macro body, so real code has to
   generate these names through helper concatenation macros. */
TYPE * Queue_##TYPE##_Top(Queue_##TYPE##* const this)
{ /* ... */ }
then pull it in with something like:
#define TYPE Packet
#include "Queue.h"
#undef TYPE
Note that this won't work for compound types (e.g. no queues of unsigned char) unless you make a typedef first.
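For example, a queue of unsigned char needs a sketch like this (uchar is an invented name):

typedef unsigned char uchar;  /* give the compound type a single-token name */

#define TYPE uchar
#include "Queue.h"
#undef TYPE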
Oh, and remember, if this code isn't actually used anywhere, then you don't even know if it's syntactically correct.
EDIT: One more thing: you'll need to manually manage instantiation of code. If your "template" code isn't all inline functions, then you'll have to put in some control to make sure that things get instantiated only once so your linker doesn't spit out a pile of "multiple instances of Foo" errors.
To do this, you'll have to put the non-inlined stuff in an "implementation" section in your header file:
#ifdef implementation_##TYPE
/* Non-inlines, "static members", global definitions, etc. go here. */
#endif
And then, in one place in all your code per template variant, you have to:
#define TYPE Packet
#define implementation_Packet
#include "Queue.h"
#undef TYPE
Also, this implementation section needs to be outside the standard #ifndef/#define/#endif litany, because you may include the template header file in another header file, but need to instantiate afterward in a .c file.
Yep, it gets ugly fast. Which is why most C programmers don't even try.
RAII.
Especially in functions with multiple return points, e.g. not having to remember to release the mutex on each return point.
Well, forget your pretty code and get used to all your return points (except the end of the function) being gotos:
TYPE * Queue_##TYPE##_Top(Queue_##TYPE##* const this)
{
    TYPE * result;
    Mutex_Lock(this->lock);
    if(this->head == this->tail)
    {
        result = 0;
        goto Queue_##TYPE##_Top_exit;  /* no colon in a goto statement */
    }
    /* Figure out `result` for real, then fall through to... */
Queue_##TYPE##_Top_exit:
    Mutex_Unlock(this->lock);  /* unlock, not lock, on the way out */
    return result;
}
Destructors in general.
I.e. you write a d'tor once for MyClass, then if a MyClass instance is a member of MyOtherClass, MyOtherClass doesn't have to explicitly deinitialize the MyClass instance - its d'tor is called automatically.
Object construction has to be explicitly handled the same way.
Namespaces.
That's actually a simple one to fix: just tack a prefix onto every symbol. This is the primary cause of the source bloat that I talked about earlier (since classes are implicit namespaces). The C folks have been living with this, well, forever, and probably won't see what the big deal is.
YMMV
I moved from C++ to C for a different reason (some sort of allergic reaction ;) and there are only a few things that I miss, and some things that I gained. If you can stick to C99, there are constructs that let you program quite nicely and safely, in particular:
designated initializers (possibly combined with macros) make initialization of simple classes as painless as constructors
compound literals for temporary variables
for-scope variables may help you do scope-bound resource management, in particular to ensure that mutexes are unlocked or arrays are freed, even on early function returns
__VA_ARGS__ macros can be used to provide default arguments to functions and to do code unrolling
inline functions and macros combine well to replace (sort of) overloaded functions
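A small sketch of the first two items (struct point, POINT and print_point are invented for illustration):

#include <stdio.h>

struct point { int x, y; const char *label; };

/* macro "constructor": fields the caller doesn't name default to 0/NULL */
#define POINT(...) ((struct point){ __VA_ARGS__ })

static void print_point(struct point p)
{
    printf("%s: (%d, %d)\n", p.label ? p.label : "unnamed", p.x, p.y);
}

int main(void)
{
    struct point origin = { .label = "origin" };       /* x and y default to 0 */
    print_point(origin);
    print_point(POINT(.x = 3, .y = 4, .label = "p"));  /* compound literal */
    return 0;
}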
The difference between C and C++ is the predictability of the code's behavior.
It is easier to predict with great accuracy what your code will do in C; in C++ it can be a bit more difficult to come up with an exact prediction.
The predictability of C gives you better control over what your code is doing, but that also means you have to do more yourself.
In C++ you can write less code to get the same thing done, but (at least for me) it is occasionally harder to know how the object code is laid out in memory and what its expected behavior is.
Nothing like the STL exists for C.
There are libraries available which provide similar functionality, but nothing is built in anymore.
I think that would be one of my biggest problems: knowing which tool could solve the problem, but not having it available in the language I have to use.
In my line of work - which is embedded, by the way - I am constantly switching back & forth between C and C++.
When I'm in C, I miss from C++:
templates (including but not limited to STL containers). I use them for things like special counters, buffer pools, etc. (built up my own library of class templates & function templates that I use in different embedded projects)
very powerful standard library
destructors, which of course make RAII possible (mutexes, interrupt disable, tracing, etc.)
access specifiers, to better enforce who can use (not see) what
I use inheritance on larger projects, and C++'s built-in support for it is much cleaner & nicer than the C "hack" of embedding the base class as the first member (not to mention automatic invocation of constructors, init. lists, etc.) but the items listed above are the ones I miss the most.
Also, probably only about a third of the embedded C++ projects I work on use exceptions, so I've become accustomed to living without them, so I don't miss them too much when I move back to C.
On the flip side, when I move back to a C project with a significant number of developers, there are whole classes of C++ problems that I'm used to explaining to people which go away. Mostly problems due to the complexity of C++, and people who think they know what's going on, but they're really at the "C with classes" part of the C++ confidence curve.
Given the choice, I'd prefer using C++ on a project, but only if the team is pretty solid on the language. Also of course assuming it's not an 8K μC project where I'm effectively writing "C" anyway.
A couple of observations:
Unless you plan to use your C++ compiler to build your C (which is possible if you stick to a well-defined subset of C++), you will soon discover things that your compiler allows in C that would be a compile error in C++.
No more cryptic template errors (yay!)
No (language supported) object oriented programming
Pretty much the same reasons I have for using C++ or a mix of C/C++ rather than pure C. I can live without namespaces, but I use them all the time if the coding standard allows it. The reason is that you can write much more compact code in C++. This is very useful for me: I write servers in C++, which tend to crash now and then. At that point it helps a lot if the code you are looking at is short and concise. For example, consider the following code:
uint32_t
ScoreList::FindHighScore(
    uint32_t p_PlayerId)
{
    MutexLock lock(m_Lock);
    uint32_t highScore = 0;
    for(int i = 0; i < m_Players.Size(); i++)
    {
        Player& player = m_Players[i];
        if(player.m_Score > highScore)
            highScore = player.m_Score;
    }
    return highScore;
}
In C that looks like:
uint32_t
ScoreList_getHighScore(
    ScoreList* p_ScoreList)
{
    uint32_t highScore = 0;
    Mutex_Lock(p_ScoreList->m_Lock);
    for(int i = 0; i < Array_GetSize(p_ScoreList->m_Players); i++)
    {
        Player* player = p_ScoreList->m_Players[i];
        if(player->m_Score > highScore)
            highScore = player->m_Score;
    }
    Mutex_UnLock(p_ScoreList->m_Lock);
    return highScore;
}
Not a world of difference: one more line of code, but that tends to add up. Normally you try your best to keep it clean and lean, but sometimes you have to do something more complex. And in those situations you value your line count. One more line is one more thing to look at when you try to figure out why your broadcast network suddenly stops delivering messages.
Anyway I find that C++ allows me to do more complex things in a safe fashion.
Yes! I have experience with both of these languages, and what I found is that C++ is the friendlier language. It offers more features. It is fair to say that C++ is a superset of C, as it provides additional features like polymorphism, inheritance, operator and function overloading, and user-defined data types, which C does not really support. Thousands of lines of code can be reduced to a few with the help of object-oriented programming; that is the main reason for moving from C to C++.
I think the main reason C++ has a harder time being accepted in the embedded environment is the lack of engineers who understand how to use C++ properly.
Yes, the same reasoning can be applied to C as well, but luckily there aren't that many pitfalls in C with which you can shoot yourself in the foot. With C++, on the other hand, you need to know when not to use certain features.
All in all, I like C++. I use it in the O/S services layer, drivers, management code, etc.
But if your team doesn't have enough experience with it, it's going to be a tough challenge.
I have had experience with both. When the rest of the team wasn't ready for it, it was a total disaster. On the other hand, it was a good experience.
Certainly, the desire to escape complex/messy syntax is understandable. Sometimes C can appear to be the solution. However, C++ is where the industry support is, including tooling and libraries, so that is hard to work around.
C++ has so many features today including lambdas.
A good approach is to leverage C++ itself to make your code simpler. Objects are good at isolating things under the hood, so that at a higher level the code is simpler. The C++ Core Guidelines recommend concrete (simple) objects, so that approach can help.
The level of complexity is under the engineer's control. If multiple inheritance (MI) is useful in a scenario and one prefers that option, then one may use MI.
Alternatively, one can define interfaces, inherit from the interface(s), contain the implementing objects (composition/aggregation), and expose them through the interface using inline wrappers. The inline wrappers compile down to nothing, i.e., to direct use of the internal (contained) object, yet the containing object appears to have that functionality as if multiple inheritance were used.
C++ also has namespaces, so one should leverage namespaces even if coding in a C-like style.
One can use the language itself to create simpler patterns and the STL is full of examples: array, vector, map, queue, string, unique_ptr,... And one can control (to a reasonable extent) how complex their code is.
So, going back to C is not the way, nor is it necessary. One may use C++ in a C-like way, or use C++ multiple inheritance, or use any option in-between.

How to limit the impact of implementation-dependent language features in C++?

The following is an excerpt from Bjarne Stroustrup's book, The C++ Programming Language:
Section 4.6:
Some of the aspects of C++'s fundamental types, such as the size of an int, are implementation-defined (§C.2). I point out these dependencies and often recommend avoiding them or taking steps to minimize their impact. Why should you bother? People who program on a variety of systems or use a variety of compilers care a lot because if they don't, they are forced to waste time finding and fixing obscure bugs. People who claim they don't care about portability usually do so because they use only a single system and feel they can afford the attitude that "the language is what my compiler implements." This is a narrow and shortsighted view. If your program is a success, it is likely to be ported, so someone will have to find and fix problems related to implementation-dependent features. In addition, programs often need to be compiled with other compilers for the same system, and even a future release of your favorite compiler may do some things differently from the current one. It is far easier to know and limit the impact of implementation dependencies when a program is written than to try to untangle the mess afterwards.
It is relatively easy to limit the impact of implementation-dependent language features.
My question is: how does one limit the impact of implementation-dependent language features? Please mention implementation-dependent language features, then show how to limit their impact.
A few ideas:
Unfortunately you will have to use macros to avoid some platform-specific or compiler-specific issues. You can look at the headers of the Boost libraries to see that it can quite easily get cumbersome; for example, look at the files:
boost/config/compiler/gcc.hpp
boost/config/compiler/intel.hpp
boost/config/platform/linux.hpp
and so on
The integer types tend to be messy across different platforms; you will have to define your own typedefs or use something like Boost's cstdint.hpp
If you decide to use any library, check that the library is supported on the given platform
Use libraries with good support and clearly documented platform support (for example Boost)
You can abstract yourself from some C++ implementation-specific issues by relying heavily on libraries like Qt, which provide an "alternative" in the sense of types and algorithms. They also attempt to make coding in C++ more portable. Does it work? I'm not sure.
Not everything can be done with macros. Your build system will have to be able to detect the platform and the presence of certain libraries. Many would suggest autotools for project configuration; I, on the other hand, recommend CMake (rather nice language, no more M4)
endianness and alignment might be an issue if you do some low-level meddling (i.e. reinterpret_cast and things like it ("friends" being an unfortunate word in a C++ context))
throw in a lot of warning flags for the compiler; for gcc I would recommend at least -Wall -Wextra. But there is much more; see the documentation of the compiler or this question.
you have to watch out for everything that is implementation-defined and implementation-dependent. If you want the truth, only the truth, nothing but the truth, then go to the ISO standard.
Well, the variable-size issue mentioned above is fairly well known, the common workaround being to provide typedef'd versions of the basic types that have well-defined sizes (normally advertised in the typedef name). This is done using preprocessor macros to give different code visibility on different platforms. E.g.:
#ifdef __WIN32__
typedef int int32;
typedef char char8;
//etc
#endif
#ifdef __MACOSX__
//different typedefs to produce same results
#endif
Other issues are normally solved in the same way too (i.e. using preprocessor tokens to perform conditional compilation)
The most obvious implementation dependency is the size of integer types. There are many ways to handle this. The most obvious way is to use typedefs to create ints of the various sizes:
typedef signed short int16_t;
typedef unsigned short uint16_t;
The trick here is to pick a convention and stick to it. Picking the convention is the hard part: INT16, int16, int16_t, t_int16, Int16, etc. C99 has the stdint.h file, which uses the int16_t style. If your compiler has this file, use it.
Similarly, you should be pedantic about using other standard defines such as size_t, time_t, etc.
The other trick is knowing when not to use these typedefs. A loop control variable used to index an array should just use the raw int type, so the compiler will generate the best code for your processor. for (int32_t i = 0; i < x; ++i) could generate a lot of needless code on a 64-bit processor, just like using int16_t would on a 32-bit processor.
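A short sketch of the distinction (the sample array and function are invented):

#include <stdint.h>

int32_t samples[1024];  /* storage/interchange: the exact width matters */

int64_t sum_samples(void)
{
    int64_t sum = 0;
    for (int i = 0; i < 1024; ++i)  /* loop counter: plain int is fine and fast */
        sum += samples[i];
    return sum;
}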
A good solution is to use common headers that define typedef'd types as necessary.
For example, including sys/types.h is an excellent way to deal with this, as is using portable libraries.
There are two approaches to this:
define your own types with a known size and use them instead of the built-in types (like typedef int int32, #if-ed for various platforms)
use techniques which are not dependent on the type size
The first is very popular; however the second, when possible, usually results in cleaner code. This includes:
do not assume a pointer can be cast to int
do not assume you know the byte size of individual types; always use sizeof to check
when saving data to files or transferring it across a network, use techniques which are portable across changing data sizes (like saving/loading text files)
One recent example of this is writing code which can be compiled for both x86 and x64 platforms. The dangerous part here is the size of pointers and size_t: be prepared that it can be 4 or 8 depending on the platform. When casting or differencing pointers, never cast to int; use intptr_t and similar typedef'd types instead.
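A quick sketch of that point (the buffer and offsets are invented):

#include <stddef.h>  /* ptrdiff_t */
#include <stdint.h>  /* intptr_t */
#include <stdio.h>

int main(void)
{
    char buf[16];
    char *p = buf + 8;

    intptr_t  ip = (intptr_t)p;  /* wide enough for a pointer on x86 and x64 */
    ptrdiff_t d  = p - buf;      /* pointer difference: never plain int */

    printf("offset: %td\n", d);  /* %td is the C99 conversion for ptrdiff_t */
    (void)ip;
    return 0;
}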
One of the key ways of avoiding dependency on particular data sizes is to read & write persistent data as text, not binary. If binary data must be used, then all read/write operations must be centralised in a few methods, using approaches like the typedefs already described here.
A second thing you can do is enable all your compiler warnings. For example, using the -pedantic flag with g++ will warn you about lots of potential portability problems.
If you're concerned about portability, things like the size of an int can be determined and dealt with without much difficulty. A lot of C++ compilers also support C99 features like the fixed-width int types: int8_t, uint8_t, int16_t, uint32_t, etc. If yours doesn't support them natively, you can always include <cstdint> or <sys/types.h>, which, more often than not, has those typedef'd. <limits.h> has the ranges of all the basic types.
The standard only guarantees the minimum size of a type, which you can always rely on: sizeof(char) == 1, and sizeof(char) <= sizeof(short) <= sizeof(int) <= sizeof(long). char must be at least 8 bits, short and int must be at least 16 bits, and long must be at least 32 bits.
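Those guarantees can even be checked at compile time. A sketch using C11's _Static_assert (C++ spells it static_assert; older compilers need the negative-array-size trick instead):

#include <limits.h>

_Static_assert(CHAR_BIT >= 8, "char has at least 8 bits");
_Static_assert(sizeof(short) * CHAR_BIT >= 16, "short has at least 16 bits");
_Static_assert(sizeof(int)   * CHAR_BIT >= 16, "int has at least 16 bits");
_Static_assert(sizeof(long)  * CHAR_BIT >= 32, "long has at least 32 bits");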
Other things that might be implementation-defined include the ABI and name-mangling schemes (the behavior of extern "C++" specifically), but unless you're working with more than one compiler, that's usually a non-issue.
The following is also an excerpt from Bjarne Stroustrup's book, The C++ Programming Language:
Section 10.4.9:
No implementation-independent guarantees are made about the order of construction of nonlocal objects in different compilation units. For example:
// file1.c:
Table tbl1;
// file2.c:
Table tbl2;
Whether tbl1 is constructed before tbl2 or vice versa is implementation-dependent. The order isn’t even guaranteed to be fixed in every particular implementation. Dynamic linking, or even a small change in the compilation process, can alter the sequence. The order of destruction is similarly implementation-dependent.
A programmer may ensure proper initialization by implementing the strategy that the implementations usually employ for local static objects: a first-time switch. For example:
class Zlib {
    static bool initialized;
    static void initialize() { /* initialize */ initialized = true; }
public:
    // no constructor
    void f()
    {
        if (initialized == false) initialize();
        // ...
    }
    // ...
};
If there are many functions that need to test the first-time switch, this can be tedious, but it is often manageable. This technique relies on the fact that statically allocated objects without constructors are initialized to 0. The really difficult case is the one in which the first operation may be time-critical so that the overhead of testing and possible initialization can be serious. In that case, further trickery is required (§21.5.2).
An alternative approach for a simple object is to present it as a function (§9.4.1):
int& obj() { static int x = 0; return x; } // initialized upon first use
First-time switches do not handle every conceivable situation. For example, it is possible to create objects that refer to each other during construction. Such examples are best avoided. If such objects are necessary, they must be constructed carefully in stages.

Why doesn't anyone upgrade their C compiler with advanced features?

struct elem
{
    int i;
    char k;
};
elem user;        // compile error!
struct elem user; // this is correct
In the above piece of code we are getting an error for the first declaration. But this error doesn't occur with a C++ compiler. In C++ we don't need to use the keyword struct again and again.
So why doesn't anyone update their C compiler so that we can use structures without the keyword, as in C++?
Why don't the C compiler developers remove some of the glitches of C, like the one above, and add some advanced features, without damaging the original concept of C?
Why is it the same old compiler, never updated since the 1970s?
Look at Visual Studio etc. It is frequently updated with new releases, and with every new release we have to learn some new function usage (even though that is a burden, we can cope with it). We would likewise get used to an updated compiler if there were one.
Don't take this as a silly question. Why is it not possible? It could be developed without any incompatibility issues (without affecting code that was developed on the present/old compiler).
OK, let's develop a new C language, C+, which sits between C and C++, removes all the glitches of C, and adds some advanced features from C++, while staying useful for specific applications like system-level programming, embedded systems, etc.
Because it takes years for a new standard to evolve.
They are working on a new C++ standard (C++0x) and also on a new C standard (C1x), but if you remember that it usually takes between 5 and 10 years for each iteration, I don't expect to see them before 2010 or so.
Also, just like in any democracy, there are compromises in a standard. You've got the hardliners who say "If you want all that fancy syntactic sugar, go for a toy language like Java or C# that takes you by the hand and even buys you a lollipop", whereas others say "The language needs to be easier and less error-prone to survive in these days of rapidly shrinking development cycles".
Both sides are partially right, so standardization is a very long battle that takes years and leads to many compromises. That applies to everything where multiple big parties are involved; it's not just limited to C/C++.
typedef struct
{
    int i;
    char k;
} elem;
elem user;
will work nicely. As others said, it's about the standard: when you implement this in VS2008, you can't use it in GCC, and even if you implement it in GCC, it certainly won't compile anywhere else. The method above will work everywhere.
On the other side: now that we have the C99 standard with a bool type, declarations in the for() header, and declarations in the middle of blocks, why not this feature as well?
First and foremost, compilers need to support the standard. That's true even if the standard seems awkward in hindsight. Second, compiler vendors do add extensions. For example, many compilers support this:
(char *) p += 100;
to move a pointer by 100 bytes instead of 100 of whatever type p is a pointer to. Strictly speaking that's non-standard because the cast removes the lvalue-ness of p.
The problem with non-standard extensions is that you can't count on them. That's a big problem if you ever want to switch compilers, make your code portable, or use third-party tools.
C is largely a victim of its own success. One of the main reasons to use C is portability. There are C compilers for virtually every hardware platform and OS in existence. If you want to be able to run your code anywhere, you write it in C. This creates enormous inertia. It's almost impossible to change anything without sacrificing one of the best things about using the language in the first place.
The result for software developers is that you may need to write to the lowest common denominator, typically ANSI C (C89). For example: Parrot, the virtual machine that will run the next version of Perl, is being written in ANSI C. Perl6 will have an enormously powerful and expressive syntax with some mind-bending concepts baked right into the language. The implementation, though, is being built using a language that is almost the complete opposite. The reason is that this will make it possible for perl to run anywhere: PCs, Macs, Windows, Linux, Unix, VAX, BSD...
This "feature" will never be adopted by future C standards for one reason only: it would badly break backward compatibility. In C, struct tags have separate namespaces to normal identifiers, and this may or may not be considered a feature. Thus, this fragment:
struct elem
{
    int foo;
};
int elem;
is perfectly fine in C, because these two elems are in separate namespaces. If a future standard allowed you to declare a struct elem without the struct qualifier or an appropriate typedef, the above program would fail, because elem is being used as an identifier for an int.
An example where a future C standard did in fact break backward compatibility is when C99 disallowed functions without an explicit return type, i.e.:
foo(void); /* declare a function foo that takes no parameters and returns an int */
This is illegal in C99. However, it is trivial to make it C99-compliant just by adding an int return type. It is not so trivial to "fix" C programs if struct tags suddenly no longer had a separate namespace.
I've found that when I've implemented non-standard extensions to C and C++, even when people request them, they do not get used. The C and C++ world definitely revolves around strict standard compliance. Many of these extensions and improvements have found fertile ground in the D programming language.
Walter Bright, Digital Mars
Most people still using C use it because they're either:
Targeting a very specific platform (i.e., embedded), and therefore must use the compiler provided by that platform vendor
Concerned about portability, in which case a non-standard compiler would defeat the purpose
Very comfortable with plain C and see no reason to change, in which case they just don't want to.
As already mentioned, C has a standard that needs to be adhered to. But can't you just write your code in slightly modified C syntax and use a C++ compiler, so that things like
struct elem
{
    int i;
    char k;
};
elem user;
will compile?
Actually, many C compilers do add features: doesn't pretty much every C compiler support C++-style // comments?
Most of the features added to updates of the C standard (C99 being the most recent) come from extensions that 'caught on'.
For example, even though the compiler I'm using right now on an embedded platform does not claim to conform to the C99 standard (and it is missing quite a bit of it), it does add the following extensions (all of which are borrowed from C++ or C99) to its 'C90' support:
declarations mixed with statements
anonymous structs and unions
inline
declaration in the for loop initialization expression
and, of course, C++ style // comments
The problem I run into with this is that when I try to compile those files using MSVC (either for testing or because the code is useful on more than just the embedded platform), it'll choke on most of them (I'm honestly not sure about anonymous structs/unions).
So, extensions do get added to C compilers, it's just that they're done at different rates and in different ways (so code using them becomes more difficult to port) and the process of moving them into a standard occurs at a near glacial pace.
We have typedef for exactly this purpose.
And please do not change the standard; we have enough compatibility problems already...
Re: Manoj Doubts' comment
I have no problem with you or somebody else defining C+ or C- or C-whatever, as long as you don't touch C :)
I still need a language capable of completing my task: the same (not small) piece of code has to run on tens of operating systems, compiled by a significant number of different compilers, and on tens of different hardware platforms. At the moment there is only one language that lets me complete that task, and I prefer not to experiment with this ability :) Especially for the reason you provided. Do you really think that the ability to write
foo test;
instead of
struct foo test;
will make your code better from any point of view?
The following program outputs "1" when compiled as standard C, and something else, probably "2", when compiled as C++ or with your suggested syntax. That's why the C language can't make this change: it would give new meaning to existing code. And that's bad!
#include <stdio.h>

typedef struct
{
    int a;
    int b;
} X;

int main(void)
{
    union X
    {
        int a;
        int b;
    };
    X x;
    x.a = 1;
    x.b = 2;
    printf("%d\n", x.a);
    return 0;
}
Because C is standardized. A compiler could offer that feature, and some do, but using it means that the source code doesn't follow the standard and could only be compiled with that vendor's compiler.
Well,
1 - None of the compilers in use today are from the 70s...
2 - There are standards for both the C and C++ languages, and compilers are developed according to those standards. They can't just change some behaviour!
3 - What happens if you develop on VS2008 and then try to compile that code with another compiler whose last version was released 10 years ago?
4 - What happens when you play with the options on the C/C++ / Language tab?
5 - Why don't Microsoft compilers target all possible processors? They only target x86, x86_64 and Itanium, that's all...
6 - Believe me, this is not even considered a problem!!!
You don't need to develop a new language if you want to use C with C++ typedefs and the like (but without classes, templates etc).
Just write your C-like code and use the C++ compiler.
As far as new functionality in new releases goes, Visual C++ is not completely standard-conforming (see http://msdn.microsoft.com/en-us/library/x84h5b78.aspx). By the time Visual Studio 2010 is out, the next C++ standard will likely have been approved, giving the VC++ team more functionality to implement.
There are also changes to the Microsoft libraries (which have little or nothing to do with the standard) and to what the compiler puts out (C++/CLI). There's plenty of room for changes without deviating from the standard.
Nor do you need anything like C+. Just write in C, use whatever C++ features you like, and compile as C++. One of Bjarne Stroustrup's original design goals for C++ was to make it unnecessary to write anything in C. It should compile perfectly efficiently provided you limit the C++ features you use (and even then it will compile very efficiently; modern C++ compilers do a very good job).
And the unanswered question: Why would you want to use non-standard C, when you could write standard C or standard C++ with almost equal facility?
This sounds like the embrace-and-extend concept.
Life under your scenario:
I develop code using a C compiler that has the C "glitches" removed.
I move to a different platform with another C compiler that also has the C "glitches" removed, but in a slightly different way.
My code doesn't compile or runs differently on the new platform, so I waste time "porting" it to the new platform.
Some vendors actually like to fix "glitches" because this tends to lock people into a single platform.
If you want to write in standard C, follow the standards. That's it.
If you want more freedom, use C#, C++.NET, or anything else your hardware supports.