I have been working in C and C++ and when it comes to file handling I get confused. Let me state the things I know.
In C, we use functions:
fopen, fclose, fwrite, fread, ftell, fseek, fprintf, fscanf, feof, fileno, fgets, fputs, fgetc, fputc.
FILE *fp for file pointer.
Modes like r, w, a
I know when to use these functions (Hope I didn't miss anything important).
In C++, we use functions / operators:
fstream f
f.open, f.close, f>>, f<<, f.seekg, f.seekp, f.tellg, f.tellp, f.read, f.write, f.eof.
Modes like ios::in, ios::out, ios::binary, etc...
So is it possible (recommended) to use C compatible file operations in C++?
Which is more widely used and why?
Is there anything other than these that I should be aware of?
Sometimes there's existing code expecting one or the other that you need to interact with, which can affect your choice, but in general the C++ versions wouldn't have been introduced if there weren't issues with the C versions that they could fix. Improvements include:
RAII semantics, which means e.g. fstreams close the files they manage when they leave scope
modal ability to throw exceptions when errors occur, which can make for cleaner code focused on the typical/successful processing (see http://en.cppreference.com/w/cpp/io/basic_ios/exceptions for API function and example)
type safety, such that how input and output is performed is implicitly selected using the variable type involved
C-style I/O has potential for crashes: e.g. int my_int = 32; printf("%s", my_int);, where %s tells printf to expect a pointer to an ASCIIZ character buffer but my_int appears instead. Firstly, the argument-passing convention may mean ints are passed differently from const char*s; secondly, sizeof(int) may not equal sizeof(const char*); and finally, even if printf extracts 32 as a const char*, at best it will just print random garbage from memory address 32 onwards until it coincidentally hits a NUL character - far more likely, the process will lack permission to read some of that memory and will crash. Modern C compilers can sometimes validate the format string against the provided arguments, reducing this risk. (A short sketch contrasting this with the stream version follows this list.)
extensibility for user-defined types (i.e. you can teach streams how to handle your own classes)
support for dynamically sizing receiving strings based on the actual input, whereas the C functions tend to need hard-coded maximum buffer sizes and loops in user code to assemble arbitrary sized input
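To make the crash scenario and the dynamic-sizing point above concrete, here is a minimal sketch of my own (not taken from the answer); the unsafe call is left commented out on purpose:

#include <cstdio>
#include <iostream>
#include <string>

int main()
{
    int my_int = 32;

    // C-style: the format string promises a const char*, but an int is passed.
    // Undefined behaviour, very likely a crash. Many compilers warn, but only
    // when the format string is a literal they can inspect.
    // std::printf("%s", my_int);

    // C++-style: the static type of my_int selects the overload, so this kind
    // of mismatch cannot even be expressed.
    std::cout << my_int << '\n';

    // Dynamically sized input: std::string grows to fit the whole line, with
    // no hard-coded buffer size as fgets() would need.
    std::string line;
    if (std::getline(std::cin, line))
        std::cout << "read " << line.size() << " characters\n";
}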
Streams are also sometimes criticised for:
verbosity of formatting, particularly "io manipulators" setting width, precision, base, padding, compared to the printf-style format strings
a sometimes confusing mix of manipulators that persist their settings across multiple I/O operations and others that are reset after each operation
lack of a convenience class for RAII pushing/saving and later popping/restoring the manipulator state
being slow, as Ben Voigt comments and documents here
The performance differences between printf()/fwrite style I/O and C++ IO streams formatting are very much implementation dependent.
Some implementations (Visual C++, for instance) build their IO streams on top of FILE * objects, and this tends to increase the run-time complexity of their implementation. Note, however, that there was no particular constraint to implement the library in this fashion.
In my own opinion, the benefits of C++ I/O are as follows:
Type safety.
Flexibility of implementation. Code can be written to do specific formatting or input to or from a generic ostream or istream object. The application can then invoke this code with any kind of derived stream object. If the code that I have written and tested against a file now needs to be applied to a socket, a serial port, or some other kind of internal stream, you can create a stream implementation specific to that kind of I/O. Extending the C style I/O in this fashion is not even close to possible.
Flexibility in locale settings: the C approach of using a single global locale is, in my opinion, seriously flawed. I have experienced cases where I invoked library code (a DLL) that changed the global locale settings underneath my code and completely messed up my output. A C++ stream allows you to imbue() any locale into a stream object. (A minimal sketch follows this list.)
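A minimal sketch of the per-stream locale idea (assuming the environment names a valid locale; std::locale("") can throw otherwise):

#include <iostream>
#include <locale>
#include <sstream>

int main()
{
    std::ostringstream out;
    out.imbue(std::locale(""));      // only this stream is affected, not the
                                     // global C locale other libraries may touch
    out << 1234567;                  // grouping comes from the locale's numpunct facet
    std::cout << out.str() << '\n';  // e.g. "1,234,567" under an en_US locale
}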
An interesting critical comparison can be found here.
C++ FQA io
Not exactly polite, but it makes you think...
Disclaimer
The C++ FQA (a critical response to the C++ FAQ) is often considered by the C++ community a "stupid joke issued by a silly guy who doesn't even understand what C++ is or wants to be" (cit. from the FQA itself).
This kind of argument is often used to start (or escape from) religious battles between C++ believers, believers in other languages, and language atheists, each in his own humble opinion convinced of being somehow superior to the others.
I'm not interested in such battles; I just like to stimulate critical reasoning about the pro and con arguments. The C++ FQA - in this sense - has the advantage of placing the FQA and the FAQ one over the other, allowing an immediate comparison. And that is the only reason why I referenced it.
Following TonyD's comments below (thanks for them - they made me realize my intention needs clarification...), it must be noted that the OP is not just discussing << and >> (I talk about them in my comments just for brevity) but the entire function set that makes up the I/O model of C and C++.
With this idea in mind, think also of other "imperative" languages (Java, Python, D, ...) and you'll see they are all closer to the C model than to the C++ one, sometimes even making it type-safe (which the C model is not, and that's its major drawback).
What my point is all about
At the time C++ went mainstream (1996 or so), the <iostream.h> library (note the ".h": pre-ISO) lived in a language where templates were not yet fully available and there was essentially no type-safe support for variadic functions (we had to wait until C++11 to get that), but which did have type-safe overloaded functions.
The idea of overloading << and returning its first parameter over and over is - in fact - a way to chain a variable set of arguments using only a binary function that can be overloaded in a type-safe manner. From that idea, extending to whatever "state management function" you need (like width() or precision()) through manipulators (like setw) appears as a natural consequence. These points - whatever you may think of the FQA author - are real facts. And it is also a matter of fact that the FQA is the only site I found that talks about them.
That said, years later, when the D language was designed and started offering variadic templates, the writef function was added to the D standard library, providing a printf-like syntax while also being perfectly type-safe. (See here.)
Nowadays C++11 also has variadic templates ... so the same approach can be put in place in just the same way.
Moral of the story
Both the C++ and C I/O models appear "outdated" with respect to a modern programming style.
C retains speed; C++ retains type safety and a "more flexible abstraction for localization" (but I wonder how many C++ programmers in the world are aware of locales and facets...) at a runtime cost (just trace with a debugger the << of a number, going through stream, buffer, locale and facet ... and all the related virtual functions!).
The C model is also easily extensible to parametric messages (ones where the order of the parameters depends on the localization of the text they are in) with format strings like
#1%d #2%i allowing scripting like "text #2%i text #1%d ..."
The C++ model has no concept of a "format string": the parameter order is fixed and intermixed with the text.
But C++11 variadic templates can be used to provide support that:
can offer both compile-time and run-time locale selection
can offer both compile-time and run-time parametric order
can offer compile-time parameter type safety
... all using a simple format string methodology (a minimal sketch follows below).
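As a rough illustration of that claim, here is a minimal C++11 sketch of a type-safe printf-like function built on variadic templates (my own sketch, using a bare % placeholder and ignoring escaping and positional reordering):

#include <iostream>

// Base case: no arguments left, print the rest of the format string.
void safe_printf(const char* fmt)
{
    std::cout << fmt;
}

// Each '%' consumes one argument; the argument's static type picks the
// operator<< overload, so there is no %d/%s mismatch to get wrong.
template <typename T, typename... Rest>
void safe_printf(const char* fmt, const T& value, const Rest&... rest)
{
    for (; *fmt != '\0'; ++fmt)
    {
        if (*fmt == '%')
        {
            std::cout << value;
            safe_printf(fmt + 1, rest...);   // recurse for the remaining arguments
            return;
        }
        std::cout << *fmt;
    }
}

int main()
{
    safe_printf("% files removed from directory \"%\"\n", 3, "tmp");
}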
Is it time to standardize a new C++ I/O model?
I was coding some C++ for a small hobby project when I noticed that I'm using C-style operations to access IO (printf, fopen, etc.).
Is it considered "bad practice" to involve C functions in C++ projects? What are the advantages of using streams over C-style IO access?
This is a heated topic.
Some people prefer to use the C++ IO since they are type-safe (you can't have divergence between the type of the object and the type specified in the format string), and flow more naturally with the rest of the C++ way of coding.
However, there are also arguments for the C IO functions (my personal favorites). Some of them are:
They integrate more easily with localisation, as the whole string to localise is not broken up into smaller strings, and with some implementations the localizer can reorder the inserted values, move them around in the string, ...
You can directly see the format of the text that will be written (this can be really hard with stream operators).
As there is no inlining, and only one instance of the printf function, the generated code is smaller (this can be important in embedded environment).
They are faster than the C++ functions in some implementations.
Personally, I wouldn't consider it bad practice to use C streams in C++ code. Some organisations even recommend using them over C++ streams. What I would consider bad style is to use both in the same project. Consistency is the key here, I think.
As others have noted, in a relatively large project you would probably not use them directly; you would use a set of wrapper functions (or classes) that best fit your coding standard and your needs (localisation, type safety, ...). You can use either IO interface to implement this higher-level interface, but you'll probably only use one.
Edit: adding some information about the advantages of the printf formatting function family relating to localisation. Please note that this information is only valid for some implementations.
You can use %m$ instead of % to reference parameters by index instead of referencing them sequentially. This can be used to reorder values in the formatted string. The following program writes Hello World! to the standard output.
#include <stdio.h>
int main() {
printf("%2$s %1$s\n", "World!", "Hello");
return 0;
}
Consider translating this C++ code:
if (nb_files_deleted == 1)
stream << "One file ";
else
    stream << nb_files_deleted << " files ";
stream << "removed from directory \"" << directory << "\"\n";
This can be really hard. With printf (and a library like gettext to handle the localization), the code is not mixed into the string. We can thus pass the string to the localization team and won't have to update the code if there are special cases in some languages (in some languages, if the count of objects is 0 you use a plural form; in other languages there are three forms: one for singular, one when there are two objects, and a plural form, ...).
printf (ngettext ("One file removed from directory \"%2$s\"",
"%1$d files removed from directory \"%2$s\"",
n),
n, dir);
printf and friends are hideously unsafe compared to <iostream>, and cannot be extended; plus, of course, fopen and friends have no RAII, which means that unless you've proved with a profiler that you definitely need the performance difference (and proved that it exists on your platform and in your code), you'd have to be an idiot to use printf.
Edit: Localization is an interesting thing that I hadn't considered. I've never localized any code and cannot comment on the relative localization abilities of printf and <iostream>.
Nothing can be considered bad practice if it has a definite purpose. I mean, if IO is the bottleneck of the program, then yes, C-style IO works faster than C++ IO. But if it is not, I would go with the C++ stream approach. Cuz it's cuter :)
For a small hobby project I would probably go with the more type-safe C++ io streams.
Funny enough, I've never seen a non-trivial real-life project that uses either of them. In all cases we used some abstractions built on top of the native OS API for IO.
Advantages
Type safety: the argument types of C++ streaming operations are checked at compile time, while printf arguments are passed through a variadic ellipsis (...), causing undefined behaviour if they do not match the format string.
Resource management: C++ stream objects have destructors to close file handles, free buffers, and what have you. C streams require you to remember to call fclose. (The sketch below contrasts the two.)
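A minimal sketch of that contrast (hypothetical file names, error handling reduced to the open check):

#include <cstdio>
#include <fstream>
#include <string>

// C: every exit path has to remember fclose().
void write_c(const char* path, const std::string& text)
{
    std::FILE* f = std::fopen(path, "w");
    if (!f)
        return;
    std::fputs(text.c_str(), f);
    std::fclose(f);                  // easy to forget on early returns
}

// C++: the ofstream destructor closes the file on every path, including exceptions.
void write_cpp(const char* path, const std::string& text)
{
    std::ofstream f(path);
    if (!f)
        return;
    f << text;
}                                    // closed automatically here

int main()
{
    write_c("c_version.txt", "hello\n");
    write_cpp("cpp_version.txt", "hello\n");
}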
Disadvantages
Performance: this depends on the implementation, of course, but I've found formatting with C++ streams to be considerably slower than the equivalent printf formatting.
IMHO, a real C++ programmer tries to do things the idiomatic C++ way; the converted C programmer tries to cling to old ways of doing things. It has to do with readability and consistency.
For one, you don't have to convert C++ objects (notably strings) to C-compatible forms first.
Is it considered "bad practice" to involve C functions in C++ projects?
Usually, the file IO code should be encapsulated in a class or function implementation. I wouldn't consider any choices you make in the encapsulated implementations to be "bad practice", I reserve that term for what affects the user of your library or code (i.e. the interface). If you are exposing your file IO mechanism in the interface, then, IMO, it's bad practice whether you use IO stream or C-style IO functions.
I would rather say that C-style IO functions are (probably always) the worse choice.
What are the advantages of using streams over C-style IO access?
First, they integrate better with standard C++ constructs such as std::string. They integrate fairly well with the STL <algorithm> facilities. And they allow you to create encapsulated custom read/write operators (<< and >>) such that your custom classes almost look like primitive types when it comes to doing IO operations.
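For instance, a custom inserter/extractor pair might look like this minimal sketch (Point is an invented type, not something from the question):

#include <iostream>

struct Point { double x, y; };

// Teach any ostream (file, string stream, whatever) how to print a Point.
std::ostream& operator<<(std::ostream& os, const Point& p)
{
    return os << '(' << p.x << ", " << p.y << ')';
}

// Minimal parsing: two whitespace-separated numbers.
std::istream& operator>>(std::istream& is, Point& p)
{
    return is >> p.x >> p.y;
}

int main()
{
    Point p{1.5, -2.0};
    std::cout << p << '\n';          // prints (1.5, -2)
}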
Finally, IO streams in C++ can use the exception mechanism to report errors in the stream or in read/write operations. This makes the file IO code much nicer by avoiding the terrible look of error-code mechanisms (sequences of if-statements and ugly while-loops that check the error code after each operation).
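A minimal sketch of that exception-based style ("data.txt" is just a placeholder name):

#include <fstream>
#include <iostream>

int main()
{
    std::ifstream in;
    in.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    try
    {
        in.open("data.txt");             // throws if the open fails
        int value;
        in >> value;                     // throws if the read fails
        std::cout << value << '\n';
    }
    catch (const std::ios_base::failure& e)
    {
        std::cerr << "I/O error: " << e.what() << '\n';
    }
}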
As a general rule, you should prefer the C++ operators; they're:
-- Type safe. You don't risk passing a double where the format calls for an int.
-- Extensible. You can write your own inserters and extractors, and use them.
-- Extensible. You can define your own manipulators (with application-specific logical meaning), and use them. If you want to change the format of all of the WidgitNumber (internally, an int) in your output, you change the manipulator; you don't have to find all of the format statements where %d is a WidgitNumber. (A small manipulator sketch follows below.)
-- Extensible. You can write your own sinks and sources, and these can forward to other sinks and sources, filtering or expanding the input or output as desired.
(FWIW: I don't think I've ever written an application which didn't use custom >> and << operators, custom manipulators, and custom streambufs.)
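To illustrate the manipulator point above, here is a small hand-rolled manipulator sketch (the WidgitNumber formatting is invented for the example):

#include <iomanip>
#include <iostream>

// Application-specific manipulator: every WidgitNumber is printed the same
// way, and changing this one function changes every output site.
std::ostream& widgit_number(std::ostream& os)
{
    return os << "W-" << std::setw(6) << std::setfill('0');
}

int main()
{
    int id = 42;
    std::cout << widgit_number << id << '\n';   // prints W-000042
    // Note: setfill persists on the stream; reset it if that matters.
}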
Is it considered "bad practice" to involve C functions in C++ projects?
No. C functions are often used in C++ projects. Regarding streams, the Google C++ Style Guide, for example, recommends using them only in limited cases such as "ad-hoc, local, human-readable, and targeted at other developers rather than end-users".
What are the advantages of using streams over C-style IO access?
The main advantages are type safety and extensibility. However C++ streams have serious flaws, see the answers to this question such as problems with localization, poor error reporting, code bloat and performance issues in some implementations.
After a few years coding in C++, I was recently offered a job coding in C, in the embedded field.
Putting aside the question of whether it's right or wrong to dismiss C++ in the embedded field, there are some features/idioms in C++ I would miss a lot. Just to name a few:
Generic, type-safe data structures (using templates).
RAII. Especially in functions with multiple return points, e.g. not having to remember to release the mutex on each return point.
Destructors in general. I.e. you write a d'tor once for MyClass, then if a MyClass instance is a member of MyOtherClass, MyOtherClass doesn't have to explicitly deinitialize the MyClass instance - its d'tor is called automatically.
Namespaces.
What are your experiences moving from C++ to C?
What C substitutes did you find for your favorite C++ features/idioms? Did you discover any C features you wish C++ had?
Working on an embedded project, I tried working in all C once, and just couldn't stand it. It was just so verbose that it made it hard to read anything. Also, I liked the optimized-for-embedded containers I had written, which had to turn into much less safe and harder to fix #define blocks.
Code that in C++ looked like:
if(uart[0]->Send(pktQueue.Top(), sizeof(Packet)))
pktQueue.Dequeue(1);
turns into:
if(UART_uchar_SendBlock(uart[0], Queue_Packet_Top(pktQueue), sizeof(Packet)))
Queue_Packet_Dequeue(pktQueue, 1);
which many people will probably say is fine but gets ridiculous if you have to do more than a couple "method" calls in a line. Two lines of C++ would turn into five of C (due to 80-char line length limits). Both would generate the same code, so it's not like the target processor cared!
One time (back in 1995), I tried writing a lot of C for a multiprocessor data-processing program. The kind where each processor has its own memory and program. The vendor-supplied compiler was a C compiler (some kind of HighC derivative), their libraries were closed source so I couldn't use GCC to build, and their APIs were designed with the mindset that your programs would primarily be the initialize/process/terminate variety, so inter-processor communication was rudimentary at best.
I got about a month in before I gave up, found a copy of cfront, and hacked it into the makefiles so I could use C++. Cfront didn't even support templates, but the C++ code was much, much clearer.
Generic, type-safe data structures (using templates).
The closest thing C has to templates is to declare a header file with a lot of code that looks like:
TYPE * Queue_##TYPE##_Top(Queue_##TYPE##* const this)
{ /* ... */ }
then pull it in with something like:
#define TYPE Packet
#include "Queue.h"
#undef TYPE
Note that this won't work for compound types (e.g. no queues of unsigned char) unless you make a typedef first.
Oh, and remember, if this code isn't actually used anywhere, then you don't even know if it's syntactically correct.
EDIT: One more thing: you'll need to manually manage instantiation of code. If your "template" code isn't all inline functions, then you'll have to put in some control to make sure that things get instantiated only once so your linker doesn't spit out a pile of "multiple instances of Foo" errors.
To do this, you'll have to put the non-inlined stuff in an "implementation" section in your header file:
#ifdef implementation_##TYPE
/* Non-inlines, "static members", global definitions, etc. go here. */
#endif
And then, in one place in all your code per template variant, you have to:
#define TYPE Packet
#define implementation_Packet
#include "Queue.h"
#undef TYPE
Also, this implementation section needs to be outside the standard #ifndef/#define/#endif litany, because you may include the template header file in another header file, but need to instantiate afterward in a .c file.
Yep, it gets ugly fast. Which is why most C programmers don't even try.
RAII.
Especially in functions with multiple return points, e.g. not having to remember to release the mutex on each return point.
Well, forget your pretty code and get used to all your return points (except the end of the function) being gotos:
TYPE * Queue_##TYPE##_Top(Queue_##TYPE##* const this)
{
TYPE * result;
Mutex_Lock(this->lock);
if(this->head == this->tail)
{
result = 0;
        goto Queue_##TYPE##_Top_exit;
}
/* Figure out `result` for real, then fall through to... */
Queue_##TYPE##_Top_exit:
    Mutex_Unlock(this->lock);
return result;
}
Destructors in general.
I.e. you write a d'tor once for MyClass, then if a MyClass instance is a member of MyOtherClass, MyOtherClass doesn't have to explicitly deinitialize the MyClass instance - its d'tor is called automatically.
Object construction has to be explicitly handled the same way.
Namespaces.
That's actually a simple one to fix: just tack a prefix onto every symbol. This is the primary cause of the source bloat that I talked about earlier (since classes are implicit namespaces). The C folks have been living with this, well, forever, and probably won't see what the big deal is.
YMMV
I moved from C++ to C for a different reason (some sort of allergic reaction ;) and there are only a few things that I miss and some things that I gained. If you can stick to C99, there are constructs that let you program quite nicely and safely, in particular:
designated initializers (possibly combined with macros) make initialization of simple classes as painless as constructors
compound literals for temporary variables
for-scope variables may help you do scope-bound resource management, in particular to ensure the unlocking of mutexes or freeing of arrays, even with early function returns
__VA_ARGS__ macros can be used to have default arguments to functions and to do code unrolling
inline functions and macros that combine well to replace (sort of) overloaded functions
The difference between C and C++ is the predictability of the code's behavior.
It is easier to predict with great accuracy what your code will do in C; in C++ it might become a bit more difficult to come up with an exact prediction.
The predictability in C gives you better control of what your code is doing, but that also means you have to do more stuff.
In C++ you can write less code to get the same thing done, but (at least for me) I occasionally have trouble knowing how the object code is laid out in memory and what its expected behavior is.
Nothing like the STL exists for C.
There are libraries available which provide similar functionality, but it isn't built into the language anymore.
I think that would be one of my biggest problems... knowing which tool I could solve the problem with, but not having that tool available in the language I have to use.
In my line of work - which is embedded, by the way - I am constantly switching back & forth between C and C++.
When I'm in C, I miss from C++:
templates (including but not limited to STL containers). I use them for things like special counters, buffer pools, etc. (built up my own library of class templates & function templates that I use in different embedded projects)
very powerful standard library
destructors, which of course make RAII possible (mutexes, interrupt disable, tracing, etc.) - see the sketch after this list
access specifiers, to better enforce who can use (not see) what
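As mentioned in the destructors item above, here is a minimal sketch of that RAII idea (irq_disable/irq_restore are hypothetical platform calls, stubbed out here so the example compiles):

#include <iostream>

// Stand-ins for vendor intrinsics; a real project would map these to the platform.
static unsigned irq_disable()             { return 1; /* pretend: previous state */ }
static void     irq_restore(unsigned old) { (void)old; }

// Interrupts are disabled for exactly the guard's lifetime, so every return
// path (and any exception) restores them.
class IrqGuard {
public:
    IrqGuard() : saved_(irq_disable()) {}
    ~IrqGuard() { irq_restore(saved_); }
    IrqGuard(const IrqGuard&) = delete;
    IrqGuard& operator=(const IrqGuard&) = delete;
private:
    unsigned saved_;
};

int shared_counter = 0;

void bump_counter()
{
    IrqGuard guard;          // disabled here
    ++shared_counter;
}                            // restored automatically here

int main()
{
    bump_counter();
    std::cout << shared_counter << '\n';
}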
I use inheritance on larger projects, and C++'s built-in support for it is much cleaner & nicer than the C "hack" of embedding the base class as the first member (not to mention automatic invocation of constructors, init. lists, etc.) but the items listed above are the ones I miss the most.
Also, probably only about a third of the embedded C++ projects I work on use exceptions, so I've become accustomed to living without them, so I don't miss them too much when I move back to C.
On the flip side, when I move back to a C project with a significant number of developers, there are whole classes of C++ problems that I'm used to explaining to people which go away. Mostly problems due to the complexity of C++, and people who think they know what's going on, but they're really at the "C with classes" part of the C++ confidence curve.
Given the choice, I'd prefer using C++ on a project, but only if the team is pretty solid on the language. Also of course assuming it's not an 8K μC project where I'm effectively writing "C" anyway.
Couple of observations
Unless you plan to use your C++ compiler to build your C (which is possible if you stick to a well-defined subset of C++), you will soon discover things that your compiler allows in C that would be a compile error in C++.
No more cryptic template errors (yay!)
No (language supported) object oriented programming
Pretty much the same reasons I have for using C++ or a mix of C/C++ rather than pure C. I can live without namespaces but I use them all the time if the coding standard allows it. The reason is that you can write much more compact code in C++. This is very useful for me; I write servers in C++, which tend to crash now and then. At that point it helps a lot if the code you are looking at is short and concise. For example, consider the following code:
uint32_t
ScoreList::FindHighScore(
uint32_t p_PlayerId)
{
MutexLock lock(m_Lock);
uint32_t highScore = 0;
for(int i = 0; i < m_Players.Size(); i++)
{
Player& player = m_Players[i];
if(player.m_Score > highScore)
highScore = player.m_Score;
}
return highScore;
}
In C that looks like:
uint32_t
ScoreList_getHighScore(
ScoreList* p_ScoreList)
{
uint32_t highScore = 0;
Mutex_Lock(p_ScoreList->m_Lock);
for(int i = 0; i < Array_GetSize(p_ScoreList->m_Players); i++)
{
Player* player = p_ScoreList->m_Players[i];
if(player->m_Score > highScore)
highScore = player->m_Score;
}
Mutex_UnLock(p_ScoreList->m_Lock);
return highScore;
}
Not a world of difference. One more line of code, but that tends to add up. Normally you try your best to keep it clean and lean, but sometimes you have to do something more complex. And in those situations you value your line count. One more line is one more thing to look at when you try to figure out why your broadcast network suddenly stops delivering messages.
Anyway I find that C++ allows me to do more complex things in a safe fashion.
Yes! I have experience with both of these languages, and what I found is that C++ is the friendlier language. It offers more features. It is better to say that C++ is a superset of the C language, as it provides additional features like polymorphism, inheritance, operator and function overloading, and user-defined data types, which are not really supported in C. Thousands of lines of code are reduced to a few lines with the help of object-oriented programming; that's the main reason for moving from C to C++.
I think the main reason C++ has a harder time being accepted in the embedded environment is the lack of engineers who understand how to use C++ properly.
Yes, the same reasoning can be applied to C as well, but luckily there aren't that many pitfalls in C that let you shoot yourself in the foot. With C++, on the other hand, you need to know when not to use certain features.
All in all, I like C++. I use it in the O/S services layer, drivers, management code, etc.
But if your team doesn't have enough experience with it, it's gonna be a tough challenge.
I had experience with both. When the rest of the team wasn't ready for it, it was a total disaster. On the other hand, it was good experience.
Certainly, the desire to escape complex/messy syntax is understandable. Sometimes C can appear to be the solution. However, C++ is where the industry support is, including tooling and libraries, so that is hard to work around.
C++ has so many features today including lambdas.
A good approach is to leverage C++ itself to make your code simpler. Objects are good for isolating things under the hood so that at a higher level, the code is simpler. The core guidelines recommend concrete (simple) objects, so that approach can help.
The level of complexity is under the engineer's control. If multiple inheritance (MI) is useful in a scenario and one prefers that option, then one may use MI.
Alternatively, one can define interfaces, inherit from the interface(s), and contain implementing objects (composition/aggregation) and expose the objects through the interface using inline wrappers. The inline wrappers compile down to nothing, i.e., compile down to simple use of the internal (contained) object, yet the container object appears to have that functionality as if multiple inheritance was used.
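A minimal sketch of that interface-plus-composition approach (Printable, JsonPrinter and Document are invented names):

#include <iostream>
#include <string>

class Printable {                        // the interface
public:
    virtual ~Printable() = default;
    virtual std::string print() const = 0;
};

class JsonPrinter {                      // reusable implementation object
public:
    std::string print() const { return "{}"; }
};

class Document : public Printable {      // inherits the interface only
public:
    std::string print() const override { return printer_.print(); }  // inline wrapper
private:
    JsonPrinter printer_;                // composition, not a second base class
};

int main()
{
    Document doc;
    const Printable& p = doc;
    std::cout << p.print() << '\n';      // prints {}
}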
C++ also has namespaces, so one should leverage namespaces even if coding in a C-like style.
One can use the language itself to create simpler patterns and the STL is full of examples: array, vector, map, queue, string, unique_ptr,... And one can control (to a reasonable extent) how complex their code is.
So, going back to C is not the way, nor is it necessary. One may use C++ in a C-like way, or use C++ multiple inheritance, or use any option in-between.
Is it me, or does it seem like C++ asks for more use of the 'if' statement than C#?
I have this codebase and it contains lots of things such as this:
if (strcmp((char*)type,"double")==0)
I wondered: isn't it a bit of a 'code smell' when there are just too many if statements?
I'm not saying they're bad, but things like string comparisons, with lots of strings involved - can't they be done differently?
Is there an alternative to just writing sequences of if statements?
THIS IS JUST AN EXAMPLE, IT CAN BE ANY KIND OF IF STATEMENTS
instead of:
if (string a == "blah") then bla
if (string b == "blah") then blo
The reason you do if (strcmp((char*)type,"double")==0) is because you can't make "double" a case-expression and use a switch statement. That said, if you're doing a lot of these kinds of string matches, you may want to look at using a std::map<std::string, int> or something similar and then use the map to convert the string to an index which you then feed to switch.
Personally, in these cases, I'm a fan of things like std::map<std::string, int (Handler::*)(void)>, which lets me create a handler map of class methods, but YMMV.
EDIT: I forgot to mention: the other sweet thing about having a map of strings to methods is that you can alter (usually add to) it at run time. For example, a parser could change its list of keywords and their handlers at runtime after it knows what kind of file it's parsing.
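A minimal sketch of that map-based dispatch, using std::function and lambdas for brevity (member-function pointers work the same way):

#include <functional>
#include <iostream>
#include <map>
#include <string>

int main()
{
    std::map<std::string, std::function<void()>> handlers{
        {"double", []{ std::cout << "handle double\n"; }},
        {"int",    []{ std::cout << "handle int\n"; }},
    };

    std::string type = "double";
    auto it = handlers.find(type);       // one lookup replaces the if/strcmp chain
    if (it != handlers.end())
        it->second();                    // dispatch
    else
        std::cout << "unknown type\n";
}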
This is code smell.
To minimize it, you should (in this case) use std::strings. Your code then becomes:
#include <string>
// [...]
std::string type = "whatever";
// [...]
if (type == "double")
This is almost identical to the C# equivalent: to compile this sample code in C# code just remove the include and the std::.
If you find code that uses char* directly in C++, it's usually doing it wrong (except maybe for some rare exceptions).
Edit: Mike DeSimone addressed the problem of further refactoring this in his answer (so I won't mention it here :) ).
I don't think C++ requires any more "ifs" than C#. The number of if statements in a program is really just a matter of coding style. You can always eliminate ifs through techniques like polymorphism, table driven methods, and so on. These same techniques are available in both C++ and C#. If there is a difference between programs written in these two languages, I suspect it has to do with the mentality of C# vs C++ programmers.
Note that I don't necessarily recommend "if" elimination. In my experience, if statements tend to be clearer than the alternatives. To directly address your second point: the way to eliminate chained string comparisons like that is to use a DFSA. Most of the time, however, string comparisons are perfectly suitable.
It's not something I've noticed; I've done 10 years C++ and 4 years of C# too!
Surely the number of if's relates to the design of your code rather than a difference between C# and C++?
To get rid of conditional expressions in either language you can consider the Inversion of Control pattern. It has the side effect of lessening those.
Based on the nature of 'bla' and 'blo' you can always try to use a std::map, with the strings as keys.
Too many if statements are a code smell if you can replace them with a switch...case. Otherwise, I don't see the problem with using if.
Maybe you have used more event-driven programming in C#, while your C++ code is more sequential?
There are better ways to implement a string parser than an endless set of if (strcmp...) statements.
One approach could be a map between strings and function pointers or functor objects.
Another design could involve a chain of responsibility pattern where the string is passed to a chain of objects that decide if they have a match or to pass it along.
I'm not aware of anything about C++ that makes it more prone to "if abuse" than any other language.
struct elem
{
int i;
char k;
};
elem user; // compile error!
struct elem user; // this is correct
In the above piece of code we are getting an error for the first declaration. But this error doesn't occur with a C++ compiler. In C++ we don't need to use the keyword struct again and again.
So why doesn't anyone update their C compilers so that we can use structures without the keyword, as in C++?
Why don't C compiler developers remove some of the glitches of C, like the one above, and add some advanced features without damaging the original concept of C?
Why is it the same old compiler, not updated since the 1970s?
Look at Visual Studio, etc. It is frequently updated with new releases, and for every new release we have to learn some new function usage (even though that is a burden, we can cope with it). We would also pick up an updated compiler if there were one.
Don't take this as a silly question. Why is it not possible? It could be developed without any incompatibility issues (without affecting code that was developed on the present/old compiler).
OK, let's develop a new C language, C+, which sits between C and C++, removes all the glitches of C, and adds some advanced features from C++ while keeping it useful for specific applications like system-level applications, embedded systems, etc.
Because it takes years for a new Standard to evolve.
They are working on a new C++ standard (C++0x), and also on a new C standard (C1x), but if you remember that it usually takes between 5 and 10 years for each iteration, I don't expect to see them before 2010 or so.
Also, just like in any democracy, there are compromises in a standard. You've got the hardliners who say "If you want all that fancy syntactic sugar, go for a toy language like Java or C# that takes you by the hand and even buys you a lollipop", whereas others say "The language needs to be easier and less error-prone to survive in these days of rapidly shrinking development cycles".
Both sides are partially right, so standardization is a very long battle that takes years and will lead to many compromises. That applies to everything where multiple big parties are involved, it's not just limited to C/C++.
typedef struct
{
int i;
char k;
} elem;
elem user;
will work nicely. As others said, it's about the standard: if you implemented this in VS2008, you couldn't use it with GCC, and even if you implemented it in GCC, it certainly wouldn't compile with something else. The method above will work everywhere.
On the other hand -- when we have the C99 standard with a bool type, declarations in a for() loop and in the middle of blocks -- why not this feature as well?
First and foremost, compilers need to support the standard. That's true even if the standard seems awkward in hindsight. Second, compiler vendors do add extensions. For example, many compilers support this:
(char *) p += 100;
to move a pointer by 100 bytes instead of 100 of whatever type p is a pointer to. Strictly speaking that's non-standard because the cast removes the lvalue-ness of p.
The problem with non-standard extensions is that you can't count on them. That's a big problem if you ever want to switch compilers, make your code portable, or use third-party tools.
C is largely a victim of its own success. One of the main reasons to use C is portability. There are C compilers for virtually every hardware platform and OS in existence. If you want to be able to run your code anywhere you write it in C. This creates enormous inertia. It's almost impossible to change anything without sacrificing one of the best things about using the language in the first place.
The result for software developers is that you may need to write to the lowest common denominator, typically ANSI C (C89). For example: Parrot, the virtual machine that will run the next version of Perl, is being written in ANSI C. Perl6 will have an enormously powerful and expressive syntax with some mind-bending concepts baked right into the language. The implementation, though, is being built using a language that is almost the complete opposite. The reason is that this will make it possible for perl to run anywhere: PCs, Macs, Windows, Linux, Unix, VAX, BSD...
This "feature" will never be adopted by future C standards for one reason only: it would badly break backward compatibility. In C, struct tags have separate namespaces from normal identifiers, and this may or may not be considered a feature. Thus, this fragment:
struct elem
{
int foo;
};
int elem;
Is perfectly fine in C, because these two elems are in separate namespaces. If a future standard allowed you to declare a struct elem without a struct qualifier or appropriate typedef, the above program would fail because elem is being used as an identifier for an int.
An example where a future C standard does in fact break backward compatibility is when C99 disallowed a function without an explicit return type, i.e.:
foo(void); /* declare a function foo that takes no parameters and returns an int */
This is illegal in C99. However, it is trivial to make this C99 compliant just by adding an int return type. It is not so trivial to "fix" C programs if suddenly struct tags didn't have a separate namespace.
I've found that when I've implemented non-standard extensions to C and C++, even when people request them, they do not get used. The C and C++ world definitely revolves around strict standard compliance. Many of these extensions and improvements have found fertile ground in the D programming language.
Walter Bright, Digital Mars
Most people still using C use it because they're either:
Targeting a very specific platform (ie, embedded) and therefore must use the compiler provided by that platform vendor
Concerned about portability, in which case a non-standard compiler would defeat the purpose
Very comfortable with plain C and see no reason to change, in which case they just don't want to.
As already mentioned, C has a standard that needs to be adhered to. But can't you just write your code using slightly modified C syntax, but use a C++ compiler so that things like
struct elem
{
int i;
char k;
};
elem user;
will compile?
Actually, many C compilers do add features - doesn't pretty much every C compiler support C++ style // comments?
Most of the features added to updates of the C standard (C99 being the most recent) come from extensions that 'caught on'.
For example, even though the compiler I'm using right now on an embedded platform does not claim to conform to the C99 standard (and it is missing quite a bit from it), it does add the following extensions (all of which are borrowed from C++ or C99) to its 'C90' support:
declarations mixed with statements
anonymous structs and unions
inline
declaration in the for loop initialization expression
and, of course, C++ style // comments
The problem I run into with this is that when I try to compile those files using MSVC (either for testing or because the code is useful on more than just the embedded platform), it'll choke on most of them (I'm honestly not sure about anonymous structs/unions).
So, extensions do get added to C compilers, it's just that they're done at different rates and in different ways (so code using them becomes more difficult to port) and the process of moving them into a standard occurs at a near glacial pace.
We have a typedef for exactly this purpose.
And please do not change the standard; we have enough compatibility problems already....
Regarding Manoj Doubts' comment:
I have no problem with you or somebody else defining C+ or C- or C-whatever, as long as you don't touch C :)
I still need a language capable of completing my task: having the same piece of code (not a small one) run on tens of operating systems, compiled by a significant number of different compilers, and run on tens of different hardware platforms. At the moment there is only one language that allows me to complete that task, and I prefer not to experiment with that ability :) Especially for the reason you provided. Do you really think that the ability to write
foo test;
instead
struct foo test;
will make your code better from any point of view?
The following program outputs "1" when compiled as standard C, and something else (probably "2") when compiled as C++ or with your suggested syntax. That's why the C language can't make this change: it would give new meaning to existing code. And that's bad!
#include <stdio.h>
typedef struct
{
int a;
int b;
} X;
int main(void)
{
union X
{
int a;
int b;
};
X x;
x.a = 1;
x.b = 2;
printf("%d\n", x.a);
return 0;
}
Because C is standardized. Compilers could offer that feature, and some do, but using it means that the source code doesn't follow the standard and can only be compiled with that vendor's compiler.
Well,
1 - None of the compilers that are in use today are from the 70s...
2 - There are standards for both the C and C++ languages, and compilers are developed according to those standards. They can't just change some behaviour!
3 - What happens if you develop on VS2008 and then try to compile that code with another compiler whose last version was released 10 years ago?
4 - What happens when you play with the options on the C/C++ / Language tab?
5 - Why don't Microsoft compilers target all the possible processors? They only target x86, x86_64 and Itanium, that's all...
6 - Believe me, this is not even considered a problem!!!
You don't need to develop a new language if you want to use C with C++ typedefs and the like (but without classes, templates etc).
Just write your C-like code and use the C++ compiler.
As far as new functionality in new releases goes, Visual C++ is not completely standard-conforming (see http://msdn.microsoft.com/en-us/library/x84h5b78.aspx). By the time Visual Studio 2010 is out, the next C++ standard will likely have been approved, giving the VC++ team more functionality to change.
There are also changes to the Microsoft libraries (which have little or nothing to do with the standard), and to what the compiler puts out (C++/CLI). There's plenty of room for changes without trying to deviate from the standard.
Nor do you need anything like C+. Just write in C, use whatever C++ features you like, and compile as C++. One of Bjarne Stroustrup's original design goals for C++ was to make it unnecessary to write anything in C. It should compile perfectly efficiently provided you limit the C++ features you use (and even then it will compile very efficiently; modern C++ compilers do a very good job).
And the unanswered question: Why would you want to use non-standard C, when you could write standard C or standard C++ with almost equal facility?
This sounds like the embrace and extend concept.
Life under your scenario.
I develop code using a C compiler that has the C "glitches" removed.
I move to a different platform with another C compiler that has the C "glitches" removed, but in a slightly different way.
My code doesn't compile or runs differently on the new platform, I waste time "porting" my code to the new platform.
Some vendors actually like to fix "glitches" because this tends to lock people into a single platform.
If you want to write in standard C, follow the standards. That's it.
If you want more freedom use C# or C++.NET or anything else your hardware supports.