Should I use the latest GCC version (in general, and specifically today)? - c++

I was wondering if it is safe to use the latest GCC version, or whether people usually go a few versions back (and if so, how many). Are there trusted versions which can be assumed to be (relatively) bug free, or can I safely assume (for non-life-critical programs) that the latest GCC version is safe to use?
EDIT:
By safe, I mean mainly bug free, i.e. in terms of execution.

In the absence of specific requirements to the contrary, I tend to use whichever version of gcc is supplied by my (reasonably up-to-date) Linux distribution. This policy has worked pretty well for me so far.

I tend to use the latest version, because it implements the latest features and fixes bugs, though unfortunately it also introduces new bugs. The introduced bugs usually affect some weird corner cases, so I would assume it is safe to always use the latest version.

Better C++11 support is in the latest gcc version. You might have to compile it yourself, though. Note that gcc 4.7 is almost ready for release, so you might want to give it a try. I have done that quite often, with almost all gcc versions starting from 4, for the improved C++ standards compliance and, quite often, gains in compilation time and improvements in the optimizer.
In general, it is a good idea to use the latest g++ compiler.
However, on a few occasions I have had problems with the libraries that I use. For example, version 4.5 of g++ broke the version of boost::xpressive that I was using. Or, better said, it revealed something broken in the library. Also, the newer the g++ you use, the more problems you will have compiling your code with other compilers that lag behind in implementing new features.
My take on that is: yes, use the latest compiler version, and use the good things that the new standard has, because they make me more productive and a happier programmer. Then, if I have to port my code to another compiler, I just tweak the parts of the code that need it, which in the end doesn't take that much time.

On a normal host system, I would go with what is provided by the OS/distribution, and maybe install a few versions in parallel. On my Mac OS X system I have gcc-4.2 (OS X standard), gcc-4.6.2, gcc-llvm (OS X standard) and gcc-HEAD installed. That way I can pretty easily try things out and update gcc-HEAD for the bleeding edge, while keeping working, supported versions for my day-to-day development.
In a commercial/work setting, I would recommend being very strict about writing down the version numbers used, and actually backing up the whole compiler toolchain, so that you can come back to an identical working system later if maintenance requires it. Nothing is more frustrating than having a compiler change in subtly annoying ways (missing defines, etc.).
This is even more important for embedded development, so much so that I actually check the compiler toolchain into git. A slight bump in gcc's version could mean either a horribly annoying compiler bug (those seem to happen much more often on embedded platforms) or a size increase of, say, 40 bytes, which could completely obliterate your project.
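A cheap way to enforce a pinned version is a guard in a common project header. A minimal sketch (the 4.6 version number and the message are illustrative, not from my setup):

// Fail the build loudly if someone compiles with an unvetted toolchain.
#if !defined(__GNUC__) || (__GNUC__ != 4) || (__GNUC_MINOR__ != 6)
#  error "This project is validated against gcc 4.6.x only; see the toolchain notes."
#endif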

How to compile c/c++ to ms-dos .com programs?

I use Code::Blocks with GNU GCC Compiler.
My question is: is there any way to compile C/C++ code to the MS-DOS 16-bit (.com) executable format?
I tried to set the build options and searched for compiler parameters on the net, but I couldn't find anything.
You can certainly compile C and/or (an ancient dialect of) C++ to a 16-bit MS-DOS .com file. The compiler/linker you have with Code::Blocks almost certainly can't do that though.
In particular, at least to my knowledge, gcc has never even attempted to generate code for a 16-bit, segmented-memory environment. There was at least one port of gcc to a DOS extender (DJGPP), but it produces .exe files, not .com, and it uses a proprietary DOS extender. It originally used an ancient version of gcc, but has since been updated to a much newer one.
If you really need to generate a .com file, there are quite a few options, but all the compilers are quite old, so especially with respect to C++ the language they accept is quite limited.
Tool chains that target(ed) MS-DOS.
Caveat: As already noted, all of these are very old. Generally speaking, the C they accept is reasonably conformant C89, but only for fairly small programs (in terms of both code and data size, of necessity: .com files are limited to a combined total of 64 KB of code and data). The difference between the C++ they accept and anything even remotely modern is much more profound (e.g., some didn't support templates at all). All mention of conformance here is relative to other compilers of the time; by modern standards, their conformance is uniformly terrible.
Microsoft: Only sold C++ compilers for MS-DOS for a fairly short time--they were somewhat late into the market, and moved out of it to compilers that produced only 32-bit Windows executables fairly early. Known more for optimization than language conformance.
Borland: Mirror image of Microsoft. Better conformance, poorer optimization, probably the last to abandon the MS-DOS market. Their last few compilers for MS-DOS even supported C++ templates (fairly new at the time).
Watcom: one of the few that's still available as a free download, but without commercial support. When it was new, this was generally considered one of the best available for both conformance and optimization. It's apparently been updated (to at least some extent) relatively recently, but I haven't used a recent version so I can't really comment on those updates.
Metaware: Quite an expensive option at the time. I never used it, but some people I respected highly considered it the best compiler you could get. Mostly targeted embedded systems.
Datalight/Zortech/Symantec/Digital Mars: the other one that's still officially available. Had a small but extremely loyal following. I tried it for a while, but never found a compelling reason to prefer it over others. Digital Mars still maintains this compiler, so it's one of the few that still gets fairly regular updates.
There were quite a few more back then as well, but these probably account for well over 90% of the market at the time.
What you are looking for is exe2bin, a utility that came with DOS (and with some compilers/assemblers) to convert .EXE-format object code into the .COM format (code and data in one 64K segment).

gnu c++0x backwards compatibility status - can I just switch it on and go?

I have a pretty big c++ code base (not self written).
Numerous libraries, some not so syntactically heavy, some extremely so.
Among others there's heavy use of Boost, some Eigen.
I just love some of the new features of 0x and a quick compile/test tells me that it seems all good.
This question and this one suggest that there are some things that smell funny.
My current state is:
gcc 4.4.3
libstdc++6-4.4
boost-1.40
eigen 3.0 - beta3
using the -std=c++0x flag.
I know the standards committee has agonized about backwards compatibility and endured serious pain.
My question is, did it work? Can I take all that code, switch on c++0x and be certain, that everything does not only compile but also work as expected?
I don't use high 0x magic, just auto and some of the usual favorites explicitly marked "implemented" on the GNU C++0x status page.
I would certainly recommend using GCC 4.5, as it sports more bug fixes and a more solid implementation of the latest C++0x features.
With regard to the questions you linked:
The first is just a type definition on a new platform. Don't worry about it; it won't really break anything or be hard to fix.
The second concerns one of the more complicated C++0x features, but it shouldn't have much of an influence on backwards compatibility (except where boost tries to hack itself into something that is about to become a compiler/language feature).
The only real way to check whether there are problems is to turn on the compiler flag and see if any problems pop up. Better yet, turn on full-fledged warnings (at least on GCC; MSVC has some nagging problems in this regard) to detect as many hidden problems as possible. This isn't watertight, though. You might want to use different compilers (GCC/MSVC/Intel/Clang) to cross-check compatibility, but with C++0x you'll be limited to the common subset that the compilers you use implement. Currently I use C++0x extensively, and I intend to fix any problems that appear once the finalized standard is implemented.
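As a concrete illustration of the kind of break the flag can reveal (a small sketch of mine, not taken from the linked questions): the meaning of the auto keyword itself changed between C++98 and C++0x.

int main() {
    auto int n = 5;   // valid C++98: 'auto' is a storage-class specifier here;
                      // rejected under -std=c++0x, where that use was removed
    // auto m = n;    // the new C++0x meaning: deduces int; a syntax error in C++98
    return n;
}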
There is no answer to your question; it depends on your code. Try to compile and fix the compile-time problems. Once it compiles, run your test cases and fix whatever needs to be fixed.
If you don't have test code, start there.
This cannot be answered without knowledge of the code base. In particular, even with 100% backwards compatibility you might find the same issues that you can find by upgrading/changing a compiler within the same standard:
If your code is not standard, if it uses features that are specific to the compiler or version, or if it has any instance of undefined behavior or depends on unspecified behavior that just happens to work on your current platform, then it might not compile (if you are lucky) or might exhibit different behavior at runtime (if you are not lucky).

Why is it important for C / C++ Code to be compilable on different compilers?

I'm interested in different aspects of portability (as you can see when browsing my other questions), so I read a lot about it. Quite often, I read/hear that code should be written in a way that makes it compilable with different compilers.
Without any real-life experience with gcc/g++, it seems to me that it supports every major platform one can imagine, so code that compiles with g++ can run on almost any system. So why would someone bother to make his code work with the MS compiler, the Intel compiler and others?
I can think of some reasons, too. As the FAQ suggests, I'll try to post them as an answer rather than including them in my own question.
Edit: Conclusion
You people got me completely convinced that there are several good reasons to support multiple compilers. There are so many reasons that it was hard to choose an answer to be the accepted one. The most important reasons for me:
Contributors are much more likely to work on my project, or just use it, if they can use the compiler of their choice
Being compilable everywhere, being usable with future compilers and tools, and adhering to the standards all reinforce one another, so it's a good idea
On the other hand, I still believe that there are other things which are more important, and now I know that sometimes compiler portability isn't important at all.
And last of all, there was no single answer that could convince me not to choose GCC as the primary or default compiler for my project.
Some reasons from the top of my head:
1) To avoid being locked in to a single compiler vendor (open source or not).
2) Compiling code with different compilers is likely to uncover more errors: warnings differ, and different compilers support the Standard to different degrees.
It is good to be compilable on MSVC, because some people may have projects that they build in MSVC that they want to link your code into, without having to set up an entirely different build system.
It is good to be compilable under the Intel compiler, because it frequently generates faster code.
It is good to be compilable under Clang, because it can give better error messages and provide a better development experience, and it is an easier project to work on than GCC and so may gain additional benefits in the future.
In general, it is good to keep your options open, because there is no one compiler that fits all needs. GCC is a good compiler, and is great for most purposes, but you sometimes need something else.
And even if you're usually only going to be compiling under GCC, making sure your code compiles under other compilers is also likely to help find problems that could prevent it from working with past and future versions of GCC. For instance, if there's something GCC is lax about now but later adds checks for, another compiler may catch it in advance, helping you keep your code cleaner. I've found this helpful in the reverse case, where GCC caught more potential problems with warnings than MSVC did (MSVC was the only compiler we needed to support, as we were only shipping on Windows, but we did a partial port to the Mac under GCC in our free time), which allowed me to produce cleaner code than I would have otherwise.
Portability. If you want your code to be accessible to the maximum number of people possible, you have to make it work on the widest range of compilers possible. It's the same idea as making a web site run on browsers other than IE.
Some of it is political. Companies have standards, people have favorite tools etc. Telling someone that they should use X, really puts some people off, and makes it really inaccessible to others.
Nemanja brings up a good point too: targeting a certain compiler locks you into using it. In the open-source world this might not be as big a problem (although people could just stop developing it and it becomes obsolete), but what if the company you buy it from discontinues the product, or goes out of business?
For most languages I care less about portability and more about conforming to international standards or accepted language definitions, from which properties portability is likely to follow. For C, however, portability is a useful idea, because it is very hard to write a program that is "strictly conforming" to the standard. (Why? Because the standards committees felt it necessary to grandfather some existing practice, including giving compilers some freedom you might not like them to have.)
So why try to conform to a standard or make your code acceptable to multiple compilers as opposed to simply writing whatever gcc (or your other favorite compiler) happens to accept?
Likely in 2015 gcc will accept a rather different language than it does today. You would prefer not to have to rewrite your old code.
Perhaps your code might be ported to very small devices, where the GNU toolchain is not as well supported.
If your code compiles with any ANSI C compiler straight out of the box with no errors and no warnings, your users' lives will be easier and your software may be widely ported and used.
Perhaps someone will invent a great new tool for analyzing C programs, refactoring C programs, improving performance of C programs, or finding bugs in C programs. We're not sure what version of C that tool will work on or what compiler it might be based on, but almost certainly the tool will accept standard C.
Of all these arguments, it's the tool argument I find most convincing. People forget that there are other things one can do with source code besides just compile it and run it. In another language, Haskell, tools for analysis and refactoring lagged far behind compilers, but people who stuck with the Haskell 98 standard have access to a lot more tools. A similar situation is likely for C: if I am going to go to the effort of building a tool, I'm going to base it on a standard with a lifetime of 10 years or so, not on a gcc version which might change before my tool is finished.
That said, lots of people can afford to ignore portability completely. For example, in 1995 I tried hard to persuade Linus Torvalds to make it possible to compile Linux with any ANSI C compiler, not just gcc. Linus had no interest whatever—I suspect he concluded that there was nothing in it for him or his project. And he was right. Having Linux compile only with gcc was a big loss for compiler researchers, but no loss for Linux. The "tool argument" didn't hold for Linux, because Linux became so wildly popular; people building analysis and bug-finding tools for C programs were willing to work with gcc because operating on Linux would allow their work to have a big impact. So if you can count on your project becoming a wild success like Linux or Mosaic/Netscape, you can afford to ignore standards :-)
If you are building for different platforms, you will end up using different compilers. Moreover, C++ compilers tend to always be slightly behind the C++ standard, which means their adherence to it changes as time passes. If you target the common denominator of all major compilers, the code maintenance cost will be lower.
It's very common for applications (especially open-source applications) that other developers will want to use different compilers. Some would rather use Visual Studio with the MS compiler for development purposes. Others prefer the Intel compiler for its claimed performance benefits, and so on.
So here are the reasons I can think of:
if speed is the biggest concern and there is a special, highly optimized compiler for some platform
if you build a library with a C++ interface (classes and templates, instead of just functions): because of name mangling and other ABI details, the library must be compiled with the same compiler as the client code, and if the client wants to use Visual C++, he must be able to compile the lib with it (see the sketch after this list)
if you want to support some very rare platform that does not have gcc support
(For me, those reasons are not significant, since I want to build a library that uses C++ internally, but has a C interface.)
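To illustrate the workaround mentioned in that last parenthetical, here is a minimal sketch of hiding a C++ implementation behind a C interface; every name in it (mylib_*, the handle type) is made up for the example:

/* mylib.h -- all that clients see; plain C, so no name mangling or C++ ABI is involved */
#ifdef __cplusplus
extern "C" {
#endif
typedef struct mylib_handle mylib_handle;   /* opaque to the client */
mylib_handle* mylib_create(void);
int           mylib_process(mylib_handle* h, int value);
void          mylib_destroy(mylib_handle* h);
#ifdef __cplusplus
}
#endif

// mylib.cpp -- free to use any C++ internally; built once, with one compiler
#include "mylib.h"
#include <vector>
struct mylib_handle { std::vector<int> history; };   // real C++ behind the opaque type
extern "C" mylib_handle* mylib_create(void) { return new mylib_handle; }
extern "C" int mylib_process(mylib_handle* h, int value) {
    h->history.push_back(value);
    return static_cast<int>(h->history.size());
}
extern "C" void mylib_destroy(mylib_handle* h) { delete h; }

Clients written in C, or built with any C++ compiler, can then link against this library without the C++ ABIs ever having to match (runtime-library mismatches aside).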
Typically these are the reasons that I've found:
cross-platform (windows, linux, mac)
different developers doing development on different OS's (while not optimal, it does happen - testing usually takes place on the target platform only).
Compiler companies go out of business - or stop development on that language. If you know your program compiles/runs well using another compiler, you've hedged your bet.
I'm sure there are other answers as well, but these are the most common reasons I've run into so far.
Several projects use GCC/G++ as a day-to-day compiler for normal use, but every so often check that their code follows the standards by building with the Comeau C/C++ compiler. Its website looks like a nightmare, and the compiler isn't free, but it's known as possibly the most standards-compliant compiler around, and it will warn you about things many compilers silently accept or explicitly allow as nonstandard extensions (yes, I'm looking at you, Mr. I-don't-mind-and-actually-actively-support-your-efforts-to-do-pointer-arithmetic-on-void-pointers GCC).
Compiling every so often with a compiler as strict as Comeau (or, even better, compiling with as many compilers as you can get your hands on) will let you know of errors people might experience when trying to compile your code, of things your compiler allows you to do that it shouldn't, and potentially of things that other compilers don't allow you to do that you should be able to. Writing ANSI C or C++ should be an important goal for code you intend to use on multiple platforms, and using the most standards-compliant compiler around is a good way to do that.
(Disclaimer: I don't have Comeau, and don't plan on getting it, and can't get it because I'm on OS X. I do C, not C++, so I can actually know the whole language, and the average C compiler is much closer to the C standard than the average C++ compiler to the C++ standard, so it's less of an issue for me. Just wanted to put this in here because this started to look like an ad for Comeau. It should be seen more as an ad for compiling with many different compilers.)
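For the curious, the gcc extension mocked above looks like this (a sketch; gcc accepts it by default, while a stricter compiler such as Comeau rejects it):

void* skip(void* p, int n) {
    return p + n;   // gcc extension: arithmetic on void*, treating sizeof(void) as 1
    // portable spelling: return static_cast<char*>(p) + n;
}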
This is one of those "it depends" questions. For open-source code, it's good to be portable to multiple compilers. After all, having people in diverse environments build the code is sort of the point.
But for closed source, this is a lot less important. You never want to unnecessarily tie yourself to a specific compiler, but in most of the places I've worked, compiler portability didn't even make it into the top 10 of things we cared about. Even if you never use anything other than standard C/C++, switching a large code base to a new compiler is a dangerous thing to do. Compilers have bugs, and sometimes your code will have bugs that are benign on one compiler but suddenly a problem on another.
I remember one transition where one compiler thought this code was just fine:
for (int ii = 0; ii < n; ++ii) { /* some code */ }
for (int ii = 0; ii < y; ++ii) { /* some other code */ }
The newer compiler, however, complained that ii had been declared twice, so we had to go through all of our code and declare loop variables before the loops in order to switch.
One place I worked was so careful about unintended side effects of compiler switches that they checked the specific compiler into each source tree, and once the code shipped they would only ever use that one compiler for updates to that code base - forever.
Another place would try out a new compiler for 6 months to a year before they switched over to it.
I find gcc a slow compiler on Windows (I have nothing to compare it against under Linux). So I (sometimes) want to compile my code under other compilers, just for faster development cycles.
I don't think anyone has mentioned it so far, but another reason may be access to certain platform-specific features: many operating system vendors have special versions of GCC, or even their own home-grown (or licensed and modified) compilers. So if you want your code to run well on several platforms, you may need to choose the right compiler on each platform, be it an embedded system, Mac OS, Windows, etc.
Also, speed may be an issue (both compilation speed and execution speed). Back in the PPC days, GCC produced notoriously slow code on PowerPC CPUs, so Apple put a bunch of engineers on GCC to improve that (GCC was very new for the Mac, and all other PowerPC platforms were small). Platforms that are used less may be optimized less in GCC, so using another compiler that's been written for that platform can be faster.
But as a final summary: while there is value in principle in compiling with several compilers, in practice this is mainly interesting for cross-platform software (and open-source software, because it often gets made cross-platform fairly quickly, and contributors have it easier if they can use their compiler of choice instead of having to learn a new one). If you only need to ship on one platform, shipping and maintenance are usually much more important than investing in building with several compilers when you only release the builds made with one of them. However, you will want to clearly document any deviations from the standard (GCC-isms, for instance) to make the job of porting easier, should you ever have to do it.
Both the Intel compiler and LLVM are faster than gcc. The real reasons to use gcc are:
Infinite hardware support (on no other compiler can you compile Lego Mindstorms code on your old DEC).
It's cheap.
The best spaghetti-code optimizer in the business.

Advantage of porting vc6 to vc2005/vc2008?

I was asking my team to port our vc6 application to vc2005, and they are ready to allot some time to do it. Now they need to know the advantages of porting.
I don't think they really understand what it means to adhere to standards compliance.
Help me list the advantages of doing the port.
Problems I am facing are:
1) No debugging support for standard containers
2) Not able to use Boost libraries
3) We use a lot of query generation, but use the CString Format function, which is not type safe
4) Much time is spent troubleshooting vc6 problems, like
vector<vector<int>>
failing to compile without a space between the two >'s (see the sketch below)
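A minimal illustration of point 4 (pre-C++11 compilers such as vc6 parse >> as the right-shift operator, so nested template arguments need a space):

#include <vector>
std::vector<std::vector<int> > a;   // the space is required by vc6-era compilers
std::vector<std::vector<int>> b;    // '>>' here only became legal in C++11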
Advantages:
A more standards-compliant compiler. This is a good thing because it will make it easier to port to another platform (if you ever want to do that). It also means you can look things up in the standard rather than in Microsoft's documentation. In the end, you will have to upgrade your compiler at some point in the future; the sooner you do it, the less work it will be.
No longer supported by MS. The new SDK doesn't work with it, 64-bit doesn't work, and I don't think they're fixing bugs anymore either.
Nicer IDE. Personally, I really prefer tabs to MDI. I also think that it's much easier to configure Visual Studio (create custom shortcuts, menu bars, etc.). Of course that's subjective. Check out an express edition and see if you agree.
Better plugin support. Some plugins aren't available for VC6.
Disadvantages:
Time it takes to port. This very much depends on what kind of code you have. If your code heavily uses non-standards compliant VC6 features, it might take some time. As Andrew said, if you're maintaining an old legacy project, it might not be worth it.
Worse Performance. If you're developing on really old computers, Visual Studio may be too slow.
Cost. I just had a quick look, and Visual Studio licenses seem to be a bit more expensive than VC6's.
Why VC2005? If you are going to invest the time (and testing!) to upgrade from VC6, why not target VC2008?
If you're maintaining a legacy project then there may be no advantage in porting. Simply converting projects and fixing up compiler problems could take weeks of time and introduce instability.
If you're actively developing a product then the main advantage is that you'll no longer be using a product that's over eight years old - which is clearly a good thing.
More recent versions of the Windows SDK don't work with VC6 - if you want to use the latest Windows features, you'll need a more recent compiler.
The later compilers are said to be more standards conforming. I'm sorry I can't be more specific. I do know that VC6 generates lots of compiler warnings just for using standard template classes.
If you use any external libraries that are compiled with a later compiler, you'll need to use something compatible.
Prepare for something of a harsh transition - the IDEs are more different than they should be.
To ensure complete compatibility of the application with different versions of the base platform, and to fix any errors found along the way, so as to give the end user the freedom to use whichever version of the base platform he likes.
I'm not saying you shouldn't convert, but to take your specific points:
1) No debugging support for standard containers
I debug code using standard containers with VC++ 6 all the time. What's your problem here?
2) Not able to use Boost libraries
True. You may find you can use some of the simpler stuff.
3) Much time is spent troubleshooting vc6 problems like having vector<vector<int>> without a space between the two >'s
Um, that is a syntax error (at least in the version of C++ understood by VC++6) and will be flagged as such. If your team is spending "much time" on this sort of thing, you need another team.
Edit:
3) We use a lot of query generation but use the CString Format function that is not type safe
It will be equally type-unsafe under VS2005; I don't see why this is a reason for porting. If you want type safety, use the standard C++ I/O mechanisms.
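For instance, here is a minimal sketch of building a query with a string stream instead of CString's Format (the function name and query shape are invented for the illustration):

#include <sstream>
#include <string>

// Each << is resolved by overloading, so a type mismatch is a compile-time
// error rather than a silent printf-style format bug.
std::string buildQuery(const std::string& table, int id) {
    std::ostringstream q;
    q << "SELECT * FROM " << table << " WHERE id = " << id;
    return q.str();
}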
If your team can't see any advantage and you are unable to explain any advantage, why are you asking them to do this?
Sounds like you are porting just for the sake of it.

What are some convincing arguments to upgrade from Visual Studio 6?

I have a client who is still using Visual Studio 6 for building production systems. They write multi-threaded systems that use the STL and run on multi-processor machines.
Occasionally, when they change the spec of, or increase the load on, one of their server machines, they get 'weird', difficult-to-reproduce errors...
I know that there are several issues with Visual Studio 6 development, and I'd like to convince them to move to Visual Studio 2005 or 2008 (they have Visual Studio 2005 and use it for some projects).
The purpose of this question is to put together a list of known issues or reasons to upgrade along with links to where these issues are discussed or reported. It would also be useful to have real life 'horror stories' of how these issues have bitten you.
It is not supported on 64-bit systems, has compatibility issues with Vista, and was moved out of extended support by Microsoft on April 8, 2008:
http://msdn.microsoft.com/en-us/vbrun/ms788708.aspx
The unpatched VC6 STL is not thread safe. See http://www.amanjit-gill.de/articles/vc6_stl.html; the patches aren't included in the service packs, so you have to get them from Dinkumware directly (from http://www.dinkumware.com/vc_fixes.html) and then apply them to each installation...
The biggest problem that we've seen at my workplace is its inability to handle even marginally complex templated classes or functions. This fact alone has forced some of the most devoted VS6 fans in the company to upgrade and start using VS2005. In addition to the template problem, IntelliSense is much better, debugging is easier and more accurate, and many people find the IDE easier to navigate. The only downside that we have seen so far is that builds take a bit longer in 2005 than they did in 6 (but that's probably a side effect of the compiler being more robust).
You can also check out these sites for a sampling of known issues in VS6:
http://louisville.edu/~ecrouc01/CECS302/VisualCPP.htm
http://www.acceleratedcpp.com/details/msbugs.html
I'm sure you could find more if you poked around a bit.
VS6 does not compile code according to the current C/C++ standard. For example:
It has incorrect (outdated) scoping rules for loops (see the sketch after this list). At least one Microsoft SDK has since been updated with code that expects the correct semantics, so the SDK won't even compile with VS6 anymore.
It has trouble compiling all but the most trivial template constructs.
It will compile some template constructs that have been declared illegal in recent standards updates (because the constructs don't actually do what normal users expect).
operator new doesn't conform to the C++ spec: it doesn't throw an exception on allocation failure (also in the sketch below), and fixing this is non-trivial.
see: http://msdn.microsoft.com/en-us/magazine/cc164087.aspx
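Both the loop-scoping and the operator new items in one small sketch (illustrative; the error number is from memory):

#include <new>
#include <cstdio>

int main() {
    // Loop scoping: standard C++ ends i's scope at the loop's closing brace,
    // so the second declaration is legal; VC6's pre-standard rule keeps i
    // alive and reports a redefinition (error C2374) instead.
    for (int i = 0; i < 3; ++i) { }
    for (int i = 0; i < 3; ++i) { }

    // Allocation failure: a conforming operator new throws std::bad_alloc;
    // VC6's returned a null pointer, so code written for it needed explicit
    // null checks and this handler never ran.
    try {
        int* p = new int(42);
        delete p;
    } catch (const std::bad_alloc&) {
        std::puts("allocation failed");
    }
    return 0;
}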
One of the biggest reasons for me to upgrade was the standards-compliant C++ compiler (although still not 100%), so I could leverage more C++ features in my projects and not worry about strange hacks and workarounds that can lead to hard-to-find bugs.
Not compatible with Vista. Heck, there's a long list of issues VS 2005 has with Vista.
That being said, most of the improvements in VS seem to apply to everything other than native C++ code. What I'm seeing is more standards compliance, which is important but hardly dramatic.
Visual Studio 6 is not compatible with the latest Windows SDKs, so it cannot utilize (at least not easily) the latest OS features.
Though I no longer have concrete details, I'll just throw in that when we upgraded at work, the new compiler found quite a few errors that VC 6 let slip through quietly. Improved product robustness just from the upgrade.
If they use the STL, they may be interested in the recently-released feature pack, which includes an implementation of TR1.
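A quick taste of what the pack provides (a sketch; in VS2008 the TR1 names live in namespace std::tr1):

#include <memory>          // std::tr1::shared_ptr
#include <unordered_map>   // std::tr1::unordered_map

int main() {
    std::tr1::shared_ptr<int> p(new int(42));   // reference-counted ownership
    std::tr1::unordered_map<int, int> cache;    // hash-based container
    cache[*p] = 1;
    return 0;
}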
I have upgraded my stuff but it's relatively uncomplicated. A con to upgrade is VS 2005 DLL Hell
The VS 2008 version of the STL compiles with /clr, so if they're interested in transitioning to the managed world, they don't have to lose all their old code.
By default, newer versions have a better compiler and better libraries. But it's not always easy to port existing projects to a newer Studio, and you can upgrade both compiler and libraries manually.
I was using VS 6.0 with the Intel compiler just a year ago. We had a bunch of old code then which was treating iterators as pointers and vice versa; it was all really messy and scary, so this held us back from an upgrade.
But I have had to upgrade after all, because the framework I'm currently using simply doesn't run on VS 6.0. I think that's the ultimate reason :-)
Third-party libraries support only a limited number of compilers, too. Your client may not be able to get bugfixes or feature upgrades as a result.
For instance, even a library as widely used as Boost supports only VS 7.1 and later (source).
And you might have some problems with Data Execution Prevention (DEP) as well, because VC6 ships with an old ATL version. As usual, see Raymond Chen for details.