Hello everyone. As a starting programmer in C++, I was looking into some differences between compilers. I imported the same source files into both the GCC compiler (Code::Blocks) and Visual C++ (Visual Studio Express), and I found some strange behaviour that I did not expect.
Visual C++ threw a bunch of errors that were, in my opinion, quite serious. For example, iterating through a vector with an iterator that came from a different instance of the vector than the one the operation was performed on: GCC compiled successfully and threw no errors at runtime, while Visual C++ threw a bunch of errors at compile time and then a runtime error of 'different iterator type'. Or dynamic char allocation with new char[str.length()+1] and strcpy_s() into it from a string: the Visual C++ debugger threw a runtime error of corrupted heap, while the Code::Blocks debugger ran just fine.
My question is: is there really this big a difference between these compilers and debuggers? Should I worry that my programming is at a bad level if the code runs perfectly well with GCC and the Code::Blocks debugger but throws errors in Visual Studio?
I've learned to program in C++ in Code::Blocks; Visual C++ has shown me mistakes that I was totally not aware of.
The problem is with your code, not with your compilers or setup. The types of problems you are describing are examples of undefined behaviour that result from rather bad programming or coding techniques (in fact, some of them are fairly hard to achieve, without going out of your way to write very flawed code).
The thing is, compilers are not required to detect such things. Whether they do or not is a concern of compiler or library quality of implementation. In your case, it appears that your version of VC++ is detecting concerns that g++ is not, which is a point in favour of VC++.
My experience is actually the reverse of that: I find g++ detects more problems than VC++. However, both VC++ and g++ do diagnose problems that the other does not.
Which all just goes to show that your mileage will vary. Personally, I'm an advocate of feeding all my code through multiple compilers when possible - precisely because that widens the net of what problems are diagnosed.
And then I exercise a policy of ensuring my code compiles cleanly with all compilers (no diagnostics at all, which includes no warnings) without having to disable any diagnostics, and avoiding use of any code constructs that are designed to suppress compiler diagnostics.
One thing to realise is that compilers, when installed, are typically configured NOT to produce many diagnostics. The reasons for this are historical. You have to turn on the relevant settings to make the compiler give warnings or errors. With g++, command-line options like -Wall -pedantic (which can be enabled through Code::Blocks) really increase the number of problems that will be reported. There are similar options for VC++ (although I don't remember them offhand).
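As a hedged illustration (exact diagnostics vary by GCC version), here is a snippet that g++ accepts silently with default settings but flags once -Wall is enabled:

// Compile with: g++ -Wall -pedantic example.cpp
#include <vector>

int main() {
    std::vector<int> v(10);
    for (int i = 0; i < v.size(); ++i)   // -Wall: comparison between signed and unsigned
        v[i] = i;
    return 0;
}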
MSVC has "checked iterators" for std::vector, which perform a number of useful checks. You can turn on some of these types of checks in GCC by compiling with -D_GLIBCXX_DEBUG. If you want your access to always be bounds-checked, then you need to use std::vector::at(). Often, for performance reasons, it is better to ensure bounds checking outside of your loop and then use unchecked iterators or indexing in your loop.
GCC's standard library does exactly what you told it, bugs and all. Most of the time it behaves as you expected, and you don't realize the bugs are there. Don't be fooled by the fact that the program appears to work; it may still have bugs.
Visual Studio has two variants of the standard library. In Release builds, it acts the same as GCC: it does exactly what you told it to do, bugs and all. In Debug builds, it adds a ton of code behind the scenes to detect some of these errors, and will notify you, as you've observed. Fix these! Note that some of these, like "heap corruption", mean that it detected a bug that occurred some time ago, and do not mean that the bug is at the free/delete. You should also go to the project properties and, under C++/General, make sure your Warning Level is set to Level 3 or even Level 4. This will reveal even more bugs at compile time.
The differences in the compilers in this respect aren't that significant, except that in Debug builds, Visual Studio adds tons of error checking that's finding bugs. The other implementations, and Visual Studio in Release builds, don't go out of their way to help you find bugs.
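As a hedged sketch of the heap-corruption case (assuming the classic off-by-one of forgetting room for the terminator, not necessarily the exact code from the question), this is the kind of bug the MSVC debug heap reports while an unchecked build appears to run fine:

#include <string>
#include <cstring>

int main() {
    std::string str = "hello world";
    // Bug: no room for the null terminator, so the copy writes one byte past
    // the end of the allocation. MSVC Debug builds report a corrupted heap
    // (often only at delete[]); Release builds and GCC typically run "fine".
    char *buf = new char[str.length()];   // should be str.length() + 1
    strcpy(buf, str.c_str());
    delete[] buf;
    return 0;
}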
Sometimes when I compile C++ projects, the build succeeds, reporting "Build Succeeded". But if I click on the error list, it may show some errors such as "IntelliSense: incomplete type is not allowed".
My question is: what are these "IntelliSense" errors, and should I have concerns about the output executable file?
IntelliSense errors are not necessarily real compiler errors. Remember that IntelliSense is a separate partial compiler designed for speed over accuracy. It partially compiles your code to help generate IDE completions, and is also used by the IDE to underline possible errors in the Visual Studio editor.
The IntelliSense parser, starting with VS2010, is a product of a different company: EDG, the Edison Design Group, pretty famous in the C++ world for being the only ones that ever wrote a front-end for C++03 that was 100% compliant with that standard.
But it isn't 100% compatible with the MSVC++ compiler. VS2010 was the training-wheels release; they've been chipping away at the incompatibilities since. Some differences are pretty fundamental: MSVC++ uses an uncommon way to perform macro substitution in the preprocessor, for example. That detail was never specified in the language standard, and Microsoft committed, early, to a choice that's different from everybody else's. Very hard to fix; way too many of their customers took a dependency on it.
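For illustration, here is one well-known instance of that difference (a hedged sketch; behaviour varies by version, and recent MSVC releases offer a conforming preprocessor via /Zc:preprocessor):

#define ADD(a, b) ((a) + (b))
#define FORWARD(macro, ...) macro(__VA_ARGS__)

// GCC and Clang expand this to ((1) + (2)). MSVC's traditional
// preprocessor passes "1, 2" through as a single argument, so ADD
// sees only one parameter and the expansion breaks.
int x = FORWARD(ADD, 1, 2);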
You could look on the bright side of this problem. Your code is dodgy and likely to be troublesome if you ever port to g++ or clang. If you need help getting it un-dodged, just ask a question about it.
I receive C++ source code from a developer who is compiling using Visual Studio 2010, that I then need to recompile under various different compilers: GCC, LLVM, other versions of Visual Studio, etc. Sometimes the code that he sends me (that compiles without warnings in VS2010) fails to compile under the other compilers.
Are there any compiler settings he can set in VS2010 to increase the likelihood that his code will be cleanly portable?
At the language level, there is no silver bullet. The best you can do is to stick to the language standard as closely as possible. Most compilers have options to issue warnings or errors if you take advantage of an extension that's specific to a particular compiler (with Visual C++, /Za will disable non-standard language extensions). But this isn't perfect, as no compiler yet implements absolutely 100% of the standard, so you can still have portability problems even with strictly-compliant code.
Also be aware that lots of everyday code actually takes advantage of extensions or of undefined or implementation-defined behavior, often without realizing it, so it might not be practical to compile in a fully standards-compliant mode.
You also have to be aware of things that the standards allow to be different. For example, built-in integer types may have different sizes on different systems: Windows is LLP64 (long remains 32-bit even on 64-bit targets), while most Unix-derived OSes are LP64 (long is 64-bit).
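A minimal sketch of defending against this (assuming <cstdint> from C++11 is available): use fixed-width types wherever the size actually matters.

#include <cstdint>
#include <cstdio>

int main() {
    // 'long' is 32 bits on 64-bit Windows (LLP64) but 64 bits on
    // 64-bit Linux/macOS (LP64).
    printf("sizeof(long)    = %u\n", (unsigned)sizeof(long));
    // Fixed-width types have the same size on every platform that provides them.
    printf("sizeof(int64_t) = %u\n", (unsigned)sizeof(int64_t));
    return 0;
}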
At the system level, I don't know of a perfect way to make sure a programmer is not relying on something system-specific (e.g., <windows.h> or <pthreads.h>).
Your best bet is to make it simple for all developers to run test builds on all target platforms.
I currently develop in C++ on Windows, using Visual Studio 2010. After the official announcement of C++11, I have begun to use some of its features that are already available in MSVC. But, as expected, the great majority of the new changes are not supported.
I thought maybe the upcoming version of Visual Studio would add these new features. However, after reading this it looks like very little is going to change.
And so, I'm curious about the feasibility of using GCC on Windows rather than MSVC, as it appears to support the great majority of C++11 already. As far as I can tell, this would mean using MinGW (I haven't seen any other native Windows versions of GCC). But I have questions about whether this would be worth trying:
Can it be used as a drop-in replacement for cl.exe, or would it involve a lot of hacks and compatibility issues to get Visual Studio to use a different compiler?
The main selling point of Visual Studio, in my opinion, is its debugger. Is that still usable if you use a different compiler?
Since GCC comes from the *nix world, and isn't native to Windows, are there code quality issues with creating native Windows applications, versus using the native MSVC compiler? (If it matters: most of my projects are games.)
In other words, will the quality of my compiled exe's suffer from using a non-Windows-native compiler?
MSVC has the huge advantage of coming with an IDE that has no equals under Windows, including debugger support.
Probably the best alternative for MinGW would be Code::Blocks, but there are worlds between the two, especially regarding code completion and the debugger.
Also, MSVC lets you use some proprietary Microsoft stuff (MFC, ATL, and possibly others) that MinGW has no support for, and makes using GDI+ and DirectX easier and more straightforward (though it is possible to do both with MinGW).
Cygwin, as mentioned in another post, will add extra dependencies and possible license issues (the Cygwin runtime is GPL, so your programs must be, too). MinGW does not have any such dependency or issue.
MinGW also compiles significantly slower than MSVC (though precompiled headers help a little).
Despite all that, GCC/MinGW is an entirely reliable, quality compiler, which in my opinion outperforms any version of MSVC available to date in terms of quality of generated code.
This is somewhat less pronounced with the most recent versions of MSVC, but it is still visible. Especially for anything related to SSE, intrinsics, and inline assembly, GCC has been totally annihilating MSVC for years (though they're slowly catching up).
Standards compliance is a lot better in GCC too, which can be a double-edged sword (because it can mean that some of your code won't compile on the more conforming compiler!), as is C++11 support.
MinGW optionally also supports DW2 exceptions, which are totally incompatible with the "normal" flavour and take more space in the executable, but on the positive side are "practically zero cost" in runtime.
I want to add some information because the field may have changed since the question was asked.
The main problem with switching away from MSVC used to be the lack of a good IDE that flawlessly integrates with MinGW. Visual Studio is a very powerful tool and was the only player on Windows for quite some time. However, JetBrains released a preview version of their new C++ IDE, CLion, a few days ago.
The main benefit comes when working on cross-platform applications. In this case, a GCC-based toolchain can make life much easier. Moreover, CLion integrates tightly with CMake, which is also a big plus compared to Visual Studio. Therefore, in my opinion, it is worth considering a switch to MinGW now.
GCC's C++11 support is quite phenomenal (and quite up to par with standards conformance, now that <regex> has been implemented).
If you replace your compiler, you'll need to make sure every dependency can be built with that new compiler. They're not made to be substitutable plugins (although Clang is working on becoming that way).
GCC is a fine compiler, and can produce code that has pretty much the same performance, if not better, than MSVC. It is missing some low-level Windows-specific features though.
Apart from this, to answer your questions:
To get VS to use GCC as a compiler, you'd pretty much need to turn to makefiles or custom build steps all the way. You'd be much better off compiling from the commandline and using CMake or something similar.
You cannot use the VS debugger for GCC code. GCC outputs GDB compatible debug information, and the VS debug format is proprietary, so nothing will change in that area anytime soon.
Code quality is just as good as you'd want it. See above.
No, the quality of your code will actually increase, as GCC will point out several assumed-standard extensions that MSVC would silently hide from you. All self-respecting open-source projects can be compiled with GCC.
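One hedged example of such an extension (a compile-time sketch; this is one of several, and exact diagnostics depend on your settings): MSVC, by default, lets a temporary bind to a non-const reference, which standard C++ forbids and GCC rejects.

#include <string>

static void upper(std::string &s) { s[0] = 'U'; }   // takes a non-const reference

int main() {
    upper(std::string("hello"));   // MSVC's extension accepts this; GCC reports an error
    return 0;
}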
In my humble opinion, it depends on how someone started to code in the first place. I've been using g++ and gcc for more than 20 years now, but the reason I keep using gcc is mainly licensing. I also like not having a bunch of runtime dependencies or DLLs to bundle with my stuff; since I came from the DOS era, I still like my stuff small and fast. gcc for Windows comes with the standard Win32 libraries and common controls, but I had to develop my own Win32 controls for things that would otherwise require MFC to work properly or just to look nicer.
Although gcc might have strong support on the internet, when it comes to Win32 stuff many rely on MFC and VC proprietary things, so again, one may have to work around one's own issues and be creative when difficulty arises.
I think it's all about needs and circumstances. If you are just a hobbyist coder and have the time for research, creating your own libs and stuff, but you want a solid, free compiler that's been around since the late '80s, gcc sounds perfect for the job.
But in industry, Visual Studio is a must if you want to be competitive and stay in the race. Many hardware manufacturers prefer bundling Visual Studio-compatible libraries for their hardware over some open-source GNU stuff.
That's my two cents.
To be honest, C++ should be handled with MS Visual Studio. If you want to make cross-platform or Unix apps, use GCC. GCC works and can be used with any IDE other than Visual Studio; even Visual Studio Code can use GCC. Code::Blocks, Eclipse IDE for C/C++ Developers, CLion, Notepad++, and even the good ol' tool we've always known, Notepad, work with GCC. And finally, on a PC with low disk space, installing Visual Studio's "Desktop Development with C++" takes something like 5 GB if it is to be useful. This is where GCC hits MSVC hard: it has native C support. MSVC can compile C, but only with a lot of fine-tuning; it takes a lot of time and effort to finally be able to compile. The final verdict:
If MSVC works, it hella works! If MSVC doesn't work, it HELLA DON'T WORK.
If GCC installs, it works, and if it doesn't work, it's the IDE's problem.
GCC is for people who don't mind spending 4 hours at the computer making it work properly. MSVC is for those who don't care about C and want it to install without any pokin' around.
It can't be used as a direct swap-out replacement for the Microsoft compilers; for a start, it has a vastly different set of command-line arguments and compiler-specific options.
You can make use of MinGW or Cygwin to write software, but they introduce extra dependencies (especially in the case of Cygwin).
One not-often-touted advantage of gcc over cl is that gcc can be used with ccache to drastically speed up rebuilds, or with distcc to distribute compilation across several other machines.
Consider the Intel compiler (or "Composer", as they seem to have taken to calling it) as another option. I'm not too sure where its C++11 support is at compared with MS's (certainly it has lambdas), but it does integrate very nicely with Visual Studio (e.g. different projects within a solution can use the Intel or MS compilers), and there have also been some efforts to match the MS compiler's command-line options.
GCC and MSVC use different name-mangling conventions for C++. C++ DLLs compiled by one compiler cannot be used in applications compiled with the other. I believe this is the main reason we don't see more widespread use of gcc on Windows.
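A common workaround (a minimal sketch with a hypothetical function name) is to expose the DLL boundary in plain C; name mangling only differs for C++ symbols, and extern "C" sidesteps it entirely:

// dll_api.h - a C-compatible interface usable from both MSVC- and GCC-built code
#ifdef __cplusplus
extern "C" {
#endif

int dll_compute(int input);   // hypothetical exported function

#ifdef __cplusplus
}
#endif

C++ objects still can't cross such a boundary safely, but plain functions and plain-old-data structs can.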
Do you know of any resources highlighting known or possible issues during conversion of a VC++6/Win32 project to VS2010 C++/Win32 project type? I'm interested in all kinds of issues:
1. Compiler options compatibility
2. Compile-time issues
3. Link-time issues
4. Runtime issues
5. MFC issues
Otherwise, if you already performed that kind of migration, what issues have you encountered?
Thank you
VC++6 has non-standard exception handling. We hit a few problems because our code contained this:
try {
    // Some code that may trigger a hardware fault, e.g. an access violation
}
catch (...) {
    // Handle error. Under VC++6's defaults, this also caught structured (SEH)
    // exceptions such as access violations, not just C++ exceptions.
}
Some developers had relied on this broken behaviour, and our application crashed badly after being compiled with VS2008.
This article explains it well and how to solve the issue.
First of all, these issues are highly dependent on code quality and how the ancient code has been adapted to fit the VC++6 compiler's "features".
It's not possible to convert a .dsp project directly to the VS2010 format (at least with the Express editions); you'll have to pass through VS2008 to be able to convert.
The conversion wizards should warn and inform you of any issues there might be. I haven't been through this process, but I would think compiler switches are the least of your worries. In general, I would expect bad code to produce a lot more errors you need to worry about instead.
As to your specific queries:
1. See 3.
2. VS2010 compilation will take longer.
3. The VS2010 linker will take longer, especially if you enable link-time optimization (introduced in VS2005).
4. Only if you made non-standard assumptions or relied on non-standard VC++6 functions. As long as you recompile the complete project with VS2010 (and thus link it against the latest VC(++) runtime), no runtime issues should occur.
Side note: I'm not saying the old code is bad, just that a lot of questions on SO regarding VC++6 are caused by bad code quality/conformance.
Heh, VC6 allowed you to use loop variables outside the scope of the loop, i.e.:
for ( int i = 0; i < 10; i++ )
{
    if ( i == 5 )
        break;
}
int iVal = i; // VC6 leaks 'i' out of the for scope; conforming compilers reject this line
This will fail in anything newer than VC6 :) It fails in 2005 and 2010, although there is a compiler setting that will force the old behaviour again, at least in VS2005 (/Zc:forScope-; I would suggest just fixing the code rather than forcing it, as it's non-compliant anyhow). I haven't made the dive to 2010 yet, as I do a lot of embedded development, and switching compilers for the embedded stuff is usually a major pain. So I can't say too much about 2010, but I know this carries through!
Did the move; two painful issues I'm aware of:
- The exception handling default was changed between VC6 and later versions. I believe /EH was the default and that has changed.
- The VC6 runtime (the msvcrt DLL) is included with every Windows OS since Win2k, while for any other runtime version you need to ship it with your software on almost any OS.
You will have a lot of problems, because VC6 was notoriously non-conformant, and your code (especially if you used templates) will likely be full of hacks that were needed to make it work but are no longer necessary, since MS did a lot of conformance work in more recent compiler versions. The VC10 compiler will probably barf on them.
If you have a commercial edition of Visual C++:
Find devenv within your Visual C++ install directory (should be %VS90COMNTOOLS%\..\IDE\devenv.exe)
> devenv /upgrade project.sln
> msbuild.exe project.sln /t:Build /p:Configuration=Release /p:Platform=Win32
Check the compilation
But if you have a free edition of Visual C++:
Find vcbuild.exe within your Visual C++ install directory (should be %PROGRAMFILES%\Microsoft Visual Studio 9.0\VC\vcpackages\vcbuild.exe)
> vcbuild.exe /upgrade project.sln /msbuild:Configuration=Release /msbuild:Platform=Win32
For future builds which don’t need conversion, type:
> msbuild.exe project.sln /t:Build /p:Configuration=Release /p:Platform=Win32
When moving from VS2005 to VS2010, we ran into a problem with a third-party library causing one of our programs to crash every time it was loaded. The problem turned out to be caused by Microsoft reversing the default setting of the /NXCOMPAT linker switch. That switch controls whether Data Execution Prevention (DEP) is enabled. Prior to VS2010, the default setting was NO, and the DLL we were using apparently relied on that setting to function correctly.
I'm not sure how far back the NXCOMPAT linker switch goes. It wasn't listed in any of the settings available in the VS2005 dialogs but it was listed when link /help was run from the command line. I've never been able to find a list of changes such as this from Microsoft, so this kind of bug is VERY tricky to track down.
If you're migrating from VC++6 to VS2015 with CLR, the existing min and max functions are no longer found; add:
#define NOMINMAX   // stop <windows.h> from defining min/max as macros
#include <algorithm>

namespace Gdiplus
{
    // The GDI+ headers expect min/max to be available; point them at the
    // standard library versions instead of the old macros.
    using std::min;
    using std::max;
}
Secondly, the float NAN constant changed to nan.
We are considering moving the win32 build of our cross-platform C++ application from MS Visual Studio 2003 to MS Visual Studio 2005. (Yes, very forward-looking of us ;)
Should we expect many code changes to get it compiling and working?
I've just migrated a comparatively large codebase from VS2003 to VS2008 via VS2005, and the majority of issues I found were const/non-const issues, like assigning the return value of a function that returns a const char * to a char *. Both VS2005 and VS2008 are a lot more picky when it comes to const correctness, and if your existing codebase is a bit sloppy (sorry, "old school") when it comes to const correctness, you'll see plenty of this.
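A minimal sketch of the pattern described (hypothetical function name; the exact diagnostic depends on your settings):

const char *getLabel() { return "label"; }   // hypothetical API returning read-only text

int main() {
    char *label = getLabel();        // the sloppy pattern; VS2005/VS2008 reject the dropped const
    const char *fixed = getLabel();  // the const-correct fix
    (void)label; (void)fixed;
    return 0;
}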
A very welcome change was that the template support in VS2005 is noticeably better than it is in VS2003 (itself a big improvement on earlier versions), which enabled me to throw out several workarounds for template related issues that the team had been dragging around since the heady days of VC++ 4.x.
One issue that you are likely to encounter are tons of warnings about "deprecated" or "insecure" functions, especially if you are using the C string functions. A lot of these are "deprecated by Microsoft" (only that they left out the "by Microsoft" part) and are still perfectly usable, but are known potential sources for buffer overflows. In the projects I converted, I set the preprocessor define _CRT_SECURE_NO_WARNINGS and disabled the warning C4996 to turn off these somewhat annoying messages.
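As a hedged sketch of what that looks like in practice (the define must appear before any CRT header; in a real project you would set it in the project settings rather than in each source file):

#define _CRT_SECURE_NO_WARNINGS   // silences C4996 for the classic CRT functions
#include <cstdio>
#include <cstring>

int main() {
    char buf[32];
    strcpy(buf, "hello");          // would trigger warning C4996 without the define
    printf("%s\n", buf);
    return 0;
}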
Another issue that we came across is that MS has changed the default size of time_t either in VS2005 or in VS2008 (I apologise but I can't remember - it's definitely in VS2008 but it may already be in VS2005) so if you have to link with legacy libraries that use time_t in the interface, you'll have to use _USE_32BIT_TIME_T to revert to the older compiler's behaviour.
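A quick way to check which size you're actually building with (a sketch; _USE_32BIT_TIME_T must be defined project-wide, before any header is included, and only applies to 32-bit builds):

#include <ctime>
#include <cstdio>

int main() {
    // Prints 8 with the newer default (64-bit time_t) and 4 when
    // _USE_32BIT_TIME_T is in effect.
    printf("sizeof(time_t) = %u\n", (unsigned)sizeof(time_t));
    return 0;
}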
If your solution contains several projects, you may find that the parallel build feature (which is turned on by default) highlights missing build dependencies: projects are suddenly built in the wrong order but magically build correctly if you revert from parallel build to linear build.
Overall, I do prefer VS2005/8 to VS2003, and I would recommend upgrading to VS2008 if that is an option, as the compiler is "better" than VS2005's - MS seems to have made a massive effort in improving the native C++ compiler. Part of that was already noticeable in 2005, so you'll get at least some of the benefit even if you stick with 2005.
If your code is already quite clean and compiles without warning, it's not a big step.
Check this article and consider how big the impact of those changes would be on your existing code. Cleaning up for-loop conformance can be a bit of work.
You can get the free Express edition of Visual Studio 2005 here.
You should review MS's lists of breaking changes when deciding if & how to undertake this project.
Breaking Changes VC 2005 - 2008
Breaking Changes in the Visual C++ 2005 Compiler
Breaking Changes in Visual C++ .NET 2003
You will find that a lot of string commands will give you warnings, as in VS2005 they stepped up the security to try to stop buffer overruns.
Your 2003 code will still compile, though.
I recently converted a 10-year old VC6 program to VS2008. It required no changes to the source code, and the only changes needed to the project files were handled by the upgrade wizard.
No. I wouldn't expect more than a few.
Edit: you should/could try the code with a demo version of VS2005 first.
Also, consider disabling checked iterators, or performance may suffer after porting to the new version.
If your source code conforms to the C++ standard, you shouldn't need to change anything to move to 2005. You may get some deprecation warnings, but nothing that should give compile errors.
The main issue people have with going from older versions of VS to newer ones is that the newer versions are more standard conforming.
Things like:
for (i = 0; i < length; ++i)   // 'i' is never declared here
{
}
where i is not declared prior to that point work fine in previous versions of VS (typically because an earlier for loop leaked its i into the enclosing scope), but 2005 correctly marks i as an undefined variable.