msvc9, iostream and 2g/4g plus files - c++

Doing cross platform development with 64bit. Using gcc/linux and msvc9/server 2008.
Just recently deployed a customer on windows and during some testing of upgrades I found out that although std::streamoff is 8 bytes, the program crashes when seeking past 4G.
I immediately switched to stlport which fixes the problem, however stlport seems to have other issues. Is STL with msvc9 really that broken, or am I missing something?
Since the code is cross platform I have zero interest in using any win32 calls.
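For reference, a minimal sketch of the failing pattern described above (the file name is hypothetical; on MSVC9's default STL a seek like this crashed even though std::streamoff is 8 bytes):

// Sketch: a 64-bit seek that should work when streamoff is 8 bytes.
#include <fstream>
#include <iostream>

int main() {
    std::ifstream in("huge.bin", std::ios::binary); // assumes a >4 GB file
    std::streamoff pos = 5LL * 1024 * 1024 * 1024;  // 5 GiB offset
    in.seekg(pos, std::ios::beg);
    std::cout << (in ? "seek ok\n" : "seek failed\n");
    return 0;
}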

Even though you say that you have "zero" interest in using "win32" calls, in situations like this you're stuck between a rock and a hard place.
I would just implement my own version of a file iostream using the "win32" calls that looks and feels like the fstream interfaces. This is easy to do and I've done it hundreds of times.
Call it say 'fstreamwin32'.
Then I would have a header file that would do something like:
#ifdef WIN32
typedef fstreamwin32 fstreamnative;
#else
typedef fstream fstreamnative;
#endif
Then I would use fstreamnative everywhere. That way you keep your code cross platform and still solve your problem.
If the problem is ever fixed, you can easily remove your "win32" workaround by changing the typedef back to fstream. This is why lots of cross platform codebases have many levels of indirection (e.g. their own typedefs for standard stuff): so that they can do things like this without having to change a lot of code.
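A minimal sketch of what such a class might look like, assuming only a small read/seek subset is needed (the class name and interface are illustrative, not a complete drop-in for fstream):

// Sketch: a tiny Win32-backed reader with 64-bit seeks, mimicking a
// small slice of the fstream interface.
#include <windows.h>

class fstreamwin32 {
    HANDLE h_;
public:
    explicit fstreamwin32(const char* path)
        : h_(CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                         OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL)) {}
    ~fstreamwin32() { if (is_open()) CloseHandle(h_); }

    bool is_open() const { return h_ != INVALID_HANDLE_VALUE; }

    // Works past 4 GB because SetFilePointerEx takes a 64-bit offset.
    bool seekg(long long pos) {
        LARGE_INTEGER li;
        li.QuadPart = pos;
        return SetFilePointerEx(h_, li, NULL, FILE_BEGIN) != 0;
    }

    bool read(char* buf, DWORD n) {
        DWORD got = 0;
        return ReadFile(h_, buf, n, &got, NULL) && got == n;
    }
};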

Another link I found on this subject:
http://cplusplus.com/forum/general/6813/

I ended up using STLport. The biggest difference with STLport is that some unit tests which crashed during multiplications of double-precision numbers now work, and those unit tests pass. There are some other differences in relative precision popping up, but those seem to be minor.

C/C++ REGS union in DOS.h no longer available. Any alternatives?

I've had to use a very old library written some twenty years ago. I get it to compile almost completely, except for one part that uses a REGS union. From the Google searching I've done, REGS is a part of interrupt handling in the DOS.h file. Well, looking at the modern version of DOS.h, one does not see any REGS definition.
Some posts around the Internet said something about it being unique to either the Borland or Turbo compilers, but this code was written to work under many different compilers.
Any ideas what I should do? Is there an old DOS.h file floating around that might work?
Using: Visual Studio 2010, compiling from command line.
Pretty much the only thing you can do is figure out what they're doing with the REGS, and do (as closely as you can) the same thing at a higher level. In most cases, REGS was used to invoke DOS functions, most of which have equivalents that can be invoked as normal functions under Windows. Others were used for BIOS functions, which (again) mostly have Windows functions to accomplish the same thing.
Without knowing what was being accomplished with the REGS, however, it's impossible to guess what the replacement would/will be.
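For illustration, here is the kind of translation involved, assuming the legacy code used int 21h / AH=2Ch to read the system time (a common pattern; your library may be doing something entirely different):

// Hypothetical old code (Borland/Turbo style, won't compile today):
//
//   union REGS regs;
//   regs.h.ah = 0x2C;          // DOS "get time" service
//   int86(0x21, &regs, &regs); // regs.h.ch = hour, regs.h.cl = minute
//
// A modern Windows equivalent:
#include <windows.h>
#include <cstdio>

int main() {
    SYSTEMTIME st;
    GetLocalTime(&st);  // replaces the int 21h / AH=2Ch interrupt
    std::printf("%02u:%02u:%02u\n", st.wHour, st.wMinute, st.wSecond);
    return 0;
}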
Creating the REGS union is the least of your problems. Applications can't simply call interrupts whenever they want under a protected-mode operating system. And if they try, they'll be stopped by the system.
A better approach is to look for ways to accomplish it using the Windows API.
EDIT
The above comments assume you are doing modern-day Windows development, which, I admit, was an assumption. Other than the tools you are using, you really haven't said a thing about the type of application you are developing (command line, windowed, 32-bit, 64-bit, etc.), nor anything about the task you're trying to accomplish.
If you are developing for 16-bit DOS, then you can still do interrupts.

C++ Windows Compiler for smallest executables

Guys, I want to start programming in C++. I have written some programs in vb6 and vb.net, and now I want to gain knowledge in C++. What I want is a compiler that can compile my code into the smallest possible Windows application. For example, there is a Basic-language compiler called PureBasic that can make a Hello World standalone app of 5 KB, and a simple socket program I compiled with it was only 12 KB (without any DLLs or runtime files). I know it is amazing, so I want something like this for C++.
If I am wrong and there is no such Windows compiler, can someone give me a website or book that can teach me how to reduce C++ executable size, or how to use Windows API calls?
Taking the Microsoft Visual C++ compiler as an example: if you turn off linking of the C runtime (/NODEFAULTLIB), your executable can be as small as 5 KB.
There's a little problem though: you won't be able to use almost anything from the standard C or C++ libraries, nor standard features of C++ like exception handling, the new and delete operators, floating-point arithmetic, and more. You'll need to use only the features directly provided by WinAPI (e.g. create files with CreateFile, allocate memory with HeapAlloc, etc.).
It's also worth noting that while it's possible to create small executables with C++ using these methods, you won't be using most of C++'s features at that point. Typical C++ code has some significant bloat due to heavy use of templates, polymorphism that prevents dead-code elimination, and stack-unwinding tables used for exception handling. You may be better off using something like C for this purpose.
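A minimal sketch of what this looks like in practice (the entry-point name rawMain is arbitrary and must match the /ENTRY switch; /GS- is assumed because the stack-cookie check lives in the CRT):

// Sketch: a CRT-free console program. Assumed build command:
//   cl /O1 /GS- tiny.cpp /link /NODEFAULTLIB /SUBSYSTEM:CONSOLE
//      /ENTRY:rawMain kernel32.lib
#include <windows.h>

// Custom entry point replacing mainCRTStartup.
extern "C" void rawMain() {
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD written;
    WriteFile(out, "hello\r\n", 7, &written, NULL);
    ExitProcess(0);  // no CRT, so exit explicitly
}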
I had to do this many years ago with VC6. It was necessary because the executable was going to be transmitted over the wire to a target computer, where it would run. Since it was likely to be sent over a modem connection, it needed to be as small as possible. To shrink the executable, I relied on two techniques:
Do not use the C or C++ runtime. Tell the compiler not to link them in. Implement all necessary functionality using a subset of the Windows API that was guaranteed to be available on all versions of Windows at the time (98, Me, NT, 2000).
Tell the linker to combine all code and data segments into one. I don't remember the switches for this and I don't know if it's still possible, especially with 64-bit executables.
The final executable size: ~2K
Reduction of the executable size for the code below from 24 KB to 1.6 KB in Visual C++:
int main () {
return 0;
}
Linker Switches (although the safe alignment is recommended to be 512):
/FILEALIGN:16
/ALIGN:16
Link with (in the VC++ project properties):
LIBCTINY.LIB
Additional pragmas (this will address Feruccio's suggestion)
However, I still see a section of ASCII(0) bytes making up a third of the executable, plus the "Rich" Windows signature. (I've read that the latter is not really needed for program execution.)
#ifdef NDEBUG
#pragma optimize("gsy",on)
#pragma comment(linker,"/merge:.rdata=.data")
#pragma comment(linker,"/merge:.text=.data")
#pragma comment(linker,"/merge:.reloc=.data")
#pragma comment(linker,"/OPT:NOWIN98")
#endif // NDEBUG
int main () {
return 0;
}
I don't know why you are interested in this kind of optimization before learning the language, but anyways...
It doesn't depend so much on which compiler you use as on how you use it. Choose a compiler such as Visual Studio's C++ compiler or MinGW, for example, and read its documentation. You will find information on how to optimize the compilation for size or performance (usually when you optimize for size, you lose performance, and vice versa).
In Visual Studio, for example, you can minimize the size of the executable by passing the /O1 parameter to the compiler (or Project Properties/ C-C++ /Optimization).
Also don't forget to compile in "release" mode, or your executable may be full of debugging symbols, which will increase the size of your executable.
A modern desktop PC running Windows has at least 1 GB of RAM and a huge hard drive; worrying about the size of a trivial program that is not representative of any real application is pointless.
Much of the size of a "Hello world" program in any language is fixed overhead to do with establishing an execution environment and loading and starting the code. For any non-trivial application you should be more concerned with the rate at which code size increases as more functionality is added. And in that sense it is likely that C++ code in any compiler is pretty efficient. That is to say, your PureBasic program that does little or nothing may be smaller than an equivalent C++ program, but that is not necessarily the case by the time you have built useful functionality into the code.
@user: C++ does produce small object code; however, if the code for printf() (or cout <<) is statically linked, the resulting executable may be rather larger, because printf() has a lot of functionality that is not used in a "hello world" program and so is redundant. Try using puts(), for example, and you may find the code is smaller.
Moreover are you sure that you are comparing apples with apples? Some execution environments rely on a dynamically linked runtime library or virtual machine that is providing functionality that might be statically linked in a C++ program.
I don't like to reply to a dead post, but since none of the responses mentions this (except Mat's response)...
Repeat after me: C++ != ( vb6 || vb.net || basic ). And I'm not only talking about syntax: C++ coding style is typically different from VB's, as C++ programmers usually try to make things better designed than VB programmers do...
P.S.: No, there is no place for copy-paste in the C++ world. Sorry, had to say this...

Qt, MSVC, and /Zc:wchar_t- == I want to blow up the world

So Qt is compiled with /Zc:wchar_t- on Windows. What this means is that instead of wchar_t being a native type (__wchar_t, I think), it becomes a typedef for unsigned short. The really cool thing about this is that the default for MSVC is the opposite, which of course means that the libraries you're using were likely compiled with wchar_t being a different type than Qt's wchar_t.
This doesn't become an issue, of course, until you try to use something like std::wstring in your code, especially when one or more libraries have functions that accept it as a parameter. What effectively happens is that your code happily compiles but then fails to link, because it's looking for definitions using std::basic_string<unsigned short, ...> while the libraries only contain definitions for std::basic_string<__wchar_t, ...> (or whatever).
So I did some web searching and ran into this link: https://bugreports.qt.io/browse/QTBUG-6345
Based on the statement by Thiago Macieira, "Sorry, we will not support building Qt like this," I've been worried that fixing Qt to work like everything else might cause some problem and have been trying to avoid it. We recompiled all of our support libraries with the /Zc:wchar_t- flag and have been fairly content with that until a couple days ago when we started trying to port over (we're in the process of switching from Wx to Qt) some serialization code.
Because of how win32 works, and because Wx just wraps win32, we've been using std::wstring to represent string data with the intent of making our product as i18n ready as possible. We did some testing and Wx did not work with multibyte characters when trying to print special stuff (even not so special stuff like the degree symbol was an issue). I'm not so sure that Qt has this problem since QString isn't just a wrapper to the underlying _TCHAR type but is a Unicode monster of some sort.
At any rate, the serialization library in boost has compiled parts. We've attempted to recompile boost with /Zc:wchar_t- but so far our attempts to tell bjam to do this have gone unheeded. We're at an impasse.
From where I'm sitting I have three options:
Recompile Qt and hope it works with /Zc:wchar_t. There's some evidence around the web that others have done this but I have no way of predicting what will happen. All attempts to ask Qt people on forums and such have gone unanswered. Hell, even in that very bug report someone asks why and it just sat there for a year.
Keep fighting with bjam until it listens. Right now I've got someone under me doing that and I have more experience fighting with things to get what I want but I do have to admit to getting rather tired of it. I'm also concerned that I'll KEEP running into this issue just because Qt wants to be a c**t.
Stop using wchar_t for anything. Unfortunately my i18n experience is pretty much 0, but it seems to me that I just need to find the right to/from function in QString (it has a BUNCH) to encode the Unicode into 8-bit strings and vice versa. The UTF-8 functions look promising, but I really want to be sure that no data will be lost if someone from someplace with a more symbolic language starts writing in their own language, and the documentation in QString frightens me a little into thinking that could happen. Of course, I could always run into some library that insists I use wchar_t, and then I'm back to 1 or 2, but I rather doubt that would happen.
So, what's my question...
Which of these options is my best bet? Is Qt going to eventually cause me to gouge out my own eyes because I decided to compile it with /Zc:wchar_t anyway?
What's the magic incantation to get boost to build with /Zc:wchar_t- and will THAT cause permanent mental damage?
Can I get away with just using the standard 8-bit (well, 'common' anyway) character classes and be i18n compliant/ready?
How do other Qt developers deal with this mess?
I would agree with Öö Tiib's remark:
"That option is perhaps for compatibility with some old legacy pre-wchar_t code."
Having in mind that Qt is ported to many different platforms (including embedded systems), some of them not having a decent C++ compiler, I would guess that this switch is just to make it possible to compile Qt on those platforms. I mean it's probably not something that Qt relies on to work correctly. If it were the case it would mean that Qt's design is deeply broken in my opinion. So option 1 should work.
Having said that, I would definitely recommend choosing option 3 (a short sketch follows this list) because:
wchar_t gives you almost nothing in regard to i18n
as you noticed, Qt has a very capable string class which makes i18n an easy task (see Internationalization with Qt)
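A small sketch of what option 3 looks like in practice, using Qt's documented UTF-8 conversions (lossless for any Unicode text, including the degree symbol mentioned above):

// Sketch: keep strings in QString; cross 8-bit boundaries as UTF-8.
#include <QString>
#include <QByteArray>
#include <string>

int main() {
    QString original = QString::fromUtf8("temperature: 20\xC2\xB0"); // degree sign
    std::string wire(original.toUtf8().constData());  // 8-bit, lossless
    QString roundTrip = QString::fromUtf8(wire.c_str());
    return original == roundTrip ? 0 : 1;  // round-trips exactly
}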
You might take a look at the results of searching for wchar_t on the qt-interest@qt.nokia.com list, ask your question there, and talk to Thiago Macieira on the freenode.net #qt IRC channel, where Thiago is very active.
Stumbled over the same issue ...
Obviously bjam expects cxxflags=-Zc:wchar_t-
After building the static serialization libs via
bjam --with-serialization toolset=msvc-8.0 variant=debug threading=multi link=static cxxflags=-Zc:wchar_t-
everything linked like expected.
Hope this helps anyone.
Putting this here as an answer because it's too long for a comment.
Here's one of the documented causes of the LNK2019 ("unresolved external symbol") linker error (source):
You mix code that uses native wchar_t with code that doesn't. C++ language conformance work that was done in Visual C++ 2005 made wchar_t a native type by default. You must use the /Zc:wchar_t- compiler option to generate code compatible with modules compiled by using earlier versions of Visual C++. If not all modules have been compiled by using the same /Zc:wchar_t settings, type references may not resolve to compatible types. Verify that wchar_t types in all modules are compatible, either by updating the types that are used, or by using consistent /Zc:wchar_t settings when you compile.
So that could be the main reason why /Zc:wchar_t- is there in all cl.exe-related mkspec files and also why you probably don't need it.
I like having native wchar_t because it sometimes makes it easy to convert between the strings that the Windows API expects and QString.
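A sketch of the conversion this enables, assuming native wchar_t so that std::wstring and the wide Windows API line up (QString::toStdWString is the documented Qt API for this):

// Sketch: with native wchar_t, QString converts cleanly to the
// wide strings the Windows API expects.
#include <QString>
#include <string>
#include <windows.h>

int main() {
    QString title("Report");
    std::wstring native = title.toStdWString(); // needs /Zc:wchar_t (the default)
    MessageBoxW(NULL, native.c_str(), L"Demo", MB_OK);
    return 0;
}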
wchar_t should be a built-in type, like bool or long. No headers are needed to define it.
You used that "wchar_t is undefined type" option, then typedef'd wchar_t as unsigned short, and now you wonder why nothing works anymore?
That option is perhaps for compatibility with some old legacy pre-wchar_t code. Just simply ... never use it. Otherwise no other C++ code will link against yours, because functions that take wchar_t parameters are name-mangled differently from functions that take unsigned short parameters.
If some library is compiled with some strange options then build it with correct options. When needed then fix its code. If you can not do it then you should not use that library. Every line of code in your C++ project is yours to maintain.

C++ defines for a 'better' Release mode build in VS

I currently use the following preprocessor defines, and various optimization settings (a sketch of their typical placement follows the list):
WIN32_LEAN_AND_MEAN
VC_EXTRALEAN
NOMINMAX
_CRT_SECURE_NO_WARNINGS
_SCL_SECURE_NO_WARNINGS
_SECURE_SCL=0
_HAS_ITERATOR_DEBUGGING=0
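As referenced above, a minimal sketch of where the windows.h-related defines usually live, assuming they are set before the first include of windows.h (project-wide /D flags achieve the same thing):

// Sketch: these macros only take effect if defined before <windows.h>.
#define WIN32_LEAN_AND_MEAN   // trims rarely-used Windows headers
#define VC_EXTRALEAN          // trims even more
#define NOMINMAX              // stops windows.h defining min/max macros
#include <windows.h>
#include <algorithm>

int main() {
    return std::min(0, 1);    // would break without NOMINMAX (macro clash)
}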
My question is what other things do fellow SOers use, add, define, in order to get a Release Mode build from VS C++ (2008,2010) to be as performant as possible?
btw, I've tried PGO etc.; it does help a bit, but nothing that comes to parity with GCC. Also, I'm not using streams; the C++ I'm talking about is more like C, but making use of templates and STL algorithms etc.
As it stands now, very simple code segments pale in comparison performance-wise to what GCC produces on, say, an equivalent x86 machine running Linux (2.6+ kernel) using -O2.
Side-Note: I believe a lot of the issues relate directly to the STL version (Dinkum) provided by MS. Could people please elaborate on experiences using STLPort etc with VS C++.
I don't see how the inclusion of:
_CRT_SECURE_NO_WARNINGS
_SCL_SECURE_NO_WARNINGS
...gives you a better or more performant build. All you are doing is disabling the warnings about the deprecated MS CRT functions. If you are doing this because you know what you are doing and require platform-agnostic code, fine; otherwise I would reconsider.
UPDATE: Furthermore, the compiler can only do so much. I'd wager you would get more performant code if you instrumented and fixed your existing hotspots rather than trying to eke out tiny percentage gains (if that) from the compiling and linking phase.
UPDATE2: _HAS_ITERATOR_DEBUGGING cannot be used when compiling release builds anyway, according to MSDN. WIN32_LEAN_AND_MEAN and VC_EXTRALEAN (and probably NOMINMAX, although performance isn't the chief reason to define it) might give you some performance boost, although all the rest have dubious value. You should favour correct, fast code over (maybe, and I stress maybe) slightly faster but more risk-prone code.

DLLs and STLs and static data (oh my!)

OK..... I've done all the reading on related questions, and a few MSDN articles, and about a day's worth of googling.
What's the current "state of the art" answer to this question:
I'm using VS 2008, C++, unmanaged code. I have a solution file with quite a few DLLs and quite a few EXEs. As long as I completely control the build environment, such that all pieces and parts are built with the same flags and use the same runtime libraries, and nothing statically links the CRT, am I OK to pass STL objects around?
It seems like this should be OK, but depending on which article you read, there's lots of Fear, Uncertainty, and Doubt.
I know there are all sorts of problems with templates that produce static data behind the scenes (every DLL would get its own copy, leading to heartache), but what about regular old STL?
As long as they ALL use the exact same version of the runtime DLLs, there should be no problem with STL. But once you happen to have several around, they will, for instance, use different heaps, leading to no end of trouble.
We successfully pass STL objects around in our application which is made up from dozens of DLLs. To ensure it works one of our automated tests that runs at every build is to verify the settings for all projects. If you add a new project and misconfigure it, or break the configuration of an existing project, the build fails.
The settings we check are as follows. Note not all of these will cause issues, but we check them for consistency.
#defines
_WIN32_WINNT
STRICT
_WIN32_IE
NDEBUG
_DEBUG
_SECURE_SCL
Compiler options
DebugInformationFormat
WholeProgramOptimization
RuntimeLibrary
We use stl collections in our application and pass them to and from methods in different dlls (usually as references). This doesn't cause any trouble.
The only area where we have had trouble is where one DLL allocates memory and another DLL tries to delete it. This is reported as bad, but I am not sure why. However, it only seems to be a problem on Debug builds (where it is reported); it still works on release builds. Having said that, wherever I come across this I do fix it.
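A minimal sketch of that failure mode (the exported function is hypothetical; the usual fix is to free memory in the module that allocated it):

// In producer.dll:
extern "C" __declspec(dllexport) int* make_buffer() {
    return new int[64];   // allocated on producer.dll's CRT heap
}

// In the EXE:
extern "C" __declspec(dllimport) int* make_buffer();

int main() {
    int* p = make_buffer();
    delete[] p;  // frees on the EXE's CRT heap if the CRTs differ: undefined behavior
    return 0;    // safer: export a matching free_buffer() from producer.dll
}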
If I were writing a 3rd-party library, I would think twice about using STL parameters in the API. Previously (VC6) we had to use OCI (Oracle's C API) as opposed to OCCI (Oracle's C++ API), because the latter only worked with the Microsoft STL implementation and we were using STLport. Of course, if you enable your clients to build the library with their own STL implementation, this is not an issue.