I'm quite new to C++, but I've got the hang of the fundamentals. I've come across the use of "Uint32" (in various capitalizations) and similar data types when reading others' code, but I can't find any documentation mentioning them. I understand that "Uint32" is an unsigned int with 32 bits, but my compiler doesn't. I'm using Visual C++ Express, and it doesn't recognize any form of it from what I can tell.
Are there some compilers that read those data types by default, or have these programmers declared them themselves as classes or #define constants?
I can see a point in using them to know exactly how long your integer will be, since the normal declaration seems to vary depending on the system. Are there any other pros or cons to using them?
Unix platforms define these types in stdint.h; this is the preferred method of ensuring type sizing when writing portable code.
Microsoft's platforms do not define this header, which is a problem when going cross-platform. If you're not using Boost Integer Library already, I recommend getting Paul Hsieh's portable stdint.h implementation of this header for use on Microsoft platforms.
Update: Visual Studio 2010 and later do define this header.
The C99 header file stdint.h defines typedefs of this nature of the form uint32_t. As far as I know, standard C++ doesn't provide a cstdint version of this with the symbols in namespace std, but some compilers may, and you will typically be able to include the C99 header from C++ code anyway. The next version of C++ will provide the cstdint header.
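A minimal sketch of what this looks like in practice, assuming your compiler ships the C99 <stdint.h> (or you've dropped in one of the replacements mentioned above):

// Sketch only; assumes <stdint.h> is available on your toolchain.
#include <stdint.h>

int main() {
    uint32_t flags = 0;      // exactly 32 bits, unsigned
    int16_t  delta = -42;    // exactly 16 bits, signed
    flags |= 1u << 3;
    return delta < 0 ? 0 : (int)flags;
}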
You will often see code from other people who use non-standard forms of this theme, such as Uint32_t or Uint32 or uint32 etc. They typically just provide a single header that defines these types within the project. Probably this code was originally developed a long time ago, and they never bothered to sed replace the definitions out when C99 compilers became common.
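Such a project header is usually nothing more than a handful of typedefs; a hypothetical example (the header name and spellings here are invented purely to illustrate the pattern):

// hypothetical project_types.h -- the single per-project header described above
#ifndef PROJECT_TYPES_H
#define PROJECT_TYPES_H

#include <stdint.h>   // on a C99-era compiler; older projects typedef from int/short instead

typedef uint8_t  Uint8;
typedef uint16_t Uint16;
typedef uint32_t Uint32;
typedef int32_t  Sint32;

#endif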
Visual C++ doesn't support the fixed-width integer types, because it doesn't include support for C99. Check out the answers to my question on this subject for various options you have for using them.
The main reason for using them is that you then don't have to worry about any possible problems arising when switching between 64-bit and 32-bit OSes.
Also, if you are interfacing with any legacy code that you knew was destined for 32-bit or even 16-bit systems, then it avoids potential problems there as well.
Try UINT32 for Microsoft.
The upper case makes it clear that this is defined as a macro. If you try to compile using a different compiler that doesn't already contain the macro, you can define it yourself and your code doesn't have to change.
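A sketch of what defining it yourself could look like (this assumes unsigned int is 32 bits on the compilers you care about, which you'd want to verify):

// Sketch: fall back to your own definition on compilers that lack UINT32.
#ifndef _MSC_VER
typedef unsigned int UINT32;   // assumes a 32-bit int on your targets
#endif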
uint32 et al. are defined by macros. They solve a historic portability problem of there being few guarantees across platforms (back when there were more platform options than now) of how many bits you'd get when you asked for an int or a short. (One now-defunct C compiler for the Mac provided 8-bit shorts!)
Since at least C++11 we have had lovely fixed width integers out of the box, for example in C++'s <cstdint> or in C's <stdint.h> (for example std::uint32_t, std::int8_t, so with or without the std:: in front of them), and even macros for integer constants of minimum width (INT16_C, UINT32_C and so on).
Yet we have to deal with libraries every day that define their own fixed width integers, and you might have seen for example sf::Int32, quint32, boost::uint32_t, Ogre::uint32, ImS32, ... I can go on and on if you want me to. You probably know a couple more too.
Sometimes these typedefs (also often macro definitions) may lead to conflicts, for example when you want to pass a std fixed width integer to a function from a library that expects a fixed width integer with exactly the same width, but defined differently.
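As a contrived sketch of that kind of friction (the library namespace and its typedef below are invented for illustration):

#include <cstdint>

namespace somelib {              // stand-in for a third-party library
    typedef long Int32;          // 32 bits on the library's original platform...
    void consume(Int32* p);
}

void f(std::int32_t* value) {
    (void)value;                 // silence unused-parameter warnings in this sketch
    // somelib::consume(value);  // may not compile: std::int32_t is often 'int',
                                 // and 'int*' does not convert to 'long*'
}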
The point of fixed width integers is that they have a fixed size, which is what we need in many situations, as you know. So why would all these libraries go and typedef exactly the same integers we already have in the C++ standard? Those extra defines are sometimes confusing, redundant, and may invade your codebase, which are very bad things. And if they don't have the width and signedness they promise to have, they at least sin against the principle of least astonishment, so what's their point, I hereby ask you?
Why do so many libraries define their own fixed width integers?
Probably for some of the reasons below:
they started before C++11 or C11 (examples: GTK, Qt, libraries internal to GCC, Boost, FLTK, GTKmm, Jsoncpp, Eigen, Dlib, OpenCV, Wt)
they want to have readable code, within their own namespace or class (having their own namespace, like Qt does, may improve readability of well written code).
they are build-time configurable (e.g. with GNU autoconf).
they want to be compilable with old C++ compilers (e.g. some C++03 one)
they want to be cross-compilable to cheap embedded microcontrollers whose compiler is not a full C++11 compiler.
they may have generic code (or templates, e.g. in Eigen or Dlib) to perhaps support arbitrary-precision arithmetic (or bignums), perhaps using GMPlib.
they want to be somehow provable with Frama-C or DO-178C certified (for embedded critical software systems)
they are processor specific (e.g. asmjit which generates machine code at runtime on a few architectures)
they might interface to specific hardware or programming languages (Tensorflow, OpenCL, Cuda).
they want to be usable from Python or GNU guile.
they could be operating-system specific.
they add some additional runtime checks, e.g. against division by 0 (or other undefined behavior) or overflow (that the C++ standard cannot require, for performance or historical reasons)
they are intended to be easily usable from machine-generated C++ code (e.g. RefPerSys, ANTLR, ...)
they are designed to be callable from C code (e.g. libgccjit).
etc... Finding other good reasons is left as an exercise to the reader.
Does anybody know if it is possible to use the code from a Win32 static library
in a cross-platform one? I know I have to make some changes to the code, but I think C++ code is the same on all platforms except for the classes. For simple types, somebody told me to use int16_t or int_fast16_t instead of short, for example. Is it correct that I could just copy all the headers and files from Visual Studio and compile them for Mac with, for example, CodeLite, if the code itself is cross-platform?
Best Regards,
K
It depends on what's in that library. The fact that it's a Visual C++ Win32 static library project tells us how your library is compiled. It does not tell us anything about what code goes into that library. It might be all perfectly portable Standard C++ code. It might as well be code where every second line is a Windows API function call that will obviously not be portable.
Whether or not the code of a library will be portable all depends on the code. Replacing short with int16_t or int_fast16_t will do nothing to increase the portability of code (unless the original use of short assumed some implementation-defined properties). So I'm not sure what a blanket replacement of short with int16_t is supposed to achieve.
short is a fundamental type built into the language. int16_t is a type defined by the standard library if the target platform supports it. So, in a way, int16_t is actually less portable than short. While int_fast16_t is always defined, so is short. Use the fixed-width integer types of the standard library if you need the semantics that they provide. If you don't, then there's no reason to use them.
Note that to use the fixed-width integer types in C++, include the C++ header <cstdint> rather than <stdint.h>, which is not guaranteed to be present in C++. Also note that <cstdint> is only guaranteed to place declarations in namespace std. So for maximum portability, use std::int16_t, because it is not guaranteed that ::int16_t is available.
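A minimal sketch of what that looks like in code (purely illustrative):

#include <cstdint>

int main() {
    std::int16_t narrow = 1000;        // exactly 16 bits, present only if the platform has such a type
    std::int_fast16_t fast = narrow;   // at least 16 bits, always available
    short plain = static_cast<short>(fast);  // also always available
    return plain == 1000 ? 0 : 1;
}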
If the code of the library is portable, then all you will need to build that library on another platform is a buildsystem for that platform. So yes, it is correct that if the code is portable, all you need to do is compile that code on the other platform using whatever tools you use on that other platform…that's kind of the very definition of what it means for code to be portable… ;-)
This question is not the same as either of these:
Setting Visual C++ Studio/Express to strict ANSI mode
Is there an equivalent to -pedantic for gcc when using Microsoft's Visual C++ compiler?
I am running Windows 7 and Visual Studio Express 2012, but I expect neither to influence the answer to this question.
tl;dr How would I most appropriately counteract/prevent/tolerate the effects of the following excerpt from math.h, while still being allowed to compile with Visual C++?
#if !__STDC__
/* Non-ANSI names for compatibility */
#define DOMAIN _DOMAIN
#define SING _SING
#define OVERFLOW _OVERFLOW
#define UNDERFLOW _UNDERFLOW
#define TLOSS _TLOSS
#define PLOSS _PLOSS
#define matherr _matherr
Background: I'm writing a hobby text-based C++ project whose overall goals are far outside this question's scope. I'm using GNU Make (for familiarity and portability) to compile it with both Cygwin g++ and cl.exe, and assuming a strictly standards-compliant environment... so far. I'm beginning to think that Windows simply doesn't allow such an assumption.
I have an enum whose members include OVERFLOW and UNDERFLOW. The problem described below threatens to force me to change those names, but I would prefer to keep them because they are most appropriate for my purpose, notwithstanding outside influences such as Windows header files.
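To make the collision concrete, here is roughly the shape of the problem (a simplified stand-in for my actual enum):

// Simplified stand-in for my enum (the real one has more members).
enum ArithmeticStatus { OK, OVERFLOW, UNDERFLOW };

// As soon as a translation unit also includes <math.h> with !__STDC__, the names
// OVERFLOW and UNDERFLOW are macro-substituted (on MSVC they expand to small
// integer constants), and the enum declaration above no longer compiles.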
GCC, Visual C++, and Mac OS X's header files (independent of llvm-gcc) all define OVERFLOW and UNDERFLOW, among other non-standard macros, in math.h by default.
GCC has a selection of documented means of cleanly preventing those definitions.
Mac OS X has a couple of undocumented means to do the same, one of which (_POSIX_C_SOURCE) coincides with GCC's documentation. (I mention this in the interest of compensating for Apple's lack of documentation; I have a history with these identifiers.)
MSDN documents the /u command-line option as a means (via the __STDC__ macro) of preventing the definition of a few non-standard macros in Visual C++. As shown at the beginning of this question, the __STDC__ macro also prevents definition of OVERFLOW and UNDERFLOW.
Upon discovering that the /u switch would prevent the definitions I was concerned with, I added it to my makefile. But then I got a new error from line 44 of crtdefs.h:
error C1189: Only Win32 target supported!
This is because _WIN32 is no longer defined. A bit of searching indicated that crtdefs.h is related to the Windows Driver Development Kit. I'm not developing a driver; can I somehow not use that header? Or do I just need to rename my enum members to tolerate non-standard Windows behavior?
Instead of using the /u compiler switch, which has multiple effects, just use /D__STDC__=1 which causes the __STDC__ macro to be defined, and nothing else.
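For example (the source file name is just a placeholder):

cl /D__STDC__=1 main.cpp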
Two possibilities spring to mind.
The first is to make sure you reverse the specific effects whenever you include math.h, with something like:
#include <math.h>
#undef OVERFLOW
#undef UNDERFLOW
Now, that may also cause problems down the track somewhere with code that expects those things to be defined properly. However, even in that case, you could modify your software to use a different name for the math.h ones:
#include <math.h>
#undef OVERFLOW
#undef UNDERFLOW
#define MATH_H_OVERFLOW _OVERFLOW
#define MATH_H_UNDERFLOW _UNDERFLOW
You'd just have to ensure that all source code (already-compiled code like libraries won't matter) that wants to use the math.h ones, uses the MATH_H_* constants instead of the ones in your enumeration.
The second is to think very carefully about the amount of effort you're putting into this quest, as compared to the amount of effort it would take to simply rename your enum members to something that doesn't conflict. Something like using Overflow for your enumeration (instead of OVERFLOW) would be my first attempt since there's still exactly the same amount of information in both, and it removes the immediate conflict.
Yes, I know it would be nice to find a way that doesn't involve that, but you should be in the business of delivering software rather than spending inordinate amounts of time working around minor nitpicks with your environment :-)
In C++11 you can use scoped enums:
enum class Flows { Underflow, Overflow };
You now refer to Flows::Underflow and Flows::Overflow.
Even in C++98 it's good practice to simulate this with classes:
class Flows
{
public:
enum Value { Underflow, Overflow };
};
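Call sites for the C++98 version then look like this (a quick usage sketch, assuming the class above is in scope):

void handle()
{
    Flows::Value f = Flows::Overflow;            // the C++98 class-wrapped spelling
    if (f == Flows::Overflow) { /* handle the overflow case */ }
}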
It's common practice where I work to avoid directly using built-in types and instead include a standardtypes.h that has items like:
// \Common\standardtypes.h
typedef double Float64_T;
typedef int SInt32_T;
Almost all components and source files become dependent on this header, but some people argue that it's needed to abstract the size of the types (in practice this hasn't been needed).
Is this a good practice (especially in large-componentized systems)? Are there better alternatives? Or should the built-in types be used directly?
You can use the standardized versions available in modern C and C++ implementations in the header file stdint.h.
It has types like uint8_t, int32_t, etc.
In general this is a good way to protect code against platform dependency. Even if you haven't experienced a need for it to date, it certainly makes the code easier to interpret since one doesn't need to guess a storage size as you would for 'int' or 'long' which will vary in size with platform.
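For instance, the header from the question could delegate to those standard names instead of guessing at built-in type sizes (a sketch; UInt32_T is an extra name added here purely for illustration):

// \Common\standardtypes.h, reworked on top of <stdint.h> (sketch)
#include <stdint.h>

typedef double   Float64_T;   // still assumes 'double' is a 64-bit IEEE-754 type
typedef int32_t  SInt32_T;
typedef uint32_t UInt32_T;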
It would probably be better to use the standard types defined in stdint.h et al, e.g. uint8_t, int32_t, etc. I'm not sure if they are part of C++ yet, but they are in C99.
Since it hasn't been said yet, and even though you've already accepted an answer:
Only use concretely-sized types when you need concretely-sized types. Mostly, this means when you're persisting data, if you're directly interacting with hardware, or using some other code (e.g. a network stack) that expects concretely-sized types. Most of the time, you should just use the abstractly-sized types so that your compiler can optimize more intelligently and so that future readers of your code aren't burdened with useless details (like the size and signedness of a loop counter).
(As several other responses have said, use stdint.h, not something homebrew, when writing new code and not interfacing with the old.)
The biggest problem with this approach is that so many developers do it that if you use a third-party library you are likely to end up with a symbol name conflict, or multiple names for the same types. It would be wise where necessary to stick to the standard implementation provided by C99's stdint.h.
If your compiler does not provide this header (as for example VC++), then create one that conforms to that standard. One for VC++ for example can be found at https://github.com/chemeris/msinttypes/blob/master/stdint.h
In your example I can see little point in defining size-specific floating-point types, since these are usually tightly coupled to the FP hardware of the target and the representation used. Also, the range and precision of a floating point value are determined by the combination of exponent width and significand width, so the overall width alone does not tell you much, or guarantee compatibility across platforms. With respect to single and double precision, there is far less variability across platforms, most of which use IEEE-754 representations. On some 8-bit compilers float and double are both 32-bit, while long double on x86 GCC is 80 bits, but only 64 bits in VC++. The x86 FPU supports 80 bits in hardware.
I think it's not a good practice. Good practice is to use something like uint32_t where you really need a 32-bit unsigned integer, and if you don't need a particular range, just use unsigned.
It might matter if you are making cross-platform code, where the size of native types can vary from system to system. For example, the wchar_t type can vary from 8 bits to 32 bits, depending on the system.
Personally, however, I don't think the approach you describe is as practical as its proponents may suggest. I would not use that approach, even for a cross-platform system. For example, I'd rather build my system to use wchar_t directly, and simply write the code with an awareness that the size of wchar_t will vary depending on platform. I believe that is FAR more valuable.
As others have said, use the standard types as defined in stdint.h. I disagree with those who say to only use them in some places. That works okay when you work with a single processor. But when you have a project which uses multiple processor types (e.g. ARM, PIC, 8051, DSP) (which is not uncommon in embedded projects) keeping track of what an int means or being able to copy code from one processor to the other almost requires you to use fixed size type definitions.
At least it is required for me, since in the last six months I worked on 8051, PIC18, PIC32, ARM, and x86 code for various projects and I can't keep track of all the differences without screwing up somewhere.
A program written in Visual C/C++ 2005/2008 might not compile with another compiler such as GNU C/C++ or vice-versa. For example when trying to reuse code, which uses windows.h, written for a particular compiler with another, what are the differences to be aware of?
Is there any information about how to produce code which is compatible with either one compiler or another e.g. with either GC/C++ or MSVC/C++? What problems will attempting to do this cause?
What about other compilers, such as LCC and Digital Mars?
The first thing to do when trying to compile code written for MSVC to other compilers is to compile it with Microsoft-extensions switched off. (Use the /Za flag, I think). That will smoke out lots of things which GCC and other compilers will complain about.
The next step is to make sure that Windows-specific APIs (MFC, Win32, etc.) are isolated in Windows-specific files, effectively partitioning your code into "generic" and "windows-specific" modules.
Remember the argument that if you want your web page to work on different browsers, then you should write standards-compliant HTML?
Same goes for compilers.
At the language level, if your code compiles without warnings on GCC with -std=c89 (or -std=c++98 for C++), -pedantic -Wall, plus -Wextra if you're feeling brave, and as long as you haven't used any of the more blatant GNU extensions permitted by -pedantic (which are hard to do accidentally) then it has a good chance of working on most C89 compilers. C++ is a bit less certain, as you're potentially relying on how complete the target compiler's support is for the standard.
Writing correct C89 is somewhat restrictive (no // comments, declarations must precede statements in a block, no inline keyword, no stdint.h and hence no 64-bit types, etc), but it's not too bad once you get used to it. If all you care about is GCC and MSVC, you can switch on some language features you know MSVC has. Otherwise you can write little "language abstraction" headers of your own. For instance, one which defines "inline" as "inline" on GCC and MSVC/C++, but "__inline" for MSVC/C. Or an MSVC stdint.h is easy enough to find or write.
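For instance, such a header might look roughly like this (a sketch; the macro name INLINE is arbitrary, and using your own macro avoids redefining the keyword itself):

/* inline_compat.h -- sketch of the "language abstraction" header idea */
#if defined(_MSC_VER) && !defined(__cplusplus)
#define INLINE __inline          /* MSVC compiling C spells the keyword __inline */
#else
#define INLINE inline            /* GCC, and MSVC compiling C++ */
#endif

static INLINE int square(int x) { return x * x; }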
I've written portable code successfully in the past - I was working mostly on one platform for a particular product, using GCC. I also wrote code that was for use on all platforms, including Windows XP and Mobile. I never compiled it for those platforms prior to running a "test build" on the build server, and I very rarely had any problems. I think I might have written bad code that triggered the 64bit compatibility warning once or twice.
The Windows programmers going the other way caused the occasional problem, mostly because their compiler was less pedantic than ours, so we saw warnings they didn't, rather than things that GCC didn't support at all. But fixing the warnings meant that when the code was later used on systems with more primitive compilers, it still worked.
At the library level, it's much more difficult. If you #include and use Windows APIs via windows.h, then obviously it's not going to work on linux, and the same if you use KDE with GCC and then try to compile with MSVC.
Strictly speaking that's a platform issue, not a compiler issue, but it amounts to the same thing. If you want to write portable code, you need an OS abstraction API, like POSIX (or a subset thereof) that all your targets support, and you need to be thinking "portable" when you write it in the first place. Taking code which makes heavy use of windows-specific APIs, and trying to get it to work on GCC/linux, is basically a complete rewrite AFAIK. You might be better off with WINE than trying to re-compile it.
You're mixing up "compilers" and "OSs". <windows.h> is not something that the MSVC C compiler brings to the table: it's the C-specific embodiment of the Windows API. You can get it independently from Visual Studio. Any other C compiler on Windows is likely to provide it. On the Linux side, for example, you have <unistd.h>, <pthread.h> and others. They are not an essential part of GCC, and any other compiler that compiles for Linux would provide them.
So you need to answer two different questions: how can I code C in such a way that any compiler accepts it? And how do I hide my dependencies on OS?
As you can tell from the diverse answers, this topic is fairly involved. Bearing that in mind here are some of the issues I faced when recently porting some code to target three platforms (msvc 8/Windows, gcc 4.2/Linux, gcc 3.4/embedded ARM9 processor). It was originally only compiling under Visual Studio 2005.
a) Much code that's written on the Windows platforms uses types defined in windows.h. I've had to create a "windows_types.h" file with the following in it:
#ifndef _WIN32
typedef short INT16;
typedef unsigned short UINT16;
typedef int INT32;
typedef unsigned int UINT32;
typedef unsigned char UCHAR;
typedef unsigned long long UINT64;
typedef long long INT64;
typedef unsigned char BYTE;
typedef unsigned short WORD;
typedef unsigned long DWORD;
typedef void * HANDLE;
typedef long LONG;
#endif
Ugly, but much easier than modifying the code that, previously, was only targeting Windows.
b) The typename keyword was not required in templated code to declare types. MSVC is lax in this regard (though I assume certain compiler switches would have generated a warning). Had to add it in a number of places.
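A typical instance of that issue looks like this (simplified sketch):

template <typename Container>
void printFirst(const Container& c)
{
    // 'typename' is required because Container::const_iterator is a dependent type;
    // older MSVC accepted the declaration without it, gcc rejects it.
    typename Container::const_iterator it = c.begin();
    (void)it;
}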
c) Simple, but time-consuming: Windows file names are not case sensitive, and many #included files were specified with incorrect case, causing problems under Linux.
d) There was a fair chunk of code using the Windows API for many things. An example was the use of CRITICAL_SECTION and InterlockedIncrement. We used the Boost libraries as much as possible to replace these constructs, but reworking code is time-consuming.
e) A lot of the code relied on headers being included in precompiled headers. We had issues with using pch on gcc3.4 so we had to ensure that all .h/cpp files correctly included all their dependencies (as they should have in the first place).
f) VS 2005 has two nasty bugs. auto_ptrs can be assigned to anything, and temporary variables are allowed to be passed to reference parameters. Both fail to compile (thankfully!) under gcc, but rework is required.
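The second of those bugs looks roughly like this (a simplified sketch; upperCase is just an example function):

#include <cctype>
#include <string>

void upperCase(std::string& s)                  // modifies its argument in place
{
    for (std::string::size_type i = 0; i < s.size(); ++i)
        s[i] = static_cast<char>(std::toupper(static_cast<unsigned char>(s[i])));
}

void demo()
{
    // upperCase(std::string("hello"));         // VS 2005 accepted this, but gcc and the
                                                //   standard reject binding a temporary to a
                                                //   non-const reference
    std::string s("hello");
    upperCase(s);                               // the portable rework
}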
g) Bizarrely, we had template code that was trying to explicitly specialise class template functions. Not allowed. Again gcc refused, VS 2005 let it go. Easy enough to rework into regular overloads once the problem is understood.
h) Under VS 2005 std::exception is allowed to be constructed with a string. Not allowed under gcc or the standard. Rework your code to prefer to use one of the derived exception classes.
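For example (sketch):

#include <stdexcept>
#include <string>

void fail(const std::string& why)
{
    // throw std::exception(why.c_str());       // compiles on VS 2005 only (non-standard constructor)
    throw std::runtime_error(why);              // portable: use a derived exception class
}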
Hopefully that's the kind of information you were looking for!
Well, this is quite a difficult question. The fact is that MSVC does not support the newest C standard; about its C++ compliance I can't tell you anything. However, "Windows" C is understood by both MSVC and gcc; you just cannot hope for the other way around. E.g. if you use ANSI C99 features, then you might have a hard time "porting" from gcc to MSVC.
But as long as you go the MSVC -> gcc way, your chances will be better. The only point you have to be aware of is the libraries. Most of the libraries on Windows are supposed to work with MSVC, so you need some extra tools to make them accessible to gcc as well.
LCC is a rather old system which, AFAICT, does not support much of ANSI C99; it also needs tools from MSVC to work properly. LCC is "just" a compiler.
lcc-win32 is a C development system striving to be ANSI C99 compliant. It comes along with a linker, IDE, etc.
I can't tell you about the state of the Digital Mars implementation.
Then there is also Pelles C, which is a fully fledged IDE as well.
And we have OpenWatcom hanging around, which once was quite a decent system, but I can't tell how conformant it is.
All in all, the "best" you can hope for is the easier way from MSVC -> other system, but it will probably be much worse the other way round.
Regards
Friedrich
VS2008 is a lot more standards-compliant than 2005.
I have had more problems going the other way, especially with the 'feature' of gcc that lets you allocate an array with a variable size at run time ("int array[variable]"), which is just pure evil.
A program written in Visual C/C++ 2005/2008 might not compile with another compiler such as GNU C/C++ or vice-versa.
This is true if you either (1) use some sort of extension available in one compiler but not another (the C++ standard, for instance, requires the typename and template keywords in a few places, but many compilers -- including Visual C++ -- don't enforce this; gcc used to not enforce this either, but changed in 3.4) or (2) use some standard-compliant behavior implemented on one compiler but not another (right now the poster boy for this is exported templates, but only one or two compilers support this, and Visual C++ and gcc are not in that group).
For example when trying to reuse code, which uses windows.h, written for a particular compiler with another,
I've never seen a problem doing this. I have seen a problem using Microsoft's windows.h in gcc. But when I use gcc's windows.h in gcc and Microsoft's windows.h in Visual C++, I have access to all of the documented functions. That's the definition of "implementing windows.h", after all.
what are the differences to be aware of?
The main one I've seen is people not knowing about the dependent template/typename thing mentioned above. I find it funny that a number of people think gcc is not smart enough to do what Visual C++ does, when in reality gcc had the feature first and then decided to remove it in the name of standards compliance.
In the near future you will run into problems using C++0x features. But both gcc and Visual C++ have implemented the easier things in that standard.