Strict standards-compliance with Visual C++

This question is not the same as either of these:
Setting Visual C++ Studio/Express to strict ANSI mode
Is there an equivalent to -pedantic for gcc when using Microsoft's Visual C++ compiler?
I am running Windows 7 and Visual Studio Express 2012, but I expect neither to influence the answer to this question.
tl;dr How would I most appropriately counteract/prevent/tolerate the effects of the following excerpt from math.h, while still being allowed to compile with Visual C++?
#if !__STDC__
/* Non-ANSI names for compatibility */
#define DOMAIN _DOMAIN
#define SING _SING
#define OVERFLOW _OVERFLOW
#define UNDERFLOW _UNDERFLOW
#define TLOSS _TLOSS
#define PLOSS _PLOSS
#define matherr _matherr
Background: I'm writing a hobby text-based C++ project whose overall goals are far outside this question's scope. I'm using GNU Make (for familiarity and portability) to compile it with both Cygwin g++ and cl.exe, and assuming a strictly standards-compliant environment... so far. I'm beginning to think that Windows simply doesn't allow such an assumption.
I have an enum whose members include OVERFLOW and UNDERFLOW. The problem described below threatens to force me to change those names, but I would prefer to keep them because they are most appropriate for my purpose, notwithstanding outside influences such as Windows header files.
GCC, Visual C++, and Mac OS X's header files (independent of llvm-gcc) all define OVERFLOW and UNDERFLOW, among other non-standard macros, in math.h by default.
GCC has a selection of documented means of cleanly preventing those definitions.
Mac OS X has a couple of undocumented means to do the same, one of which (_POSIX_C_SOURCE) coincides with GCC's documentation. (I mention this in the interest of compensating for Apple's lack of documentation; I have a history with these identifiers.)
MSDN documents the /u command-line option as a means (via the __STDC__ macro) of preventing the definition of a few non-standard macros in Visual C++. As shown at the beginning of this question, the __STDC__ macro also prevents definition of OVERFLOW and UNDERFLOW.
Upon discovering that the /u switch would prevent the definitions I was concerned with, I added it to my makefile. But then I got a new error from line 44 of crtdefs.h:
error C1189: Only Win32 target supported!
This is because _WIN32 is no longer defined. A bit of searching indicated that crtdefs.h is related to the Windows Driver Development Kit. I'm not developing a driver; can I somehow not use that header? Or do I just need to rename my enum members to tolerate non-standard Windows behavior?

Instead of using the /u compiler switch, which has multiple effects, just use /D__STDC__=1 which causes the __STDC__ macro to be defined, and nothing else.
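For illustration, here is a minimal sketch (the file and enum names are my own invention, not from the question) of a translation unit that should then compile cleanly once the macro is supplied on the command line, e.g. cl /D__STDC__=1 flows.cpp:
// flows.cpp -- hypothetical example; compile with: cl /D__STDC__=1 flows.cpp
#include <math.h>   // with __STDC__ defined, OVERFLOW/UNDERFLOW are not #defined here

enum FlowStatus { UNDERFLOW, OVERFLOW };   // no macro collision now

int main()
{
    FlowStatus s = OVERFLOW;
    return s == OVERFLOW ? 0 : 1;
}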

Two possibilities spring to mind.
The first is to make sure you reverse the specific effects whenever you include math.h, with something like:
#include <math.h>
#undef OVERFLOW
#undef UNDERFLOW
Now, that may also cause problems down the track somewhere with code that expects those things to be defined properly. However, even in that case, you could modify your software to use a different name for the math.h ones:
#include <math.h>
#undef OVERFLOW
#undef UNDERFLOW
#define MATH_H_OVERFLOW _OVERFLOW
#define MATH_H_UNDERFLOW _UNDERFLOW
You'd just have to ensure that all source code (already-compiled code like libraries won't matter) that wants to use the math.h ones, uses the MATH_H_* constants instead of the ones in your enumeration.
The second is to think very carefully about the amount of effort you're putting into this quest, as compared to the amount of effort it would take to simply rename your enum members to something that doesn't conflict. Something like using Overflow for your enumeration (instead of OVERFLOW) would be my first attempt since there's still exactly the same amount of information in both, and it removes the immediate conflict.
Yes, I know it would be nice to find a way that doesn't involve that, but you should be in the business of delivering software rather than spending inordinate amounts of time working around minor nitpicks with your environment :-)

In C++11 you can use scoped enums:
enum class Flows { Underflow, Overflow };
You now refer to Flows::Underflow and Flows::Overflow.
Even in C++98 it's good practice to simulate this with a class:
class Flows
{
public:
enum Value { Underflow, Overflow };
};
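A minimal usage sketch (the main() below is mine, purely for illustration); note that the scoping guards against ordinary name clashes, while the mixed-case spelling is what avoids the all-caps macros:
#include <math.h>   // may define OVERFLOW/UNDERFLOW, but not Overflow/Underflow

enum class Flows { Underflow, Overflow };   // the C++11 form from above

int main()
{
    Flows f = Flows::Overflow;
    // With the C++98 class wrapper, the access is spelled the same way: Flows::Overflow
    return f == Flows::Underflow ? 1 : 0;
}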

Related

What are the dangers of using #pragma once?

Modern C and C++ compilers support the non-standard #pragma once preprocessor directive, which serves a similar purpose to the classic header guard:
#ifndef hopefully_unique_identifier_that_doesnt_hurt_the_code
#define hopefully_unique_identifier_that_doesnt_hurt_the_code
// some code here
#endif
One problem I'm aware of with the classic approach is that once you've included a header, you have to #undef the header-guard macro in order to include it again (doing so is, to me, a major code smell, but that's beside the point here). The same limitation applies to the #pragma once approach, except that there is no way at all to allow the header to be included more than once.
Another problem with the classic approach is that you may accidentally define the same macro in unrelated places, thus either not including the header as expected or causing some other nasty effect I can't foresee. This is reasonably easy to avoid in practice by keeping to certain conventions, such as basing the macros on UUID-like strings (i.e. effectively random) or, less optimally, on the name of the file they are defined in.
I have only rarely run into either of these potential problems in real life, so I don't really consider them major.
The only potential real-life problem I can think of with #pragma once is that it's not standard -- you're relying on something that may not be available everywhere, even if in practice it is present everywhere (*).
So, what potential problems exist with #pragma once besides the ones I've already mentioned? Am I putting too much faith in it being available, in practice, everywhere?
(*) Some minor compiler that only a handful of people use, excluded.
One problem I have encountered with #pragma once is including the same file when it is located at multiple locations (i.e. duplicate copies): with #pragma once the copies are deemed different files and get included again, whereas an #ifndef/#define guard still catches them.
I have worked with a decent set of compilers so far:
GCC
Clang/LLVM
IBM XLC
Intel C++ Compiler
The only one of these that does not support #pragma once is the IBM XLC compiler, but that one does not even support C++11, so I am not interested in it. If you need to work with the IBM XLC compiler on Blue Gene/Q, then you cannot use #pragma once.
A long time ago, certain compilers did not understand the include-guard idiom and would repeatedly open the header file, only to find that the preprocessor reduced the content to nothing. With those compilers, using #pragma once gave a compile-time benefit. However, the optimization has since been implemented in the major compilers, so this makes no difference nowadays.
Perhaps you have some special compiler for your embedded system. That one might be unable to use #pragma once.
In general I prefer #pragma once because when you duplicate a header file in order to do incremental refactoring by duplication or extending a class, you cannot forget to change the name of the include guard macro.
Therefore I do not know of any hard problem with #pragma once, except on certain compilers.
In using #pragma once you are giving up portability. You are no longer writing C or C++, but something allowed as a compiler extension.
That could cause you headaches if your code ever targets a different platform.
It's for this reason that I never use it.
Given that the name and location of a file is unique, I use that as my include guard. Furthermore, because I have in the past targeted very old preprocessors, I habitually use
#if !defined(foo)
#define foo 1
/*code*/
#endif
which has worked on every platform I've encountered since 1996.

Is there a way to run C code designed for an embedded micro-controller on a normal computer?

I have C code written for an ATmega16 chip, and it is full of keywords like:
flash, eeprom, bit
and macros(?) like
interrupt [TIM1_OVF] void timer1_ovf_isr(void)
that come before function signatures.
Now what I want to do is write and run unit tests that verify the correctness of the controller unit's logic, and I want to be able to run these tests on any computer, without needing the "device" that the code is written for.
I searched a lot and came across "abstracting the hardware" and "replacing them with stubs" kind of solutions, but I'm not sure how I can abstract something like "interrupt [TIM1_OVF]" in the code!
I was wondering if there are any special tools that provide an environment for running this sort of code?
And if I am going about it wrong, can anybody point me in the right direction, given that changing or rewriting (!) the micro-controller's code might not be an option?
Thanks a bunch.
Your examples are not ISO C; they are compiler-specific extensions, and they are not common across AVR compilers, let alone architectures. In many cases they can be worked around by defining macros that require little or no modification of the code. It is a good idea to do that in any case, to make your code portable even just across different vendors' AVR compilers, although a combination of techniques may be required.
Most compilers support an "always include" option that allows a header file to be included from the command line without an explicit #include directive in the source. Creating a header with your compatibility macros, and including it either implicitly as described or explicitly in the code, is a useful technique. For example, for the issues you have mentioned, you might have:
// compatibility.h
#if !defined COMPATIBILITY_INCLUDE
#define COMPATIBILITY_INCLUDE
#if defined __IAR_SYSTEMS_ICC__
#define INTERRUPT( irq, handler ) __interrupt [irq] void handler(void)
#elif defined _WIN32
#define INTERRUPT( irq, handler ) void handler(void)
#define __flash const
#define __eeprom const
#define __bit char
#else
#error Unknown toolchain/environment
#endif
#endif
That will remove the memory-location qualifiers from the Win32 code and define __bit as a char. The interrupt-handler macro will turn a handler into a regular function on Win32; it does require your code to be modified, but since every toolchain does this differently, that is perhaps no bad thing.
For example in this case you would change:
interrupt [TIM1_OVF] void timer1_ovf_isr(void)
{
...
}
to
INTERRUPT( TIM1_OVF, timer1_ovf_isr )
{
...
}
Note that you should use appropriate target macros in the compatibility file - I have guessed at IAR, for example; you may be using a different compiler. Your compiler documentation should specify the available predefined macros; alternatively, the Pre-defined Compiler Macros project on SourceForge is a useful resource.
Some of the transformations may change the code semantically, such as swapping __bit for char: for example, if the bit is assigned a value greater than one and then compared with 1, the embedded target is likely to yield true while the PC build will not. It might be better transformed to _Bool, but then your compiler may warn about implicit conversions. My suggestions are not necessarily the best possible transformations either - consult your compiler's manual for the precise semantics and decide how best to map them to standard C for test builds.
An alternative that preserves the proprietary semantics is to run your unit tests in an instruction-set simulator, using debugger scripting (if available) to implement stubs for hardware interaction; however, that method makes it impossible to use off-the-shelf unit-testing frameworks such as CUnit.
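Returning to the compatibility-header approach, here is a usage sketch (the test harness and the timer_ticks variable are my own inventions, not part of the answer): once INTERRUPT() expands to a plain function on the host build, the ISR can be exercised directly from ordinary test code:
// host_test.c -- hypothetical host-side test built against compatibility.h above
#include "compatibility.h"
#include <assert.h>

extern volatile int timer_ticks;   /* assumed state that the ISR updates */
void timer1_ovf_isr(void);         /* on _WIN32 this is just a normal function */

int main(void)
{
    int before = timer_ticks;
    timer1_ovf_isr();              /* "fire" the interrupt by calling it directly */
    assert(timer_ticks == before + 1);
    return 0;
}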
Depending on your toolchain, you may already have an AVR simulator available, which would allow you to run your unit tests on any PC. For example, IAR provides "C-SPY", an AVR simulator that supports a terminal window, can show register values, can generate interrupts, etc. Assuming you keep your unit sizes reasonable, you do not need significant infrastructure or stubbed interfaces to make this work.
One large benefit of running unit tests on your target platform (with your target compiler) is that you can account for any particular behaviors that will be caused by the platform (endianness, word size, compiler or library peculiarities, etc), compared to running in a PC environment.

Best (cleanest) way for writing platform specific code

Say you have a piece of code that must be different depending on the operating system your program is running on.
There's the old school way of doing it:
#ifdef WIN32
// code for Windows systems
#else
// code for other systems
#endif
But there must be cleaner solutions than this one, right?
The typical approach I've seen first hand at a half-dozen companies over my career is the use of a Hardware Abstraction Layer (HAL).
The idea is that you put the lowest level stuff into a dedicated header plus statically linked library, which includes things like:
Fixed width integers (int64_t on Linux, __int64 on Windows, etc).
Common library functions (strtok_r() vs strtok_s() on Linux vs Windows).
A common data-type setup (i.e. typedefs for all data types, such as xInt, xFloat, etc., used throughout the code, so that if the underlying type changes for a platform, or a new platform is suddenly supported, there is no need to rewrite and retest the code that depends on it, which can be extremely expensive in terms of labor).
The HAL itself is usually riddled with preprocessor directives like the ones in your example, and that's just the reality of the matter. If you wrap it in run-time if/else statements instead, your compilation will fail due to unresolved symbols. Or worse, you could have extra symbols included, which will increase the size of your output and likely slow down your program if that code is executed frequently.
So long as the HAL has been well-written, the header and library for the HAL give you a common interface and set of data types to use in the rest of your code with minimal hassle.
The most beautiful aspect of this, from a professional standpoint, is that all of your other code never has to concern itself with architecture or operating-system specifics. You'll have the same code flow on various systems, which will, by extension, allow you to test the same code in a variety of different manners and find bugs you wouldn't normally expect or test for. From a company's perspective, this saves a ton of money in terms of labor and in not losing clients who are angry about bugs in production software.
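By way of illustration only (the names and the exact split are invented here, not any specific company's HAL), such a header might look roughly like this:
// hal_types.h -- a minimal sketch of the kind of header described above
#ifndef HAL_TYPES_H
#define HAL_TYPES_H

#include <string.h>

#if defined(_WIN32)
    typedef __int64 xInt64;
    #define x_strtok(s, delim, ctx) strtok_s((s), (delim), (ctx))
#else   /* assume a POSIX-ish toolchain with C99 headers */
    #include <stdint.h>
    typedef int64_t xInt64;
    #define x_strtok(s, delim, ctx) strtok_r((s), (delim), (ctx))
#endif

#endif /* HAL_TYPES_H */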
I've had to do a lot of this sort of thing in my career, supporting code that builds and runs on an embedded device as well as on Windows, and then also having it run on different ASICs and/or revisions of ASICs.
I tend to do what you suggest, and then, when things really diverge, move on to defining the interface I want to keep fixed between platforms and having separate implementation files or even libraries. It can get really messy as the codebase gets older and more exceptions need to be added.
Sometimes you can hide this stuff in header files, so your code looks 'clean', but a lot of times that's just obfuscating what's going on behind a bunch of macro magic.
The only other thing I'd add is I tend to make the #ifdef/#else/#endif chain fail if none of the options are defined. This forces me to revisit the issue when a new revision comes along. Some folks prefer it to have a default, but I find that just hides potential failures.
Granted, I'm working in the embedded world where code space is paramount (since memory is small and fixed), and code cleanliness unfortunately has to take a back seat.
An adopted practice for non-trivial projects is to write platform-specific code in separate files (and in separate directories, where applicable), avoiding "localized" #ifdefs to the fullest possible extent.
Say you are developing a library called "Example" and example.hpp will be your library header:
example.hpp
#include "platform.hpp"
//
// here: platform-independent declarations, includes etc
//
// below: platform-specific includes
#if defined(WINDOWS)
#include "windows\win32_specific_code.hpp"
// other win32 headers
#elif defined(POSIX)
#include "posix/linux_specific_code.hpp"
// other linux headers
#endif
platform.hpp (simplified)
#if defined(WIN32) && !defined(UNIX)
#define WINDOWS
#elif defined(UNIX) && !defined(WIN32)
#define POSIX
#endif
win32_specific_code.hpp
void Function1();
win32_specific_code.cpp
#include "../platform.hpp"
#ifdef WINDOWS // We should not violate the One Definition Rule
#include "win32_specific_code.hpp"
#include <iostream>
void Function1()
{
std::cout << "You are on WINDOWS" << std::endl;
}
//...
#endif /* WINDOWS */
Of course, declare Function1() in your linux_specific_code.hpp file as well.
Then, when implementing it for Linux (in the linux_specific_code.cpp file), be sure to surround everything with the conditional compilation as well, similar to what I did above (e.g. using #ifdef POSIX). Otherwise, the compiler will generate multiple definitions and you'll get a linker error.
Now, all a user of your library must do is #include <example.hpp> in his code, and place either #define WINDOWS or #define POSIX in his compiler's preprocessor definitions. In fact, the second step may not be necessary at all, assuming his environment already defines one of the WIN32 or UNIX macros. This way, Function1() can already be used from the code in a cross-platform manner.
This approach is pretty much the one used by the Boost C++ Libraries. I personally find it clean and sensible. If, however, you don't like it, you can have a read at Chromium's conventions for multi-platform development for a somewhat different strategy.

Dos and Don'ts of Conditional Compile

When is doing conditional compilation a good idea and when is it a horribly bad idea?
By conditional compilation I mean using #ifdefs to compile certain bits of code only under certain conditions. The #defines themselves may live in a common header file or be introduced via the -D compiler flag.
The good ideas:
header guards (you can't do much better for portability)
conditional implementation (juggling with platform differences)
debug specific checks (asserts, etc...)
per suggestion: extern "C" { and } wrappers, so that the same headers may be used by the C++ implementation and by the C clients of the API (see the sketch after this list)
The bad idea:
changing the API between compile flags, since it forces the client to change its uses with the same compile flags... urk!
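For the extern "C" item above, a minimal sketch of the usual pattern (the header and function names are arbitrary):
/* api.h -- one header usable from both the C++ implementation and C clients */
#ifndef API_H
#define API_H

#ifdef __cplusplus
extern "C" {
#endif

int do_something(int value);   /* defined in C++, callable from C */

#ifdef __cplusplus
}
#endif

#endif /* API_H */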
Don't put ifdefs in your code.
It makes the code really hard to read and understand. Please make the code as easy to read as possible for the maintainer (he knows where you live and owns an axe).
Hide the conditional code in separate functions and use the ifdef to define what functions are being used.
DON'T use the #else part to supply a default definition. If you do that, you are saying that one platform is unique and all the others are the same. That is unlikely; what is more likely is that you know what happens on a couple of platforms, so you should use the #else section to put in a #error, so that when the code is ported to a new platform a developer has to explicitly provide the condition for his platform.
x.h
#if defined(WINDOWS)
#define MyPlatformSleepSeconds(x) Sleep((x) * 1000)  /* Win32 Sleep() takes milliseconds */
#elif defined (UNIX)
#define MyPlatformSleepSeconds(x) sleep(x)           /* POSIX sleep() takes seconds */
#else
#error "Please define appropriate sleep for your platform"
#endif
Don't be tempted to expand a macro into multiple lines of code. That leads to madness.
p.h
#if defined(SOLARIS_3_1_1)
#define DO_SOME_TASK(x,y) doPartA(x); \
doPartB(y); \
couple(x,y)
#elif defined(WINDOWS)
#define DO_SOME_TASK(x,y) doAndCouple(x,y)
#else
#error "Please define appropriate DO_SOME_TASK for your platform"
#endif
If you develop the code on Windows and only test on Solaris 3.1.1 later, you may find unexpected bugs when people do things like:
int loop;
for(loop = 0;loop < 10;++loop)
DO_SOME_TASK(loop, loop); // Windows: works fine.
                          // Solaris: only doPartA() is inside the loop; the other
                          // statements are executed once, after the loop finishes.
Basically, you should try to keep the amount of conditionally compiled code to a minimum, because you should be trying to test all of it, and having lots of conditions makes that more difficult. It also reduces the readability of the code; conditionally compiling whole files is clearer, e.g. by putting platform-specific code in a separate file for each platform and having them all present the same API to the rest of the program. Also try to avoid using it in function headers; that's a place where it is particularly confusing.
But that's not to say that you should never use conditional compilation. Just try to keep it short and minimal. (Where I can, I use conditional compilation to control the definitions of other macros which are then just used in the rest of the code; that seems to be clearer to me at least.)
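A small sketch of that last point (the macro and header names are just illustrative): the #ifdef appears once, in the macro definition, and the rest of the code calls TRACE() unconditionally:
/* trace.h -- the conditional compilation is confined to one macro definition */
#ifndef TRACE_H
#define TRACE_H

#include <stdio.h>

#ifdef ENABLE_TRACE
#define TRACE(msg) fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__, (msg))
#else
#define TRACE(msg) ((void)0)   /* compiles away entirely in normal builds */
#endif

#endif /* TRACE_H */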
It's a bad idea whenever you don't know what you're doing. It can be a good idea when you're effectively solving an issue this way :).
The way you describe conditional compilation, include guards are part of it. Using them is not just a good idea; it's a way to avoid compilation errors.
For me, conditional compilation is also a way to target multiple compilers and operating systems. I'm involved in a lib that's supposed to be compilable on Windows XP and newer, 32- or 64-bit, using MinGW and Visual C++, on Linux 32- and 64-bit using gcc/g++, and on MacOS using I-don't-know-what (I'm not maintaining that, but I assume it's a gcc port). Without the preprocessor conditions, it would be pretty much impossible to create a single source file that's compilable anywhere.
Another pragmatic use of conditional compilation is to "comment out" sections of code which themselves contain standard C comments (i.e. /* */). Standard C comments do not nest, for example:
/* comment out block of code
.... code ....
/* This is a standard
* comment.
*/ ... oops! The compiler tries to compile the code after this closing comment.
.... code ....
end of block of code*/
Instead you can use #ifdef to get the right effect, for example:
#ifdef _NOT_DEFINED_
.... code ....
/* This is a standard
* comment.
*/
.... code ....
#endif
In the past, if you wanted to produce truly portable code, you had to resort to some form of conditional compilation. With the proliferation of portable libraries (such as APR, Boost, etc.) this reason carries little weight IMHO. If you are using conditional compilation simply to compile out blocks of code that are not needed for particular builds, you should really revisit your design - I should imagine that this would become a nightmare to maintain.
Having said all that, if you do need to use conditional compilation, I would hide as much of it as I can away from the main body of the code and limit it to very specific cases that are very well understood.
Good/justifiable uses are based on cost/benefit analysis. Obviously, people here are very conscious of the risks:
in linking objects that saw different versions of classes, functions etc.
in making code hard to understand, test and reason about
But, there are uses which often fall into the net-benefit category:
header guards
code customisations for distinct software "ecosystems", such as Linux versus Windows, Visual C++ versus GCC, CPU-specific optimisations, and sometimes word-size and endianness factors (though with C++ you can often determine these at compile time via template hackery, but that may prove messier still) - this abstracts away lower-level differences to provide a consistent API across those environments
interacting with existing code that uses preprocessor defines to select versions of APIs, standards, behaviours, thread safety, protocols etc. (sad but true)
compilation that may use optional features when available (think of GNU configure scripts and all the tests they perform on OS interfaces etc)
requesting that extra code be generated in a translation unit, such as adding main() for a standalone-app build versus omitting it for a library build
controlling code inclusion for distinct logical build modes such as debug and release
It is always a bad idea. What it does is effectively create multiple versions of your source code, all of which need to be tested, which is a pain, to say the least. Unfortunately, like many bad things it is sometimes unavoidable. I use it in very small amounts when writing code that needs to be ported between Windows and Linux, but if I found myself doing it a lot, I would consider alternatives, such as having two separate development sub-trees.

"Uint32", "int16" and the like; are they standard c++?

I'm quite new to C++, but I've got the hang of the fundamentals. I've come across the use of "Uint32" (in various capitalizations) and similar data types when reading others' code, but I can't find any documentation mentioning them. I understand that "Uint32" is an unsigned int with 32 bits, but my compiler doesn't. I'm using Visual C++ Express, and it doesn't recognize any form of it from what I can tell.
Are there compilers that read those data types by default, or have these programmers declared them themselves as classes or #define constants?
I can see a point in using them to know exactly how long your integer will be, since the normal declaration seems to vary depending on the system. Are there any other pros or cons to using them?
Unix platforms define these types in stdint.h; this is the preferred method of ensuring type sizing when writing portable code.
Microsoft's platforms do not define this header, which is a problem when going cross-platform. If you're not using Boost Integer Library already, I recommend getting Paul Hsieh's portable stdint.h implementation of this header for use on Microsoft platforms.
Update: Visual Studio 2010 and later do define this header.
The C99 header file stdint.h defines typedefs of this nature, of the form uint32_t. As far as I know, standard C++ doesn't yet provide a cstdint version with the symbols in namespace std, but some compilers may, and you will typically be able to include the C99 header from C++ code anyway. The next version of C++ will provide the cstdint header.
You will often see code from other people who use non-standard variations on this theme, such as Uint32_t or Uint32 or uint32, etc. They typically just provide a single header that defines these types within the project. Probably the code was originally developed a long time ago, and nobody ever bothered to sed-replace the definitions when C99 compilers became common.
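Such a project-local header often amounts to little more than the following sketch (the version check for older MSVC is my assumption, based on <stdint.h> first shipping with Visual Studio 2010):
/* project_ints.h -- minimal sketch of a project-local fixed-width header */
#ifndef PROJECT_INTS_H
#define PROJECT_INTS_H

#if defined(_MSC_VER) && _MSC_VER < 1600   /* before Visual Studio 2010 */
typedef unsigned __int32 uint32_t;
typedef signed __int16 int16_t;
#else
#include <stdint.h>                        /* C99 compilers and newer MSVC */
#endif

typedef uint32_t Uint32;                   /* the project-specific spelling */

#endif /* PROJECT_INTS_H */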
Visual C++ doesn't support the fixed-width integer types, because it doesn't include support for C99. Check out the answers to my question on this subject for the various options you have for using them.
The main reason for using them is that you then don't have to worry about any possible problems arising when switching between a 64-bit and a 32-bit OS.
Also, if you are interfacing to any legacy code that you know was destined for 32-bit or even 16-bit systems, it avoids potential problems there as well.
Try UINT32 for Microsoft.
The upper case makes it clear that this is defined as a macro. If you try to compile using a different compiler that doesn't already contain the macro, you can define it yourself and your code doesn't have to change.
uint32 et al. are defined by macros. They solve a historic portability problem of there being few guarantees across platforms (back when there were more platform options than now) of how many bits you'd get when you asked for an int or a short. (One now-defunct C compiler for the Mac provided 8-bit shorts!)