So, I have a requirement to perform a particular task (say, multithreading) in a way that is totally OS dependent (e.g. a Win32 or Linux API call).
Now, I have read that using #ifdef we can actually write OS-dependent code:
#ifdef __linux__
/*some linux codes*/
#endif
Now my question is:
Is it right to write my code this way (i.e. using #ifdef) and release a single .cpp file for both Windows and Linux? Or should I break my code into two parts and release two different builds, one for Linux and one for Windows?
Edit:
It seems the question is too broad as stated, and that generates a lot of opinions.
Please differentiate between the two approaches I mentioned on the basis of performance, build size, etc. (or any other factor I may have missed).
class A {
    // some variables and methods
};

class B : public A {
    void DoSomething() {
        // contains Linux code and some Windows code
    }
};
If I don't use #ifdef, how am I going to write the DoSomething() method so that it calls the right piece of code at the right time?
Solution #1: Use an existing, debugged, documented library (e.g. Boost) to hide the platform differences. It uses lots of #ifdefs internally, but you don't have to worry about that.
Solution #2: Write your own platform-independent library (see Solution #1 for a better approach) and hide all the #ifdefs inside.
Solution #3: Do it with macros (ugh; but see ACE, although most of ACE is in a library, too).
Solution #4: Use #ifdefs throughout your code whenever a platform difference arises.
Solution #4 is suitable for very small, throw-away programs.
Solution #3 is suitable if you are programming in the 1990's.
Solution #2 is suitable only if you can't use a real library for non-technical reasons.
Conclusion: Use Solution #1.
It's possible to use #ifdef for this, but it quickly leads to unmaintainable code. A better solution is to abstract the functionality into a class, and provide two different implementations (two different source files) for that class. (Even back in the days of C, we'd define a set of functions in a header, and provide different source files for their implementation.)
I generally give the source files the same name, but put them in platform-dependent directories, e.g. thread.hh, with the sources in Posix/thread.cc and Windows/thread.cc. Alternatively, you can put the implementations in files with different names: posix_thread.cc and windows_thread.cc.
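A minimal sketch of that layout (the class shape and member names are illustrative, not from any particular codebase):

// thread.hh -- the single header shared by every platform
#ifndef THREAD_HH
#define THREAD_HH

class Thread
{
public:
    Thread(void* (*entry)(void*), void* arg);   // starts the thread
    void join();                                // waits for it to finish
private:
    void* myHandle;                             // opaque platform handle
};

#endif

// Posix/thread.cc -- compiled only into the Posix build; Windows/thread.cc
// would implement the same two members with CreateThread/WaitForSingleObject.
#include <pthread.h>
#include "thread.hh"

Thread::Thread(void* (*entry)(void*), void* arg)
{
    pthread_t* t = new pthread_t;
    pthread_create(t, 0, entry, arg);   // error handling omitted in this sketch
    myHandle = t;
}

void Thread::join()
{
    pthread_t* t = static_cast<pthread_t*>(myHandle);
    pthread_join(*t, 0);
    delete t;
}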
If you need platform dependencies in a header, the directory approach also works. Or you can use something like:
#include systemDependentHeader(thread.hh)
where systemDependentHeader is a macro which does some token pasting (with a token defined on the command line) and stringizing.
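A sketch of how such a macro can be built, assuming the platform directory is supplied on the command line as -DPLATFORM_DIR=Posix (the helper names are made up; this "computed include" form is accepted by the major compilers):

#define SDH_STRINGIZE(x) #x
#define SDH_EXPAND(x) SDH_STRINGIZE(x)
#define systemDependentHeader(file) SDH_EXPAND(PLATFORM_DIR/file)

#include systemDependentHeader(thread.hh)   // expands to "Posix/thread.hh"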
Of course, in the case of threading, C++11 offers a standard solution, which is what you should use; if you can't, boost::thread isn't too far from the standard (I think). More generally, if you can find the work already done, you should take advantage of it. (But verify the quality of the library first. In the past, we had to back out of using ACE because it was so buggy.)
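For reference, the C++11 solution really is the same source everywhere; a minimal example:

#include <iostream>
#include <thread>

int main()
{
    std::thread worker([]{ std::cout << "hello from a portable thread\n"; });
    worker.join();   // identical code on Windows, Linux, etc.
}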
If you need to develop your code for different platforms, you have to consider the following:
You can use #ifdef or #if defined(x), but you have to confine it to a single header file, ideally called "platform.h". Inside your source code you use only the macros defined in platform.h, so your business logic is the same for both platforms.
Let me provide an example:
PLATFORM.H
// A platform-dependent print function inside the platform.h file
#if defined( _EMBEDDED_OS_ )
#include <embedded_os.h>
#define print_msg(message) put_uart_bytes(message)
#elif defined( _WINDOWS_ )
#include <windows.h>
#include <cstdio>   // for printf
#define print_msg(message) printf(message)
#else
#error undefined_platform
#endif
SOURCE.CPP
int main()
{
    print_msg("Ciao Mondo!");
}
As you can see, the source is the same for each platform and your business logic is not dirtied by scattered #ifdef directives.
I am developing software that will run on multiple platforms. I provide a general header file that includes all public API functions. The actual source files will contain very different code depending on the platform it is compiled for.
I could handle all platforms in the same .cpp file, but I feel that will get messy really fast.
The next idea was to have one source file per platform, wrapped in #ifdefs, containing the platform-specific code. I feel this is a much cleaner way, because the wrong code basically doesn't even exist on the wrong platform. I am obviously not looking for the BEST way, because that's very subjective.
Is this an acceptable way of handling platform-dependent code, or am I committing a major mistake that I am missing?
Would you find code like this in medium to high quality code-bases?
Are there any major drawbacks to this method?
Window.h:
#pragma once

class Window
{
public:
    void Create();
};

Window_Win32.cpp:
#ifdef WINDOWS
#include "Window.h"

void Window::Create()
{
    // Win32 specific
}
#endif

Window_Linux.cpp:
#ifdef LINUX
#include "Window.h"

void Window::Create()
{
    // Linux specific
}
#endif
Using zillions of #ifdefs to select the proper platform is a nightmare. But it is one way, and some famous code out there is done that way.
I prefer having a different .h/.cpp pair for each platform, and also some .cpp files for code common to all platforms.
The .h header should include the common objects/functions and include (via #ifdefs) the specific platform header (which has only objects/functions for that platform).
With this approach, you need different configuration/makefile/whatever build files for each platform.
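A short sketch of such a header, with illustrative file names (the predefined macros _WIN32 and __linux__ are the usual way to detect the platform):

// window.h -- the only file that mentions platforms
#pragma once
#include "window_common.h"    // objects/functions common to all platforms
#if defined(_WIN32)
#include "window_win32.h"     // declarations that exist only on Win32
#elif defined(__linux__)
#include "window_linux.h"     // declarations that exist only on Linux
#else
#error "unsupported platform"
#endif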
Is there any way, within a C or C++ program, of getting information on all the functions that could be called? Perhaps a compiler macro of some sort? I know there are programs that can take source files or .o files and extract the symbols or the prototypes, and I suppose I could just run those programs from within a C program, but I'm curious about maybe returning function pointers to functions, or an array of function prototypes available in the current scope, or something related.
I'm not phrasing this very well, but the question is part of my curiosity of what I can learn about a program from within the program (and not necessarily by just reading its own code). I kind of doubt that there is anything like what I'm asking for, but I'm curious.
Edit: It appears that what I was wondering about but didn't know how to describe very well was whether reflection was possible in C or C++. Thank you for your answers.
The language doesn't support reflection yet. However, since you are looking for some sources of information, take a look at the Boost.Reflect library to help you add reflection to your code, to a certain extent. Also, look at ClangTooling and libclang for libraries that let you do automated code-analysis.
C and C++ have no way to gather the names of all the functions available.
However, you can use macros to test standards (ANSI, ISO, POSIX, etc) compliance, which can then be used to guarantee the presence of each standard's functions.
For example, if _POSIX_C_SOURCE is defined, you can (usually) assume that functions specified by POSIX will be available:
#ifdef _POSIX_C_SOURCE
/* you can safely call POSIX functions */
#else
/* the system probably isn't POSIX compliant */
#endif
Edit: If you're on a Linux system, you can find some common compatibility macros under feature_test_macros(7). OS X and the BSDs should have roughly the same macros, even though they may not have that manual page. Windows uses the WINVER and _WIN32_WINNT macros to control function visibility across releases.
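For example, the usual Windows pattern is to define the target version before pulling in the headers (0x0600 is the documented _WIN32_WINNT value for Windows Vista):

#define _WIN32_WINNT 0x0600   // request Windows Vista-era APIs
#include <windows.h>

#if _WIN32_WINNT >= 0x0600
/* declarations introduced in Vista or later are now visible */
#endif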
No.
C++'s metaprogramming powers are weak and don't include any form of reflection. You can, however, use tools like gcc-xml to parse a C++ program and export its contents in an easier-to-analyze format.
Writing your own parser for C++ to extract function declarations is going to be a nightmare, unless you only need to do it on your specific project and you're ready to cut some corners.
Google C++ Style Guide (http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Preprocessor_Macros) says:
"Instead of using a macro to conditionally compile code ... well, don't do that at all"
Why is it so bad to have functions like
void foo()
{
// some code
#ifdef SOME_FUNCTIONALITY
// code
#endif
// more code
}
?
As they say in the doc you linked to:
Macros mean that the code you see is not the same as the code the compiler sees. This can introduce unexpected behavior, especially since macros have global scope.
It's not too bad if you have just one conditional compilation, but it can quickly get complicated if you start having nested ones like:
#if PS3
...
#if COOL_FEATURE
...
#endif
...
#elif XBOX
...
#if COOL_FEATURE
...
#endif
...
#elif PC
...
#if COOL_FEATURE
...
#endif
...
#endif
I believe some of the arguments against it go:
#ifdef cuts across C++ expression/statement/function/class syntax. That is to say, like goto it is too flexible for you to trust yourself to use it.
Suppose the code in // code compiles when SOME_FUNCTIONALITY is not defined. Then just use if with a static const bool and trust your compiler to eliminate dead code (see the sketch after this list).
Suppose the code in // code doesn't compile when SOME_FUNCTIONALITY is not defined. Then you're creating a dog's breakfast of valid code mixed with invalid code, and relevant code with irrelevant code, that could probably be improved by separating the two cases more thoroughly.
The preprocessor was a terrible mistake: Java is way better than C or C++, but if we want to muck around near the metal we're stuck with them. Try to pretend the # character doesn't exist.
Explicit conditionals are a terrible mistake: polymorphism baby!
Google's style guide specifically mentions testing: if you use #ifdef, then you need two separate executables to test both branches of your code. This is hassle, you should prefer a single executable, that can be tested against all supported configurations. The same objection would logically apply to a static const bool, of course. In general testing is easier when you avoid static dependencies. Prefer to inject them, even if the "dependency" is just on a boolean value.
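A minimal sketch of the static const bool alternative from the second point above (the flag name is illustrative):

// the #ifdef appears exactly once, to initialise a constant...
#ifdef SOME_FUNCTIONALITY
static const bool someFunctionality = true;
#else
static const bool someFunctionality = false;
#endif

void foo()
{
    // some code
    if (someFunctionality)   // ...and the compiler eliminates the dead branch
    {
        // code
    }
    // more code
}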
I'm not wholly sold on any argument individually -- personally I think messy code is still occasionally the best for a particular job under particular circumstances. But the Google C++ style guide is not in the business of telling you to use your best judgement. It's in the business of setting a uniform coding style, and eliminating some language features that the authors don't like or don't trust.
When is doing conditional compilation a good idea and when is it a horribly bad idea?
By conditional compilation I mean using #ifdefs to compile certain bits of code only in certain conditions. The #defines themselves may be either in a common header file or introduced via the -D compiler directive.
The good ideas:
header guards (you can't do much better for portability)
conditional implementation (juggling with platform differences)
debug specific checks (asserts, etc...)
per suggestion: extern "C" { and } so that the same headers may be used by the C++ implementation and by the C clients of the API
The bad idea:
changing the API between compile flags, since it forces the client to change its usage with the same compile flags... urk!
Don't put #ifdefs in your code.
They make the code really hard to read and understand. Please make the code as easy to read as possible for the maintainer (he knows where you live and owns an axe).
Hide the conditional code in separate functions, and use the #ifdef to define which functions are being used.
Don't use the #else part to provide a default definition. If you do that, you are saying that one platform is unique and all the others are the same. That is unlikely; what is more likely is that you know what happens on a couple of platforms, so you should use the #else section to stick in a #error, so that when the code is ported to a new platform a developer has to explicitly fix the condition for his platform.
x.h
#if defined(WINDOWS)
#define MyPlatformSleepSeconds(x) Sleep((x) * 1000)   /* Win32 Sleep() takes milliseconds */
#elif defined(UNIX)
#define MyPlatformSleepSeconds(x) sleep(x)            /* POSIX sleep() takes seconds */
#else
#error "Please define appropriate sleep for your platform"
#endif
Don't be tempted to expand a macro into multiple lines of code. That leads to madness.
p.h
#if defined(SOLARIS_3_1_1)
#define DO_SOME_TASK(x,y) doPartA(x); \
doPartB(y); \
couple(x,y)
#elif defined(WINDOWS)
#define DO_SOME_TASK(x,y) doAndCouple(x,y)
#else
#error "Please define appropriate DO_SOME_TASK for your platform"
#endif
If you develop the code on Windows and then test on Solaris 3.1.1 later, you may find unexpected bugs when people do things like:
int loop;
for (loop = 0; loop < 10; ++loop)
    DO_SOME_TASK(loop, loop);   // Windows: works fine.
                                // Solaris: only doPartA() is inside the loop;
                                // the other statements run once, after the loop finishes.
Basically, you should try to keep the amount of code that is conditionally compiled to a minimum, because you should be trying to test all of it, and having lots of conditions makes that more difficult. It also reduces the readability of the code; conditionally compiling whole files is clearer, e.g. by putting platform-specific code in a separate file for each platform and having it all present the same API from the perspective of the rest of the program. Also try to avoid using it in function headers; that is a place where it is particularly confusing.
But that's not to say that you should never use conditional compilation. Just try to keep it short and minimal. (Where I can, I use conditional compilation to control the definitions of other macros which are then just used in the rest of the code; that seems to be clearer to me at least.)
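A small sketch of that last pattern; the conditional compilation is confined to one macro definition, and the attribute spellings shown are the standard MSVC and GCC/Clang ones:

#if defined(_MSC_VER)
#define FORCE_INLINE __forceinline
#else
#define FORCE_INLINE inline __attribute__((always_inline))
#endif

// the rest of the code only ever uses the macro
FORCE_INLINE int twice(int x) { return 2 * x; }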
It's a bad idea whenever you don't know what you're doing. It can be a good idea when you're effectively solving an issue this way :).
The way you describe conditional compiling, include guards are part of it. It's not only a good idea to use them; it's a way to avoid compilation errors.
For me, conditional compiling is also a way to target multiple compilers and operating systems. I'm involved in a lib that's supposed to be compilable on Windows XP and newer, 32 or 64 bit, using MinGW and Visual C++, on Linux 32 and 64 bit using gcc/g++, and on MacOS using I-don't-know-what (I'm not maintaining that, but I assume it's a gcc port). Without the preprocessor conditions, it would be pretty much impossible to create a single source file that's compilable anywhere.
Another pragmatic use of conditional compiles is to "comment out" sections of code which contain standard "C" comments (i.e. /* */). Some compilers do not allow nesting of these comments, for example:
/* comment out block of code
.... code ....
/* This is a standard
* comment.
*/ ... oops! Some compilers try to compile code after this closing comment.
.... code ....
end of block of code*/
(As you can see in the syntax highlighting, StackOverflow does not nest comments.)
Instead, you can use #ifdef to get the right effect, for example:
#ifdef _NOT_DEFINED_
.... code ....
/* This is a standard
* comment.
*/
.... code ....
#endif
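A related spelling of the same trick is #if 0, which has the advantage that no build flag can ever accidentally define it:

#if 0
.... code ....
/* This is a standard
 * comment.
 */
.... code ....
#endif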
In the past, if you wanted to produce truly portable code, you had to resort to some form of conditional compilation. With the proliferation of portable libraries (such as APR, Boost etc.) this reason carries little weight IMHO. If you are using conditional compilation simply to compile out blocks of code that are not needed for particular builds, you should really revisit your design; I should imagine it would become a nightmare to maintain.
Having said all that, if you do need to use conditional compilation, I would hide as much as possible away from the main body of the code and limit it to very specific cases that are very well understood.
Good/justifiable uses are based on cost/benefit analysis. Obviously, people here are very conscious of the risks:
in linking objects that saw different versions of classes, functions etc.
in making code hard to understand, test and reason about
But, there are uses which often fall into the net-benefit category:
header guards
code customisations for distinct software "ecosystems", such as Linux versus Windows, Visual C++ versus GCC, CPU-specific optimisations, sometimes word size and endianness factors (though with C++ you can often determine these at compile via template hackery, but that may prove messier still) - abstracts away lower-level differences to provide a consistent API across those environments
interacting with existing code that uses preprocessor defines to select versions of APIs, standards, behaviours, thread safety, protocols etc. (sad but true)
compilation that may use optional features when available (think of GNU configure scripts and all the tests they perform on OS interfaces etc)
request that extra code be generated in a translation unit, such as adding main() for a standalone app versus without for a library
controlling code inclusion for distinct logical build modes such as debug and release
It is always a bad idea. What it does is effectively create multiple versions of your source code, all of which need to be tested, which is a pain, to say the least. Unfortunately, like many bad things it is sometimes unavoidable. I use it in very small amounts when writing code that needs to be ported between Windows and Linux, but if I found myself doing it a lot, I would consider alternatives, such as having two separate development sub-trees.
I am engaged in developing a C++ mobile phone application on the Symbian platform. One of the requirements is that it has to work on all Symbian phones, right from 2nd edition phones to 5th edition phones. Across editions there are differences in the Symbian SDKs, so I have to use preprocessor directives to conditionally compile code relevant to the SDK for which the application is being built, like below:
#ifdef S60_2nd_ED
Code
#elif defined(S60_3rd_ED)
Code
#else
Code
#endif
Now, since the application I am developing is not trivial, it will soon grow to tens of thousands of lines of code, and preprocessor directives like the above will be spread all over. I want to know whether there is any alternative, or perhaps a better way to use these preprocessor directives in this case.
Please help.
Well... that depends on the exact nature of the differences. If it's possible to abstract them out and isolate them into particular classes, then you can go that route. This would mean having version-specific implementations of some classes, and switching entire implementations rather than just a few lines here and there.
You'd have
MyClass.h
MyClass_S60_2nd.cpp
MyClass_S60_3rd.cpp
and so on. You can select which CPP file to compile either by wrapping each file's entire contents in #ifdefs as above, or by controlling at the build level (through Makefiles or whatever) which files are included when you're building for various targets.
Depending on the nature of the changes, this might be far cleaner.
I've been exactly where you are.
One trick is, even if you're going to have conditions in code, don't switch on Symbian versions. It makes it difficult to add support for new versions in future, or to customise for handsets which are unusual in some way. Instead, identify what the actual properties are that you're relying on, write the code around those, and then include a header file which does:
#if S60_3rd_ED
#define CAF_AGENT 1
#define HTTP_FILE_UPLOAD 1
#elif S60_2nd_ED
#define CAF_AGENT 0
#if S60_2nd_ED_FP2
#define HTTP_FILE_UPLOAD 1
#else
#define HTTP_FILE_UPLOAD 0
#endif
#endif
and so on. Obviously you can group the defines by feature rather than by version if you prefer, have completely different headers per configuration, or whatever scheme suits you.
We had defines for the UI classes you inherit from, too, so that there was some UI code in common between S60 and UIQ. In fact because of what the product was, we didn't have much UI-related code, so a decent proportion of it was common.
As others say, though, it's even better to herd the variable behaviour into classes and functions where possible, and link different versions.
[Edit in response to comment:
We tried quite hard to avoid doing anything dependent on resolution - fortunately the particular app didn't make this too difficult, so our limited UI was pretty generic. The main thing where we switched on screen resolution was for splash/background images and the like. We had a script to preprocess the build files, which substituted the width and height into a file name, splash_240x320.bmp or whatever. We actually hand-generated the images, since there weren't all that many different sizes and the images didn't change often. The same script generated a .h file containing #defines of most of the values used in the build file generation.
This is for per-device builds: we also had more generic SIS files which just resized images on the fly, but we often had requirements on installed size (ROM was sometimes quite limited, which matters if your app is part of the base device image), and resizing images was one way to keep it down a bit. To support screen rotation on N92, Z8, etc, we still needed portrait and landscape versions of some images, since flipping aspect ratio doesn't give as good results as resizing to the same or similar ratio...]
In our company we write a lot of cross-platform code (game development for Win32/PS3/Xbox/etc.).
To avoid platform-related macros as much as possible, we generally use the following tricks:
extract platform-related code into platform-abstraction libraries that have the same interface across different platforms, but not the same implementation;
split code into different .cpp files for different platforms (e.g. "pipe.h", "pipe_common.cpp", "pipe_linux.cpp", "pipe_win32.cpp", ...);
use macros and helper functions to unify platform-specific function calls (e.g. "#define usleep(X) Sleep((X)/1000u)"; see the sketch after this list);
use cross-platform third-party libraries.
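As a sketch of the third trick, here is a hypothetical sleep wrapper with the platform difference confined to one header (Win32 Sleep() takes milliseconds, POSIX usleep() takes microseconds):

// sleep_ms.h
#pragma once
#if defined(_WIN32)
#include <windows.h>
inline void sleep_ms(unsigned ms) { Sleep(ms); }
#else
#include <unistd.h>
inline void sleep_ms(unsigned ms) { usleep(ms * 1000u); }
#endif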
You can try to define a common interface for all the platforms, if possible. Then, implement the interface for each platform.
Select the right implementation using preprocessor directives.
This way, you will have the platform selection directive in fewer places in your code (ideally, in one place, explicitly in the header file declaring the interface).
This means something like:
commoninterface.h /* declaring the common interface API. Platform identification preprocessor directives might be needed for things like common type definitions */
platform1.c /*specific implementation*/
platform2.c /*specific implementation*/
Look at SQLite. They have the same problem. They move the platform-dependent stuff to separate files and effectively compile only the needed stuff by having preprocessor directives that exclude an entire file's contents. It's a widely used approach.
I have no idea about an alternative, but what you can do is use different include files for different versions of the OS. For example:
#ifdef S60_2nd_ED
#include "graphics2"
#elif defined(S60_3rd_ED)
#include "graphics3"
#else
#include "graphics"
#endif
You could do something like they do for the architecture-specific definitions in the Linux kernel. Each architecture has its own directory (asm-x86, for instance). All these directories contain the same high-level header files presenting the same interface. When the kernel is configured, a link named asm is created targeting the appropriate asm-arch directory. This way, all the C files include files through the common <asm/...> path.
There are several differences between S60 2nd ed and 3rd ed applications that are not limited to code: application resource files differ, graphic formats and tools to pack them are different, mmp-files differ in many ways.
Based on my experience, don't try to automate it too much; have separate build scripts for 2nd ed and 3rd ed. At the code level, separate the differences into their own classes that have a common abstract API, and use flags only in rare cases.
You should try and avoid spreading #ifs through the code.
Rather, use the #if in the header files to define alternative macros, and then in the code use the single macro.
This method allows you to keep the code slightly more readable.
Example:
Plop.h
======
#if V1
#define MAKE_CALL(X,Y) makeCallV1(X,Y)
#elif V2
#define MAKE_CALL(X,Y) makeCallV2("Plop",X,222,Y)
....
#endif
Plop.cpp
========
if (pushPlop)
{
MAKE_CALL(911,"Help");
}
To help facilitate this, split version-specific code into its own functions, then use macros to activate those functions as shown above. Also, you can wrap the changing parts of the SDK in your own class to try to provide a consistent interface; then all your differences are managed within the wrapper class, leaving the code that does the work tidier.