I am developing software that will run on multiple platforms. I provide a general header file that declares all public API functions. The actual source files will contain very different code depending on the platform they are compiled for.
I could handle all platforms in the same .cpp file, but I feel like that would get messy very fast.
My next idea was to have one source file per platform, its contents wrapped in #ifdefs containing the platform-specific code. This feels like a much cleaner way, because the wrong code effectively doesn't even exist on the wrong platform. I am obviously not looking for the BEST way, because that's very subjective.
Is this an acceptable way of handling platform-dependent code, or am I committing a major mistake that I am missing?
Would you find code like this in medium- to high-quality codebases?
Are there any major drawbacks to this method?
Window.h:
#pragma once

class Window
{
public:
    void Create();
};
Window_Win32.cpp:
#ifdef WINDOWS
#include "Window.h"

void Window::Create()
{
    // Win32 specific
}
#endif
Window_Linux.cpp:
#ifdef LINUX
#include "Window.h"

void Window::Create()
{
    // Linux specific
}
#endif
Using zillions of #ifdefs to select the proper platform is a nightmare. But it is one way, and some famous code out there is written that way.
I prefer having a different .h/.cpp pair for each platform, and also some .cpp files for code common to all platforms.
The common header should declare the objects/functions shared by all platforms and include (via #ifdefs) the platform-specific header (which declares only the objects/functions for that platform); a sketch follows below.
With this approach, you need different configuration/makefile/whatever build files for each platform.
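A minimal sketch of that layout, with illustrative file names (only the compiler-provided macros _WIN32 and __linux__ are standard; the file names and function are assumptions):
// window.h -- the only header clients include
#pragma once

void InitWindowSystem();        // common to every platform

#if defined(_WIN32)
    #include "window_win32.h"   // Win32-only declarations
#elif defined(__linux__)
    #include "window_linux.h"   // Linux-only declarations
#else
    #error "Unsupported platform"
#endif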
I'm developing some multimedia software on embedded Linux, using an open-source library I happened upon. The library comes from the hardware vendor and is pre-installed on the device. I can confirm that the GNU install process (autogen/make) builds, links, and installs this library; however, the header files needed to build my own program all spell common datatypes with capital letters, in C. So I am seeing datatypes called Int and Char. The blocks with these strange capitalized datatypes are within an extern "C" block, and C++ doesn't accept Int as a built-in type anyway!
So I am having issues compiling when I'm using these libraries. The autogen-generated makefile seems to take its flags from the environment, and I do not want to recompile the libraries every time I need to compile my program.
Is there any way I can compile my own code (which is just written in C) without having to modify these libraries, which were made specifically for this hardware?
CLARITY EDIT: My task is to compile a small C program, which relies upon header files with erroneous datatypes that came preinstalled. I do not want to edit or recompile these hardware-specific header files.
You may not be doing the import/include part, so the compiler never learns about the new types. You shouldn't have to modify the library, but you might have to change how you include it. The extern "C" isn't enough, as it only controls linkage and name mangling; it doesn't define any types. C++ can accept Int or Char as a datatype if it is properly told about the declaration. They aren't really going to be true built-in types; more likely they are typedefs (possibly of structs), or they are handled through preprocessor macros. I would be willing to bet they've used #defines or typedefs so that Int resolves to an actual type.
Do you have any example code that came with the library? Can you post any of the headers from the library?
[Edit]
On line 100 of Engine.h is the following:
typedef Int Engine_Error; which says to me you must not be including the appropriate headers. You also didn't include the error you get from compiling: does it say unknown identifier for Int, or does it not find Engine_Error? If the former, you aren't including Engine.h (or whatever header it pulls in that defines Int); if the latter, you aren't including whichever header contains the definition of Engine_Error.
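For illustration, a definition chain like the following somewhere in the vendor headers would make the capitalized names compile; everything except the quoted typedef Int Engine_Error line is an assumption:
/* hypothetical vendor types header, e.g. vendor_types.h */
typedef int  Int;
typedef char Char;

/* Engine.h, line 100, as quoted above */
typedef Int Engine_Error;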
I was wondering whether there are any official recommendations regarding the use of #define in the C++ language; precisely, is it best to put a #define in your header or your source file?
I am asking in order to know whether there are any official standards to live by, or whether it is just plain subjective... I don't need the whole set of standards; the source, or a link to the guidelines, will suffice.
LATER EDIT:
Why have const and constexpr become the status quo? I am referring to #define used as a means of avoiding repetitive typing; it is clear in my mind that programmers should use the full potential of the C++ compiler. On the other hand, if #define is so feared, why not remove it altogether? As far as I understand, it would then be used solely for conditional compilation, especially for making the same code work on different compilers.
A secondary, tiny question: is the potential for errors also the main reason why Java doesn't have true C-style #define?
A short list of #define usage guidelines for C++; points 2, 4, 6 and 7 actually address the question:
1. Avoid them.
2. Use them for the common "include guard" pattern in header files.
3. Otherwise, don't use them, unless you can explain why you are using #define and not const, constexpr, or an inline or template function, etc., instead.
4. Use them to allow giving compile-time options from the compiler command line, but only when having the option as a run-time option is not feasible or desirable.
5. Use them when whatever library you are using requires them (example: disabling the assert() function).
6. In general, put everything in the narrowest possible scope. For some uses of #define macros, this means #define just before a function in the .cpp file, then #undef right after it (see the sketch after this list).
7. The exact use case for #define determines whether it should be in the .h or the .cpp file. But note that most use cases are actually in violation of point 3 above, and you should actually not use #define.
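A sketch of point 6 in practice; the macro and function names are made up:
// narrow-scope macro: defined immediately before the one function
// that uses it, removed immediately after
#define SQUARE(x) ((x) * (x))

static int sumOfSquares(int a, int b)
{
    return SQUARE(a) + SQUARE(b);
}

#undef SQUARE  // the macro no longer leaks into the rest of the file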
Is there any way, within a C or C++ program, of getting information on all the functions that could be called? Perhaps a compiler macro of some sort? I know there are programs that can take source files or .o files and extract the symbols or prototypes, and I suppose I could just run those programs from within a C program, but I'm curious about maybe returning function pointers to functions, or an array of the function prototypes available in the current scope, or something related.
I'm not phrasing this very well, but the question is part of my curiosity about what I can learn about a program from within the program itself (and not just by reading its own code). I rather doubt there is anything like what I'm asking for, but I'm curious.
Edit: It appears that what I was wondering about but didn't know how to describe very well was whether reflection was possible in C or C++. Thank you for your answers.
The language doesn't support reflection yet. However, since you are looking for sources of information, take a look at the Boost.Reflect library to help you add reflection to your code, to a certain extent. Also look at ClangTooling and libclang, libraries that let you do automated code analysis.
C and C++ have no way to gather the names of all the functions available.
However, you can use macros to test standards (ANSI, ISO, POSIX, etc) compliance, which can then be used to guarantee the presence of each standard's functions.
For example, if _POSIX_C_SOURCE is defined, you can (usually) assume that functions specified by POSIX will be available:
#ifdef _POSIX_C_SOURCE
/* you can safely call POSIX functions */
#else
/* the system probably isn't POSIX compliant */
#endif
Edit: If you're on a Linux system, you can find some common compatibility macros under feature_test_macros(7). OS X and the BSDs should have roughly the same macros, even though they may not have that manual page. Windows uses the WINVER and _WIN32_WINNT macros to control function visibility across releases.
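For completeness, the Windows counterpart to the POSIX example might look like this; WINVER and _WIN32_WINNT are real SDK macros, and 0x0601 selects the Windows 7 API surface:
/* must be defined before including <windows.h> */
#define WINVER       0x0601
#define _WIN32_WINNT 0x0601
#include <windows.h>
/* API declarations introduced after Windows 7 are now hidden */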
No.
C++'s meta-programming power is weak and doesn't include any form of reflection. You can, however, use tools like gcc-xml to parse a C++ program and export its contents in an easier-to-analyze format.
Writing your own parser for C++ to extract function declarations is going to be a nightmare, unless you only need to do it on your specific project and you're ready to cut some corners.
So, I have a requirement to do a particular task (say, multithreading) that is totally OS-dependent (involving Win32/Linux API calls).
Now, I read somewhere that using #ifdef we can actually write OS-dependent code:
#ifdef __linux__
/*some linux codes*/
#endif
Now my question is....
Is it the right way to write my code (i.e. using #ifdef) and then release a single .cpp file for both Windows and Linux? Or should I break my code into two parts and release two different builds, one for Linux and one for Windows?
Edit:
It seems the question is way too broad as stated, and generates a lot of opinions. So, more concretely:
Differentiate between the two approaches that I mentioned on the basis of performance, build size, etc. (or any other factor that I may have missed).
class A {
    // ... some variables and methods
};

class B : public A {
    void DoSomething() {
        // contains Linux code and some Windows code
    }
};
Suppose I don't use #ifdef: how am I going to write the DoSomething() method so that it calls the right piece of code at the right time?
Solution #1: Use an existing, debugged, documented library (e.g. Boost) to hide the platform differences. It uses lots of #ifdefs internally, but you don't have to worry about that.
Solution #2: Write your own platform-independent library (see Solution #1 for a better approach) and hide all the #ifdefs inside it.
Solution #3: Do it with macros (ugh; but see ACE, although most of ACE is in a library, too).
Solution #4: Use #ifdefs throughout your code wherever a platform difference arises.
Solution #4 is suitable for very small, throw-away programs.
Solution #3 is suitable if you are programming in the 1990s.
Solution #2 is suitable only if you can't use a real library for non-technical reasons.
Conclusion: Use Solution #1.
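To make Solution #1 concrete, here is a minimal sketch using boost::thread (link against boost_thread); the same source builds unchanged on Windows and Linux because Boost keeps the #ifdefs inside its own headers:
#include <boost/thread.hpp>
#include <iostream>

void worker()
{
    std::cout << "running on whichever OS built this\n";
}

int main()
{
    boost::thread t(worker);  // identical code on every platform
    t.join();
    return 0;
}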
It's possible to use #ifdef for this, but it quickly leads to unmaintainable code. A better solution is to abstract the functionality into a class, and provide two different implementations (two different source files) for that class. (Even back in the days of C, we'd define a set of functions in a header, and provide different source files for their implementation.)
I generally give the source files the same name, but put them in platform-dependent directories, e.g. thread.hh, with the sources in Posix/thread.cc and Windows/thread.cc. Alternatively, you can put the implementations in files with different names: posix_thread.cc and windows_thread.cc.
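A minimal sketch of that pattern, following the thread example above (the opaque handle member is an assumption; a pimpl works just as well):
// thread.hh -- the single, portable interface
class Thread
{
public:
    Thread();
    ~Thread();
    void start();
    void join();
private:
    void* impl_;  // opaque platform handle; each thread.cc casts it
};

// Posix/thread.cc implements the members with pthread_create()/
// pthread_join(); Windows/thread.cc with CreateThread()/
// WaitForSingleObject(). The build picks exactly one of the two files.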
If you need dependencies in a header, the directory approach also works. Or you can use something like:
#include systemDependentHeader(thread.hh)
where systemDependentHeader is a macro which does some token pasting (with a token defined on the command line) and stringizing.
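One way to spell that macro, assuming the platform directory is passed on the command line (e.g. -DSYSTEM_DIR=Posix); the helper names are made up:
#define SDH_STRINGIZE_(x) #x
#define SDH_STRINGIZE(x)  SDH_STRINGIZE_(x)   // extra level forces macro expansion
#define systemDependentHeader(file) SDH_STRINGIZE(SYSTEM_DIR/file)

#include systemDependentHeader(thread.hh)     // expands to "Posix/thread.hh"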
Of course, in the case of threading, C++11 offers a standard solution, which is what you should use; if you can't, boost::thread isn't too far from the standard (I think). More generally, if you can find the work already done, you should take advantage of it. (But verify the quality of the library first. In the past, we had to back out of using ACE because it was so buggy.)
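For reference, the standard C++11 solution mentioned above needs no #ifdefs at all:
#include <thread>
#include <iostream>

int main()
{
    std::thread t([]{ std::cout << "hello from a worker thread\n"; });
    t.join();
    return 0;
}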
If you need to develop your code for different platforms, consider the following:
You can use #ifdef or #if defined(x), but confine it to a single header file, ideally named something like "platform.h". Inside your source code, use only the macros defined in the platform.h file, so your business logic is the same for both platforms.
Let me provide an example:
PLATFORM.H
// A platform-dependent print function, inside the platform.h file
#if defined( _EMBEDDED_OS_ )
    #include <embedded_os.h>
    #define print_msg(message) put_uart_bytes(message)
#elif defined( _WINDOWS_ )
    #include <windows.h>
    #include <stdio.h>  /* printf needs this */
    #define print_msg(message) printf(message)
#else
    #error undefined_platform
#endif
SOURCE.CPP
int main()
{
    print_msg("Ciao Mondo!");
    return 0;
}
As you can see, the source is the same for each platform, and your business logic is not cluttered with #ifdef directives.
Google C++ Style Guide (http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Preprocessor_Macros) says:
"Instead of using a macro to conditionally compile code ... well, don't do that at all"
Why is it so bad to have functions like
void foo()
{
    // some code
#ifdef SOME_FUNCTIONALITY
    // code
#endif
    // more code
}
?
As they say in the doc you linked to:
Macros mean that the code you see is not the same as the code the compiler sees. This can introduce unexpected behavior, especially since macros have global scope.
It's not too bad if you have just one conditional compilation flag, but it can quickly get complicated when you start nesting them:
#if PS3
    ...
    #if COOL_FEATURE
        ...
    #endif
    ...
#elif XBOX
    ...
    #if COOL_FEATURE
        ...
    #endif
    ...
#elif PC
    ...
    #if COOL_FEATURE
        ...
    #endif
    ...
#endif
I believe some of the arguments against it go:
#ifdef cuts across C++ expression/statement/function/class syntax. That is to say, like goto, it is too flexible for you to trust yourself to use it.
Suppose the code in // code compiles when SOME_FUNCTIONALITY is not defined. Then just use if with a static const bool and trust your compiler to eliminate dead code (a sketch follows after this list).
Suppose the code in // code doesn't compile when SOME_FUNCTIONALITY is not defined. Then you're creating a dog's breakfast of valid code mixed with invalid code, and relevant code with irrelevant code, that could probably be improved by separating the two cases more thoroughly.
The preprocessor was a terrible mistake: Java is way better than C or C++, but if we want to muck around near the metal we're stuck with them. Try to pretend the # character doesn't exist.
Explicit conditionals are a terrible mistake: polymorphism baby!
Google's style guide specifically mentions testing: if you use #ifdef, then you need two separate executables to test both branches of your code. This is hassle, you should prefer a single executable, that can be tested against all supported configurations. The same objection would logically apply to a static const bool, of course. In general testing is easier when you avoid static dependencies. Prefer to inject them, even if the "dependency" is just on a boolean value.
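A sketch of the static const bool alternative from the second point above, reusing the question's flag; the #ifdef is reduced to a single definition:
#ifdef SOME_FUNCTIONALITY
static const bool kSomeFunctionality = true;
#else
static const bool kSomeFunctionality = false;
#endif

void foo()
{
    // some code
    if (kSomeFunctionality) {
        // code -- parsed and type-checked in every build,
        //         eliminated as dead code when the flag is false
    }
    // more code
}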
I'm not wholly sold on any argument individually -- personally I think messy code is still occasionally the best for a particular job under particular circumstances. But the Google C++ style guide is not in the business of telling you to use your best judgement. It's in the business of setting a uniform coding style, and eliminating some language features that the authors don't like or don't trust.