Is there some method to handle macros in GLSL for Vulkan?

Many GLSL programs use preprocessor macros such as
#ifdef
#else
#endif
to handle different configurations. Is there an elegant method to deal with this when Vulkan builds the pipeline layout and descriptor sets?

Using normal if()/else() blocks together with Vulkan specialization constants should solve this for most cases. I'd expect any sensible compiler to optimize out an entire if() block if the specialization constant is zero at pipeline compile time.
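A minimal sketch of what that looks like on the shader side (the constant name and id are placeholders; the application would override the default per pipeline through VkSpecializationInfo at pipeline creation):

#version 450
// Specialization constant with a default value; the app can override
// it via VkSpecializationInfo when creating the pipeline.
layout(constant_id = 0) const bool USE_ALPHA_TEST = false;

layout(location = 0) in vec4 color;
layout(location = 0) out vec4 fragColor;

void main()
{
    if (USE_ALPHA_TEST) {            // value is known at pipeline compile time,
        if (color.a < 0.5) discard;  // so the dead branch can be eliminated
    }
    fragColor = color;
}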

Related

How to switch from mpif.h to mpi_f08 in Fortran while maintaining compatibility?

I am working on a numerical solver written in Fortran which uses MPI for parallelization on large clusters (up to about 500 processes). Currently we include MPI via
#include "mpif.h"
which, from my understanding, is deprecated and strongly discouraged. In an effort to modernize and clean up our MPI communications, we would like to switch to the more modern mpi_f08 module. The issue we are facing is that we need to retain the possibility of compiling a version based on the old MPI header, in order not to break the coupling with another solver. I'd much appreciate some advice on how to maintain this compatibility elegantly.
Question #1: What would be an elegant way to either include the header or use the module depending on a preprocessor flag without having #ifdef statements scattered throughout the code?
My thought so far would be to define a module
module mpi_module
#ifdef MPI_LEGACY
#include "mpif.h"
#else
use mpi_f08
#endif
end module
and use this module everywhere where the mpi header file is currently included. Is this a viable approach or would this have any unwanted effects which I'm currently overlooking?
Question #2: What would be an elegant way to switch between integers and the new derived types from mpi_f08 depending on the preprocessor flag? (Again, without scattering #ifdef statements throughout the code)
My initial thought on this would be to use something like
#ifdef MPI_LEGACY
#define _mpiOp_ integer
#else
#define _mpiOp_ type(MPI_Op)
#endif
so that I can simply replace
integer :: OP
by
_mpiOp_ :: OP
to obtain compatibility with both ways of including MPI. I'm not quite happy with this solution yet either, since, to my understanding, you cannot put these kinds of preprocessor definitions into a module. Thus, you'd end up with a module plus a header file which you always have to remember to include together. Again, I'd be grateful for any potential flaws of this approach and any alternatives that you can point out.
Sorry for the long post, but I wanted to make my thoughts as clear as possible. I'm looking forward to your input!
The old and the new way are simply too different. Not only do you have a use statement instead of an include statement and a derived type instead of an integer for an Op; many routines also have different signatures and use different types.
So I am afraid the answer is that there is no elegant way. You are trying to combine two things that are too different to be combined elegantly.
As has been mentioned in the comments, the first step towards more modern code is to do use mpi instead of include "mpif.h". This already enables the compiler to catch many kinds of bugs when routines are called incorrectly. The extent to which these checks are possible depends on the details of the MPI library configuration, namely on the extent of generic interfaces generated instead of just external statements.
If you have to combine your code with another code that uses the old way, it makes good sense to first do use mpi, see how it goes, and think whether it makes sense to go further.
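A minimal sketch of that first step (the subroutine and variable names here are placeholders):

subroutine broadcast_field(buf, n)
  use mpi                        ! was: include "mpif.h"
  implicit none
  integer, intent(in) :: n
  real, intent(inout) :: buf(n)
  integer :: ierr
  ! With "use mpi", mismatched argument types or counts in calls like
  ! this one can be diagnosed at compile time, to the extent that the
  ! MPI library provides generic interfaces.
  call MPI_Bcast(buf, n, MPI_REAL, 0, MPI_COMM_WORLD, ierr)
end subroutine broadcast_field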

How can I make my compiler flag an error when trying to compile deprecated OpenGL functions?

I learnt to use legacy OpenGL (v1/2) around a year back, but now I am trying to make something a bit more up to date (i.e. OpenGL 3.3 or later).
I want to reuse a lot of my old code; however, I could really do with the compiler flagging an error when it tries to compile something legacy (e.g. glBegin() ... glEnd()).
I compiled on a Mac a while back and it flagged up exactly this kind of error, but now I'm using a Raspberry Pi running Raspbian.
Thanks for your help in advance!
Depending on your use-case, you might be able to use the OpenGL ES header instead of the standard OpenGL header. The OpenGL ES header doesn't contain the deprecated functions.
Another possibility would be to use a loader like gl3w which will also make your code more portable.
I'd recommend using the OpenGL loader generator glad to generate a loader for the core profile of the OpenGL version you want to target. The resulting headers will not contain any of the deprecated compatibility-profile functions and GLenum definitions.
However, be aware that this will not catch all deprecated GL usage at compile time. For example, a core profile mandates that a VAO != 0 is bound when rendering, that vertex arrays come from VBOs and not client-side memory, and that a shader program != 0 is used. Such issues can't really be detected at compile time. I recommend using the OpenGL Debug Output functionality to catch those remaining issues at runtime. Most GL implementations produce very useful error or warning messages that way.
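As a hedged sketch of enabling Debug Output (this assumes an OpenGL 4.3+ or KHR_debug-capable context created with the debug flag, and a glad-generated loader that is already initialized):

#include <stdio.h>
#include <glad/glad.h>  // assumed glad-generated core-profile loader

// The driver calls this for errors, deprecated usage, performance notes, etc.
static void APIENTRY gl_debug_callback(GLenum source, GLenum type, GLuint id,
                                       GLenum severity, GLsizei length,
                                       const GLchar *message, const void *user)
{
    (void)source; (void)id; (void)severity; (void)length; (void)user;
    fprintf(stderr, "GL debug (type 0x%x): %s\n", type, message);
}

void enable_gl_debug_output(void)
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report on the offending call's thread
    glDebugMessageCallback(gl_debug_callback, NULL);
}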

Control decorations when compiling GLSL to SPIR-V with glslang

The SPIR-V specification allows a module to request that a branch be flattened or a loop unrolled, using control decorations on the appropriate instructions. This can have a significant impact on the final performance profile of the shader. However, standard GLSL, unlike HLSL, has no way to express this. The intent is that the driver makes those decisions for you, though arguably only the developer has enough information to do so.
Is there a way to specify how a control operation should be compiled from GLSL when using glslang, or is this left up to the driver to make these decisions? Do we still have to manually unroll loops to be sure they won't branch?
Is there a way to specify how a control operation should be compiled from GLSL when using glslang
There is no explicit means in GLSL to request such things. There may be glslangValidator switches that can control it, but even then, that would be a global setting, not a per-loop setting.
Do we still have to manually unroll loops to be sure they won't branch?
That's the only way to "be sure they won't branch". Even with SPIR-V's unroll decoration, that is a request, not a guarantee. If the internal SPIR-V compiler doesn't want to unroll that loop, then it won't, regardless of what you tell it.
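For illustration, a manual unroll of a small fixed-trip-count loop might look like this (tex, uv, weights and offsets are placeholders):

// Loop version - the compiler may or may not unroll it:
// vec3 sum = vec3(0.0);
// for (int i = 0; i < 4; ++i)
//     sum += weights[i] * texture(tex, uv + offsets[i]).rgb;

// Unrolled by hand - guaranteed branch-free:
vec3 sum = weights[0] * texture(tex, uv + offsets[0]).rgb
         + weights[1] * texture(tex, uv + offsets[1]).rgb
         + weights[2] * texture(tex, uv + offsets[2]).rgb
         + weights[3] * texture(tex, uv + offsets[3]).rgb;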

Is there a way to run C code designed for an embedded microcontroller on a normal computer?

I have C code written for an ATmega16 chip, and it is full of keywords like:
flash, eeprom, bit
and macros(?) like
interrupt [TIM1_OVF] void timer1_ovf_isr(void)
that come before function signatures.
Now what I want to do is write and run unit tests that verify the correctness of the logic of the controller unit, and I want to be able to run these tests on any computer, without needing the "device" that the code targets.
I searched a lot and came across "abstracting the hardware" and "replacing it with stubs" kinds of solutions, but I'm not sure how I can abstract something like "interrupt [TIM1_OVF]" in the code!
I was wondering if there are any special tools that provide an environment for running this sort of code?
Also, if I am going about this wrong, can anybody point me in the right direction, given that changing or rewriting (!) the microcontroller's code might not be an option?
Thanks a bunch.
Your examples are not ISO C; they are compiler-specific extensions that are not common across AVR compilers, let alone architectures. In many cases they can be worked around by defining macros that require little or no modification of the code. Doing so is a good idea in any case, to make your code portable even across different vendors' AVR compilers, although a combination of techniques may be required.
Most compilers support an "always include" option that allows a header file to be included from the command line without an explicit #include directive in the source. Creating a header with your compatibility macros, and including it either implicitly as described or explicitly in the code, is a useful technique. For example, for the issues you have mentioned, you might have:
// compatibility.h
#if !defined COMPATIBILITY_INCLUDE
#define COMPATIBILITY_INCLUDE
#if defined __IAR_SYSTEMS_ICC__
// Embedded build: map to the toolchain's interrupt syntax.
#define INTERRUPT( irq, handler ) __interrupt [irq] void handler(void)
#elif defined _WIN32
// Host (test) build: an ISR becomes an ordinary function.
#define INTERRUPT( irq, handler ) void handler(void)
#define __flash const
#define __eeprom const
#define __bit char
#else
#error Unknown toolchain/environment
#endif
#endif
That will remove the memory-location qualifiers from the Win32 code and define __bit as a char. The interrupt handler macro will turn a handler into a regular function on Win32. It does require your code to be modified, but since every toolchain does this differently, that is perhaps no bad thing.
For example in this case you would change:
interrupt [TIM1_OVF] void timer1_ovf_isr(void)
{
...
}
to
INTERRUPT( TIM1_OVF, timer1_ovf_isr )
{
...
}
Note that you should use the appropriate target macros in the compatibility file - I have guessed at IAR, for example; you may be using a different compiler. Your compiler documentation should specify the available predefined macros; alternatively, the Pre-defined Compiler Macros project on Sourceforge is a useful resource.
Some of the transformations may change the code semantically, such as swapping __bit for char: if, for example, the bit is assigned a value greater than one and then compared with 1, the embedded target is likely to yield true, while the PC build will not. It might better be transformed to _Bool, but your compiler may then warn about implicit conversions. My suggestions may not be the best possible transformations either - consult your compiler's manual for the precise semantics and decide how best to map them to standard C for test builds.
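As a hedged illustration of how this macro approach enables host-side tests (the tick counter and its semantics are hypothetical, and the firmware source is assumed to be compiled with compatibility.h force-included):

// test_timer.c - host (Win32) build only; links against the firmware
// source compiled with compatibility.h force-included.
#include <assert.h>
#include "compatibility.h"

// Hypothetical state updated by the ISR in the firmware source.
extern volatile unsigned int tick_count;

// Expands to a plain prototype on the host build: void timer1_ovf_isr(void);
INTERRUPT( TIM1_OVF, timer1_ovf_isr );

int main(void)
{
    unsigned int before = tick_count;
    timer1_ovf_isr();                 // call the ISR as an ordinary function
    assert(tick_count == before + 1); // verify the ISR's logic
    return 0;
}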
An alternative that preserves the proprietary semantics is to run your unit tests in an instruction-set simulator, using debugger scripting, if available, to implement stubs for hardware interaction; however, that method makes it impossible to use off-the-shelf unit-testing frameworks such as CUnit.
Depending on your toolchain, you may already have an AVR simulator available, which would allow you to run your unit tests on any PC. For example, IAR provides "C-SPY", an AVR simulator that supports a terminal window, can show register values, can support generation of interrupts, etc. Assuming you keep your unit sizes reasonable, you do not need significant infrastructure or stubbed interfaces to make this work.
One large benefit of running unit tests on your target platform (with your target compiler) is that you can account for particular behaviors caused by the platform (endianness, word size, compiler or library peculiarities, etc.), compared to running in a PC environment.

Optimizing code for various C/C++ compilers

For those who develop software for multiple platforms: how do you handle the possibility that one compiler might do certain things better than another?
Say you develop for OS X, Windows, Linux and you are using Clang/LLVM, VS and GCC.
Say someone compiles your app on OS X using GCC, and another person compiles on OS X using the Intel compilers; you could optimize sections of the code for the Intel compilers if the person has them.
Would you just check a Preprocessor directive?
#ifdef __GCC_
// do it this way
#endif
#ifdef __INTEL__
// do it this way
#endif
#ifdef __GCC_WITH_CXX11_SUPPORT__
// do it this way
#endif
#ifdef __WINDOWS_VISUAL_STUDIO
// do it this way
#endif
Or is there a better way?
And how does one find a list of the predefined macros a compiler offers for checking the compiler, its version, etc.?
Don't choose the implementation based on predefined macros. Let the build system control it.
This lets you build and compare multiple implementations against each other during unit testing.
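A minimal sketch of that comparison idea (both variants are inlined here for brevity; in a real project each would live in its own source file, with hypothetical names like scale_generic.c and scale_fast.c, and the build system would select which one to link into the shipping binary):

#include <assert.h>
#include <math.h>

/* Portable implementation... */
static float scale_generic(float x) { return x * 0.5f; }
/* ...and an alternative implementation of the same interface. */
static float scale_fast(float x)    { return ldexpf(x, -1); }

int main(void)
{
    /* A test build links both variants and checks that they agree. */
    for (float x = -4.0f; x <= 4.0f; x += 0.25f)
        assert(scale_generic(x) == scale_fast(x));
    return 0;
}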
Typically, optimization follows the traditional 80/20 or 90/10 rule of "20% of the code takes 80% of the time to run" (and "20% of the code takes 80% of the time to develop"). Substitute 80/20 for 90/10 if you like - it's nearly always somewhere between those two...
So, the first stage of "do we optimize for a particular compiler" is to figure out what parts of your code are slow, and if you can make it any better in a generic way that works on all compilers (e.g. passing const reference rather than a copy of a large object). Once you have exhausted all generic improvements to the code, you may want to look at compiler specific optimizations - but that really requires that you gain enough that it really is worth the extra maintenance of having code that is different between the different compilers.
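As a sketch of that particular generic improvement (the type and function names are made up):

#include <vector>

struct BigObject {
    std::vector<double> samples;  // potentially large payload
};

// Pass by const reference: no copy of the vector on each call.
double first_sample(const BigObject &obj)
{
    return obj.samples.empty() ? 0.0 : obj.samples.front();
}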
In general, I would very much avoid the "things are different in different compilers".
Generally speaking, compilers are written to optimize common code, not something specialized written for one specific compiler. So generally you should just focus on writing clean code, using the fastest algorithms. However, some compilers can be given hints - gcc, for instance, through attributes - and using these attributes lets the compiler do its job better.
For instance, using the noreturn attribute allows gcc to discard the function-return code, thereby minimizing code size. I guess a lot of compilers have similar hinting schemes.
One could then do:
#ifdef __GNUC__
#define NO_RETURN __attribute__((noreturn))
#else
#define NO_RETURN
#endif
And use NO_RETURN in your code.
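A minimal usage sketch (the function itself is hypothetical):

#include <stdio.h>
#include <stdlib.h>

// NO_RETURN tells gcc this function never returns, so the caller's
// post-call cleanup and return sequence can be discarded.
NO_RETURN void die(const char *msg)
{
    fprintf(stderr, "fatal: %s\n", msg);
    exit(EXIT_FAILURE);
}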