I'm creating an HAL for an embedded system and part of that is re-creating printf functionality (via a class called Printer). Because it is an embedded system, code-space is critical and I would like to exclude floating-point support in printf by default, but allow the user of my HAL to include it on a project-by-project basis without having to recompile my library.
All of my classes have their method definitions inline in the header file.
printer.h looks something like....
class Printer {
    public:
        Printer (const PrintCapable *printCapable)
                : m_printCapable(printCapable) {}

        void put_char (const char c) { ... }

#ifdef ENABLE_PRINT_FLOAT
        void put_float (const float f) { ... }
#endif

        void printf (const char fmt[], ...) {
            // Stuffs...
#ifdef ENABLE_PRINT_FLOAT
            // Handle floating point support
#endif
        }

    private:
        const PrintCapable *m_printCapable;
};
// Make it very easy for the user of this library to print by defining an instance for them
extern Printer out;
Now, it is my understanding that this should work great.
printer.cpp is nice and simple:
#include <printer.h>
#include <uart/simplexuart.h>
const SimplexUART _g_simplexUart;
const Printer out(&_g_simplexUart);
Unnecessary code bloat:
If I compile my library with and project without ENABLE_PRINT_FLOAT defined, then code size is 9,216 kB.
Necessary code bloat:
If I compile both library and project with ENABLE_PRINT_FLOAT, code size is 9,348 kB.
Necessary code blo.... oh wait, it's not bloated:
If I compile the project with and the library without ENABLE_PRINT_FLOAT, I would expect to see the same as above. But no... instead I have code size of 7,092 kB and a program that doesn't execute correctly.
Minimum Size:
If both library and project are compiled without ENABLE_PRINT_FLOAT, then the code size is only 6,960 kB.
How can I achieve my goals of small code size, flexible classes, and ease of use?
Build system is CMake. Full project source is here.
Main file is nice and simple:
#include <printer.h>
void main () {
    int i = 0;
    while (1) {
        out.printf("Hello world! %u %05.2f\n", i, i / 10.0);
        ++i;
        delay(250); // 1/4 second delay
    }
}
If you have different definitions of an inline function in different translation units you have undefined behavior. Since your printf() definition changes with the setting of the ENABLE_PRINT_FLOAT macro, you are just seeing this effect.
Typically the compiler won't inline functions it considers too complicated. It creates out-of-line implementations and the linker picks one of them, essentially at random. Since they are all the same, picking a random one is OK ... oh wait, they are different and the program may be broken.
You could make floating point support a template parameter of your printf() function: the function would be called using
out.printf<false>("%d\n", i);
out.printf<true>("%f", f);
The implementation of printf() would delegate to suitable internal functions (to have the compiler merge definitions where they are identical) with the floating point support being disabled for the false case: it could do nothing, fail, or assert.
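A minimal sketch of what that could look like (the class shape and names here are illustrative, not the library's actual code):

class Printer {
    public:
        template <bool EnableFloat>
        void printf (const char fmt[], ...) {
            // ... shared integer/character/string formatting ...
            if (EnableFloat) {
                // floating-point handling lives here; when EnableFloat is
                // false this branch is constant-false and the optimizer can
                // drop it, so printf<false> carries no float code
            } else {
                // on a float specifier: do nothing, print a placeholder,
                // or assert
            }
        }
};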
It may be simpler not to do any conditional support in the first place and rather use a stream-like interface: since the formatting functions for the different types are separate, only those actually being used are picked up.
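For example (a sketch only; this is not the library's actual interface):

class Printer {
    public:
        Printer& operator<< (char c)         { /* emit the character */ return *this; }
        Printer& operator<< (const char s[]) { /* emit the string */    return *this; }
        Printer& operator<< (int value)      { /* format the integer */ return *this; }
        Printer& operator<< (float value)    { /* format the float */   return *this; }
};

// usage: out << "Hello world! " << i << '\n';

Because the operators are separate inline members, a project that never streams a float never pulls in the float formatter.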
If it is an option for your library to use C++11, you could use variadic templates to deal with the situation: the individual formatters would be implemented as separate functions which are dispatched to from inside printf(). This way there is no single printf() function that needs to handle all formatting; instead, only the type formatters actually needed are pulled in. The implementation could look something like this:
inline char const* format(char const* fmt, int value) {
    // find the format specifier and format value accordingly,
    // then adjust fmt to point right after the processed format specifier
    return fmt;
}
inline char const* format(char const* fmt, double value) {
    // like the other one, but for floating point
    return fmt;
}
// other formatters

inline int printf(char const* fmt) { return 0; }  // base case: no arguments left (remaining literal text would be emitted here)

template <typename A, typename... T>
inline int printf(char const* fmt, A&& arg, T&&... args) {
    fmt = format(fmt, std::forward<A>(arg));
    return 1 + printf(fmt, std::forward<T>(args)...);
}
Clearly, there are different approaches to how common code between the different formatters can be factored out. However, the overall idea should work. Ideally, the generic code would do as little work as possible so that the compiler can merge all non-trivial code between the different uses. As a nice side effect, this implementation could verify that the format specifiers match the objects being passed, and either produce a suitable error or handle the mismatch in some appropriate way.
Related
I'd like to use the built-in compiler checks to verify format strings of a custom logging framework to catch the odd runtime crash due to mismatching format string <-> parameters in advance.
Arguments of the custom C++ logging methods are identical to the printf() family so I was attempting to replace all calls to
MyLogger::Error(
with
fprintf(stderr,
Though unfortunately the (clang) preprocessor chokes on the scope resolution operator (::), i.e. instead of ULog::Warn( only the ULog substring is recognized:
#define MyLogger::Error( fprintf(stderr,
Any suggestions on how to make this work much appreciated.
Have you tried a variadic template? found here.
#include <iostream>
#include <cstdio>    // printf
#include <utility>   // std::forward

namespace MyLogger
{
    template <typename... T>
    auto Error(const char * _Format, T &&... args)
    {
        return printf(_Format, std::forward<T>(args)...);
    }
}

#define printf(...) MyLogger::Error(__VA_ARGS__)

int main()
{
    MyLogger::Error("Non-Macro Print \n");
    printf("Macro Print \n");
    return 0;
}
Elaborating on the approach suggested by @Someprogrammerdude, I've extended the custom logging class to use the clang/gcc format attribute to enable compiler format checking.
The declaration simply becomes
static void Error(const char *format,...) __attribute__ ((format (printf, 1, 2)));
It's even better than the original idea of using the preprocessor to temporarily enable checks by replacing calls to the custom formatter with calls to printf(), as it's enabled all the time, catching argument mismatches immediately!
(FWIW - already fixed dozens of issues and a couple of potential crashes on our 120+ LOC code base)
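Worth noting if the logger is a non-static member function: gcc and clang count the implicit this pointer as argument 1, so the attribute indices shift by one:

class ULog {
public:
    // static member: format string is argument 1, varargs start at 2
    static void Error(const char *format, ...) __attribute__((format(printf, 1, 2)));

    // non-static member: `this` is argument 1, so the indices become 2 and 3
    void Warn(const char *format, ...) __attribute__((format(printf, 2, 3)));
};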
What if you modify MyLogger::Error to
MyLogger::Error(args) {
    if (0) {
        fprintf(stderr, args);
    }
    // actual function
}

This way you get the built-in warnings and it does not affect the efficiency of your code. (You can obviously actually use the fprintf if you want to write to stderr, but I think if you wanted that you would have used it already.)
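A C-style variadic function cannot forward its arguments on to fprintf, so one concrete way to apply this trick is a variadic macro at the call site (a sketch with a made-up macro name, not the poster's code):

#include <cstdio>

// The dead `if (0)` branch hands the compiler a real fprintf call to
// format-check; the actual logger then runs. Nothing is written to stderr.
#define MYLOGGER_ERROR(...)                          \
    do {                                             \
        if (0) { fprintf(stderr, __VA_ARGS__); }     \
        MyLogger::Error(__VA_ARGS__);                \
    } while (0)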
One way of doing this would be to create function pointers which conditionally point to different functions depending upon a preprocessor directive which selects the desired feature set.
#if defined(__AVX512__)
void (*func_ptr)() = _mm512_func;
#else
void (*func_ptr)() = _mm256_func;
#endif
int main()
{
func_ptr();
return 0;
}
Are there better ways of doing this? Thanks.
If you're detecting AVX512 only at compile time, you don't need function pointers.
The simplest way: don't define different names for the same function at all, just select which definition to compile in the .cpp file where you have multiple versions of it. That keeps the compile-time dispatching isolated to the file that defines the function, not visible to the rest of your code.
#ifdef __AVX512F__
void func(float *__restrict a, float *__restrict b) {
... // AVX512 version here
}
#elif defined(__AVX2__) && defined(__FMA__)
void func(float *__restrict a, float *__restrict b) { // same name
... // AVX2 version here
}
#else
... // SSE2 or scalar fallback
#endif
Although for testing you do probably want to be able to build all versions of it and test + benchmark them against each other, so you might consider using #define func _mm512_func, or using some preprocessor tricks inside that one file. Maybe another answer will have a better idea for this.
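One way to read that suggestion: have the preprocessor rename the function per build, so the same file can be compiled once per ISA and all the resulting objects linked together for benchmarking (FUNC_NAME is a made-up macro, not a standard one):

// func.cpp - compiled several times, e.g.:
//   g++ -mavx512f -DFUNC_NAME=func_avx512 -c func.cpp -o func_avx512.o
//   g++ -mavx2 -mfma -DFUNC_NAME=func_avx2 -c func.cpp -o func_avx2.o
#ifndef FUNC_NAME
#define FUNC_NAME func   // default name for the normal build
#endif

void FUNC_NAME(float *__restrict a, float *__restrict b) {
    // the one implementation, compiled once per target ISA
}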
I thought function pointers were preferred over macros in the C++ community. But this does the same job
Maybe if the function pointer is declared as static void (*const func_ptr)() then you can count on it being inlined / optimized away. You really don't want to add extra overhead for dispatching if you don't need it (e.g. for runtime CPU detection, setting function pointers in an init function that runs cpuid).
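A small sketch of that comment, with stub bodies so it links (note the predefined macro is __AVX512F__, as used in the answer above):

void _mm512_func() { /* AVX-512 code path would live here */ }
void _mm256_func() { /* AVX2 code path would live here */ }

// A const pointer with a compile-time-known target: the compiler can
// normally fold func_ptr() into a direct call, so there is no dispatch cost.
#if defined(__AVX512F__)
static void (*const func_ptr)() = _mm512_func;
#else
static void (*const func_ptr)() = _mm256_func;
#endif

int main() {
    func_ptr();
    return 0;
}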
I need to register functions like the following in a list of functions with arguments.
void func1( int a , char* b ) {}
void func2( vec3f a , std::vector<float> b , double c) {}
...
And call them back with the proper arguments when I receive data over the network. I imagined va_list would solve it, but it doesn't work:
void func1(int a, char* b)
{
    printf("%d %s", a, b);
}

void prepare(...)
{
    va_list argList;
    int args = 2;
    va_start(argList, args);
    ((void (*)(va_list))func1)(argList);
    va_end(argList);
}

int main(int argc, char **argv)
{
    prepare(1, "huhu");
    return 0;
}
What is the most elegant way to solve this ?
I know std::bind / std::function has similar abilities, but the internal data is hidden deep in std, I assume. I just need a few basic data types, it doesn't have to work for arbitrary types. If preprocessor tricks with __VA_ARGS__ or templates would solve it, I am also OK with that. The priority is that it is as simple as possible to use.
Edit 1: I found that assembly can solve it (How do I pass arguments to C++ functions when I call them from inline assembly) - but I would prefer a more platform-independent solution.
If your goal is to create your own small, ad-hoc "rpc" solution, possibly the major drivers for your decisions should be: 1. a minimal amount of code, and 2. being as easy as possible.
Keeping that in mind, it pays off to ponder what the difference is between the following 2 scenarios:
"Real" RPC: The handlers shall be as you wrote with rpc-method-specific signature.
"Message passing": The handlers receive messages of either "end point-determined type" or simply of a unified message type.
Now, what has to be done to get a solution of type 1?
Incoming byte streams/network packets need to be parsed into some sort of message according to some chosen protocol. Then, using some meta-info (the contract), keyed by { serviceContract, serviceMethod }, a specific set of data items needs to be confirmed in the packet and, if present, the respective registered handler function needs to be called. Somewhere within that infrastructure you typically have a (likely code-generated) function which does something like this:
void CallHandlerForRpcXYCallFoo( const RpcMessage* message )
{
    uint32_t arg0 = message->getAsUint32(0);
    // ...
    float argN = message->getAsFloat(N);
    Foo( arg0, arg1, ... argN );
}
All that can, of course, also be packed into classes and virtual methods, with the classes being generated from the service contract metadata. Maybe there is also a way, by means of some excessive template voodoo, to avoid generating code and have a more generic meta-implementation. But all that is work, real work. Way too much work to do just for fun. Instead of doing that, it would be easier to use one of the dozens of technologies which do that already.
Worth noting so far is: Somewhere within that piece of art, there is likely a (code generated) function which looks like the one given above.
Now, what has to be done to get a solution of type 2?
Less than for case 1. Why? Because you simply stop your implementation at calling those handler methods, which all take the RpcMessage as their single argument. As such, you can get away without generating the "make-it-look-like-a-function-call" layer above those methods.
Not only is it less work, it is also more robust in the presence of some scenarios where the contract changes. If one more data item is being added to the "rpc solution", the signature of the "rpc function" MUST change. Code re-generated, application code adapted. And that, whether or not the application needs that new data item. On the other hand, in approach 2, there are no breaking changes in the code. Of course, depending on your choices and the kind of changes in the contract, it still would break.
So, the most elegant solution is: Don't do RPC, do message passing. Preferably in a REST-ful way.
Also, if you prefer a "unified" rpc message over a number of rpc-contract specific message types, you remove another reason for code bloat.
Just in case, what I say seems a bit too abstract, here some mock-up dummy code, sketching solution 2:
#include <cstdio>
#include <cstdint>
#include <map>
#include <vector>
#include <deque>
#include <functional>

// "rpc" infrastructure (could be an API for a dll or a lib or so):
// Just one way to do it. Somehow, your various data types need
// to be handled/represented.
class RpcVariant
{
public:
    enum class VariantType
    {
        RVT_EMPTY,
        RVT_UINT,
        RVT_SINT,
        RVT_FLOAT32,
        RVT_BYTES
    };

private:
    VariantType m_type;
    uint64_t m_uintValue;
    int64_t m_intValue;
    float m_floatValue;
    std::vector<uint8_t> m_bytesValue;

    explicit RpcVariant(VariantType type)
        : m_type(type)
    {
    }

public:
    static RpcVariant MakeEmpty()
    {
        RpcVariant result(VariantType::RVT_EMPTY);
        return result;
    }

    static RpcVariant MakeUint(uint64_t value)
    {
        RpcVariant result(VariantType::RVT_UINT);
        result.m_uintValue = value;
        return result;
    }

    // ... More make-functions

    uint64_t AsUint() const
    {
        // TODO: check if correct type...
        return m_uintValue;
    }

    // ... More AsXXX() functions
    // ... Some ToWire()/FromWire() functions...
};

typedef std::map<uint32_t, RpcVariant> RpcMessage_t;
typedef std::function<void(const RpcMessage_t *)> RpcHandler_t;

void RpcInit();
void RpcUninit();

// application writes handlers and registers them with the infrastructure.
// rpc_context_id can be anything opportune - chose uint32_t, here.
// could as well be a string or a pair of values (service,method) or whatever.
void RpcRegisterHandler(uint32_t rpc_context_id, RpcHandler_t handler);

// Then according to taste/style preferences some receive function which
// uses the registered information and dispatches to the handlers...
void RpcReceive();
void RpcBeginReceive();
void RpcEndReceive();

// maybe some sending, too...
void RpcSend(uint32_t rpc_context_id, const RpcMessage_t * message);

int main(int argc, const char * argv[])
{
    RpcInit();
    RpcRegisterHandler(42, [](const RpcMessage_t *message) { puts("message type 42 received."); });
    RpcRegisterHandler(43, [](const RpcMessage_t *message) { puts("message type 43 received."); });
    while (true)
    {
        RpcReceive();
    }
    RpcUninit();
    return 0;
}
And if the RpcMessage is then traded around packed in a std::shared_ptr, you can even have multiple handlers, or forward the same message instance to other threads. This is one particularly annoying thing which needs yet another "serializing" step in the rpc approach. Here, you simply forward the message.
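A tiny sketch of that shared_ptr variant, reusing the types from the mock-up above (DispatchShared is made up for illustration):

#include <memory>

typedef std::shared_ptr<const RpcMessage_t> RpcMessagePtr_t;
typedef std::function<void(RpcMessagePtr_t)> RpcSharedHandler_t;

// Every handler (possibly on another thread) sees the same message
// instance; nothing needs to be re-serialized to fan the message out.
void DispatchShared(RpcMessagePtr_t message,
                    const std::vector<RpcSharedHandler_t> &handlers)
{
    for (const RpcSharedHandler_t &handler : handlers)
        handler(message);
}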
I'm trying to initialize a global array of function pointers at compile-time, in either C or C++. Something like this:
module.h
typedef int16_t (*myfunc_t)(void);
extern myfunc_t myfunc_array[];
module.cpp
#include "module.h"
int16_t myfunc_1();
int16_t myfunc_2();
...
int16_t myfunc_N();
// the ordering of functions is not that important
myfunc_t myfunc_array[] = { myfunc_1, myfunc_2, ... , myfunc_N };
func1.cpp, func2.cpp, ... funcN.cpp (symbolic links to a single func.cpp file, so that different object files are created: func1.o, func2.o, func3.o, ... , funcN.o. NUMBER is defined using g++ -DNUMBER=N)
#include "module.h"
#define CONCAT2(x, y) x ## y
#define CONCAT(x, y) CONCAT2(x, y)
int16_t CONCAT(myfunc_, NUMBER)() { ... }
When compiled using g++ -DNUMBER=N, after preprocessing becomes:
func1.cpp
...
int16_t myfunc_1() { ... }
func2.cpp
...
int16_t myfunc_2() { ... }
and so on.
The declarations of myfunc_N() and the initialization of myfunc_array[] are not cool, since N changes often and could be anywhere between 10 and 200. I prefer not to use a script or Makefile to generate them either. The ordering of functions is not that important, I can work around that. Is there a neater/smarter way to do this?
How To Make a Low-Level Function Registry
First you create a macro to place pointers to your functions in a special section:
/* original typedef from question: */
typedef int16_t (*myfunc)(void);
#define myfunc_register(N) \
static myfunc registered_##myfunc_##N \
__attribute__((__section__(".myfunc_registry"))) = myfunc_##N
The static variable name is arbitrary (it will never be used) but it's nice to choose an expressive name. You use it by placing the registration just below your function:
myfunc_register(NUMBER);
Now when you compile your file (each time) it will have a pointer to your function in the section .myfunc_registry. This will all compile as-is but it won't do you any good without a linker script. Thanks to caf for pointing out the relatively new INSERT AFTER feature:
SECTIONS
{
    .rel.rodata.myfunc_registry : {
        PROVIDE(myfunc_registry_start = .);
        *(.myfunc_registry)
        PROVIDE(myfunc_registry_end = .);
    }
}
INSERT AFTER .text;
The hardest part of this scheme is creating the entire linker script: You need to embed that snippet in the actual linker script for your host which is probably only available by building binutils by hand and examining the compile tree or via strings ld. It's a shame because I quite like linker script tricks.
Link with gcc -Wl,-Tlinkerscript.ld ... The -T option will enhance (rather than replace) the existing linker script.
Now the linker will gather all of your pointers with the section attribute together and helpfully provide a symbol pointing before and after your list:
extern myfunc myfunc_registry_start[], myfunc_registry_end[];
Now you can access your array:
/* this cannot be static because it is not known at compile time */
size_t myfunc_registry_size = (myfunc_registry_end - myfunc_registry_start);
int i;
for (i = 0; i < myfunc_registry_size; ++i)
    (*myfunc_registry_start[i])();
They will not be in any particular order. You could number them by putting them in __section__(".myfunc_registry." #N) and then in the linker gathering *(.myfunc_registry.*), but the sorting would be lexicographic instead of numeric.
I have tested this out with gcc 4.3.0 (although the gcc parts have been available for a long time) and ld 2.18.50 (you need a fairly recent ld for the INSERT AFTER magic).
This is very similar to the way the compiler and linker conspire to execute your global ctors, so it would be a whole lot easier to use a static C++ class constructor to register your functions and vastly more portable.
You can find examples of this in the Linux kernel, for example __initcall is very similar to this.
I was going to suggest this question is more about C, but on second thoughts, what you want is a global container of function pointers, and to register available functions into it. I believe this is called a Singleton (shudder).
You could make myfunc_array a vector, or wrap up a C equivalent, and provide a function to push myfuncs into it. Now finally, you can create a class (again you can do this in C), that takes a myfunc and pushes it into the global array. This will all occur immediately prior to main being called. Here are some code snippets to get you thinking:
// a header
extern vector<myfunc> myfunc_array;

struct _register_myfunc {
    _register_myfunc(myfunc lolz0rs) {
        myfunc_array.push_back(lolz0rs);
    }
};

#define register_myfunc(lolz0rs) static _register_myfunc _unique_name(lolz0rs);

// a source
vector<myfunc> myfunc_array;

// another source
int16_t myfunc_1() { ... }
register_myfunc(myfunc_1);

// another source
int16_t myfunc_2() { ... }
register_myfunc(myfunc_2);
Keep in mind the following:
You can control the order the functions are registered by manipulating your link step.
The initialization of your translation unit-scoped variables occurs before main is called, i.e. the registering will be completed.
You can generate unique names using some macro magic and __COUNTER__ (a sketch follows the links below). There may be other sneaky ways that I don't know about. See these useful questions:
Unnamed parameters in C
Unexpected predefined macro behaviour when pasting tokens
How to generate random variable names in C++ using macros?
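Picking up the __COUNTER__ idea, a sketch of how the registration macro could generate unique names (the helper macro names here are made up):

// Two-step concatenation so __COUNTER__ is expanded before token pasting.
#define REG_CONCAT2(x, y) x##y
#define REG_CONCAT(x, y) REG_CONCAT2(x, y)
#define register_myfunc(fn) \
    static _register_myfunc REG_CONCAT(_registrar_, __COUNTER__)(fn)

register_myfunc(myfunc_1);  // expands to something like: static _register_myfunc _registrar_0(myfunc_1);
register_myfunc(myfunc_2);  // ... _registrar_1(myfunc_2);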
Your solution sounds much too complicated and error prone to me.
You go over your project with a script (or probably make) to place the -D options to the compiler, anyhow. So I suppose you are keeping a list of all your functions (resp. the files defining them).
I'd use proper names for all the functions, nothing of your numbering scheme and then I would produce the file "module.cpp" with that script and initialize the table with the names.
For this you just have to keep a list of all your functions (and perhaps filenames) in one place. This could be kept consistent more easily than your actual scheme, I think.
Edit: Thinking of it even this might also be overengineering. If you have to maintain a list of your functions somewhere in any case, why not just inside the file "module.cpp"? Just include all the header files of all your functions, there, and list them in the initializer of the table.
Since you allow C++, the answer is obviously yes, with templates:
template<int N> int16_t myfunc() { /* N is a const int here */ }
myfunc_t myfunc_array[] = { myfunc<0>, myfunc<1>, myfunc<2> };
Now, you might wonder if you can create that variable-length initializer list with some macro. The answer is yes, but the macros needed are ugly. So I'm not going to write them here, but point you to Boost::Preprocessor.
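If C++14 is an option, an alternative to the preprocessor machinery (a different technique than this answer suggests) is to build the table with std::index_sequence:

#include <array>
#include <cstddef>
#include <cstdint>
#include <utility>

#ifndef NUMBER
#define NUMBER 5   // normally supplied by the build system: g++ -DNUMBER=N
#endif

template <int N> int16_t myfunc() { return N; }   // placeholder body

template <std::size_t... Is>
constexpr std::array<int16_t (*)(), sizeof...(Is)>
make_table(std::index_sequence<Is...>)
{
    return {{ myfunc<static_cast<int>(Is)>... }};
}

// All NUMBER entries are generated without spelling any of them out.
constexpr auto myfunc_array = make_table(std::make_index_sequence<NUMBER>{});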
However, do you really need such an array? Do you really need the name myfunc_array[0] for myfunc<0> ? Even if you need a runtime argument (myfunc_array[i]) there are other tricks:
// needs <cassert> for assert()
template <int Nmax> inline int16_t myfunc_wrapper(int i) {
    assert(i <= Nmax);
    return (i == Nmax) ? myfunc<Nmax>() : myfunc_wrapper<Nmax - 1>(i);
}
template <> inline int16_t myfunc_wrapper<0>(int) {
    return myfunc<0>();
}
inline int16_t myfunc_wrapper(int i) {
    return myfunc_wrapper<NUMBER>(i); // NUMBER is defined with g++ -DNUMBER=N
}
Ok I worked out a solution based on Matt Joiner's tip:
module.h
typedef int16_t (*myfunc_t)(void);
extern myfunc_t myfunc_array[];

class FunctionRegistrar {
    public:
        FunctionRegistrar(myfunc_t fn, int fn_number) {
            myfunc_array[fn_number - 1] = fn; // ensures correct ordering of functions (not that important though)
        }
};
module.cpp
#include "module.h"
myfunc_t myfunc_array[100]; // The size needs to be #defined by the compiler, probably
func1.cpp, func2.cpp, ... funcN.cpp
#include "module.h"
static int16_t myfunc(void) { ... }
static FunctionRegistrar functionRegistrar(myfunc, NUMBER);
Thanks everyone!
I have sort of a tricky problem I'm attempting to solve. First of all, an overview:
I have an external API not under my control, which is used by a massive amount of legacy code.
There are several classes of bugs in the legacy code that could potentially be detected at run-time, if only the external API was written to track its own usage, but it is not.
I need to find a solution that would allow me to redirect calls to the external API into a tracking framework that would track api usage and log errors.
Ideally, I would like the log to reflect the file and line number of the API call that triggered the error, if possible.
Here is an example of a class of errors that I would like to track. The API we use has two functions. I'll call them GetAmount, and SetAmount. They look something like this:
// Get an indexed amount
long GetAmount(short Idx);
// Set an indexed amount
void SetAmount(short Idx, long amount);
These are regular C functions. One bug I am trying to detect at runtime is when GetAmount is called with an Idx that hasn't already been set with SetAmount.
Now, all of the API calls are contained within a namespace (call it api_ns), however they weren't always in the past. So, of course the legacy code just threw a "using namespace api_ns;" in their stdafx.h file and called it good.
My first attempt was to use the preprocessor to redirect API calls to my own tracking framework. It looked something like this:
// in FormTrackingFramework.h
class FormTrackingFramework
{
private:
    static FormTrackingFramework* current;

public:
    static FormTrackingFramework* GetCurrent();

    long GetAmount(short Idx, const std::string& file, size_t line)
    {
        // track usage, log errors as needed
        return api_ns::GetAmount(Idx);
    }
};
#define GetAmount(Idx) (FormTrackingFramework::GetCurrent()->GetAmount(Idx, __FILE__, __LINE__))
Then, in stdafx.h:
// in stdafx.h
#include "theAPI.h"
#include "FormTrackingFramework.h"
#include "LegacyPCHIncludes.h"
Now, this works fine for GetAmount and SetAmount, but there's a problem. The API also has a SetString(short Idx, const char* str). At some point, our legacy code added an overload: SetString(short Idx, const std::string& str) for convenience. The problem is, the preprocessor doesn't know or care whether you are calling SetString or defining a SetString overload. It just sees "SetString" and replaces it with the macro definition. Which of course doesn't compile when defining a new SetString overload.
I could potentially reorder the #includes in stdafx.h to include FormTrackingFramework.h after LegacyPCHIncludes.h, however that would mean that none of the code in the LegacyPCHIncludes.h include tree would be tracked.
So I guess I have two questions at this point:
1: how do I solve the API overload problem?
2: Is there some other method of doing what I want to do that works better?
Note: I am using Visual Studio 2008 w/SP1.
Well, for the cases where you need overloads, you could use a class instance that overloads operator() for a number of parameters.
#define GetAmount GetAmountFunctor(FormTrackingFramework::GetCurrent(), __FILE__, __LINE__)
then, make a GetAmountFunctor:
class GetAmountFunctor
{
public:
    GetAmountFunctor(....) // capture relevant debug info for logging
    {}

    void operator() (short idx, std::string str)
    {
        // logging here
        api_ns::GetAmount(idx, str);
    }

    void operator() (short idx)
    {
        // logging here
        api_ns::GetAmount(idx);
    }
};
This is very much pseudocode but I think you get the idea. Wherever in your legacy code the particular function name is mentioned, it is replaced by a functor object, and the function is actually called on the functor. Do consider that you only need to do this for functions where overloads are a problem. To reduce the amount of glue code, you can create a single struct for the parameters __FILE__, __LINE__, and pass it into the constructor as one argument.
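For that "single struct" idea, a sketch (the names here are invented) that keeps each macro short and works on the pre-C++11 compiler mentioned in the question:

// Bundles the call-site information so every functor constructor takes one
// extra argument instead of two.
struct CallSite
{
    const char *file;
    int line;
    CallSite(const char *f, int l) : file(f), line(l) {}
};

#define CALL_SITE CallSite(__FILE__, __LINE__)
#define GetAmount GetAmountFunctor(FormTrackingFramework::GetCurrent(), CALL_SITE)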
The problem is, the preprocessor doesn't know or care whether you are calling SetString or defining a SetString overload.
Clearly, the reason the preprocessor is being used is that it is oblivious to the namespace.
A good approach is to bite the bullet and retarget the entire large application to use a different namespace api_wrapped_ns instead of api_ns.
Inside api_wrapped_ns, inline functions can be provided which wrap counterparts with like signatures in api_ns.
There can even be a compile time switch like this:
namespace api_wrapped_ns {
#ifdef CONFIG_API_NS_WRAPPER
    inline long GetAmount(short Idx, const std::string& file, size_t line)
    {
        // of course, do more than just wrapping here
        // (the underlying API only takes the index)
        return api_ns::GetAmount(Idx);
    }
    // other inlines
#else
    // Wrapping turned off: just bring api_ns into api_wrapped_ns
    using namespace api_ns;
#endif
}
Also, the wrapping can be brought in piecemeal:
namespace api_wrapped_ns {
    // This function is wrapped;
    inline long GetAmount(short Idx, const std::string& file, size_t line)
    {
        // of course, do more than just wrapping here
        return api_ns::GetAmount(Idx);
    }

    // The api_ns::FooBar symbol is unwrapped (for now)
    using api_ns::FooBar;
}