iOS — Determining whether Accelerate.framework is available at runtime - c++

Is there any way to determine whether the Accelerate.framework is available at runtime from straight C or C++ files?
The examples I've found for conditional coding all seem to require Objective-C introspection (e.g., respondsToSelector) and/or Objective-C APIs (e.g., UIDevice's systemVersion property).

The usual trick for this is to weak-link against the framework and then check a function exported by that framework for the actual availability. If the framework is not available at runtime, the weakly linked function's address will be NULL.
So for Accelerate.framework you would do something like this:
#include <Accelerate/Accelerate.h>
if (cblas_sdsdot) {
    NSLog(@"Yay, we got Accelerate.framework");
} else {
    NSLog(@"Oh no, no Accelerate.framework");
}
This is described in TN2064 - Ensuring Binary Backwards Compatibility
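If you need the same check from a straight C or C++ translation unit (no Objective-C at all), a minimal sketch might look like this, assuming the binary weak-links Accelerate.framework (marked "Optional" in Xcode, or via the -weak_framework Accelerate linker flag):

#include <Accelerate/Accelerate.h>
#include <cstdio>

static bool haveAccelerate()
{
    // With weak linking, an unresolved symbol evaluates as a null function pointer.
    return cblas_sdsdot != NULL;
}

int main()
{
    std::printf(haveAccelerate() ? "Accelerate is available\n"
                                 : "Accelerate is not available\n");
    return 0;
}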

Related

How to dynamically register class in a factory class at runtime period with c++

Now, I have implemented a factory class to dynamically create objects from an identification string; please see the following code:
void IOFactory::registerIO()
{
    Register("NDAM9020", []() -> IOBase * {
        return new NDAM9020();
    });
    Register("BK5120", []() -> IOBase * {
        return new BK5120();
    });
}

std::unique_ptr<IOBase> IOFactory::createIO(std::string ioDeviceName)
{
    std::unique_ptr<IOBase> io = createObject(ioDeviceName);
    return io;
}
So we can create the IO class with the registered name:
IOFactory ioFactory;
auto io = ioFactory.createIO("BK5120");
The problem with this method is that if we add another IO component, we have to add another registration call in registerIO and recompile the whole project. So I was wondering if I could dynamically register classes from a configuration file (see below) at runtime.
io_factory.conf
------------------
NDAM9020:NDAM9020
BK5120:BK5120
------------------
The first field is the identification name and the second is the class name.
I have tried with macros, but a macro parameter can't be a string. So I was wondering if there are other ways. Thanks in advance.
Update:
I didn't expect so many comments and answers. Thank you all, and sorry for replying late.
Our current OS is Ubuntu 16.04 and we use the built-in compiler, gcc/g++ 5.4.0; we use CMake to manage the build.
And I should mention that registering classes at runtime is not a must; it is also OK if there is a way to do this at compile time. What I want is just to avoid recompiling when I register another class.
So I was wondering if I could dynamically register class from a configure file(see below) at runtime.
No. As of C++20, C++ has no reflection features allowing it. But you could do it at compile time by generating a simple C++ implementation file from your configuration file.
How to dynamically register class in a factory class at runtime period with c++
Read much more about C++: at least a good C++ programming book, a good C++ reference website, and later n3337, the C++11 standard. Read also the documentation of your C++ compiler (perhaps GCC or Clang) and, if you have one, of your operating system. If plugins are possible on your OS, you can register a factory function at runtime (by referring to that function after a plugin providing it has been loaded). For example, the Mozilla Firefox browser, recent GCC compilers (e.g. GCC 10 with plugins enabled), and the fish shell do this.
So I was wondering if I could dynamically register class from a configure file(see below) at runtime.
Most C++ programs are running under an operating system, such as Linux. Some operating systems provide a plugin mechanism. For Linux, see dlopen(3), dlsym(3), dlclose(3), dladdr(3) and the C++ dlopen mini-howto. For Windows, dive into its documentation.
So, with a recent C++ implementation and some recent operating systems, you can register a factory class at runtime (using plugins), and you can find libraries (e.g. Qt or POCO) to help you.
However, in pure standard C++, the set of translation units is statically known and plugins do not exist. So the set of functions, lambda-expressions, or classes in a given program is finite and does not change with time.
In pure C++, the set of valid function pointers, or the set of valid possible values for a given std::function variable, is finite. Anything else is undefined behavior. In practice, many real-life C++ programs accept plugins thru their operating systems or JIT-compiling libraries.
You could of course consider using JIT-compiling libraries such as asmjit or libgccjit or LLVM. They are implementation specific, so your code won't be portable.
On Linux, a lot of Qt or GTKmm applications (e.g. KDE, and most web browsers, e.g. Konqueror, Chrome, or Firefox) are coded in C++ and do load plugins with factory functions. Check with strace(1) and ltrace(1).
Microsoft's Trident browser engine is rumored to be coded in C++ and probably accepts plugins.
I have tried with Macros, but the parameter in Macros can't be string.
A macro parameter can be stringized, and you could play X-macro tricks.
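For instance, here is a hedged sketch of the stringizing/X-macro approach, reusing the Register and IOBase names from the question (IO_CLASS_LIST is a made-up macro name); the class list then lives in exactly one place:

// The X-macro list is the single place to touch when adding a new IO class.
#define IO_CLASS_LIST \
    X(NDAM9020)       \
    X(BK5120)

void IOFactory::registerIO()
{
    // #name stringizes the parameter, producing "NDAM9020", "BK5120", ...
#define X(name) Register(#name, []() -> IOBase * { return new name(); });
    IO_CLASS_LIST
#undef X
}

Adding a class still means recompiling this one translation unit, but nothing else in the project has to change.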
What I want is just avoiding the recompiling when I want to register another class.
On Ubuntu, I recommend accepting plugins in your program or library.
Use dlopen(3) with an absolute file path; the plugin would typically be passed as a program option (like RefPerSys or GCC does) and dlopen-ed at program or library initialization time. Practically speaking, you can have lots of plugins (dozens of thousands; see manydl.c and check with pmap(1) or proc(5)). The dlsym(3)-ed C++ functions in your plugins should be declared extern "C" to disable name mangling.
A single C++ file plugin (in yourplugin.cc) can be compiled with g++ -Wall -O -g -fPIC -shared yourplugin.cc -o yourplugin.so and later you would dlopen "./yourplugin.so" or an absolute path (or configure suitably your $LD_LIBRARY_PATH -see ld.so(8)- and pass "yourplugin.so" to dlopen). Be also aware of Rpath.
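A hedged sketch of what such a plugin and its loader could look like for the IOFactory from the question (register_io, iofactory.h and NewIO are illustrative assumptions, and Register is assumed to be a public member of IOFactory):

// yourplugin.cc -- hypothetical plugin, compiled as shown above.
#include "iofactory.h"   // assumed header declaring IOBase and IOFactory

class NewIO : public IOBase
{
    // implement IOBase's virtual functions here
};

extern "C" void register_io(IOFactory &factory)   // extern "C": no name mangling
{
    factory.Register("NewIO", []() -> IOBase * { return new NewIO(); });
}

// Host side: load a plugin by absolute path and let it register itself.
#include <dlfcn.h>
#include <iostream>

void loadIOPlugin(IOFactory &factory, const char *path)
{
    void *handle = dlopen(path, RTLD_NOW);
    if (!handle) {
        std::cerr << dlerror() << '\n';
        return;
    }
    auto reg = reinterpret_cast<void (*)(IOFactory &)>(dlsym(handle, "register_io"));
    if (reg)
        reg(factory);
    // keep the handle open for as long as objects created by the plugin are alive
}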
Consider also (after upgrading your GCC to GCC 9 at least, perhaps by compiling it from its source code) using libgccjit (it is faster than generating temporary C++ code in some file and compiling that file into a temporary plugin).
For ease of debugging your loaded plugins, you might be interested by Ian Taylor's libbacktrace.
Notice that your program's global symbols (declared as extern "C") can be accessed by name by passing a nullptr file path to dlopen(3), then using dlsym(3) on the obtained handle. You want to pass -rdynamic -ldl when linking your program (or your shared library).
What I want is just avoiding the recompiling when I want to register another class.
You might register classes in a different translation unit (a short one, presumably). You could take inspiration from RefPerSys's multiple #include-s of its generated/rps-name.hh file. Then you would simply recompile a single *.cc file and relink your entire program or library. Notice that Qt plays similar tricks with its moc, and I recommend taking inspiration from it.
Read also J. Pitrat's book Artificial Beings: The Conscience of a Conscious Machine, which explains why a metaprogramming approach is useful. Study the source code of GCC (or of RefPerSys), and use or take inspiration from SWIG, ANTLR, and GNU Bison (they all generate C++ code) when relevant.
You seem to have asked for more dynamism than you actually need. You want to avoid the factory itself having to be aware of all of the classes registered in it.
Well, that's doable without going all the way to runtime code generation!
There are several implementations of such a factory; but I am obviously biased in favor of my own: einpoklum's Factory class (gist.github.com)
simple example of use:
#include "Factory.h"
// we now have:
//
// template<typename Key, typename BaseClass, typename... ConstructionArgs>
// class Factory;
//
#include <string>
struct Foo { Foo(int x) {} };
struct Bar : Foo { Bar(int x) : Foo(x) {} };
int main()
{
util::Factory<std::string, Foo, int> factory;
factory.registerClass<Bar>("key_for_bar");
auto* my_bar_ptr = factory.produce("key_for_bar");
}
Notes:
The std::string is used as a key; you could have a factory with numeric values as keys instead, if you like.
All registered classes must be subclasses of the BaseClass value chosen for the factory. I believe you can change the factory to avoid that, but then you'll always be getting void *s from it.
You can wrap this in a singleton template to get a single, global, static-initialization-safe factory you can use from anywhere.
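For the singleton idea, a hedged sketch might look like this (util::Factory comes from the linked gist; globalFooFactory is a made-up name):

#include <string>

util::Factory<std::string, Foo, int> &globalFooFactory()
{
    // Meyers singleton: initialized on first use, thread-safe since C++11.
    static util::Factory<std::string, Foo, int> instance;
    return instance;
}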
Now, if you load some plugin dynamically (see @BasileStarynkevitch's answer), you just need that plugin to expose an initialization function which makes registerClass() calls on the factory, and to call this initialization function right after loading the plugin. Or, if you have a static-initialization-safe singleton factory, you can make the registration calls in a static initializer in your plugin shared library - but be careful with that; I'm not an expert on shared-library loading.
Definitely yes!
There's an old post from 2006 that solved this for me for many years. The implementation revolves around a centralized registry with a decentralized registration mechanism that is expanded using a REGISTER_X macro; check it out:
https://web.archive.org/web/20100618122920/http://meat.net/2006/03/cpp-runtime-class-registration/
I have to admit that @einpoklum's factory looks awesome as well. I created a header-only sample gist containing the code and a sample:
https://gist.github.com/h3r/5aa48ba37c374f03af25b9e5e0346a86

C++ Plugins ABI issues on Linux

I am working on a plugin system to replace shared libs.
I am aware of ABI issues when designing an API for shared libs, and that entry points in the libs, such as exported classes, should be carefully designed.
For example, adding, removing or reordering private member variables of an exported class may lead to a different memory layout and runtime errors (from my understanding, that's why the Pimpl pattern might be useful). Of course there are many other pitfalls to avoid when modifying exported classes.
I have built a small example here to illustrate my question.
First, I provide the following header for the plugin developer:
// character.h
#ifndef CHARACTER_H
#define CHARACTER_H
#include <string>
class Character
{
public:
virtual std::string name() = 0;
virtual ~Character() = 0;
};
inline Character::~Character() {}
#endif
Then the plugin is built as a shared lib "libcharacter.so" :
#include "character.h"
#include <iostream>
class Wizard : public Character
{
public:
virtual std::string name() {
return "wizard";
}
};
extern "C"
{
Wizard *createCharacter()
{
return new Wizard;
}
}
And finally the main application that uses the plugin :
#include "character.h"
#include <iostream>
#include <dlfcn.h>
int main(int argc, char *argv[])
{
(void)argc, (void)argv;
using namespace std;
Character *(*creator)();
void *handle = dlopen("../character/libcharacter.so", RTLD_NOW);
if (handle == nullptr) {
cerr << dlerror() << endl;
exit(1);
}
void *f = dlsym(handle, "createCharacter");
creator = (Character *(*)())f;
Character *character = creator();
cout << character->name() << endl;
dlclose(handle);
return 0;
}
Is it sufficient to define an abstract class to get rid of all ABI issues?
Is it sufficient to define an abstract class to get rid of all ABI issues?
Short answer:
No.
I wouldn't recommend using C++ for a plugin API (see longer answer below), but if you do decide to stick with C++ then:
Don't use any standard library types in your plugin API.
For instance, Character::name() returns a std::string. If the implementation of std::string ever changes (and it has in the past in GCC) then that will result in Undefined Behavior. Really, anything that you don't control (any third-party library) shouldn't be used in the API.
Don't use exceptions or RTTI across the plugin boundary. On Linux exceptions and RTTI might work if you load the plugin with RTLD_GLOBAL (not a good idea for plugins) and both the host and the plugin use the same runtime. But in general you either won't be able to catch exceptions from another module, or they might even cause heap corruption (if they are allocated by different runtimes).
Only add functions to the end of your abstract classes, or everything will silently break because of the vtable layout changing (and that can be really hard to diagnose).
Always allocate and deallocate an object from the same module. I noticed you don't have a destroyCharacter() function (main() actually leaks the character, but that's another question). Always provide symmetric create and destroy functions for resources created by a different module (shared library or plugin); see the sketch after these notes.
I believe on Linux with GCC the host application's operator new and operator delete get correctly propagated to the loaded plugin (through weak symbols), but if you want it to work on Windows then don't assume that operator new and operator delete in the host application and the plugin are the same. A statically linked runtime, especially built with LTO, might also mess with this.
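Picking up the note about symmetric allocation, a minimal sketch of what the plugin from the question could export (destroyCharacter is an assumed name, not something the original code has):

extern "C"
{
    Character *createCharacter()
    {
        return new Wizard;
    }

    void destroyCharacter(Character *character)
    {
        delete character;   // freed by the same module that allocated it
    }
}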
Longer answer:
There are a lot of possible issues when exporting a C++ API from a plugin.
Generally speaking, there are no guarantees about it working if anything about the toolchains used to build the host application and the plugin differs. This can include (but is not limited to) compilers, versions of the language, compiler flags, preprocessor definitions, etc.
The common wisdom regarding plugins is to use a pure C89 API, because the C ABI on all common platforms is very stable.
Keeping to the common subset of C89 and C++ will mean that the host and plugin can use different language standards, standard libraries, etc. Unless the host or the plugin are built with some weird (and probably non-standard-conforming) APIs, this should be reasonably safe. Obviously, you still have to be careful with data structure layouts.
You can then provide a rich C++ header-only wrapper for the C API that handles lifetime and error code/exception conversions, etc.
As a nice bonus, C APIs are producible and consumable by most languages, which could allow the plugin authors to use not just C++.
There are actually quite a few pitfalls even in a C API. If we're being pedantic then the only safe things are functions with fixed-size arguments and return types (pointers, size_t, [u]intN_t) - not even necessarily built-in types (short, int, long, ...), or enums. E.g. in GCC: -fshort-enums can change the size of enums, -fpack-struct[=n] can change the padding within structs.
So, if you really want to be safe then don't use enums and either pack all your structs or don't expose them directly (instead expose accessor functions).
Other considerations:
These aren't strictly related to the question but should definitely be considered before committing to a specific style of API.
Error handling: Whether or not you use C++, you'll need an alternative to exceptions.
This will probably be some form of error code. std::error_code in C++ can be then used to wrap the raw enum/int as soon as you're in C++ land, and if the API uses C++ then a std::expected-like or Boost.Outcome-like type with a stable ABI could be used.
Loading the plugin and importing symbols: With abstract classes it's easy - a simple factory function is all you need. With a traditional C API you might end up needing to import hundreds of symbols. One way of dealing with that would be to emulate a vtable in C. Make each object that has associated functions start with a pointer to a dispatch table, e.g.
typedef struct game_string_view { const char *data; size_t size; } game_string_view;

typedef enum game_plugin_error_code { game_plugin_success = 0, /* ... */ } game_plugin_error_code;

typedef struct game_plugin_character_impl *GamePluginCharacter; // handle to a Character

typedef struct game_plugin_character_dispatch_table { // basically a vtable
    void (*destroy)(GamePluginCharacter character); // you could even put destroy() here
    game_string_view (*name)(GamePluginCharacter character);
    void (*update)(GamePluginCharacter character /* , ... */, game_plugin_error_code *ec); // might fail
} game_plugin_character_dispatch_table;

typedef struct game_plugin_character_impl {
    // every call goes through this table and takes GamePluginCharacter as its first argument
    const game_plugin_character_dispatch_table *dispatch;
} game_plugin_character_impl;
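And a hedged sketch of the kind of header-only C++ wrapper mentioned earlier, sitting on top of that hypothetical dispatch table (GameCharacter is a made-up name):

#include <string>

class GameCharacter
{
public:
    explicit GameCharacter(GamePluginCharacter handle) : handle_(handle) {}
    ~GameCharacter()
    {
        if (handle_)
            handle_->dispatch->destroy(handle_);   // always freed by the plugin side
    }
    GameCharacter(const GameCharacter &) = delete;
    GameCharacter &operator=(const GameCharacter &) = delete;

    std::string name() const
    {
        game_string_view v = handle_->dispatch->name(handle_);
        return std::string(v.data, v.size);        // copy into a host-owned string
    }

private:
    GamePluginCharacter handle_;
};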
Future extensibility and compatibility: You should design the API knowing that you'll want to change it in the future while keeping compatibility. IMO, a C API lends itself well to this because it forces you to be very precise about what is exposed. The plugin should be able to expose its API version to the host in a way that is forward and backward compatible.
It's a good idea to think about extensibility when designing each function signature. E.g. if a struct is passed by pointer (instead of by value), then its size can be extended without breaking compatibility (as long as at run time both the caller and the callee agree on its size).
Visibility: Maybe look into visibility on Linux and other platforms. This isn't really a question of API design, just helps clean up the symbols exported from a shared library.
All of the above is by no means exhaustive.
I would suggest the talk "Hourglass Interfaces for C++ APIs" as further "reading".
And of course there are other good talks and articles on the matter (that I can't remember off the top of my head).

DLL fails to load if unused ref class is removed

I'm running into a very strange problem trying to compile and use a Windows Runtime component within a UWP application (VS2017 Community 15.9.13 with NetCore.UniversalWindowsPlatform 6.2.8, compiled without /clr but with /ZW).
It is basically something like the Grayscaletransform. The runtime component is actually working as expected; now I wanted to remove some unused code. However, as soon as I remove it from a particular file and recompile, it still compiles and links, but the DLL no longer loads.
Here's some example code that I have to put in:
ref class DummyU sealed
{
public:
DummyU() {}
};
DummyU^ CreateDummyU()
{
return ref new DummyU();
}
The code just makes it work, although it is a) not referenced at all and b) does not do anything useful.
The result of removing it:
Exception thrown at 0x0EFF322F (vccorlib140d_app.dll) in TestAppUWP.exe: 0xC0000005: Access violation reading location 0x00000000.
in
STDAPI DllGetActivationFactory(_In_ HSTRING activatibleClassId, _Deref_out_ IActivationFactory** ppFactory)
{
return Platform::Details::GetActivationFactory(Microsoft::WRL::Details::ModuleBase::module_, activatibleClassId, ppFactory);
}
function in dllexports.cpp which is part of VS. The module_ becomes NULL.
Does anyone have an idea if there are any known bugs with respect to the windows runtime not being initialized/used properly if there is no explicit instantiation of a ref class in a file?
EDIT 1:
Here's the link to the full source code:
What's happening here is that you're mixing modes a bit. Since you've compiled your C++ code with the /ZW (C++/CX) flag, you've told the compiler to enable WinRT extensions and produce a WinRT DLL. In practice, though, none of your code is actually using the C++/CX extensions to implement classes. You're using WRL and standard C++. The compiler looks at the DLL, finds no CX-style WinRT classes, and doesn't set up the module accordingly.
This is basically an untested and unsupported case, since you've chosen to say that you want to expose ref classes by picking a UWP component library project type, but then didn't actually provide any ref classes. It happens that under the hood, C++/CX effectively uses WRL, so you can nudge it along and initialize the state to work correctly, but you're kinda hacking the implementation details of the system.
There are two options I would recommend; either works: just make the project a non-CX Win32 DLL and init the module as described above. Or, better yet, flip over to C++/WinRT, which will give you better support for the WinRT types than C++/CX and allow you to more easily mix in classic COM types in your implementation. You can get started by just turning off the /ZW flag in the compiler switches, then start updating the code accordingly.
Ben
You might have the wrong .winmd file for your component. WinRT components made in C++ produce two outputs, a DLL and a .winmd. Both must match. It's possible you have them from different builds.
Another possible reason is an error in the manifest. The manifest of the app must include all referenced runtime components.
BTW, for native DLLs written in classic C++ and exposing a C API, deployment is simpler: you include the DLL in the package and it just works, with [DllImport] if you're consuming it from C#.
Update: You can replace that ref class with the following code, works on my PC.
struct ModuleStaticInitialize
{
    ModuleStaticInitialize()
    {
        // Force creation of the WRL in-proc module that the generated DllGetActivationFactory relies on.
        Microsoft::WRL::Module<Microsoft::WRL::InProc>::GetModule();
    }
};
static ModuleStaticInitialize s_moduleInit;
Probably a bug in Microsoft's runtime somewhere.

Return argument in function parameter of C++

I am trying to understand a typedef, but I am stuck with this code:
typedef _Return_type_success_(return == DUPL_RETURN_SUCCESS) enum
{
    DUPL_RETURN_SUCCESS = 0,
    DUPL_RETURN_ERROR_EXPECTED = 1,
    DUPL_RETURN_ERROR_UNEXPECTED = 2
} DUPL_RETURN;
My question is:
1) What is the return argument doing in the parameter of a function?
2) We usually define a typedef like this:
typedef int foo;
I am unable to understand the above format.
I am a newbie to serious C++ programming. Thank you for your time.
EDIT: I am trying to build a streaming app, so I need to retrieve frames as fast as possible. I came across a few articles that recommended the DXGI approach, which is a fast solution. I am trying to understand how to use the Windows Desktop Duplication API. I found this code on the official MSDN website: here
_Return_type_success_ is not part of the official C++ standard; it is part of the Microsoft source code annotation language (a proprietary addition to the C++ syntax):
Using SAL Annotations to Reduce C/C++ Code Defects
SAL is the Microsoft source code annotation language. By using source code annotations, you can make the intent behind your code explicit. These annotations also enable automated static analysis tools to analyze your code more accurately, with significantly fewer false positives and false negatives.
And _Return_type_success_ itself is described in Success/Failure Annotations
_Return_type_success_(expr): May be applied to a typedef. Indicates that all functions that return that type and do not explicitly have _Success_ are annotated as if they had _Success_(expr). _Return_type_success_ cannot be used on a function or a function pointer typedef.
_Return_type_success_ is most certainly just a macro defined as #define _Return_type_success_(arg) so that it is completely removed during compilation. This is at least the case for the sal.h (no_sal2.h) header used in some Azure code.
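If you just want that typedef to compile on a toolchain that has no sal.h, a minimal sketch is to stub the annotation out before the declaration (roughly what the no-op SAL headers do); this guard is an assumption about your build setup, not part of the MSDN sample:

/* Make the SAL annotation a no-op when sal.h is unavailable. */
#ifndef _Return_type_success_
#define _Return_type_success_(expr)
#endif

typedef _Return_type_success_(return == DUPL_RETURN_SUCCESS) enum
{
    DUPL_RETURN_SUCCESS = 0,
    DUPL_RETURN_ERROR_EXPECTED = 1,
    DUPL_RETURN_ERROR_UNEXPECTED = 2
} DUPL_RETURN;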

Is the python C API entirely compatible with C++?

As I understand the relationship between C and C++, the latter is essentially an extension of the former and retains a certain degree of backwards compatibility. Is it safe to assume that the Python C API can be called from C++ code?
More to the point, I notice that the official python documentation bundles C and C++ extensions together on the same page. Nowhere am I able to find a C++ API. This leads me to believe that the same API is safe to use in both languages.
Can someone confirm or deny this?
EDIT:
I think I made my question much more complicated than it needs to be. The question is this: what must I do in order to write a Python module in C++? Do I just follow the same directions as listed here, substituting C++ code for C? Is there a separate API?
I can confirm that the same Python C API is safe to be used in both languages, C and C++.
However, it is difficult to provide a more detailed answer unless you ask a more specific question. There are numerous caveats and issues you should be aware of. For example, your Python extension types are defined as C structs, not as C++ classes, so don't expect their constructors and destructors to be implicitly defined and called.
For example, taking the sample code from Defining New Types in the Python manual, it can be written in a C++ way and you can even blend in C++ types:
// noddy.cpp
#include <Python.h>
#include <memory>

namespace {

struct noddy_NoddyObject
{
    PyObject_HEAD
    // Type-specific fields go here.
    std::shared_ptr<int> value; // WARNING
};

PyObject* Noddy_new(PyTypeObject *type, PyObject *args, PyObject *kwds)
{
    try {
        noddy_NoddyObject *self = (noddy_NoddyObject *)type->tp_alloc(type, 0);
        if (self) {
            self->value = std::make_shared<int>(7);
            // or more complex operations that may throw,
            // or extract complex initialisation as a Noddy_init function
            return (PyObject *)self;
        }
    }
    catch (...) {
        // do something, log, etc.
    }
    return 0;
}

PyTypeObject noddy_NoddyType =
{
    PyObject_HEAD_INIT(NULL)
    // ...
};

} // unnamed namespace
But, neither constructor nor destructor of the std::shared_ptr will be called.
So, remember to define a dealloc function for your noddy_NoddyType where you reset the value with nullptr. Why even bother with having value defined as a shared_ptr, you may ask. It is useful if you use your Python extension from C++, with exceptions: to avoid type conversions and casts, to have more seamless integration inside the definitions of your implementation, to make exception-based error handling easier, etc.
And in spite of the fact that your objects of the noddy_NoddyType are managed by machinery implemented in pure C, thanks to the dealloc function the value will be released according to the well-known RAII rules.
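A minimal sketch of that dealloc slot (the name Noddy_dealloc is just illustrative):

static void Noddy_dealloc(PyObject *obj)
{
    noddy_NoddyObject *self = reinterpret_cast<noddy_NoddyObject *>(obj);
    self->value.reset();            // explicitly release the C++-managed resource
    Py_TYPE(obj)->tp_free(obj);     // then free the Python object itself
}

// ... and point noddy_NoddyType's tp_dealloc at Noddy_dealloc when filling in the type.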
Here you can find an interesting example of nearly seamless integration of the Python C API with the C++ language: How To catch Python stdout in c++ code
The Python C API can be called from within C++ code.
Python C++ extensions are written using the same C API as C extensions use, or using some 3rd party API, such as boost::python.
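For the "how do I write a Python module in C++" part of the question, here is a minimal hedged sketch (the module name example is made up); it uses exactly the same C API, and PyMODINIT_FUNC already carries extern "C" when compiled as C++:

// example_module.cpp -- build as a shared library and link against libpython.
#include <Python.h>

static PyMethodDef example_methods[] = {
    {nullptr, nullptr, 0, nullptr}          // sentinel; add methods above it
};

static PyModuleDef example_module = {
    PyModuleDef_HEAD_INIT,
    "example",                              // module name as seen from Python
    "Example module implemented in C++",    // docstring
    -1,                                     // no per-module state
    example_methods,
    nullptr, nullptr, nullptr, nullptr
};

PyMODINIT_FUNC PyInit_example(void)         // extern "C" linkage comes from PyMODINIT_FUNC
{
    return PyModule_Create(&example_module);
}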