I am porting one of my iOS apps to Swift 3 / Xcode 8.
I have embedded a C library, which expects a function parameter of type:
char ***
In Swift 2.3 this was translated to:
UnsafeMutablePointer<UnsafeMutablePointer<UnsafeMutablePointer<Int8>>>
So I could declare that pointer in my Swift code like this:
let myPointer = UnsafeMutablePointer<UnsafeMutablePointer<UnsafeMutablePointer<Int8>>>.alloc(1)
This worked well until I updated to Xcode 8 with Swift 3; now I am getting a compiler error:
Cannot convert value of type 'UnsafeMutablePointer<UnsafeMutablePointer<UnsafeMutablePointer<Int8>>>' to expected argument type 'UnsafeMutablePointer<UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>?>!'
Can anybody help me understand the changes in Swift 3? What does this combination of Optional, Optional, and Implicitly Unwrapped Optional mean in this context, and how can I declare a pointer with this type?
Try
let myPointer = UnsafeMutablePointer<UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>?>.allocate(capacity: 1)
Alternatively, you could use the _Nonnull annotation to keep the pointer non-optional. Suppose the C function is void setPtr(char ***). You could make its declaration available to Swift via a bridging header as follows:
void setPtr(char * _Nonnull * _Nonnull * _Nonnull);
Then in your Swift code you can do something like this:
let myPointer = UnsafeMutablePointer<UnsafeMutablePointer<UnsafeMutablePointer<Int8>>>.allocate(capacity: 1)
setPtr(myPointer)
let myInt = myPointer.pointee.pointee.pointee
But what if the C implementation of setPtr(char ***ptr), where _Nonnull is not usable, does something like this?
*ptr = NULL;
Then the Swift code will crash at runtime. However, using optionals you don't need the _Nonnull annotation in the declaration of setPtr(), and your Swift code becomes
let myPointer = UnsafeMutablePointer<UnsafeMutablePointer<UnsafeMutablePointer<Int8>?>?>.allocate(capacity: 1)
setPtr(myPointer)
let myInt = myPointer.pointee?.pointee?.pointee
and it won't crash at runtime.
Thus the approach with optionals, enforced by Swift 3 when you don't use _Nonnull, is safer.
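For concreteness, here is the kind of C implementation (a hypothetical sketch, not from any particular library) that the optional-based signature guards against:

#include <stddef.h>

void setPtr(char ***ptr)
{
    /* Writing NULL through the parameter is legal for a plain char***,
       but it would violate a _Nonnull-annotated declaration; the Swift
       caller then observes nil in pointee. */
    *ptr = NULL;
}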
I have been using clang-tidy for a while and have created some checks of my own. But now I'm stuck on this issue. I have a C-style cast expression from which I want to get a macro name as a string:
#define H_KEY 5
float *a;
a = (float *)H_KEY; // I want to print out "H_KEY" from this expression
So I registered a matcher like this:
void PointersCastCheck::registerMatchers(MatchFinder *Finder) {
Finder->addMatcher(cStyleCastExpr().bind("cStyleCastExpr"), this);
}
I'm able to catch every C-style cast and get its sub-expression like this:
const Expr * subExpr = cStyleCast->getSubExpr();
So clang-tidy now tells me that the sub-expression has type "int", which is correct, but I don't know how to get its name.
What I tried was a dynamic cast to DeclRefExpr, but that doesn't pass. I also tried a dynamic cast to BuiltinType, intending to get at a declaration from there, but with no luck.
So please help. I think this should not be difficult.
Thank you!
In case someone runs into this issue, I resolved it like this:
if (subExpr->getExprLoc().isMacroID()) {
    SourceManager &SM = *Result.SourceManager;
    LangOptions LangOpts = getLangOpts();
    StringRef subExprText = Lexer::getSourceText(
        CharSourceRange::getTokenRange(subExpr->getSourceRange()), SM, LangOpts);
}
Maybe there is a better approach, but this one fits my needs.
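In case it helps someone: clang also provides Lexer::getImmediateMacroName(), which returns the name of the macro a location was expanded from without extracting the whole source range. A sketch of a check() body using it (PointersCastCheck and the "cStyleCastExpr" binding are the ones from above; diag() and getLangOpts() are the usual ClangTidyCheck members):

#include "clang/Lex/Lexer.h"

void PointersCastCheck::check(const MatchFinder::MatchResult &Result) {
    const auto *Cast = Result.Nodes.getNodeAs<CStyleCastExpr>("cStyleCastExpr");
    const Expr *SubExpr = Cast->getSubExpr();
    SourceLocation Loc = SubExpr->getExprLoc();
    if (Loc.isMacroID()) {
        // Name of the macro whose expansion produced Loc, e.g. "H_KEY".
        StringRef MacroName = Lexer::getImmediateMacroName(
            Loc, *Result.SourceManager, getLangOpts());
        diag(Loc, "sub-expression of C-style cast expands from macro '%0'")
            << MacroName;
    }
}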
I was reading the source code of WebKit yesterday when I found some lines of code whose behavior I couldn't understand.
There is a C++ method declared in WebPageProxy.h as this:
RefPtr<API::Navigation> loadRequest(const WebCore::ResourceRequest&, WebCore::ShouldOpenExternalURLsPolicy = WebCore::ShouldOpenExternalURLsPolicy::ShouldAllowExternalSchemes, API::Object* userData = nullptr);
It is called in WKWebView.mm like this:
- (WKNavigation *)loadRequest:(NSURLRequest *)request
{
    auto navigation = _page->loadRequest(request);
    if (!navigation)
        return nil;

    return [wrapper(*navigation.leakRef()) autorelease];
}
The argument request is of type NSURLRequest *, but the type in the declaration of the method is const WebCore::ResourceRequest&. I can't fully understand it.
Also, after setting some breakpoints and experimenting, I found that this constructor of the ResourceRequest class is being called:
ResourceRequest(NSURLRequest *nsRequest)
: ResourceRequestBase()
, m_nsRequest(nsRequest)
{
}
I'm not familiar with C++. Can someone help me understand how this works?
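What makes this compile is a C++ converting constructor: ResourceRequest has a non-explicit single-argument constructor taking NSURLRequest *, so the compiler builds a temporary ResourceRequest at the call site and binds the const reference to it. A minimal standalone sketch of the same mechanism, using hypothetical plain C++ types rather than WebKit's:

#include <iostream>
#include <string>

struct Request {
    std::string url;
    // Non-explicit single-argument constructor: a converting constructor,
    // so the compiler may build a Request from a std::string implicitly.
    Request(const std::string &u) : url(u) {}
};

// Takes a const reference, like loadRequest(const WebCore::ResourceRequest&).
void load(const Request &request) {
    std::cout << "loading " << request.url << "\n";
}

int main() {
    std::string s = "https://example.com";
    // A temporary Request is constructed from s, and the const reference
    // binds to that temporary for the duration of the call.
    load(s);
}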
I have a compound module containing simple modules (R = receiver_1 + receiver_2), and my network contains 2 modules (R + R1), both of the same type (R). I want to access the simple modules of both with C++. I tried to use:
cModule *test = getModuleByPath("Network.R");
cSimpleModule *test1 = test->getSubmodule("receiver_2", 6);
But naturally I got an error telling me "invalid conversion from 'cModule*' to 'cSimpleModule*'" on the second line. So how can I access the cSimpleModule of the cModule? Please help me.
The method getSubmodule() returns a pointer to a cModule object, so you should cast the result to a cSimpleModule pointer using check_and_cast:
cModule *test = getModuleByPath("Network.R");
cSimpleModule *test1 = check_and_cast<cSimpleModule *> (test->getSubmodule("receiver_2"));
Moreover, the second argument of getSubmodule() is used only if a compound module contains a vector of submodules. Based on your description there is no vector, so I suggest omitting this argument.
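Accessing receiver_2 in both module instances could then look like this (a sketch using the names from your question; in OMNeT++ 5 the classes live in the omnetpp namespace):

#include <omnetpp.h>
using namespace omnetpp;

// Inside one of your modules' member functions:
cModule *r  = getModuleByPath("Network.R");
cModule *r1 = getModuleByPath("Network.R1");

// check_and_cast throws a descriptive cRuntimeError if the submodule is
// missing or is not actually a cSimpleModule, unlike a plain C-style cast.
cSimpleModule *recvInR  = check_and_cast<cSimpleModule *>(r->getSubmodule("receiver_2"));
cSimpleModule *recvInR1 = check_and_cast<cSimpleModule *>(r1->getSubmodule("receiver_2"));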
I'm trying to call a method on an object from inside my compiled llvm JIT code.
I've read the answer here ( Can I bind an existing method to a LLVM Function* and use it from JIT-compiled code? ) but my case is slightly different, as my method requires an argument.
If I'm understanding everything correctly, I need to wrap my method with a function, but how can I store the pointer to my instance to use as the first argument when calling?
Here's a short example (omitting some irrelevant parts):
class Foo
{
public:
    Foo();
    float getValue(char * name);
};

float fooWrap(Foo *foo, char * name)
{
    return foo->getValue(name);
}
Foo::Foo()
{
    // snipped llvm init stuff
    std::vector<llvm::Type*> fun_args;
    fun_args.push_back(llvm::Type::getInt8Ty(context)); // Pointer to this instance (pretty sure is wrong)
    fun_args.push_back(llvm::Type::getInt8PtrTy(context)); // char array *
    llvm::FunctionType *FT = llvm::FunctionType::get(llvm::Type::getFloatTy(context), fun_args, false);
    llvm::Function * F = llvm::Function::Create(FT, llvm::Function::ExternalLinkage, "foo", module);
    engine->addGlobalMapping(F, &fooWrap);

    // later
    llvm::Value *instance = llvm::ConstantInt::get(context, llvm::APInt((intptr_t) this)); // won't compile, can't construct APInt from intptr_t
    std::vector<llvm::Value*> args;
    args.push_back(instance);
    args.push_back(builder.CreateGlobalStringPtr("test"));
    builder.CreateCall(F, args);
}
Any help would be greatly appreciated.
The argument type ended up being:
llvm::Type::getIntNTy(context, sizeof(uintptr_t)*8)
And the value was set with:
llvm::Value *instance = llvm::ConstantInt::get(llvm::Type::getIntNTy(context, sizeof(uintptr_t)*8), (uintptr_t) this);
This ensured that the pointer size was always correct for the compiled platform.
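For reference, here is the corrected construction in one place (a sketch; context, module, engine, and builder are the same locals as in the question):

llvm::Type *intPtrTy = llvm::Type::getIntNTy(context, sizeof(uintptr_t) * 8);

std::vector<llvm::Type *> fun_args;
fun_args.push_back(intPtrTy);                           // Foo* as pointer-sized int
fun_args.push_back(llvm::Type::getInt8PtrTy(context));  // char*
llvm::FunctionType *FT = llvm::FunctionType::get(
    llvm::Type::getFloatTy(context), fun_args, false);
llvm::Function *F = llvm::Function::Create(
    FT, llvm::Function::ExternalLinkage, "foo", module);
engine->addGlobalMapping(F, (void *)&fooWrap);

// On common ABIs a pointer-sized integer argument is passed the same way
// as a pointer, which is why fooWrap's Foo* parameter receives this value.
llvm::Value *instance = llvm::ConstantInt::get(intPtrTy, (uintptr_t)this);
std::vector<llvm::Value *> args;
args.push_back(instance);
args.push_back(builder.CreateGlobalStringPtr("test"));
builder.CreateCall(F, args);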
Seems to me like your question can be summed up by:
How to convert some object pointer in my code to an LLVM Value?
And your approach is correct: create a constant int whose value is the pointer cast to an integer. Your error is just an incorrect usage of APInt's constructor, which needs a bit width in addition to the value. In fact you don't need to build the APInt yourself: the ConstantInt::get overload that takes a Type* and a uint64_t, as used in the accepted snippet above, constructs it for you.
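Concretely, either of these forms yields the same pointer-sized constant (same context as in the question):

// Via an explicit APInt, which needs the bit width up front:
llvm::Value *a = llvm::ConstantInt::get(
    context, llvm::APInt(sizeof(uintptr_t) * 8, (uintptr_t)this));

// Via the Type* overload, which builds the APInt internally:
llvm::Value *b = llvm::ConstantInt::get(
    llvm::Type::getIntNTy(context, sizeof(uintptr_t) * 8), (uintptr_t)this);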
I have a bit of pure C++ code which reads from Objective-C data structures with the help of a function pointer to a method in an Objective-C class. I'm treating the Objective-C class instance to read from as an opaque pointer. For example, the C++ method that does the reading has a signature like this:
typedef void(*DataGetterFunc)(void * dataSource, int key, int * outValue);
...
void readData(void * dataSource, DataGetterFunc dataReadingFunc);
When I call the C++ method from Objective-C, I do the following:
MYDataStructure * objectiveCData;
cppObject->readData((__bridge void*)objectiveCData, DataGetterFuncImpl);
Finally, DataGetterFuncImpl dereferences the Objective-C class like so:
void DataGetterFuncImpl(void * dataSource, int key, int * outValue)
{
MYDataStructure * objCData = (__bridge MYDataStructure*)dataSource;
...
}
Originally in DataGetterFuncImpl I was using __bridge_transfer, but then I was getting EXC_BAD_ACCESS the next time ARC called retain on MYDataStructure, so I assumed it was being over-released by the use of __bridge_transfer and changed it to just __bridge.
Are there any memory leaks I should look for by just using __bridge, or do I need to use some combination of __bridge_retained and __bridge_transfer in this case?
When you use __bridge to convert to or from Objective-C, ownership is simply not affected. That means that while you're using the object from C++, you must make sure that there's still a strong reference to it around.
If, on the other hand, you use __bridge_retained to convert to void* and __bridge_transfer to convert back to id (or any other retainable object type), you must make sure that each __bridge_retained is matched by exactly one __bridge_transfer later.
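A minimal sketch of that matched pair (Objective-C++ under ARC, reusing MYDataStructure from the question):

MYDataStructure *obj = [[MYDataStructure alloc] init];

// +1: ownership leaves ARC; the opaque pointer now keeps the object alive.
void *opaque = (__bridge_retained void *)obj;

// Plain __bridge reads don't affect ownership; they are safe for as long
// as 'opaque' still holds its +1.
MYDataStructure *borrowed = (__bridge MYDataStructure *)opaque;

// -1: exactly one matching transfer hands ownership back to ARC.
MYDataStructure *restored = (__bridge_transfer MYDataStructure *)opaque;
// Once 'restored' goes out of scope, ARC releases the object as usual.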