I am writing a function that first creates a new jbyteArray which, if everything else succeeds, it will return. If anything in the function fails, it will return NULL instead.
However, if an error occurs somewhere in the function after I have successfully called NewByteArray(), do I need to explicitly dispose of the jbyteArray before returning, or can I simply leave it to the garbage collector?
In a rough code sketch:
jbyteArray makeAndFill( JNIEnv *env )
{
    jbyteArray ba = NULL ;

    ba = (*env)->NewByteArray( env, 1000 ) ;
    if( ba == NULL )
        return NULL ;

    /* so far so good */

    if( ! fillme( ba ) )   /* fillme() returns FALSE if it had a problem */
    {
        /* Whoops, a problem ...
         *
         * DO I need to free the jbyteArray explicitly before
         * returning NULL ?
         */
        return NULL ;
    }

    /* everything was fine */
    return ba ;
}
You can assume that fillme() does all the required Get and Release code and simply returns FALSE if for some reason it had a problem. If fillme() cannot do its job properly, we simply want the function to return NULL.
My understanding is that if I do not return the jbyteArray to Java "proper" from JNI, it will simply be garbage collected. Is that correct?
If this is in a function that's called via a Java native method, then the runtime will take care of the jbyteArray local reference for you. From the documentation: "Local references are valid for the duration of a native method call, and are automatically freed after the native method returns." Once the local reference is freed and there aren't any other references to it (returning the object to Java creates a new reference), the object becomes eligible for garbage collection.
The exception would be if you had a lot of references like this. There's a limit to how many local references a VM can handle, but if you stay below that, you're good.
If this is in a function in a non-Java process where a VM has been created with JNI_CreateJavaVM(), you have to explicitly delete every reference that's created outside a Java native method call. Inside a native method call the VM creates a space for references for you that it destroys when the native method returns. When you get yourself an env pointer via JNI_CreateJavaVM() or AttachCurrentThread() or GetEnv(), the VM will not manage the references created for that env for you.
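For that second case, here is a minimal sketch of the kind of bookkeeping I mean, using PushLocalFrame/PopLocalFrame (written against the C++ JNI interface; the function name and frame capacity are just illustrative):

#include <jni.h>

void workOutsideNativeMethod(JNIEnv *env)
{
    // Reserve room for a handful of local references; everything created
    // inside the frame is released by PopLocalFrame().
    if (env->PushLocalFrame(16) != 0)
        return; // out of memory, an exception is pending

    jbyteArray ba = env->NewByteArray(1000);
    // ... use ba ...

    // Passing NULL means no reference survives the frame.
    env->PopLocalFrame(NULL);
}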
One more edit: it seems DetachCurrentThread() has been seen to free local references. I'm pretty sure, however, that I've read about problems where it did not. Maybe it's implementation-dependent, maybe the reported problems were due to a bug. The documentation doesn't say for sure, so I'd prefer not to rely on it.
I am using Boost 1.55 (see the io_service documentation). I need to call the destructor on my io_service to reset it after power is cycled on my serial device so I can get new data. The problem is that when the destructor is called twice (while re-trying the connection), I get a segmentation fault.
In header file
boost::asio::io_service io_service_port_1;
In function that closes connection
io_service_port_1.stop();
io_service_port_1.reset();
io_service_port_1.~io_service(); // how to check for NULL?
// do I need to re-construct it?
The following does not work:
if (io_service_port_1)
if (io_service_port_1 == NULL)
Thank you.
If you need manual control over when the object is created and destroyed, you should be wrapping it in a std::unique_ptr object.
std::unique_ptr<boost::asio::io_service> service_ptr =
    std::make_unique<boost::asio::io_service>();

/* Do stuff until the connection needs to be reset */
service_ptr->stop();

// I don't know your specific use case, but the call to io_service's
// member function reset() is probably unnecessary.
// service_ptr->reset();

service_ptr.reset(); // "reset" here is a member function of unique_ptr; it deletes the object.

/* For your later check */
if (service_ptr)  // true if the pointer holds a valid object
if (!service_ptr) // true if no object is being pointed to
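And to answer the "do I need to re-construct it?" part: yes, after the unique_ptr reset you simply build a fresh io_service in the same pointer. A minimal sketch (the helper name reset_connection is just illustrative):

#include <memory>
#include <boost/asio.hpp>

std::unique_ptr<boost::asio::io_service> service_ptr =
    std::make_unique<boost::asio::io_service>();

void reset_connection() // hypothetical helper called when the device power-cycles
{
    service_ptr->stop();  // stop any outstanding work
    service_ptr.reset();  // destroy the old io_service
    service_ptr = std::make_unique<boost::asio::io_service>(); // fresh instance for the next attempt
}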
Generally speaking, you should never directly call ~object_name();. Ever. Ever. Ever. There are several reasons why:
As a normal part of stack unwinding, the destructor gets called anyway when the object goes out of scope.
delete-ing a pointer will call it.
"Smart pointers" (like std::unique_ptr and std::shared_ptr) will call it when they self-destruct.
Directly calling ~object_name(); should only ever be done in rare cases, usually involving allocators, and even then, there are usually cleaner solutions.
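For completeness, the rare legitimate case looks roughly like this: an explicit destructor call paired with placement new into raw storage (Widget is just a placeholder type for illustration):

#include <new>

struct Widget { /* ... */ };

void placement_example()
{
    alignas(Widget) unsigned char buffer[sizeof(Widget)];
    Widget* w = new (buffer) Widget(); // construct in pre-allocated storage
    // ... use *w ...
    w->~Widget(); // explicit destructor call is needed here, because nothing else will ever run it
}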
I have a code structure where I read Oracle rows from the database and assign them to a common model that represents their data (called commonmodel::Model). I'm using VS2012 on Windows 7.
My issue is with the piece of code below, where I execute a statement like SELECT ...
I'm running a test and the tables are empty, so no data is returned from the SELECT, which means the code inside while (resultSet->next()) is never executed.
My program compiles, but it crashes at runtime when returning the data (return retData). I have no idea what's causing this behaviour and I would like help solving it.
BTW: I chose to create std::unique_ptrs for the Oracle pointers so that I can leave it to the compiler to free them when they are no longer needed. That way I don't need to delete them at the end of the operation.
std::vector<std::unique_ptr<commonmodel::Model>> OracleDatabase::ExecuteStmtReturningData(std::string sql, int& totalRecords, commonmodel::Model &modelTemplate)
{
    std::unique_ptr<oracle::occi::Statement> stmt(connection->createStatement());
    stmt->setAutoCommit(TRUE);
    std::unique_ptr<oracle::occi::ResultSet> res(stmt->executeQuery(sql));
    std::vector<std::unique_ptr<commonmodel::Model>> ret = getModelsFromResultSet(res, modelTemplate);
    return ret;
}
std::vector<std::unique_ptr<commonmodel::Model>> OracleDatabase::getModelsFromResultSet(std::unique_ptr<oracle::occi::ResultSet>& resultSet, commonmodel::Model &modelTemplate)
{
    std::vector<std::unique_ptr<commonmodel::Model>> retData;
    std::vector<oracle::occi::MetaData> resultMeta = resultSet->getColumnListMetaData();
    while (resultSet->next())
    {
        std::unique_ptr<commonmodel::Model> model = modelTemplate.clone();
        for (unsigned int i = 1; i <= resultMeta.size(); i++) // ResultSet columns start at one, not zero
        {
            std::string label = resultMeta.at(i).getString(oracle::occi::MetaData::ATTR_NAME);
            setPropertyFromResultSet(resultSet, label, i, *model);
        }
        retData.push_back(std::move(model)); // unique_ptr can only be moved, not copied
    }
    return retData; // <<<==== CRASH ON RETURN....
}
You can't use a 'stock' unique_ptr to deal with OCCI objects and pointers. OCCI doesn't want you to delete those pointers (which is what unique_ptr is going to do); instead, it wants you to free them using OCCI-provided mechanisms.
In particular, to free a Statement object you should use Connection::terminateStatement(). The same goes for ResultSet* (which is released via its owning Statement) and any other OCCI pointer, for that matter.
Now, you can supply a custom deleter to the unique_ptr, but the catch is that the deleter needs a pointer to the already-existing 'parent' object - and it is hard to manage the lifetimes of independent pointers in such a way.
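If you do go the custom-deleter route anyway, it looks roughly like this for a Statement (a sketch only, assuming the Connection outlives every Statement created from it, which is exactly the lifetime coupling I'm warning about):

#include <functional>
#include <memory>
#include <string>
#include <occi.h>

typedef std::unique_ptr<oracle::occi::Statement,
                        std::function<void(oracle::occi::Statement*)>> StmtPtr;

StmtPtr makeStatement(oracle::occi::Connection* connection, const std::string& sql)
{
    return StmtPtr(connection->createStatement(sql),
                   [connection](oracle::occi::Statement* s) {
                       if (s)
                           connection->terminateStatement(s); // free via OCCI, not delete
                   });
}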
On a side note, I strongly advise against using OCCI. It is a very badly designed and poorly documented library. OCI is a much better choice.
I have searched all over and know that we should DeleteLocalRef a local reference if it's created in JNI code.
But should I also delete it if the object is created with new and returned by Java code? For example:
// in Java code
public SomeObject funcInJavaCode() {
    return new SomeObject();
}

// in JNI code
void funcInJNI( JNIEnv *env, ... ) {
    jobject obj = env->CallObjectMethod(...);
    ...
    // do I have to delete the obj here???
    env->DeleteLocalRef(obj);
}
Thanks.
No. Local references are garbage collected when the native function returns to Java (when Java calls native code) or when the calling thread is detached from the JVM (when native code calls Java). You need an explicit DeleteLocalRef only when you have a long-lived native function (e.g., a main loop) or create a large number of transient objects in a loop.
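For the loop case, a minimal sketch (the helper name processAll and its arguments are hypothetical):

#include <jni.h>

static void processAll(JNIEnv* env, jobjectArray array)
{
    jsize n = env->GetArrayLength(array);
    for (jsize i = 0; i < n; ++i) {
        jobject item = env->GetObjectArrayElement(array, i);
        // ... do something with item ...
        env->DeleteLocalRef(item); // free the slot before the next iteration
    }
}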
You definitely can NOT delete the local ref to an object you are about to return, since that call frees the very reference you are returning. For example
jbitmap = invokeObjectJavaMethod("MFImageToNative", "([B)Landroid/graphics/Bitmap;", byte_array);
env->DeleteLocalRef(jbitmap);
return jbitmap;
will crash. I believe it is the consumer of the method's responsibility to deal with freeing up the reference. If some kind soul could provide clarification on how to do this, I would be most grateful.
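What does seem to work is simply returning the local reference untouched and letting the VM drop it when the native method returns, something along these lines (the names here are placeholders):

#include <jni.h>

static jobject callAndReturn(JNIEnv* env, jobject receiver, jmethodID mid)
{
    jobject result = env->CallObjectMethod(receiver, mid);
    // Do NOT DeleteLocalRef(result) here; the caller still needs the object,
    // and the VM releases the local reference once the native method returns.
    return result;
}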
In the Lua 5.2 manual we can find the following text:
When a C function is created, it is possible to associate some values
with it, thus creating a C closure (see lua_pushcclosure); these
values are called upvalues and are accessible to the function whenever
it is called.
Whenever a C function is called, its upvalues are located at specific
pseudo-indices. These pseudo-indices are produced by the macro
lua_upvalueindex. The first value associated with a function is at
position lua_upvalueindex(1), and so on. Any access to
lua_upvalueindex(n), where n is greater than the number of upvalues of
the current function (but not greater than 256), produces an
acceptable (but invalid) index.
So, I created a callback function associated with an object pointer created with "new".
lua_pushstring(L, "myCallbackFunc");
Foo* ideleg = new Foo();
lua_pushlightuserdata(L, (Foo*)ideleg);
lua_pushcclosure(L, LuaCall<Tr,C,Args...>::LuaCallback, 1);
lua_settable(L, -3);
And all the mechanics work very nicely... but now it's time to clean up, and I'm unable to get the pointer back so I can delete it when I unregister the callback function or when I quit my program.
I do find the table entry using the following snippet:
lua_pushnil(L);
while (lua_next(L, -2) != 0)
{
    if (lua_isstring(L, -2) && strcmp(lua_tostring(L, -2), "myCallbackFunc") == 0)
    {
        // get the pointer back and delete it!
    }
    lua_pop(L, 1); // pop the value, keep the key for the next lua_next()
}
(Can I do it using lua_getfield(L, -1, "myCallbackFunc"); ?)
But I'm unable to get the upvalue associated with the C closure outside of the LuaCall<Tr,C,Args...>::LuaCallback() function (indeed, inside that closure function I can simply use lua_upvalueindex(1), but here I'm outside of it...).
Is there a way to get this value so I can delete the pointer when I no longer need it?
Edit: I did find the lua_getupvalue function, which should do what I need, but at the moment I don't know how to get the C closure's stack index, so it still doesn't work yet.
Yes, you can do this in Lua. But I strongly advise against it.
If you have access to that callback, if it's on the Lua stack, then it is entirely possible that some bit of Lua code has access to it as well, which means that even though you think it will be destroyed, it actually won't be until Lua finishes with it. Destroying the upvalue puts your closure into a non-functioning state, yet because Lua may still have access to the closure, it is possible for it to be called after you destroyed its upvalue.
Badness ensues.
It would be better to put the pointer into a full userdata and attach a metatable to it with a __gc metamethod to do cleanup. That way, you can be certain that it will be cleaned up when it truly is no longer in use.
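A rough sketch of that approach (the metatable name "FooMeta" and the helper names are assumptions; my_callback stands in for your LuaCall<Tr,C,Args...>::LuaCallback):

#include <lua.hpp>

struct Foo { /* ... */ };

static int my_callback(lua_State* L)
{
    // The upvalue is now the full userdata instead of a light userdata.
    Foo* self = *static_cast<Foo**>(lua_touserdata(L, lua_upvalueindex(1)));
    (void)self; // ... real callback work goes here ...
    return 0;
}

static int foo_gc(lua_State* L)
{
    Foo** p = static_cast<Foo**>(luaL_checkudata(L, 1, "FooMeta"));
    delete *p; // Lua calls this only when the userdata is truly unreachable
    *p = NULL;
    return 0;
}

static void registerCallback(lua_State* L) // assumes the target table is on top of the stack
{
    lua_pushstring(L, "myCallbackFunc");

    // Full userdata holding the Foo*, with a __gc metamethod for cleanup.
    Foo** ud = static_cast<Foo**>(lua_newuserdata(L, sizeof(Foo*)));
    *ud = new Foo();
    if (luaL_newmetatable(L, "FooMeta")) {
        lua_pushcfunction(L, foo_gc);
        lua_setfield(L, -2, "__gc");
    }
    lua_setmetatable(L, -2);

    // The userdata becomes upvalue 1 of the closure, as in your code.
    lua_pushcclosure(L, my_callback, 1);
    lua_settable(L, -3);
}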
However, if you insist on doing it your way, you can always use lua_getupvalue.
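That route looks roughly like this (assuming the table holding "myCallbackFunc" is on top of the stack and the upvalue is still the light userdata from your original code):

lua_getfield(L, -1, "myCallbackFunc");  // push the C closure
if (lua_getupvalue(L, -1, 1) != NULL) { // push upvalue #1 of the closure
    Foo* ideleg = static_cast<Foo*>(lua_touserdata(L, -1));
    delete ideleg;
    lua_pop(L, 1);                      // pop the upvalue
}
lua_pop(L, 1);                          // pop the closure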
I just started working with the Android NDK but I keep getting SIGSEGV when I have this call in my C code:
jobjectArray someStringArray;
someStringArray = (*env)->NewObjectArray(env, 10,
(*env)->FindClass(env,"java/lang/String"),(*env)->NewStringUTF(env, ""));
Based on all the examples I can find, the above code is correct, but I keep getting SIGSEGV; everything is OK if the NewObjectArray line is commented out. Any idea what could cause such a problem?
That looks right, so I'm guessing you've done something else wrong. I assume you're running with CheckJNI on? You might want to break that up into multiple lines: do the FindClass and check the return value, do the NewStringUTF and check the return value, and then call NewObjectArray.
BTW, you might want to pass NULL as the final argument; this pattern of using the empty string as the default value for each element of the array is commonly used (I think it was copied and pasted from some Sun documentation and has spread from there), but it's rarely useful and slightly wasteful. (And it doesn't match the behavior of "new String[10]" in Java.)
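Something like this, a sketch using the C++ JNI interface (the question uses the C interface, but the calls are the same):

#include <jni.h>

static jobjectArray makeStringArray(JNIEnv* env)
{
    jclass stringClass = env->FindClass("java/lang/String");
    if (stringClass == NULL)
        return NULL; // a ClassNotFoundException is pending

    // NULL as the initial element matches `new String[10]` in Java and
    // skips the extra empty-string allocation.
    return env->NewObjectArray(10, stringClass, NULL);
}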
I guess one of the possible causes is that in a long-running JNI method, the VM aborts when it runs out of per-method-invocation local reference slots (normally 512 slots on Android).
Since FindClass() and NewStringUTF() allocate local references, if you stay in a JNI method for a long time the VM does not know whether a specific local reference can be recycled or not. So you should explicitly call DeleteLocalRef() to release acquired local references once they are no longer required. If you don't, the "zombie" local references will occupy slots in the VM, and the VM aborts when all the local reference slots are exhausted.
In a short-running JNI method this is usually not a problem, because all the local references are recycled when the JNI method exits.