I'm a JNI novice, and an issue with releasing memory has me blocked. The scenario is as follows:
Suppose you convert a Java ArrayList to C and copy the ArrayList's elements into a String array, like this:
jclass arrlist_cls = (*env)->FindClass(env,"java/util/ArrayList");
if(NULL==arrlist_cls)return;
jmethodID m_get = (*env)->GetMethodID(env,arrlist_cls,"get","(I)Ljava/lang/Object;");
jmethodID m_size = (*env)->GetMethodID(env,arrlist_cls,"size","()I");
//get the ArrayList field from object
jfieldID fidArrStr = (*env)->GetFieldID(env,helloObj,"arrStr",
"Ljava/util/ArrayList;");
jobject ArrObj = (*env)->GetObjectField(env,paramsObj,fidArrStr);
if(NULL==ArrObj)return;
int len = (*env)->CallIntMethod(env,ArrObj,m_size);
int i=0; const char **ArrStr;
for(i=0;i<len;i++){
jstring jstr = (jstring)(*env)->CallObjectMethod(env,ArrObj,m_get,i);
ArrStr[i]=jstr;
}
How can I release it the way I would a single String? A single String can be released with ReleaseStringUTFChars, but how do I release the memory for an array of them? Since String is not a primitive type, there is no Release<Type>ArrayElements method I can use.
I mean: how do I release ArrStr?
This code won't even work: it will get a SIGSEGV when you execute ArrStr[i] = ..., so I don't know why you're worrying about releasing anything yet.
You need to allocate ArrStr before you can use it. How you allocate it determines how you release it. If you allocate it with new you must release it with delete. If you allocate it as an array, i.e. char *ArrStr[len], it will disappear when the method exits: no release required.
If you're asking about how to deallocate the jstrings returned by CallObjectMethod(), again they are released automatically by JNI when you exit this JNI method. You can release them explicitly prior to that, if necessary, with DeleteLocalRef().
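Putting that together, here is a minimal sketch of the loop with the array allocated up front (keeping the question's C-style JNI and assuming <stdlib.h> is included; GetStringUTFChars/ReleaseStringUTFChars are added because ArrStr holds const char *, not jstring):
int len = (*env)->CallIntMethod(env, ArrObj, m_size);
const char **ArrStr = malloc(len * sizeof(const char *)); /* allocated, so ArrStr[i] is valid */
jstring *jstrs = malloc(len * sizeof(jstring));           /* keep the jstrings for the release calls */
int i;
for (i = 0; i < len; i++) {
    jstrs[i] = (jstring)(*env)->CallObjectMethod(env, ArrObj, m_get, i);
    ArrStr[i] = (*env)->GetStringUTFChars(env, jstrs[i], NULL);
}
/* ... use ArrStr ... */
for (i = 0; i < len; i++) {
    (*env)->ReleaseStringUTFChars(env, jstrs[i], ArrStr[i]); /* release each UTF-8 copy */
    (*env)->DeleteLocalRef(env, jstrs[i]); /* optional here; local refs die when the native method returns */
}
free(jstrs);
free(ArrStr);
For very long lists, convert and release one element per iteration instead, to keep the local-reference table small.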
I created a MySQL aggregate UDF in C++.
I am returning char* from the final function of MySQL aggregate UDF.
In the xxx_deinit(UDF_INIT *initid) function I am releasing any memory used by my function.
I am releasing the memory by deleting initid->ptr.
My deinit function:
extern "C" void xxx_deinit(UDF_INIT * initid)
{
delete initid->ptr;
}
In the init function I am initializing the ptr like this:
extern "C" bool xxx_init(UDF_INIT * initid, UDF_ARGS * args, char* message)
{
const char* demo = "demo";
initid->ptr = (char*) demo;
return 0;
}
I am able to create the UDF and install it in the MySQL server.
After installing when I try to call the function, an error message pops up like this:
Error Code: 2013. Lost connection to MySQL server during query
But when I delete the line: delete initid->ptr; from the xxx_deinit(UDF_INIT * initid) function, I get the desired output.
But I am guessing this is the wrong approach because it will lead to a memory leak.
Also, the same statement: delete initid->ptr; doesn't generate error in case of simple UDF of return type char*.
Can anyone tell me what I am doing wrong here?
Any help or suggestions would be greatly appreciated. Thanks in advance.
There is no memory leak, as "demo" is a pointer to static memory.
You are trying to delete a pointer to memory that was not allocated with new. Your runtime has every right to blow up.
The simplest solution is to simply remove the delete in your deinit function. If you will always put a static string there, that is sufficient.
Alternatively, you can switch to dynamic allocation for the memory of the ptr member. In the init function:
initid->ptr = strdup("demo");
In the deinit function:
free(initid->ptr);
Note that we use free instead of delete, as strdup allocates memory using malloc. Never cross new/delete and malloc/free!
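Putting it together, the corrected pair might look like this (a sketch; UDF_INIT/UDF_ARGS come from the MySQL UDF headers, as in the original):
#include <cstdlib> // free (and the malloc used inside strdup)
#include <cstring> // strdup

extern "C" bool xxx_init(UDF_INIT *initid, UDF_ARGS *args, char *message)
{
    initid->ptr = strdup("demo"); // heap copy allocated with malloc
    return 0;                     // 0 = success
}

extern "C" void xxx_deinit(UDF_INIT *initid)
{
    free(initid->ptr);            // matches the malloc inside strdup
}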
Being new to C++, I am still struggling with pointers-to-pointers and I am not sure if my method below is returning decoded image bytes properly.
This method gets a base64-encoded image string from an API. The method has to follow this signature, as it is part of legacy code that is not allowed to deviate from the way it was originally written, so the signature has to stay the same. Also, I have omitted async calls, continuations, exceptions, etc. for simplicity.
int __declspec(dllexport) GetInfoAndPicture(CString uid, char **image, long *imageSize)
{
CString request = "";
request.Format(url);
http_client httpClient(url);
http_request msg(methods::POST);
...
http_response httpResponse;
httpResponse = httpClient.request(msg).get(); //blocking
web::json::value jsonValue = httpResponse.extract_json().get();
if (jsonValue.has_string_field(L"img"))
{
web::json::value base64EncodedImageValue = jsonValue.at(L"img");
utility::string_t imageString = base64EncodedImageValue.as_string();
std::vector<unsigned char> imageBytes = utility::conversions::from_base64(imageString);
image = (char**)&imageBytes; //Is this the way to pass image bytes back?
*imageSize = imageBytes.size();
}
...
}
The caller calls this method like so:
char mUid[64];
char* mImage;
long mImageSize;
...
resultCode = GetInfoAndPicture(mUid, &mImage, &mImageSize);
//process image given its data and its size
I know what pointer to pointer is, my question is specific to this line
image = (char**)&imageBytes;
Is this the correct way to return the image decoded from base64 into the calling code via the char** image formal parameter given the above method signature and method call?
I do get the error "Program .... File: minkernel\crts\ucrt\src\appcrt\convert\isctype.cpp ... "Expression c >= -1 && c <= 255"", which I believe is related to this line not correctly passing the data back.
Given the requirements, there isn't any way to avoid allocating more memory and copying the bytes. You cannot use the vector directly because it is local to the GetInfoAndPicture function and will be destroyed when that function exits.
If I understand the API correctly then this is what you need to do
//*image = new char[imageBytes.size()]; //use this if caller calls delete[] to deallocate memory
*image = (char*)malloc(imageBytes.size()); //use this if caller calls free(image) to deallocate memory
std::copy(imageBytes.begin(), imageBytes.end(), *image);
*imageSize = imageBytes.size();
Maybe there is some way in your utility::conversions functions to decode directly into a character array instead of a vector, but only you would know about that.
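On the caller's side, the buffer then has to be released with the matching deallocator. A sketch, assuming the malloc variant above and that 0 means success (adjust to the real return-code convention):
char *mImage = nullptr;
long mImageSize = 0;
int resultCode = GetInfoAndPicture(mUid, &mImage, &mImageSize);
if (resultCode == 0 && mImage != nullptr) {
    // ... process mImage[0 .. mImageSize) ...
    free(mImage); // matches the malloc inside GetInfoAndPicture
}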
The problem is with allocating (and freeing) memory for that image; who is responsible for that?
You can't (shouldn't) allocate memory in one module and free it in another.
Your two options are:
Allocate a large enough buffer on the caller's side and have the DLL fill it using utility::conversions::from_base64(). The issue here is: what is large enough? Some Win APIs provide an additional method to query the required size; that doesn't fit this scenario, as the DLL would either have to fetch the image a second time or hold it (indefinitely) until you ask for it.
Allocate the required buffer in the DLL and return a pointer to it. You then need to ensure it isn't freed until the caller requests to free it (through a separate API).
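That separate API can be as small as a single export (a sketch; the name FreeImageBuffer is illustrative, not part of the question's code):
extern "C" __declspec(dllexport) void FreeImageBuffer(char *image)
{
    free(image); // freed in the same module that allocated it
}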
I'm currently developing an application using the gSOAP library and have some misunderstanding about proper usage of the library. I have generated a proxy object (-j flag) which wraps my own classes, as you can see below. The application must work 24/7 and connect simultaneously to many cameras (~50), so after every request I need to clear all temporary data. Is it normal usage to call soap_destroy() and soap_end() after every request? It seems like overkill to do it after each request. Perhaps there is another, more proper usage pattern?
void DeviceBindingProxy::destroy()
{
soap_destroy(this->soap);
soap_end(this->soap);
}
class OnvifDeviceService : public Domain::IDeviceService
{
public:
OnvifDeviceService()
: m_deviceProxy(new DeviceBindingProxy)
{
soap_register_plugin(m_deviceProxy->soap, soap_wsse);
}
int getDeviceInformation(const Access::Domain::Endpoint &endpoint, Domain::DeviceInformation *information)
{
_tds__GetDeviceInformation tds__GetDeviceInformation;
_tds__GetDeviceInformationResponse tds__GetDeviceInformationResponse;
setupUserPasswordToProxy(endpoint);
m_deviceProxy->soap_endpoint = endpoint.endpoint().c_str();
int result = m_deviceProxy->GetDeviceInformation(&tds__GetDeviceInformation, tds__GetDeviceInformationResponse);
m_deviceProxy->soap_endpoint = NULL;
if (result != SOAP_OK) {
Common::Infrastructure::printSoapError("Fail to get device information.", m_deviceProxy->soap);
m_deviceProxy->destroy();
return -1;
}
*information = Domain::DeviceInformation(tds__GetDeviceInformationResponse.Manufacturer,
tds__GetDeviceInformationResponse.Model,
tds__GetDeviceInformationResponse.FirmwareVersion);
m_deviceProxy->destroy();
return 0;
}
};
To ensure proper allocation and deallocation of managed data:
soap_destroy(soap);
soap_end(soap);
You want to do this often to avoid memory filling up with old data. These calls remove all deserialized data and the data you allocated with the soap_new_X() and soap_malloc() functions.
All managed allocations are deleted with soap_destroy() followed by soap_end(). After that, you can start allocating again and delete again, etc.
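So yes, calling them after every request is the intended pattern, not overkill. A per-request cycle might look like this (a sketch reusing the proxy and message types from the question):
for (;;) {
    _tds__GetDeviceInformation req;
    _tds__GetDeviceInformationResponse resp;
    if (m_deviceProxy->GetDeviceInformation(&req, resp) == SOAP_OK) {
        // copy what you need out of resp into your own (unmanaged) types here
    }
    soap_destroy(m_deviceProxy->soap); // delete managed C++ objects
    soap_end(m_deviceProxy->soap);     // free all other managed memory
}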
To allocate managed data:
SomeClass *obj = soap_new_SomeClass(soap);
You can use soap_malloc for raw managed allocation, or to allocate an array of pointers, or a C string:
const char *s = (const char*)soap_malloc(soap, 100);
Remember that malloc is not safe in C++. Better is to allocate std::string with:
std::string *s = soap_new_std__string(soap);
Arrays can be allocated with the second parameter, e.g. an array of 10 strings:
std::string *s = soap_new_std__string(soap, 10);
If you want to preserve data that otherwise gets deleted with these calls, use:
soap_unlink(soap, obj);
Now obj can be removed later with delete obj. But be aware that all pointer members in obj that point to managed data have become invalid after soap_destroy() and soap_end(). So you may have to invoke soap_unlink() on these members or risk dangling pointers.
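For instance (a sketch):
std::string *s = soap_new_std__string(soap);
soap_unlink(soap, s); // detach s from the managed heap
soap_destroy(soap);
soap_end(soap);       // s survives these calls
delete s;             // deleting s is now your responsibility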
A cool new feature of gSOAP is that it can generate deep copy and delete functions for any data structure automatically, which saves a HUGE amount of coding time:
SomeClass *otherobj = soap_dup_SomeClass(NULL, obj);
This duplicates obj to unmanaged heap space. This is a deep copy that checks for cycles in the object graph and removes such cycles to avoid deletion issues. You can also duplicate the whole (cyclic) managed object to another context by using soap instead of NULL for the first argument of soap_dup_SomeClass.
To deep delete:
soap_del_SomeClass(obj);
This deletes obj but also the data pointed to by its members, and so on.
To use the soap_dup_X and soap_del_X functions use soapcpp2 with options -Ec and -Ed, respectively.
In principle, static and stack-allocated data can be serialized just as well. But consider using the managed heap instead.
See https://www.genivia.com/doc/databinding/html/index.html#memory2 for more details and examples.
Hope this helps.
The way memory has to be handled is described in Section 9.3 of the gSOAP documentation.
I'm writing a function that looks like this:
Nan::MaybeLocal<v8::Object> getIconForFile(const char * str) {
NSImage * icon = [[NSWorkspace sharedWorkspace] iconForFile:[NSString stringWithUTF8String:str]];
NSData * tiffData = [icon TIFFRepresentation];
unsigned int length = [tiffData length];
//TODO this is causing `malloc: *** error for object 0x10a202000: pointer being freed was not allocated`
char * iconBuff = (char *)[tiffData bytes];
Nan::MaybeLocal<v8::Object> ret = Nan::NewBuffer(iconBuff, length);
return ret;
}
It works as expected except when it gets run by node.js, it throws malloc: *** error for object 0x10a202000: pointer being freed was not allocated. I've tried different things using malloc, etc but nothing is working. I understand that Nan::NewBuffer is trying to free the buffer data somehow and that's where the problem is coming from. Maybe the iconBuff variable is allocated to the stack and when it goes out of scope and Nan::NewBuffer tries to free it, it's freeing a null pointer? I'm not sure and I'm sort of lost :(
Here's the code that "fixed" it, but @uliwitness pointed out in his answer that it still has memory-management problems:
Nan::MaybeLocal<v8::Object> getIconForFile(const char * str) {
NSImage * icon = [[NSWorkspace sharedWorkspace] iconForFile:[NSString stringWithUTF8String:str]];
NSData * tiffData = [icon TIFFRepresentation];
unsigned int length = [tiffData length];
char * iconBuff = new char[length];
[tiffData getBytes:iconBuff length:length];
Nan::MaybeLocal<v8::Object> ret = Nan::NewBuffer(iconBuff, length);
return ret;
}
Here's the specific code I ended up going with, based on @uliwitness's answer:
Nan::MaybeLocal<v8::Object> getIconForFile(const char * str) {
@autoreleasepool {
NSImage * icon = [[NSWorkspace sharedWorkspace] iconForFile:[NSString stringWithUTF8String:str]];
NSData * tiffData = [icon TIFFRepresentation];
unsigned int length = [tiffData length];
return Nan::CopyBuffer((char *) [tiffData bytes], length);
}
}
This seems to be the most elegant solution and from some quick and dirty testing, seems to result in a smaller resident set size in node.js over many invocations, for whatever that's worth.
I googled for documentation about Nan::NewBuffer, and found this page: https://git.silpion.de/users/baum/repos/knoesel-mqttdocker-environment/browse/devicemock/node_modules/nan/doc/buffers.md
This says:
Note that when creating a Buffer using Nan::NewBuffer() and an
existing char*, it is assumed that the ownership of the pointer is
being transferred to the new Buffer for management. When a
node::Buffer instance is garbage collected and a FreeCallback has not
been specified, data will be disposed of via a call to free(). You
must not free the memory space manually once you have created a Buffer
in this way.
So the code in your own answer is incorrect, because a buffer created using new char[] needs to be disposed of using delete [] (which is different from plain delete, FWIW), yet you're giving it to a function that promises to call free on it. You're just accidentally silencing the error message, not fixing the error.
So at the least, you should use malloc instead of new there.
This page also mentions another function, Nan::CopyBuffer(), which it describes as:
Similar to Nan::NewBuffer() except that an implicit memcpy will occur
within Node. Calls node::Buffer::Copy(). Management of the char* is
left to the user, you should manually free the memory space if
necessary as the new Buffer will have its own copy.
That sounds like a better choice for your original code. You can pass it the NSData's bytes and it will even do the copying for you. You don't have to worry about "manually freeing the memory space", as the memory is owned by the NSData, which will take care of disposing of it when it is released (if you're using ARC, ARC will release the NSData; if you're not using ARC, you should add an @autoreleasepool so it gets released).
PS - At the bottom of that page, it also mentions that you can pass a Nan::FreeCallback to Nan::NewBuffer. If you provide one that calls delete [] on the buffer it is given, the code from your answer would work as well. But really, why write additional code for something Nan::CopyBuffer will apparently already do for you?
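For completeness, that FreeCallback variant might look like this (a sketch against NAN's documented NewBuffer(char *data, uint32_t size, FreeCallback callback, void *hint) overload):
static void DeleteCharArray(char *data, void *hint) {
    (void)hint;    // unused
    delete[] data; // matches the new char[length] in the second snippet above
}
...
Nan::MaybeLocal<v8::Object> ret = Nan::NewBuffer(iconBuff, length, DeleteCharArray, nullptr);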
I have a Win32 console application with Ruby 1.9.3 embedded, and I am having problems with the Ruby GC and objects that wrap C structs holding pointers to big data.
After some testing, Ruby seems to run the GC only once the orphaned objects themselves take up some memory. The problem is that Ruby does not take into account the memory the struct's pointer refers to, so it won't start the GC: it thinks those orphaned objects are too small to matter.
I have made an example app that will crash because it creates lots of objects with big data in their wrapped structs; here is the code:
#include <ruby.h>
typedef unsigned char byte; // "byte" is not a built-in type, so define it
typedef struct TestClassStructS {
byte* bytes;
} TestClassStruct;
static void testClassFree(TestClassStruct *p) {
delete[] p->bytes; // allocated with new[], so delete[] is required
delete p;
}
VALUE testClassNew(VALUE klass) {
TestClassStruct* ptr = new TestClassStruct();
ptr->bytes = new byte[1024 * 1024 * 5]();
VALUE obj = Data_Wrap_Struct(klass, NULL, testClassFree, ptr);
rb_obj_call_init(obj, 0, 0);
return obj;
}
VALUE testClassInitialize(VALUE self) {
return self;
}
typedef VALUE (*rubyfunc)(...);
VALUE require_wrap(VALUE arg)
{
return rb_eval_string("GC.enable; loop do; TestClass.new; end");
}
int main(int argc, char* argv[])
{
RUBY_INIT_STACK;
ruby_init();
//freopen("CON", "w", stdout);
ruby_init_loadpath();
ruby_sysinit(&argc, &argv);
VALUE testClass = rb_define_class("TestClass", rb_cObject);
rb_define_singleton_method(testClass, "new", (rubyfunc)testClassNew, 0);
rb_define_method(testClass, "initialize", (rubyfunc)testClassInitialize, 0);
int error;
VALUE result = rb_protect(require_wrap, 0, &error);
if (error)
{
VALUE lasterr = rb_gv_get("$!");
VALUE message = rb_obj_as_string(lasterr);
printf("%s\n", StringValuePtr(message));
}
return ruby_cleanup(0);
}
This is not a real-world scenario, but it makes me worry that in some cases my app could use too much memory if the GC is not started.
I could fix this problem by making regular calls to GC.start, but that seems like a dirty solution to me.
If there were a way to make Ruby prioritize garbage collection when objects are orphaned, or to tell Ruby the real size the C struct occupies in memory, that would be a nice solution, but I do not know if the Ruby API includes anything like this; I could not find anything.
If you can, use xmalloc (from ruby.h, I think) to allocate the memory. That is, so far, the only way to make sure the allocated memory gets accounted for in the next GC trigger.
There is a new dsize function registered with wrapped C structs, but it seems that it is not (yet?) used in Ruby 1.9.3.
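Applied to the example above, that would look something like this (a sketch; xmalloc/xfree are Ruby's allocator wrappers from ruby.h, and memory allocated through them counts toward the GC's malloc threshold):
static void testClassFree(TestClassStruct *p) {
    xfree(p->bytes); // matches the xmalloc below
    xfree(p);
}

VALUE testClassNew(VALUE klass) {
    TestClassStruct* ptr = (TestClassStruct*)xmalloc(sizeof(TestClassStruct));
    ptr->bytes = (byte*)xmalloc(1024 * 1024 * 5);
    VALUE obj = Data_Wrap_Struct(klass, NULL, testClassFree, ptr);
    rb_obj_call_init(obj, 0, 0);
    return obj;
}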