NPAPI: need to RetainObject() a handler twice, otherwise SIGBUS - c++

In my NPAPI plugin, some of the objects have an "onEvent" property that is readable and writeable, and which is called on certain events.
What I have in my JavaScript code looks like this:
myObject.onEvent = function( event ) {
    console.log("Event: " + event );
}
// if I put this next line, the next call to the 'onEvent' handler will SIGBUS
// when there's no RetainObject() in the getter.
console.log("Event handler : " + myObject.onEvent);
And on the C++ side of the plugin, I have this kind of code:
bool MyPluginObject::getOnEvent(NPIdentifier id, NPVariant *result)
{
    if( _onEvent )
    {
        OBJECT_TO_NPVARIANT( _onEvent, *result);
        NPN_RetainObject( _onEvent ); // needed ???? why??
    }
    else
        VOID_TO_NPVARIANT(*result);
    return true;
}
bool MyPluginObject::setOnEvent( NPIdentifier id, const NPVariant *value )
{
    if ( value && NPVARIANT_IS_OBJECT( *value ) )
    {
        if( _onEvent != NULL )
        {
            // release any previous function retained
            NPN_ReleaseObject( _onEvent );
        }
        _onEvent = NPVARIANT_TO_OBJECT( *value );
        NPN_RetainObject( _onEvent ); // normal retain
        return true;
    }
    return false;
}
void MyPluginObject::onEvent(void)
{
    NPVariant event = [...];
    if ( _onEvent != NULL )
    {
        NPVariant retVal;
        bool success = NPN_InvokeDefault( _Npp, _onEvent, &event, 1, &retVal );
        if( success )
        {
            NPN_ReleaseVariantValue(&retVal);
        }
    }
}
What's strange is that I've been struggling with a SIGBUS problem for a while, and once I added the NPN_RetainObject() in the getter, as you can see above, everything went fine.
I couldn't find any statement that this is needed in the Mozilla docs, nor in Taxilian's awesome doc about NPAPI.
I don't get it: when the browser requests a property that I've retained, why do I have to retain it a second time?
Should I maybe retain the function when calling InvokeDefault() on it instead? But then, why?? I already stated that I wanted to retain it.
Does getProperty() or InvokeDefault() actually do an NPN_ReleaseObject() without telling me?

You always have to retain object out-parameters with NPAPI; this is not specific to property getters.
In your specific case the object may stay alive anyway, but not in the general case:
Consider returning an object to the caller that you don't plan on keeping alive from your plugin. You have to transfer ownership to the caller and you can't return objects with a retain count of 0.
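To make the ownership explicit, here is the getter from the question again, annotated with the reference counting the answer describes (an annotated restatement, not new API behaviour):
bool MyPluginObject::getOnEvent(NPIdentifier id, NPVariant *result)
{
    if( _onEvent )
    {
        // _onEvent is retained once by the plugin (taken in setOnEvent).
        // The variant written into *result carries a second, independent
        // reference that now belongs to the browser, which releases it when
        // it is done with the variant. Without this extra retain, that
        // release would drop the plugin's only reference, and the next
        // NPN_InvokeDefault( _Npp, _onEvent, ... ) would touch freed memory.
        NPN_RetainObject( _onEvent );             // reference handed to the caller
        OBJECT_TO_NPVARIANT( _onEvent, *result ); // caller now owns that reference
    }
    else
        VOID_TO_NPVARIANT( *result );
    return true;
}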

Related

Swift memcpy doesn't have any effect when used with ZMQ zmq_msg_data

I've been trying to write a libzmq wrapper for Swift by building off of an existing wrapper called SwiftyZeroMQ. However, for our purposes we require raw UDP, which means we need to use ZeroMQ's draft Radio/Dish sockets.
I've been able to successfully write a wrapper for receiving data via the Dish socket, but I'm now trying to write a wrapper for sending data via the Radio socket. There doesn't seem to be anything online about how to write a function that sends data via the Radio socket, but I did come across this. This function in the libzmq repo tests sending data via the Radio socket, so I figured why not try to replicate it in Swift.
Here's what I've come up with:
Inside the Socket.swift file:
public func sendRadioMessage(_ group: String, data: NSData) throws {
    var msg = zmq_msg_t.init();
    var result : Int32;
    let flags: SocketSendRecvOption = .none
    result = zmq_msg_init(&msg);
    if (result == -1) { throw ZeroMQError.last }
    defer {
        // Clean up message on scope exit
        zmq_msg_close(&msg)
    }
    print("initial msg data = \(zmq_msg_data(&msg))")
    print("initial data = \(data)")
    memcpy(zmq_msg_data(&msg), (data as NSData).bytes, data.length);
    print("msg size = \(zmq_msg_size(&msg))")
    print("msg = \(zmq_msg_data(&msg))")
    result = zmq_msg_set_group(&msg, group);
    if (result == -1) { throw ZeroMQError.last }
    result = zmq_msg_send(&msg, self.handle, flags.rawValue);
    if (result == -1) { throw ZeroMQError.last }
    print("sent \(result) bytes")
}
}
and that function is then called like this:
public func send(data: String) -> Bool {
    do {
        try radio?.sendRadioMessage(group, data: data.data(using: .utf8) as! NSData);
    }
    catch {
        print("SEND COMMUNICATION error - \(error)");
        return false;
    }
    return true;
}
obj.send(data: "Test Data")
This is the console output when running the program:
It seems to me, then, that the memcpy call I'm making from Swift doesn't actually do anything. Could this be because I'm not passing the data payload properly to memcpy? I'm a bit stuck.
The API-documented zmq_msg_data() function is fine and safe for reading a data part from a delivered ZeroMQ message.
The context of use and the order of memcpy()'s parameters:
void* memcpy( void* dest,
              const void* src,
              std::size_t count
            );
show that your code tries to "store" data into a zmq_msg_t instance, despite the repeated warnings in the ZeroMQ API documentation never to touch, let alone manipulate, the message data directly, but only through the member functions:
Never access zmq_msg_t members directly, instead always use the zmq_msg family of functions.
Here, since zmq_msg_init() only creates an empty (zero-length) message, there is no buffer behind zmq_msg_data() to copy into. You could instead use:
int zmq_msg_init_data ( zmq_msg_t   *msg,
                        void        *data,  // <------- payload to load into *msg
                        size_t       size,
                        zmq_free_fn *ffn,   // ref. API details
                        void        *hint   // on this
                      );
So an illustrative example might look something like this:
//
// Initialising a message from a supplied buffer
//
// *********************************************
void my_free ( void *data, void *hint )    // ------- a dealloc helper
{
    free ( data );
}
/* ... */
void *data = malloc ( 6 );                 // ------- a mock-up payload data
assert ( data );
memcpy ( data, "ABCDEF", 6 );
zmq_msg_t msg;
rc = zmq_msg_init_data ( &msg, data, 6, my_free, NULL );
assert ( rc == 0
         && "INF: zmq_msg_init_data() failed, saying"
         && zmq_strerror ( zmq_errno() )   // ------- a non-POSIX system workaround
       );
Also note that the proposed defensive check of:
    result = zmq_msg_init( &msg );
    if ...
made sense in v2.x and v3.x, but after the move to v4.x it does nothing, and that is still the case in the ZeroMQ v4.3+ API, as documented:
Return value
The zmq_msg_init() function always returns zero.
Some versioning and redesign effort may be needed to keep this handled consistently between the port version and the actual API version.
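If you want to stay closer to the original memcpy() approach, another option (a sketch in C, mirroring the example above; send_radio_payload, radio_socket and the payload are made-up placeholders) is to size the message first with zmq_msg_init_size(), so that zmq_msg_data() actually points at a buffer big enough to receive the copy:
#include <assert.h>
#include <string.h>
#include <zmq.h>   /* Radio/Dish needs a libzmq built with the draft API enabled */

void send_radio_payload ( void *radio_socket )               /* placeholder socket handle */
{
    const char payload[] = "Test Data";                      /* includes the trailing '\0' here */

    zmq_msg_t msg;
    int rc = zmq_msg_init_size ( &msg, sizeof payload );     /* allocate the buffer first */
    assert ( rc == 0 );

    memcpy ( zmq_msg_data ( &msg ), payload, sizeof payload ); /* now there is room to copy into */

    rc = zmq_msg_set_group ( &msg, "group-name" );           /* draft API call */
    assert ( rc == 0 );

    rc = zmq_msg_send ( &msg, radio_socket, 0 );
    if ( rc == -1 )
        zmq_msg_close ( &msg );   /* send failed: the message still owns its buffer */
}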

What is the difference between these?

Can anyone please explain the difference between the methods used below to insert a new object into the map container? I already know about pointers and such; I'm not really deep into virtual memory, only the basics (addresses etc.).
#include "StdAfx.h"
#include <windows.h>
#include <cstdlib>
#include <iostream>
#include <map>
using namespace std;
class CUser
{
public:
CUser() { Init(); };
~CUser() {};
public:
BOOL m_bActive;
BOOL m_bLoggedIn;
SYSTEMTIME m_sysTime;
void Init();
};
void CUser::Init()
{
(*this).m_bActive = FALSE;
m_bLoggedIn = FALSE;
GetSystemTime( &m_sysTime );
}
int main(int argc, char *argv[])
{
map<DWORD, CUser*>mUserMap;
//what is the difference between this
{
CUser pUser;
pUser.m_bActive = FALSE;
pUser.m_bLoggedIn = FALSE;
GetSystemTime( &pUser.m_sysTime );
mUserMap.insert( make_pair( 351, &pUser ) );
}
//and this?
{
CUser *pUser = new CUser;
if( pUser )
{
pUser->m_bActive = TRUE;
pUser->m_bLoggedIn = TRUE;
GetSystemTime( &pUser->m_sysTime );
mUserMap.insert( make_pair( 351, pUser ) );
}
}
/* map<DWORD, CUser*>::iterator it = mUserMap.find( 351 );
if( it == mUserMap.end() )
std::cout << "Not found" << std::endl;
else
{
CUser *pUser = it->second;
if( pUser )
std::cout << pUser->m_sysTime.wHour << std::endl;
} */
return 0;
}
In the first case, pUser is created on the stack, and will automatically be deleted when its name goes out of scope (i.e. at the next closing curly bracket). Generally speaking, it's unwise to insert pointers to stack objects into containers, because the object will cease to exist while the container still has a value pointing to it. This can cause a crash in the best case. In the worst case, it could cause erratic and hard-to-locate bugs in distant parts of the code.
//what is the difference between this
{
    CUser pUser;
    pUser.m_bActive = FALSE;
    pUser.m_bLoggedIn = FALSE;
    GetSystemTime( &pUser.m_sysTime );
    mUserMap.insert( make_pair( 351, &pUser ) );
}
this creates a local object: your pUser variable only exists inside the scope of this block, and ceases to exist when the last } is reached. That means its destructor is called, and the memory it lived in is reclaimed and may be reused.
Now, when you store a pointer to this short-lived object in your map, you're storing a problem. If you de-reference that pointer at any time after the closing } of this block, you're invoking undefined behaviour. It may work. It may work sometimes, and then start to fail. Basically, it's a logical error and a good source of unpredictable bugs.
//and this?
{
    CUser *pUser = new CUser;
    if( pUser )
    {
        pUser->m_bActive = TRUE;
        pUser->m_bLoggedIn = TRUE;
        GetSystemTime( &pUser->m_sysTime );
        mUserMap.insert( make_pair( 351, pUser ) );
    }
}
here you explicitly create an instance which will outlive the enclosing scope, and all is good. You don't need to check if new returns NULL though: it'll throw an exception unless you explicitly request it not to.
The difference here is that the object created by the call to new is created on the heap and not the stack. This means that once the pointer goes out of scope, the memory allocated is still in existence on the heap and you can safely reference it through the pointer stored in your map.
In the first case, you create an object on the stack and add its address to the map. This means that when your locally created variable goes out of scope it is destroyed, and the pointer in your map now points to a variable that is no longer in existence. This will undoubtedly lead to problems in your code.
Use the second approach if you must use pointers rather than the actual objects themselves.
When you use new the memory will persist until you delete it (or get another object to take care of it like a shared pointer). Stack objects are destroyed as soon as they go out of scope.
{
    CUser pUser;
    pUser.m_bActive = FALSE;
    pUser.m_bLoggedIn = FALSE;
    GetSystemTime( &pUser.m_sysTime );
    mUserMap.insert( make_pair( 351, &pUser ) );
}
//pUser is not available here
pUser (Object) unavailable (deleted), pointer in mUserMap is invalid!
{
    CUser *pUser = new CUser;
    if( pUser )
    {
        pUser->m_bActive = TRUE;
        pUser->m_bLoggedIn = TRUE;
        GetSystemTime( &pUser->m_sysTime );
        mUserMap.insert( make_pair( 351, pUser ) );
    }
}
//pUser is not available here
pUser (Pointer!!) unavailable (deleted), but the memory is still claimed, so the pointer in mUserMap is valid!
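As an aside (not part of the answers above): if the map is meant to own the users and your compiler supports C++11, a common way to avoid both the dangling pointer and the manual delete is to let the map hold smart pointers. A minimal sketch, assuming the CUser class from the question is visible (userMap and AddUser are made-up names):
#include <map>
#include <memory>
#include <windows.h>

std::map<DWORD, std::unique_ptr<CUser>> userMap;

void AddUser( DWORD id )
{
    std::unique_ptr<CUser> user( new CUser );  // heap object, owned by the smart pointer
    user->m_bActive = TRUE;
    user->m_bLoggedIn = TRUE;
    GetSystemTime( &user->m_sysTime );
    userMap[id] = std::move( user );           // the map now owns the CUser
}   // no delete needed: erasing the entry (or destroying the map) frees the object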

Validating memory in a difficult task within a thread

I'm currently creating a sound system for my project. Every call to PlayAsync creates an instance of a sound inside a std::thread callback. The sound data is processed in a loop in that callback. While the thread runs, it stores the sound instance in a static vector. When the thread ends (the sound is complete), it deletes the sound instance and decrements the instance count. When the application ends, it must stop all sounds immediately by sending an interrupt to every sound loop.
The problem is the container keeping these sounds. I am not sure, but I think vector isn't the right choice for this purpose. Here is the code.
void gSound::PlayAsync()
{
    std::thread t(gSound::Play, mp_Audio, std::ref(*this));
    t.detach();
}
HRESULT gSound::Play(IXAudio2* s_XAudio, gSound& sound)
{
    gSound* pSound = new gSound(sound);
    pSound->m_Disposed = false;
    HRESULT hr;
    // Create the source voice
    IXAudio2SourceVoice* pSourceVoice;
    if( FAILED( hr = s_XAudio->CreateSourceVoice( &pSourceVoice, pSound->pwfx ) ) )
    {
        gDebug::ShowMessage(L"Error creating source voice");
        return hr;
    }
    // Submit the wave sample data using an XAUDIO2_BUFFER structure
    XAUDIO2_BUFFER buffer = {0};
    buffer.pAudioData = pSound->pbWaveData;
    buffer.Flags = XAUDIO2_END_OF_STREAM; // tell the source voice not to expect any data after this buffer
    buffer.AudioBytes = pSound->cbWaveSize;
    if( FAILED( hr = pSourceVoice->SubmitSourceBuffer( &buffer ) ) )
    {
        gDebug::ShowMessage(L"Error submitting source buffer");
        pSourceVoice->DestroyVoice();
        return hr;
    }
    hr = pSourceVoice->Start( 0 );
    // Let the sound play
    BOOL isRunning = TRUE;
    m_soundInstanceCount++;
    mp_SoundInstances.push_back(pSound); // #MARK2
    while( SUCCEEDED( hr ) && isRunning && pSourceVoice != nullptr && !pSound->m_Interrupted )
    {
        XAUDIO2_VOICE_STATE state;
        pSourceVoice->GetState( &state );
        isRunning = ( state.BuffersQueued > 0 ) != 0;
        Sleep(10);
    }
    pSourceVoice->DestroyVoice();
    delete pSound; pSound = nullptr; // is this correct??
    m_soundInstanceCount--;
    return 0;
}
void gSound::InterrupAllSoundInstances()
{
    for(auto Iter = mp_SoundInstances.begin(); Iter != mp_SoundInstances.end(); Iter++)
    {
        if(*Iter != nullptr) //#MARK1
        {
            (*Iter)->m_Interrupted = true;
        }
    }
}
And this is what I call in the application class, immediately after the main application loop and before disposing of the sound objects.
gSound::InterrupAllSoundInstances();
while (gSound::m_soundInstanceCount > 0) // wait until all sound instances in the threads have been deleted
{
}
Questions:
So, #MARK1 - How do I check whether the memory in the vector is still valid? I don't have experience with this, and I get errors when I try to check invalid memory (it does not equal null).
And #MARK2 - How do I use the vector correctly? Or is vector a bad choice here? Every time I create a sound instance the vector grows in size, which is not good.
A typical issue:
delete pSound;
pSound = nullptr; // issue
This does not do what you think.
It will effectively set pSound to null, but there are other copies of the same pointer too (at least one in the vector) which do not get nullified. This is why you do not find nullptr in your vector.
Instead you could register the index into the vector and nullify that: mp_SoundInstances[index] = nullptr;.
However, I am afraid that you simply do not understand memory handling well and you lack structure. For memory handling, it's hard to tell without details, and your system seems complicated enough that I am afraid it would take too long to explain. For structure, you should read a bit about the Observer pattern.
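To make the first point concrete, here is a tiny self-contained illustration (not from the original answer) of why nulling one copy of a pointer does not affect the copy stored in the vector:
#include <cassert>
#include <vector>

int main()
{
    int *p = new int(42);
    std::vector<int*> v;
    v.push_back(p);          // the vector stores a copy of the pointer value

    p = nullptr;             // only this local copy becomes null

    assert(p == nullptr);
    assert(v[0] != nullptr); // the copy in the vector is unaffected

    delete v[0];             // the object has to be freed through a copy that still points at it
    return 0;
}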

C++ STL Set: Cannot find() last element inserted

I am in the process of writing an application in which I use the Set class in the C++ STL. I've discovered that the call to set->find() always seems to fail when I query for the last element I inserted. However, if I iterate over the set, I am able to see the element I was originally querying for.
To try to get a grasp on what is going wrong, I've created a sample application that exhibits the same behavior that I am seeing. My test code is posted below.
For the actual application itself, I need to store pointers to objects in the set. Is this what is causing the weird behavior? Or is there an operator I need to overload in the class whose pointer I am storing?
Any help would be appreciated.
#include <stdio.h>
#include <string.h>
#include <set>

using namespace std;

#define MySet set<FileInfo *,bool(*)(const FileInfo *, const FileInfo*)>

class FileInfo
{
public:
    FileInfo()
    {
        m_fileName = 0;
    }

    FileInfo( const FileInfo & file )
    {
        setFile( file.getFile() );
    }

    ~FileInfo()
    {
        if( m_fileName )
        {
            delete m_fileName;
            m_fileName = 0;
        }
    }

    void setFile( const char * file )
    {
        if( m_fileName )
        {
            delete m_fileName;
        }
        m_fileName = new char[ strlen( file ) + 1 ];
        strcpy( m_fileName, file );
    }

    const char * getFile() const
    {
        return m_fileName;
    }

private:
    char * m_fileName;
};

bool fileinfo_comparator( const FileInfo * f1, const FileInfo* f2 )
{
    if( f1 && ! f2 ) return -1;
    if( !f1 && f2 ) return 1;
    if( !f1 && !f2 ) return 0;
    return strcmp( f1->getFile(), f2->getFile() );
}

void find( MySet *s, FileInfo * value )
{
    MySet::iterator iter = s->find( value );
    if( iter != s->end() )
    {
        printf( "Found File[%s] at Item[%p]\n", (*iter)->getFile(), *iter );
    }
    else
    {
        printf( "No Item found for File[%s]\n", value->getFile() );
    }
}

int main()
{
    MySet *theSet = new MySet(fileinfo_comparator);

    FileInfo * profile = new FileInfo();
    FileInfo * shell = new FileInfo();
    FileInfo * mail = new FileInfo();

    profile->setFile( "/export/home/lm/profile" );
    shell->setFile( "/export/home/lm/shell" );
    mail->setFile( "/export/home/lm/mail" );

    theSet->insert( profile );
    theSet->insert( shell );
    theSet->insert( mail );

    find( theSet, profile );

    FileInfo * newProfile = new FileInfo( *profile );
    find( theSet, newProfile );

    FileInfo * newMail = new FileInfo( *mail );
    find( theSet, newMail );

    printf( "\nDisplaying Contents of Set:\n" );
    for( MySet::iterator iter = theSet->begin();
         iter != theSet->end(); ++iter )
    {
        printf( "Item [%p] - File [%s]\n", *iter, (*iter)->getFile() );
    }
}
The Output I get from this is:
Found File[/export/home/lm/profile] at Item[2d458]
Found File[/export/home/lm/profile] at Item[2d458]
No Item found for File[/export/home/lm/mail]
Displaying Contents of Set:
Item [2d478] - File [/export/home/lm/mail]
Item [2d468] - File [/export/home/lm/shell]
Item [2d458] - File [/export/home/lm/profile]
**Edit
It's kind of sad that I have to add this. But as I mentioned before, this is a sample application that was pulled from different parts of a larger application to exhibit the failure I was receiving.
It is meant as a unit test for calling set::find on a set populated with heap allocated pointers. If you have a problem with all the new()s, I'm open to suggestions on how to magically populate a set with heap allocated pointers without using them. Otherwise commenting on "too many new() calls" will just make you look silly.
Please focus on the actual problem that was occurring (which is now solved). Thanks.
***Edit
Perhaps I should have put these in my original question. But I was hoping there would be more focus on the problem with find() (or, as it turns out, the fileinfo_comparator function that acts more like strcmp than like less), rather than a code review of a copy-paste PoC unit test.
Here are some points about the code in the full application itself.
FileInfo holds a lot of data along with the filename. It holds SHA1 sums, file size, mod time, system state at last edit, among other things. I have cut out most of its code for this post. It violates the Rule of 3 in this form (thanks @Martin York; see comments for the wiki link).
The use of char* over std::string was originally chosen because of the use of 3rd_party APIs that accept char*. The app has since evolved from then. Changing this is not an option.
The data inside FileInfo is polled from a named pipe on the system and is stored in a Singleton for access across many threads. (I would have scope issues if I didn't allocate on heap)
I chose to store pointers in the Set because the FileInfo objects are large and constantly being added/removed from the Set. I decided pointers would be better than always copying large structures into the Set.
The if statement in my destructor is needless, a leftover artifact from debugging an issue I was tracking down; it should be pulled out.
Your comparison function is wrong - it returns bool, not an integer like strcmp(3). The return statement should be something like:
    return strcmp( f1->getFile(), f2->getFile() ) < 0;
Take a look here.
Also, out of curiosity, why not just use std::set<std::string> instead? STL actually has decent defaults and frees you from a lot of manual memory management.
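For completeness, the NULL-handling branches also have to be rewritten for bool semantics (returning -1 or 1 both convert to true); one way the whole comparator could look, keeping the question's signature:
// Strict weak ordering: returns true when f1 should sort before f2.
// A NULL pointer is treated as sorting before any non-NULL FileInfo.
bool fileinfo_comparator( const FileInfo * f1, const FileInfo * f2 )
{
    if( !f1 ) return f2 != 0;   // NULL < non-NULL, but !(NULL < NULL)
    if( !f2 ) return false;     // non-NULL is never < NULL
    return strcmp( f1->getFile(), f2->getFile() ) < 0;
}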
It looks to me like your FileInfo doesn't work correctly (at least for use in a std::set). To be stored in a std::set, the comparison function should return a bool saying whether the first parameter is ordered before the second (true) or not (false), i.e. a strict weak ordering.
Given what your FileInfo does (badly designed imitation of std::string), you'd probably be better off without it completely. As far as I can see, you can use std::string in its place without any loss of functionality. You're also using a lot of dynamic allocation for no good reason (and leaking a lot of what you allocate).
#include <set>
#include <iostream>
#include <iterator>
#include <string>

int main() {
    const char *inputs[] = { "/export/home/lm/profile", "/export/home/lm/shell", "/export/home/lm/mail" };
    const char *outputs[] = { "Found: ", "Could **not** find: " };

    std::set<std::string> MySet(inputs, inputs+3);

    for (int i=0; i<3; i++)
        std::cout
            << outputs[MySet.find(inputs[i]) == MySet.end()]
            << inputs[i] << "\n";

    std::copy(MySet.begin(), MySet.end(),
              std::ostream_iterator<std::string>(std::cout, "\n"));
    return 0;
}
Edit: even when (or really, especially when) FileInfo is more complex, it shouldn't attempt to re-implement string functionality on its own. It should still use an std::string for the file name, and implement an operator< that works with that:
class FileInfo {
    std::string filename;
public:
    // ...
    bool operator<(FileInfo const &other) const {
        return filename < other.filename;
    }
    FileInfo(char const *name) : filename(name) {}
};

std::ostream &operator<<(std::ostream &os, FileInfo const &fi) {
    return os << fi.filename;
}

int main() {
    // std::set<std::string> MySet(inputs, inputs+3);
    std::set<FileInfo> MySet(inputs, inputs+3);
    // ...
    std::copy(MySet.begin(), MySet.end(),
              std::ostream_iterator<FileInfo>(std::cout, "\n"));
}
In your constructor:
FileInfo( const FileInfo & file )
{
    setFile( file.getFile() );
}
m_fileName seems to be uninitialized, so the delete inside setFile() operates on a garbage pointer.
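One minimal fix (a sketch, keeping the rest of the class exactly as in the question) is to initialize the member before delegating to setFile():
FileInfo( const FileInfo & file )
    : m_fileName( 0 )   // start from a known state so setFile()'s
                        // "if( m_fileName ) delete" check is safe
{
    setFile( file.getFile() );
}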

Error handling for xml parsing

I'm using tinyxml to parse xml files, and I've found that error handling here lends itself to arrow code. Our error handling is simply reporting a message to a file.
Here is an example:
const TiXmlElement *objectType = dataRoot->FirstChildElement( "game_object" );
if ( objectType ) {
    do {
        const char *path = objectType->Attribute( "path" );
        if ( path ) {
            const TiXmlElement *instance = objectType->FirstChildElement( "instance" );
            if ( instance ) {
                do {
                    int x, y = 0;
                    instance->QueryIntAttribute( "x", &x );
                    instance->QueryIntAttribute( "y", &y );
                    if ( x >= 0 && y >= 0 ) {
                        AddGameObject( new GameObject( path, x, y ));
                    } else {
                        LogErr( "Tile location negative for GameObject in state file." );
                        return false;
                    }
                } while ( instance = instance->NextSiblingElement( "instance" ));
            } else {
                LogErr( "No instances specified for GameObject in state file." );
                return false;
            }
        } else {
            LogErr( "No path specified for GameObject in state file." );
            return false;
        }
    } while ( objectType = objectType->NextSiblingElement( "game_object" ));
} else {
    LogErr( "No game_object specified in <game_objects>. Thus, not necessary." );
    return false;
}
return true;
I'm not huffing and puffing over it, but if anyone can think of a cleaner way to accomplish this it would be appreciated.
P.S. Exceptions not an option.
Edit:
Would something like this be preferable?
if ( !path ) {
    // Handle error, return false
}
// Continue
This eliminates the arrow code, but the arrow code kind of puts all of the error logging in one place.
Using return values as error codes just leads to such code; it can't be improved much. A slightly cleaner way would be to use goto to group all error handling into a single block and to decrease the nesting of blocks.
This does not, however, solve the actual problem, which is using return values as error codes. In C there is no alternative, but in C++ exceptions are available and should be used. If they are not an option, you're stuck with what you have.
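For illustration, the goto grouping mentioned above could look roughly like this (a sketch only; LoadGameObjects is a made-up wrapper name and the remaining checks are elided):
// All error logging funnels through one block at the end of the function.
bool LoadGameObjects( const TiXmlElement *dataRoot )
{
    const char *errMsg = 0;

    const TiXmlElement *objectType = dataRoot->FirstChildElement( "game_object" );
    if ( !objectType ) { errMsg = "No game_object specified in <game_objects>."; goto fail; }

    // ... the remaining checks follow the same pattern:
    //     set errMsg and "goto fail" on error ...

    return true;

fail:
    LogErr( errMsg );
    return false;
}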
You could create a macro for that, which encapsulates the if (!var) { .. return false; } and the error reporting.
However, I do not see how this can be improved all that much; it's just the way it is. C'est la vie. C'est le code...
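A sketch of such a macro (REQUIRE_XML is a made-up name) might look like:
// Logs the message and bails out of the calling function when cond is false.
#define REQUIRE_XML( cond, msg ) \
    do {                         \
        if ( !(cond) ) {         \
            LogErr( msg );       \
            return false;        \
        }                        \
    } while (0)

// Usage inside the parsing function:
//   REQUIRE_XML( path, "No path specified for GameObject in state file." );
//   REQUIRE_XML( instance, "No instances specified for GameObject in state file." );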
I'm not huffing and puffing over it, but if anyone can think of a cleaner way to accomplish this it would be appreciated.
I have replaced the nested ifs with return statements on error (this makes the code "flow down" instead of going "arrow shaped"). I have also replaced your do loops with for loops (so I could understand it better).
Is this what you wanted?
const TiXmlElement *objectType = dataRoot->FirstChildElement( "game_object" );
if ( !objectType ) {
    LogErr( "No game_object specified in <game_objects>. Thus, not necessary." );
    return false;
}

for( ; objectType != 0; objectType = objectType->NextSiblingElement( "game_object" )) {
    const char *path = objectType->Attribute( "path" );
    if ( !path ) {
        LogErr( "No path specified for GameObject in state file." );
        return false;
    }

    const TiXmlElement *instance = objectType->FirstChildElement( "instance" );
    if ( !instance ) {
        LogErr( "No instances specified for GameObject in state file." );
        return false;
    }

    for( ; instance != 0; instance = instance->NextSiblingElement( "instance" )) {
        int x, y = 0;
        instance->QueryIntAttribute( "x", &x );
        instance->QueryIntAttribute( "y", &y );
        if ( x >= 0 && y >= 0 ) {
            AddGameObject( new GameObject( path, x, y ));
        } else {
            LogErr( "Tile location negative for GameObject in state file." );
            return false;
        }
    }
}
return true;
I know it is a little late, but QueryIntAttribute returns a value that can be used for error handling, in case you want this for your attributes too.
if (instance->QueryIntAttribute("x", &x) != TIXML_SUCCESS)
    cout << "No x value found";
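Extending that idea to both attributes in the loops above (a sketch that reuses the question's LogErr and treats a missing or non-integer attribute as an error):
int x = 0, y = 0;
if ( instance->QueryIntAttribute( "x", &x ) != TIXML_SUCCESS ||
     instance->QueryIntAttribute( "y", &y ) != TIXML_SUCCESS ) {
    LogErr( "Missing or non-integer tile coordinate for GameObject in state file." );
    return false;
}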