Prevent browser from closing with FireBreath plugin - C++

I have a plugin where I want to prevent the browser from closing while I'm saving some data, which takes an unknown, unpredictable amount of time.
data_ready = false;
data_ready = saveData(); // takes a random amount of time, as the user has to specify a location
boost::unique_lock<boost::mutex> lock(mut);
while (!data_ready)
{
    cond.wait(lock);
}
The prompt asking where to save the data appears, but the plugin crashes immediately afterwards, which I'm guessing is caused by the lock.
How could I make the browser wait until the user has finished saving the data?

You can't. It is up to you to make sure that the plugin never blocks the main thread and that all threads you start are shut down in time. Congratulations and welcome to the wonderful world of browser plugins =]
Some people have gotten around this by launching an external application that does the real work and won't close until it's done.
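For example, a minimal Win32 sketch of that approach. The helper executable saver.exe and the temp path are purely illustrative; the point is that the helper process keeps running (and can show its own save dialog) even after the browser unloads the plugin:
#include <windows.h>

void launchSaver()
{
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = { 0 };
    // hypothetical helper: hand it the pending data on the command line
    wchar_t cmd[] = L"saver.exe C:\\temp\\pending_data.bin";
    if (CreateProcessW(NULL, cmd, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
    {
        CloseHandle(pi.hThread);   // we don't wait on it;
        CloseHandle(pi.hProcess);  // the helper now lives on its own
    }
}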

Related

Duplicate account login checking in a server

The communication is socket-based over a keep-alive connection. Users log in with an account name, and I need to implement a feature where, if two users log in with the same account, the earlier one is kicked off.
The code that needs to be updated:
void session::login(accountname) // callback when the server receives a login request
{
    boost::shared_ptr<UserData> d = database.get_user_data(accountname);
    this->data = d;
    this->send(login_success);
}
boost::shared_ptr<UserData> Database::get_user_data(accountname)
{
    // read from the db and return the data
}
The simplest way is to improve Database::get_user_data(accountname):
boost::shared_ptr<UserData> Database::get_user_data(accountname)
{
    // take a boost::unique_lock<> here
    // find a session with the same accountname, or user data with the same
    // accountname in the cache; if found, kick that session offline first,
    // then execute the code below
    // read from the db and return the data
}
This modification has 2 problems:
1. Bad concurrency for a scenario that rarely happens. If I need to check whether an account is already online, I must cache that state somewhere (in the user data or the session), which means I have to write to a container under an exclusive lock whether the account is a duplicate or not. So the concurrency can hardly be improved.
2. Kicking the other one off by calling "other_session->offline()" in "this thread" may run concurrently with other operations executing on that session in another thread at the same time.
If I add a lock in offline(), then all the other functions belonging to session need that lock too, which is obviously not good. Alternatively, I can push an event to other_session and let other_session handle the event itself, which makes sure "offline" executes in its own thread. The problem is that this makes "offline" asynchronous, while the code after "kick the other one off" must only execute once "offline" has run to completion.
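To make the event idea concrete, here is a rough boost::asio sketch (all names are illustrative, not from my real code): the new session posts the kick onto the old session's strand, and the old session posts a completion event back, so the login only continues after offline() has finished:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

class session : public boost::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::io_service &io) : strand(io) {}

    // called on the new session's thread when a duplicate login is found
    void kick_old_then_login(boost::shared_ptr<session> old)
    {
        // hop onto the old session's strand, so offline() is serialized
        // with everything else that session does -- no extra lock needed
        old->strand.post(boost::bind(&session::offline_and_notify,
                                     old, shared_from_this()));
    }

    boost::asio::io_service::strand strand;

private:
    void offline_and_notify(boost::shared_ptr<session> waiter)
    {
        offline(); // safe: runs serialized with this session's own handlers
        // hop back: the new session finishes its login only after the kick
        waiter->strand.post(boost::bind(&session::finish_login, waiter));
    }

    void offline()      { /* close the socket, mark the account offline */ }
    void finish_login() { /* send "login success" */ }
};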
I use boost::asio, but I have tried to describe the problem generically, because I think it is a common problem when writing servers. Is there a pattern that solves it? Note that the problem gets more complex when N logins for the same account arrive at the same time.
If this scenario rarely happens, I wouldn't worry about it. Locking and releasing a mutex are not long operations that the user would notice (if you had to do it thousands of times a second, it could become a problem).
In general trying to fix performance issues that are not there is a bad idea.

Qt program hangs (Not Responding) until function ends, then starts working again

I have a UI application in Qt with a couple of functions that run large-scale SQL queries returning thousands of results.
When the button that runs such a query is clicked, the UI window instantly goes to 'not responding'; however, I can see from console output that everything is still actually running in the background. As soon as the function ends, the data is presented as expected and the UI is responsive and fully functional again.
I know this is because the function loops thousands of times over the large number of results, but I was hoping I could put in a loading bar that progresses as the search does, instead of just locking up the window and making it look like the program has crashed. AFAIK I don't have memory leaks, so does anyone have any suggestions?
Oh, also, I'm thinking it's not memory leaks because when I click that button, Task Manager shows only a couple of MB of memory in use for this process, and the processor is by no means maxing out either.
In an application, there is one thread that is responsible for handling UI events, messages, whatever you want to call them. Suppose you have a button click event. As long as you don't return from the callback function, no other UI event can be processed (repainting, updating, etc.) and the UI becomes unresponsive.
To mitigate this, you should consider performing time-consuming tasks in a separate thread and updating the UI accordingly once they complete. If you need to block the UI while the task is processed, you can disable your controls, display a pop-up progress bar, whatever, but keep the UI thread relatively unoccupied to avoid the "not responding" problem.
A simpler solution than using threads is QCoreApplication::processEvents(). If your code is something like this:
void slowFunction()
{
    lotsOfResults = makeSqlQuery(...); // quite fast
    for (r in lotsOfResults)
        doSomethingWithResult(r); // one call is quite fast
}
then, if one SQL query or one doSomethingWithResult() call doesn't take too much time, you can process pending events using QCoreApplication::processEvents() like this:
void slowFunction()
{
    lotsOfResults = makeSqlQuery(...);
    for (r in lotsOfResults)
    {
        doSomethingWithResult(r);
        QCoreApplication::processEvents();
    }
}
Now the GUI events are processed and the program doesn't freeze. But if the SQL query alone takes a long time (several seconds), this doesn't help, and you should consider a separate thread.
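If you do go the thread route, a minimal sketch of the usual worker-object pattern might look like this (names are illustrative, the Q_OBJECT class goes through moc as usual, and note that a database connection in Qt should be created and used in the thread that runs the query):
#include <QObject>
#include <QThread>

class Worker : public QObject
{
    Q_OBJECT
public slots:
    void doWork()
    {
        // run the slow SQL query and the result loop here,
        // emitting progress(n) now and then for a progress bar
        emit finished();
    }
signals:
    void progress(int n);
    void finished();
};

// in the widget, instead of calling slowFunction() directly:
QThread *thread = new QThread;
Worker *worker = new Worker;
worker->moveToThread(thread);
QObject::connect(thread, SIGNAL(started()), worker, SLOT(doWork()));
QObject::connect(worker, SIGNAL(finished()), thread, SLOT(quit()));
QObject::connect(worker, SIGNAL(finished()), worker, SLOT(deleteLater()));
QObject::connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));
thread->start();
The progress(int) signal can be connected to a QProgressBar's setValue(int) slot; cross-thread signal/slot connections are queued automatically, so the UI stays responsive.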

Shutting down multithreaded NSDocument

I have an NSDocument-based Cocoa app with a couple of secondary threads that I need to terminate gracefully (wait for them to run through their current loop) when the user closes the document window or when the application quits. I'm using canCloseDocumentWithDelegate to send a flag to the threads when the document is closing, and then, when they're done, one of them calls [NSDocument close]. This works peachy keen when the user closes the document window, but when you quit the app, it goes all kinds of wrong (it crashes before it calls anything). What is the correct procedure for something like this?
The best possible way is for the threads to own the objects necessary for the thread to finish doing whatever it is doing to the point of being able to abort processing and terminate as quickly as possible.
Under non-GC, this means a -retain that the thread -releases when done. For GC, it is just a hard reference to the object(s) desired.
If there is some kind of lengthy processing that must go on and must complete before the document is closed, then drop a sheet with a progress bar and leave the document modal until done (both Aperture and iPhoto do exactly this).

Why is my file-loading thread not parallelized with the main thread?

My program does file loading and memcpy'ing in the background while the screen is meant to be updated interactively. The idea is to have async loading of files the program will soon need, so that they are ready to be used when the main thread needs them. However, the loads/copies don't seem to happen in parallel with the main thread. The main thread pauses during the loading and will often wait for all loads (there can be up to 8 at once) to finish before the next iteration of its main loop.
I'm using Win32, so I'm using _beginthread for creating the file-loading/copying thread.
The worker thread function:
void fileLoadThreadFunc(void *arglist)
{
    while (true)
    {
        // s_mutex keeps the list from being updated by the main thread
        s_mutex.lock(); // uses WaitForSingleObject INFINITE
        // s_filesToLoad is a list appended to by the main thread
        while (s_filesToLoad.size() == 0)
        {
            s_mutex.unlock();
            Sleep(10);
            s_mutex.lock();
        }
        loadObj *obj = s_filesToLoad[0];
        s_filesToLoad.erase(s_filesToLoad.begin());
        s_mutex.unlock();
        obj->loadFileAndMemcpy();
    }
}
Main thread startup:
_beginthread(fileLoadThreadFunc, 0, NULL);
Code in a class that the main thread uses to "kick" the thread into loading a file:
// I used the commented code to see if the main thread was ever blocking,
// but the PRINT never printed, so it looks like it never was waiting on the worker
//while (!s_mutex.lock(false))
//{
//    PRINT(L"blocked! ");
//}
s_mutex.lock();
s_filesToLoad.push_back(this);
s_mutex.unlock();
Some more notes based on comments:
The loadFileAndMemcpy() function in the worker thread loads via the Win32 ReadFile function - does this cause the main thread to block?
I reduced the worker thread priority to THREAD_PRIORITY_BELOW_NORMAL or THREAD_PRIORITY_LOWEST, and that helps a bit, but when I move the mouse around to see how badly it lags while the worker thread is working, the mouse still "jumps" a bit (without lowering the priority, it was MUCH worse).
I am running on a Core 2 Duo, so I wouldn't expect to see any mouse lag at all.
The mutex code doesn't seem to be the issue, since "blocked!" never printed in my test code above.
I bumped the sleep up to 100ms, but even 1000ms doesn't seem to help as far as the mouse lag goes.
The data being loaded is tiny - 20k .png images (but they are 2048x2048). They are that small only because this is test data (a single color per image), so the real data will be much larger.
You will have to show the code for the main thread to indicate how it is notified that a file is loaded. Most likely the blocking issue is there. This is really a good case for using asynchronous I/O instead of threads, if you can work it into your main loop. If nothing else, you really need to use conditions or events: one to tell the file-reader thread that there is work to do, and another to signal the main thread that a file has been loaded.
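For instance, the Sleep(10) polling loop above could be replaced with a Win32 event, so the worker sleeps until the main thread actually queues something. A sketch against the code from the question:
// auto-reset event: signalled by the main thread, consumed by the worker
HANDLE s_workAvailable = CreateEvent(NULL, FALSE, FALSE, NULL);

// main thread, when queuing a file:
s_mutex.lock();
s_filesToLoad.push_back(this);
s_mutex.unlock();
SetEvent(s_workAvailable); // wake the worker

// worker thread, instead of the unlock/Sleep(10)/lock polling loop:
WaitForSingleObject(s_workAvailable, INFINITE);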
Edit: Alright, so this is a game, and you're polling to see if the file is done loading as part of the rendering loop. Here's what I would try: use ReadFileEx to initiate an overlapped read. This won't block. Then, in your main loop, you can check whether the read is done by using one of the wait functions with a zero timeout. This won't block either.
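A sketch of the overlapped-read idea (here using plain ReadFile with an event in the OVERLAPPED structure, which is what the zero-timeout wait then checks; the file name and buffer size are placeholders):
// start the read -- does not block
HANDLE file = CreateFileW(L"test.png", GENERIC_READ, FILE_SHARE_READ, NULL,
                          OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
static unsigned char buffer[20 * 1024]; // sized for the 20k test file
OVERLAPPED ov = { 0 };
ov.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL); // manual-reset
ReadFile(file, buffer, sizeof(buffer), NULL, &ov); // pending: GetLastError() == ERROR_IO_PENDING

// each frame of the render loop -- zero timeout, so no blocking
if (WaitForSingleObject(ov.hEvent, 0) == WAIT_OBJECT_0)
{
    DWORD bytesRead = 0;
    GetOverlappedResult(file, &ov, &bytesRead, FALSE);
    // the data is in buffer; decode/memcpy it now
}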
Not sure about your specific problem, but you really should mutex-protect the size() call as well.
void fileLoadThreadFunc(void *arglist) {
    while (true) {
        s_mutex.lock();
        while (s_filesToLoad.size() == 0) {
            s_mutex.unlock();
            Sleep(10);
            s_mutex.lock();
        }
        loadObj *obj = s_filesToLoad[0];
        s_filesToLoad.erase(s_filesToLoad.begin());
        s_mutex.unlock();
        obj->loadFileAndMemcpy();
    }
}
Now, examining your specific problem, I can see nothing wrong with the code you've provided. The main thread and the file-loader thread should quite happily run side by side if that mutex is the only contention between them.
I say that because there may be other points of contention, such as in the standard library, that your sample code doesn't show.
I'd write that loop this way - fewer lock/unlock pairs that could get mismatched :P :
void fileLoadThreadFunc(void *arglist)
{
    while (true)
    {
        loadObj *obj = NULL;
        // protect all access to the vector
        s_mutex.lock();
        if (s_filesToLoad.size() != 0)
        {
            obj = s_filesToLoad[0];
            s_filesToLoad.erase(s_filesToLoad.begin());
        }
        s_mutex.unlock();
        if (obj != NULL)
            obj->loadFileAndMemcpy();
        else
            Sleep(10);
    }
}
MSDN on Synchronization
If you can consider open-source options, Java has a blocking queue [link], as does Python [link]. This would reduce your code to the following (the queue here is bound into load_fcn, i.e. using a closure):
import threading, Queue  # the module is named Queue in Python 2, queue in Python 3

queue = Queue.Queue()

def load_fcn():
    while True:
        queue.get().loadFileAndMemcpy()

threading.Thread(target=load_fcn).start()
Even though you're perhaps not supposed to use them, Python 3.0 threads have a _stop() function and Python 2.0 threads have a _Thread__stop function. You could also put a "None" value on the queue and check for it in load_fcn().
Also, search Stack Overflow for "[python] gui" and "[subjective] [gui] [java]" if you wish.
Based on the information present at this point, my guess would be that something in the handler for the file loading is interacting with your main loop. I do not know the libraries involved, but based on your description the file handler does something along the following lines:
Load raw binary data for a 20k file
Interpret the 20k as a PNG file
Load into a structure representing a 2048x2048 pixel image
The following possibilities come to mind regarding the libraries you use to achieve these steps:
Could it be that the memory allocation for the uncompressed image data is holding a lock that the main thread needs for any drawing / interactive operations it performs?
Could it be that a call that is responsible for translating the PNG data into pixels actually holds a low-level game library lock that adversely interacts with your main thread?
The best way to get more information would be to try to model the activity of your file-loader handler without using the current code in it: write a routine yourself that allocates the right size of memory block and performs some processing that turns 20k of source data into a structure the size of the target block, then add further logic to it one bit at a time until performance crashes, to isolate the culprit.
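Something along these lines (sizes taken from the question; the "processing" is arbitrary busywork standing in for the PNG decode):
void fakeLoad(const unsigned char *src, size_t srcSize)
{
    const size_t dstSize = 2048 * 2048 * 4; // RGBA target, same as the real image
    unsigned char *dst = new unsigned char[dstSize];
    for (size_t i = 0; i < dstSize; ++i)
        dst[i] = src[i % srcSize]; // stand-in for the decompression work
    delete[] dst;
    // if this alone stalls the main thread, the problem is allocation/CPU load,
    // not the PNG library; if not, add the real decode back step by step
}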
I think that your problem lies with access to the FilesToLoad object.
As I see it, this object is locked by your worker thread whenever that thread is actually processing it (which is every 10 ms according to your code) and by your main thread when it is trying to update the list. This probably means that your main thread has to wait a while to access it, and/or the OS has to sort out whatever race situations occur.
I would suggest that you either start up a worker thread just to load a file when you want one, setting a semaphore (or even a bool value) to show when it has completed, or use _beginthreadex and create a suspended thread for each file, then synchronise them so that as each one completes, the next in line is resumed.
If you want a thread to run permanently in the background, erasing and loading files, then you could always have it process its own message queue and use Windows messaging to pass data back and forth. This saves a lot of heartache regarding thread locking and race-condition avoidance.
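A sketch of that message-queue variant (the use of WM_APP and the lParam payload are my choices, not a fixed convention; note that PostThreadMessage fails until the worker has created its queue by calling GetMessage at least once):
#include <windows.h>
#include <process.h>

unsigned __stdcall fileLoadThreadFunc(void *)
{
    MSG msg;
    // the first GetMessage call creates this thread's message queue
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        if (msg.message == WM_APP) // our private "load this file" message
            reinterpret_cast<loadObj *>(msg.lParam)->loadFileAndMemcpy();
    }
    return 0;
}

// main thread startup:
unsigned threadId = 0;
_beginthreadex(NULL, 0, fileLoadThreadFunc, NULL, 0, &threadId);

// main thread, to queue a file (no mutex, no shared vector):
PostThreadMessage(threadId, WM_APP, 0, reinterpret_cast<LPARAM>(this));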

How to timeout a mysql++ query in C++

I am using mysql++ to connect to a MySQL database and perform a bunch of data queries. Because the tables I am reading from are constantly being written to, and because I need a consistent view of the data, I lock the tables first. However, MySQL has no concept of 'NOWAIT' in its lock query, so if the tables are locked by something else that keeps them locked for a long time, my application sits there waiting. What I want is for it to be able to return with something like 'Lock could not be obtained' and try again a few seconds later. My general attempt at this timeout is below.
If I run this after locking the table on the database, I get the message that the timeout was hit, but I don't know how to then get the mysql_query line to terminate. I'd appreciate any help/ideas!
volatile sig_atomic_t success = 1;

void catch_alarm(int sig)
{
    cout << "Timeout reached" << endl;
    success = 0;
    signal(sig, catch_alarm); // re-install the handler
}

// connect to db etc.
// *SNIP
signal(SIGALRM, catch_alarm);
alarm(2);
mysql_query(p_connection, "LOCK TABLES XYZ WRITE");
You can implement "cancel-like" behavior this way:
You execute the query on a separate thread that keeps running whether or not the timeout occurs. The timeout occurs on the main thread and sets a variable to "1", marking that it happened. Then you do whatever you want to do on your main thread.
The query thread, once the query completes, checks whether the timeout occurred. If it hasn't, it does the rest of the work it needs to do. If it HAS, it just unlocks the tables it has just locked.
I know it sounds a bit wasteful, but the lock/unlock period should be essentially instantaneous, and you get as close to the result you want as possible.
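A rough sketch of that arrangement with boost::thread (the flag handling and names are mine, and note that the MySQL connection must only ever be touched from the query thread):
#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <mysql.h>

boost::mutex flag_mutex;
bool timed_out = false; // set by the main thread when the timeout fires

void query_thread_func(MYSQL *conn)
{
    mysql_query(conn, "LOCK TABLES XYZ WRITE"); // blocks until granted
    boost::mutex::scoped_lock lock(flag_mutex);
    if (timed_out)
        mysql_query(conn, "UNLOCK TABLES"); // too late: give the lock right back
    else
    {
        // timeout did not fire: do the real work while holding the lock
    }
}

// main thread:
boost::thread worker(query_thread_func, p_connection);
if (!worker.timed_join(boost::posix_time::seconds(2)))
{
    boost::mutex::scoped_lock lock(flag_mutex);
    timed_out = true; // the worker will unlock and bail out
                      // whenever LOCK TABLES finally returns
}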
You could execute the blocking query in a different thread and never be bothered with the timeout. When some data arrives, you notify the thread that needs to know about the status of the transaction.
If I were writing from scratch I would do that, but this is a server application that we are just upgrading, rather than doing a large rework.
Instead of trying to fake transactions with table locks, why not switch to InnoDB tables, where you get actual transactions? Just make sure to set the default transaction isolation level to REPEATABLE READ.
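For comparison, the InnoDB version of the consistent-read part would collapse to something like this (statements only; error checking omitted):
mysql_query(p_connection, "SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ");
mysql_query(p_connection, "START TRANSACTION");
// ... run the reads: they see one consistent snapshot without blocking writers ...
mysql_query(p_connection, "COMMIT");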
As I said, it is not so easy to 'switch' or re-architect when this is a live, in-production system. I'm slightly frustrated that MySQL provides no way to check for locks, or to choose not to wait on a lock.
I don't know if this is a good idea in terms of resource usage and "best practices" and "cleanliness" and all the rest... but you have now repeatedly described the handcuffs that bind you in terms of re-architecting a "clean" system... so here goes.....
Could you open a new, separate connection just for sending the LOCK statement, and then close that connection when you catch the timeout alarm? By closing/destroying the connection that was dedicated to the LOCK statement, wouldn't that essentially "cancel" the LOCK? I am not certain events would unfold as I have described/guessed, but maybe it is something to test.
My experience so far indicates that closing a connection in which a query is running causes a segfault, so dispatching the query on a different connection wouldn't really help; that would also segfault.