Can glFenceSync be used cross thread or context boundary? - opengl

May I create a glFenceSync in one thread, and wait for it in another thread?
or
May I create a glFenceSync in one context and wait for it in another context?

May I create a glFenceSync in one thread, and wait for it in another thread?
Every GL function you can call requires a current GL context on the calling thread, and a GL context can be current to at most one thread at any point in time.
Technically, the answer to your question is still "yes", since you can issue a glFenceSync on one thread, move the context over to another thread and call gl[Client]WaitSync there - but that is probably not what you had in mind, and I also don't see an obvious use case for such a pattern.
May I create a glFenceSync in one context and wait for it in another context?
Sync objects are shareable in the GL, so if you create contexts that share objects, they will also share sync objects, and the spec explicitly allows waiting on sync objects of another context. Actually, it is even specified that there can be multiple simultaneous waits on a single sync object, and all of them will be unblocked when the sync object gets signaled (but in an implementation-dependent order).
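A minimal sketch of the cross-context case (context plumbing and error handling elided; this assumes two contexts created to share objects, and that the `GLsync` handle is handed from thread A to thread B with ordinary CPU-side synchronization):

```cpp
// Context A (producer), current on thread A:
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
glFlush();  // important: push the fence to the server; otherwise a
            // client wait in another context may block indefinitely

// Context B (consumer, shares objects with A), current on thread B:
GLenum r = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                            1000000000 /* 1 s timeout, in nanoseconds */);
// r == GL_ALREADY_SIGNALED or GL_CONDITION_SATISFIED on success;
// GL_TIMEOUT_EXPIRED means the fence was not reached in time.
glDeleteSync(fence);
```

Note that `GL_TIMEOUT_IGNORED` is only valid for the server-side `glWaitSync`; `glClientWaitSync` takes a finite timeout in nanoseconds.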

Related

QWaitCondition, except with manual reset? (Or creating QFuture objects outside of Qt Concurrent?)

Does Qt provide a synchronization primitive that behaves in much the same way as Concurrency::event from Microsoft's Concurrency Runtime?
Specifically, I would like wait() in thread A to return even if thread A does not call wait() until after thread B has already called wakeAll() (but before a "reset" function is called). Also, I'd like something where reset() and set() do not have to be called from the same thread.
Basically, if I did not need to have async operations run in a specific thread (in my case it basically feeding tasks to an OpenGL rendering thread) QFuture and Qt Concurrent would be perfect.
If not specifically provided, is there a way to emulate that functionality with Qt?
Thanks!
I thought that I needed a QFuture a few times in the past as well, but always ended up using signals and slots to pass messages between the threads, carrying the data I would have put in the QFuture as an argument. Especially when there's a QEventLoop at the bottom of my thread.
Without an event loop I usually end up doing it manually with QWaitCondition, QMutex and QMutexLocker.
So sadly I would say that there isn't any higher-level class that would fit what you describe.
So now you have the mutex and the wait condition.
Simply add a boolean flag, which you only access with the mutex locked.
When you call wakeAll(), also set the flag to true. Before calling wait(), check the flag first and skip the wait if it is already true. reset() is then simply setting the flag back to false.
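Qt's primitives map directly onto the standard ones (QMutex corresponds to std::mutex, QWaitCondition to std::condition_variable), so the flag-plus-wait-condition recipe above can be sketched in portable C++ like this (class and member names are mine):

```cpp
#include <condition_variable>
#include <mutex>

// Manual-reset event: set() wakes all current and future waiters
// until reset() is called. Same pattern as QMutex + QWaitCondition
// plus a bool flag, expressed with the standard library.
class ManualResetEvent {
public:
    void set() {
        std::lock_guard<std::mutex> lock(m_);
        flag_ = true;
        cv_.notify_all();          // QWaitCondition::wakeAll()
    }
    void reset() {
        std::lock_guard<std::mutex> lock(m_);
        flag_ = false;
    }
    void wait() {
        std::unique_lock<std::mutex> lock(m_);
        // Returns immediately if the flag is already set, which is
        // exactly the "wait() after wakeAll()" case from the question.
        cv_.wait(lock, [this] { return flag_; });
    }
private:
    std::mutex m_;                 // QMutex
    std::condition_variable cv_;   // QWaitCondition
    bool flag_ = false;
};
```

The predicate form of wait() also handles spurious wakeups, which QWaitCondition::wait() callers otherwise have to loop around manually.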

Can I implement a fair "wait on multiple events" with just events, mutexes, and semaphores?

On a platform that only has events[1], mutexes, and semaphores[2] can I create a fair "wait on multiple events" implementation that returns when any of the events[3] is signaled/set. I'm assuming the existing primitives are fair.
[1] Event is a "flag" that has 4 ops: Set(), Clear(), Wait(), and WaitAndClear(). If you Wait() on an unset event, you block until someone Set()'s it. WaitAndClear() is what it sounds like, but atomic. All waiters are awoken.
[2] I do not believe the system supports semaphores values going negative.
[3] I say "events", but it could be a new object type that uses any of those primitives.
For Windows, WaitForMultipleObjects with the bWaitAll parameter set to FALSE should work (it also includes a timeout option). I've also seen a similar wait function implemented for an in-house kernel used in an x86 (80186) embedded device. For such a kernel, if the maximum number of threads is fixed, then each event, semaphore, etc. can hold an array of task-control-block addresses for the threads pending on that object. Another option is a rule that only one thread may wait on any given event or semaphore (each object then holds either null or the address of the single pending task control block); in the case where multiple threads need to be triggered, multiple events or semaphores would be used.
You need one of the following:
A non-blocking event tester
A ready-made primitive, e.g. WaitForMultipleObjects
One thread per waited object, plus some overhead
If you can't have one of those, I don't think it's doable.
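One way to sketch the "new object type built from the same primitives" option from [3]: route every event's Set() through shared state guarded by one mutex, and let the waiter sleep on that shared state. This is illustrative standard C++ (mutex plus a wait primitive), not an API from the question's platform, and the linear scan from index 0 is deliberately simple rather than fair; a fair version would rotate the scan's starting index per wakeup.

```cpp
#include <condition_variable>
#include <mutex>
#include <vector>

// A group of manual-reset "events" that supports waiting on any of them.
struct EventGroup {
    std::mutex m;
    std::condition_variable cv;
    std::vector<bool> flags;     // one set/clear flag per event

    explicit EventGroup(int n) : flags(n, false) {}

    void set(int i) {
        std::lock_guard<std::mutex> lock(m);
        flags[i] = true;
        cv.notify_all();         // wake every waiter; each rechecks
    }

    void clear(int i) {
        std::lock_guard<std::mutex> lock(m);
        flags[i] = false;
    }

    // Blocks until at least one event is set; returns its index.
    int waitAny() {
        std::unique_lock<std::mutex> lock(m);
        int idx = -1;
        cv.wait(lock, [&] {
            for (int i = 0; i < (int)flags.size(); ++i)
                if (flags[i]) { idx = i; return true; }
            return false;
        });
        return idx;
    }
};
```

The key point is that the waiter never blocks on the individual events at all, so there is nothing to multiplex; the per-event state and the single wait channel live behind the same lock.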

C++11 deferred "thread" creation (i.e., specify thread function but do not wait physical thread to be created)

My goal is what I asked in the title: I want the calling thread not to wait for the child thread to be physically created and resumed when using the std::thread constructor with a thread function (the non-default constructors).
My problem hits (in Windows) when I try to create an std::thread object with a thread function during a DLL load.
It is a problem because (as far as I believe):
- The thread constructor tries to create a physical thread
- The constructor somehow (and unluckily for me) waits for the physical thread to resume (go live)
- Unfortunately, the Win API does not allow threads to resume within the LoadLibrary call if they are created during that call
- So I have a deadlock: LoadLibrary creates a thread, waits for it to resume, and Windows does not let it resume.
I can invent some solutions to the problem (e.g. a distinct thread, not using std::thread, that constructs the additional std::threads), but then I miss the whole point of using "only" std::thread for my threading needs :-).
However, it would be best if std::thread could be told not to wait for the physical thread to resume when it is constructed with a thread function (or lambda, or whatever).
Is there a way of doing this or I should go for work-arounds?
thanks
One more case where one would need the same:
In the fast path, I may lazily create an std::thread object, post some tasks to it (potentially many), and go on in the fast path without being delayed. I may not care when the child thread really gets physically created and resumes.
It would be a pity if one could not lazily have physical threads created in such fast paths, or during DllMain, etc.
The problem is not in the way you are creating the thread, but rather in the fact itself that you are creating a thread while the DLL is being loaded. While calls to CreateThread may be safe (as long as no waiting operations are performed by the launched thread), it is in general a bad idea to create threads during DllMain.
What you should do here is to export an initializer function and require loading modules to invoke it after your DLL has been loaded. The initializer function would then instantiate all the required objects and create all the necessary threads.
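A portable sketch of that split (all names are illustrative): the object and its task queue are constructed eagerly, e.g. while the DLL loads, but the physical std::thread is only created when the host calls an explicit init() after LoadLibrary has returned. Tasks posted before init() simply queue up.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

class DeferredWorker {
public:
    // Safe to call before init(): only touches the queue, no thread yet.
    void post(std::function<void()> task) {
        std::lock_guard<std::mutex> lock(m_);
        tasks_.push(std::move(task));
        cv_.notify_one();
    }
    // The exported initializer: call this outside DllMain.
    void init() {
        worker_ = std::thread([this] { run(); });
    }
    // Drains remaining tasks, then joins the worker.
    void shutdown() {
        { std::lock_guard<std::mutex> lock(m_); done_ = true; }
        cv_.notify_one();
        if (worker_.joinable()) worker_.join();
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m_);
                cv_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (tasks_.empty()) return;  // done_ set and queue drained
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::function<void()>> tasks_;
    bool done_ = false;
    std::thread worker_;
};
```

This also covers the questioner's fast-path case: post() never blocks on thread creation, because the physical thread is created exactly once, at a point the caller chooses.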
Also see this Q&A on StackOverflow.

Sending messages to a thread?

I need to implement, in Cocoa, a design that relies on multiple threads.
I started at the CoreFoundation level - I created a CFMessagePort and attached it to the CFRunLoop, but it was very inconvenient as (unlike on other platforms) it needs to have a (system-wide) unique name, and CFMessagePortSendRequest does not process callbacks back to the current thread while waiting. It's possible to create my own CFRunLoopSource object, but building my own thread-safe queue seems like overkill.
I then switched from using POSIX threads to NSThreads, calling performSelector:onThread: to send messages to other threads. This is far easier to use than the CFMessagePort mechanism, but again, performSelector:onThread: does not allow the main thread to send messages back to the current thread - and there is no return value.
All I need is a simple - inprocess - mechanism (so I hopefully don't need to invent schemes to create 'unique' names) that lets me send a message (and wait for a reply) from thread A to thread B, and, while waiting for the message, allow thread B to send a message (and wait for a reply) to/from thread A.
A simple: A calls B re-entrantly calls A situation that's so usual on a single thread, but is deadlock hell when the messages are between threads.
Use -performSelector:onThread:withObject:waitUntilDone:. The object you pass would be something that has a property or other "slot" that you can put the return value in, e.g.
SomeObject* retObject = [[SomeObject alloc] init];
// computeResult: stands in for whatever method fills in retObject
[anotherObject performSelector: @selector(computeResult:)
                      onThread: whateverThread
                    withObject: retObject
                 waitUntilDone: YES];
id retValue = [retObject retValue];
If you want to be really sophisticated about it, instead of passing an object of a class you define, use an NSInvocation object and simply invoke it on the other thread (make sure not to invoke the same NSInvocation on two threads simultaneously) e.g.
[invocation performSelectorOnMainThread: @selector(invoke) withObject: nil waitUntilDone: YES];
Edit
If you don't want to wait for the processing on the other thread to complete and you want a return value, you cannot avoid the other thread calling back into your thread. You can still use an invocation, e.g.
[comObject setInvocation: myInvocation];
[comObject setCallingThread: [NSThread currentThread]];
[someObject performSelectorOnMainThread: @selector(runInvocation:) withObject: comObject waitUntilDone: NO];
// in someObject's implementation
-(void) runInvocation: (ComObject*) comObject
{
    [[comObject invocation] invoke];
    [self performSelector: @selector(invocationComplete:)
                 onThread: [comObject callingThread]
               withObject: [comObject invocation]
            waitUntilDone: NO];
}
If you don't like to create a new class to pass the thread and the invocation, use an NSDictionary instead e.g.
comObject = [NSDictionary dictionaryWithObjectsAndKeys: invocation, @"invocation", [NSThread currentThread], @"thread", nil];
Be careful about object ownership. The various performSelector... methods retain both the receiver and the object until they are done but with asynchronous calls there might be a small window in which they could disappear if you are not careful.
Have you looked into Distributed Objects?
They're generally used for inter-process communication, but there's no real reason it can't be constrained to a single process with multiple threads. Better yet, if you go down this path, your design will trivially scale to multiple processes and even multiple machines.
You are also given the option of specifying behaviour by means of additional keywords like oneway, in, out, inout, bycopy and byref. An article written by David Chisnall (of GNUstep fame) explains the rationale for these.
All that said, the usual caveats apply: are you sure you need a threaded design, etc. etc? There are alternatives, such as using NSOperation (doc here) and NSOperationQueue, which allow you to explicitly state dependencies and let magic solve them for you. Perhaps have a good read of Apple's Concurrency Programming Guide to get a handle (no pun intended) on your options.
I only suggest this as you mentioned trying traditional POSIX threads, which leads me to believe that you may be trying to apply knowledge gleaned from other OSes and not taking full advantage of what OS X has to offer.

Difference between event object and condition variable

What is the difference between event objects and condition variables?
I am asking in context of WIN32 API.
Event objects are kernel-level objects. They can be shared across process boundaries, and are supported on all Windows OS versions. They can be used as their own standalone locks to shared resources, if desired. Since they are kernel objects, the OS has limitations on the number of available events that can be allocated at a time.
Condition Variables are user-level objects. They cannot be shared across process boundaries, and are only supported on Vista/2008 and later. They do not act as their own locks, but require a separate lock to be associated with them, such as a critical section. Since they are user-mode objects, the number of available variables is limited only by available memory. When a Condition Variable is put to sleep, it automatically releases the specified lock object so another thread can acquire it. When the Condition Variable wakes up, it automatically re-acquires the specified lock object again.
In terms of functionality, think of a Condition Variable as a logical combination of two objects working together - a keyed event and a lock object. When the Condition Variable is put to sleep, it resets the event, releases the lock, waits for the event to be signaled, and then re-acquires the lock. For instance, if you use a critical section as the lock object, SleepConditionVariableCS() is similar to a sequence of calls to ResetEvent(), LeaveCriticalSection(), WaitForSingleObject(), and EnterCriticalSection(). Whereas if you use an SRW lock, SleepConditionVariableSRW() is similar to a sequence of calls to ResetEvent(), ReleaseSRWLock...(), WaitForSingleObject(), and AcquireSRWLock...().
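The release-wait-reacquire contract is the same for the portable std::condition_variable, which makes it easy to demonstrate: a second thread can acquire the mutex even while the first thread is "holding" it inside wait(), because the sleeping waiter has released it.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex m;
std::condition_variable cv;
bool ready = false;

// Waiter: wait() atomically releases m while sleeping and re-acquires
// it before returning - the Leave/Release -> Wait -> Enter/Acquire
// sequence the answer spells out for the Win32 equivalents.
void waiter() {
    std::unique_lock<std::mutex> lock(m);
    cv.wait(lock, [] { return ready; });
    // lock is held again at this point
}

// Setter: locks m without deadlocking against the sleeping waiter,
// flips the condition, and signals.
void setter() {
    { std::lock_guard<std::mutex> lock(m); ready = true; }
    cv.notify_one();
}
```

The predicate passed to wait() is the reason a condition variable needs the associated lock at all: the flag it tests is ordinary shared data, protected by the mutex, not by the condition variable itself.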
They are very similar, but event objects work across process boundaries, whereas condition variables do not. From the MSDN documentation on condition variables:
Condition variables are user-mode objects that cannot be shared across processes.
From the MSDN documentation on event objects:
Threads in other processes can open a handle to an existing event object by specifying its name in a call to the OpenEvent function.
The most significant difference is that an Event object is a kernel object and can be shared across processes (as long as it is still alive when processes/threads try to acquire it), whereas a Condition Variable is a user-mode object, which is lightweight (it is only the size of a pointer and has nothing to release after use) and has better performance.
Typically, a condition variable is used along with a lock, since we need to keep the shared data properly synchronized. Under the hood, Condition Variables are built on keyed events, which were improved in Vista.
Joe Duffy has a blog post, http://joeduffyblog.com/2006/11/28/windows-keyed-events-critical-sections-and-new-vista-synchronization-features/, that explains this in more detail.