I develop a plugin for the Compiz window manager. I want to draw a transformed window texture and send events to that window. When a transformed window is rendered I grab the pointer to take control of all X events; the main part of the Compiz function which grabs the pointer looks like this:
status = XGrabPointer (privateScreen.dpy,
                       privateScreen.eventManager.getGrabWindow(), true,
                       POINTER_GRAB_MASK,
                       GrabModeAsync, GrabModeAsync,
                       privateScreen.rootWindow(), cursor,
                       CurrentTime);
When the pointer is grabbed I recalculate the button press coordinates and use XSendEvent to send events to the destination window. This works fine for a Google Chrome window or for a simple X window application like this: link. Unfortunately it doesn't work correctly for a window which performs OpenGL rendering - I have tested SDL and GLFW. Such a window receives click events, but with different parameters (xbutton.x_root, xbutton.y_root, xbutton.x, xbutton.y) than I specified in XSendEvent. Every time I send an event, those parameters contain the same values, which seem to be the mouse position from before XGrabPointer was called.
When the pointer is not grabbed, events (from XSendEvent) are received correctly. There must be some specific relation between XSendEvent, XGrabPointer and a window which performs OpenGL rendering. Moreover, client (window) implementations may differ, since my code fails only for those specific windows - or maybe I am doing something wrong?
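For reference, the sending part looks roughly like this (a sketch, not my exact plugin code; dpy, targetWindow and the recalculated coordinates tx, ty, txRoot, tyRoot stand in for the real values):

XButtonEvent ev = {0};
ev.type = ButtonPress;
ev.display = dpy;
ev.window = targetWindow;   // client window that should receive the click
ev.root = rootWindow;
ev.subwindow = None;
ev.time = CurrentTime;
ev.x = tx;                  // recalculated window-relative coordinates
ev.y = ty;
ev.x_root = txRoot;         // recalculated root-relative coordinates
ev.y_root = tyRoot;
ev.button = Button1;
ev.same_screen = True;
XSendEvent (dpy, targetWindow, True, ButtonPressMask, (XEvent *) &ev);
XFlush (dpy);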
Edit1
Let's consider the following example: I have a fullscreen window, and I can use a plugin to draw a transformed window texture (for example with the scale transformation (0.5, 0.5, 1.0)), but X11 still sees a fullscreen window, so when I click on the region outside the transformed texture, events go to the window. When I grab the pointer in the plugin I am the only receiver of all events; I can then recalculate coordinates based on my window transformation and send them to the window.
Edit2
When I use Freeglut, all events are delivered correctly to the destination window while the pointer is grabbed. There must be some differences between the library implementations.
OpenGL is not concerned with input events; it just draws things, without even knowing what X11 or pointer events are.
This must be something both SDL and GLFW do in their window setup.
However, I wonder why you are grabbing the pointer at all. The X Composite extension, which is the foundation on which Compiz and other compositing WMs are built, already has a dedicated API for pointer input transformation; see RedirectCoordinate and TransformCoordinate in http://www.x.org/archive/X11R7.5/doc/compositeproto/compositeproto.txt. You should use those functions instead of messing with grabbing the pointer. Grabbing the pointer is a really, really bad idea.
Okay, it seems that the copy of the X Composite extension spec I linked (and have on my computer) is an earlier draft that still has RedirectCoordinate in it. There's a newer version (which unfortunately carries the same date in its header - WTF?) from which coordinate redirection has been removed.
SDL updates the mouse position when handling the MouseMotion event; it doesn't use the coordinates stored in the ButtonPress event. When a window manager grabs the pointer and sends a ButtonPress event to the SDL window, the mouse position inside the receiver is not updated. Here is an example solution (the following code should be added after the ButtonPress label in SDL_x11events.c):
if (xevent.xany.send_event)
{
    /* Synthetic event: take the coordinates from the event itself,
       since no MouseMotion event has updated the cached position. */
    SDL_Mouse *mouse = SDL_GetMouse();
    mouse->x = xevent.xbutton.x;
    mouse->y = xevent.xbutton.y;
}
Related
I have a problem handling GLFW poll events. As far as I know, all user input events are handled via callbacks or by constantly checking keyboard/mouse state. The latter is not so efficient and can even result in missing some input (e.g. when a button is pressed and then released between checks). What is more, some events, like window resizing, cannot be handled without callbacks.
So, the problem is that whenever the user starts resizing the window (presses a mouse button but doesn't move the mouse), the app seems to freeze. This is with the resize callback enabled and defined correctly (even when copied straight from the GLFW API docs). And the problem is not that the window doesn't redraw: redrawing in the callback can be done by creating and calling your own render() function from the callback.
The actual problem is that even when I handle the resize event properly and redraw in the callback, there is still some lag. The lag occurs after the mouse is pressed on the decorated window border while the mouse is not moving. Here's a demonstration (the button click is highlighted in green):
Sorry for the messed-up GIF. All callbacks listed in the GLFW API are enabled and handled (window, input, joystick and monitor callbacks) and redraw is called in each one. It seems that I'm either missing some callback or GLFW just works like that.
According to this answer, this can't be done without threading:
That only works when the user moves the mouse while holding - just holding left-click on the resize window part still stalls. To fix that, you need to render in a separate thread in addition to this. (No, you can't do that without threading. Sorry, this is how GLFW works, no one except them can change it.)
So, the questions are:
How can I fix this issue without threading? If I can't, I guess I could emulate resizing with different cursor shapes and resize zones, or something like that...
If this is impossible to solve in GLFW, do alternatives to GLFW have this issue?
Are there any problems with GLFW similar to this one?
GLFW is not at fault here. It's how the operating system handles certain user input events, such as a mouse-down on the resize handles of a window's decoration, or moving the whole window.
See this answer for a more elaborate detail: Win32: My Application freezes while the user resizes the window
GLFW uses the standard Windows PeekMessage -> TranslateMessage/DispatchMessage loop which you will find in any GUI Windows application. This will get invoked when you call glfwPollEvents() and it processes all Window event messages that the OS has accumulated so far for all windows in this process. After all messages so far have been processed, the call to glfwPollEvents() will return and will allow your own window/game loop to continue.
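For illustration, that standard pump looks roughly like this (generic Win32, not GLFW's actual source):

MSG msg;
while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
    TranslateMessage(&msg);   // generate WM_CHAR etc. from raw key messages
    DispatchMessage(&msg);    // route the message to the window procedure
}
// once the queue is empty, control returns to the game loop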
What happens is that once the user clicks down on the window decoration's resize handles, the call to glfwPollEvents() effectively blocks inside the OS itself, so that the OS / window manager can intercept the mouse and keyboard messages to do its window resizing/reshaping thing.
I'm afraid that even though Windows informs the process about the start of a window resize or move action (after which the OS takes control of the window message processing), and even though GLFW already handles these events internally, GLFW currently does not notify the client application about them. It would be possible, though, for GLFW to provide an appropriate event callback, so that the application could start a timer or thread only for as long as the window resize/move action lasts (as is also mentioned in the linked Stack Overflow answer).
So, the only thing that you can do in order to keep rendering while the user holds onto the resize handles or while the user moves the window around, is to render in a separate thread.
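A minimal sketch of that workaround (the glClear call is a placeholder for your own render() code; note that a GL context may only be current on one thread at a time, which is why the main thread never touches it here):

#include <GLFW/glfw3.h>
#include <atomic>
#include <thread>

static std::atomic<bool> running{true};

static void renderLoop(GLFWwindow *window)
{
    glfwMakeContextCurrent(window);   // the context lives on this thread only
    while (running) {
        glClear(GL_COLOR_BUFFER_BIT); // placeholder for your render() code
        glfwSwapBuffers(window);
    }
}

int main()
{
    glfwInit();
    GLFWwindow *window = glfwCreateWindow(640, 480, "demo", nullptr, nullptr);
    std::thread renderer(renderLoop, window);

    while (!glfwWindowShouldClose(window))
        glfwWaitEvents();   // may block during a resize/move;
                            // the render thread keeps drawing regardless
    running = false;
    renderer.join();
    glfwTerminate();
}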
I'm trying to draw on the screen (the whole screen, on top of every other window) using GDI+.
I've passed NULL to GetDC to get an HDC for the screen, then used that to create a Graphics object, and used DrawRectangle to draw rectangles on the screen.
Everything works... except the inside of the rectangle won't update.
For example, if I draw it over a command prompt and then move the command prompt, the inside of the rectangle remains black.
I expect to see what's under the rectangle.
Here's the code that's doing the drawing:
Pen BluePen(Color(255, 0, 255, 0), 2);
Graphics graphics(screenDC);
graphics.DrawRectangle(&BluePen, myRect);
Pretty simple, so is there something I have to do to get the inside of the rectangle to update when the screen does? Or to get it truly transparent?
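For completeness, the surrounding setup looks roughly like this (a sketch; the rectangle's values are made up and error handling is omitted):

#include <windows.h>
#include <gdiplus.h>
#pragma comment(lib, "gdiplus.lib")
using namespace Gdiplus;

int main()
{
    ULONG_PTR token;
    GdiplusStartupInput input;
    GdiplusStartup(&token, &input, NULL);

    HDC screenDC = GetDC(NULL);   // NULL: device context for the whole screen
    {
        // GDI+ objects must be destroyed before GdiplusShutdown runs.
        Pen bluePen(Color(255, 0, 255, 0), 2);   // ARGB: fully opaque green, 2 px
        Graphics graphics(screenDC);
        Rect myRect(100, 100, 200, 150);         // x, y, width, height (made up)
        graphics.DrawRectangle(&bluePen, myRect);
    }
    ReleaseDC(NULL, screenDC);
    GdiplusShutdown(token);
    return 0;
}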
================= EDIT =================
Well I had given up on this, and assumed it wasn't possible, until...I realized the Inspect tool that comes with the Windows SDK does this perfectly.
I would like to recreate something similar to its highlight rectangle; if I select a window (such as Firefox) and then bring Inspect into focus, I can move it around freely with everything being updated perfectly.
There's not even any flickering.
So...does anyone know how Inspect manages to do this?
Also answers in GDI instead of GDI+ are fine...
In Windows, the screen surface (and the window surfaces on it) are volatile, like drawings in sand. The "overlapping" of windows and the repainting of uncovered surfaces is an illusion produced by proper event management.
Everything that is drawn remains there until something else is drawn over it.
"Uncovering" a surface makes the window owning that surface receive a WM_PAINT message. It's up to that window procedure to react to that message by repainting everything that is supposed to be under it.
Now, unless you somehow intercept the WM_PAINT message that is sent to the desktop window, you have practically no chance to know that the desktop needs a repaint, so your paint code will not be called and no repaint will happen. Or rather, the repaint happens through the desktop window's own updating code, which is not aware of your drawing.
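To illustrate the model: every well-behaved window repaints itself in reaction to WM_PAINT, along these lines (a generic Win32 sketch, not specific to the question's code):

#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_PAINT: {
        PAINTSTRUCT ps;
        HDC hdc = BeginPaint(hwnd, &ps);
        // Repaint everything that intersects ps.rcPaint; the area is
        // considered valid again once EndPaint runs.
        FillRect(hdc, &ps.rcPaint, (HBRUSH) (COLOR_WINDOW + 1));
        EndPaint(hwnd, &ps);
        return 0;
    }
    default:
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }
}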
I started building my program with SDL. I am using SDL to render a frame buffer live on the screen and also looking for user input from the keyboard and the mouse.
I have been using the following code to display a 5-6-5 RGB frame buffer on the screen.
SDL_Surface* screen = SDL_SetVideoMode(width, height, bitsPerPixel, SDL_SWSURFACE);
SDL_Surface* surface = SDL_CreateRGBSurfaceFrom(frameBuffer, width, height, depth, lineWidth, Rmask, Gmask, Bmask, Amask);
SDL_BlitSurface(surface, NULL, screen, NULL);
SDL_Flip(screen);
Note that Rmask, Gmask, Bmask and Amask are all 0; I am using SDL's default masking.
For the keyboard and mouse, I have been using the following code:
while (run) {
    SDL_WaitEvent(&event);
    switch (event.type) {
    // Key is pressed
    case SDL_KEYDOWN: {   // braces needed: the case body declares a variable
        map<int, int>::iterator it = SDLToAndroid.find(event.key.keysym.sym);
        if (it != SDLToAndroid.end()) { /* ... */ }
        break;
    }
    case SDL_MOUSEBUTTONDOWN:
        // Left click
        if (event.button.button == SDL_BUTTON_LEFT) { /* ... */ }
        break;
    }
}
For key presses I look them up in a map to convert them to other key codes.
This works great, but I would like to use the UI capabilities of GTK+. I started embedding SDL into my GTK+ window using a putenv environment variable (the SDL_WINDOWID hack), but I have two problems:
a) SDL events are not received with this solution.
b) The frame buffer display is always put on the top left of the window (0,0) but I would actually like this display to show up somewhere in the middle on my window, with some buttons above and below.
I am thinking of getting rid of SDL and just using GTK+ itself. There are a few things I would like to ask, as I am very new to GTK+.
Could you please tell me what kind of GtkWidget should be used to display my frame buffer inside a window? Does anyone know what function I could use in GTK to perform the same task as SDL_CreateRGBSurfaceFrom() (if that is even possible)? And finally, could you point me to a way to get the x and y coordinates within the GtkWidget that is clicked or moved in, rather than the coordinates within the whole window, as well as keyboard input?
(I have found some solutions for the mouse and keyboard. I've seen "configure-event" used to get the x and y values, but I am not sure whether this event works on any GtkWidget. For the keyboard, what I have found is to use "activate", but I'd like confirmation that this is the right approach.)
a) SDL events are not received with this solution.
As the page mentions, the events are not received by the SDL event loop but by the Gtk event loop, so you can try capturing events through the Gtk event loop.
b) The frame buffer display is always put on the top left of the window (0,0) but I would actually like this display to show up somewhere in the middle on my window, with some buttons above and below.
This is possible. The hack you are using exploits the fact that SDL (when the hardware is not used directly) and Gtk both depend on the windowing system to display onto a window - on most Linux desktops, that means X11. Window creation is done by X, which assigns an XID, and SDL reuses the XID of the window created for the GtkWidget. Currently you are using the GtkWindow's corresponding X window; if you instead use a GtkDrawingArea's X window, you can place the display wherever you wish. To get the events, register a callback. I tried to create a mash-up where clicking "Start SDL Animation" starts the SDL animation, and clicking in the area of the animation triggers the "button-release-event", which prints the relative x and y coordinates to the console; key events, however, are not received. Hopefully you can build on this or use it for future reference.
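The core of that mash-up looks roughly like this (a sketch assuming GTK+ 2 on X11; the widget must already be realized so that it has an X window, and the function name is illustrative):

#include <gtk/gtk.h>
#include <gdk/gdkx.h>
#include <SDL.h>
#include <stdio.h>
#include <stdlib.h>

/* Point SDL at the X window of an already-realized GtkDrawingArea. */
static void embed_sdl(GtkWidget *drawing_area)
{
    static char buf[64];   /* putenv keeps the pointer, so no stack buffer */
    snprintf(buf, sizeof buf, "SDL_WINDOWID=%lu",
             (unsigned long) GDK_WINDOW_XID(gtk_widget_get_window(drawing_area)));
    putenv(buf);           /* must be set before SDL_Init */
    SDL_Init(SDL_INIT_VIDEO);
}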
Could you please tell me what kind of GtkWidget should be used to display my frame buffer inside a window?
You can look at GtkDrawingArea, which is meant for custom UI elements, or at GtkImage with a GdkPixbuf, as a couple of the alternatives. You can study the gtk-demo code or a Cairo animation sample to see how to proceed with your requirements.
Does anyone also please know what function I could use in GTK to perform the same task as SDL_CreateRGBSurfaceFrom() (if it is even possible)?
For this you can have a look at GdkPixbuf. It is possible to set raw data on a GdkPixbuf, and the pixbuf can then be used to create a GtkImage (or similar) for displaying.
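Roughly like this (note that GdkPixbuf only takes 8-bit-per-channel RGB(A) data, so a 5-6-5 frame buffer would first have to be converted to 24-bit RGB; frameBuffer, width and height are from the question):

GdkPixbuf *pixbuf = gdk_pixbuf_new_from_data(
        frameBuffer,            /* raw pixels, assumed converted to 24-bit RGB */
        GDK_COLORSPACE_RGB,
        FALSE,                  /* no alpha channel */
        8,                      /* bits per channel */
        width, height,
        width * 3,              /* rowstride in bytes */
        NULL, NULL);            /* no destroy notification */
GtkWidget *image = gtk_image_new_from_pixbuf(pixbuf);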
And finally, would you please link me to a way to get the x and y coordinates in the GtkWidget that gets clicked or moved in, and not the coordinates of the whole window, as well as keyboards input?
For mouse events you need to register a callback for either "button-press-event" or "button-release-event". The signal callback has a GdkEvent parameter; typecast it in the callback to GdkEventButton and read the information you need, such as the relative x and y coordinates.
For keyboard events you need to register a callback for either "key-press-event" or "key-release-event". The signal callback has a GdkEvent parameter; typecast it in the callback to GdkEventKey and read the information you need. Additionally, for keyboard events the widget must be able to grab focus, which you can enforce through the call gtk_widget_set_can_focus.
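A rough sketch of the wiring, for a GtkDrawingArea named area (the names here are illustrative):

static gboolean on_button(GtkWidget *widget, GdkEventButton *event, gpointer data)
{
    g_print("click at %.0f, %.0f\n", event->x, event->y);  /* widget-relative */
    return FALSE;
}

static gboolean on_key(GtkWidget *widget, GdkEventKey *event, gpointer data)
{
    g_print("key 0x%x\n", event->keyval);
    return FALSE;
}

/* during setup: */
gtk_widget_add_events(area, GDK_BUTTON_PRESS_MASK | GDK_KEY_PRESS_MASK);
gtk_widget_set_can_focus(area, TRUE);   /* needed to receive key events */
g_signal_connect(area, "button-press-event", G_CALLBACK(on_button), NULL);
g_signal_connect(area, "key-press-event", G_CALLBACK(on_key), NULL);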
Hope this helps!
I want to have a red line instead of the mouse pointer in my application (written in C++ with OpenGL). For example, when I move the mouse over an OpenGL window, I would like the cursor to become a red line. How can I do that?
Also, how can I call the mouse-related functions (OpenGL) for that line and specify the functionality?
As Nicol Bolas said in his comment, OpenGL knows nothing of the mouse. You'll need to interact with the windowing system one way or another (via direct Win32/X11/etc. API or via a portable windowing library a la Qt, wxWidgets, etc) to monitor mouse position.
If the cursor you're trying to draw is a bitmap, your best bet is likely to handle mouse enter/leave events sent to your window and respond to them by using an API function to change the cursor. This will handle automatically updating the cursor as the mouse moves around and will add minimal overhead to your application (windowing system cursors generally draw in an overlay plane that avoids sending redraw events to your window every time the mouse moves).
If you have a more procedural description of your cursor (that is, you intend to draw it with OpenGL drawing commands), you'll instead want to handle the mouse enter/leave events by using HideCursor()/ShowCursor() commands or the equivalent to turn off the windowing system's native cursor when the mouse is over your window. Then, you'll hook the mouse move callbacks and redraw your scene, adding whatever commands you need to draw the cursor at the position specified in the mouse move event.
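For example, with GLFW and legacy fixed-function GL, the second approach might look like this (a sketch; any windowing layer with equivalent hide-cursor and mouse-move hooks will do):

#include <GLFW/glfw3.h>

static double cursorX, cursorY;   // cursor position tracked by us

static void cursor_pos_callback(GLFWwindow *w, double x, double y)
{
    cursorX = x;
    cursorY = y;
}

int main()
{
    glfwInit();
    GLFWwindow *window = glfwCreateWindow(640, 480, "cursor demo", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_HIDDEN); // hide native cursor
    glfwSetCursorPosCallback(window, cursor_pos_callback);

    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);

        // Map pixel coordinates to the default [-1, 1] clip space.
        int w, h;
        glfwGetWindowSize(window, &w, &h);
        float x = (float)(cursorX / w * 2.0 - 1.0);
        float y = (float)(1.0 - cursorY / h * 2.0);

        // Draw a short red vertical line where the cursor is.
        glColor3f(1.0f, 0.0f, 0.0f);
        glBegin(GL_LINES);
        glVertex2f(x, y);
        glVertex2f(x, y - 0.1f);
        glEnd();

        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}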
The first approach is definitely preferred for performance and latency reasons - but there are some cursor types (think full-screen crosshairs) that can't be accommodated that way.
I want to be able to render to an X Window given just its id.
In this case I have a window created in python by gtk.
I can get the window ID of a gtk.Drawable and pass it into my C Python module, but can I then make OpenGL calls to render to it?
I am aware of gtkglext, but would rather not use it if possible.
Update 1:
OK, so (rather obviously, now that I see it) you just do an XCreateWindow with, as its parent, the Window id that you get from gtk.window.xid, using the correct flags for an OpenGL window, and hey presto.
The only problem is that I can't get it to work unless there are multiple widgets in the window; it seems that otherwise the xid represents a window that covers the entire toplevel window. Not sure how to rectify this.
Update 2:
It turns out that if you have a GL window that is the same size as the toplevel, the toplevel window will not get expose events until the GL window has its buffers swapped. You just have to keep swapping the buffers and things will be fine.
Update 3:
To answer #babele's comment:
This page in the Python GTK docs says how to make a gtk window from an existing xid. After that you just have to remember to keep calling glXSwapBuffers for that window (if it is an OpenGL buffered window; otherwise it should just work when you use window_foreign_new).
So the process goes:
Create a gtk widget that will contain your OpenGL window (a DrawingArea is a good choice - you can't use e.g. a label, as it won't have its own xid)
Get the widget's gtk.gdk.Window (docs)
Get the xid from the gtk.gdk.Window (call this window W1)
Pass it to your C/C++ code
Create the opengl capable window (W2) as a child of W1
Pass the xid of W2 back to python
Use window_foreign_new with W2's xid to create the new gtk.gdk.window object
Each time you call glXSwapBuffers on W2 gtk should then be able to react to expose events.
One bit that really threw me is that if W2 covers the whole of W1, then W1 won't receive events until W2's buffers get swapped. This is particularly confusing if W1 is a top-level window, as it can happen that nothing appears on your screen at all (the window is there, but it shows whatever is behind it until it gets painted to, which won't happen until it gets an expose event).
Also note that you'll have to manage the resizing of W2 manually by connecting to the gtk resize events: connect to this signal, then call this function in the handler and pass the results to your C/C++ module, where you can resize W2 appropriately. It's a good idea to request a minimum size.
You don't need to create a new window; you can pass an existing window to glXMakeCurrent().
In your current setup you'd need to:
XGetWindowAttributes() to retrieve the visual the window was created with
glXCreateContext() using this visual
glXMakeCurrent() using this context and the window id
However, this is doing it backwards: it forces OpenGL to use the visual that the window was created with. This often works, because the default visual usually has sane GLX properties, but a correct application will call glXChooseVisual with the desired GLX properties and then use the resulting visual to create the window.
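Put together, that sequence looks roughly like this (a sketch; dpy and win are the Display* and the Window id handed over from Python, and error handling is omitted):

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <GL/glx.h>

void render_to_existing_window(Display *dpy, Window win)
{
    XWindowAttributes attr;
    XGetWindowAttributes(dpy, win, &attr);

    /* Look up the XVisualInfo matching the visual the window was created with. */
    XVisualInfo tmpl;
    tmpl.visualid = XVisualIDFromVisual(attr.visual);
    int n;
    XVisualInfo *vi = XGetVisualInfo(dpy, VisualIDMask, &tmpl, &n);

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);

    /* ... OpenGL drawing calls go here ... */

    glXSwapBuffers(dpy, win);
    XFree(vi);
}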