I'm currently creating a 2D game using C++ and OpenGL. I was wondering whether anyone could explain the best way to change the cursor from the default mouse icon. I am creating a top-down shooting game and would therefore like the cursor to be displayed as a crosshair instead.
You can either use your OS-specific function for manually changing the mouse pointer icon (and the "hot spot" of that icon), or you can use the OS-specific functions for hiding the mouse pointer and manually draw your own image at the position of the last mouse-move event your application received.
Alternatively, instead of using OS-specific functions, you can use a cross-platform API that wraps those kinds of functions for you (SDL, SFML, Qt, to name a few).
OpenGL doesn't have any functions that specifically operate on mouse pointers - that'd be a windowing API thing, not a 3D graphics thing. OpenGL only deals with drawing graphics.
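For illustration, here is a minimal sketch of the cross-platform route using SDL2 (one of the libraries mentioned above); the helper function name is just an example:

#include <SDL.h>

// Swap the default arrow for the system-provided crosshair cursor.
SDL_Cursor* useCrosshairCursor()
{
    SDL_Cursor* crosshair = SDL_CreateSystemCursor(SDL_SYSTEM_CURSOR_CROSSHAIR);
    if (crosshair)
        SDL_SetCursor(crosshair);
    // Keep the cursor object alive while it is in use and release it
    // with SDL_FreeCursor() on shutdown.
    return crosshair;
}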
If you are using GLUT (which isn't 'OpenGL' but an add-on library), you can call:
glutSetCursor(GLUT_CURSOR_NONE); // hides the cursor in the current window; GLUT_CURSOR_CROSSHAIR swaps it for a crosshair instead
So I'm currently coding a snake game, and I need to draw the first pixel that indicates the head of the snake (positioned in the middle of the window). But I can't seem to find any function that draws on the screen. I've tried using DrawRectang and DrawPixel.
Any help, please?
wxWidgets has the capability to custom-draw a widget/window (or a small invalidated part of one) through its own drawing API.
This is usually used for customized buttons or other controls, graphs, etc. You handle EVT_PAINT (wxPaintEvent), in which you create a DC ("device context") to draw with. Besides the automatic repaints on creation or size changes, you can force a redraw with wxWindow::Refresh or wxWindow::RefreshRect (for a small part), for example from a timer.
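For illustration, a minimal sketch of that approach; the class and member names here are made up, not from the question:

#include <wx/wx.h>

// A panel that draws a single "snake head" square in the middle of its
// client area from the EVT_PAINT handler.
class SnakeCanvas : public wxPanel
{
public:
    explicit SnakeCanvas(wxWindow* parent) : wxPanel(parent)
    {
        Bind(wxEVT_PAINT, &SnakeCanvas::OnPaint, this);
    }

private:
    void OnPaint(wxPaintEvent&)
    {
        wxPaintDC dc(this);               // the DC is only valid inside EVT_PAINT
        const wxSize size = GetClientSize();
        const int cell = 10;              // assumed size of one snake segment
        dc.SetBrush(*wxGREEN_BRUSH);
        dc.DrawRectangle(size.GetWidth() / 2 - cell / 2,
                         size.GetHeight() / 2 - cell / 2,
                         cell, cell);
    }
};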
Note that the performance and capabilities are fairly limited. You can also use OpenGL or Direct3D, or various higher-level libraries, with wxWidgets; the native platform window handle is obtainable through wxWindow::GetHandle.
I need to create a GUI with a file menu and a menu in which the user can input parameters. The parameters are then used for drawing rectangles on a canvas which is part of the application window. Is there a way to confine the OpenGL subwindow to one part of the window and the parameter input to the other? The application needs to be written in C++.
Is it possible to create a GUI with Qt and draw the rectangles in the same window using OpenGL? If not, what is the common way to integrate a GUI with OpenGL (or any other graphics library that lets me draw rectangles from points as easily as possible)?
EDIT: I am not sure if OpenGL is necessary or whether there is a way to paint the rectangles on the canvas like you can in Java with paintComponent().
I have never used Qt before, so I am not aware of its capabilities.
You can use an OpenGL window on its own, or embed it in a regular main window. The previous example (in the first answer) shows how to use an OpenGL window in Qt on its own, without communicating with other Qt components (menu, toolbar, and so on). But you can also add an OpenGL window to a main window, like any other widget, and use it alongside other widgets. This example can help you.
Yes, using OpenGL together with Qt is absolutely possible. There is even an example for that, and Qt provides classes for a more object-oriented way of using OpenGL. Have a look here (section "OpenGL and OpenGL ES Integration") for more details.
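As a rough sketch of how an OpenGL viewport and ordinary widgets can share one window (Qt 5/6 widgets assumed; all names below are illustrative, not from the linked example):

#include <QApplication>
#include <QFormLayout>
#include <QHBoxLayout>
#include <QMainWindow>
#include <QMenuBar>
#include <QOpenGLWidget>
#include <QSpinBox>
#include <QWidget>

int main(int argc, char** argv)
{
    QApplication app(argc, argv);

    QMainWindow window;
    window.menuBar()->addMenu("&File");      // the file menu

    auto* central = new QWidget;
    auto* layout  = new QHBoxLayout(central);

    // Left: ordinary Qt widgets for parameter input.
    auto* panel = new QWidget;
    auto* form  = new QFormLayout(panel);
    form->addRow("Width",  new QSpinBox);
    form->addRow("Height", new QSpinBox);

    // Right: the OpenGL canvas. In a real program you would subclass
    // QOpenGLWidget and override paintGL() to draw the rectangles.
    auto* canvas = new QOpenGLWidget;

    layout->addWidget(panel, 1);
    layout->addWidget(canvas, 3);
    window.setCentralWidget(central);
    window.resize(800, 600);
    window.show();
    return app.exec();
}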
I've been looking around for a while for how to produce buttons using Direct2D and DirectWrite, with no luck. I'm comfortable with shapes, text and all that jazz. However, it suddenly occurred to me that I might be looking at it in the wrong way.
Take the sentence:
you draw your controls and content for your app using the Direct2D and
DirectWrite APIs, handling all the input events directly.
I'm now thinking this means that, instead of being able to quickly produce a fully functional button as I would using XAML, I would draw the button, manually check the location of the mouse on click, check whether it's within the button boundaries, and then handle the event? And a similar method for hovering, just without the click.
Is this the kind of method required when using Direct2D and DirectWrite?
I haven't any experience with DirectX, but in OpenGL I build my buttons from scratch. Assuming you have animated sprites implemented, your buttons are essentially sprites that play certain animations in response to being clicked, hovered over, etc., and which you can register callbacks with. In my 2D engine, I have a class called UiButton, which inherits Sprite, and listens for various UI events. It gets more complicated when you want to handle keyboard navigation (arrow keys + enter to select) as you have to think about how the buttons are connected and which of them has focus at any given moment.
Here is my implementation for reference:
Headers: https://github.com/RobJinman/dodge/tree/master/Dodge/include/dodge/ui
Source: https://github.com/RobJinman/dodge/tree/master/Dodge/src/ui
If you're not prepared to roll your own, Googling "direct2d gui framework" seems to bring up some promising results.
Sorry I can't be of more help.
Yes, to draw a UI button with Direct2D you need to handle everything yourself. Why? Direct2D is a 2D graphics API, not a controls library. You need to draw the layout of your button and handle its messages (such as click, mouse hover, ...). You lose a lot of convenience and it's time-consuming, but the most important thing is: you control it yourself!
Direct2D is a graphics library. UI controls like text selection, text boxes, and buttons are not part of it. However, the benefit of using Direct2D and DirectWrite is that we can implement our own UI controls and have full control over them.
Please also see ID2D1Geometry::FillContainsPoint() for the hit-testing task.
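To make the manual hit-testing concrete, here is a hedged sketch of the kind of bookkeeping involved; the Button struct and the OnButtonClicked callback are invented for illustration:

#include <d2d1.h>

// One entry per control you draw; bounds is wherever you drew the button.
struct Button
{
    D2D1_RECT_F bounds;
    bool hovered = false;
};

bool HitTest(const Button& b, float x, float y)
{
    return x >= b.bounds.left && x <= b.bounds.right &&
           y >= b.bounds.top  && y <= b.bounds.bottom;
}

// In the window procedure (GET_X_LPARAM / GET_Y_LPARAM come from <windowsx.h>):
//     case WM_MOUSEMOVE:
//         button.hovered = HitTest(button, GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam));
//         break;
//     case WM_LBUTTONDOWN:
//         if (HitTest(button, GET_X_LPARAM(lParam), GET_Y_LPARAM(lParam)))
//             OnButtonClicked();   // your own callback
//         break;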
What I'm trying to achieve is something like this - that is, an OpenGL view contained in a standard window, alongside some buttons, menus, etc.
However, I'm trying to use non-managed C++ and WinAPI to accomplish that (project requirements), and, if possible, (free)GLUT.
However, it seems to me that the only thing possible using GLUT is to create a separate window. Am I right, or is there actually a way to pass a window handle to GLUT for rendering? Or am I completely off the track?
Yeah, as far as I am aware GLUT only lets you do full-blown windows and will not let you paint into an arbitrary rectangle. There are a number of tutorials on setting up a render context in Windows using nothing but the Win32 and WGL APIs. Once you get your context setup, you can effectively do everything the same way you would with GLUT, but use the appropriate WGL function for swapping buffers.
Here's a high-level overview of what would be involved; although it's really text-heavy and related to MFC, it outlines the whole process. You should be able to look up the appropriate WGL API reference to implement this.
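Very roughly, the WGL setup described there looks something like the following sketch (the AttachGL helper name is invented and error handling is minimal):

#include <windows.h>
#include <GL/gl.h>

// Attach an OpenGL context to an existing child window's device context.
bool AttachGL(HWND hwnd, HDC& dc, HGLRC& rc)
{
    dc = GetDC(hwnd);

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    const int format = ChoosePixelFormat(dc, &pfd);
    if (format == 0 || !SetPixelFormat(dc, format, &pfd))
        return false;

    rc = wglCreateContext(dc);
    if (!rc)
        return false;
    return wglMakeCurrent(dc, rc) != FALSE;
}

// After drawing each frame, call SwapBuffers(dc); -- WGL's counterpart
// to glutSwapBuffers().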
To be honest, there's really no point in using GLUT if you've already decided to use the Win32 API; it is going to try to hide everything from you, including the message pump that you'll need for handling dialog initialization and button events. If your requirements weren't limited to the Win32 API, I would suggest something a little more portable like Qt as a framework that's dialog-friendly and supports OpenGL.
I am writing a program in OpenGL and I need some sort of interfacing toolbar. My initial reaction was to use a GUI toolkit; then, after further investigation into C++, I realized that GUIs are dependent on the OS you are using (I am on Windows). Therefore, I decided to use Qt to help me.
My question is whether I am taking the best/most appropriate approach. Am I even able to write my OpenGL program and have the GUI I create interface with the C++ code to do what I want it to do?
For example, if I create a simple "control panel" with arrows in each direction, and on screen I have a box object created with GLUT, can I wire the arrows up so that clicking them interacts with the OpenGL program to move the box?
Using Qt is a sensible choice for your problem: it provides good integration of OpenGL through the QtOpenGL module.
Derive your display classes from QGLWidget (until Qt 4.8) or from QOpenGLWidget (since Qt 5.4) and implement virtual methods paintGL() etc.
You will have access to Qt's signal and slot system, so you will be able to catch GUI events and update the OpenGL display.
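A minimal sketch of such a widget (Qt 5.4+ assumed; the setShade slot is just an invented example of a GUI-driven update):

#include <QOpenGLFunctions>
#include <QOpenGLWidget>

// An OpenGL viewport whose clear colour can be driven from the GUI
// through the setShade slot.
class GlCanvas : public QOpenGLWidget, protected QOpenGLFunctions
{
    Q_OBJECT
public:
    using QOpenGLWidget::QOpenGLWidget;

public slots:
    void setShade(int percent)          // connect a slider or button to this
    {
        m_shade = percent / 100.0f;
        update();                       // schedules a repaint -> paintGL()
    }

protected:
    void initializeGL() override { initializeOpenGLFunctions(); }

    void paintGL() override
    {
        glClearColor(m_shade, m_shade, m_shade, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
    }

private:
    float m_shade = 0.2f;
};

A QSlider's valueChanged(int) signal could then be connected to setShade(int) with QObject::connect.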
You can use regular non-OpenGL Qt widgets on top of a QGLWidget so what you describe is do-able.
One thing I came across when doing this was that the regular widgets had to be opaque. The moment they were transparent in any way, all sorts of garbage showed up underneath them. That was a while ago, so maybe the latest version addresses this issue. It may have been platform-specific too, so YMMV.
Stay in GLUT. There's no need to add Qt for a control panel. You could open a sub window (or second window, whichever works better for your program design), and draw the controls into that window, and use GLUT to handle the mouse interaction.
Furthermore, Qt and GLUT each have their own event loops. To use just Qt's event loop, you'd have to abandon much of the GLUT structure, since the GLUT event loop would not be there to call your callback functions. Qt does have the functionality to let somebody else's event loop call Qt's event-processing code, but I don't think there's an easy way to make GLUT hand off the event information.
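A hedged sketch of that sub-window approach (freeglut assumed; the sizes, names, and hit-testing are placeholders):

#include <GL/glut.h>

static int mainWin  = 0;   // id of the main GLUT window
static int panelWin = 0;   // id of the control-panel sub window

static void drawPanel()
{
    glClearColor(0.8f, 0.8f, 0.8f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the arrow "buttons" here with ordinary GL calls ...
    glutSwapBuffers();
}

static void panelMouse(int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
    {
        // Hit-test x/y against the arrow rectangles you drew, move the
        // box accordingly, then glutPostWindowRedisplay(mainWin);
    }
}

// After creating the main window (mainWin = glutCreateWindow(...)):
//     panelWin = glutCreateSubWindow(mainWin, 0, 0, 200, 480);
//     glutDisplayFunc(drawPanel);   // applies to the current (sub) window
//     glutMouseFunc(panelMouse);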