I have a few questions about using the two together. At the moment I have a preexisting renderer that I'm trying to use with Qt and OpenGL.
A few questions are:
How can I get my renderer's results to draw in a QGraphicsScene? Is that even the right output target to be using?
With OpenGL I want to be able to load textures and have them displayed in a window. Do I need to specify coordinates for where to draw the texture, or can I just say "in the centre of a QWidget"?
What parameters would I usually need? I presume I need a GLuint for the texture, and then parameters for the size?
At the moment my results are quite poor: it renders something, but not in the correct window (or not in the window of choice), and it seems to 'hide' text, e.g. for "hello" I can only see "e". Odd, I think.
I'm pretty sure this link will help you code with Qt and OpenGL:
http://wesley.vidiqatch.org/03-08-2009/nehe-opengl-lessons-in-qt-chapter-1-and-2
I used this and the NeHe tutorial to code a small Qt/OpenGL application, so all information you need is contained in both tutorials.
Context
I'm a beginner in 3D graphics, starting out with Vulkan (which I already know is not recommended for beginners, please spare me the lecture). I'm currently working on a university project to develop the base of a 3D computer graphics engine built on the Vulkan API.
The problem
[Image: example of running the app to render the classic 2D triangle]
[Image: drawing a 3D mesh after having drawn the triangle]
So as you can see in the images above I want to be able to:
Run the engine.
Choose an object to be drawn.
Close the window.
Choose another object to be drawn.
Open the same window back up with only the last object chosen visible.
The way I have been doing this is essentially by cleaning up the whole swap chain and recreating it from scratch once the window is closed and a new object has been chosen. I'm aware this probably sounds horrifying to any computer graphics engineer, but I'm doing it this way because I don't know a better one; I have only just finished the Vulkan tutorial.
Solutions tried
I have checked that I call vkDestroyBuffer and vkFreeMemory on the current vertex buffer before recreating it once I choose a different object.
I have disabled depth testing entirely in case it had something to do with it; it doesn't.
Note: The code is extensive and I really don't have a clue which part of it could be relevant to the problem, so I opted not to clutter the question. If there is a specific part you think might help you find the solution, please ask for it.
Thank you for taking the time to read my question.
A comment by user369070 ended up drawing my attention to the function I use to read OBJ files, which made me realize that this function wasn't clearing the data structure I use to store the vertices of the chosen object before passing them to the vertex buffer.
I just had to add vertices = {}; at the top of the function to solve it.
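For reference, a minimal sketch of the fix; the loader name and the vertices member are just illustrative here, not taken from my actual code:

    // Hypothetical OBJ-loading member function; `vertices` is the
    // std::vector that later feeds the vertex buffer.
    void Engine::loadObj(const std::string& path) {
        vertices = {};  // the fix: clear leftovers from the previously loaded object

        // ... parse the OBJ file and push_back the new vertices ...
    }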
I'm working with OpenGL and Qt. I render a scene in an OpenGLWidget. When hovering over objects in the scene, I would like to display a box near the selected object with some text. I have already implemented the selection of the object.
I thought of two possible approaches.
Place a widget (such as a QLabel) above the OpenGLWidget in which the scene is rendered.
Render the text in a quad directly in OpenGL.
Which of the two approaches would you recommend, and could you please give me some suggestions on implementation? Alternatively, you could recommend another approach. Thanks!
Hi @Artic, I am not a Qt expert so I can't give you information on widgets, but I can give you some pointers for creating a label with OpenGL. Giving a full implementation is tricky here because it depends a lot on how you want to display the text, but I'll try to outline some of your options.
To render text in OpenGL most people go with a technique known as bitmap fonts, see more here:
https://learnopengl.com/In-Practice/Text-Rendering
The concept of bitmap fonts is fairly straightforward: all characters are pre-rasterized to a texture, and then you sample from the relevant part of the texture depending on the character you need. You build your label out of quads, each textured with the part of the bitmap corresponding to its character.
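As a rough illustration, here is how you might generate the quad data for a string from an atlas. This is only a sketch: it assumes a simple fixed-grid ASCII atlas (16x16 glyph cells) and a fixed advance per character, which real fonts don't have.

    #include <string>
    #include <vector>

    struct Vertex { float x, y, u, v; };

    // Append one textured quad (two triangles) per character.
    // Assumes a 16x16 grid of glyphs in the atlas texture.
    std::vector<Vertex> buildTextQuads(const std::string& text,
                                       float x, float y, float size) {
        std::vector<Vertex> verts;
        const float cell = 1.0f / 16.0f;  // UV size of one glyph cell
        for (unsigned char c : text) {
            float u = (c % 16) * cell;
            float v = (c / 16) * cell;
            Vertex quad[6] = {
                {x,        y,        u,        v + cell},
                {x + size, y,        u + cell, v + cell},
                {x + size, y + size, u + cell, v},
                {x,        y,        u,        v + cell},
                {x + size, y + size, u + cell, v},
                {x,        y + size, u,        v},
            };
            verts.insert(verts.end(), quad, quad + 6);
            x += size;  // fixed advance; real fonts use per-glyph metrics
        }
        return verts;
    }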
Signed distance fields (SDF) essentially use the same technique, but the pre-rasterized texture of characters is generated as a signed distance field, which deals with some of the issues standard bitmap fonts have (chiefly blurry edges when the text is scaled up).
In basic terms, SDF works by generating a special texture, or image, of the font that stores the distance from the edge of each character to its centre, using the colour channels of the image to record the data.
If you use signed distance fields it won't be enough to just sample from your bitmap, fonts rendered this way require extra work (typically done using a shader program) to produce the correct rendering.
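To give you an idea, that extra work is usually a small fragment shader that thresholds the distance value with smoothstep. A sketch, not a drop-in shader; the uniform names and the smoothing constant are my own choices:

    // Fragment shader (GLSL) for SDF text, stored as a C++ string literal.
    // 'smoothing' controls the width of the antialiased edge.
    const char* sdfFragmentShader = R"(
        #version 330 core
        in vec2 uv;
        out vec4 fragColor;
        uniform sampler2D sdfAtlas;
        uniform vec4 textColor;
        void main() {
            float dist = texture(sdfAtlas, uv).r;  // distance value, 0.5 = glyph edge
            float smoothing = 0.1;
            float alpha = smoothstep(0.5 - smoothing, 0.5 + smoothing, dist);
            fragColor = vec4(textColor.rgb, textColor.a * alpha);
        }
    )";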
Once you have a way of generating a label you can decide if you want to display it in screen space or in world space.
If you want to display it in world space (where the label hovers over the model or item), you will need to do more work if you want the label to always face the camera; this technique is called billboarding. A sketch follows below.
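A common way to billboard is to cancel the camera rotation in the label's model matrix. For example, with GLM (assuming you already have a view matrix; the function name here is illustrative):

    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    // Build a model matrix for a label at 'labelPos' that always faces the
    // camera: take the rotation part of the view matrix and use its transpose
    // (= inverse for a pure rotation) so the quad cancels the camera rotation.
    glm::mat4 billboardModel(const glm::mat4& view, const glm::vec3& labelPos) {
        glm::mat3 camRot = glm::mat3(view);                 // rotation part of the view
        glm::mat4 model = glm::translate(glm::mat4(1.0f), labelPos);
        model = model * glm::mat4(glm::transpose(camRot));  // undo the camera rotation
        return model;
    }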
You could also render your text "on the fly" if you just want to render some text to the screen in screen space. You can use a library like SDL_ttf.
See: http://lazyfoo.net/tutorials/SDL/16_true_type_fonts/index.php
In this example you use SDL_ttf to render a string of text to a surface with dimensions of your choosing; you can then create an OpenGL texture from that surface and render it to the screen.
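Roughly, that flow looks like this (a sketch: TTF_Init() and error handling are omitted, and the font path is just a placeholder):

    #include <SDL_ttf.h>
    #include <GL/gl.h>

    // Render a string to an SDL surface, then upload it as an OpenGL texture.
    GLuint textToTexture(const char* text, int* w, int* h) {
        TTF_Font* font = TTF_OpenFont("/path/to/font.ttf", 24);  // placeholder path
        SDL_Color white = {255, 255, 255, 255};
        SDL_Surface* surf = TTF_RenderUTF8_Blended(font, text, white);

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        // Depending on the surface's channel order you may first need to
        // convert it, e.g. with SDL_ConvertSurfaceFormat().
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, surf->w, surf->h, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, surf->pixels);

        *w = surf->w;
        *h = surf->h;
        SDL_FreeSurface(surf);
        TTF_CloseFont(font);
        return tex;
    }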
Sorry if this information is a bit broad, I would need a more specific question to give you further implementation details.
For an implementation, I would evaluate the pros and cons based on what you need. If you haven't implemented a system for rendering text before, it's probably best to stick with something simple. There are more text-rendering techniques than I have listed here, such as converting text into polygons, and other libraries that try to address the shortcomings of traditional font rendering, but you probably don't need anything complicated.
For a recommendation on which to use, I would go with the technique you feel most comfortable with. Doing things from scratch in OpenGL typically takes more time, but it can give you a nicer set of functionality to reuse in the future. However, if Qt already has something suitable for rendering a label (such as the widget you mentioned), it is probably worth taking the time to learn it, since it may yield faster results and you don't want to reinvent the wheel if you don't need to. That said, doing things from scratch with OpenGL can be very rewarding and can greatly improve your understanding, since you have to learn how things are done without a layer of abstraction to depend on. Ultimately it depends on you. Good luck!
You could use tool tips in Qt. The string will appear when the OpenGLWidget is hovered over. You can change the text of the tool tip based on the mouse location in the scene, using the static method QToolTip::showText():
QToolTip::showText(const QPoint &pos, const QString &text, QWidget *w);
There are more overloads of the showText() method, which can be found in Qt's tool tip documentation. Also, here are more examples of how to use Qt tool tips.
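For instance, a minimal sketch of wiring this into the widget's mouse handling; the widget class and the pickObjectAt() hit-test helper are assumptions here, not Qt API:

    // Inside a hypothetical QOpenGLWidget subclass. Remember to call
    // setMouseTracking(true) in the constructor so move events arrive
    // without a mouse button held down.
    void MyGLWidget::mouseMoveEvent(QMouseEvent* event) {
        QString name = pickObjectAt(event->pos());  // assumed hit-test helper
        if (!name.isEmpty())
            QToolTip::showText(event->globalPos(), name, this);
        else
            QToolTip::hideText();
        QOpenGLWidget::mouseMoveEvent(event);
    }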
I want to write a simple application using OpenGL under Linux. I want to open an image and allow the user to interactively select a rectangle; after that, the user can save the selection to a specific location.
Could anyone give me some starting links or sample code?
From your question I take it that you think OpenGL is some kind of imaging library. This is not the case.
OpenGL is meant only for drawing nice pictures to the screen. It deals with neither image loading nor storing, and it's not meant for imaging operations like cropping either (although cropping is actually quite easy to implement with OpenGL).
Regarding your question: OpenGL can be used for the "display the image" and "draw a rectangle around it" parts; a minimal sketch of those is below. Loading and saving the image, and doing the actual crop, are not to be done using OpenGL.
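For the OpenGL part, a fixed-function sketch might look like this. It assumes you already have a GL context, an orthographic projection mapping coordinates to the window, and the image uploaded to imageTex; the selection coordinates are illustrative:

    #include <GL/gl.h>

    // Draw the image as a textured quad, then the selection as a line loop.
    void drawFrame(GLuint imageTex, float selX, float selY, float selW, float selH) {
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, imageTex);
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(0, 0);
        glTexCoord2f(1, 0); glVertex2f(1, 0);
        glTexCoord2f(1, 1); glVertex2f(1, 1);
        glTexCoord2f(0, 1); glVertex2f(0, 1);
        glEnd();
        glDisable(GL_TEXTURE_2D);

        glColor3f(1.0f, 0.0f, 0.0f);  // red selection rectangle
        glBegin(GL_LINE_LOOP);
        glVertex2f(selX, selY);
        glVertex2f(selX + selW, selY);
        glVertex2f(selX + selW, selY + selH);
        glVertex2f(selX, selY + selH);
        glEnd();
        glColor3f(1.0f, 1.0f, 1.0f);  // reset so the texture isn't tinted next frame
    }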
I need to draw a VBO consisting of font data, mainly numbers. How do I obtain the data and send it to the VBO?
I know that there is a library called FreeType which should do this, but that uses bitmap fonts, and I do not need bitmaps in my project. I just want polygon data which I can fill with my own color and reposition/scale.
FreeType also does outline fonts, but how do I go about tessellating the outlines to create accurate geometry?
Is what I am trying to achieve difficult? Can I find some examples of something similar?
Is what I am trying to achieve difficult?
In the case of rendering crisp fonts at all sizes with proper gamma correction and antialiasing: Yes!
This is actually a subject of active research.
Can I find some examples of something similar?
Just use a ready-made font drawing library for OpenGL, like FTGL.
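For example, FTGL's polygon font class gives you tessellated geometry directly. A sketch; the font path is a placeholder, and in real code you would create the font once rather than per call:

    #include <FTGL/ftgl.h>

    // FTPolygonFont tessellates the glyph outlines into polygons, so the text
    // is real geometry you can color, transform, and scale like any other mesh.
    void drawNumber(const char* text) {
        FTPolygonFont font("/path/to/font.ttf");  // placeholder path
        if (font.Error())
            return;
        font.FaceSize(72);            // base size; scale via the modelview matrix
        glColor3f(0.2f, 0.8f, 0.2f);  // filled with your own color
        font.Render(text);
    }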
A solution that could work is to export the font's glyphs as XY coordinates with indices from a 3D modeling program. This data is then loaded at startup, giving the desired result.
Of course, this does not work when changing fonts and it takes time to set up, but if the font will never change, it does its job.
I'm trying to develop a custom set of libraries for creating GUIs in Linux, with, you know, widgets, buttons, etc. So I'm now learning to create user interfaces using X11 and its Xlib. I have got to the point of having a nice window of a specified size, at a specified position, with a specified background color, and the ability to draw points, rectangles, and arcs. However, as I drew my first circle I was really disappointed by the fact that the circle is not antialiased: I can see every single pixel as a square.
Now the question is easy. Is there any way to tell X: please antialias everything before drawing? Or do I have to avoid using XDrawArc and use a custom function which calls XDrawPoint for each point of the circle? Or is there a third solution?
Thanks in advance.
The short answer is "no". Xlib doesn't do anti-aliasing.
The longer answer is "you can use a higher level API such as Cairo Graphics". It's not necessary to roll your own.
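For instance, drawing an antialiased circle onto an X11 window with Cairo looks roughly like this (a sketch; it assumes you already have dpy, win, and a matching visual from your existing Xlib setup):

    #include <cairo/cairo.h>
    #include <cairo/cairo-xlib.h>
    #include <math.h>

    // Draw an antialiased circle on an existing X11 window via Cairo.
    void draw_circle(Display* dpy, Window win, Visual* visual,
                     int width, int height) {
        cairo_surface_t* surface =
            cairo_xlib_surface_create(dpy, win, visual, width, height);
        cairo_t* cr = cairo_create(surface);

        cairo_set_source_rgb(cr, 0.0, 0.0, 0.0);  // black outline
        cairo_set_line_width(cr, 2.0);
        cairo_arc(cr, width / 2.0, height / 2.0, 50.0, 0.0, 2.0 * M_PI);
        cairo_stroke(cr);                          // antialiased by default

        cairo_destroy(cr);
        cairo_surface_destroy(surface);
    }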
What you encountered are the limitations of the X11 core protocol; technically it would be perfectly possible to add antialiasing to it, but that didn't happen.
Instead there's the XRender extension, that provides nice antialiased primitives. You'll also want to look into Xft to render antialiased text using vector fonts.
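As a taste of Xft, drawing a string with an antialiased vector font is only a few calls (a sketch; the font name and the text position are placeholders):

    #include <X11/Xft/Xft.h>
    #include <string.h>

    // Draw antialiased text on a window using Xft (built on XRender).
    void draw_label(Display* dpy, int screen, Window win, const char* text) {
        Visual* visual = DefaultVisual(dpy, screen);
        Colormap cmap = DefaultColormap(dpy, screen);

        XftDraw* draw = XftDrawCreate(dpy, win, visual, cmap);
        XftFont* font = XftFontOpenName(dpy, screen, "DejaVu Sans-14");  // placeholder
        XftColor color;
        XftColorAllocName(dpy, visual, cmap, "black", &color);

        XftDrawStringUtf8(draw, &color, font, 20, 40,
                          (const FcChar8*)text, strlen(text));

        XftColorFree(dpy, visual, cmap, &color);
        XftFontClose(dpy, font);
        XftDrawDestroy(draw);
    }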
You can also roll your own antialiasing algorithm. You have the only two primitives you need: 1) a function to draw TrueColor points (namely, xcb_poly_point() if you're using XCB, or XDrawPoint() with Xlib), and 2) for loops.
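To illustrate the idea, here is a naive coverage-based circle outline in plain Xlib. This is only a sketch: it assumes a 24-bit TrueColor visual (pixel value 0xRRGGBB) and a solid, known background color, which lets us blend without reading pixels back.

    #include <X11/Xlib.h>
    #include <math.h>

    // Blend between a known background and foreground per pixel, based on the
    // pixel's distance from the ideal circle outline.
    void draw_aa_circle(Display* dpy, Window win, GC gc,
                        int cx, int cy, int r) {
        const int bg = 0xFFFFFF, fg = 0x000000;  // assumed white background, black circle
        for (int y = cy - r - 1; y <= cy + r + 1; ++y) {
            for (int x = cx - r - 1; x <= cx + r + 1; ++x) {
                double dist = fabs(hypot(x - cx, y - cy) - r);
                if (dist >= 1.0)
                    continue;                      // pixel is not on the outline
                double a = 1.0 - dist;             // crude coverage estimate
                unsigned long pixel = 0;
                for (int shift = 0; shift <= 16; shift += 8) {
                    int b = (bg >> shift) & 0xFF, f = (fg >> shift) & 0xFF;
                    pixel |= (unsigned long)(b + a * (f - b)) << shift;
                }
                XSetForeground(dpy, gc, pixel);
                XDrawPoint(dpy, win, gc, x, y);
            }
        }
    }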