This is NOT about how to create and render curves (e.g. Bézier, NURBS, etc.) BUT about how to interactively 'pick' such curves with a mouse click when the cursor is hovering over ANY part of the curve.
After doing the required computation I render a curve, in either 2D or 3D, by subdividing it into many small line segments. The more line segments there are, the smoother and more realistic the curve looks.
I deliberately create each of these GL_LINES segments as a separate entity (each assigned its own unique ID number). These line segments belong to the Curve 'object' (which also has its own unique ID). Then, using ray casting, I can perform mouse-line collision detection and know when an individual line segment has been 'hit' - and highlight it (e.g. temporarily change its color).
BUT at the same time I also highlight all the other line segments that belong to the Curve - and so give the appearance of the whole curve being selected.
THE PROBLEM is that because each curve consists not just of the 'core' control points that define the curve, but also of the thousands of points that actually draw it, graphics performance quickly slows down very noticeably.
I am aware that I could instead compute all the subdivision points and use GL_LINE_STRIP to render the curve more efficiently as a single graphical object. BUT then I can no longer use the ray casting technique described above to hover the mouse cursor over any part of the curve and 'select' the curve object.
So....how can I more efficiently 'pick' curves in OpenGL?
You have two obvious options for this:
use ID buffer
When you render your curve you write a color to the RGBA frame buffer. So simply also write the ID of the rendered curve into a separate buffer (of the same resolution as the view) and then read the pixel under the mouse from this buffer to see exactly which curve or object you are selecting.
This is pixel perfect and O(1) super fast... However, if your objects are too thin you might have problems picking from a single pixel, so test up to some distance from the mouse (you can glReadPixels a rectangle around the mouse) and return either all the IDs present or just the most frequent one.
See this:
OpenGL 3D-raypicking with high poly meshes
Do not forget to clear the ID buffer before rendering, and if you have too many objects to fit into the 8-bit stencil buffer, use a different buffer... In the 2D case you can use the depth buffer, or you can render to the RGBA framebuffer in two passes: first the IDs (read back into CPU-side memory), then the normal render. You can also render to a texture...
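For illustration, here is a minimal C++ sketch of the read-back side of this approach, assuming the ID pass has already been rendered into the currently bound framebuffer (the 24-bit RGB packing and the helper names are just choices made for this example):

    #include <GL/gl.h>
    #include <map>
    #include <vector>

    // Encode a 24-bit object ID into an RGB color (one unique color per curve);
    // call this before drawing all the segments of that curve in the ID pass.
    void setIdColor(unsigned id)
    {
        glColor3ub( id        & 0xFF,
                   (id >>  8) & 0xFF,
                   (id >> 16) & 0xFF);
    }

    // After the ID pass, pick the most frequent non-zero ID inside a
    // (2r+1)x(2r+1) window around the mouse (helps with thin lines).
    unsigned pickId(int mouseX, int mouseY, int viewHeight, int r = 3)
    {
        const int w = 2 * r + 1;
        std::vector<unsigned char> px(w * w * 4);
        // OpenGL's origin is bottom-left; window systems usually report top-left.
        glReadPixels(mouseX - r, viewHeight - mouseY - r, w, w,
                     GL_RGBA, GL_UNSIGNED_BYTE, px.data());

        std::map<unsigned, int> counts;
        for (int i = 0; i < w * w; ++i)
        {
            unsigned id = px[i*4] | (px[i*4+1] << 8) | (px[i*4+2] << 16);
            if (id) ++counts[id];
        }
        unsigned best = 0; int bestCount = 0;
        for (auto &kv : counts)
            if (kv.second > bestCount) { best = kv.first; bestCount = kv.second; }
        return best;        // 0 means nothing was hit
    }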
compute distance to curve
It's doable; for example, see:
Is it possible to express 't' variable from Cubic Bezier Curve equation?
As you can see, it's possible to use this for the rendering itself as well (no line approximation, just the "perfect" curve) and still be fast...
So simply compute the distance from the mouse to each of your curves and remember the closest one. If that distance is bigger than some threshold, no selection occurs.
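If the closed-form solution in the linked answer feels like overkill, a simpler approximate variant is to sample the curve densely and take the minimum distance. A minimal sketch, assuming the control points and the mouse position are already in screen space:

    #include <cmath>
    #include <algorithm>

    struct Vec2 { float x, y; };

    // Evaluate a cubic Bezier at parameter t.
    static Vec2 bezier(const Vec2 &p0, const Vec2 &p1, const Vec2 &p2, const Vec2 &p3, float t)
    {
        float u = 1.0f - t;
        float b0 = u*u*u, b1 = 3*u*u*t, b2 = 3*u*t*t, b3 = t*t*t;
        return { b0*p0.x + b1*p1.x + b2*p2.x + b3*p3.x,
                 b0*p0.y + b1*p1.y + b2*p2.y + b3*p3.y };
    }

    // Approximate distance from the mouse to the curve by sampling.
    float distanceToCurve(const Vec2 &mouse,
                          const Vec2 &p0, const Vec2 &p1, const Vec2 &p2, const Vec2 &p3,
                          int samples = 64)
    {
        float best = 1e30f;
        for (int i = 0; i <= samples; ++i)
        {
            Vec2 q = bezier(p0, p1, p2, p3, float(i) / samples);
            best = std::min(best, std::hypot(q.x - mouse.x, q.y - mouse.y));
        }
        return best;   // compare against a pick threshold, e.g. a few pixels
    }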
Related
I've written my own 3D Game Engine in the past few years and wanted to actually use it for a game.
I stumbled across the following problem:
I have multiple planes in my game, but let's talk about one single plane.
Naturally, planes are not able to dive into the ground and fly under the terrain.
Therefore, I need to implement something that detects collisions between a plane/jet and my ground.
The information given is the following:
Grid of terrain [2-dimensional array; stores the height at the corresponding x,z coordinate]
Hitbox of my plane (it moves with my plane, so the bounds etc. are all already calculated and given)
So about the hitboxes:
I thought about which method to use. The best one in terms of performance seems to be simple spheres with different radii.
About the ground: Graphically, the ground is subdivided into triangles:
So what I need now is the optimal type of hitbox (sphere, AABB,...) and the according most efficient calculations.
My attempt was to get every surrounding triangle and calculate its distance to the center of each of my hitbox spheres. If the distance is less than the radius, a collision has been detected. But when I have 10-20 spheres in my plane and around 100 triangles to check, it takes too much time.
Another attempt was to get the vertical distance to the ground from each hitbox sphere, as sketched below. This needs far fewer calculations but fails near steep surfaces.
I would be very happy if someone could help me implement an efficient version of plane/terrain collision detection :)
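For reference, a minimal sketch of that vertical-distance check, assuming the grid stores one height per integer (x, z) cell and that bounds checking is handled elsewhere (the struct and function names are illustrative):

    #include <vector>
    #include <cmath>

    struct Sphere { float x, y, z, r; };

    // Bilinearly interpolated terrain height at an arbitrary world (x, z).
    float terrainHeight(const std::vector<std::vector<float>> &grid, float x, float z)
    {
        int ix = (int)std::floor(x), iz = (int)std::floor(z);
        float fx = x - ix, fz = z - iz;
        float h00 = grid[ix][iz],     h10 = grid[ix + 1][iz];
        float h01 = grid[ix][iz + 1], h11 = grid[ix + 1][iz + 1];
        return (h00 * (1 - fx) + h10 * fx) * (1 - fz)
             + (h01 * (1 - fx) + h11 * fx) * fz;
    }

    // True if any hitbox sphere dips below the interpolated ground height.
    bool collidesWithGround(const std::vector<std::vector<float>> &grid,
                            const std::vector<Sphere> &spheres)
    {
        for (const Sphere &s : spheres)
            if (s.y - s.r < terrainHeight(grid, s.x, s.z))
                return true;
        return false;
    }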
render terrain
Maybe you could try a linear depth buffer to improve accuracy.
read depth texture
You can use glReadPixels with GL_DEPTH_COMPONENT and GL_FLOAT. That will copy the depth buffer into CPU-side memory. Now you can also do collision on the CPU side, or any computation related to the ground in view...
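A minimal sketch of that read-back, assuming width/height match the current viewport:

    #include <GL/gl.h>
    #include <vector>

    // Copy the depth buffer to CPU memory; values come back in the 0..1 range.
    std::vector<float> readDepthBuffer(int width, int height)
    {
        std::vector<float> depth(width * height);
        glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
        return depth;   // depth[y * width + x], row 0 is the bottom of the window
    }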
use the depth buffer as texture
So copy it back to the GPU with glTexImage2D. I know this is slow (but most likely still much faster than your current collision computation). If you are not using Intel HD Graphics, you can replace steps #2 and #3 with an FBO depth attachment, which renders the depth buffer directly to a texture. But on Intel this does not work reliably (or at all).
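A minimal sketch of uploading those depth values back as a texture with glTexImage2D (storing them as a depth-component texture here is just one option; a single-channel float texture works as well):

    #include <GL/gl.h>
    #include <vector>

    GLuint uploadDepthTexture(const std::vector<float> &depth, int width, int height)
    {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        // Store the raw depth values so a later shader pass can sample them.
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
        return tex;
    }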
now render your objects (off screen) with GLSL
Inside the fragment shader just compare the rendered position with the depth (attached as a texture). If it is below, output the collision somewhere. If done in a compute shader you can store the results in some texture; or you could use some attachment or FBO for this.
If you cannot use an FBO you could render to the "screen" with collisions encoded as specific colors. Then read it back with glReadPixels and scan for them to handle whatever collision logic you have on the CPU side...
Do not write to the depth buffer in this pass!!! Also do not use CULL_FACE, because that could miss collisions on the back side of your object.
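As a rough illustration of the comparison, here is a GLSL fragment shader embedded in a C++ string, assuming the terrain depth was rendered from the same top-down camera into depthTex (the uniform names are made up for this example):

    // With a top-down camera, a larger depth value means farther from the
    // camera, i.e. lower altitude, i.e. the fragment is under the terrain.
    static const char *collisionFragmentShader = R"(
        #version 120
        uniform sampler2D depthTex;     // terrain depth from the same top-down view
        uniform vec2      resolution;   // size of the off-screen buffer in pixels

        void main()
        {
            float terrainDepth = texture2D(depthTex, gl_FragCoord.xy / resolution).r;
            if (gl_FragCoord.z > terrainDepth)
                gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);   // collision marker color
            else
                gl_FragColor = vec4(0.0);                  // no collision
        }
    )";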
now render the objects normally
If you did not render in #4, or you encoded collisions into the screen buffer, you need to overwrite/render the objects now; otherwise this step is not needed. But rendering after collision detection is good because, in case of a collision, you will most likely change the object's position/orientation/mesh, and an already-rendered object could conflict with the altered one.
[Notes]
Copying images between the CPU and GPU is slow, so use an FBO and render to texture instead if you can.
If you are not familiar with multiple pass rendering see some QAs for inspiration:
OpenGL Scale Single Pixel Line
Render filled complex polygons with large number of vertices with OpenGL
This works only for what is in view... but you can do a dedicated collision rendering pass (per object). Render with the camera set to look from top down (bird's-eye) and covering only the area around your object... You also do not need a very big resolution for this, so it should be relatively fast... You can divide your screen into square areas (using glViewport) to test more objects in a single frame and reduce the sync slowdowns as much as possible (use fewer glReadPixels calls). You also do not need any vertex colors or textures for this.
I have researched this, and the methods used to make a bloom effect are usually based on combining a sharp and a blurred image to give the glow. But I want to know how I can make GL_LINES (or any line) have brightness. Since in my game I am randomly generating a simple 2D terrain, I wish to make the terrain line segments glow.
Use a fragment shader to calculate the distance from a fragment to the edge and color the fragment with the appropriate color value. You can use a simple control curve to shape the radius and intensity falloff of the glow (like in Photoshop). It can also be tuned to act like a wireframe visualization. The idea is that you don't really rasterize points into lines with a draw call; you just shade each pixel based on its distance from the corresponding edge.
The differences from using a blur pass are, first, better performance, and second, per-pixel control over the glow: you can have a non-uniform glow, which you cannot get from a blur because it is not aware of the actual line geometry and just blindly works on pixels, whereas with edge-distance detection you use the actual geometry data as input without flattening it down to pixels. You can also have things like gradient glows, e.g. the glow color changes with the radius.
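A minimal sketch of such a shader, embedded in a C++ string; it assumes one quad is rasterized around each terrain segment and that the segment endpoints are passed in as pixel-space uniforms (all names are illustrative):

    static const char *glowFragmentShader = R"(
        #version 120
        uniform vec2  segA, segB;     // segment endpoints, in pixels
        uniform vec3  glowColor;
        uniform float glowRadius;     // how far the glow reaches, in pixels

        // Distance from point p to the segment a-b.
        float distToSegment(vec2 p, vec2 a, vec2 b)
        {
            vec2 ab = b - a;
            float t = clamp(dot(p - a, ab) / dot(ab, ab), 0.0, 1.0);
            return length(p - (a + t * ab));
        }

        void main()
        {
            float d = distToSegment(gl_FragCoord.xy, segA, segB);
            // Control curve: 1 on the line, falling smoothly to 0 at glowRadius.
            float intensity = 1.0 - smoothstep(0.0, glowRadius, d);
            gl_FragColor = vec4(glowColor * intensity, intensity);
        }
    )";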
Is there a way to extract a point cloud from a rendered 3D Scene (using OPENGL)?
in Detail:
The input should be a rendered 3D Scene.
The output should be, e.g., an array of three-dimensional vertices (x, y, z).
Mission possible or impossible?
Render your scene using an orthographic view so that all of it fits on screen at once.
Use a g-buffer (search for this term or "fat pixel" or "deferred rendering") to capture (X, Y, Z, R, G, B, A) at each sample point in the framebuffer.
Read back your framebuffer and put the (X, Y, Z, R, G, B, A) tuple at each sample point into a linear array.
You now have a point cloud sampled from your conventional geometry using OpenGL. Apart from the readback from the GPU to the host, this will be very fast.
Going further with this:
Use depth peeling (search for this term) to generate samples on surfaces that are not nearest to the camera.
Repeat the rendering from several viewpoints (or equivalently for several rotations of the scene) to be sure of capturing fragments from all the nooks and crannies of the scene, and append the points generated from each pass into one big linear array.
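A minimal sketch of the read-back step, assuming the g-buffer stores world-space positions in a floating-point color attachment with alpha = 0 for background pixels (that convention, and an extension loader such as GLEW for the FBO tokens, are assumptions of this example):

    #include <GL/glew.h>
    #include <vector>

    struct Point3 { float x, y, z; };

    // Read the position attachment of the currently bound FBO and collect
    // every covered sample into a linear point-cloud array.
    std::vector<Point3> readPointCloud(int width, int height)
    {
        std::vector<float> buf(width * height * 4);
        glReadBuffer(GL_COLOR_ATTACHMENT0);     // the position attachment
        glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, buf.data());

        std::vector<Point3> cloud;
        for (int i = 0; i < width * height; ++i)
            if (buf[i*4 + 3] != 0.0f)           // skip background samples
                cloud.push_back({ buf[i*4], buf[i*4 + 1], buf[i*4 + 2] });
        return cloud;
    }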
I think you should take your input data and manually multiply it by your transformation and modelview matrices. No need to use OpenGL for that, just some vector/matrix math.
If I understand correctly, you want to deconstruct a final rendering (2D) of a 3D scene. In general, there is no capability built-in to OpenGL that does this.
There are however many papers describing approaches to analyzing a 2D image to generate a 3D representation. This is for example what the Microsoft Kinect does to some extent. Look at the papers presented at previous editions of SIGGRAPH for a starting point. Many implementations probably make use of the GPU (OpenGL, DirectX, CUDA, etc.) to do their magic, but that's about it. For example, edge-detection filters to identify the visible edges of objects and histogram functions can run on the GPU.
Depending on your application domain, you might be in for something near impossible or there might be a shortcut you can use to identify shapes and vertices.
edit
I think you might have a misunderstanding of how OpenGL rendering works. The application produces and sends to OpenGL the vertices of triangles forming polygons and 3d objects. OpenGL then rasterizes (i.e. converts to pixels) these objects to form a 2d rendering of the 3d scene from a particular point of view with a particular field of view. When you say you want to retrieve a "point cloud" of the vertices, it's hard to understand what you want since you are responsible for producing these vertices in the first place!
Short Version
How can I draw short text labels in an OpenGL mapping application without having to manually recompute coordinates as the user zooms in and out?
Long Version
I have an OpenGL-based mapping application where I need to be able to draw data sets with up to about 250k points. Each point can have a short text label, usually about 4 or 5 characters long.
Currently, I do this using a single texture containing all the characters. For each point, I define a quad for each character in its label. So a point with the label "Fred" would have four quads associated with it, and each quad uses texture coordinates into that single texture to draw its corresponding character.
When I draw the map, I draw the map points themselves in map coordinates (e.g., longitude/latitude). Then I compute the position of each point in screen coordinates and update the four corner points for each of that point's label quads, again in screen coordinates. (For instance, if I determine the point is drawn at screen point 100, 150, I could set the quad for the first character in the point's label to be the rectangle starting with left-top point of 105, 155 and having a width of 6 pixels and a height of 12 pixels, as appropriate for the particular character. Then the second character might start at 120, 155, and so on.) Then once all these label character quads are positioned correctly, I draw them using an orthogonal screen projection.
The problem is that the process of updating all of those character quad coordinates is slow, taking about half a second for a particular test data set with 150k points (meaning that, since each label is about four characters long, there are about 150k * [4 characters per point] * [4 coordinate pairs per character] ≈ 2.4 million coordinate pairs that need to be set on each update).
If the map application didn't involve zooming, I would not need to recompute all these coordinates on each refresh. I could just compute the label coordinates once and then simply shift my viewing rectangle to show the right area. But with zooming, I can't see how to make it work without redoing the coordinate computation, because otherwise the characters will grow huge as you zoom in and tiny as you zoom out.
What I want (and what I understand OpenGL doesn't provide) is a way to tell OpenGL that a quad should be drawn in a fixed screen-coordinate rectangle, but that the top-left position of that rectangle should be a fixed distance from a given point in map coordinate space. So I want both a primitive hierarchy (a given map point is the parent of its label character quads) and the ability to mix two different coordinate systems within this hierarchy.
I'm trying to understand whether there is some magic transformation matrix I can set that will do all this for me, but I can't see how to do it.
The other alternative I've considered is using a shader on each point to handle computing the label character quad coordinates for that point. I haven't worked with shaders before, and I'm just trying to understand (a) if it's possible to use shaders to do this, and (b) whether computing all those points in shader code actually buys me anything over computing them myself. (By the way, I have confirmed that the big bottleneck is computing the quad coordinates, not in uploading the updated coordinates to the GPU. The latter takes a bit of time, but it's the computation, the sheer number of coordinates being updated, that takes up the bulk of that half second.)
(Of course, the other other alternative is to be smarter about which labels need to be drawn in a given view in the first place. But for now I'd like to concentrate on the solution assuming all labels need to be drawn.)
So the basic problem ("because otherwise the characters will grow huge as you zoom in and tiny as you zoom out") is that you are doing calculations in map coordinates rather than screen coordinates? And if you did it in screen coords, this would require more computations? Obviously, any rendering needs to translate from map coordinates to screen coordinates. The problem seems to be that you are translating from map to screen too late. Therefore, rather than doing a single map-to-screen for each point, and then working in screen coords, you are working mostly in map coords, and then translating per-character to screen coords at the very end. And the slow part is that you are working in screen coords, then having to manually translate back to map coords just to tell OpenGL the map coords, and it will convert those back to screen coords! Is that a fair assessment of your problem?
The solution therefore is to push that transformation earlier in your pipeline. However, I can see why it is tricky, because at first glance, OpenGL seems to want to do everything in "world coordinates" (for you, map coords), but not in screen coords.
Firstly, I am wondering why you are doing separate coordinate calculations for each character. What font rendering system are you using? Something like FreeType will automatically generate a bitmap image of an entire string, and doesn't require you to work per-character [edit: this isn't quite true; see comments]. You definitely shouldn't need to calculate the map coordinate (or even screen coordinate) for every character. Calculate the screen coordinate for the top-left corner of the label, and have your font rendering system produce the bitmap of the entire label in one go. That should speed things up about fourfold (since you assume 4 characters per label).
Now as for working in screen coords, it may be helpful to learn a bit about shaders. The more you learn about OpenGL, the more you learn that really it isn't a 3D rendering engine at all. It's just a 2D graphics library with some very fast matrix primitives built in. OpenGL actually works, at the lowest level, in screen coordinates (not pixel coordinates -- it works in normalized device coordinates, from -1 to 1 in both the X and Y axes). The only reason it "feels" like you're working in world coordinates is because of these matrices you have set up.
So I think the reason why you are working in map coords all the way until the end is because it's easiest: OpenGL naturally does the map-to-screen transform for you (using the matrices). You have to change that, because you want to work in screen coords yourself, and therefore you need to make the transformation a long time before OpenGL gets its hands on your data. So when you go to draw a label, you should manually apply the map-to-screen transformation matrix on each point, as follows:
You have a particular point (which needs a label drawn) in map coords.
Apply the map-to-screen matrix to convert the point to screen coords. This probably means multiplying the point by the MODELVIEW and PROJECTION matrices, using the same algorithm that OpenGL does when it's rendering a vertex. So you could either glGet the GL_MODELVIEW_MATRIX and GL_PROJECTION_MATRIX to extract OpenGL's current matrices, or you could manually keep around a copy of the matrix yourself.
Now that you have the map point in screen coords, compute the position of the label's text. This is simply adding 5 pixels in the X and Y axes, as you said above. However, remember that you aren't in pixel space but in normalised screen space, so you are working in fractions of the screen (adding 0.05 units moves you about 2.5% of the screen width, for example, since the space spans 2 units per axis). It's probably better not to think in pixels, because then your application will scale to match the resolution. But if you really want to think in pixels, you will have to calculate the pixels-to-units ratio based on the resolution.
Use glPushMatrix to save the current matrix, then glLoadIdentity to set the current matrix to the identity -- tell OpenGL not to transform your vertices. (I think you will have to do this for both the PROJECTION and MODELVIEW matrices.)
Draw your label, in screen coordinates.
So you don't really need to write a shader. You could certainly do this in a shader, and it would certainly make step 2 faster (no need to write your own software matrix multiply code; multiplying matrices on the GPU is extremely fast). But that would be a later optimisation, and a lot of work. I think the above steps will help you work in screen coordinates and avoid having to waste a lot of time just to give OpenGL map coordinates.
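To make the steps concrete, here is a minimal sketch using gluProject for step 2 and a temporary pixel-space ortho projection instead of raw normalized coordinates (a small variation on the description above; the 5-pixel offset and the label size parameters are illustrative):

    #include <GL/gl.h>
    #include <GL/glu.h>

    void drawLabelQuad(double mapX, double mapY, double mapZ,
                       int viewWidth, int viewHeight,
                       double labelW, double labelH)    // label size in pixels
    {
        // Step 2: map coords -> window coords using the current MV/P matrices.
        GLdouble model[16], proj[16];
        GLint viewport[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, viewport);

        GLdouble sx, sy, sz;
        gluProject(mapX, mapY, mapZ, model, proj, viewport, &sx, &sy, &sz);

        // Step 3: fixed pixel offset from the point to the label's corner.
        sx += 5.0;  sy += 5.0;

        // Step 4: switch both matrices to a plain pixel-space projection.
        glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity();
        gluOrtho2D(0, viewWidth, 0, viewHeight);
        glMatrixMode(GL_MODELVIEW);  glPushMatrix(); glLoadIdentity();

        // Step 5: draw the label quad in screen coordinates
        // (bind the glyph/label texture and set texcoords here as needed).
        glBegin(GL_QUADS);
            glVertex2d(sx,          sy);
            glVertex2d(sx + labelW, sy);
            glVertex2d(sx + labelW, sy + labelH);
            glVertex2d(sx,          sy + labelH);
        glEnd();

        // Restore the original matrices.
        glMatrixMode(GL_MODELVIEW);  glPopMatrix();
        glMatrixMode(GL_PROJECTION); glPopMatrix();
    }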
Side comment on:
"""
generate a bitmap image of an entire string, and doesn't require you to work per-character
...
Calculate the screen coordinate for the top-left corner of the label, and have your font rendering system produce the bitmap of the entire label in one go. That should speed things up about fourfold (since you assume 4 characters per label).
"""
Freetype or no, you could certainly compute a bitmap image for each label, rather than each character, but that would require one of:
storing thousands of different textures, one for each label
It seems like a bad idea to store that many textures, but maybe it's not.
or
rendering each label, for each point, at each screen update.
this would certainly be too slow.
Just to follow up on the resolution:
I didn't really solve this problem, but I ended up being smarter about when I draw labels in the first place. I was able to quickly determine whether I was about to draw too many characters (i.e., so many characters that, on a typical screen with a typical density of points, the labels would be too close together to read usefully), and in that case I simply don't draw labels at all. When drawing up to about 5000 characters at a time there isn't a noticeable slowdown in recomputing the character coordinates as described above.
I'm working with Qt and QWt3D Plotting tools, and extending them to provide some 3-D and 2-D plotting functionality that I need, so I'm learning some OpenGL in the process.
I am currently able to plot points using OpenGL, but only as circles (or "squares" by turning anti-aliasing off). These points act the way I like - i.e. they don't change size as I zoom in, although their x/y/z locations move appropriately as I zoom, pan, etc.
What I'd like to be able to do is plot points using a myriad of shapes (^, <, >, *, ., etc.). From what I understand of OpenGL (which isn't very much) this is not trivial to accomplish because OpenGL treats everything as a "real" 3-D object, so zooming in on any OpenGL shape other than a "point" changes the object's projected size.
After doing some reading, I think there are (at least) 2 possible solutions to this problem:
Use OpenGL textures. This doesn't seem too difficult, but I believe that the texture images will get larger and smaller as I zoom in and out - is that correct?
Use OpenGL polygons, lines, etc. and draw *'s, triangles, or whatever. But here again I run into the same problem - how do I prevent OpenGL from re-sizing the "points" as I zoom?
Is the solution to simply bite the bullet and re-draw the whole data set each time the user zooms or pans to make sure that the points stay the same size? Is there some way to just tell openGL to not re-calculate an object's size?
Sorry if this is in the OpenGL doc somewhere - I could not find it.
What you want is called a "point sprite." OpenGL 1.4 supports these through the ARB_point_sprite extension.
Try this tutorial
http://www.ploksoftware.org/ExNihilo/pages/Tutorialpointsprite.htm
and see if it's what you're looking for.
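A minimal sketch of what that looks like with the fixed-function pipeline, assuming markerTexture already contains the marker shape (*, ^, etc.) with alpha, and that blending/alpha testing is set up elsewhere (names and sizes are illustrative):

    #include <GL/gl.h>

    #ifndef GL_POINT_SPRITE
    #define GL_POINT_SPRITE  0x8861
    #define GL_COORD_REPLACE 0x8862
    #endif

    // Draw 'count' markers at the 3D positions in xyz; each keeps a constant
    // pixel size regardless of zoom because glPointSize is given in pixels.
    void drawMarkers(GLuint markerTexture, const float *xyz, int count, float sizePixels)
    {
        glEnable(GL_POINT_SPRITE);
        glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE); // auto texcoords per point
        glEnable(GL_TEXTURE_2D);
        glBindTexture(GL_TEXTURE_2D, markerTexture);
        glPointSize(sizePixels);

        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, xyz);
        glDrawArrays(GL_POINTS, 0, count);
        glDisableClientState(GL_VERTEX_ARRAY);

        glDisable(GL_POINT_SPRITE);
    }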
The scene is re-drawn every time the user zooms or pans, anyway, so you might as well re-calculate the size.
You suggested using a textured poly, or using polygons directly, which sound like good ideas to me. It sounds like you want the plot points to remain in the correct position in the graph as the camera moves, but you want to prevent them from changing size when the user zooms. To do this, just resize the plot point polygons so the ratio between the polygon's size and the distance to the camera remains constant. If you've got a lot of plot points, computing the distance to the camera might get expensive because of the square-root involved, but a lookup table would probably solve that.
In addition to resizing, you'll want to keep the plot points facing the camera, so billboarding is your solution, there.
An alternative is to project each of the 3D plot point locations to find out their 2D screen coordinates. Then simply render the polygons at those screen coordinates. No re-scaling necessary. However, gluProject is quite slow, so I'd be very surprised if this wasn't orders of magnitude slower than simply rescaling the plot point polygons like I first suggested.
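A minimal sketch of the constant-screen-size scaling described above for a perspective projection, assuming fovyDegrees and viewportHeight match the current projection and viewport (the function name is just for this example):

    #include <cmath>

    // World-space size a billboarded marker should have at a given distance so
    // that it covers roughly desiredPixels on screen, regardless of zoom.
    float markerWorldSize(float desiredPixels, float distanceToCamera,
                          float fovyDegrees, int viewportHeight)
    {
        // Height of the view frustum, in world units, at the marker's distance.
        float frustumHeight = 2.0f * distanceToCamera
                            * std::tan(fovyDegrees * 0.5f * 3.14159265f / 180.0f);
        return desiredPixels * frustumHeight / viewportHeight;
    }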
Good luck!
There's no easy way to do what you want to do. You'll have to dynamically resize the primitives you're drawing depending on the camera's current zoom. You can use a technique known as billboarding to make sure that your objects always face the camera.