I'm working on a data visualisation application where I need to draw about 20 different time series overlaid in 2D, each consisting of a few million data points. I need to be able to zoom and pan the data left and right to inspect it, and to place cursors on the data for measuring time intervals and inspecting individual data points. It's very important that when zoomed out all the way, I can easily spot outliers in the data and zoom in to look at them, so averaging the data can be problematic.
I have a naive implementation using a standard GUI framework on Linux which is way too slow to be practical. I'm thinking of using OpenGL instead (testing on a Radeon RX480 GPU), with orthographic projection. I searched around, and it seems that using VBOs to draw line strips might work, but I have no idea whether this is the best solution (i.e. would give me the best frame rate).
What is the best way to send data sets consisting of millions of vertices to the GPU, assuming the data does not change, and will be redrawn each time the user interacts with it (pan/zoom/click on it)?
In modern OpenGL (versions 3/4 core profile) VBOs are the standard way to transfer geometric / non-image data to the GPU, so yes you will almost certainly end up using VBOs.
Alternatives would be uniform buffers or texture buffer objects, but for the application you're describing I can't see any performance advantage in using them - they might even be worse - and they would complicate the code.
The single biggest speedup will come from having all the data points stored on the GPU instead of being copied over each frame as a 2D GUI might be doing. So do the simplest thing that works and then worry about speed.
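To make that concrete, here is a minimal sketch of the upload-once/draw-many pattern (assuming a core profile context, a shader with a vec2 position attribute at location 0, and hypothetical points/num_points holding your samples):

    /* One-time setup: upload all samples once; GL_STATIC_DRAW matches "data does not change". */
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, num_points * 2 * sizeof(float), points, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);

    /* Per frame: pan/zoom only changes the projection uniform; the data stays on the GPU. */
    glBindVertexArray(vao);
    glDrawArrays(GL_LINE_STRIP, 0, num_points);

One VBO per time series (twenty in your case) keeps things simple and lets you toggle individual series on and off.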
If you are new to OpenGL, I recommend the book "OpenGL SuperBible" 6th or 7th edition. There are many good online OpenGL guides and references, just make sure you avoid the older ones written for OpenGL 1/2.
Hope this helps.
Voxel engine (like Minecraft) optimization suggestions?
As a fun project (and to get my Minecraft-addict son excited about programming) I am building a 3D Minecraft-like voxel engine using C# .NET 4.5.1, OpenGL and GLSL 4.x.
Right now my world is built using chunks. Chunks are stored in a dictionary from which I can select them by a 64-bit X | Z<<32 key. This allows me to create an 'infinite' world that can cache chunks in and out.
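For reference, the key packing might look like this in C (the casts matter so that negative chunk coordinates don't smear sign bits into the upper half; names are illustrative):

    #include <stdint.h>

    /* Pack signed chunk coordinates into one 64-bit dictionary key: X | Z<<32. */
    uint64_t chunk_key(int32_t x, int32_t z)
    {
        return (uint64_t)(uint32_t)x | ((uint64_t)(uint32_t)z << 32);
    }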
Every chunk consists of an array of 16x16x16-block segments. Starting from level 0 (bedrock), it can go as high as you want (unlike Minecraft, where the limit is 256, I think).
Chunks are queued for generation on a separate thread when they come in view and need to be rendered. This means that chunks might not show right away. In practice you will not notice this. NOTE: I am not waiting for them to be generated, they will just not be visible immediately.
When a chunk needs to be rendered for the first time, a VBO (glGenBuffers, GL_STREAM_DRAW, etc.) is generated for that chunk, containing the potentially visible/outside faces (neighboring chunks are checked as well). [This means that a chunk potentially needs to be re-tessellated when a neighbor has been modified.] When tessellating, the opaque faces of every segment are tessellated first, then the transparent ones. Every segment knows where it starts within that vertex array and how many vertices it has, both for opaque and for transparent faces.
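Sketched in C (all names hypothetical), that per-segment bookkeeping amounts to ranged draws into the chunk's single VBO:

    /* Each 16x16x16 segment records where its faces live inside the chunk's VBO. */
    typedef struct {
        int opaque_first, opaque_count;           /* opaque-face vertex range      */
        int transparent_first, transparent_count; /* transparent-face vertex range */
    } SegmentRange;

    /* Drawing a visible segment's opaque faces is then one ranged draw call: */
    glDrawArrays(GL_TRIANGLES, seg->opaque_first, seg->opaque_count);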
Textures are taken from an array texture.
When rendering:
I first take the bounding box of the frustum and map that onto the chunk grid. Using that knowledge I pick every chunk that is within the frustum and within a certain distance of the camera.
Now I do a distance sort on the chunks.
After that I determine the ranges (index, length) of the chunks-segments that are actually visible. NOW I know exactly what segments (and what vertex ranges) are 'at least partially' in view. The only excess segments that I have are the ones that are hidden behind mountains or 'sometimes' deep underground.
Then I start rendering... first I render the opaque faces [culling and depth test enabled, alpha test and blend disabled], front to back, using the known vertex ranges. Then I render the transparent faces back to front [blend enabled].
Now... does anyone know a way of improving this while still allowing dynamic generation of an infinite world? I am currently reaching ~80 fps @ 1920x1080 and ~120 fps @ 1024x768 (screenshots: http://i.stack.imgur.com/t4k30.jpg, http://i.stack.imgur.com/prV8X.jpg) on an average 2.2 GHz i7 laptop with an ATI HD8600M graphics card. I think it must be possible to increase the number of frames, and I think I have to, as I want to add entity AI, sound, and bump and specular mapping. Could using occlusion queries help me out? I can't really imagine it would, based on the nature of the segments. I have already minimized the creation of objects, so there is no 'new Object' all over the place. Also, as the performance doesn't really change between Debug and Release mode, I don't think it's the code but rather the approach to the problem.
Edit: I have been thinking of using GL_SAMPLE_ALPHA_TO_COVERAGE, but it doesn't seem to be working:
    gl.Enable(GL.DEPTH_TEST);
    gl.Disable(GL.BLEND);                   // alpha-to-coverage replaces blending; blend must be off
    gl.Enable(GL.MULTISAMPLE);              // only has an effect on a multisampled framebuffer
    gl.Enable(GL.SAMPLE_ALPHA_TO_COVERAGE);
To render a lot of similar objects, I strongly suggest you take a look at instanced drawing: glDrawArraysInstanced and/or glDrawElementsInstanced.
It made a huge difference for me; I'm talking about going from 2 fps to over 60 fps to render 100,000 similar icosahedra.
You can parametrize your cubes using attributes (glVertexAttribDivisor and friends) to make them different. Hope this helps.
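A minimal host-side sketch (the attribute index and names are placeholders; the base cube mesh, its index buffer, and the shader are assumed to be set up already):

    /* Per-instance data: one vec3 offset per cube. */
    GLuint instance_vbo;
    glGenBuffers(1, &instance_vbo);
    glBindBuffer(GL_ARRAY_BUFFER, instance_vbo);
    glBufferData(GL_ARRAY_BUFFER, num_cubes * 3 * sizeof(float), offsets, GL_STATIC_DRAW);
    glEnableVertexAttribArray(3);
    glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);
    glVertexAttribDivisor(3, 1); /* attribute 3 advances once per instance, not per vertex */

    /* One call renders every cube; the vertex shader adds the per-instance offset. */
    glDrawElementsInstanced(GL_TRIANGLES, 36, GL_UNSIGNED_INT, 0, num_cubes);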
It's at ~200 fps currently, which should be OK. The three main things that I've done are:
1) generating the chunks on a separate thread;
2) tessellating the chunks on a separate thread;
3) using a deferred rendering pipeline.
I don't really think the last one contributed much to the overall performance, but I had to start using it because of some of the shaders. Now the CPU is sort of falling asleep at ~11%.
This question is pretty old, but I'm working on a similar project. I approached it almost exactly the same way as you, however I added in one additional optimization that helped out a lot.
For each chunk, I determine which sides are completely opaque. I then use that information to do a flood fill through the chunks to cull out the ones that are underground. Note, I'm not checking individual blocks when I do the flood fill, only a precomputed bitmask for each chunk.
When I'm computing the bitmask, I also check to see if the chunk is entirely empty, since empty chunks can obviously be ignored.
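A sketch of that chunk-level flood fill in C (all names hypothetical; solid_faces is the precomputed bitmask with one bit per completely opaque side, and neighbour[] holds the six adjacent chunks):

    /* Breadth-first flood fill from the camera's chunk: a neighbour is reachable only
     * if the shared face is not completely opaque. Chunks never reached stay hidden. */
    void flood_visible(Chunk *start)
    {
        ChunkQueue q;
        queue_init(&q);
        queue_push(&q, start);
        start->visited = 1;
        while (!queue_empty(&q)) {
            Chunk *c = queue_pop(&q);
            c->visible = 1;
            for (int face = 0; face < 6; face++) {
                Chunk *n = c->neighbour[face];
                if (n && !n->visited && !(c->solid_faces & (1u << face))) {
                    n->visited = 1;
                    queue_push(&q, n);
                }
            }
        }
    }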
I am emulating an old client, and most of the textures are 128x128. Since I know that switching textures and render calls are expensive, could I get away with creating, at runtime, a very large texture atlas of all the tiny textures?
Then from there, could I bind the large texture and render via shaders with the texture offset in the atlas? What kind of performance hit would this have?
My second question involves sorting. The level files are broken up into little chunks of a BSP tree. They are very small, and there are often thousands per level. The current way, which is definitely slower, is that I render each group of textures in each BSP leaf that is in my frustum and PVS (Quake 3 style). What would be the ideal fix for this?
Would I want to run through each region (back to front) I could see, group all of the visible triangles by texture, and then render them all at once? For some reason, I always feel this may be slower. Does it make more sense to sort first and render all at once, or to skip sorting and render each region at a time?
Yes, it should be possible to stuff textures into a texture atlas. Careful consideration would need to be given to issues like whether the textures are being tiled, and what you want to happen with mip-maps and filtering.
I'd probably suggest holding off on doing that until you've got the geometry sorted out - reducing the number of draw calls to a sane number may well be all you need to do. The cost of changing a texture isn't necessarily that bad on modern hardware.
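For what it's worth, the per-vertex remapping is trivial once the atlas exists; here is a sketch assuming a square atlas of equal 128x128 tiles (names are illustrative):

    /* Map a tile's local (u,v) in [0,1] to atlas coordinates.
     * tiles_per_row = atlas_width / 128 for equal 128x128 tiles. */
    void atlas_uv(int tile_index, int tiles_per_row, float u, float v,
                  float *atlas_u, float *atlas_v)
    {
        float tile_size = 1.0f / (float)tiles_per_row;
        *atlas_u = ((float)(tile_index % tiles_per_row) + u) * tile_size;
        *atlas_v = ((float)(tile_index / tiles_per_row) + v) * tile_size;
    }

Note that GL_REPEAT tiling no longer works across atlas cells, which is exactly the tiling caveat mentioned above.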
As for rendering the level, on modern hardware it's more usual to render front to back, rather than back to front, in order to take advantage of the z-buffer and early culling (i.e. if you have a wall right in front of the camera, and you draw that first with z-buffering enabled, the hardware is fairly good at rejecting stuff you then try to draw behind it).
One possible approach would be to reprocess the BSP into a rougher structure, such as a simple grid (or maybe a quad or oct tree). You don't even need to split polygons neatly, just have a bunch of sectors with a bounding-box around them, and frustum-cull then loosely sort (front to back) the boxes. You can keep a PVS with this approach too, but the usefulness of that might drop when your rendering chunks become larger and coarser.
However, before doing any of this, I'd definitely recommend setting up some benchmark tests and recording performance information. You won't know for sure whether you're doing the right thing unless you actually analyse the performance. If you can pinpoint exactly which thing you're doing performs worst, you just need to fix that.
I'm trying, in JOGL, to pick from a large set of rendered quads (several thousand). Does anyone have any recommendations?
To give you more detail, I'm plotting a large set of data as billboards with procedurally created textures.
I've seen this post OpenGL GL_SELECT or manual collision detection? and have found it helpful. However it can take my program up to several minutes to complete a rendering of the full set, so I don't think drawing 2x (for color picking) is an option.
I'm currently drawing with calls to glBegin/glVertex.../glEnd. If I made the switch to batched rendering on the GPU with VAOs and VBOs, do you think I would get a speedup large enough to make color picking feasible?
If not, given all of the recommendations against using GL_SELECT, do you think it would be worth using it anyway?
I've investigated multithreaded CPU approaches to picking these quads that sidestep OpenGL altogether. Do you think an OpenGL-less CPU solution is the way to go?
Sorry for all the questions. My main question remains: what's a good way to pick from a large set of quads using OpenGL (JOGL)?
The best way to pick from a large number of quads cannot be easily defined. I don't like color picking or similar techniques very much, because they seem impractical for most situations. I never understood why so many tutorials aimed at people who are new to OpenGL (or even to programming) focus on picking techniques that are useless for nearly everything. For example: try to get the pixel you clicked on in a heightmap: not possible. Try to locate the exact mesh in a model you clicked on: impractical.
If you have a large number of quads, you will probably want a good spatial partitioning structure, or at least (better: as well) a scene graph. OK, you don't strictly need this, but it helps a lot. Look at some tutorials on scene graphs for further information; they are a good thing to know when you start with 3D programming, because you learn a lot of concepts and not just OpenGL code.
So what to do now to start with some picking? Take the inverse of your modelview matrix (gluUnProject can do this for you) at the position of your mouse cursor. With the orientation of your camera, you can now cast a ray into your spatial structure (or into the scene graph that holds the spatial structure) and check for collisions with your quads. I currently have no link, but if you search for 'inverse modelview matrix' you should find pages that explain this better and in more detail than is practical here.
With this raycasting-based technique you will be able to find your quad in O(log n), where n is the number of quads you have. With some heuristics based on the exact layout of your application (your question is too generic to be more specific), you can improve this a lot for most cases.
An easy spatial structure for this is, for example, a quadtree. However, you should start with the raycasting first to fully understand the technique.
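A sketch of building that pick ray with GLU (fixed-function matrices, matching the glBegin/glEnd-style pipeline from the question; names are illustrative):

    #include <GL/glu.h>

    /* Unproject the mouse position at the near and far planes to get a world-space ray. */
    void pick_ray(int mouse_x, int mouse_y, double origin[3], double dir[3])
    {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        GLdouble wy = view[3] - mouse_y; /* window y axis is flipped relative to GL */
        GLdouble nx, ny, nz, fx, fy, fz;
        gluUnProject(mouse_x, wy, 0.0, model, proj, view, &nx, &ny, &nz);
        gluUnProject(mouse_x, wy, 1.0, model, proj, view, &fx, &fy, &fz);

        origin[0] = nx; origin[1] = ny; origin[2] = nz;
        dir[0] = fx - nx; dir[1] = fy - ny; dir[2] = fz - nz;
        /* Normalise dir, then walk the quadtree testing ray/quad intersections. */
    }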
I've never faced this problem myself, but in my opinion a CPU-based picking approach is the best way to try.
If you have a large set of quads, maybe you can group the quads spatially to avoid testing all of them. For example, you could group the quads into two boxes and first test which box you hit; then you only need to test the quads inside that box.
I just implemented color picking, but glReadPixels is slow here (I've read somewhere that it can be bad for asynchronous behaviour between the GPU and the CPU, since it forces a sync).
Another possibility seems to me to be using transform feedback and a geometry shader that does the scissor test. The geometry shader can then discard all faces that do not contain the mouse position, so the transform feedback buffer contains exactly the information about the hovered meshes.
You probably want to write the depth to the transform feedback buffer too, so that you can find the topmost hovered mesh.
This approach also works nicely with instancing (additionally write the instance ID to the buffer).
I haven't tried it yet, but I guess it will be a lot faster than using glReadPixels.
I only found this reference for this approach.
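For reference, the host-side setup for such a picking pass might look roughly like this (a sketch; the geometry shader that does the scissor test and emits one point per hit is omitted, and the program is assumed to have been linked with matching glTransformFeedbackVaryings):

    #define MAX_HITS 64

    /* Buffer to capture (primitive id, depth) pairs written by the geometry shader. */
    GLuint tf_buf;
    float results[MAX_HITS * 2];
    glGenBuffers(1, &tf_buf);
    glBindBuffer(GL_TRANSFORM_FEEDBACK_BUFFER, tf_buf);
    glBufferData(GL_TRANSFORM_FEEDBACK_BUFFER, sizeof(results), NULL, GL_DYNAMIC_READ);
    glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tf_buf);

    glEnable(GL_RASTERIZER_DISCARD); /* picking pass: we don't need any pixels */
    GLuint query;
    glGenQueries(1, &query);
    glBeginQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN, query);
    glBeginTransformFeedback(GL_POINTS); /* the geometry shader outputs points, one per hit */
    /* ... draw the scene with the picking program ... */
    glEndTransformFeedback();
    glEndQuery(GL_TRANSFORM_FEEDBACK_PRIMITIVES_WRITTEN);
    glDisable(GL_RASTERIZER_DISCARD);

    GLuint hits = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &hits);
    glGetBufferSubData(GL_TRANSFORM_FEEDBACK_BUFFER, 0, hits * 2 * sizeof(float), results);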
I'm using a solution that I borrowed from the DirectX SDK; there's a nice example there of how to detect the selected polygon in a vertex buffer object.
The same algorithm works nicely with OpenGL.
There are already a number of questions about text rendering in OpenGL, such as:
How to do OpenGL live text-rendering for a GUI?
But mostly what is discussed there is rendering textured quads using the fixed-function pipeline. Surely shaders must offer a better way.
I'm not really concerned about internationalization, most of my strings will be plot tick labels (date and time or purely numeric). But the plots will be re-rendered at the screen refresh rate and there could be quite a bit of text (not more than a few thousand glyphs on-screen, but enough that hardware accelerated layout would be nice).
What is the recommended approach for text-rendering using modern OpenGL? (Citing existing software using the approach is good evidence that it works well)
Geometry shaders that accept e.g. position and orientation and a character sequence and emit textured quads
Geometry shaders that render vector fonts
As above, but using tessellation shaders instead
A compute shader to do font rasterization
Rendering outlines, unless you render only a dozen characters total, remains a "no go" due to the number of vertices needed per character to approximate curvature. Though there have been approaches to evaluate bezier curves in the pixel shader instead, these suffer from not being easily antialiased, which is trivial using a distance-map-textured quad, and evaluating curves in the shader is still computationally much more expensive than necessary.
The best trade-off between "fast" and "quality" are still textured quads with a signed distance field texture. It is very slightly slower than using a plain normal textured quad, but not so much. The quality on the other hand, is in an entirely different ballpark. The results are truly stunning, it is as fast as you can get, and effects such as glow are trivially easy to add, too. Also, the technique can be downgraded nicely to older hardware, if needed.
See the famous Valve paper for the technique.
The technique is conceptually similar to how implicit surfaces (metaballs and such) work, though it does not generate polygons. It runs entirely in the pixel shader and takes the distance sampled from the texture as a distance function. Everything above a chosen threshold (usually 0.5) is "in", everything else is "out". In the simplest case, on 10 year old non-shader-capable hardware, setting the alpha test threshold to 0.5 will do that exact thing (though without special effects and antialiasing).
If one wants to add a little more weight to the font (faux bold), a slightly smaller threshold will do the trick without modifying a single line of code (just change your "font_weight" uniform). For a glow effect, one simply considers everything above one threshold as "in" and everything above another (smaller) threshold as "out, but in glow", and LERPs between the two. Antialiasing works similarly.
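The per-pixel logic is tiny. Mirrored in C for illustration (the fragment shader version would typically use smoothstep; the parameter names are made up):

    /* d: distance sampled from the texture, in [0,1], where 0.5 is the glyph edge.
     * weight: threshold (0.5 = normal, smaller = bolder). aa: half-width of the
     * antialiasing band around the edge. Returns coverage in [0,1]. */
    float sdf_coverage(float d, float weight, float aa)
    {
        float t0 = weight - aa, t1 = weight + aa;
        if (d <= t0) return 0.0f;
        if (d >= t1) return 1.0f;
        return (d - t0) / (t1 - t0); /* linear ramp; GLSL would use smoothstep(t0, t1, d) */
    }

A glow is the same idea with a second, lower threshold, lerping between the glow colour and the glyph colour.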
By using an 8-bit signed distance value rather than a single bit, this technique increases the effective resolution of your texture map 16-fold in each dimension (instead of black and white, all possible shades are used, thus we have 256 times the information using the same storage). But even if you magnify far beyond 16x, the result still looks quite acceptable. Long straight lines will eventually become a bit wiggly, but there will be no typical "blocky" sampling artefacts.
You can use a geometry shader for generating the quads out of points (reduce bus bandwidth), but honestly the gains are rather marginal. The same is true for instanced character rendering as described in GPG8. The overhead of instancing is only amortized if you have a lot of text to draw. The gains are, in my opinion, in no relation to the added complexity and non-downgradeability. Plus, you are either limited by the amount of constant registers, or you have to read from a texture buffer object, which is non-optimal for cache coherence (and the intent was to optimize to begin with!).
A simple, plain old vertex buffer is just as fast (possibly faster) if you schedule the upload a bit ahead in time and will run on every hardware built during the last 15 years. And, it is not limited to any particular number of characters in your font, nor to a particular number of characters to render.
If you are sure that you do not have more than 256 characters in your font, texture arrays may be worth considering as a way to save bus bandwidth, in a similar manner to generating quads from points in the geometry shader. When using an array texture, the texture coordinates of all quads have identical, constant s and t coordinates and differ only in the r coordinate, which is equal to the character index to render.
But like with the other techniques, the expected gains are marginal at the cost of being incompatible with previous generation hardware.
There is a handy tool by Jonathan Dummer for generating distance textures: description page
Update:
As more recently pointed out in Programmable Vertex Pulling (D. Rákos, "OpenGL Insights", pp. 239), there is no significant extra latency or overhead associated with pulling vertex data programmatically from the shader on the newest generations of GPUs, as compared to doing the same using the standard fixed function.
Also, the latest generations of GPUs have more and more reasonably sized general-purpose L2 caches (e.g. 1536kiB on nvidia Kepler), so one may expect the incoherent access problem when pulling random offsets for the quad corners from a buffer texture being less of a problem.
This makes the idea of pulling constant data (such as quad sizes) from a buffer texture more attractive. A hypothetical implementation could thus reduce PCIe and memory transfers, as well as GPU memory, to a minimum with an approach like this:
Only upload a character index (one per character to be displayed) as the sole input to a vertex shader that passes on this index and gl_VertexID, and amplify that to 4 points in the geometry shader, still having the character index and the vertex ID (this will be "gl_PrimitiveID made available in the vertex shader") as the sole attributes, and capture this via transform feedback.
This will be fast, because there are only two output attributes (main bottleneck in GS), and it is close to "no-op" otherwise in both stages.
Bind a buffer texture which contains, for each character in the font, the textured quad's vertex positions relative to the base point (these are basically the "font metrics"). This data can be compressed to 4 numbers per quad by storing only the offset of the bottom left vertex, and encoding the width and height of the axis-aligned box (assuming half floats, this will be 8 bytes of constant buffer per character -- a typical 256 character font could fit completely into 2kiB of L1 cache).
Set a uniform for the baseline.
Bind a buffer texture with horizontal offsets. These could probably even be calculated on the GPU, but it is much easier and more efficient to do that kind of thing on the CPU, as it is a strictly sequential operation and not at all trivial (think of kerning). Also, it would need another feedback pass, which would be another sync point.
Render the previously generated data from the feedback buffer, the vertex shader pulls the horizontal offset of the base point and the offsets of the corner vertices from buffer objects (using the primitive id and the character index). The original vertex ID of the submitted vertices is now our "primitive ID" (remember the GS turned the vertices into quads).
Like this, one could ideally reduce the required vertex bandwidth by 75% (amortized), though it would only be able to render a single line. If one wanted to be able to render several lines in one draw call, one would need to add the baseline to the buffer texture, rather than using a uniform (making the bandwidth gains smaller).
However, even assuming a 75% reduction -- since the vertex data to display "reasonable" amounts of text is only somewhere around 50-100kiB (which is practically zero to a GPU or a PCIe bus) -- I still doubt that the added complexity and losing backwards-compatibility is really worth the trouble. Reducing zero by 75% is still only zero. I have admittedly not tried the above approach, and more research would be needed to make a truly qualified statement. But still, unless someone can demonstrate a truly stunning performance difference (using "normal" amounts of text, not billions of characters!), my point of view remains that for the vertex data, a simple, plain old vertex buffer is justifiably good enough to be considered part of a "state of the art solution". It's simple and straightforward, it works, and it works well.
Having already referenced "OpenGL Insights" above, it is worth also pointing out the chapter "2D Shape Rendering by Distance Fields" by Stefan Gustavson, which explains distance field rendering in great detail.
Update 2016:
Meanwhile, there exist several additional techniques which aim to remove the corner rounding artefacts which become disturbing at extreme magnifications.
One approach simply uses pseudo-distance fields instead of distance fields (the difference being that the distance is the shortest distance not to the actual outline, but to the outline or an imaginary line protruding over the edge). This is somewhat better, and runs at the same speed (identical shader), using the same amount of texture memory.
Another approach uses the median of three values stored in a three-channel texture; details and an implementation are available on GitHub. This aims to be an improvement over the and-or hacks used previously to address the issue. The quality is good, and it is only slightly (almost unnoticeably) slower, but it uses three times as much texture memory. Also, extra effects (e.g. glow) are harder to get right.
Lastly, storing the actual bezier curves making up characters, and evaluating them in a fragment shader has become practical, with slightly inferior performance (but not so much that it's a problem) and stunning results even at highest magnifications.
WebGL demo rendering a large PDF with this technique in real time available here.
http://code.google.com/p/glyphy/
The main difference between GLyphy and other SDF-based OpenGL renderers is that most other projects sample the SDF into a texture. This has all the usual problems that sampling has; i.e., it distorts the outline and is of low quality. GLyphy instead represents the SDF using actual vectors submitted to the GPU. This results in very high quality rendering.
The downside is that the code is for iOS with OpenGL ES. I'm probably going to make a Windows/Linux OpenGL 4.x port (hopefully the author will add some real documentation, though).
The most widespread technique is still textured quads. However in 2005 LORIA developed something called vector textures, i.e. rendering vector graphics as textures on primitives. If one uses this to convert TrueType or OpenType fonts into a vector texture you get this:
http://alice.loria.fr/index.php/publications.html?Paper=VTM#2005
I'm surprised Mark Kilgard's baby, NV_path_rendering (NVpr), was not mentioned by any of the above. Although its goals are more general than font rendering, it can also render text from fonts and with kerning. It doesn't even require OpenGL 4.1, but it is a vendor/Nvidia-only extension at the moment. It basically turns fonts into paths using glPathGlyphsNV which depends on the freetype2 library to get the metrics, etc. Then you can also access the kerning info with glGetPathSpacingNV and use NVpr's general path rendering mechanism to display text from using the path-"converted" fonts. (I put that in quotes, because there's no real conversion, the curves are used as is.)
The recorded demo for NVpr's font capabilities is unfortunately not particularly impressive. (Maybe someone should make one along the lines of the much snazzier SDF demo one can find on the intertubes...)
The 2011 NVpr API presentation talk for the fonts part starts here and continues in the next part; it is a bit unfortunate how that presentation is split.
More general materials on NVpr:
Nvidia NVpr hub, but some material on the landing page is not the most up-to-date
Siggraph 2012 paper for the brains of the path-rendering method, called "stencil, then cover" (StC); the paper also explains briefly how competing tech like Direct2D works. The font-related bits have been relegated to an annex of the paper. There are also some extras like videos/demos.
GTC 2014 presentation for an update status; in a nutshell: it's now supported by Google's Skia (Nvidia contributed the code in late 2013 and 2014), which in turn is used in Google Chrome and [independently of Skia, I think] in a beta of Adobe Illustrator CC 2014
the official documentation in the OpenGL extension registry
USPTO has granted at least four patents to Kilgard/Nvidia in connection with NVpr, of which you should probably be aware of, in case you want to implement StC by yourself: US8698837, US8698808, US8704830 and US8730253. Note that there are something like 17 more USPTO documents connected to this as "also published as", most of which are patent applications, so it's entirely possible more patents may be granted from those.
And since the word "stencil" did not produce any hits on this page before my answer, it appears the subset of the SO community that has participated on this page so far, despite being pretty numerous, was unaware of tessellation-free, stencil-buffer-based methods for path/font rendering in general. Kilgard has a FAQ-like post on the OpenGL forum which may illuminate how the tessellation-free path rendering methods differ from bog-standard 3D graphics, even though they still use a [GP]GPU. (NVpr needs a CUDA-capable chip.)
For historical perspective, Kilgard is also the author of the classic "A Simple OpenGL-based API for Texture Mapped Text", SGI, 1997, which should not be confused with the stencil-based NVpr that debuted in 2011.
Most if not all the recent methods discussed on this page, including stencil-based methods like NVpr or SDF-based methods like GLyphy (which I'm not discussing here any further because other answers already cover it) have however one limitation: they are suitable for large text display on conventional (~100 DPI) monitors without jaggies at any level of scaling, and they also look nice, even at small size, on high-DPI, retina-like displays. They don't fully provide what Microsoft's Direct2D+DirectWrite gives you however, namely hinting of small glyphs on mainstream displays. (For a visual survey of hinting in general see this typotheque page for instance. A more in-depth resource is on antigrain.com.)
I'm not aware of any open & productized OpenGL-based stuff that can do what Microsoft can with hinting at the moment. (I admit ignorance to Apple's OS X GL/Quartz internals, because to the best of my knowledge Apple hasn't published how they do GL-based font/path rendering stuff. It seems that OS X, unlike MacOS 9, doesn't do hinting at all, which annoys some people.) Anyway, there is one 2013 research paper that addresses hinting via OpenGL shaders written by INRIA's Nicolas P. Rougier; it is probably worth reading if you need to do hinting from OpenGL. While it may seem that a library like freetype already does all the work when it comes to hinting, that's not actually so for the following reason, which I'm quoting from the paper:
The FreeType library can rasterize a glyph using sub-pixel anti-aliasing in RGB mode. However, this is only half of the problem, since we also want to achieve sub-pixel positioning for accurate placement of the glyphs. Displaying the textured quad at fractional pixel coordinates does not solve the problem, since it only results in texture interpolation at the whole-pixel level. Instead, we want to achieve a precise shift (between 0 and 1) in the subpixel domain. This can be done in a fragment shader [...].
The solution is not exactly trivial, so I'm not going to try to explain it here. (The paper is open-access.)
One other thing I've learned from Rougier's paper (and which Kilgard doesn't seem to have considered) is that the font powers that be (Microsoft+Adobe) have created not one but two kerning specification methods. The old one is based on a so-called kern table and it is supported by freetype. The new one is called GPOS and it is only supported by newer font libraries like HarfBuzz or pango in the free software world. Since NVpr doesn't seem to support either of those libraries, kerning might not work out of the box with NVpr for some new fonts; there are some of those apparently in the wild, according to this forum discussion.
Finally, if you need to do complex text layout (CTL) you seem to be currently out of luck with OpenGL as no OpenGL-based library appears to exist for that. (DirectWrite on the other hand can handle CTL.) There are open-sourced libraries like HarfBuzz which can render CTL, but I don't know how you'd get them to work well (as in using the stencil-based methods) via OpenGL. You'd probably have to write the glue code to extract the re-shaped outlines and feed them into NVpr or SDF-based solutions as paths.
I think your best bet would be to look into Cairo graphics with its OpenGL backend.
The only problem I had when developing a prototype with the 3.3 core profile was deprecated function usage in the OpenGL backend. That was 1-2 years ago, so the situation might have improved...
Anyway, I hope that in the future desktop OpenGL graphics drivers will implement OpenVG.