How to make OpenGL mipmaps sharper?

I am rewriting an opengl-based gis/mapping program. Among other things, the program allows you to load raster images of nautical charts, fix them to lon/lat coordinates and zoom and pan around on them.
The previous version of the program uses a custom tiling system, where in essence it manually creates mipmaps of the original image, in the form of 256x256-pixel tiles at various power-of-two zoom levels. A tile for zoom level n - 1 is constructed from four tiles from zoom level n, using a simple average-of-four-points algorithm. So it turns off OpenGL mipmapping, and instead, when it comes time to draw some part of the chart at some zoom level, it uses the tiles from the nearest-matching zoom level (the tiles exist only at power-of-two zoom levels, but the program allows arbitrary zoom levels) and then scales the tiles to match the actual zoom level. And of course it has to manage a cache of all these tiles at various levels.
It seemed to me that this tiling system was overly complex. It seemed like I should be able to let the graphics hardware do all of this mipmapping work for me. So in the new program, when I read in an image, I chop it into textures of 1024x1024 pixels each. Then I fix each texture to its lon/lat coordinates, and then I let opengl handle the rest as I zoom and pan around.
It works, but the problem is: My results are a bit blurrier than the original program, which matters for this application because you want to be able to read text on the charts as early as possible, zoom-wise. So it's seeming like the simple average-of-four-points algorithm the original program uses gives better results than opengl + my GPU, in terms of sharpness.
I know there are several glTexParameter settings to control some aspects of how mipmaps work. I've tried various combinations of GL_TEXTURE_MAX_LEVEL (anywhere from 0 to 10) with various settings for GL_TEXTURE_MIN_FILTER. When I set GL_TEXTURE_MAX_LEVEL to 0 (no mipmaps), I certainly get "sharp" results, but they are too sharp, in the sense that pixels just get dropped here and there, so the numbers are unreadable at intermediate zooms. When I set GL_TEXTURE_MAX_LEVEL to a higher value, the image looks quite good when you are zoomed far out (e.g., when the whole chart fits on the screen), but as you zoom in to intermediate zooms, you notice the blurriness especially when looking at text on the charts. (I.e., if it weren't for the text you might think "wow, opengl is doing a nice job of smoothly scaling my image." but with the text you think "why is this chart out of focus?")
My understanding is that basically you tell opengl to generate mipmaps, and then as you zoom in it picks the appropriate mipmaps to use, and there are some limited options for interpolating between the two closest mipmap levels, and either using the closest pixels or averaging the nearby pixels. However, as I say, none of these combinations seem to give quite as clear results, at the same zoom level on the chart (i.e., a zoom level where text is small but not minuscule, like the equivalent of "7 point" or "8 point" size), as the previous tile-based version.
My conclusion is that the mipmaps that opengl creates are simply blurrier than the ones the previous program created with the average-four-point algorithm, and no amount of choosing the right mipmap or LINEAR vs NEAREST is going to get the sharpness I need.
Specific questions:
(1) Does it seem right that opengl is in fact making blurrier mipmaps than the average-four-points algorithm from the original program?
(2) Is there something I might have overlooked in my use of glTexParameter that could give sharper results using the mipmaps opengl is making?
(3) Is there some way I can get opengl to make sharper mipmaps in the first place, such as by using a "cubic" filter or otherwise controlling the mipmap creation process? Or for that matter it seems like I could use the same average-four-points code to manually generate the mipmaps and hand them off to opengl. But I don't know how to do that...

(1) It seems unlikely; I'd expect it to use a box filter, which is average-of-four-points in effect. Possibly it's just switching from one mipmap level to a higher-resolution one at a different moment; e.g. it "chooses the mipmap that most closely matches the size of the pixel being textured", so a 256x256 map will be used to texture a 383x383 area, whereas the manual system it replaces may always have scaled down from 512x512 until the target size was 256x256 or less.
(2) not that I'm aware of in base GL, but if you were to switch to GLSL and the programmable pipeline then you could use the 'bias' parameter to texture2D if the problem is that the lower resolution map is being used when you don't want it to be. Similarly, the GL_EXT_texture_lod_bias extension can do the same in the fixed pipeline. It's an NVidia extension from a decade ago and is something all programmable cards could do, so it's reasonably likely you'll have it.
(EDIT: reading the extension more thoroughly, texture bias migrated into the core spec of OpenGL in version 1.4; clearly my man pages are very out of date. Checking the 1.4 spec, page 279, you can supply a GL_TEXTURE_LOD_BIAS)
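For illustration, a minimal sketch of setting a negative LOD bias; the -0.5 value is just a starting point to experiment with, not a recommendation, and chartTexture is a hypothetical texture id:

    /* Per-texture bias (modern GL): a negative value pulls mipmap selection
     * toward the sharper, higher-resolution level. */
    glBindTexture(GL_TEXTURE_2D, chartTexture);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_LOD_BIAS, -0.5f);

    /* Fixed-pipeline texture-environment form, core since OpenGL 1.4: */
    glTexEnvf(GL_TEXTURE_FILTER_CONTROL, GL_TEXTURE_LOD_BIAS, -0.5f);

A negative bias makes OpenGL pick a higher-resolution mipmap level than it otherwise would, which directly counteracts the "out of focus" look at intermediate zooms.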
(3) yes: if you disable GL_GENERATE_MIPMAP then you can use glTexImage2D to supply whatever image you like for every level of scale, which is what the 'level' parameter dictates. So you can supply completely unrelated mipmaps if you want; a sketch follows below.
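A minimal sketch of that, assuming square power-of-two RGBA8 tiles (like the 1024x1024 textures mentioned in the question); the function name is made up, and the target texture is assumed to be bound already:

    #include <stdlib.h>
    #include <GL/gl.h>

    /* Build every mipmap level with the same average-of-four box filter the
     * old tiler used, and hand each one to GL via the 'level' parameter. */
    static void uploadWithCustomMipmaps(const unsigned char *pixels, int size)
    {
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, size, size, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);

        const unsigned char *src = pixels;
        unsigned char *prev = NULL;
        for (int level = 1, w = size; w > 1; ++level) {
            int nw = w / 2;
            unsigned char *dst = malloc((size_t)nw * nw * 4);
            for (int y = 0; y < nw; ++y)
                for (int x = 0; x < nw; ++x)
                    for (int c = 0; c < 4; ++c) {
                        /* average the 2x2 block of the parent level */
                        int sum = src[((2*y  )*w + 2*x  )*4 + c]
                                + src[((2*y  )*w + 2*x+1)*4 + c]
                                + src[((2*y+1)*w + 2*x  )*4 + c]
                                + src[((2*y+1)*w + 2*x+1)*4 + c];
                        dst[(y*nw + x)*4 + c] = (unsigned char)(sum / 4);
                    }
            glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, nw, nw, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, dst);
            free(prev);   /* free the level allocated in the previous iteration */
            prev = dst;
            src = dst;
            w = nw;
        }
        free(prev);
    }

Level 0 is your base image, and each successive call with level = 1, 2, ... supplies the next half-size image, so the filtering chain is exactly the output of your own downscaler.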

To answer your specific points, the four-point filtering you mention is equivalent to box-filtering. This is less blurry than higher-order filters, but can result in aliasing patterns. One of the best filters is the Lanczos filter. I suggest you calculate all of your mipmap levels from the base texture using a Lanczos filter and crank up the anisotropic filtering settings on your graphics card.
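Enabling anisotropic filtering is only a couple of lines; this sketch assumes the (near-ubiquitous) EXT_texture_filter_anisotropic extension is available:

    GLfloat maxAniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &maxAniso);   /* driver's limit */
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, maxAniso);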
I assume that the original code managed textures itself because it was designed to view data sets that are too large to fit into graphics memory. This was probably a bigger problem in the past, but is still a concern.

Related

What does my choice of GLFW_SAMPLES actually do?

What does setting this variable do? For instance, if I set it to 4, what does that mean?
I read a description on glfw.org (see here: GLFW Window Guide) under the "Framebuffer related hints" section. The manual says "GLFW_SAMPLES specifies the desired number of samples to use for multisampling. Zero disables multisampling. GLFW_DONT_CARE means the application has no preference."
I also read a description of multisampling in general (see here: Multisampling by Shawn Hargreaves).
I have a rough idea of what multisampling means: when resizing and redrawing an image, the number of points used to redraw the image should be close enough together that what we see is an accurate representation of the image. The same idea pops up with digital oscilloscopes. Say you're sampling a sinusoidal signal; if the sampling rate happens to be exactly equal to the frequency f of the wave, the scope displays a constant voltage, which is much different from the input signal you're hoping to see. To avoid that, the Nyquist theorem tells us to sample at a rate of at least 2f. So I see how a problem can arise in computer graphics, but I don't know exactly what the function glfwWindowHint(GLFW_SAMPLES, 4); does.
What does setting this variable do? For instance, if I set it to 4, what does that mean?
GLFW_SAMPLES is used to enable multisampling. So glfwWindowHint(GLFW_SAMPLES, 4) is a way to enable 4x MSAA in your application.
4x MSAA means that each pixel of the window's buffer consists of 4 subsamples, so coverage is evaluated at 4 points per pixel. In terms of storage, a buffer of 200x100 pixels with 4x MSAA holds as many samples as a 400x200 buffer holds pixels (4 times as many, i.e. twice in each dimension).
If you were instead to render to an offscreen framebuffer with twice the width and height of the screen and then draw it to the screen as a texture with GL_LINEAR as the filter, you would get roughly the same result. Note that this correspondence only holds for 4x, as GL_LINEAR only takes the 4 samples closest to the pixel in question.
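For comparison, here is a minimal sketch of an explicitly multisampled offscreen buffer; unlike the GL_LINEAR analogy above, real MSAA samples are resolved with a blit. It assumes a GL 3.0+ context with a function loader, and width/height are illustrative:

    GLuint msFbo, msColor;
    glGenFramebuffers(1, &msFbo);
    glGenRenderbuffers(1, &msColor);
    glBindRenderbuffer(GL_RENDERBUFFER, msColor);
    glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, width, height);
    glBindFramebuffer(GL_FRAMEBUFFER, msFbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                              GL_RENDERBUFFER, msColor);

    /* ... render the scene into msFbo ... */

    glBindFramebuffer(GL_READ_FRAMEBUFFER, msFbo);
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);   /* default framebuffer */
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);   /* resolves the 4 samples */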
When it comes to anti-aliasing, MSAA is a really effective but expensive solution. If you want a very clean and good-looking result, it's definitely the way to go; 4x MSAA is usually chosen because it offers a good balance between quality and performance.
The cheaper alternative in terms of performance is FXAA, which is done as a post-processing step and usually comes at very little cost. The difference is that MSAA renders at a higher sample count and downsamples to the wanted size, losing no quality, whereas FXAA simply averages the rendered pixels as they are, basically blurring the image. FXAA usually still gives a decent result.
Also, your driver will most likely enable multisampling by default, but if it doesn't, use glEnable(GL_MULTISAMPLE).
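Putting it together with GLFW, a minimal setup (after glfwInit()) might look like this:

    glfwWindowHint(GLFW_SAMPLES, 4);   /* request a 4x MSAA default framebuffer */
    GLFWwindow *win = glfwCreateWindow(800, 600, "MSAA", NULL, NULL);
    glfwMakeContextCurrent(win);
    glEnable(GL_MULTISAMPLE);          /* usually on by default, but explicit is safer */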
Lastly if you haven't already, then I highly recommend reading LearnOpenGL's Anti-Aliasing tutorial. It gives a really in-depth explanation of all of this.

Proper Implementation of Texture Atlas

I'm currently working alongside a piece of software that generates game maps by taking several images and then tiling them into a game map. Right now I'm working with OpenGL to draw these maps. As you know, switching states in OpenGL and making multiple draw calls is costly. I've decided to implement a texture atlas system, which would allow me to draw the entire map in a single draw call with no state switching. However, I'm having a problem with implementing the texture atlas. Firstly, would it be better to store each TILE in the texture atlas, or the images themselves? Secondly, not all of the images are guaranteed to be square, or even powers of two. Do I pad them to the nearest power of two, a square, or both? Another thing that concerns me is that the images can get quite large, and I'm worried about exceeding the OpenGL size limitation for textures, which would force me to split the map up, ruining the entire concept.
Here's what I have so far, conceptually:
- Generate texture
- Bind texture
- Generate image large enough to hold textures (take padding into account?)
- Sort textures?
- Upload subtexture to blank texture, store offsets
- Unbind texture
This is not so much a direct answer, but I can't really answer directly since you are asking many questions at once. I'll simply try to give you as much info as I can on the related subjects.
The following is a list of considerations for you, allowing you to rethink exactly what your priorities are and how you wish to execute them.
First of all, in my experience (!!), using texture arrays is much easier than using a texture atlas, and the performance is about equal. Texture arrays do exactly what you think they would do: you sample them in shaders through a single sampler plus a layer index (in GLSL, a sampler2DArray sampled as texture(myTexArray, vec3(uv, layer))) instead of binding a different texture per draw. The big drawback is that every texture in the array must have the same size; the advantages are easy indexing of subtextures and binding in one draw call. See the sketch just below.
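A minimal creation sketch, with tileW, tileH, layers and tilePixels[] as hypothetical placeholders:

    GLuint texArray;
    glGenTextures(1, &texArray);
    glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
    /* allocate every layer at once, then fill them one by one */
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
                 tileW, tileH, layers, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    for (int i = 0; i < layers; ++i)
        glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i,
                        tileW, tileH, 1, GL_RGBA, GL_UNSIGNED_BYTE, tilePixels[i]);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    /* in GLSL: uniform sampler2DArray tiles; ... texture(tiles, vec3(uv, layer)); */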
Second of all, always use powers of 2. I don't know if some recent systems allow non-power-of-two textures entirely without problems, but (again, in my experience) it is best to use powers of 2 everywhere. One of the problems I had with a 500*500 texture was black lines when drawing textured quads; these black lines were exactly the size needed to pad to the nearest power of two (12 pixels on x and y). So OpenGL can create this problem for you even on recent hardware.
Third of all, concerning size: all of your concerns seem to involve images and textures. You might want to look at texture buffers; they allow large amounts of data to be streamed to your GPU and are easier to update than textures (which enables things like LOD map systems). This is mostly useful if you use textures for the data represented in their colors, rather than for the colors directly.
Finally, you might want to look at "texture splatting"; this is a way to increase detail without increasing data. I don't know exactly what you are making, so I don't know if you can use it, but it's easy and it's used in the game industry a lot. You create a set of textures (rock, sand, grass, etc.) that you use everywhere, plus one big texture keeping track of which smaller texture is applied where.
I hope at least one of the things I wrote here will help you out,
Good luck!
PS: OpenGL texture size limitations depend on the user's graphics card, so be careful with sizes greater than 2048*2048; even if your computer runs fine, others might have serious issues. Safe values are anything up to 1024*1024. You can also just ask the driver, as shown below.
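A minimal query sketch:

    GLint maxTex = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTex);   /* e.g. 2048 on older cards, 16384 on recent ones */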
PPS: excuse any grammar mistakes; ask for clarification if needed. Also, this is my first answer ever, so excuse my lack of protocol.

Difference between SphericalMapping and CubeMapping for environmental reflection in OpenGL?

I'm working with an environmental reflection in OpenGL+GLSL.
I want to reflect the environment around an object in the most accurate way possible.
I found basically two ways to do this: one is called spherical mapping and the other cube mapping.
They differ in the shader code, but I really don't understand what the difference between them is.
Obviously, for the cube-mapping shader I have 6 images printed on a cube that the fragment shader needs in order to look up the right pixel, while for my spherical-mapping shader I have a single image, either distorted with photo-retouching software or obtained by photographing a specular reflective sphere.
The drawbacks of spherical mapping seem to be that the camera (and the person holding it) always shows up in the image, and that the sampling is non-uniform. What is meant by this latter statement? And what is meant by the "black-hole" effect in spherical mapping?
I would like to find an interactive demonstration of the differences and drawbacks of these two approaches; it seems like cube mapping is the better one, but I don't know why.
Which of the two is best, in your opinion, especially for a realtime simulation with head tracking?
Spheremaps are usually for small, low quality stuff.
The drawbacks of spherical mapping seems to be that the camera (and the person which holds it) is always showed in the image
We're talking about computer graphics here; there is no real camera and no real person. Try image-searching "spheremap": you won't see anybody in the pictures.
the sampling is non-uniform
This means that the center of the spheremap has many pixels for a relatively small area, while near the border, you have few pixels for a relatively large area.
Cubemaps are almost always better: you can generate them at runtime easily, they are faster for the hardware to sample, and even though you have 6 textures instead of 1, you can use a lower resolution and still get the same quality.
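For reference, a minimal cubemap creation sketch; faceSize and facePixels[] are hypothetical placeholders for six square images:

    GLuint cube;
    glGenTextures(1, &cube);
    glBindTexture(GL_TEXTURE_CUBE_MAP, cube);
    for (int i = 0; i < 6; ++i)   /* order: +X, -X, +Y, -Y, +Z, -Z */
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA8,
                     faceSize, faceSize, 0, GL_RGBA, GL_UNSIGNED_BYTE, facePixels[i]);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    /* in GLSL the map is sampled with a direction, e.g.:
     *   vec4 c = texture(envMap, reflect(viewDir, normal)); */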

What is state-of-the-art for text rendering in OpenGL as of version 4.1? [closed]

There are already a number of questions about text rendering in OpenGL, such as:
How to do OpenGL live text-rendering for a GUI?
But mostly what is discussed is rendering textured quads using the fixed-function pipeline. Surely shaders offer a better way.
I'm not really concerned about internationalization, most of my strings will be plot tick labels (date and time or purely numeric). But the plots will be re-rendered at the screen refresh rate and there could be quite a bit of text (not more than a few thousand glyphs on-screen, but enough that hardware accelerated layout would be nice).
What is the recommended approach for text-rendering using modern OpenGL? (Citing existing software using the approach is good evidence that it works well)
- Geometry shaders that accept e.g. position and orientation and a character sequence and emit textured quads
- Geometry shaders that render vector fonts
- As above, but using tessellation shaders instead
- A compute shader to do font rasterization
Rendering outlines, unless you render only a dozen characters total, remains a "no go" due to the number of vertices needed per character to approximate curvature. Though there have been approaches to evaluate bezier curves in the pixel shader instead, these suffer from not being easily antialiased, which is trivial using a distance-map-textured quad, and evaluating curves in the shader is still computationally much more expensive than necessary.
The best trade-off between "fast" and "quality" are still textured quads with a signed distance field texture. It is very slightly slower than using a plain normal textured quad, but not so much. The quality on the other hand, is in an entirely different ballpark. The results are truly stunning, it is as fast as you can get, and effects such as glow are trivially easy to add, too. Also, the technique can be downgraded nicely to older hardware, if needed.
See the famous Valve paper for the technique.
The technique is conceptually similar to how implicit surfaces (metaballs and such) work, though it does not generate polygons. It runs entirely in the pixel shader and takes the distance sampled from the texture as a distance function. Everything above a chosen threshold (usually 0.5) is "in", everything else is "out". In the simplest case, on 10 year old non-shader-capable hardware, setting the alpha test threshold to 0.5 will do that exact thing (though without special effects and antialiasing).
If one wants to add a little more weight to the font (faux bold), a slightly smaller threshold will do the trick without modifying a single line of code (just change your "font_weight" uniform). For a glow effect, one simply considers everything above one threshold as "in" and everything above another (smaller) threshold as "out, but in glow", and LERPs between the two. Antialiasing works similarly.
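A minimal sketch of such a fragment shader, written here as a GLSL 330 source string in C; the uniform names (sdf, textColor, font_weight) are illustrative, not from the Valve paper:

    const char *sdfFragSrc =
        "#version 330 core\n"
        "uniform sampler2D sdf;           // single-channel distance field\n"
        "uniform vec4 textColor;\n"
        "uniform float font_weight;       // 0.5 = normal, smaller = bolder\n"
        "in vec2 uv;\n"
        "out vec4 fragColor;\n"
        "void main() {\n"
        "    float d = texture(sdf, uv).r;\n"
        "    float w = fwidth(d);         // ~one screen pixel, for antialiasing\n"
        "    float alpha = smoothstep(font_weight - w, font_weight + w, d);\n"
        "    fragColor = vec4(textColor.rgb, textColor.a * alpha);\n"
        "}\n";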
By using an 8-bit signed distance value rather than a single bit, this technique increases the effective resolution of your texture map 16-fold in each dimension (instead of black and white, all possible shades are used, thus we have 256 times the information using the same storage). But even if you magnify far beyond 16x, the result still looks quite acceptable. Long straight lines will eventually become a bit wiggly, but there will be no typical "blocky" sampling artefacts.
You can use a geometry shader for generating the quads out of points (reduce bus bandwidth), but honestly the gains are rather marginal. The same is true for instanced character rendering as described in GPG8. The overhead of instancing is only amortized if you have a lot of text to draw. The gains are, in my opinion, in no relation to the added complexity and non-downgradeability. Plus, you are either limited by the amount of constant registers, or you have to read from a texture buffer object, which is non-optimal for cache coherence (and the intent was to optimize to begin with!).
A simple, plain old vertex buffer is just as fast (possibly faster) if you schedule the upload a bit ahead in time and will run on every hardware built during the last 15 years. And, it is not limited to any particular number of characters in your font, nor to a particular number of characters to render.
If you are sure that you do not have more than 256 characters in your font, texture arrays may be worth a consideration to strip off bus bandwidth in a similar manner as generating quads from points in the geometry shader. When using an array texture, the texture coordinates of all quads have identical, constant s and t coordinates and only differ in the r coordinate, which is equal to the character index to render.
But like with the other techniques, the expected gains are marginal at the cost of being incompatible with previous generation hardware.
There is a handy tool by Jonathan Dummer for generating distance textures: description page
Update:
As more recently pointed out in Programmable Vertex Pulling (D. Rákos, "OpenGL Insights", pp. 239), there is no significant extra latency or overhead associated with pulling vertex data programmatically from the shader on the newest generations of GPUs, as compared to doing the same using the standard fixed function.
Also, the latest generations of GPUs have more and more reasonably sized general-purpose L2 caches (e.g. 1536kiB on nvidia Kepler), so one may expect the incoherent access problem when pulling random offsets for the quad corners from a buffer texture being less of a problem.
This makes the idea of pulling constant data (such as quad sizes) from a buffer texture more attractive. A hypothetical implementation could thus reduce PCIe and memory transfers, as well as GPU memory, to a minimum with an approach like this:
Only upload a character index (one per character to be displayed) as the only input to a vertex shader that passes on this index and gl_VertexID, and amplify that to 4 points in the geometry shader, still having the character index and the vertex id (this will be "gl_primitiveID made available in the vertex shader") as the sole attributes, and capture this via transform feedback.
This will be fast, because there are only two output attributes (main bottleneck in GS), and it is close to "no-op" otherwise in both stages.
Bind a buffer texture which contains, for each character in the font, the textured quad's vertex positions relative to the base point (these are basically the "font metrics"). This data can be compressed to 4 numbers per quad by storing only the offset of the bottom left vertex, and encoding the width and height of the axis-aligned box (assuming half floats, this will be 8 bytes of constant buffer per character -- a typical 256 character font could fit completely into 2kiB of L1 cache).
Set a uniform for the baseline.
Bind a buffer texture with horizontal offsets. These could probably even be calculated on the GPU, but it is much easier and more efficient to do that kind of thing on the CPU, as it is a strictly sequential operation and not at all trivial (think of kerning). Also, it would need another feedback pass, which would be another sync point.
Render the previously generated data from the feedback buffer, the vertex shader pulls the horizontal offset of the base point and the offsets of the corner vertices from buffer objects (using the primitive id and the character index). The original vertex ID of the submitted vertices is now our "primitive ID" (remember the GS turned the vertices into quads).
Like this, one could ideally reduce the required vertex bandwidth by 75% (amortized), though it would only be able to render a single line. If one wanted to render several lines in one draw call, one would need to add the baseline to the buffer texture, rather than using a uniform (making the bandwidth gains smaller).
However, even assuming a 75% reduction -- since the vertex data to display "reasonable" amounts of text is only somewhere around 50-100kiB (which is practically zero to a GPU or a PCIe bus) -- I still doubt that the added complexity and losing backwards-compatibility is really worth the trouble. Reducing zero by 75% is still only zero. I have admittedly not tried the above approach, and more research would be needed to make a truly qualified statement. But still, unless someone can demonstrate a truly stunning performance difference (using "normal" amounts of text, not billions of characters!), my point of view remains that for the vertex data, a simple, plain old vertex buffer is justifiably good enough to be considered part of a "state of the art solution". It's simple and straightforward, it works, and it works well.
Having already referenced "OpenGL Insights" above, it is worth also pointing out the chapter "2D Shape Rendering by Distance Fields" by Stefan Gustavson, which explains distance field rendering in great detail.
Update 2016:
Meanwhile, there exist several additional techniques which aim to remove the corner rounding artefacts which become disturbing at extreme magnifications.
One approach simply uses pseudo-distance fields instead of distance fields (the difference being that the distance is the shortest distance not to the actual outline, but to the outline or an imaginary line protruding over the edge). This is somewhat better, and runs at the same speed (identical shader), using the same amount of texture memory.
Another approach uses the median of three channels in a three-channel texture; details and an implementation are available on GitHub. This aims to be an improvement over the and-or hacks used previously to address the issue. Quality is good; it is slightly (almost imperceptibly) slower, but uses three times as much texture memory. Also, extra effects (e.g. glow) are harder to get right.
Lastly, storing the actual bezier curves making up characters, and evaluating them in a fragment shader has become practical, with slightly inferior performance (but not so much that it's a problem) and stunning results even at highest magnifications.
WebGL demo rendering a large PDF with this technique in real time available here.
http://code.google.com/p/glyphy/
The main difference between GLyphy and other SDF-based OpenGL renderers is that most other projects sample the SDF into a texture. This has all the usual problems that sampling has; i.e., it distorts the outline and is low quality. GLyphy instead represents the SDF using actual vectors submitted to the GPU. This results in very high quality rendering.
The downside is that the code is for iOS with OpenGL ES. I'm probably going to make a Windows/Linux OpenGL 4.x port (hopefully the author will add some real documentation, though).
The most widespread technique is still textured quads. However, in 2005 LORIA developed something called vector textures, i.e. rendering vector graphics as textures on primitives. If one uses this to convert TrueType or OpenType fonts into a vector texture, you get this:
http://alice.loria.fr/index.php/publications.html?Paper=VTM#2005
I'm surprised Mark Kilgard's baby, NV_path_rendering (NVpr), was not mentioned by any of the above. Although its goals are more general than font rendering, it can also render text from fonts and with kerning. It doesn't even require OpenGL 4.1, but it is a vendor/Nvidia-only extension at the moment. It basically turns fonts into paths using glPathGlyphsNV which depends on the freetype2 library to get the metrics, etc. Then you can also access the kerning info with glGetPathSpacingNV and use NVpr's general path rendering mechanism to display text from using the path-"converted" fonts. (I put that in quotes, because there's no real conversion, the curves are used as is.)
The recorded demo for NVpr's font capabilities is unfortunately not particularly impressive. (Maybe someone should make one along the lines of the much snazzier SDF demo one can find on the intertubes...)
The 2011 NVpr API presentation talk for the fonts part starts here and continues in the next part; it is a bit unfortunate how that presentation is split.
More general materials on NVpr:
Nvidia NVpr hub, but some material on the landing page is not the most up-to-date
Siggraph 2012 paper for the brains of the path-rendering method, called "stencil, then cover" (StC); the paper also explains briefly how competing tech like Direct2D works. The font-related bits have been relegated to an annex of the paper. There are also some extras like videos/demos.
GTC 2014 presentation for an update status; in a nutshell: it's now supported by Google's Skia (Nvidia contributed the code in late 2013 and 2014), which in turn is used in Google Chrome and [independently of Skia, I think] in a beta of Adobe Illustrator CC 2014
the official documentation in the OpenGL extension registry
USPTO has granted at least four patents to Kilgard/Nvidia in connection with NVpr, of which you should probably be aware of, in case you want to implement StC by yourself: US8698837, US8698808, US8704830 and US8730253. Note that there are something like 17 more USPTO documents connected to this as "also published as", most of which are patent applications, so it's entirely possible more patents may be granted from those.
And since the word "stencil" did not produce any hits on this page before my answer, it appears that the subset of the SO community that has participated on this page so far, despite being pretty numerous, was unaware of tessellation-free, stencil-buffer-based methods for path/font rendering in general. Kilgard has a FAQ-like post on the OpenGL forum which may illuminate how the tessellation-free path rendering methods differ from bog-standard 3D graphics, even though they're still using a [GP]GPU. (NVpr needs a CUDA-capable chip.)
For historical perspective, Kilgard is also the author of the classic "A Simple OpenGL-based API for Texture Mapped Text", SGI, 1997, which should not be confused with the stencil-based NVpr that debuted in 2011.
Most if not all the recent methods discussed on this page, including stencil-based methods like NVpr or SDF-based methods like GLyphy (which I'm not discussing here any further because other answers already cover it) have however one limitation: they are suitable for large text display on conventional (~100 DPI) monitors without jaggies at any level of scaling, and they also look nice, even at small size, on high-DPI, retina-like displays. They don't fully provide what Microsoft's Direct2D+DirectWrite gives you however, namely hinting of small glyphs on mainstream displays. (For a visual survey of hinting in general see this typotheque page for instance. A more in-depth resource is on antigrain.com.)
I'm not aware of any open, productized OpenGL-based solution that can currently match what Microsoft does with hinting. (I admit ignorance of Apple's OS X GL/Quartz internals, because to the best of my knowledge Apple hasn't published how they do GL-based font/path rendering. It seems that OS X, unlike MacOS 9, doesn't do hinting at all, which annoys some people.) Anyway, there is one 2013 research paper by INRIA's Nicolas P. Rougier that addresses hinting via OpenGL shaders; it is probably worth reading if you need to do hinting from OpenGL. While it may seem that a library like freetype already does all the work when it comes to hinting, that's not actually so, for the following reason, which I'm quoting from the paper:
The FreeType library can rasterize a glyph using sub-pixel anti-aliasing in RGB mode. However, this is only half of the problem, since we also want to achieve sub-pixel positioning for accurate placement of the glyphs. Displaying the textured quad at fractional pixel coordinates does not solve the problem, since it only results in texture interpolation at the whole-pixel level. Instead, we want to achieve a precise shift (between 0 and 1) in the subpixel domain. This can be done in a fragment shader [...].
The solution is not exactly trivial, so I'm not going to try to explain it here. (The paper is open-access.)
One other thing I've learned from Rougier's paper (and which Kilgard doesn't seem to have considered) is that the font powers that be (Microsoft+Adobe) have created not one but two kerning specification methods. The old one is based on a so-called kern table and it is supported by freetype. The new one is called GPOS and it is only supported by newer font libraries like HarfBuzz or pango in the free software world. Since NVpr doesn't seem to support either of those libraries, kerning might not work out of the box with NVpr for some new fonts; there are some of those apparently in the wild, according to this forum discussion.
Finally, if you need to do complex text layout (CTL) you seem to be currently out of luck with OpenGL as no OpenGL-based library appears to exist for that. (DirectWrite on the other hand can handle CTL.) There are open-sourced libraries like HarfBuzz which can render CTL, but I don't know how you'd get them to work well (as in using the stencil-based methods) via OpenGL. You'd probably have to write the glue code to extract the re-shaped outlines and feed them into NVpr or SDF-based solutions as paths.
I think your best bet would be to look into cairo graphics with OpenGL backend.
The only problem I had when developing a prototype with 3.3 core was deprecated function usage in the OpenGL backend. That was 1-2 years ago, so the situation might have improved...
Anyway, I hope that in the future desktop OpenGL graphics drivers will implement OpenVG.

OpenGL equivalent of GDI's HatchBrush or PatternBrush?

I have a VB6 application (please don't laugh) which does a lot of drawing via BitBlt and the standard VB6 drawing functions. I am running up against performance issues (yes, I do the regular tricks like drawing to memory). So, I decided to investigate other ways of drawing, and have come upon OpenGL.
I've been doing some experimenting, and it seems straightforward to do most of what I want; the application mostly only uses very simple drawing -- relatively large 2D rectangles of solid colors and such -- but I haven't been able to find an equivalent to something like a HatchBrush or PatternBrush.
More specifically, I want to be able to specify a small monochrome pixel pattern, choose a color, and whenever I draw a polygon (or whatever), instead of it being solid, have it automatically tiled with that pattern, not translated or rotated or skewed or stretched, with the "on" bits of the pattern showing up in the specified color, and the "off" bits of the pattern left displaying whatever had been drawn under the area that I am now drawing on.
Obviously I could do all the calculations myself. That is, instead of drawing as a polygon which will somehow automatically be tiled for me, I could calculate all of the lines or pixels or whatever that actually need to be drawn, then draw them as lines or pixels or whatever. But is there an easier way? Like in GDI, where you just say "draw this polygon using this brush"?
I am guessing that "textures" might be able to accomplish what I want, but it's not clear to me (I'm totally new to this, and the documentation I've found is not entirely obvious); it seems like textures might skew or translate or stretch the pattern based on the vertices of the polygon, whereas I want the pattern tiled.
Is there a way to do this, or something like it, other than brute force calculation of exactly the pixels/lines/whatever that need to be drawn?
Thanks in advance for any help.
If I understood correctly, you're looking for glPolygonStipple() or glLineStipple().
PolygonStipple is very limited as it allows only 32x32 pattern but it should work like PatternBrush. I have no idea how to implement it in VB though.
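For illustration, a legacy-GL sketch (deprecated, and removed from core profiles in 3.1); the checkerboard pattern is just an example:

    /* 32x32 one-bit pattern, 4 bytes per row: a 50% halftone checkerboard. */
    GLubyte halftone[128];
    for (int i = 0; i < 128; ++i)
        halftone[i] = (i / 4) % 2 ? 0x55 : 0xAA;   /* rows alternate 01010101 / 10101010 */

    glEnable(GL_POLYGON_STIPPLE);
    glPolygonStipple(halftone);
    /* ... draw polygons: "on" bits draw in the current color, "off" bits
     * leave whatever is underneath untouched ... */
    glDisable(GL_POLYGON_STIPPLE);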
First of all, are you sure it's the drawing operation itself that is the bottleneck here? Visual Basic is known for being very slow (especially if your program is compiled to intermediate VM code, which IIRC is the default; be sure to check the option to compile to native code!), and if it is your code that is the bottleneck, then OpenGL won't help you much; you'll need to rewrite your code in some other language, probably C or C++, though any .NET language should also do.
OpenGL contains functions that allow you to draw stippled lines and polygons, but you shouldn't use them. They have been deprecated for a long time, and were removed from OpenGL in version 3.1 of the spec. And that's for a reason: these functions don't map well to the modern rendering paradigm and are not supported by modern graphics hardware, meaning you will most likely get a slow software fallback if you use them.
The way to go is to use a small texture as a mask, and tile it over the drawn polygons. The texture will get stretched or compressed to match the texture coordinates you specify with the vertices. You have to set the wrapping mode to GL_REPEAT for both texture coordinates, and calculate the right coordinates for each vertex so that the texture appears at its original size, repeated the right amount of times.
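A minimal immediate-mode sketch of that idea, assuming an orthographic projection set up in pixel units; patternTex, patW/patH (the pattern's size in pixels) and the sx[]/sy[] vertex arrays are hypothetical:

    glBindTexture(GL_TEXTURE_2D, patternTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_POLYGON);
    for (int i = 0; i < n; ++i) {
        /* one texture repeat per patW x patH pixels, so the pattern keeps its
         * native size and alignment no matter what shape the polygon has */
        glTexCoord2f(sx[i] / (float)patW, sy[i] / (float)patH);
        glVertex2f(sx[i], sy[i]);
    }
    glEnd();

Because the texture coordinates are derived from window coordinates rather than from the polygon's own geometry, the pattern is never stretched, skewed or rotated.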
You could also use the stencil buffer as you described, but... how would you fill that buffer with the pattern, and do it fast? You would need a texture anyway. Remember that you need to clear the stencil buffer every frame, before you start drawing. Not doing so could cost you a massive performance hit (the exact value of "massive" depending on the graphics hardware and driver version).
It's also possible to achieve the desired effect using a fragment shader, but learning shaders just for that would be overkill for an OpenGL beginner like yourself :-).
Ah, I think I've found it! I can make a stencil across the entire viewport in the shape of the pattern I want (or its mask, I guess), and then enable that stencil when I want to draw with that pattern.
You could just use a texture. Put the pattern in as in image and turn on texture repeating and you are good to go.
Figured this out a year or two ago.