Drawing text to a 2D texture in DirectX 11 (C++)

I'd like to be able to apply a transparent texture to each face of a simple 3D cube. Each texture is simply the name of the face, e.g. "Front", "Back", "Left", etc. Normally I'd just create an image in a paint app, save it, and load the image as a texture at runtime. The twist, however, is that I'd like this text to be in multiple (and a growing number of) languages. Although creating six new images for each language is one method, it would be preferable to generate textures dynamically from translated strings instead.
So the ideal solution is to be able to render a text string to a 2D texture at runtime (using any font and colour I choose), then render the resultant texture over each surface as a transparent overlay. Can this be done relatively easily?
Getting the texture onto the cube is easy enough; it's rendering the text (a string) to a texture (ID3D11Texture2D) that's the tricky part. It seems like it should be very doable. I'm very new at this, so please keep any explanations in beginner-friendly language where possible.

I guess the cube textures do not change too frequently, since the user shouldn't switch languages too often. In that case I would use a toolkit like Qt to generate the textures, similar to this:
QImage image(width, height, QImage::Format_ARGB32);
image.fill(Qt::transparent);

QFontMetrics metrics(font);          // needed to place the text baseline
QPainter painter;
painter.begin(&image);
painter.setRenderHints(QPainter::HighQualityAntialiasing
                       | QPainter::TextAntialiasing);
painter.setFont(font);
painter.setPen(Qt::black);
painter.drawText(0, metrics.ascent(), text);  // text is the translated QString
painter.end();
To access the raw texture data:
const uchar* data = image.constBits();
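From there, a minimal sketch of uploading those pixels into an ID3D11Texture2D, assuming you already have an ID3D11Device* device (the variable names are placeholders). QImage::Format_ARGB32 is stored as BGRA bytes on little-endian machines, which matches DXGI_FORMAT_B8G8R8A8_UNORM:

D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = image.width();
desc.Height           = image.height();
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_IMMUTABLE;
desc.BindFlags        = D3D11_BIND_SHADER_RESOURCE;

D3D11_SUBRESOURCE_DATA initData = {};
initData.pSysMem     = image.constBits();
initData.SysMemPitch = image.bytesPerLine();  // bytes per row, including padding

ID3D11Texture2D* texture = nullptr;
HRESULT hr = device->CreateTexture2D(&desc, &initData, &texture);
// Create a shader resource view from `texture` and sample it on the cube face.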
A different solution would be to use signed distance field text rendering. With that approach you render the text to a texture first (render-to-texture) and then apply it to the cube.

Related

What's the best way to render a lot of text with OpenGL

I'm working on a piece of software that needs to print a lot of text on the screen, around 200 to 400 strings, plus a lot of unique characters (some objects are represented by a single character). The software already draws a lot of things using OpenGL.
I have already experimented with text rendering and I'm able to render text, but drawing more than 200 strings with 200 draw calls leads to a performance problem.
The software draws only in 2D.
It's important to note that the software runs on 32-bit computers with old graphics cards, so I can't use a version of OpenGL newer than 2.0.
What would be the best option, in your opinion?
Render everything in one draw call per font, using one big buffer with all the information.
Render each string to its own texture and then make one draw call per string. (The text doesn't change much: once per second at most.)
Any other ideas?
While I don't think my way of rendering text is anywhere close to standard, I prerendered the font into one giant glyph map and stored the metadata (the locations of the glyphs) in an SSBO holding an array of per-glyph data (everything except the bitmap and the advance). When I need to render a batch of text, I convert the character codes to an array of glyph IDs. Then, passing only a vec2 position and a glyph ID per character, a geometry shader turns each point into a rectangle by reading the SSBO and passes the texture coordinates along to the fragment shader.
This method is so fast that it can easily render tens of thousands of strings. It also allows fast switching between font sets, and it can support any Unicode character as long as the font has the glyph.
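To make the point-to-quad expansion concrete, here is a minimal geometry-shader sketch of that idea (the Glyph layout, binding point, and variable names are assumptions, not the poster's actual code; note that SSBOs require OpenGL 4.3):

// The vertex shader is assumed to forward a screen-space pen position in
// gl_Position and the glyph index in vGlyphId.
const char* kGeometrySrc = R"(
#version 430 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

struct Glyph {
    vec2 uvMin;    // top-left of the glyph in the atlas
    vec2 uvMax;    // bottom-right of the glyph in the atlas
    vec2 size;     // quad size in screen units
    vec2 bearing;  // offset from the pen position
};
layout(std430, binding = 0) readonly buffer GlyphData { Glyph glyphs[]; };

in int vGlyphId[];
out vec2 fUv;
uniform mat4 uProjection;

void main() {
    Glyph g = glyphs[vGlyphId[0]];
    vec2 o = gl_in[0].gl_Position.xy + g.bearing;
    // Emit the four corners of the glyph quad as one triangle strip.
    fUv = vec2(g.uvMin.x, g.uvMax.y);                                    // bottom-left
    gl_Position = uProjection * vec4(o, 0.0, 1.0); EmitVertex();
    fUv = g.uvMax;                                                       // bottom-right
    gl_Position = uProjection * vec4(o + vec2(g.size.x, 0.0), 0.0, 1.0); EmitVertex();
    fUv = g.uvMin;                                                       // top-left
    gl_Position = uProjection * vec4(o + vec2(0.0, g.size.y), 0.0, 1.0); EmitVertex();
    fUv = vec2(g.uvMax.x, g.uvMin.y);                                    // top-right
    gl_Position = uProjection * vec4(o + g.size, 0.0, 1.0); EmitVertex();
    EndPrimitive();
}
)";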
From what I've learned, the standard approach is to build a font atlas texture. You rasterize the glyphs into it and save the UV coordinates of each rasterized glyph in a lookup table. Then batch the vertices and draw all of the text in one draw call, using a single texture (your atlas) and one buffer for all vertex positions; see the sketch below.
All fonts and character sizes can be written to the same atlas, provided you make the texture large enough. You could also use a second atlas texture for the unique characters.
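As a rough sketch of that batching, kept fixed-function so it stays within OpenGL 2.0 (the Glyph struct and lookupGlyph() stand in for whatever your atlas-building step produces, and text is assumed to be a std::string):

#include <string>
#include <vector>

struct Vertex { float x, y, u, v; };

std::vector<Vertex> batch;
float penX = 0.0f, penY = 0.0f;
for (unsigned char c : text) {
    const Glyph& g = lookupGlyph(c);   // UV rect + metrics from the atlas
    float x = penX + g.bearingX;
    float y = penY - g.bearingY;
    // Two triangles per glyph.
    const Vertex quad[6] = {
        { x,       y,       g.u0, g.v0 },
        { x + g.w, y,       g.u1, g.v0 },
        { x + g.w, y + g.h, g.u1, g.v1 },
        { x,       y,       g.u0, g.v0 },
        { x + g.w, y + g.h, g.u1, g.v1 },
        { x,       y + g.h, g.u0, g.v1 },
    };
    batch.insert(batch.end(), quad, quad + 6);
    penX += g.advance;
}

// One texture bind and one draw call for the whole batch.
glBindTexture(GL_TEXTURE_2D, atlasTexture);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(2, GL_FLOAT, sizeof(Vertex), &batch[0].x);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &batch[0].u);
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)batch.size());
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);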
Since OpenGL does not have any built-in support for rendering fonts on screen, it is left to the programmer to define a system.
FreeType is a good library for rendering fonts.
https://www.freetype.org/
You can render a font texture onto quads, or build a font atlas and then render that onto quads.
Generally, 200 strings should not be challenging, but you can look into techniques like signed distance fields.
This is a good tutorial on rendering fonts in OpenGL:
https://learnopengl.com/In-Practice/Text-Rendering

Encode image and change color in SDL2 and C++

Is there a method or function that applies colour changes to a texture as it is being loaded?
How Sprites Work on the NES
You need to do it yourself; SDL wasn't designed to work with the NES texture format.
You'll need to load your texture array and create a new surface of the right size. After that, you can fill the pixels with the colours from your colour palette. You could do that with a custom SDL_Palette, but that isn't good practice.
An SDL_Palette should never need to be created manually. It is created automatically when SDL allocates an SDL_PixelFormat for a surface. The color values of an SDL_Surface's palette can be set with SDL_SetPaletteColors().
SDL_Palette Wiki Page
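Putting that together, a minimal sketch, assuming 8-bit indexed tile data already decoded into `pixels`, a 4-entry palette, and an existing SDL_Renderer* renderer (the palette values are placeholders):

#include <SDL.h>
#include <string.h>

// An INDEX8 surface gets an SDL_Palette allocated automatically;
// we only have to fill in its colors.
SDL_Surface* surface = SDL_CreateRGBSurfaceWithFormat(
    0, width, height, 8, SDL_PIXELFORMAT_INDEX8);

SDL_Color colors[4] = {
    {   0,   0,   0, 255 },
    {  85,  85,  85, 255 },
    { 170, 170, 170, 255 },
    { 255, 255, 255, 255 },
};
SDL_SetPaletteColors(surface->format->palette, colors, 0, 4);

// Copy one palette index per pixel, row by row (the surface pitch may
// be wider than the image).
for (int y = 0; y < height; ++y)
    memcpy((Uint8*)surface->pixels + y * surface->pitch,
           pixels + y * width, width);

SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface);
SDL_FreeSurface(surface);

To recolour the sprite later, call SDL_SetPaletteColors() again with a different set of colors before creating the texture.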

OpenGL - Set frame buffer texture to fully transparent

I'm trying to render some text, but at the moment I'm rendering each glyph separately, which is slow and inefficient.
Therefore I want to change the system, so the text is just rendered into a separate texture once, whenever it changes, and then that texture should be rendered onscreen in the main render pass.
So far so good, the problem is, to draw it over the main scene, I only have two options. I could specify a specific color (e.g. green) as 'transparent', clear the frame buffer texture of the text with that color, draw the text and use a shader afterwards to render the result onto the main scene, minus the transparent color.
While that would work, I wouldn't be able to use that color for the actual text anymore.
Instead I'd much rather clear the alpha of the frame buffer texture entirely (to get a colorless, blank slate essentially) and then draw the text, but that doesn't seem to be possible?
glColorMask(GL_FALSE,GL_FALSE,GL_FALSE,GL_TRUE);
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);
Doing this just applies the specified RGB values, with the alpha acting as the 'intensity' of those colors. In this case it wouldn't do anything at all, because the color components are disabled. But I need to change the existing alpha of the texture in the frame buffer, without using glDrawPixels (which is too slow).
Now, I could of course write an additional shader to set the alpha value of each fragment to 0, but that doesn't seem as efficient or fast.
What's the best way to handle something like this?
You're overcomplicating the whole thing. If you render your text/glyphs into a texture that has just a single channel, used as an alpha channel, that gives you the glyphs' shape. The color is supplied as a vertex attribute and combined with the alpha from the texture when rendering.
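A minimal sketch of that single-channel setup in legacy OpenGL (glyphTexture, coverage, and the shader variable names are placeholders):

// Upload the glyph coverage as a one-byte-per-pixel alpha texture.
glBindTexture(GL_TEXTURE_2D, glyphTexture);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, coverage);

// Standard alpha blending when drawing the text quads.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// Fragment shader: the text color comes in from the vertex stage and is
// combined with the glyph coverage from the texture.
const char* fragmentSrc =
    "uniform sampler2D glyphs;\n"
    "varying vec2 uv;\n"
    "varying vec4 color;\n"
    "void main() {\n"
    "    gl_FragColor = vec4(color.rgb, color.a * texture2D(glyphs, uv).a);\n"
    "}\n";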
If you want to get fancy, instead of rendering the bare glyphs to the texture you might produce a signed distance field map instead, to save on texture size while retaining high-quality text output.

How to draw text on 3D objects (such as a sphere)

I'm learning OpenGL on Linux. Recently, I tried to use text created by glutBitmapCharacter() as the texture for some quadric objects provided by glu or glut. However, glutBitmapCharacter() does not return a pointer, so I can't feed it to glTexImage2D(). I've googled for quite a while, but all I found were topics related to the Android SDK, which I have no experience with.
All I can think of is to render the text, read it from the buffer using glReadPixels(), and save it to a file. Next, read the pixels back from the file and get a pointer to them. Finally, draw the 3D objects with the text as their texture (i.e. feed the pointer to glTexImage2D()).
However, that's kind of silly. What I want to ask is: is there an alternative way to do this?
Applying text on top of a 3D surface is not trivial with pure OpenGL, and GLUT does not provide any tools for that. One option is to implement your own text rendering, for example by loading glyphs with Freetype, creating a texture from the glyphs, and applying that texture to the polygons; see the sketch below. Freetype-GL is a tiny helper library that makes this a lot easier.
Another option is to again load the text glyphs into a texture and then apply them as decals over the geometry. That way you can still simulate 2D text drawn on a flat surface (the decal) and then apply that on top of the 3D object.
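For the FreeType route, a minimal sketch of rasterizing a single character and uploading it as an alpha texture (the font path and size are placeholders, and error checking is omitted):

#include <ft2build.h>
#include FT_FREETYPE_H
#include <GL/gl.h>

FT_Library ft;
FT_Init_FreeType(&ft);

FT_Face face;
FT_New_Face(ft, "/usr/share/fonts/truetype/dejavu/DejaVuSans.ttf", 0, &face);
FT_Set_Pixel_Sizes(face, 0, 48);           // 48 px glyph height

FT_Load_Char(face, 'A', FT_LOAD_RENDER);   // rasterize into face->glyph->bitmap
FT_Bitmap& bmp = face->glyph->bitmap;      // 8-bit coverage, one byte per pixel

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);     // glyph rows are not 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, bmp.width, bmp.rows, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, bmp.buffer);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

For a whole string you would pack all the glyph bitmaps into one atlas texture instead of creating a texture per character.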

What exactly is a buffer in OpenGL, and how can I use multiple ones to my advantage?

Not long ago, I tried out a program from an OpenGL guidebook that was said to be double buffered; it displayed a spinning rectangle on the screen. Unfortunately, I don't have the book anymore, and I haven't found a clear, straightforward definition of what a buffer is in general. My guess is that it is a "place" to draw things, and that using several of them could work like layering?
If that is the case, I am wondering if I can use multiple buffers to my advantage for a polygon clipping program. I have a nice little window that allows the user to draw polygons on the screen, plus a utility to drag and draw a selection box over the polygons. When the user has drawn the selection rectangle and lets go of the mouse, the polygons will be clipped based on the rectangle boundaries.
That is doable enough, but I also want the user to be able to start over: when the escape key is pressed, the clip box should disappear, and the original polygons should be restored. Since I am doing things pixel-by-pixel, it seems very difficult to figure out how to change the rectangle pixel colors back to either black like the background or the color of a particular polygon, depending on where they were drawn (unless I find a way to save the colors when each polygon pixel is drawn, but that seems overboard). I was wondering if it would help to give the rectangle its own buffer, in the hopes that it would act like a sort of transparent layer that could easily be cleared off (?) Is this the way buffers can be used, or do I need to find another solution?
OpenGL knows several kinds of buffers:
Framebuffers: portions of memory that drawing operations are directed to, changing pixel values in the buffer. By default OpenGL has on-screen buffers, which can be split into a front and a back buffer; drawing happens invisibly on the back buffer and is swapped to the front when finished. In addition, OpenGL uses a depth buffer for depth testing (the Z-sort implementation) and a stencil buffer used to limit rendering to selected, cut-out (stencil-like) portions of the framebuffer. There used to be auxiliary and accumulation buffers as well, but those have been superseded by so-called framebuffer objects: user-created objects that combine several textures or renderbuffers into new framebuffers which can be rendered to (see the sketch after this list).
Renderbuffers: user-created render targets, to be attached to framebuffer objects.
Buffer objects (vertex and pixel): user-defined data storage, used for geometry and image data.
Textures: textures are a sort of buffer too, i.e. they hold data which can be sourced in drawing operations.
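As a minimal sketch of creating such a framebuffer object that renders into a texture (OpenGL 3.0+ core, or the EXT_framebuffer_object variants on older contexts; width and height are placeholders):

GLuint fbo, colorTex;

// Allocate the texture that will receive the rendering (no initial data).
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// Create the FBO and attach the texture as its color buffer.
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // handle the error
}
// Everything drawn from here on lands in colorTex, until the default
// framebuffer (id 0) is bound again.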
The usual approach with OpenGL is to re-render the whole scene whenever something changes. If you want to save those drawing operations, you can copy the contents of the framebuffer to a texture, then just draw that texture onto a single quad and overdraw it with your selection rubber-band rectangle.
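For the polygon-clipping program above, that could look roughly like this (drawFullscreenTexturedQuad and drawSelectionRectangle are assumed helpers, and sceneTex is a pre-allocated RGBA texture of the window size):

// After the polygons have been drawn, snapshot the framebuffer.
glBindTexture(GL_TEXTURE_2D, sceneTex);
glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,   // copy into the texture at (0,0)
                    0, 0, width, height);     // from the framebuffer at (0,0)

// Each frame while the user drags the selection box:
glClear(GL_COLOR_BUFFER_BIT);
drawFullscreenTexturedQuad(sceneTex);  // restore the saved scene
drawSelectionRectangle();              // draw the rubber-band box on top

// When the user presses escape, simply redraw the quad without the
// rectangle; the original polygons reappear untouched.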