Encode image and change color in SDL2 and C++

Is there a method or function that applies color changes to a texture as it is being loaded?
How Sprite Works in NES

You need to do it yourself; SDL wasn't designed to work with the NES texture format.
You'll need to load your tile data and create a new surface of the right size. After that, you can fill its pixels with the colours corresponding to your colour palette. You could do it with a custom SDL_Palette, but that isn't good practice.
An SDL_Palette should never need to be created manually. It is automatically created when SDL allocates an SDL_PixelFormat for a surface. The color values of an SDL_Surface's palette can be set with SDL_SetPaletteColors().
SDL_Palette Wiki Page
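As a sketch of the "fill the pixels from your palette" step: NES tiles are 8x8 pixels at 2 bits per pixel, stored as two 8-byte bitplanes (see the sprite-format link above). Assuming that layout, decoding one tile into 32-bit RGBA, ready to copy into an SDL_Surface's pixel buffer, could look like this. `decodeTile` and the 4-entry palette are illustrative, not part of the SDL API:

```cpp
#include <array>
#include <cstdint>

// Decode one 8x8 NES tile (16 bytes: two 8-byte bitplanes) into RGBA pixels.
// Each pixel's 2-bit index selects a colour from a 4-entry palette.
std::array<uint32_t, 64> decodeTile(const uint8_t tile[16],
                                    const std::array<uint32_t, 4>& palette) {
    std::array<uint32_t, 64> out{};
    for (int y = 0; y < 8; ++y) {
        uint8_t lo = tile[y];      // bitplane 0 (low bit of the index)
        uint8_t hi = tile[y + 8];  // bitplane 1 (high bit of the index)
        for (int x = 0; x < 8; ++x) {
            int bit = 7 - x;       // leftmost pixel is the most significant bit
            int index = ((lo >> bit) & 1) | (((hi >> bit) & 1) << 1);
            out[y * 8 + x] = palette[index];
        }
    }
    return out;
}
```

You would then memcpy the decoded pixels into a surface created with SDL_CreateRGBSurfaceWithFormat(0, 8, 8, 32, SDL_PIXELFORMAT_RGBA8888), or into a larger atlas surface.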

Related

Is it possible to have a display framebuffer in OpenGL

I want to display a 2D array of pixels directly on the screen. The pixel data is not static and changes on user-triggered events like a mouse move. I wish to have a display framebuffer through which I could write directly to the screen.
I have tried to create a texture with glTexImage2D(). I then render this texture to a QUAD. And then I update the texture with glTexSubImage2D() whenever a pixel is modified.
It works!
But I guess this is not the most efficient way. glTexSubImage2D copies the whole array, including the unmodified pixels, back to the texture, which is bad performance-wise.
Is there any other way, like having a "display framebuffer" to which I could write only the modified pixels and have the change reflected on the screen?
glBlitFramebuffer is what you want.
Copies a rectangular block of pixels from one frame buffer to another. Can stretch or compress, but doesn't go through shaders and whatnot.
You'll probably also need some combination of glBindFramebuffer, glFramebufferTexture, glReadBuffer to set up the source and destination.
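Alternatively, if you stay with the glTexSubImage2D approach, a common optimization is to track the bounding box of the pixels modified since the last frame and re-upload only that sub-rectangle (glTexSubImage2D accepts an x/y offset and a width/height for exactly this). A minimal sketch of the bookkeeping, with no GL calls, under the assumption that edits arrive pixel by pixel:

```cpp
#include <algorithm>
#include <climits>

// Track the bounding box of modified pixels so that only this
// sub-rectangle needs to be re-uploaded with glTexSubImage2D per frame.
struct DirtyRect {
    int x0 = INT_MAX, y0 = INT_MAX;  // inclusive min corner
    int x1 = INT_MIN, y1 = INT_MIN;  // inclusive max corner

    void mark(int x, int y) {        // grow the box to include (x, y)
        x0 = std::min(x0, x); y0 = std::min(y0, y);
        x1 = std::max(x1, x); y1 = std::max(y1, y);
    }
    bool empty()  const { return x1 < x0; }
    int  width()  const { return x1 - x0 + 1; }
    int  height() const { return y1 - y0 + 1; }
    void reset()        { *this = DirtyRect{}; }
};
```

At upload time you would call something like glTexSubImage2D(GL_TEXTURE_2D, 0, r.x0, r.y0, r.width(), r.height(), ...) and set GL_UNPACK_ROW_LENGTH to the full image width so the driver can read the sub-rectangle out of your full-size buffer, then reset() the rect.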

Drawing text to a 2D texture in DirectX 11?

I'd like to be able to apply a transparent texture to each face of a simple 3D cube. Each texture is simply the name of the face, e.g. "Front", "Back", "Left", etc. Normally I'd just create an image in a paint app, then save it, and load the image as a texture at runtime. However, the twist is that I'd like this text to be in multiple (and growing) different languages and although creating six new images for each language is one method, it would be preferable to dynamically generate textures from translated strings instead.
So the ideal solution is to be able to render a text string to a 2D texture at runtime (using any font and colour I choose), then render the resultant texture over each surface as a transparent overlay. Can this be done relatively easily?
Getting the texture onto the cube is easy enough. It's getting the text string rendered to a texture (ID3D11Texture2D) that's the tricky part. It seems as though it should be very doable. I'm very new at this, so as far as possible keep any explanations in beginner-friendly language.
I guess the cube textures do not change too frequently, since the language should not be switched too often by the user. In this case I would use a toolkit like Qt to generate the textures, something like this:
QImage image(width, height, QImage::Format_ARGB32);
image.fill(Qt::transparent);
QFontMetrics metric(font); // needed below to place the text baseline
QPainter painter;
painter.begin(&image);
painter.setRenderHints(QPainter::HighQualityAntialiasing
                       | QPainter::TextAntialiasing);
painter.setFont(font);
painter.setPen(Qt::black);
painter.drawText(0, metric.ascent(), text); // 'text' is your translated QString
painter.end();
To access the raw texture data:
const uchar* data = image.constBits(); // QImage exposes bits()/constBits(), not data()
A different solution would be to use signed distance field text rendering. With that approach you also render the text to a texture first (render-to-texture) and then apply it to the cube.

On OpenGL, is there any way to tell glTexSubImage2D not to overwrite transparent pixels?

On OpenGL, I'm using glTexSubImage2D to overwrite specific parts of a 2D texture with rectangular sprites. Those sprites have, though, some transparent pixels (0x00000000) that I want to be ignored - that is, I don't want those pixels to overwrite whatever is at their positions on the target texture. Is there any way to tell OpenGL not to overwrite those pixels?
This must work on OpenGL versions as old as possible.
No; glTexSubImage2D copies the data to the texture directly, no matter what the source or the target contains.
I can only suggest creating another texture with the data you are trying to push via glTexSubImage2D and then drawing that texture onto your target texture. This goes through the standard drawing pipeline, so you can do whatever you want using blend functions or shaders.
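If a full render-to-texture pass is overkill for your case, the masked copy can also be done on the CPU: merge the sprite into a staging copy of the target region, skipping zero pixels, then upload the merged rectangle with glTexSubImage2D as before. A minimal sketch; `blitSkipTransparent` is a hypothetical helper (32-bit RGBA, row-major, no bounds checking):

```cpp
#include <cstdint>
#include <vector>

// Merge a sprite into a destination pixel buffer, leaving the destination
// untouched wherever the sprite pixel is fully transparent (0x00000000).
// The merged buffer can then be uploaded with glTexSubImage2D.
void blitSkipTransparent(std::vector<uint32_t>& dst, int dstW,
                         const std::vector<uint32_t>& src, int srcW, int srcH,
                         int dstX, int dstY) {
    for (int y = 0; y < srcH; ++y)
        for (int x = 0; x < srcW; ++x) {
            uint32_t p = src[y * srcW + x];
            if (p != 0)  // skip transparent sprite pixels
                dst[(dstY + y) * dstW + (dstX + x)] = p;
        }
}
```

This requires keeping a CPU-side copy of the target texture region, but it works on any GL version since the masking never touches the GPU.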

How to make sharp textures in cocos2d-x

I have plain image data (32-bit RGBA, dimensions size) in a char array and want to make a cocos2d-x Texture2D out of it, as in:
Texture2D * tex = new Texture2D();
tex->initWithData(data, 4*size.width*size.height, Texture2D::PixelFormat::RGBA8888,size.width, size.height, size);
tex->drawAtPoint(Point(offset));
but the resulting image is blurry (like a compressed JPEG image).
Both the input and the texture (as seen above) have dimensions size, and I am not making any transformations. I just put the code in HelloWorld::draw(Renderer *, const kmMat4&, bool) in a newly created project. How can I get a sharp image (the exact representation of data)? Any other suggestions (like a direct framebuffer-like mechanism, if one exists) are also welcome.
I am using v3.0 on Mac OS 10.8.x.
Edit: I tried setting Texture params (GL_NEAREST) just after Texture2D creation and before initWithData and nothing changed. Data and texture are the same size so it should not do resizing anyway... Here is a picture; corners of the rectangles are not sharp:
Edit2: setting texture parameters (GL_NEAREST) after initWithData worked:
...
Texture2D::TexParams tp = {GL_NEAREST, GL_NEAREST, GL_CLAMP_TO_EDGE, GL_CLAMP_TO_EDGE};
tex->setTexParameters(tp);
tex->drawAtPoint(Point(offset));
From your description of the problem, I have to assume that the GL texture associated with a Texture2D object is not actually created until you call Texture2D::initWithData (...).
In OpenGL, the default texture magnification filter is GL_LINEAR. You do not want the default mag filter in this case, but you also cannot change the filter until after you initialize your texture. Setting a GL_NEAREST mag filter after the texture is created will eliminate the smoothing when you upscale the texture.
Try this: usually a game needs to keep pixel art looking sharp rather than blurred:
http://www.gamedevcraft.com/2015/10/disable-texture-anti-aliasing.html

Difference between glBitmap and glTexImage2D

I need to display image in openGL window.
Image changes every timer tick.
I've checked on Google how, and as far as I can see it can be done using either glBitmap or glTexImage2D.
What is the difference between them?
The difference? These two functions have nothing in common.
glBitmap is a function for drawing binary images. That's not a .BMP file or an image you load (usually). The function's name doesn't refer to the colloquial term "bitmap". It refers to exactly that: a map of bits. Each bit in the bitmap represents a pixel. If the bit is 1, then the current raster color will be written to the framebuffer. If the bit is 0, then the pixel in the framebuffer will not be altered.
glTexImage2D is for allocating textures and optionally uploading pixel data to them. You can later draw triangles that have that texture mapped to them. But glTexImage2D by itself does not draw anything.
What you are probably looking for is glDrawPixels, which draws an image directly into the framebuffer. If you use glTexImage2D, you have to first update the texture with the new image, then draw a shape with that texture (say, a fullscreen quad) to actually render the image.
That said, you'll be better off with glTexImage2D if...
You're using a library like JOGL that makes binding textures from images an easy operation, or
You want to scale the image or display it in perspective
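To make the "map of bits" idea concrete, here is a CPU model of what one row of a glBitmap does to the framebuffer. By default (GL_UNPACK_LSB_FIRST is GL_FALSE) the most significant bit of each byte is the leftmost pixel. `applyBitmapRow` is purely illustrative, not a GL call:

```cpp
#include <cstdint>

// Model of one glBitmap row: each set bit writes the current raster
// colour; each clear bit leaves the framebuffer pixel untouched.
void applyBitmapRow(uint8_t bits, uint32_t rasterColor,
                    uint32_t framebufferRow[8]) {
    for (int x = 0; x < 8; ++x)
        if ((bits >> (7 - x)) & 1)  // MSB-first: bit 7 is the leftmost pixel
            framebufferRow[x] = rasterColor;
}
```

This also shows why glBitmap is a poor fit for displaying a full-colour image: the whole bitmap can only ever produce one colour per draw call.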