Is there a way to get the color index from an indexed PNG? - glsl

I'm trying to write a WebGL shader to do a palette swap of an indexed PNG. Sometimes the texture will be used with the RGB palette defined in the PNG itself, which of course requires no custom code, but other times I would like to display it with a different palette. However, simply sampling the texture returns a vec4, i.e. the RGB color from the palette, which is not what I want. Is there a way to load an indexed PNG as an array of int values representing the color indices?

Related

OpenGL: Saving depth map as 2d array

I am able to render depth maps of 3D models to the screen using OpenGL. I am trying to obtain a 2D array (or matrix) representation of the depth map, say as a grayscale image, so I can perform image-processing operations on it, like masking and segmentation.
So far, my depth map simply outputs depth values instead of colors in the fragment shader. How can I save the resulting depth-map display as a matrix?
You have to use a framebuffer object (FBO). Attach a texture to it as the depth attachment, and then use it as a normal texture. Have a look at this tutorial for example.
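A minimal sketch of that setup (standard OpenGL 3.x calls; the resolution and the 24-bit depth format are assumptions, and a current GL context with loaded function pointers, e.g. via GLEW, is assumed):

#include <vector>

// Sketch: render depth into an FBO-attached depth texture, then read it
// back to the CPU as a W x H float matrix.
const int W = 640, H = 480;
GLuint fbo, depth_tex;

glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, W, H, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depth_tex, 0);
glDrawBuffer(GL_NONE); // depth-only pass: no colour attachment
glReadBuffer(GL_NONE);

// ... render the model here ...

// Depth values in [0,1], one float per pixel:
std::vector<float> depth(W * H);
glReadPixels(0, 0, W, H, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
glBindFramebuffer(GL_FRAMEBUFFER, 0);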

Encode image and change color in SDL2 and C++

Is there a method or function that, when loading a texture, applies color changes to it?
How Sprite Works in NES
You need to do it yourself. SDL wasn't designed to work with the NES texture format.
You'll need to load your texture array and create a new surface of the right size. After that, you can fill the pixels with the colour corresponding to your colour palette. You could do it with a custom SDL_Palette, but that isn't good practice.
An SDL_Palette should never need to be created manually. It is automatically created when SDL allocates an SDL_PixelFormat for a surface. The color values of an SDL_Surface's palette can be set with SDL_SetPaletteColors().
SDL_Palette Wiki Page
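A minimal sketch of that (SDL 2.0.5+ for SDL_CreateRGBSurfaceWithFormatFrom; the 4-colour NES-style palette below is a made-up placeholder):

#include <SDL.h>

// Sketch: wrap an 8-bit index buffer in a paletted SDL_Surface and set its
// colours via SDL_SetPaletteColors(). `indices` holds one palette index per
// pixel; the palette entries are hypothetical.
SDL_Surface* make_paletted_surface(Uint8* indices, int w, int h)
{
    // An INDEX8 surface gets its SDL_Palette allocated automatically.
    SDL_Surface* s = SDL_CreateRGBSurfaceWithFormatFrom(
        indices, w, h, 8, w /* pitch */, SDL_PIXELFORMAT_INDEX8);

    SDL_Color colors[4] = {
        {  0,   0,   0, 255}, // index 0
        { 85,  85,  85, 255}, // index 1
        {170, 170, 170, 255}, // index 2
        {255, 255, 255, 255}, // index 3
    };
    SDL_SetPaletteColors(s->format->palette, colors, 0, 4);
    return s; // convert to an SDL_Texture afterwards for rendering
}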

OpenGL color index in frag shader?

I have a large sprite library and I'd like to cut GPU memory requirements. Can I store textures on the gpu with only 1 byte per pixel and use that for an RGB color look up in a fragment shader? I see conflicting reports on the use of GL_R8.
I'd say this really depends on whether your hardware supports that texture format or not. How about skipping the whole issue by using an A8R8G8B8 texture instead? It would just be packed, i.e. using a bit mask (or the r/g/b/a members in GLSL) to read "sub-pixel" values. For example, the first pixel is stored in the alpha channel, the second pixel in the red channel, the third pixel in the green channel, etc.
You could even use this to store up to 4 layers in a single image (at the cost of maximum texture width/height); picking just one shouldn't be an issue.
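A hedged sketch of the unpacking side, as a GLSL fragment shader carried in a C++ string (the uniform names and the quarter-width packing layout are assumptions, and the packed texture should use GL_NEAREST filtering):

// Sketch: pick one of the four indices packed into each RGBA texel and
// look it up in a 256-entry palette. The alpha/red/green/blue order
// follows the scheme described above.
const char* unpack_frag = R"(
    uniform sampler2D packed_indices; // quarter-width texture, 4 indices/texel
    uniform sampler1D palette_lut;    // 256-entry RGBA palette
    uniform float packed_width;       // width of the packed texture in texels

    void main()
    {
        vec2 uv = gl_TexCoord[0].st;
        // Which of the 4 packed pixels does this fragment correspond to?
        float sub = mod(floor(uv.x * packed_width * 4.0), 4.0);
        vec4 p = texture2D(packed_indices, uv);
        float index = sub < 1.0 ? p.a :
                      sub < 2.0 ? p.r :
                      sub < 3.0 ? p.g : p.b;
        // Aim at the centre of palette texel i: (i + 0.5)/256.0
        gl_FragColor = texture1D(palette_lut, (index * 255.0 + 0.5) / 256.0);
    }
)";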

Good way to deal with alpha channels in 8-bit bitmap? - OpenGL - C++

I am loading bitmaps with OpenGL to texture a 3D mesh. Some of these bitmaps have alpha channels (transparency) for some of the pixels and I need to figure out the best way to:
1. obtain the values of transparency for each pixel, and
2. render them with the transparency applied.
Does anyone have a good example of this? Does OpenGL support this?
First of all, it's generally best to convert your bitmap data to 32-bit so that each channel (R, G, B, A) gets 8 bits. When you upload your texture, specify a 32-bit format.
Then when rendering, you'll need to glEnable(GL_BLEND); and set the blend function, e.g. glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);. This tells OpenGL to mix the RGB of the texture with that of the background, using the alpha of your texture.
If you're doing this to 3D objects, you might also want to turn off back-face culling (so that you see the back of the object through the front) and sort your triangles back-to-front (so that the blends happen in the correct order).
If your source bitmap is 8-bit (i.e. using a palette with one colour specified as the transparency mask), then it's probably easiest to convert that to RGBA, setting the alpha value to 0 when the colour matches your transparency mask.
Some hints to make things (maybe) look better:
Your alpha channel is going to be an all-or-nothing affair (either 0x00 or 0xff), so apply some blur algorithm to get softer edges, if that's what you're after.
For texels (texture pixels) with an alpha of zero (fully transparent), replace the RGB colour with that of the closest non-transparent texel. That way, when texture coordinates are interpolated, colours won't be blended towards the original transparency colour from your BMP; a naive version of this is sketched below.
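A naive single-pass sketch of that colour bleeding (the helper name is hypothetical; `rgba` is a tightly packed 8-bit RGBA buffer):

#include <cstdint>

// Sketch: for each fully transparent texel, copy the RGB of the first
// opaque 4-neighbour found, so filtering doesn't blend towards the old
// transparency colour. One pass; repeat the pass for wider bleeding.
void bleed_rgb(uint8_t* rgba, int w, int h)
{
    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            uint8_t* p = rgba + (y * w + x) * 4;
            if (p[3] != 0) continue;               // only fix transparent texels
            for (int k = 0; k < 4; ++k) {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                const uint8_t* q = rgba + (ny * w + nx) * 4;
                if (q[3] != 0) {                   // opaque neighbour found
                    p[0] = q[0]; p[1] = q[1]; p[2] = q[2];
                    break;
                }
            }
        }
    }
}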
If your pixmap is 8-bit single channel, it is either grayscale or uses a palette. What you first need to do is convert the pixmap data into RGBA format. For this you allocate a buffer large enough to hold a 4-channel pixmap of the dimensions of the original file. Then, for each pixel of the pixmap, use that pixel's value as an index into the palette (lookup table) and put the resulting color value into the corresponding pixel of the RGBA buffer. Once finished, upload to OpenGL using glTexImage2D.
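A CPU-side sketch of that conversion (the palette layout and the optional transparency-mask index from the answer above are assumptions):

#include <cstdint>
#include <vector>

// Sketch: expand an 8-bit paletted pixmap to RGBA using the palette as a
// lookup table. `mask_index` is the palette entry treated as transparent
// (per the previous answer); pass -1 to keep everything opaque.
std::vector<uint8_t> palette_to_rgba(const uint8_t* pix, int w, int h,
                                     const uint8_t (*palette)[3],
                                     int mask_index = -1)
{
    std::vector<uint8_t> rgba(size_t(w) * size_t(h) * 4);
    for (size_t i = 0, n = size_t(w) * size_t(h); i < n; ++i) {
        uint8_t idx = pix[i];
        rgba[i * 4 + 0] = palette[idx][0];
        rgba[i * 4 + 1] = palette[idx][1];
        rgba[i * 4 + 2] = palette[idx][2];
        rgba[i * 4 + 3] = (int(idx) == mask_index) ? 0 : 255;
    }
    return rgba;
}
// Then upload:
// glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
//              GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());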
If your GPU supports fragment shaders (very likely), you can do that LUT transformation in the shader instead: upload the 8-bit pixmap as a GL_RED or GL_LUMINANCE 2D texture, and upload the palette as a 1D GL_RGBA texture. Then in the fragment shader:
uniform sampler2D texture;     // 8-bit indices, uploaded as GL_RED/GL_LUMINANCE
uniform sampler1D palette_lut; // palette colors, e.g. 256 RGBA entries

void main()
{
    // Sampling the index texture yields index/255.0. Remap so that index i
    // hits the centre of palette texel i, i.e. (i + 0.5)/256.0, assuming a
    // 256-entry palette; use GL_NEAREST filtering on palette_lut.
    float palette_index = texture2D(texture, gl_TexCoord[0].st).r;
    vec4 color = texture1D(palette_lut, (palette_index * 255.0 + 0.5) / 256.0);
    gl_FragColor = color;
}
Blended rendering conflicts with the Z-buffer algorithm, so you must sort your geometry back-to-front for things to look right. As long as this is done per object as a whole it is rather simple, but it becomes tedious if you need to sort the faces of a mesh each and every frame. A method to avoid this is breaking meshes down into convex submeshes (of course a mesh that's already convex cannot be broken down further). Then use the following method:
enable face culling
for convex_submesh in sorted(meshes, far to near):
    set face culling to front faces (i.e. the back side gets rendered)
    render convex_submesh
    set face culling to back faces (i.e. the front side gets rendered)
    render convex_submesh again
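In C++/OpenGL terms, the loop might look like this (the `Mesh` type, `draw()` and the distance sort key are hypothetical placeholders for your own scene code):

#include <algorithm>
#include <vector>

// Sketch of the two-pass convex-submesh trick with face culling.
void render_blended(std::vector<Mesh>& meshes)
{
    glEnable(GL_CULL_FACE);
    std::sort(meshes.begin(), meshes.end(),
              [](const Mesh& a, const Mesh& b) {
                  return a.distance_to_camera > b.distance_to_camera; // far first
              });
    for (Mesh& m : meshes) {
        glCullFace(GL_FRONT); // pass 1: only back faces get rendered
        m.draw();
        glCullFace(GL_BACK);  // pass 2: front faces render on top
        m.draw();
    }
}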

OpenGL colorize filters

I have an OpenGL quad that is rendered with a grayscale gradient. I would like to colorize it by applying a filter, something like:
If color = 0,0,0 then set color to 255,255,255
If color = 0,0,1 then set color to 255,255,254
etc, or some scheme I decide on.
Note: the reason I do this in grayscale is that the algorithm I'm using was designed to be drawn in grayscale and then colorized, since the colors may not be known immediately.
This would be similar to the java LookupOp http://download.oracle.com/javase/6/docs/api/java/awt/image/LookupOp.html.
Is there a way to do this in openGL?
thanks,
Jeff
You could interpret those colours from the grayscale gradient as 1-D texture coordinates and then specify your look-up table as a 1-D texture. This seems to fit your situation.
Alternatively, you can use a fragment program (shader) to perform arbitrary colour transformations on individual pixels.
Some more explanation: What is a texture? A texture, conceptually, is some kind of lookup function, with some additional logic on top.
A 2-D texture is something which, for any pair of coordinates (s,t) or (x,y) in the range [0,0] - [1,1], yields a specific colour (RGB, RGBA, L, whatever). Additionally it has some settings like wrapping or filtering.
Underneath, a texture is described by discrete data of a given "density" - perhaps 16x16, perhaps 256x512. The filtering process makes it possible to specify a colour for any real number between [0,0] and [1,1] (by mixing/interpolating neighbouring texels or just taking the nearest one).
A 1-D texture is identical, except that it maps just a single real value to a colour. Therefore, it can be thought of as a specific type of a "lookup table". You can consider it equivalent to a 2-D texture based on a 1xN image.
If you have a grayscale gradient, you may render it directly by treating the gradient value as a colour - or you can treat it as a texture coordinate (= index into the lookup table) and use the 1-D texture for an arbitrary colour-space transform.
You'd just need to translate the gradient values (from the 0..255 range) to the [0..1] range of texture coordinates. I'd recommend something like out = (in + 0.5) / 256.0. The 0.5 accounts for the half-texel offset: we want to point at the middle of a texel (a value inside the texture), not at the corner between 2 texels.
To get only the exact RGB values from the lookup table (= 1-D texture), also set the texture filters to GL_NEAREST.
BTW: Note that if you also need a texture to draw the gradient itself, then it gets a bit more complicated, because you'd want to treat the values read from one texture as coordinates into another texture - and I believe you'd need pixel shaders for that. Not that shaders are complicated or anything... they are extremely handy once you learn the basics.
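A sketch of the LUT setup and the texture-fed variant mentioned above (the 256-entry table `lut` and the uniform names are assumptions):

// Sketch: upload a 256-entry RGBA lookup table as a 1-D texture. GL_NEAREST
// filtering ensures only exact LUT colours come out. `lut` is assumed to
// point at 256 RGBA byte quadruples.
GLuint lut_tex;
glGenTextures(1, &lut_tex);
glBindTexture(GL_TEXTURE_1D, lut_tex);
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGBA8, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, lut);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// For the gradient-drawn-from-a-texture case, the indirection in a
// fragment shader (hypothetical uniform names):
const char* colorize_frag = R"(
    uniform sampler2D gradient; // grayscale source
    uniform sampler1D lut;      // the 1-D lookup table
    void main()
    {
        float g = texture2D(gradient, gl_TexCoord[0].st).r; // 0..1
        // (in + 0.5)/256.0 with in = g * 255.0, hitting texel centres:
        gl_FragColor = texture1D(lut, (g * 255.0 + 0.5) / 256.0);
    }
)";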