Display contents of an OpenGL ES buffer - C++

I want to display a YUV-to-RGB converted frame on the default display. Currently I am doing it with the following code, where the YUV-to-RGB conversion is done by assembly code that puts a heavy load on the CPU. I have found some code that does the same with OpenGL ES.
Yuv420_to_RGB(ui8buf, buffer1, h1, w1); /* RGB data will end up in buffer1 */
window = ANativeWindow_fromSurface(env, surface);
ANativeWindow_acquire(window);
wid = ANativeWindow_getWidth(window);
hei = ANativeWindow_getHeight(window);
ANativeWindow_setBuffersGeometry(window, w1, h1, 1); /* 1 == WINDOW_FORMAT_RGBA_8888 */
if (ANativeWindow_lock(window, &buffer, NULL) == 0)
{
    memcpy(buffer.bits, buffer1, 4 * w1 * h1); /* 4 bytes per RGBA pixel */
    ANativeWindow_unlockAndPost(window);
}
ANativeWindow_release(window);
I have the OpenGL ES routine ending with glDrawArrays. How can I display the result of the OpenGL ES conversion?

None of the code you posted does anything with OpenGL ES. The typical method to implement color space conversion with OpenGL(-ES) is to load the image into a texture, load a fragment shader performing the color conversion, and draw a (full viewport) textured quad (that is what glDrawArrays will do, if a quad's geometry has been loaded into the vertex arrays beforehand).
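As a minimal sketch of that approach for OpenGL ES 2.0 (the uniform names, the attribute location positionLoc, the EGL handles, and the BT.601 coefficients are placeholders for illustration, not code from the question):

// Fragment shader: samples the Y, U and V planes (uploaded as three
// GL_LUMINANCE textures) and converts each texel to RGB (BT.601).
// The vertex shader supplying vTexCoord is omitted here.
static const char *fragSrc =
    "precision mediump float;\n"
    "varying vec2 vTexCoord;\n"
    "uniform sampler2D texY, texU, texV;\n"
    "void main() {\n"
    "    float y = texture2D(texY, vTexCoord).r;\n"
    "    float u = texture2D(texU, vTexCoord).r - 0.5;\n"
    "    float v = texture2D(texV, vTexCoord).r - 0.5;\n"
    "    gl_FragColor = vec4(y + 1.402 * v,\n"
    "                        y - 0.344 * u - 0.714 * v,\n"
    "                        y + 1.772 * u, 1.0);\n"
    "}\n";

// Full-viewport quad drawn as a triangle strip with the shader above.
static const GLfloat quad[] = { -1.f, -1.f,   1.f, -1.f,
                                -1.f,  1.f,   1.f,  1.f };
glVertexAttribPointer(positionLoc, 2, GL_FLOAT, GL_FALSE, 0, quad);
glEnableVertexAttribArray(positionLoc);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

// glDrawArrays only renders into the back buffer; the frame becomes
// visible once the EGL surface is presented:
eglSwapBuffers(eglDisplay, eglSurface);

In other words, the display step you are missing is the buffer swap: eglSwapBuffers here, or the swap that a GLSurfaceView/EGL setup performs for you after your rendering callback returns.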

Related

How to fill non RGB OpenGL texture in glium?

I use OpenGL shaders to do color conversion from YUV to RGB. For example, for YUV420P I create 3 textures (one for Y, one for U, one for V) and use the texture GLSL call to sample each texture. Then I use a matrix multiplication to get the RGB value. Each of these textures has the format GL_RED, because they store only 1 component.
This all works on C++. Now I'm using the safe OpenGL Rust library glium. I'm creating a texture like this:
let mipmap = glium::texture::MipmapsOption::NoMipmap;
let format = glium::texture::UncompressedFloatFormat::U8;
let y_texture = glium::texture::texture2d::Texture2d::empty_with_format(&display, format, mipmap, width as u32, height as u32).unwrap();
let u_texture = glium::texture::texture2d::Texture2d::empty_with_format(&display, format, mipmap, (width as u32)/2, (height as u32)/2).unwrap();
let v_texture = glium::texture::texture2d::Texture2d::empty_with_format(&display, format, mipmap, (width as u32)/2, (height as u32)/2).unwrap();
Note that the U and V textures are half the width and half the height of the Y texture (a quarter of the area), as expected for YUV420P.
As you see, for YUV420P I've chosen glium::texture::UncompressedFloatFormat::U8, which I think is the same as GL_RED.
The problem is that I don't know how to fill these textures with data. Their write method expects something that can be converted into a RawImage2D. However, all the constructor methods for RawImage2D expect an RGB image.
I need a method to fill only Y to the first texture, then only U to the second, and only V to the third.
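For reference, the raw-GL upload that the question says already works in C++ looks roughly like this for one plane (textureId, width, height and planeData are placeholders); the goal is to achieve the equivalent through glium:

// Upload one 8-bit plane (Y, U or V) into a single-channel GL_RED texture.
glBindTexture(GL_TEXTURE_2D, textureId);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* plane rows are tightly packed */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, planeData);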

Identify pixel data format in SDL

Running on OS X, I've loaded a texture in OpenGL using the SDL_image library (using IMG_Load(), which returns an SDL_Surface*). It appeared that the color channels had been swapped, i.e. I had to set GL_BGRA as the pixel format parameter in glTexImage2D().
Is there a way to determine the correct data format (BGRA, RGBA, etc.) without simply compiling and checking the texture? And what is the reason that SDL swaps these color channels?
Yes. The following link has code examples of how to determine the channel shift for each component: http://wiki.libsdl.org/SDL_PixelFormat#Code_Examples
From the site:
SDL_PixelFormat *fmt;
SDL_Surface *surface;
Uint32 temp, pixel;
Uint8 red, green, blue, alpha;
/* ... */
fmt = surface->format;
SDL_LockSurface(surface);
pixel = *((Uint32*)surface->pixels);
SDL_UnlockSurface(surface);
/* Get Red component */
temp = pixel & fmt->Rmask;  /* Isolate red component */
temp = temp >> fmt->Rshift; /* Shift it down to 8-bit */
temp = temp << fmt->Rloss;  /* Expand to a full 8-bit number */
red = (Uint8)temp;
You should be able to sort the Xmasks by value; then you can determine whether it's RGBA or BGRA. If Xmask == 0, then the color channel does not exist.
I have no idea why the swaps occur.
Edit: Changed from Xshift to Xmask, as the latter may be used to determine both the location AND the existence of color channels.
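As an illustration of that idea, a minimal sketch (the helper name surface_is_bgra is made up, and it assumes a 32-bit surface):

/* A smaller mask value means the channel occupies the lower bits of the
   packed pixel, i.e. it comes first in memory on a little-endian machine. */
int surface_is_bgra(const SDL_PixelFormat *fmt)
{
    if (fmt->Amask == 0)
        return 0; /* no alpha channel at all (RGB/BGR layout) */
    /* BGRA: blue in the lowest bits, then green, then red. */
    return fmt->Bmask < fmt->Gmask && fmt->Gmask < fmt->Rmask;
}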

How to go from raw Bitmap data to SDL Surface or Texture?

I'm using a library called Awesomium and it has the following function:
void Awesomium::BitmapSurface::CopyTo( unsigned char * dest_buffer, // output
                                       int  dest_row_span,          // input that I can select
                                       int  dest_depth,             // input that I can select
                                       bool convert_to_rgba,        // input that I can select
                                       bool flip_y                  // input that I can select
                                     ) const
Copy this bitmap to a certain destination. Will also set the dirty bit to False.
Parameters
dest_buffer A pointer to the destination pixel buffer.
dest_row_span The number of bytes per-row of the destination.
dest_depth The depth (number of bytes per pixel, is usually 4 for BGRA surfaces and 3 for BGR surfaces).
convert_to_rgba Whether or not we should convert BGRA to RGBA.
flip_y Whether or not we should invert the bitmap vertically.
This is great because it fills a caller-supplied unsigned char * dest_buffer with raw bitmap data. I've been trying for several hours to convert this raw bitmap data into some sort of usable format that I can use in SDL, but I'm having trouble. =[ Is there any way I can load it into an SDL texture or surface? It would be ideal to have examples for both, but if I only get one example (either texture or surface), that is sufficient and I will be very grateful. :) I tried to use SDL_LoadBMP_RW, but that crashed. I'm not even sure if I should be using that method.
SDL_LoadBMP_RW is for loading an image in the BMP file format. And it expects an SDL_RWops*, which is a file stream, not a pixel buffer. The function you want is SDL_CreateRGBSurfaceFrom. I believe this call should work for your purposes:
SDL_Surface* surface =
    SDL_CreateRGBSurfaceFrom(
        pixels, // dest_buffer from CopyTo
        width,  // in pixels
        height, // in pixels
        depth,  // in bits, so should be dest_depth * 8
        pitch,  // dest_row_span from CopyTo
        Rmask,  // RGBA masks, see docs
        Gmask,
        Bmask,
        Amask
    );
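For instance, with convert_to_rgba == true and dest_depth == 4, the call might look like this (the mask values assume a little-endian machine and a tightly packed buffer):

/* RGBA byte order on little-endian: R is the lowest byte of each pixel. */
SDL_Surface* surface = SDL_CreateRGBSurfaceFrom(
    dest_buffer,
    width, height,
    32,          /* dest_depth * 8 bits per pixel */
    width * 4,   /* dest_row_span of a tightly packed buffer */
    0x000000FF,  /* Rmask */
    0x0000FF00,  /* Gmask */
    0x00FF0000,  /* Bmask */
    0xFF000000); /* Amask */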

How to create one bitmap from parts of many textures (C++, SDL 2)?

I have *.png files and I want to take different 8x8 px parts from the textures and place them on a bitmap (an SDL_Surface, I guess, but maybe not), something like this:
Right now I'm rendering without a bitmap, i.e. every frame I take each texture and draw the part directly on the screen, and it's too slow. I guess I need to load each *.png into a separate bitmap held in video memory and compose them into one big bitmap that I can draw with a single call, but maybe I'm wrong. I need the fastest way of doing this, and I need code for it (SDL 2, not SDL 1.3).
Also, maybe I need to use plain OpenGL here?
Update:
Or maybe I need to load the *.png files into int arrays somehow, treat them like ordinary numbers, place them into one big int array, and then convert that to an SDL_Surface/SDL_Texture? It seems this would be the best way, but how do I write it?
Update 2:
The colors of the pixels in each block are not uniform as presented in the picture, and they can also be transparent. The picture is just an example.
Assuming you already have your bitmaps loaded up as SDL_Texture(s), composing them into a different texture is done via SDL_SetRenderTarget.
/* target_texture must have been created with SDL_TEXTUREACCESS_TARGET */
SDL_SetRenderTarget(renderer, target_texture);
SDL_RenderCopy(renderer, texture1, ...);
SDL_RenderCopy(renderer, texture2, ...);
...
SDL_SetRenderTarget(renderer, NULL);
Every render operation you perform between setting your render target and resetting it (by calling SDL_SetRenderTarget with a NULL texture parameter) will be rendered to the designated texture. You can then use this texture as you would use any other.
OK, so when I asked about "solid colour", I meant: in the 8x8 pixel area of the .png that you are copying from, do all 64 pixels have the same RGB value? It looks that way in your diagram, so how about this:
How about creating an SDL_Surface and directly painting 8x8 pixel areas of the memory pointed to by the pixels member of that SDL_Surface with the values read from the original .png?
Then, when you're done, convert that surface to an SDL_Texture and render that?
You would avoid all the SDL_UpdateTexture() calls.
Anyway here is some example code. Let's say that you create a class called EightByEight.
class EightByEight
{
public:
    EightByEight( SDL_Surface * pDest, Uint8 r, Uint8 g, Uint8 b ):
        m_pSurface(pDest),
        m_red(r),
        m_green(g),
        m_blue(b) {}

    void BlitToSurface( int column, int row );

private:
    SDL_Surface * m_pSurface;
    Uint8 m_red;
    Uint8 m_green;
    Uint8 m_blue;
};
You construct an object of type EightByEight by passing it a pointer to an SDL_Surface and also some values for red, green and blue. This RGB corresponds to the RGB value taken from the particular 8x8 pixel area of the .png you are currently reading from. You will paint a particular 8x8 pixel area of the SDL_Surface pixels with this RGB value.
So now, when you want to paint an area of the SDL_Surface, you use the function BlitToSurface() and pass in a column and row value. For example, if you divide the SDL_Surface into 8x8 pixel squares, BlitToSurface(3, 5) means: paint the square at column 3, row 5 (counting from zero) with the RGB value I set at construction.
The BlitToSurface() looks like this:
void EightByEight::BlitToSurface(int column, int row)
{
    // pitch is in bytes, so divide by 4 to step in whole Uint32 pixels;
    // scale the square coordinates by 8 to get pixel coordinates.
    Uint32 * pixel = (Uint32*)m_pSurface->pixels
                   + (row * 8) * (m_pSurface->pitch / 4)
                   + (column * 8);
    // now pixel points to the first pixel in the correct 8x8 pixel square
    // of the Surface's pixel memory. Paint 8 rows of 8 pixels, but be
    // careful: step back to the square's left edge after each row.
    for(int y = 0; y < 8; y++)
    {
        // paint a row
        for(int i = 0; i < 8; i++)
        {
            *pixel++ = SDL_MapRGB(m_pSurface->format, m_red, m_green, m_blue);
        }
        // advance the pointer by one full surface row (pitch/4 pixels)
        // minus the 8 pixels just written, to reach the next "row".
        pixel += (m_pSurface->pitch / 4) - 8;
    }
}
I'm sure you could speed things up further by pre-calculating the mapped pixel value once at construction. Or, if you're reading pixels from the texture, you could probably dispense with SDL_MapRGB() entirely (it's just there in case the Surface has a different pixel format than the .png).
A memcpy of a whole row is probably faster than 8 individual assignments of the RGB value, but I just want to demonstrate the technique. You could experiment.
So all the EightByEight objects you create point to the same SDL_Surface.
And then, when you're done, you just convert that SDL_Surface to an SDL_Texture and blit that.
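That final step is a one-liner; a sketch along these lines (error handling omitted, renderer and surface being the objects built above):

// Convert the finished surface to a texture and draw it.
SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, surface);
SDL_FreeSurface(surface);             // pixel data now lives in the texture
SDL_RenderCopy(renderer, tex, NULL, NULL);
SDL_RenderPresent(renderer);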
Thanks to everyone who took part, but my friends and I solved it ourselves. Here is an example (the full source code is too big and unnecessary here, so I'll just describe the main idea):
int pitch, *pixels;
SDL_Texture *texture;
...
/* SDL_LockTexture returns 0 on success; the texture must have been
   created with SDL_TEXTUREACCESS_STREAMING to be lockable */
if (!SDL_LockTexture(texture, 0, (void **)&pixels, &pitch))
{
    for (/* conditions */)
        memcpy(/* params */);
    SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, 0, 0);
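For completeness, a texture used this way would be created roughly like so (the pixel format and dimensions are placeholders):

SDL_Texture *texture = SDL_CreateTexture(renderer,
    SDL_PIXELFORMAT_ARGB8888,     /* must match the data you memcpy in */
    SDL_TEXTUREACCESS_STREAMING,  /* required for SDL_LockTexture */
    width, height);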

Load bmp file as texture using auxDIBImageLoad in OpenGL

I am learning OpenGL from the NeHe tutorials. While reading Lesson 22 (Bump-Mapping, Multi-texture), I ran into a problem.
To load the logo, I need to load two BMP files: one stores the color information, and the other stores the alpha information.
Here are the two BMP files: OpenGL_Alpha.bmp and OpenGL.bmp.
Here is the code:
if (Image=auxDIBImageLoad("Data/OpenGL_ALPHA.bmp")) {
    alpha=new char[4*Image->sizeX*Image->sizeY];
    for (int a=0; a<Image->sizeX*Image->sizeY; a++)
        alpha[4*a+3]=Image->data[a*3];      // ???????
    if (!(Image=auxDIBImageLoad("Data/OpenGL.bmp"))) status=false;
    for (int a=0; a<Image->sizeX*Image->sizeY; a++) {
        alpha[4*a]=Image->data[a*3];        // ??????????
        alpha[4*a+1]=Image->data[a*3+1];
        alpha[4*a+2]=Image->data[a*3+2];
    }
    glGenTextures(1, &glLogo);
    glBindTexture(GL_TEXTURE_2D, glLogo);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, Image->sizeX, Image->sizeY, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, alpha);
    delete [] alpha;
}
My question is: why is the index into Image->data a*3?
Could someone explain this to me?
I am learning OpenGL from the NeHe tutorials. While reading Lesson 22 (Bump-Mapping)
Why? The NeHe tutorials are terribly outdated, and the bump-mapping technique outlined there is completely obsolete. It has been superseded by shader-based normal mapping for well over 13 years (until 2003, texture combiners were used instead of shaders).
Also, instead of BMPs you should use an image file format better suited for textures (one with an alpha channel), like:
TGA
PNG
OpenEXR
Also the various compressed DX texture formats are a good choice for several applications.
My question is: why is the index into Image->data a*3?
Extracting the red channel of an RGB DIB.
It's the channel offset. The RGB data is stored as three consecutive bytes per pixel. Here 'a' selects which pixel (which group of 3 bytes: one for R, one for G, one for B).
Think of a*3 as a pointer to an array of 3 bytes:
char* myPixel = Image->data + (a*3);
char red = myPixel[0];
char green = myPixel[1];
char blue = myPixel[2];