draw part of image with openGL glDrawPixels - opengl

I have a function to draw an image in an OpenGL context (used in this case to render to a texture). It works for the whole image, but it should also be able to render only a rectangular part. Rendering parts works if the part has the same width as the image; for parts that are narrower than the image data it fails.
Here is the function (reduced to only the narrow-width case; no cleanup, etc.):
void drawImage(uint32 imageWidth, uint32 imageHeight, uint8* pData,
               uint32 offX, uint32 partWidth) // (offX+partWidth<=imageWidth)
{
    uint8* p(pData);
    if (partWidth != imageWidth)
    {
        glPixelStorei(GL_PACK_ROW_LENGTH, imageWidth);
        p = calcFrom(offX, pData); // point at the first pixel of the part within its row
    }
    glDrawPixels(partWidth, imageHeight, GL_BGRA, GL_UNSIGNED_BYTE, p);
}
As said: if (partWidth == imageWidth) the rendering works fine. For some combinations of partWidth and imageWidth it also works, but that seems to be a very special case, mainly with very small images and some particular partWidths.
I found no examples for this, but from the docs I think it should be possible to do it roughly like this. Did I misunderstand the whole thing, or have I just overlooked a small pitfall?
Thanks,
Moritz
P.S.: it's running on Windows.
[Edited:] P.P.S.: by now I have tried to do it as a texture. If I replace glDrawPixels with glTexImage2D I get the same problem... (I could upload the whole image and render only a part of it, but for small parts of big pictures that might not be the best way...)

AAArrrghh!!
GL_UNPACK_ROW_LENGTH not GL_PACK_ROW_LENGTH!!!!
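For anyone landing here later, a minimal sketch of the corrected call; GL_UNPACK_SKIP_PIXELS is used here in place of the calcFrom() pointer arithmetic, and the parameter names are the ones from drawImage() above:

glPixelStorei(GL_UNPACK_ROW_LENGTH, imageWidth);   // length of a full row in the source data
glPixelStorei(GL_UNPACK_SKIP_PIXELS, offX);        // skip to the first column of the part
glDrawPixels(partWidth, imageHeight, GL_BGRA, GL_UNSIGNED_BYTE, pData);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);            // reset so later uploads are unaffected
glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);

The same GL_UNPACK_* parameters apply to glTexImage2D, so the texture variant mentioned in the edit works the same way.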

Related

Greyscale image in SDL2

I have an array of uint8_t which represents a greyscale picture, where each pixel is one uint8_t. I would like to display this in a window using the SDL2 library.
I have tried to create an SDL_Surface from the array by doing
mSurface = SDL_CreateRGBSurfaceFrom(mData, mWidth, mHeight, 8, mWidth, 0xFF0000, 0xFF0000, 0xFF0000, 0xFF0000);
However, the problem is that when a depth of 8 bits is passed to SDL_CreateRGBSurfaceFrom (as I have done here), according to the SDL2 wiki, "If depth is 4 or 8 bits, an empty palette is allocated for the surface". If it weren't for that, I would be able to tell SDL that each pixel is one byte, and to use that byte for the R, G, and B values.
I want a depth of 8 bits per pixel because that's how my data is stored, but I don't want to use a palette.
Is there any way to make SDL not assume I want a palette, and just display the image with the R, G, and B masks all set to that byte?
I understand that an alternative solution would be to convert my greyscale image into RGB by copying each byte three times, and then to display it. However, I would like to avoid doing that if possible because all that copying would be slow.
SDL_CreateRGBSurfaceFrom() does not handle 8-bit true color formats. As you noted, it creates a blank palette for 8-bit depths. The most obvious thing to do is to fill in the palette and just let it do its thing.
Here's some code for a grayscale palette:
SDL_Color colors[256];
int i;
for(i = 0; i < 256; i++)
{
    // identity ramp: palette index i maps to grey level i
    colors[i].r = colors[i].g = colors[i].b = i;
}
SDL_SetPaletteColors(mSurface->format->palette, colors, 0, 256);
Also, a rule of thumb: never avoid something that works just because it is "slow". Do avoid things that are "too slow". Often you only find out whether something is "too slow" by trying it out.
In this case, you might only be loading this image once, after which the performance effect is negligible.
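Putting it together, a minimal sketch using the names from the question (mData, mWidth, mHeight); the masks are left at 0 since an 8-bit surface is palettized anyway, and the renderer/texture step at the end is just one possible way to get it on screen:

// 8-bit surface wrapping the existing greyscale buffer
SDL_Surface *mSurface = SDL_CreateRGBSurfaceFrom(mData, mWidth, mHeight, 8, mWidth, 0, 0, 0, 0);

// fill the automatically allocated palette with a grey ramp
SDL_Color colors[256];
for (int i = 0; i < 256; i++)
{
    colors[i].r = colors[i].g = colors[i].b = i;
    colors[i].a = 255;
}
SDL_SetPaletteColors(mSurface->format->palette, colors, 0, 256);

// hand it to a renderer (or SDL_BlitSurface it onto the window surface)
SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, mSurface);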

How to create one bitmap from parts of many textures (C++, SDL 2)?

I have *.png files and I want to take different 8x8 px parts from those textures and place them on a bitmap (an SDL_Surface, I guess, but maybe not), something like this:
Right now I'm rendering without a bitmap, i.e. I take each texture and draw the part directly on screen every frame, and it's too slow. I guess I need to load each *.png into a separate bitmap and use them without going through video memory, then draw just one big bitmap, but maybe I'm wrong. I need the fastest way of doing this, and I need code for it (SDL 2, not SDL 1.3).
Also, maybe I need to use plain OpenGL here?
Update:
Or maybe I need to load the *.png's into int arrays somehow, treat them just like ordinary numbers, place them into one big int array, and then convert that to an SDL_Surface/SDL_Texture? That seems like the best way, but how do I write it?
Update 2:
The colors of the pixels in each block are not the same as shown in the picture, and they can also be transparent. The picture is just an example.
Assuming you already have your bitmaps loaded up as SDL_Texture(s), composing them into a different texture is done via SDL_SetRenderTarget.
SDL_SetRenderTarget(renderer, target_texture);
SDL_RenderCopy(renderer, texture1, ...);
SDL_RenderCopy(renderer, texture2, ...);
...
SDL_SetRenderTarget(renderer, NULL);
Every render operation you perform between setting your render target and resetting it (by calling SDL_SetRenderTarget with a NULL texture parameter) will be rendered to the designated texture. You can then use this texture as you would use any other.
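A minimal sketch of that flow; the texture names and sizes here are made up, and the one thing to note is that the target texture must be created with SDL_TEXTUREACCESS_TARGET (and the renderer needs target-texture support, e.g. the SDL_RENDERER_TARGETTEXTURE flag), otherwise SDL_SetRenderTarget will fail:

// the composition target must be created with SDL_TEXTUREACCESS_TARGET
SDL_Texture *target = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                        SDL_TEXTUREACCESS_TARGET, 256, 256);

SDL_SetRenderTarget(renderer, target);
SDL_Rect src = { 0, 0, 8, 8 };    // which 8x8 part of the source texture to take
SDL_Rect dst = { 16, 24, 8, 8 };  // where it lands on the composed bitmap
SDL_RenderCopy(renderer, texture1, &src, &dst);
// ...more copies from other textures...
SDL_SetRenderTarget(renderer, NULL);

// later, draw the composed texture like any other
SDL_RenderCopy(renderer, target, NULL, NULL);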
OK, so when I asked about "solid colour", I meant: "in that 8x8 pixel area of the .png that you are copying from, do all 64 pixels have the same identical RGB value?" It looks that way in your diagram, so how about this:
How about creating an SDL_Surface and directly painting 8x8 pixel areas of the memory pointed to by the pixels member of that SDL_Surface with the values read from the original .png?
Then, when you're done, convert that surface to an SDL_Texture and render that.
You would avoid all the SDL_UpdateTexture() calls.
Anyway here is some example code. Let's say that you create a class called EightByEight.
class EightByEight
{
public:
    EightByEight( SDL_Surface * pDest, Uint8 r, Uint8 g, Uint8 b ):
        m_pSurface(pDest),
        m_red(r),
        m_green(g),
        m_blue(b){}

    void BlitToSurface( int column, int row );

private:
    SDL_Surface * m_pSurface;
    Uint8 m_red;
    Uint8 m_green;
    Uint8 m_blue;
};
You construct an object of type EightByEight by passing it a pointer to an SDL_Surface and also some values for red, green and blue. This RGB corresponds to the RGB value taken from the particular 8x8 pixel area of the .png you are currently reading from. You will paint a particular 8x8 pixel area of the SDL_Surface pixels with this RGB value.
So now when you want to paint an area of the SDL_Surface, you use the function BlitToSurface() and pass in a column and row value. For example, if you divide the SDL_Surface into 8x8 pixel squares, BlitToSurface(3,5) means: paint the square in the 4th column and 6th row (counting columns and rows from zero) with the RGB value set at construction.
The BlitToSurface() looks like this:
void EightByEight::BlitToSurface(int column, int row)
{
    // pitch is in bytes; divide by 4 to step in whole Uint32 pixels
    int pitchInPixels = m_pSurface->pitch / 4;

    // point at the first pixel of the requested 8x8 square
    // (column and row are block indices, so scale them by 8)
    Uint32 * pixel = (Uint32*)m_pSurface->pixels
                     + (row * 8 * pitchInPixels) + (column * 8);

    for(int y = 0; y < 8; y++)
    {
        // paint one row of 8 pixels
        for(int i = 0; i < 8; i++)
        {
            *pixel++ = SDL_MapRGB(m_pSurface->format, m_red, m_green, m_blue);
        }
        // advance to the start of the next row of this 8x8 square
        pixel += pitchInPixels - 8;
    }
}
I'm sure you could speed things up further by pre-calculating the mapped colour value on construction. Or, if you're reading the pixel value straight from the texture, you could probably dispense with SDL_MapRGB() altogether (it's just there in case the surface has a different pixel format from the .png).
memcpy is probably faster than 8 individual assignments of the colour value, but I just want to demonstrate the technique. You could experiment.
So all the EightByEight objects you create point to the same SDL_Surface.
Then, when you're done, you just convert that SDL_Surface to an SDL_Texture and blit that.
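A rough usage sketch of that final step, assuming pDest and renderer already exist (the colour values are arbitrary):

EightByEight block(pDest, 255, 0, 0);   // will paint red 8x8 squares onto pDest
block.BlitToSurface(3, 5);              // 4th column, 6th row
// ...paint the remaining squares...

SDL_Texture *tex = SDL_CreateTextureFromSurface(renderer, pDest);
SDL_RenderCopy(renderer, tex, NULL, NULL);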
Thanks to everyone who took part, but we solved it with my friends. Here is an example (the full source code is too big and unnecessary here, so I'll just describe the main idea):
int pitch, *pixels;
SDL_Texture *texture;
...
if (!SDL_LockTexture(texture, 0, (void **)&pixels, &pitch))
{
for (/*Conditions*/)
memcpy(/*Params*/);
SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, 0, 0);
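For illustration only, a filled-in version of that skeleton might look like the following; srcPixels, srcPitch, width and height are hypothetical, and the row-by-row memcpy is there because the texture's pitch can differ from the source buffer's:

void *pixels;
int pitch;
if (!SDL_LockTexture(texture, NULL, &pixels, &pitch))       // 0 means success
{
    for (int y = 0; y < height; y++)
        memcpy((Uint8 *)pixels + y * pitch,                  // destination row in the texture
               (const Uint8 *)srcPixels + y * srcPitch,      // source row
               width * 4);                                   // assuming 4 bytes per pixel
    SDL_UnlockTexture(texture);
}
SDL_RenderCopy(renderer, texture, NULL, NULL);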

Drawing a minimap (game)

I'm making an RTS game and have created a 2D array containing the map tiles. I'd like to transfer this to an image (an unsigned int array, or an SDL surface?) and I would likely draw this onto a GL quad. I'd probably just use two for loops to draw the entire map each frame. The problem is, I don't know the syntax for doing this.
I'd like the map size to be flexible (probably always square), and therefore the minimap also has to be flexible.
If I can find out how to create an image from scratch (or understand how an unsigned int array can be interpreted as an image?) and draw each pixel, that would completely resolve my issue.
You can first create an SDL_Surface using SDL_CreateRGBSurface with the desired height and width of the map.
SDL_Surface *map = SDL_CreateRGBSurface(Uint32 flags, int width, int height, int bitsPerPixel, Uint32 Rmask, Uint32 Gmask, Uint32 Bmask, Uint32 Amask);
After you have the surface you can access the pixels of the surface with
map->pixels //pointer to the start of pixel data for map
When you want to resize the map, create a new SDL_Surface with the new size, scale the old pixels into it with an image-scaling algorithm, then delete the old surface and use the new one as the map.
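A minimal sketch of filling such a surface from a tile array; tiles and tileColor() here are hypothetical stand-ins for your own map data and colour lookup:

// 32-bit surface the size of the map, one pixel per tile
SDL_Surface *map = SDL_CreateRGBSurface(0, mapWidth, mapHeight, 32,
                                        0x00FF0000, 0x0000FF00, 0x000000FF, 0xFF000000);

Uint32 *px = (Uint32 *)map->pixels;
int pitchInPixels = map->pitch / 4;        // pitch is in bytes
for (int y = 0; y < mapHeight; y++)
    for (int x = 0; x < mapWidth; x++)
        px[y * pitchInPixels + x] = tileColor(tiles[y][x], map->format);

The surface can then be uploaded as a texture (or converted by SDL) and drawn onto the quad.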

Faster encoding of realtime 3d graphics with opengl and x264

I am working on a system that sends compressed video of 3D graphics rendered on a server to a client, as soon as each frame is rendered.
I already have the code working, but I feel it could be much faster (and it is already a bottleneck in the system).
Here is what I am doing:
First I grab the framebuffer
glReadBuffer( GL_FRONT );
glReadPixels( 0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, buffer );
Then I flip the framebuffer, because there is a weird issue with sws_scale (which I am using for colorspace conversion) that flips the image vertically during conversion. So I flip in advance, nothing fancy.
void VerticalFlip(int width, int height, byte* pixelData, int bitsPerPixel)
{
    byte* temp = new byte[width*bitsPerPixel];
    height--; // remember: the row index ends at height-1
    for (int y = 0; y < (height+1)/2; y++)
    {
        memcpy(temp, &pixelData[y*width*bitsPerPixel], width*bitsPerPixel);
        memcpy(&pixelData[y*width*bitsPerPixel], &pixelData[(height-y)*width*bitsPerPixel], width*bitsPerPixel);
        memcpy(&pixelData[(height-y)*width*bitsPerPixel], temp, width*bitsPerPixel);
    }
    delete[] temp;
}
Then I convert it to YUV420p
convertCtx = sws_getContext(width, height, PIX_FMT_RGB24, width, height, PIX_FMT_YUV420P, SWS_FAST_BILINEAR, NULL, NULL, NULL);
uint8_t *src[3]= {buffer, NULL, NULL};
sws_scale(convertCtx, src, &srcstride, 0, height, pic_in.img.plane, pic_in.img.i_stride);
Then I pretty much just call the x264 encoder. I am already using the zerolatency preset.
int frame_size = x264_encoder_encode(_encoder, &nals, &i_nals, _inputPicture, &pic_out);
My guess is that there should be a faster way to do this, i.e. capturing the frame and converting it to YUV420p. It would be nice to do the conversion to YUV420p on the GPU and only copy the result to system memory afterwards, and hopefully there is a way to do the color conversion without needing the flip.
If there is no better way, at least this question may help someone who is trying to do this to do it the same way I did.
First, use asynchronous texture reads with PBOs (pixel buffer objects). Using two PBOs that alternate lets the read proceed without stalling the pipeline the way a direct glReadPixels call does. In my app I got an 80% performance boost when I switched to PBOs.
Additionally, on some GPUs glGetTexImage() works faster than glReadPixels(), so try that out too.
But if you really want to take the video encoding to the next level, you can do it via CUDA using the NVIDIA Codec Library. I recently asked the same question, so that may be helpful.
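A minimal sketch of the two-PBO readback pattern; width, height and buffer are the ones from the question, everything else is an assumption (for widths that are not a multiple of 4 you may also need glPixelStorei(GL_PACK_ALIGNMENT, 1)):

// one-time setup: two PBOs sized for a tightly packed RGB frame
GLuint pbo[2];
glGenBuffers(2, pbo);
for (int i = 0; i < 2; ++i) {
    glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[i]);
    glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 3, NULL, GL_STREAM_READ);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
int index = 0;

// per frame:
int next = (index + 1) % 2;
glReadBuffer(GL_FRONT);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[index]);
glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, 0);  // async: writes into the bound PBO

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo[next]);                   // the PBO filled last frame
GLubyte *src = (GLubyte *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
if (src) {
    memcpy(buffer, src, width * height * 3);  // hand this off to sws_scale/x264 as before
    glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
}
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
index = next;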

Load bmp file as texture using auxDIBImageLoad in OpenGL

I am learning OpenGL from the NeHe tutorials. When I read lesson 22 (Bump-Mapping, Multi-texture) I ran into a problem.
When I load the logo, I need to load two bmp files: one stores the color information, and the other stores the alpha information.
Here are the two bmp files:
OpenGL_Alpha.bmp:
and OpenGL.bmp:
Here is the code:
if (Image=auxDIBImageLoad("Data/OpenGL_ALPHA.bmp")) {
    alpha=new char[4*Image->sizeX*Image->sizeY];
    for (int a=0; a<Image->sizeX*Image->sizeY; a++)
        alpha[4*a+3]=Image->data[a*3];   // ???????
    if (!(Image=auxDIBImageLoad("Data/OpenGL.bmp"))) status=false;
    for (int a=0; a<Image->sizeX*Image->sizeY; a++) {
        alpha[4*a]=Image->data[a*3];     // ??????????
        alpha[4*a+1]=Image->data[a*3+1];
        alpha[4*a+2]=Image->data[a*3+2];
    }
    glGenTextures(1, &glLogo);
    glBindTexture(GL_TEXTURE_2D, glLogo);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, Image->sizeX, Image->sizeY, 0, GL_RGBA, GL_UNSIGNED_BYTE, alpha);
    delete [] alpha;
}
My question is: why is the index into Image->data a*3?
Could someone explain this to me?
I am learning OpenGL from the NeHe tutorials. When I read lesson 22 (Bump-Mapping)
Why? The NeHe tutorials are terribly outdated, and the bump-mapping technique outlined there is completely obsolete. It has been superseded by shader-based normal mapping for well over 13 years (until 2003, texture combiners were used instead of shaders).
Also, instead of BMPs you should use an image file format better suited for textures (one with an alpha channel), like:
TGA
PNG
OpenEXR
Also the various compressed DX texture formats are a good choice for several applications.
My question is: why is the index into Image->data a*3?
It's extracting the red channel of an RGB DIB.
It's the channel offset. The RGB data is stored as three consecutive bytes per pixel. Here a selects the pixel (a group of 3 bytes: one for R, one for G, one for B).
Think of a*3 as a pointer to an array of 3 bytes:
char* myPixel = Image->data + (a*3);
char red = myPixel[0];
char green = myPixel[1];
char blue = myPixel[2];