I'm working on a project in OpenGL.
I have a polygon that is filled with a BMP image as a texture.
I can rotate the camera to look at the image from different angles, and I want to copy part of the rendered image and save it into a new BMP file.
I have a lot of unrelated code, so I will copy only the important parts.
_textureId = LoadBMP("file.bmp");
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, _textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glColor3f(1, 1, 0.7);
float BOX_SIZE = -12.0f;
glBegin(GL_QUADS);
// Texture coordinates are needed for the image to map onto the quad;
// they are presumably present in the full code but omitted from this excerpt.
glTexCoord2f(0, 0); glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, -5);
glTexCoord2f(1, 0); glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, -5);
glTexCoord2f(1, 1); glVertex3f(BOX_SIZE / 2, -BOX_SIZE / 2, 5);
glTexCoord2f(0, 1); glVertex3f(-BOX_SIZE / 2, -BOX_SIZE / 2, 5);
glEnd();
The rotation is pretty basic. Does anyone have any suggestions?
Thanks a lot.
If you want to save the output of OpenGL to a file, you will have to read back the contents of the color buffer from the GL to client memory. Then you can do whatever you want with it. The command
glReadPixels(GLint x, GLint y, GLsizei width, GLsizei height, GLenum format, GLenum type, GLvoid *data)
will read back the pixel data in a rectangle of width × height pixels beginning at (x, y) into the memory buffer located at data. Since you said you want to save it as a BMP file, you probably want GL_UNSIGNED_BYTE as the type, because BMP only supports up to 8 bits per channel. You also probably want GL_BGR or GL_BGRA as the format, as this is the native channel layout for BMP.
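As a minimal sketch of the whole round trip (assuming a current GL context; the helper name, the output file name, and the use of stb_image_write for the BMP encoding are my own choices, not from the question):

#include <algorithm>
#include <vector>
#include <GL/gl.h>
#include "stb_image_write.h" // third-party writer; define STB_IMAGE_WRITE_IMPLEMENTATION in one source file

void SaveFramebufferToBMP(int width, int height)
{
    std::vector<unsigned char> pixels(width * height * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1); // read rows tightly packed
    // stb_image_write expects RGB order, so read GL_RGB here instead of GL_BGR.
    glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, pixels.data());

    // glReadPixels returns rows bottom-to-top; flip them so the BMP is upright.
    std::vector<unsigned char> flipped(pixels.size());
    const int rowBytes = width * 3;
    for (int y = 0; y < height; ++y)
        std::copy(pixels.begin() + y * rowBytes,
                  pixels.begin() + (y + 1) * rowBytes,
                  flipped.begin() + (height - 1 - y) * rowBytes);

    stbi_write_bmp("capture.bmp", width, height, 3, flipped.data());
}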
When I rasterize out a font, my code gives me a single channel of visibility for a texture. Currently, I just duplicate this out to 4 different channels and send that as a texture. This works, but I want to avoid unnecessary memory allocations and de-allocations on the CPU.
unsigned char *bitmap = new unsigned char[width * height]; // How this is populated is not the point.
bitmap now contains a 2D graphic.
It seems this guy also has the same problem: Opengl: Use single channel texture as alpha channel to display text
I do the same thing as a workaround for now: I just multiply the array size by 4 and copy the data into it 4 times.
unsigned char* colormap = new unsigned char[width * height * 4];
int offset = 0;
for (int d = 0; d < width * height; d++)
{
    for (int i = 0; i < 4; i++)
    {
        colormap[offset++] = bitmap[d];
    }
}
When I multiply it out, I use:
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(gltype, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, colormap);
And get the result I want (screenshot omitted).
When I use only the single channel:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexParameteri(gltype, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(gltype, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_UNSIGNED_BYTE, bitmap);
And get a result (screenshot omitted) that has no transparency, only red, etc., which makes it hard to colorize later.
Instead of having to do what I feel are unnecessary allocations on the CPU side, I'd like to tell OpenGL: "Hey, you're getting just one channel. Multiply it out for all 4 color channels."
Is there a command for that?
In your shader, it's trivial enough to just broadcast the r component to all four channels:
vec4 vals = texture(tex, coords).rrrr;
If you don't want to modify your shader (perhaps because you need to use the same shader for 4-channel textures too), then you can apply a texture swizzle mask to the texture:
GLint swizzleMask[] = {GL_RED, GL_RED, GL_RED, GL_RED};
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, swizzleMask);
Any mechanism that reads from the texture will then get the value defined by the red component of that texture in all four channels.
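For text specifically, a common variant (my assumption about the desired result, not something stated in the question) is to map the single channel to alpha and force RGB to white, so the glyph can be tinted by the vertex or uniform color and blended over the background (requires GL 3.3 or ARB_texture_swizzle, like the swizzle above):

GLint textSwizzle[] = { GL_ONE, GL_ONE, GL_ONE, GL_RED }; // RGB = 1.0, A = red channel
glTexParameteriv(GL_TEXTURE_2D, GL_TEXTURE_SWIZZLE_RGBA, textSwizzle);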
I'm currently searching for bottlenecks in my code and it turns out the GUI is one of them. Well, not actually the GUI but rather the dynamic text that is drawn there.
Initialization
if (FT_Init_FreeType(&m_FreeType))
    throw Helpers::ExceptionWithMsg("Could not init freetype lib");
if (FT_New_Face(m_FreeType, "res\\fonts\\FreeSans.ttf", 0, &m_FontFace))
    throw Helpers::ExceptionWithMsg("Could not open font");
m_ShaderID = ... // Loads the corresponding shader
m_TextColorLocation = glGetUniformLocation(m_ShaderID, "color");
m_CoordinatesLocation = glGetAttribLocation(m_ShaderID, "coord");
glGenBuffers(1, &m_VBO);
FT_Set_Pixel_Sizes(m_FontFace, 0, m_FontSize);
glyph = m_FontFace->glyph;
glGenTextures(1, &m_Texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_Texture);
// We require 1 byte alignment when uploading texture data
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
// Linear filtering usually looks best for text
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Clamping to edges is important to prevent artifacts when scaling
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glUseProgram(m_ShaderID);
glUniform4f(m_TextColorLocation, m_TextColor.x, m_TextColor.y, m_TextColor.z, m_TextColor.w);
glUseProgram(0);
What I do: I initialize FreeType, load the font, and initialize the shader and all uniforms.
Then I create the VBO for the texture coordinates, set the pixel size for the font, and get the glyph.
Now I generate the texture, activate it, and bind it. I want to set all the parameters here once, along with the uniform that never changes.
Rendering:
glUseProgram(m_ShaderID);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_Texture);
// Linear filtering usually looks best for text
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// Set up the VBO for our vertex data
glEnableVertexAttribArray(m_CoordinatesLocation);
glBindBuffer(GL_ARRAY_BUFFER, m_VBO);
glVertexAttribPointer(m_CoordinatesLocation, 4, GL_FLOAT, GL_FALSE, 0, 0);
GLfloat cursorPosX = m_X;
GLfloat cursorPosY = m_Y;
for (size_t i = 0; i < m_Text.size(); ++i)
{
    // If loading a char fails, just continue
    if (FT_Load_Char(m_FontFace, m_Text[i], FT_LOAD_RENDER))
        continue;
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, glyph->bitmap.width, glyph->bitmap.rows, 0, GL_ALPHA, GL_UNSIGNED_BYTE, glyph->bitmap.buffer);
    // Calculate the vertex and texture coordinates
    GLfloat x2 = cursorPosX + glyph->bitmap_left * m_SX;
    GLfloat y2 = -cursorPosY - glyph->bitmap_top * m_SY;
    GLfloat w = glyph->bitmap.width * m_SX;
    GLfloat h = glyph->bitmap.rows * m_SY;
    PointStruct box[4] =
    {
        { x2, -y2, 0, 0 },
        { x2 + w, -y2, 1, 0 },
        { x2, -y2 - h, 0, 1 },
        { x2 + w, -y2 - h, 1, 1 }
    };
    // Draw the character on the screen
    glBufferData(GL_ARRAY_BUFFER, sizeof box, box, GL_DYNAMIC_DRAW);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    // Advance the cursor to the start of the next character
    cursorPosX += glyph->advance.x / 64 * m_SX;
    cursorPosY += glyph->advance.y / 64 * m_SY;
}
glDisableVertexAttribArray(m_CoordinatesLocation);
glDeleteTextures(1, &m_Texture);
glDisable(GL_BLEND);
glUseProgram(0);
Setting the shader and stuff is obvious.
For each render call I activate the texture, bind it, and enable the VBO I store my texture coordinates in. Then I iterate over every character in the text and load it with FT_Load_Char.
I then specify the texture with glTexImage2D, calculate vertex and texture coordinates, and draw everything.
That seems highly inefficient, but I have found no way to improve the performance while still getting readable text.
I tried setting the texture parameters just once in the init, but then all chars render as boxes.
I tried changing GL_DYNAMIC_DRAW to GL_STATIC_DRAW; it made little difference.
What else can I do?
The text I render is dynamic; it changes (or may change) each frame, so I'm kind of stuck.
I measure this pass with a timer query. If I do not render the dynamic text the measured time is very low; if I do, it gets very high, and there is not much else going on in this pass; it's just drawing the GUI.
What really bothers me
One thing I really don't understand (maybe it's the sunny day...):
If I do not set linear filtering in the render method I get strange cube glyphs, but why is that? OpenGL is a state machine, and the texture parameters are set on the currently bound texture. So if I set the min and mag filters to GL_LINEAR in the initialization, why isn't that enough?
If I remove those 2 lines in the render I get much better performance from the query (much lower numbers), but it doesn't draw anything readable.
This is absolutely going to be slow.
For each render call I activate the texture, bind it, and enable the VBO I store my texture coordinates in. Then I iterate over every character in the text and load it with FT_Load_Char. I then specify the texture with glTexImage2D, calculate vertex and texture coordinates, and draw everything.
The problem, unfortunately, is hard. Here is the method I use:
There is one texture, with the GL_R8 format, which stores glyphs.
Whenever a new glyph is needed, it is added to the texture. This is done by calling FT_Render_Glyph() and copying the result into the texture buffer. If the new glyph doesn't fit, the whole glyph texture is resized and repacked. (I use the skyline algorithm for packing glyphs since it's simple.)
If any new glyphs have been added, then I call glTexSubImage2D(). The code should be structured so that this is only called once per frame.
To render text, I create a VBO that contains vertex and texture coordinates for all the quads necessary to render a piece of text. (Please understand that "quad" means two triangles, not GL_QUADS.) A minimal sketch of this step follows the list below.
So, when you change what text you want to render,
You have to update the VBO, but only once per frame
You might have to update the texture, but only once per frame, and this will probably happen less frequently as the glyph texture fills up with the glyphs you use.
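Here is a minimal sketch of the VBO-building step (the GlyphInfo record, its fields, and appendGlyphQuad are my own illustrative names; the answer does not prescribe them):

#include <vector>

// Hypothetical record filled in when a glyph is packed into the atlas.
struct GlyphInfo {
    float u0, v0, u1, v1;   // atlas texture coordinates of the glyph
    float xoff, yoff, w, h; // placement relative to the pen position
    float advance;          // horizontal pen advance
};

struct Vertex { float x, y, u, v; };

// Append the six vertices (two triangles) for one glyph; the caller uploads
// the whole vector with glBufferData once per frame and draws it in one call.
void appendGlyphQuad(std::vector<Vertex>& out, const GlyphInfo& g,
                     float penX, float penY)
{
    float x0 = penX + g.xoff, y0 = penY + g.yoff;
    float x1 = x0 + g.w,      y1 = y0 + g.h;
    Vertex quad[6] = {
        { x0, y0, g.u0, g.v0 }, { x1, y0, g.u1, g.v0 }, { x0, y1, g.u0, g.v1 },
        { x1, y0, g.u1, g.v0 }, { x1, y1, g.u1, g.v1 }, { x0, y1, g.u0, g.v1 },
    };
    out.insert(out.end(), quad, quad + 6);
}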
A good way to prototype this kind of system is to render all of the glyphs in a font into the texture at first, but this doesn't work well if you end up using multiple fonts and styles, or if you want to render Chinese, Korean, or Japanese text.
Additional considerations are line breaking, glyph substitution, kerning, bidi, general problems with international text, how to specify styling, et cetera. I recommend using HarfBuzz in combination with FreeType. HarfBuzz handles difficult glyph substitution and positioning issues. None of this is strictly necessary if your program has English text only.
There are some libraries that do all of this, but I have not used them.
An alternative method, if you want to cut the Gordian knot, is to embed a web browser like Chromium (Awesomium, WebKit, Gecko; many choices) in your application, and farm out all text rendering to that.
Your bottleneck is probably the many draw calls. First, you upload the texture inside your draw routine; instead, provide a texture from which you can map characters onto quad positions, and then replace the following:
// Calculate the vertex and texture coordinates
GLfloat x2 = cursorPosX + glyph->bitmap_left * m_SX;
GLfloat y2 = -cursorPosY - glyph->bitmap_top * m_SY;
GLfloat w = glyph->bitmap.width * m_SX;
GLfloat h = glyph->bitmap.rows * m_SY;
PointStruct box[4] =
{
    { x2, -y2, 0, 0 },
    { x2 + w, -y2, 1, 0 },
    { x2, -y2 - h, 0, 1 },
    { x2 + w, -y2 - h, 1, 1 }
};
// Draw the character on the screen
glBufferData(GL_ARRAY_BUFFER, sizeof box, box, GL_DYNAMIC_DRAW);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
with code that pre-processes your text, produces one larger PointStruct buffer (don't forget to adjust your texture coordinates for the look-up texture), and draws multiple characters per draw call; a sketch follows.
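A minimal sketch of the batched submission (boxes is an assumed name; switching from one GL_TRIANGLE_STRIP per character to GL_TRIANGLES, six vertices per character, lets one call draw the whole string):

// Pre-pass: fill one buffer with six vertices (two triangles) per character.
std::vector<PointStruct> boxes;
// ... append per-character vertices here, analogous to the loop above ...

// Single upload and single draw call for the whole string:
glBufferData(GL_ARRAY_BUFFER, boxes.size() * sizeof(PointStruct),
             boxes.data(), GL_DYNAMIC_DRAW);
glDrawArrays(GL_TRIANGLES, 0, (GLsizei)boxes.size());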
There are two easy options to improve performance.
Create a single texture that has all the characters you will need to render, at set offsets. Your text string then becomes a model (buffer object) that references the correct series of offsets to build the string. This has the disadvantage that you will not be able to do kerning or any other fancy font joining.
Use Pango and Cairo to fully render your text as a single bitmap. This bitmap will have the text formatted with correct kerning and joining. You then only upload and draw a single texture for your entire text output.
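If you go the Pango/Cairo route, a minimal sketch might look like this (the surface size, the fixed sample string, and the omitted font selection and error handling are simplifications of mine; CAIRO_FORMAT_ARGB32 is premultiplied BGRA on little-endian machines, hence the GL_BGRA upload):

#include <pango/pangocairo.h>

cairo_surface_t* surf = cairo_image_surface_create(CAIRO_FORMAT_ARGB32, 256, 64);
cairo_t* cr = cairo_create(surf);
PangoLayout* layout = pango_cairo_create_layout(cr);
pango_layout_set_text(layout, "Hello, world", -1);
pango_cairo_show_layout(cr, layout); // Pango does shaping, kerning, joining
cairo_surface_flush(surf);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
             cairo_image_surface_get_width(surf),
             cairo_image_surface_get_height(surf),
             0, GL_BGRA, GL_UNSIGNED_BYTE,
             cairo_image_surface_get_data(surf));
g_object_unref(layout);
cairo_destroy(cr);
cairo_surface_destroy(surf);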
I am using OpenGL. I can load TGA files properly, but for some reason when I render JPG files, I do not see them correctly.
This is what the image is supposed to look like (screenshot omitted), and this is what it actually looks like (screenshot omitted). Why is it stretched? Is it because of the coordinates?
Here is the code I am using for drawing.
void Renderer::DrawJpg(GLuint tex, int xi, int yq, int width, int height) const
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2i(0, 0); glVertex2i(0 + xi, 0 + yq);
    glTexCoord2i(0, 1); glVertex2i(0 + xi, height + yq);
    glTexCoord2i(1, 1); glVertex2i(width + xi, height + yq);
    glTexCoord2i(1, 0); glVertex2i(width + xi, 0 + yq);
    glEnd();
}
This is how I am loading the image:
imagename = s;
ILboolean success;
ilInit();
ilGenImages(1, &id);
ilBindImage(id);
success = ilLoadImage((const ILstring)imagename.c_str());
if (success)
{
    // Convert every colour component into an unsigned byte. If your image
    // contains an alpha channel you can replace IL_RGB with IL_RGBA.
    success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE);
    if (!success)
    {
        printf("image conversion failed.");
    }
    glGenTextures(1, &id);
    glBindTexture(GL_TEXTURE_2D, id);
    width = ilGetInteger(IL_IMAGE_WIDTH);
    height = ilGetInteger(IL_IMAGE_HEIGHT);
    glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP), ilGetInteger(IL_IMAGE_WIDTH),
                 ilGetInteger(IL_IMAGE_HEIGHT), 0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE,
                 ilGetData());
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);     // wrap mode
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);     // wrap mode
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR); // linear filtering
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // linear filtering
I should probably mention that some images did get rendered properly. I thought it was because width != height, but that is not the case; some images with width != height also load fine.
For other images I still get this problem.
You probably need to call
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before uploading the texture data with glTexImage2D.
From the reference pages:
GL_UNPACK_ALIGNMENT
Specifies the alignment requirements for the start of each pixel row in memory. The allowable values are 1 (byte-alignment), 2 (rows aligned to even-numbered bytes), 4 (word-alignment), and 8 (rows start on double-word boundaries).
The default value for the alignment is 4 and your image loading library probably returns pixel data with byte-aligned rows, which explains why some of your images look OK (when the width is a multiple of four).
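A minimal sketch of the fix in the context of the loading code above (only the glPixelStorei line is new):

glPixelStorei(GL_UNPACK_ALIGNMENT, 1); // DevIL probably returns byte-aligned rows
glTexImage2D(GL_TEXTURE_2D, 0, ilGetInteger(IL_IMAGE_BPP),
             ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
             0, ilGetInteger(IL_IMAGE_FORMAT), GL_UNSIGNED_BYTE, ilGetData());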
Always try to make the image width and height powers of two, because some GPUs support textures only in POT (power-of-two) resolutions (for example 128x128 or 512x512, but not 123x533 or 128x532).
And I think that here, instead of GL_REPEAT, you should use GL_CLAMP_TO_EDGE :)
GL_REPEAT is used when your texture coordinates go above 1.0f; GL_CLAMP_TO_EDGE handles that too, but guarantees the image will fill the polygon without unwanted lines at the edges (GL_REPEAT lets linear filtering bleed across the edges).
Remember to try out code where floats are used (sample from comment) :)
Here is a good explanation: http://open.gl/textures :)
Here is my code to load a texture. I have tried to load a file using this example; it is a GIF file. Can GIF files be loaded, or is it only raw files that can be loaded?
void setUpTextures()
{
    printf("Set up Textures\n");
    // This is the array that will contain the image color information.
    // 3 represents red, green and blue color info.
    // 512 is the height and width of the texture.
    unsigned char earth[512 * 512 * 3];
    // This opens your image file.
    FILE* f = fopen("/Users/Raaj/Desktop/earth.gif", "r");
    if (f) {
        printf("file loaded\n");
    } else {
        printf("no load\n");
        fclose(f);
        return;
    }
    fread(earth, 512 * 512 * 3, 1, f);
    fclose(f);
    glEnable(GL_TEXTURE_2D);
    // Here 1 is the texture id.
    // The texture id is different for each texture (duh?)
    glBindTexture(GL_TEXTURE_2D, 1);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    // In this line you only supply the last argument, which is your color info array,
    // and the dimensions of the texture (512).
    glTexImage2D(GL_TEXTURE_2D, 0, 3, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, earth);
    glDisable(GL_TEXTURE_2D);
}
void Draw()
{
    glEnable(GL_TEXTURE_2D);
    // Here you specify WHICH texture you will bind to your coordinates.
    glBindTexture(GL_TEXTURE_2D, 1);
    glColor3f(1, 1, 1);
    double n = 6;
    glBegin(GL_QUADS);
    glTexCoord2d(0, 50); glVertex2f(n / 2, n / 2);
    glTexCoord2d(50, 0); glVertex2f(n / 2, -n / 2);
    glTexCoord2d(50, 50); glVertex2f(-n / 2, -n / 2);
    glTexCoord2d(0, 50); glVertex2f(-n / 2, n / 2);
    glEnd();
    // Do not forget this line, or the rest of the colors in your
    // program will get messed up!!!
    glDisable(GL_TEXTURE_2D);
}
And all I get is this (screenshot omitted). Can I know why?
Basically, no, you can't just give arbitrary texture formats to GL; it only wants pixel data, not encoded files.
Your code, as posted, clearly declares an array for 24-bit RGB data, but then you open and attempt to read that much data from a GIF file. GIF is a compressed and palettised format, complete with header information etc., so that's never going to work.
You need to use an image loader to decompress the file into raw pixels.
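As a minimal sketch with stb_image (my choice of loader, not one the answer prescribes; the path is the one from the question):

#include "stb_image.h" // define STB_IMAGE_IMPLEMENTATION in one source file

int w, h, channels;
// Decode the GIF into raw 24-bit RGB pixels; stb_image does the decompression.
unsigned char* pixels = stbi_load("/Users/Raaj/Desktop/earth.gif", &w, &h, &channels, 3);
if (pixels)
{
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);
    stbi_image_free(pixels);
}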
Also, your texture coordinates don't look right. There are four vertices, but only 3 distinct coordinates are used, and 2 adjacent coordinates are diagonally opposite each other. Even if your texture were loaded correctly, that's unlikely to be what you want.
I have an image in OpenGL that I am attempting to apply a simple HSB filter to. The user selects a hue value, I shade the image appropriately, display it, and everyone is happy. The problem I am running into is that the code I have inherited that worked on a previous system (Solaris, presuming OpenGL 2.1) does not work on our current system (RHEL 5, OpenGL 3.0).
Right now, the image appears in grey-scale, no matter what saturation is set to. However, brightness does seem to be acting appropriately. The relevant code has been reproduced below:
// imageData - unsigned char[3*width*height]
// (red|green|blue)Channel - unsigned char[width*height]
// brightnessBias - float in range [-1/3,1/3]
// hsMatrix - float[4][4] Described by algorithm from
// http://www.graficaobscura.com/matrix/index.html
// (see Hue Rotation While Preserving Luminance)
glDrawPixels(width, height, format, GL_UNSIGNED_BYTE, imageData);
// Split into RGB channels
glReadPixels(0, 0, width, height, GL_RED, GL_UNSIGNED_BYTE, redChannel);
glReadPixels(0, 0, width, height, GL_GREEN, GL_UNSIGNED_BYTE, greenChannel);
glReadPixels(0, 0, width, height, GL_BLUE, GL_UNSIGNED_BYTE, blueChannel);
// Redraw and blend RGB channels with scaling and bias
glPixelZoom(1.0, 1.0);
glRasterPos2i(0, height);
glPixelTransferf(GL_RED_BIAS, brightnessBias);
glPixelTransferf(GL_GREEN_BIAS, brightnessBias);
glPixelTransferf(GL_BLUE_BIAS, brightnessBias);
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][0]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][0]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][0]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, redChannel);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][1]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][1]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][1]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, greenChannel);
glPixelTransferf(GL_RED_SCALE, hsMatrix[0][2]);
glPixelTransferf(GL_GREEN_SCALE, hsMatrix[1][2]);
glPixelTransferf(GL_BLUE_SCALE, hsMatrix[2][2]);
glDrawPixels(width, height, GL_LUMINANCE, GL_UNSIGNED_BYTE, blueChannel);
// Reset pixel transfer parameters
glDisable(GL_BLEND);
glPixelTransferf(GL_RED_SCALE, 1.0f);
glPixelTransferf(GL_GREEN_SCALE, 1.0f);
glPixelTransferf(GL_BLUE_SCALE, 1.0f);
glPixelTransferf(GL_RED_BIAS, 0.0f);
glPixelTransferf(GL_GREEN_BIAS, 0.0f);
glPixelTransferf(GL_BLUE_BIAS, 0.0f);
The brightness control works as intended; however, when the glPixelTransferf(GL_*_SCALE) calls are left in, the image is displayed in greyscale. Compounding all of this is the fact that I have no prior experience with OpenGL, so I find a lot of links for what I presume are more modern techniques that I simply can't make sense of.
EDIT:
I believe the theory behind what was being done was a hack at doing the matrix multiplication through the draw calls: because GL_LUMINANCE treats the one value as the value for all three components, if you follow the components through that drawing, you expect:
// After glDrawPixels(..., redChannel)
new_red = red*hsMatrix[0][0]
new_green = red*hsMatrix[1][0]
new_blue = red*hsMatrix[2][0]
// After glDrawPixels(..., greenChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1]
// After glDrawPixels(..., blueChannel)
new_red = red*hsMatrix[0][0] + green*hsMatrix[0][1] + blue*hsMatrix[0][2]
new_green = red*hsMatrix[1][0] + green*hsMatrix[1][1] + blue*hsMatrix[1][2]
new_blue = red*hsMatrix[2][0] + green*hsMatrix[2][1] + blue*hsMatrix[2][2]
Because it was turning out greyscale anyway, and based on a similar-ish example, I thought I might need to do the glPixelTransfer calls before calling glDrawPixels, but that was amazingly slow.
Wow, what the hell is that?!
For your question, I'd replace GL_LUMINANCE in your 3 glDrawPixels calls with GL_RED, GL_GREEN and GL_BLUE respectively.
However:
glPixelTransfer is bad
glDrawPixels is bad
Is there a single reason why you're not using a super-simple fragment shader to do the conversion? It's a simple matrix multiplication, and you're on OpenGL 3.0...
Create a texture from imageData; this needs to be done only once.
Make a shader that reads the color from the texture, multiplies it by the color conversion matrix, and displays it.
Bind the computed color matrix as a uniform.
Draw a fullscreen quad. Even a 5-year-old card will get 500 fps out of this.
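A minimal sketch of the fragment shader this suggests, embedded as a C++ string for compiling at startup (the uniform names, the #version choice for a GL 3.0 context, and folding the brightness bias into the shader are my assumptions):

// GLSL 1.30 matches a GL 3.0 context.
const char* hsbFragmentSource = R"GLSL(
#version 130
uniform sampler2D image;      // texture created once from imageData
uniform mat4 hsMatrix;        // hue/saturation matrix, re-uploaded when the user changes it
uniform float brightnessBias; // the bias the glPixelTransferf calls used to apply
in vec2 texCoord;
out vec4 fragColor;
void main()
{
    vec4 color = texture(image, texCoord);
    vec3 shifted = (hsMatrix * vec4(color.rgb, 1.0)).rgb + vec3(brightnessBias);
    fragColor = vec4(shifted, 1.0);
}
)GLSL";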