OpenGL Texture Loading

Evening everyone,
I'm going around in circles here but I think I'm looking in the wrong places.
My question is: how would I go about loading an image and applying it to a primitive in OpenGL? For example, loading a BMP or JPG and applying it to a glutSolidSphere.
Would the solution be limited to one platform or could it work across all?
Thanks for your help

Well, if you want to, you can write your own BMP loader. Here is the specification and some code. Otherwise, I happen to have a TGA loader here. Either way, the loader will return the data in an unsigned char array, which is what OpenGL calls GL_UNSIGNED_BYTE.
To make an OpenGL texture from this array, you first define a variable that will serve as a reference to that texture in OpenGL's memory.
GLuint textureid;
Then, you need to tell OpenGL to make space for a new texture:
glGenTextures(1, &textureid);
Then, you need to bind that texture as the currently used texture.
glBindTexture(GL_TEXTURE_2D, textureid);
Finally, you need to tell OpenGL where the data for the current texture is. Note that the format argument (here GL_RGBA) must match the layout of the pixel data your loader produced; use GL_RGB if there is no alpha channel.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, your_data);
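One easy-to-miss detail: the default minification filter expects mipmaps, so a texture with only level 0 uploaded usually samples as plain white. A minimal sketch of the usual fix, setting the filters right after the upload:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // no mipmaps required with GL_LINEAR
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);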
Then, when you render a primitive, you apply it to the primitive by calling glBindTexture again:
glEnable(GL_TEXTURE_2D);   // texturing must be enabled in the fixed-function pipeline
glBindTexture(GL_TEXTURE_2D, textureid);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0);
glVertex3f(0.0, 0.0, 0.0);
glTexCoord2f(1.0, 0.0);
glVertex3f(1.0, 0.0, 0.0);
glTexCoord2f(1.0, 1.0);
glVertex3f(1.0, 1.0, 0.0);
glTexCoord2f(0.0, 1.0);
glVertex3f(0.0, 1.0, 0.0);
glEnd();
However, any time you apply the texture to a primitive, you need to have texture coordinate data along with vertex data, and glutSolidSphere does not generate texture coordinate data. To texture a sphere, either generate it yourself, or call texgen functions, or use shaders.
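If you would rather not build the sphere yourself, here is a minimal sketch that swaps glutSolidSphere for a GLU quadric, since gluQuadricTexture makes GLU emit texture coordinates for you (textureid is the texture created above):
GLUquadric *quad = gluNewQuadric();
gluQuadricTexture(quad, GL_TRUE);        // let GLU generate texture coordinates
gluQuadricNormals(quad, GLU_SMOOTH);     // smooth normals so lighting still works
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureid);
gluSphere(quad, 1.0, 32, 32);            // radius, slices, stacks
gluDeleteQuadric(quad);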

There are an infinite number of ways to texture map an image onto a sphere. You could look into the glTexGen commands to automatically generate texture coordinates for a GLUT sphere.
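For instance, a minimal sketch using sphere-map texgen; note this gives an environment-map style wrap rather than a latitude/longitude unwrap:
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureid);
glutSolidSphere(1.0, 32, 32);            // now picks up texture coordinates from texgen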

It might be more complex than you'd expect.
First of all, I'm quite sure you can't texture a sphere drawn using glutSolidSphere, as that function doesn't specify texture coordinates for the vertices it draws. So you'll need to code a method to draw a sphere either vertex by vertex or using a vertex buffer and be sure you specify texture coordinates for each vertex.
Once this is done, you need to load your images. OpenGL does not provide functions to load images from files and, if I remember correctly, GLU and GLUT do not either. So you'll have to either read the image file yourself, or use a library that does so. DevIL is one of them.
Once you have the pixel data of the image, you need to create a new OpenGL texture using it. Some image loading libraries can do this for you. DevIL can.
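For example, with DevIL's ILUT helper the whole load-and-create step can be a single call; a sketch, assuming ilInit()/iluInit()/ilutInit() and ilutRenderer(ILUT_OPENGL) have been called once at startup, and "image.png" stands in for your file:
GLuint tex = ilutGLLoadImage((char *)"image.png");   // loads the file and creates a GL texture in one step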
I highly recommend googling for some tutorials, but please do not use NeHe tutorials, their quality is generally quite poor.

Ned has described it nicely. You should read the NeHe tutorials, where each step is described in detail. As others have pointed out, texturing a quad would be a good start; a sphere may be a bit tricky.

That's funny, TGAs were the first type of image file I was able to load successfully into a texture. Unfortunately OpenGL does not have its own built-in API functions to read files directly, so an external library or your own loading code is needed. The TGA format is straightforward, and Ned has provided some good code for it.

Related

How to blank my OpenGL texture

I created an OpenGL (GL_TEXTURE_2D) texture and made an OpenCL image2d_t buffer out of it using clCreateFromGLTexture(). I run my OpenCL kernel to draw to the texture, wrapping the kernel call in clEnqueueAcquireGLObjects and clEnqueueReleaseGLObjects, and then I display the result in OpenGL by doing this (I'm trying to simply draw my framebuffer texture to the window with no scaling; is this even the right way to do it?):
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(0.0f, 0.0f);
glTexCoord2f(1.0, 0.0); glVertex2f(1920.0f, 0.0f);
glTexCoord2f(1.0, 1.0); glVertex2f(1920.0f, 1080.0f);
glTexCoord2f(0.0, 1.0); glVertex2f(0.0f, 1080.0f);
glEnd();
SDL_GL_SwapWindow(window);
It works great except for one problem: I can't figure out how to fill the texture with zeroes in order to blank it, so I see old pixels wherever no new ones were written.
Apparently there's no way to blank an OpenCL image object from the host, so I'd have to do it in OpenGL (before calling clEnqueueAcquireGLObjects of course), so I tried this:
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);
However, this doesn't do anything. It does do something on screen when I comment out the GL_QUADS block above; it's as if glClear acts not on the texture itself but on the screen directly. It's confusing, and I'm not very familiar with OpenGL. How do I blank my texture?
OpenGL's glClear clears the currently bound framebuffer. To make it operate on a texture, create a framebuffer object, attach the texture to it, bind the framebuffer object for drawing operations, and clear it (which will then clear the texture).
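A minimal sketch of that, assuming the texture id is texId and a GL 3.0+ (or ARB_framebuffer_object) context:
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texId, 0);
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClear(GL_COLOR_BUFFER_BIT);           // this now clears the texture
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer before drawing the quad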

OpenGL - Is there an easier way to fill the window with a texture, instead of using a VBO, etc.?

My OpenGL window is drawn like this:
glClearColor(0.3f, 0.4f, 0.3f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
I want to use a texture to fill up the window.
Is there an easier way to do that, instead of creating another VBO and EBO besides the ones I'm already using for my triangles? After all, there is already glClearColor filling the background...
The most direct and generally most efficient way to draw a texture to the window is by using glBlitFramebuffer().
To use this, you need to create an FBO, and attach your texture texId to it:
GLuint fboId = 0;
glGenFramebuffers(1, &fboId);
glBindFramebuffer(GL_READ_FRAMEBUFFER, fboId);
glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
GL_TEXTURE_2D, texId, 0);
Note that the code above bound GL_READ_FRAMEBUFFER, since we want to use this as the source of the blit.
Then, to copy the content:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0); // if not already bound
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
GL_COLOR_BUFFER_BIT, GL_NEAREST);
This is for the case where texture and window have the same size. Otherwise, you can specify different sizes in the first 8 arguments, and may want to use GL_LINEAR for the last parameter.
Using glBlitFramebuffer() has a few advantages over drawing a window sized textured quad:
It needs fewer API calls.
You don't need to write a shader for the copy operation.
You don't need to bind a different shader program, which can reduce overhead.
The driver may have a more optimized code path for the operation, compared to using an app provided shader and draw call.
Many GPUs have dedicated units for blitting data, which can be more efficient than the programmable shader units. They can also potentially run in parallel to the general purpose programmable part of the GPU, allowing the copy to be executed in parallel with rendering. If that applies, the performance gain can be very substantial.
In one word: No.
Well, in legacy OpenGL there'd be glDrawPixels, but this function was never well supported and is dead slow on most implementations. You'd better forget that I told you about it. It has also been removed from modern OpenGL and never existed in OpenGL ES.
There are already some answers to this question, but I want to add some more alternatives, for completeness:
1. attributeless rendering
With modern GL, you can render completely without vertex attributes. You can put the 4 2D coordinates of the full-screen rect directly into the vertex shader as a const array and index it via gl_VertexID:
// VERTEX SHADER
#version 150 core
out vec2 v_tex;
const vec2 pos[4]=vec2[4](vec2(-1.0, 1.0),
vec2(-1.0,-1.0),
vec2( 1.0, 1.0),
vec2( 1.0,-1.0));
void main()
{
v_tex=0.5*pos[gl_VertexID] + vec2(0.5);
gl_Position=vec4(pos[gl_VertexID], 0.0, 1.0);
}
// FRAGMENT SHADER
#version 150 core
in vec2 v_tex;
uniform sampler2D texSampler;
out vec4 color;
void main()"
{
color=texture(texSampler, v_tex);
}
If your texture exactly matches the resolution of your viewport (so you are not scaling the texture at all), you can completely remove the v_tex varying and use color=texelFetch(texSampler, ivec2(gl_FragCoord.xy), 0) in the FS, as #datenwolf suggested in his comment.
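That variant of the fragment shader would look roughly like this (a sketch; the vertex shader stays the same minus the v_tex output):
// FRAGMENT SHADER (texelFetch variant)
#version 150 core
uniform sampler2D texSampler;
out vec4 color;
void main()
{
color = texelFetch(texSampler, ivec2(gl_FragCoord.xy), 0);
}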
In any case, you still need some VAO bound, even if no attributes are enabled in it. So this method requires you to do the following once during initialization:
Create and compile the shaders and link them to the program
Create a new VAO name by a glGenVertexArrays() call
And for drawing, you have to:
Bind the texture you want to draw
Use the program
Bind the (still empty) VAO
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4)
You might also be able to simply re-use the currently bound VAO. As the shader does not access any attributes, it does not matter what data your VBOs provide, and which attributes are enabled currently.
This method requires you to switch the shader, which isn't exactly cheap either, so it might be better to just switch the buffer bindings and keep the current shader. But you might need to switch the shader anyway.
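Put together, the per-frame draw is only a handful of calls; a sketch where prog, vao and tex stand for whatever names you created during initialization:
glUseProgram(prog);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);       // texture to draw; the sampler uniform defaults to unit 0
glBindVertexArray(vao);                  // empty VAO, no attributes enabled
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);   // the 4 corners come from gl_VertexID in the shader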
2. nvidia-specific extension
NVidia provides a specific extension for the task of drawing a texture to the screen: NV_draw_texture. This introduces the glDrawTextureNV() function, which allows drawing a texture without having to change any other GL state. Quoting from the overview section of the extension spec:
While this functionality can be obtained in unextended OpenGL by drawing a rectangle and using a fragment shader to do a texture lookup, DrawTextureNV() is likely to have better power efficiency on implementations supporting this extension. Additionally, use of this extension frees the application developer from having to set up specialized shaders, transformation matrices, vertex attributes, and various other state in order to render the rectangle.
The drawback of this method is of course that it is nvidia-specific, so it is probably of less practical use in a general GL application.
You can render your texture to a fullscreen quad using an orthographic projection:
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture);
glDisable(GL_LIGHTING);
// Set up orthographic projection
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glOrtho(0, width, 0, height, -1, 1);
// Render a quad
glBegin(GL_QUADS);
glTexCoord2f(0,0); glVertex2f(0, 0);
glTexCoord2f(0,1); glVertex2f(0, height);
glTexCoord2f(1,1); glVertex2f(width, height);
glTexCoord2f(1,0); glVertex2f(width, 0);
glEnd();
// Restore the projection matrix and switch back to modelview
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
glDisable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
Render this into your framebuffer instead of glClearColor.

Is it possible to clear just certain textures in a framebuffer with multi target rendering?

I have a framebuffer object with N textures bound to it for multi-target rendering. At a certain point, I want to clear the content of some of those textures, but not all of them.
If I call
glClearColor(0.0, 0.0, 0.0, 0.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
every texture bound to the FBO is going to be cleared (am I right?).
Is there a way to do this on specific draw buffers/textures?
The GL_COLOR_BUFFER_BIT in the glClear call will clear all of the active draw color buffers, as specified via glDrawBuffers. So you could change the draw buffers before executing a clear.
But that's needless state changing. You can simply call glClearBuffer, which will clear a particular buffer.
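For example, a minimal sketch that clears only color attachment i of the currently bound FBO (the second argument is the index into your glDrawBuffers list, not the attachment enum):
const GLfloat zero[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
glClearBufferfv(GL_COLOR, i, zero);   // clears draw buffer i only; the other attachments stay untouched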
Yes, it will clear all of them. You can mask out buffers for the clear with glColorMask, though: http://www.opengl.org/sdk/docs/man/xhtml/glColorMask.xml
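A sketch of that approach using the indexed variant glColorMaski (GL 3.0+), assuming you want draw buffer 1 to survive the clear:
glColorMaski(1, GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);   // disable color writes to draw buffer 1
glClear(GL_COLOR_BUFFER_BIT);                              // clears the remaining draw buffers
glColorMaski(1, GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);       // restore writes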

loading textures in C++ OpenGL using DevIL

Using C++, I'm trying to load a texture into OpenGL using DevIL. After scrounging around for different code segments, I have a bit of code done (shown below), but it doesn't seem to work completely.
Loading a texture (Part of a Texture2D class):
void Texture2D::LoadTexture(const char *file_name)
{
unsigned int image_ID;
ilInit();
iluInit();
ilutInit();
ilutRenderer(ILUT_OPENGL);
image_ID = ilutGLLoadImage((char*)file_name);
sheet.texture_ID = image_ID;
sheet.width = ilGetInteger(IL_IMAGE_WIDTH);
sheet.height = ilGetInteger(IL_IMAGE_HEIGHT);
}
This compiles and works fine. I do realise that I should only do the ilInit(), iluInit(), and ilutInit() once, but if I remove those lines the program instantly breaks upon loading any image (compiles fine, but errors on runtime).
Displaying the texture in OpenGL (Part of the same class):
void Texture2D::Draw()
{
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glPushMatrix();
u = v = 0;
// this is the origin point (the position of the button)
VectorXYZ point_TL; // Top Left
VectorXYZ point_BL; // Bottom Left
VectorXYZ point_BR; // Bottom Right
VectorXYZ point_TR; // Top Right
/* For the sake of simplicity, I've removed the code calculating the 4 points of the Quad. Assume that they are found correctly. */
glColor3f(1,1,1);
// bind the appropriate texture frame
glBindTexture(GL_TEXTURE_2D, sheet.texture_ID);
// draw the image as a quad the size of the first loaded image
glEnable(GL_TEXTURE_2D);
glBegin(GL_QUADS);
glTexCoord2f (0, 0);
glVertex3f (point_TL.x, point_TL.y, point_TL.z); // Top Left
glTexCoord2f (0, 1);
glVertex3f (point_BL.x, point_BL.y, point_BL.z); // Bottom Left
glTexCoord2f (1, 1);
glVertex3f (point_BR.x, point_BR.y, point_BR.z); // Bottom Right
glTexCoord2f (1, 0);
glVertex3f (point_TR.x, point_TR.y, point_TR.z); // Top Right
glEnd();
glPopMatrix();
glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);
}
Currently, the quad shows up, but it's completely white (the background colour it's given). The image I'm loading exists and is loaded fine (verified using the loaded size values).
Another few things I should note:
1) I am using a depth buffer. I've heard this doesn't go well with GL_BLEND?
2) I would really like to use the ilutGLLoadImage function.
3) I appreciate example code, as I'm a newbie to openGL and DevIL as a whole.
Yes, that's a known problem: there can be issues with ilutGLLoadImage().
Try doing things manually:
Load the image using ilLoadImage
Generate the OpenGL texture handle using glGenTextures
Upload the image to OpenGL with glTexImage2D and ilGetData() (a sketch follows below)
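A rough sketch of that manual path, assuming you convert the image to RGBA first (error checking omitted):
ILuint img;
ilGenImages(1, &img);
ilBindImage(img);
ilLoadImage(file_name);                          // DevIL loads the file
ilConvertImage(IL_RGBA, IL_UNSIGNED_BYTE);       // normalize the pixel layout
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);   // no mipmaps uploaded, so avoid the mipmap default
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA,
ilGetInteger(IL_IMAGE_WIDTH), ilGetInteger(IL_IMAGE_HEIGHT),
0, GL_RGBA, GL_UNSIGNED_BYTE, ilGetData());
ilDeleteImages(1, &img);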
See this link for a working solution
http://r3dux.org/tag/ilutglloadimage/
I know this solution seems to be "a little" complicated, but nobody knows how much time you would spend fighting a bug hidden deep inside DevIL.
Another way of fixing things: check your GL texture setup code. Anything wrong in the filtering setup can be a reason for GL_INVALID_OPERATION.
We've run into the "white texture" issue many times while programming on old ATI cards.
Oh! The biggest guess: non-power-of-two textures. Is your texture file 2^N by 2^N, or something different?
To use non-power-of-two textures you have to rely on GL extensions (e.g. ARB_texture_non_power_of_two).
And the other one: are you using the textures in the same thread or in another one? Remember that you should call glGenTextures() and glBindTexture()/glBegin/glEnd in the same thread, the one that owns the GL context.

Color coded picking problem in OpenGL

I am making a game, actually a very basic replica of Minecraft, for a class project of mine. I'm stuck in the picking process right now, which would enable me to destroy and create blocks in the game environment.
I've been trying to use OpenGL's own picking mode without any success, and building my own ray picker using math libraries seems too large a task for a project of this size. So I've decided to use the color-coded picking method, which consists of rendering every pickable object in a different color, then reading the color at the mouse position and using it to identify the picked object.
My current interface is just a 3D rendering of many boxes stacked, creating a terrain-like structure. Since I've done no texture mapping yet, all the boxes are shades of grey (lighting enabled).
Now, time for some actual code:
This is the initialization part, enabling texturing, lighting etc.
glEnable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH);
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
When a mouse button is clicked, I try to get the color at the mouse cursor's position (always the middle of the window, actually) by:
glDisable(GL_LIGHTING);
glDisable(GL_TEXTURE_2D);
glDisable(GL_DITHER);
glDisable(GL_LIGHT0);
glDisable(GL_LIGHT1);
renderColors();
GLubyte pixels[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, (void *)pixels);
glEnable(GL_TEXTURE_2D);
glEnable(GL_LIGHTING);
glEnable(GL_DITHER);
glEnable(GL_LIGHT0);
glEnable(GL_LIGHT1);
The problem is that the disables don't seem to take effect: I always get the RGB values of different shades of grey in my pixels array.
What could be the problem?
Perhaps you forgot to clear the color and depth buffers, so your rendered picking colors are either z-fighting with the previous frame or not rendered at all (if the depth test is GL_LESS). Try adding a buffer swap after your renderColors() code and look at what actually gets rendered.
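A rough sketch of the usual color-picking pass, assuming renderColors() draws every pickable block in its own flat color with lighting, texturing and dithering disabled, and window_height is a stand-in for your window size:
glClearColor(1.0f, 1.0f, 1.0f, 1.0f);                  // reserved "nothing picked" color
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);    // clear both buffers before the pick pass
renderColors();
glFlush();
GLubyte pixel[3];
// glReadPixels uses a bottom-left origin, so the mouse y coordinate usually needs flipping
glReadPixels(x, window_height - y - 1, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);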