Issue using glTexCoordPointer() - C++

I'm fairly new to OpenGL (and GLSL) and I have an issue using glTexCoordPointer().
I have the texture loaded and it renders on the object (a single quad) correctly, but I also get another quad appearing that is a single colour, not part of the loaded texture.
The arrays are defined as follows:
static const GLfloat obj_vert_buf[] = {
-1, 0, -1,
-1, 0, 1,
1, 0, 1,
1, 0, -1
};
static const GLfloat obj_tex_buf[] = {
0, 0,
0, 1,
1, 1,
1, 0
};
And the relevant excerpt from the draw function:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY_EXT);
glGenBuffers(1, &obj_id);
glTexCoordPointer(2, GL_FLOAT, 0, obj_tex_buf);
glVertexPointer(3, GL_FLOAT, 0, obj_vert_buf);
glDrawArrays(GL_QUADS, 0, sizeof(obj_vert_buf) / sizeof(GLfloat));
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY_EXT);
To my understanding, glTexCoordPointer()'s first argument specifies the number of elements per vertex, which would be two, as in:
glTexCoord2f(0.0, 0.0);
The second argument is the type, GLfloat.
The third argument is the stride, i.e. the offset between each set of elements belonging to consecutive vertices, so zero for a tightly packed array (I have also tried 2 * sizeof(GLfloat), with no change).
And the fourth argument is a pointer to the start of the data, i.e. obj_tex_buf.
The quad renders and is textured correctly, but I also get another random shape coming off its centre, textured incorrectly; any thoughts would be great. The additional quad isn't visible without the glTexCoordPointer() line.

From the glDrawArrays docs:
count
Specifies the number of indices to be rendered.
The count is a vertex count, not a float count: sizeof(obj_vert_buf) / sizeof(GLfloat) is 12, so OpenGL reads past your four vertices and draws extra geometry from whatever memory follows. Thus you have to call glDrawArrays(GL_QUADS, 0, 4);
Please note that GL_QUADS isn't officially supported anymore as of OpenGL 3.1.
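For reference, a minimal corrected version of the draw excerpt; this is just a sketch that keeps the deprecated client-array / GL_QUADS path from the question and only derives the vertex count from the array size, assuming the same obj_vert_buf and obj_tex_buf as above:
// Four 3-component vertices -> count = number of floats / 3.
const GLsizei vertex_count = sizeof(obj_vert_buf) / (3 * sizeof(GLfloat));
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, obj_tex_buf);
glVertexPointer(3, GL_FLOAT, 0, obj_vert_buf);
glDrawArrays(GL_QUADS, 0, vertex_count); // 4, not 12
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);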

Related

Passing values of different types to shader

I am using GLEW32 with GLFW and writing code in C++. I have encountered some problems passing values to a shader. I have successfully passed vec2, uvec3, etc. Now I want to pass multiple values for each vertex:
uvec2 (or vec2 - not very important) - X and Y position;
uvec4 - RGBA color. I could also use an int and decode RGBA from an int32, but uvec4 would be more convenient :)
But there's another problem: I can set different attribute types using glVertexAttribIPointer() or glVertexAttribPointer():
glVertexAttribIPointer(0, 2, GL_UNSIGNED_SHORT, ...);
glEnableVertexAttribArray(0);
glVertexAttribIPointer(1, 4, GL_UNSIGNED_BYTE, ...);
glEnableVertexAttribArray(1);
But I cannot pass values of different types to glBufferData():
glBufferData(GL_ARRAY_BUFFER, sizeof(array), array, GL_DYNAMIC_DRAW); // Just one array!!!
I tried to do this using uniforms, but the code was so bulky and inefficient that I gave it up. Is there a way to do such manipulations "properly"?
I found a solution in Kai Burjack's comment: I made a structure to hold the data and send that instead. The code is:
struct dataToSend
{
GLushort x, y;
GLubyte r, g, b, a;
};
...
glVertexAttribIPointer(0, 2, GL_UNSIGNED_SHORT, sizeof(dataToSend), (GLvoid *) 0);
glEnableVertexAttribArray(0);
glVertexAttribIPointer(1, 4, GL_UNSIGNED_BYTE, sizeof(dataToSend), (GLvoid *) (2 * sizeof(GLushort)));
glEnableVertexAttribArray(1);
...
// (pixels) x y r g b a
dataToSend data [4] = {{width, height, 255, 255, 0, 255}, // Corner 1 (Up-Right)
{width, 0, 255, 0, 255, 255}, // Corner 2 (Down-Right)
{0, 0, 0, 255, 0, 0}, // Corner 3 (Down-Left)
{0, height, 0, 255, 255, 255}}; // Corner 4 (Up-Left)
glBufferData(GL_ARRAY_BUFFER, sizeof(data), (GLvoid *) data, GL_DYNAMIC_DRAW);
It was a kind of weird experience to use GLvoid and point to a structure instead of an array, but this lets one pass almost any data to shaders. The resulting image is here:
Rendered image
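A slightly more defensive variant of the same struct-based layout uses offsetof() so the byte offsets stay in sync with the struct definition; this is only a sketch under the same assumptions as the code above (attribute locations 0 and 1 and the field order are taken from the post):
#include <cstddef> // offsetof
struct dataToSend
{
    GLushort x, y;       // position in pixels
    GLubyte  r, g, b, a; // color
};
...
glVertexAttribIPointer(0, 2, GL_UNSIGNED_SHORT, sizeof(dataToSend),
                       (GLvoid *) offsetof(dataToSend, x));
glEnableVertexAttribArray(0);
glVertexAttribIPointer(1, 4, GL_UNSIGNED_BYTE, sizeof(dataToSend),
                       (GLvoid *) offsetof(dataToSend, r));
glEnableVertexAttribArray(1);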

Qt5 QOpenGL does not draw anything

I'm using a Widget inheriting QGLWidget to show an OpenGL viewport inside my Qt application.
The Widget does nothing more than create three CRenderVectors and draw them every frame.
A CRenderVector is simply a group of a QVector3D, a QOpenGLVertexArrayObject and three QOpenGLBuffers for vertices, indices and colors.
The vertex buffer objects get created with
const GLfloat color[] = {1, 0, 0, 1, 0, 0};
color_buffer.create();
color_buffer.setUsagePattern(QOpenGLBuffer::StaticDraw);
color_buffer.bind();
color_buffer.allocate(color, sizeof(color));
color_buffer.release();
and likewise for vertex_buffer, using 0, 0, 0 and vec().x(), vec().y(), vec().z() as its data.
The vertex array object gets created via
vertex_array.create();
vertex_array.bind();
vertex_buffer.bind();
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_TRUE, 0, (void*)0);
index_buffer.bind();
color_buffer.bind();
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_TRUE, 0, (void*)0);
vertex_array.release();
Drawing of a vector looks like
vertex_array.bind();
glDrawElements(GL_LINES, elements, GL_UNSIGNED_INT, (void*)0);
vertex_array.release();
The problem is that I can't see anything in the viewport except the clear color, although I think I'm using everything as shown in the official Qt documentation. Where did I misunderstand the Qt or OpenGL documentation?
The stripped project for QtCreator can be downloaded at mediafire.

Texture coordinates on different size of rectangles

I mapped texture coordinates like:
static float texCoord[] = {
0, 1,
1, 1,
1, 0,
0, 0
};
And I draw it like this:
void Rectangle::Draw()
{
const float vertices[] = {
x, y,
x + width, y,
x, y - height,
x + width, y - height
};
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glColor3ub(255, 255, 255);
glVertexPointer(2, GL_FLOAT, 0, vertices);
glTexCoordPointer(2, GL_FLOAT, 2, texCoord);
if (IsTypeHorizontal()) glBindTexture(GL_TEXTURE_2D, texture_H);
else /* (IsTypeVertical())*/ glBindTexture(GL_TEXTURE_2D, texture_V);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
A texture drawn on a vertical rectangle (height > width) looks right, but on a horizontal one (height < width) the texture appears inverted. Even if I use separate texture coordinate sets, texCoord_H and texCoord_V, the drawn image is still inverted.
What do I still need to know? What is the problem in my code?
P.S. I upload the texture to OpenGL using SOIL.
Try:
static float texCoord[] = {
0, 1,
1, 1,
0, 0,
1, 0
};
Most OpenGL newbies get confused by the fact that, with the usual set of projection, transform and viewport mapping parameters, OpenGL considers the image origin to be in the lower left, with coordinates increasing toward the right and up. This is in contrast to most computer graphics systems, which assume the origin to be in the upper left and the vertical dimension to increase downwards.
That is probably your issue.
After changing and swapping texture coordinates many times, I realized I had wasted my time. The real problem is here:
glTexCoordPointer(2, GL_FLOAT, 2, texCoord);
The third parameter (which is the stride) causes the bug; it should be 0.
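For completeness, a corrected sketch of that call: the stride is a byte offset between consecutive coordinate sets, so for a tightly packed array it is either 0 or the full size of one set:
glTexCoordPointer(2, GL_FLOAT, 0, texCoord); // 0 = tightly packed
// or, equivalently, spelled out in bytes:
glTexCoordPointer(2, GL_FLOAT, 2 * sizeof(float), texCoord);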

sending array to shader with glUniformMatrix

I have a model for which all matdexes (read: matrix indices) are 0, 1, or 2, roughly evenly distributed. bonebends[matdex[whichvertex]] tells us which matrix should be used for a given vertex.
initializing, c++:
std::vector< GLint > matdex; //initialized to 0, 1, or 2 for each vertex loaded, not shown for brevity
std::vector<glm::mat4> bonebends;
int frames=0;
bonebends.push_back(glm::mat4(1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1)); //identity matrices initially
bonebends.push_back(glm::mat4(1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1));
bonebends.push_back(glm::mat4(1,0,0,0,0,1,0,0,0,0,1,0,0,0,0,1)); // thats 3 bones
ver_ind = glGetAttribLocation(shaderID, "VER_IND");
boneID = glGetUniformLocation(shaderID, "BONE_MATRIX[0]");
glGenBuffers(1, &ind_buff);
glBindBuffer(GL_ARRAY_BUFFER, ind_buff);
glBufferData(GL_ARRAY_BUFFER, matdex.size() * sizeof(GLint), &matdex[0], GL_STATIC_DRAW);
periodically in main loop, c++:
glEnableVertexAttribArray(3); //IND
glBindBuffer(GL_ARRAY_BUFFER, ind_buff);
glVertexAttribPointer( ver_ind, 1, GL_INT, GL_FALSE, 0, (void*)0 );
//ALSO LOOK HERE
glUniformMatrix4fv(boneID, 3 , GL_FALSE, &bonebends[0][0][0]);
frames+=1;
float cs=cos(3.14*2.0*(frames/100.0));
float sn=sin(3.14*2.0*(frames/100.0));
// LOOK HERE
bonebends[0]=glm::mat4(
cs , 0,-sn,0 ,
0 , 1, 0,0 ,
sn, 0, cs,0 ,
0 , 0, 0,1 );
in the shader, glsl:
layout(location = 3) in int VER_IND;
uniform mat4 BONE_MATRIX[3];
in the main function of shader, glsl:
gl_Position = BONE_MATRIX[VER_IND] * vec4(VER_POS,1); //VER_POS is the vertex's coordinates
I hoped this would make all the vertices with a matdex of 0 rotate and the rest remain stationary. Instead it displays the model (which is otherwise rendered correctly) spinning, as if every matdex value were equal to 0, which a little cout-ing tells me they are not. I don't know how to debug VER_IND in the shader. I tried:
using uint instead of int for all incarnations of matdex, which didn't make a difference;
changing BONE_MATRIX[VER_IND] to BONE_MATRIX[0] in the shader, which made it crash (???);
changing the second and fourth arguments of glUniformMatrix4fv(boneID, 3, GL_FALSE, &bonebends[0][0][0]); in various ways, which didn't help;
changing the matdex values so that none of them were 0, after which it still rotates as if the shader were only using BONE_MATRIX[0];
changing the line below // LOOK HERE to bonebends[1]=glm::mat4(, which results in the model not spinning at all.
From this I concluded that it is only ever using the first of BONE_MATRIX[3], probably because the shader isn't receiving the rest correctly. I can post the whole thing, but it's a bit lengthy. I would like to know what I am doing wrong and how to get information back from the shader.
There are a lot of problems.
First, matdex is empty. So you're not getting any actual information.
Second,
layout(location = 3) in int VER_IND;
glVertexAttribPointer( ver_ind, 1, GL_INT, GL_FALSE, 0, (void*)0 );
These two things cannot work together. glVertexAttribPointer can only feed floating-point inputs. If you use integer formats like GL_INT, they will be converted to floats. If you want to feed integer inputs, you must use glVertexAttribIPointer (note the I).
Or just make VER_IND a float. And pass GLubytes instead of GLints. Really, there's no point in taking up all of that memory.
Third:
glEnableVertexAttribArray(3); //IND
glVertexAttribPointer( ver_ind, 1, GL_INT, GL_FALSE, 0, (void*)0 );
The first parameter of both of these functions should be the same, if you're trying to enable ver_ind.
Fourth:
//ALSO LOOK HERE
glUniformMatrix4fv(boneID, 3 , GL_FALSE, &bonebends[0][0][0]);
I don't see where you use glUseProgram to set the program you wanted to change the uniform data on.
Fifth:
// LOOK HERE
bonebends[0]=glm::mat4(
Changing a value after you have uploaded it does not magically cause the uploaded value to change.
There may be (and likely are) more problems, but the code you posted is incomplete and I don't intend to be your personal debugger. At the very least, start with code that you know works, then make the smallest change leading towards what you want. Get that working, then make another change. And so on. You did a bunch of stuff all at once, and it didn't work out.
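Putting the attribute- and uniform-related points together, a minimal sketch of what the fixed setup might look like; it assumes ver_ind really resolves to location 3, that shaderID is the linked program, and it reuses the variable names from the question:
// Integer attribute: same index in the enable and pointer calls,
// and glVertexAttribIPointer (note the I), which has no "normalized" argument.
glBindBuffer(GL_ARRAY_BUFFER, ind_buff);
glEnableVertexAttribArray(ver_ind);
glVertexAttribIPointer(ver_ind, 1, GL_INT, 0, (void*)0);
// Uniforms go to the currently bound program, and editing bonebends on the
// CPU does not change what was already uploaded, so update first, then upload:
bonebends[0] = glm::mat4( cs, 0, -sn, 0,
                           0, 1,   0, 0,
                          sn, 0,  cs, 0,
                           0, 0,   0, 1 );
glUseProgram(shaderID);
glUniformMatrix4fv(boneID, 3, GL_FALSE, &bonebends[0][0][0]);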

OpenGL Vertex Buffer doesn't draw anything in golang

I tried to use this tutorial with Golang: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-2-the-first-triangle/
The Go version opens the window and makes the background blue, but it doesn't show the triangle. The C version does show it.
This is the code in Go:
err := glfw.Init()
if err != nil {
log.Fatal("Failed to init GLFW: " + err.Error())
}
err = glfw.OpenWindow(1024, 768, 0,0,0,0, 32,0, glfw.Windowed)
if err != nil {
log.Fatal("Failed to open GLFW window: " + err.Error())
}
if gl.Init() != 0 {
log.Fatal("Failed to init GL")
}
gl.ClearColor(0.0, 0.0, 0.3, 0.0)
// create vertexbuffer
gVertexBufferData := []float32{-1.0,-1.0,0.0, 1.0,-1.0,0.0, 0.0,1.0,0.0}
vertexBuffer := gl.GenBuffer()
vertexBuffer.Bind(gl.ARRAY_BUFFER)
gl.BufferData(gl.ARRAY_BUFFER, len(gVertexBufferData), gVertexBufferData, gl.STATIC_DRAW)
for {
// clear screen
gl.Clear(gl.COLOR_BUFFER_BIT)
// first attribute buffer: vertices
var vertexAttrib gl.AttribLocation = 0
vertexAttrib.EnableArray()
vertexBuffer.Bind(gl.ARRAY_BUFFER)
var f float32 = 0.0
vertexAttrib.AttribPointer(
3, // size
false, // normalized?
0, // stride
&f) // array buffer offset
// draw the triangle
gl.DrawArrays(gl.TRIANGLES, 0, 3)
vertexAttrib.DisableArray()
glfw.SwapBuffers()
}
And this is the code in C, which works:
if(!glfwInit())
return -1;
if(!glfwOpenWindow( 1024, 768, 0,0,0,0, 32,0, GLFW_WINDOW ))
return -1;
if(glewInit() != GLEW_OK)
return -1;
glClearColor(0.0f, 0.0f, 0.3f, 0.0f);
GLuint VertexArrayID;
glGenVertexArrays(1, &VertexArrayID);
glBindVertexArray(VertexArrayID);
static const GLfloat g_vertex_buffer_data[] = {
-1.0f, -1.0f, 0.0f,
1.0f, -1.0f, 0.0f,
0.0f, 1.0f, 0.0f,
};
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(g_vertex_buffer_data), g_vertex_buffer_data, GL_STATIC_DRAW);
while(1) {
glClear( GL_COLOR_BUFFER_BIT );
// 1rst attribute buffer : vertices
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glVertexAttribPointer(
0,
3, // size
GL_FLOAT, // type
GL_FALSE, // normalized?
0, // stride
(void*)0 // array buffer offset
);
// Draw the triangle !
glDrawArrays(GL_TRIANGLES, 0, 3); // From index 0 to 3 -> 1 triangle
glDisableVertexAttribArray(0);
// Swap buffers
glfwSwapBuffers();
}
Maybe I give vertexAttrib.AttribPointer() the wrong arguments, because I'm not sure what to give it instead of (void*)0. I tried nil, but that caused the application to crash. &gVertexBufferData[0] doesn't work either.
I'm using github.com/banthar/gl as glew-wrapper, go 1.0.2 and ubuntu 12.04 amd64.
EDIT / update:
glGetError() doesn't report any errors.
I had the same problem and I managed to fix it after looking at your post, so first of all thanks a lot.
I managed to display a triangle by using the work branch of the banthar bindings, with this call to AttribPointer:
vertexAttrib.AttribPointer(
3, // size
gl.FLOAT, //type
false, // normalized?
0, // stride
nil) // array buffer offset
and by passing the size in bytes to BufferData.
[...]
data := []float32{0, 1, 0, -1, -1, 0, 1, -1, 0}
[...]
gl.BufferData(gl.ARRAY_BUFFER, len(data)*4, data, gl.STATIC_DRAW)
[...]
There is probably a better way to pass the right length.
I recently came into a similar issue with the Golang OpenGL bindings, and this question was one of the only references to it I could find.
However, none of the existing answers solved my problem, as the bindings appear to be slightly different now in 2015 than they looked in 2012.
The part of my solution that hasn't already been covered by the existing answers involves the gl.BufferData() call made when creating a VBO.
A problem-producing example of the code in question would look like this:
[...]
vertices := []float32{0, 1, 0, -1, -1, 0, 1, -1, 0}
[...]
gl.BufferData(
gl.ARRAY_BUFFER,
len(vertices)*4,
unsafe.Pointer(&vertices),
gl.STATIC_DRAW)
[...]
One solution already provided recommended to change this code to something like this:
[...]
vertices := []float32{0, 1, 0, -1, -1, 0, 1, -1, 0}
[...]
gl.BufferData(
gl.ARRAY_BUFFER,
len(vertices)*4,
vertices,
gl.STATIC_DRAW)
[...]
However, the bindings I used had a different function signature from the ones used here, and errored with:
cannot use vertices (type []float32) as type unsafe.Pointer in argument to gl.BufferData
The solution I ended up finding, and wanted to put here so nobody else has to go through the headache it took to figure out, looks like this:
[...]
vertices := []float32{0, 1, 0, -1, -1, 0, 1, -1, 0}
[...]
gl.BufferData(
gl.ARRAY_BUFFER,
len(vertices)*4, //len(vertices)*int(reflect.TypeOf(vertices).Elem().Size()),
gl.Ptr(vertices),
gl.STATIC_DRAW)
[...]
I also included a commented-out alternative to len(vertices)*4, which produces the exact same result but derives the '4' from the element type of the slice (float32 in this case).
Footnotes
The bindings I used:
github.com/go-gl/gl/all-core/gl
github.com/go-gl/glfw/v3.1/glfw
My OpenGL context was created with these hints:
primaryMonitor := glfw.GetPrimaryMonitor()
vidMode := primaryMonitor.GetVideoMode()
glfw.WindowHint(glfw.ContextVersionMajor, 3)
glfw.WindowHint(glfw.ContextVersionMinor, 3)
glfw.WindowHint(glfw.OpenGLProfile, glfw.OpenGLCoreProfile)
glfw.WindowHint(glfw.OpenGLForwardCompatible, glfw.True)
glfw.WindowHint(glfw.RedBits, vidMode.RedBits)
glfw.WindowHint(glfw.GreenBits, vidMode.GreenBits)
glfw.WindowHint(glfw.BlueBits, vidMode.BlueBits)
glfw.WindowHint(glfw.RefreshRate, vidMode.RefreshRate)
glfw.WindowHint(glfw.Visible, glfw.False)
I was having the same problem; it ended up being that, for some reason, calling glfw.OpenWindowHint was breaking it. It would request the correct context, my OpenGL version would match, and I would get no errors at all, but it wouldn't work. If I leave out the hint, I get a 4.3 context and everything seems to work.
Even if I request 4.3 in the hint, it doesn't work. If I request something else, my OpenGL version string matches, but once again it doesn't work.
Hope this helps.
I don't know how the OpenGL bindings to Go look exactly, but I can tell you at least this much:
The last parameter to glVertexAttribPointer should be the byte offset from the start of the buffer object, so (in your case) 0.
Note: The C type of that parameter generally should be int, as it's a byte offset. Instead, it's void* for legacy reasons - it used to have a different meaning before VBOs.
Instead of &f, try passing either a literal 0 or, if that doesn't work, a pointer whose value is 0. How to do that in Go is for you to figure out, since I don't grok Go; I've told you what OpenGL expects and I hope this much helps.
Also: For debugging, please check glGetError() often.
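On that last point, a small helper (a sketch of mine, in C++ since that is what the rest of the thread uses) that drains the GL error queue; sprinkling it between calls narrows down which one fails:
#include <cstdio>
// Prints and clears every pending OpenGL error; call it after suspect GL calls.
static void checkGLError(const char *where)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::fprintf(stderr, "GL error 0x%04X at %s\n", err, where);
}
// Usage:
//   glDrawArrays(GL_TRIANGLES, 0, 3);
//   checkGLError("glDrawArrays");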