I've been having problems storing texture coordinates in a VBO and then telling OpenGL to use it at render time. In the code below, what I should be getting is a nice 16x16 texture on a square I am drawing with quads. What I actually get is a big red square: the quad appears to sample only the top-left pixel of the image, which is red. Please tell me what I am doing wrong, in as much detail as you can.
public void start() {
    try {
        Display.setDisplayMode(new DisplayMode(800, 600));
        Display.create();
    } catch (LWJGLException e) {
        e.printStackTrace();
        System.exit(0);
    }

    // init OpenGL
    GL11.glMatrixMode(GL11.GL_PROJECTION);
    GL11.glLoadIdentity();
    GL11.glOrtho(0, 800, 0, 600, 1, -1);
    GL11.glMatrixMode(GL11.GL_MODELVIEW);
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glLoadIdentity();

    //loadTextures();
    TextureManager.init();
    makeCube();

    // init OpenGL here
    while (!Display.isCloseRequested()) {
        GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
        // render OpenGL here
        renderCube();
        Display.update();
    }
    Display.destroy();
}
public static void main(String[] argv) {
    Screen screen = new Screen();
    screen.start();
}

int cube;
int texture;
private void makeCube() {
    FloatBuffer cubeBuffer;
    FloatBuffer textureBuffer;

    //Tried using 0,0,16,0,16,16,0,16 for textureData did not work.
    float[] textureData = new float[]{
        0, 0,
        1, 0,
        1, 1,
        0, 1};

    textureBuffer = BufferUtils.createFloatBuffer(textureData.length);
    textureBuffer.put(texture);
    textureBuffer.flip();

    texture = glGenBuffers();
    glBindBuffer(GL_ARRAY_BUFFER, texture);
    glBufferData(GL_ARRAY_BUFFER, textureBuffer, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    float[] cubeData = new float[]{
        /*Front Face*/
        100, 100,
        100 + 200, 100,
        100 + 200, 100 + 200,
        100, 100 + 200};

    cubeBuffer = BufferUtils.createFloatBuffer(cubeData.length);
    cubeBuffer.put(cubeData);
    cubeBuffer.flip();

    cube = glGenBuffers();
    glBindBuffer(GL_ARRAY_BUFFER, cube);
    glBufferData(GL_ARRAY_BUFFER, cubeBuffer, GL_STATIC_DRAW);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
private void renderCube() {
    TextureManager.texture.bind();
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

    glBindBuffer(GL_ARRAY_BUFFER, texture);
    glTexCoordPointer(2, GL_FLOAT, 0, 0);
    glBindBuffer(GL_ARRAY_BUFFER, cube);
    glVertexPointer(2, GL_FLOAT, 0, 0);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glDrawArrays(GL_QUADS, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
I believe your problem is in the argument to textureBuffer.put() in this code fragment:
textureBuffer = BufferUtils.createFloatBuffer(textureData.length);
textureBuffer.put(texture);
textureBuffer.flip();
texture is a variable of type int which has not even been initialized yet; you only use it later as a buffer name. Because FloatBuffer.put() takes a float, the int is silently widened and you end up putting a single 0.0f into the buffer, so after flip() only that one value is uploaded by glBufferData. The texture coordinate array is therefore essentially empty, and every vertex samples at or near (0, 0), which matches the solid top-left-texel color you are seeing. The argument should be textureData instead:
textureBuffer.put(textureData);
I normally try to focus on functionality over style when answering questions here, but I can't help it this time: IMHO, texture is a very unfortunate name for a buffer name. It's not just a style and readability question: if you had used descriptive names for the variables, you most likely would have spotted this problem immediately.
Say you named the variable for the buffer name bufferId (I call object identifiers "id", even though the official OpenGL terminology is "name"), and the buffer holding the texture coordinates textureCoordBuf. The statement in question would then become:
textureCoordBuf.put(bufferId);
which would jump out as highly suspicious from even a very superficial look at the code.
Related
I want to draw a cube and a sphere and apply a different texture to each.
I use Blender to create the scene and then export it to an .obj file, which includes the vertices, normals, UVs and faces for both objects, as well as the textures.
I have created a routine which loads all the data from the .obj file. This all works: I can load the objects and display them, but only with one texture. I have gone through pages and pages of code and posts, and 99% of them only deal with one texture on one object; those that deal with multiple textures only deal with one object, or use a very old version of OpenGL.
The one thing I haven't tried is a uniform sampler2D array in the fragment shader, but I haven't found an explanation of that approach anywhere.
My code is below:
ObjLoader *obj = new ObjLoader();
string _filepath = "objects\\" + _filename;
//bool res = obj->loadObjWithStaticColor(_filepath.c_str(), _vertices, _normals, vertex_colors, _colors, 1.0);
bool res = obj->loadObjWithTextures(_filepath.c_str(), _objects, _textures);

program = InitShader("shaders\\vshader.glsl", "shaders\\fshader.glsl");
glUseProgram(program);

GLuint vao_world_objects;
glGenVertexArrays(1, &vao_world_objects);
glBindVertexArray(vao_world_objects);

//GLuint vbo_world_objects;
//glGenBuffers(1, &vbo_world_objects);
//glBindBuffer(GL_ARRAY_BUFFER, vbo_world_objects);

NumVertices = _objects[_objects.size() - 1]._stop + 1;

for (size_t i = 0; i < _objects.size(); i++)
{
    _vertices.insert(_vertices.end(), _objects[i]._vertices.begin(), _objects[i]._vertices.end());
    _normals.insert(_normals.end(), _objects[i]._normals.begin(), _objects[i]._normals.end());
    _uvs.insert(_uvs.end(), _objects[i]._uvs.begin(), _objects[i]._uvs.end());
}

GLuint _vSize = _vertices.size() * sizeof(point4);
GLuint _nSize = _normals.size() * sizeof(point4);
GLuint _uSize = _uvs.size() * sizeof(point2);
GLuint _totalSize = _vSize + _uSize; // normals + vertices + uvs

GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, _vSize, &_vertices[0], GL_STATIC_DRAW);

GLuint uvbuffer;
glGenBuffers(1, &uvbuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
glBufferData(GL_ARRAY_BUFFER, _uSize, &_uvs[0], GL_STATIC_DRAW);

TextureID = glGetUniformLocation(program, "myTextureSampler");
TextureObjects = new GLuint[_textures.size()];
glGenTextures(_textures.size(), TextureObjects);

for (size_t i = 0; i < _textures.size(); i++)
{
    // "Bind" the newly created texture : all future texture functions will modify this texture
    glBindTexture(GL_TEXTURE_2D, TextureObjects[i]);

    // Give the image to OpenGL
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _textures[i].width, _textures[i].height, 0, GL_BGR, GL_UNSIGNED_BYTE, _textures[i]._tex_data);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}

for (size_t i = 0; i < _objects.size(); i++)
{
    if (i == 0)
    {
        glActiveTexture(GL_TEXTURE0);
    }
    else
    {
        glActiveTexture(GL_TEXTURE1);
    }
    glBindTexture(GL_TEXTURE_2D, TextureObjects[i]);

    GLuint _v_size = _objects[i]._vertices.size() * sizeof(point4);
    GLuint _u_size = _objects[i]._uvs.size() * sizeof(point2);

    GLuint vPosition = glGetAttribLocation(program, "vPosition");
    glEnableVertexAttribArray(vPosition);
    glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
    if (i == 0)
    {
        glVertexAttribPointer(vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    }
    else
    {
        glVertexAttribPointer(vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(_v_size));
    }

    GLuint vUV = glGetAttribLocation(program, "vUV");
    glEnableVertexAttribArray(vUV);
    glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
    if (i == 0)
    {
        glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
    }
    else
    {
        glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(_u_size));
    }

    if (i == 0)
    {
        glUniform1i(TextureID, 0);
    }
    else
    {
        glUniform1i(TextureID, 1);
    }
}

_scale = Scale(zoom, zoom, zoom);
_projection = Perspective(45.0, 4.0 / 3.0, 0.1, 100.0);
_view = LookAt(point4(Camera.x, Camera.y, Camera.z, 0), point4(0, 0, 0, 0), point4(0, 1, 0, 0));
_model = mat4(1.0); // identity matrix
_mvp = _projection * _view * _model;

MVP = glGetUniformLocation(program, "MVP");
theta = glGetUniformLocation(program, "theta");
Zoom = glGetUniformLocation(program, "Zoom");

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glEnable(GL_CULL_FACE);
glClearColor(1.0, 1.0, 1.0, 1.0);
I understand that I have to switch between the active textures when drawing each object, but I can't figure out how.
UPDATE
@immibis OK, I tried to do that yesterday but it didn't work; it was late and I was highly frustrated. So, just to get my thinking correct here: do I have to create a buffer every time (glGenBuffers) and then fill it, activate the texture, and then glDrawArrays? Or do I create the buffer once and then fill it every time with the different vertices and UVs for each object, set the offsets, and then call glDrawArrays for each object?
When I tried this originally, I didn't know where the
glGetAttribLocation / glEnableVertexAttribArray / glBindBuffer
calls should go. So, if I understand correctly: every time I do a transformation, like rotating around the x axis, the buffers have to be filled again etc., so the code needs to go in the display function. Is that correct?
SOLVED
Ok, so thanks to immibis's comments, I got looking in a different direction. I was staring the whole time at how the data was pumped into the arrays and never even looked at glDrawArrays. Searching the web again, I came across a piece of code in a tutorial where the author explained glDrawArrays, and I saw that you can tell it which range of vertices to draw.
So then this became easy, as I originally thought it was supposed to be. I changed my code back to pumping everything into the buffers, and since I have a start and stop property on the objects returned from my loader, it was real easy to tell glDrawArrays what to do.
Thank you.
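For anyone landing here later: the pattern described above boils down to one ranged draw call per object. Here is a minimal sketch (names such as _objects, TextureObjects and TextureID follow the question's code; the _start field is an assumption, the natural counterpart of the _stop property mentioned above):

// In the display function: all vertices/UVs already live in the shared VBOs,
// so each object needs only a texture bind and a ranged draw.
for (size_t i = 0; i < _objects.size(); i++)
{
    glActiveTexture(GL_TEXTURE0);                    // one unit is enough when
    glBindTexture(GL_TEXTURE_2D, TextureObjects[i]); // rebinding per object
    glUniform1i(TextureID, 0);

    GLint first = _objects[i]._start;                // first vertex of object i
    GLsizei count = _objects[i]._stop - first + 1;   // vertex count of object i
    glDrawArrays(GL_TRIANGLES, first, count);        // draw only that range
}

With this arrangement the attribute pointers are set up once (with offset 0) and never touched again; only the texture binding and the glDrawArrays range change per object.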
I was drawing a square with coordinates (-.5, -.5), (-.5, .5), (.5, .5), (.5, -.5) and I noticed that it appeared squashed in my 800 x 600 window (which seems entirely logical).
I was trying to fix it so that the square appeared square, not rectangular. My approach was to call glOrtho() with values of left = -ar, right = ar, bottom = -1, top = 1 where ar was my aspect ratio (800/600). What I found was that any call I made to glOrtho() had no effect, including the one I was already making (I could remove it and nothing would change).
My understanding of glOrtho() is that it maps the corners of the OpenGL context to the values supplied, and stretches everything between those points to fit. Is that incorrect, or am I doing something that's preventing my call to glOrtho() from taking effect?
import org.lwjgl.BufferUtils;
import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;

import java.nio.FloatBuffer;

import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;

public class GLOrthoTestDriver {

    public static void main(String[] args) {
        try {
            Display.setDisplayMode(new DisplayMode(800, 600));
            Display.setTitle("glOrtho Test");
            Display.create();
        } catch (LWJGLException e) {
            e.printStackTrace();
            Display.destroy();
            System.exit(1);
        }

        // Initialization code for OpenGL
        glMatrixMode(GL_PROJECTION);
        glOrtho(-1, 1, -1, 1, -1, 1);
        glLoadIdentity();

        // Scene setup
        int vertexBufferHandle;
        int colorBufferHandle;

        FloatBuffer vertexData = BufferUtils.createFloatBuffer(8);
        vertexData.put(new float[]
            {
                -.5f, -.5f,
                -.5f,  .5f,
                 .5f,  .5f,
                 .5f, -.5f
            }
        );
        vertexData.flip();

        FloatBuffer colorData = BufferUtils.createFloatBuffer(12);
        colorData.put(new float[]
            {
                1, 1, 1,
                1, 1, 1,
                1, 1, 1,
                1, 1, 1
            }
        );
        colorData.flip();

        vertexBufferHandle = glGenBuffers();
        colorBufferHandle = glGenBuffers();

        glBindBuffer(GL_ARRAY_BUFFER, vertexBufferHandle);
        glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
        glBufferData(GL_ARRAY_BUFFER, colorData, GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);

        while (!Display.isCloseRequested()) {
            glClear(GL_COLOR_BUFFER_BIT);
            glBindBuffer(GL_ARRAY_BUFFER, vertexBufferHandle);
            glVertexPointer(2, GL_FLOAT, 0, 0);
            glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
            glColorPointer(3, GL_FLOAT, 0, 0);
            glEnableClientState(GL_VERTEX_ARRAY);
            glEnableClientState(GL_COLOR_ARRAY);
            glDrawArrays(GL_QUADS, 0, 4);
            glDisableClientState(GL_COLOR_ARRAY);
            glDisableClientState(GL_VERTEX_ARRAY);
            Display.update();
            Display.sync(60);
        }

        glDeleteBuffers(vertexBufferHandle);
        glDeleteBuffers(colorBufferHandle);
        Display.destroy();
    }
}
Ah - your problem is that you're calling glLoadIdentity after calling glOrtho. glLoadIdentity replaces the matrix in the current mode (GL_PROJECTION) with the identity matrix, thus wiping out any previous call to glOrtho.
Try calling glLoadIdentity before glOrtho.
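Concretely, the initialization block should read (a minimal sketch; ar stands for the 800f / 600f aspect ratio described in the question):

// reset the projection matrix first, then apply the ortho volume
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(-ar, ar, -1, 1, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

With the identity loaded first, the glOrtho call survives, and the square is no longer stretched by the window's aspect ratio.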
I use part of an SSB (shader storage buffer) as a 3D matrix of linked lists. Each voxel of the matrix is a uint that gives the location of the first element of its list.
Before each rendering pass I need to re-init this matrix, but not the whole SSB. So I associated the part corresponding to the matrix with a 1D texture, to be able to unpack a buffer into it.
//Shader storage buffer
glGenBuffers(1, &m_buffer);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, m_buffer);
glBufferData(GL_SHADER_STORAGE_BUFFER,
             headerMatrixSizeInByte + linkedListSizeInByte,
             NULL,
             GL_DYNAMIC_DRAW);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, 0);

//Texture
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_1D, m_texture);
glTexBufferRange(
    GL_TEXTURE_BUFFER,
    GL_R32UI,
    m_buffer,
    0,
    headerMatrixSizeInByte);
glBindTexture(GL_TEXTURE_1D, 0);

//Unpack buffer
GLubyte* clearData = new GLubyte[headerMatrixSizeInByte];
memset(clearData, 0xff, headerMatrixSizeInByte);
glGenBuffers(1, &m_clearBuffer);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBufferData(
    GL_PIXEL_UNPACK_BUFFER,
    headerMatrixSizeInByte,
    clearData,
    GL_STATIC_COPY);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
delete[] clearData;
So this is the initialization; now here is the clear attempt:
GLuint err;
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBindTexture(GL_TEXTURE_1D, m_texture);
err = m_pFunctions->glGetError(); //no error
glTexSubImage1D(
    GL_TEXTURE_1D,
    0,
    0,
    m_textureSize,
    GL_RED_INTEGER,
    GL_UNSIGNED_INT,
    NULL);
err = m_pFunctions->glGetError(); //err GL_INVALID_VALUE
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
glBindTexture(GL_TEXTURE_1D, 0);
My questions are:
Is it possible to do what I'm attempting?
If yes, where did I screw up?
Thanks to Andon again, who got half the answer. There are two problems in the code above:
m_textureSize = 32770, which exceeds the per-dimension texture size limit of much hardware. The easy workaround is to use a 2D texture. Since I don't care about the content after the linked list in the buffer, I can write whatever I want in it; in the next rendering call it will be overwritten by the shaders.
When creating the texture, one function call was missing: glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, width, height);
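For concreteness, here is a sketch of how that corrected setup could look. This is an illustration, not the asker's final code: width and height are assumed to be chosen so that width * height covers the 32770 header texels, and the shaders are assumed to access the header through this texture image rather than through the old 1D buffer view:

// init: a 2D texture with immutable storage for the header matrix
glGenTextures(1, &m_texture);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R32UI, width, height); // the call that was missing
glBindTexture(GL_TEXTURE_2D, 0);

// per frame: stream the 0xff clear pattern out of the pixel-unpack buffer
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, m_clearBuffer);
glBindTexture(GL_TEXTURE_2D, m_texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RED_INTEGER, GL_UNSIGNED_INT, NULL); // NULL = offset 0 into the PBO
glBindTexture(GL_TEXTURE_2D, 0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);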
This is more out of curiosity than for any practical purpose: is there anything in the OpenGL specification that suggests that calling glTexImage2D many times (e.g., once per frame) is illegal? I mean illegal as in 'it could produce wrong results', not just inefficient (suppose I don't care about the performance impact of not using glTexSubImage2D instead).
The reason I'm asking is that I noticed some very odd artifacts when drawing overlapping, texture-mapped primitives that use a partly transparent texture which is re-loaded every frame using glTexImage2D (see the attached picture): after a few seconds (i.e., a few hundred frames), small rectangular black patches appear on the screen (they actually flip between black and normal on consecutive frames).
I'm attaching below the simplest example code I could write that exhibits the problem.
#include <stdio.h>

#ifndef __APPLE__
# include <SDL/SDL.h>
# include <SDL/SDL_opengl.h>
#else
# include <SDL.h>
# include <SDL_opengl.h>
#endif

/* some constants and variables that several functions use */
const int width = 640;
const int height = 480;
#define texSize 64
GLuint vbo;
GLuint tex;

/* forward declaration, creates a random texture; uses glTexSubImage2D if
   update is non-zero (otherwise glTexImage2D) */
void createTexture(GLuint label, int update);

int init()
{
    /* SDL initialization */
    if (SDL_Init(SDL_INIT_VIDEO) < 0)
        return 0;
    SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);
    if (!SDL_SetVideoMode(width, height, 0, SDL_OPENGL)) {
        fprintf(stderr, "Couldn't initialize OpenGL");
        return 0;
    }

    /* OpenGL initialization */
    glClearColor(0, 0, 0, 0);
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0, width, height, 0, -1, 1);
    glMatrixMode(GL_MODELVIEW);

    /* creating the VBO and the textures */
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, 1024, 0, GL_DYNAMIC_DRAW);
    glGenTextures(1, &tex);
    createTexture(tex, 0);

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    return 1;
}

/* draw a triangle at the specified point */
void drawTriangle(GLfloat x, GLfloat y)
{
    GLfloat coords1[12] = {0, 0, 0, 0, /**/200, 0, 1, 0, /**/200, 150, 1, 1};

    glLoadIdentity();
    glTranslatef(x, y, 0);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(coords1), coords1);
    glVertexPointer(2, GL_FLOAT, 4*sizeof(GLfloat), (void*)0);
    glTexCoordPointer(2, GL_FLOAT, 4*sizeof(GLfloat),
                      (char*)0 + 2*sizeof(GLfloat));
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

void render()
{
    glClear(GL_COLOR_BUFFER_BIT);
    drawTriangle(250, 50);
    createTexture(tex, 0);
    drawTriangle(260, 120);
    SDL_GL_SwapBuffers();
}

void cleanup()
{
    glDeleteTextures(1, &tex);
    glDeleteBuffers(1, &vbo);
    SDL_Quit();
}

int main(int argc, char* argv[])
{
    SDL_Event event;
    if (!init()) return 1;
    while (1) {
        while (SDL_PollEvent(&event))
            if (event.type == SDL_QUIT)
                return 0;
        render();
    }
    cleanup();
    return 0;
}

void createTexture(GLuint label, int update)
{
    GLubyte data[texSize*texSize*4];
    GLubyte* p;
    int i, j;

    glBindTexture(GL_TEXTURE_2D, label);
    for (i = 0; i < texSize; ++i) {
        for (j = 0; j < texSize; ++j) {
            p = data + (i + j*texSize)*4;
            p[0] = ((i % 8) > 4 ? 255 : 0);
            p[1] = ((j % 8) > 4 ? 255 : 0);
            p[2] = ((i % 8) > 4 ? 255 : 0);
            p[3] = 255 - i*3;
        }
    }
    if (!update)
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, texSize, texSize, 0, GL_RGBA,
                     GL_UNSIGNED_BYTE, data);
    else
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, texSize, texSize, GL_RGBA,
                        GL_UNSIGNED_BYTE, data);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}
Notes:
I'm using SDL, but I've seen the same happening in wxWidgets, so it's not an SDL-related problem.
If I use glTexSubImage2D instead for every frame (use update = 1 in createTexture), the artifacts disappear.
If I disable blending, there are no more artifacts.
I've been testing this on a late 2010 MacBook Air, though I doubt that's particularly relevant.
This is clearly an OpenGL implementation bug; merely calling glTexImage2D in a loop should not cause this to happen.
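The practical workaround is the one the notes above already point to: allocate the texture once with glTexImage2D and only re-specify its contents each frame. A minimal sketch using the question's own createTexture helper:

/* init(): create and allocate the texture exactly once */
glGenTextures(1, &tex);
createTexture(tex, 0);   /* update == 0: glTexImage2D allocates and fills */

/* render(), every frame: rewrite the texels in place, no reallocation */
createTexture(tex, 1);   /* update == 1: glTexSubImage2D only updates data */

Besides sidestepping the driver bug, this avoids reallocating storage a few hundred times per second, so glTexSubImage2D is the right call for per-frame updates anyway.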
I'm attempting to render a .png image as a texture, but all that is being rendered is a white square.
I give my texture a unique int ID called texID and read the pixel data into a buffer 'image' (declared in the .h file). I load my pixel buffer, do all of my OpenGL setup, and bind that pixel buffer to a texture for OpenGL. I then draw it all using glDrawElements.
Also, I initialize the texture with a size of 32x32 in its constructor, so I doubt this is related to a power-of-two size issue.
Can anybody see any mistakes in my OpenGL GL_TEXTURE_2D setup that might give me a blank white square?
#include "Texture.h"
Texture::Texture(int width, int height, string filename)
{
const char* fnPtr = filename.c_str(); //our image loader accepts a ptr to a char, not a string
printf(fnPtr);
w = width; //give our texture a width and height, the reason that we need to pass in the width and height values manually
h = height;//UPDATE, these MUST be P.O.T.
unsigned error = lodepng::decode(image,w,h,fnPtr);//lodepng's decode function will load the pixel data into image vector
//display any errors with the texture
if(error)
{
cout << "\ndecoder error " << error << ": " << lodepng_error_text(error) <<endl;
}
for(int i = 0; i<image.size(); i++)
{
printf("%i,", image.at(i));
}
printf("\nImage size is %i", image.size());
//image now contains our pixeldata. All ready for OpenGL to do its thing
//let's get this texture up in the video memory
texGLInit();
}
void Texture::texGLInit()
{
//WHERE YOU LEFT OFF: glGenTextures isn't assigning an ID to textures. it stays at zero the whole time
//i believe this is why it's been rendering white
glGenTextures(1, &textures);
printf("\ntexture = %u", textures);
glBindTexture(GL_TEXTURE_2D, textures);//evrything we're about to do is about this texture
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
//glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
//glDisable(GL_COLOR_MATERIAL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,w,h,0, GL_RGBA, GL_UNSIGNED_BYTE, &image);
//we COULD free the image vectors memory right about now.
}
void Texture::draw(point centerPoint, point dimensions)
{
glEnable(GL_TEXTURE_2D);
printf("\nDrawing block at (%f, %f)",centerPoint.x, centerPoint.y);
glBindTexture(GL_TEXTURE_2D, textures);//bind the texture
//create a quick vertex array for the primitive we're going to bind the texture to
printf("TexID = %u",textures);
GLfloat vArray[8] =
{
centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2),//bottom left i0
centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2),//top left i1
centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2),//top right i2
centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2)//bottom right i3
};
//create a quick texture array (we COULD create this on the heap rather than creating/destoying every cycle)
GLfloat tArray[8] =
{
0.0f,0.0f, //0
0.0f,1.0f, //1
1.0f,1.0f, //2
1.0f,0.0f //3
};
//and finally.. the index array...remember, we draw in triangles....(and we'll go CW)
GLubyte iArray[6] =
{
0,1,2,
0,2,3
};
//Activate arrays
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
//Give openGL a pointer to our vArray and tArray
glVertexPointer(2, GL_FLOAT, 0, &vArray[0]);
glTexCoordPointer(2, GL_FLOAT, 0, &tArray[0]);
//Draw it all
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_BYTE, &iArray[0]);
//glDrawArrays(GL_TRIANGLES,0,6);
//Disable the vertex arrays
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisable(GL_TEXTURE_2D);
//done!
/*glBegin(GL_QUADS);
glTexCoord2f(0.0f,0.0f);
glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y-(dimensions.y/2));
glTexCoord2f(0.0f,1.0f);
glVertex2f(centerPoint.x-(dimensions.x/2), centerPoint.y+(dimensions.y/2));
glTexCoord2f(1.0f,1.0f);
glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y+(dimensions.y/2));
glTexCoord2f(1.0f,0.0f);
glVertex2f(centerPoint.x+(dimensions.x/2), centerPoint.y-(dimensions.y/2));
glEnd();*/
}
Texture::Texture(void)
{
}
Texture::~Texture(void)
{
}
I'll also include the main class's init, where I do a bit more OpenGL setup before this.
void init(void)
{
    printf("\n......Hello Guy. \n....\nInitialising");
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluOrtho2D(0, XSize, 0, YSize);
    glEnable(GL_TEXTURE_2D);
    myBlock = new Block(0, 0, offset);
    glClearColor(0, 0.4, 0.7, 1);
    glLineWidth(2); // Width of the drawing line
    glMatrixMode(GL_MODELVIEW);
    glDisable(GL_DEPTH_TEST);
    printf("\nInitialisation Complete");
}
Update: adding the main function where I first set up my OpenGL window.
int main(int argc, char** argv)
{
    glutInit(&argc, argv);                        // GLUT Initialization
    glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE); // Initializing the Display mode
    glutInitWindowSize(800, 600);                 // Define the window size
    glutCreateWindow("Gem Miners");               // Create the window, with caption.
    printf("\n========== McLeanTech Systems =========\nBecoming Sentient\n...\n...\n....\nKILL\nHUMAN\nRACE \n");
    init();                                       // All OpenGL initialization

    //-- Callback functions ---------------------
    glutDisplayFunc(display);
    glutKeyboardFunc(mykey);
    glutSpecialFunc(processSpecialKeys);
    glutSpecialUpFunc(processSpecialUpKeys);
    //glutMouseFunc(mymouse);

    glutMainLoop();                               // Loop waiting for event
}
Here's the usual checklist for whenever textures come out white:
Is an OpenGL context created and bound to the current thread when attempting to load the texture?
Has a texture ID been allocated using glGenTextures?
Are the format and internal format parameters to glTex[Sub]Image… valid OpenGL tokens allowed as input for this function?
Is mipmapping being used?
YES: Supply all mipmap layers; optionally set the glTexParameteri parameters GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL, as well as GL_TEXTURE_MIN_LOD and GL_TEXTURE_MAX_LOD.
NO: Turn off mipmap filtering by setting the glTexParameteri parameter GL_TEXTURE_MIN_FILTER to GL_NEAREST or GL_LINEAR.
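For reference, a minimal setup that passes every item on this checklist might look like the following sketch (not the asker's code; it assumes a current GL context and a tightly packed w x h RGBA8 pixel buffer called pixels):

GLuint tex = 0;
glGenTextures(1, &tex);                 // a real, non-zero texture ID
glBindTexture(GL_TEXTURE_2D, tex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  // matches the tightly packed source rows
// no mipmap layers are supplied, so mipmap filtering must be off:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// a valid internal format / format / type combination:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

Two things in the posted code are worth checking against this list. If glGenTextures really does return 0, as the comment in texGLInit suspects, the most likely cause is the first item: no GL context is current yet when the Texture constructor runs. Also, glTexImage2D is handed &image, the address of the std::vector object itself; the pixel pointer should be &image[0] (or image.data()).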