I was drawing a square with coordinates (-.5, -.5), (-.5, .5), (.5, .5), (.5, -.5) and I noticed that it appeared squashed in my 800 x 600 window (which seems entirely logical).
I was trying to fix it so that the square appeared square, not rectangular. My approach was to call glOrtho() with values of left = -ar, right = ar, bottom = -1, top = 1 where ar was my aspect ratio (800/600). What I found was that any call I made to glOrtho() had no effect, including the one I was already making (I could remove it and nothing would change).
My understanding of glOrtho() is that it maps the corners of the OpenGL context to the values supplied, and stretches everything between those points to fit. Is that incorrect, or am I doing something that's preventing my call to glOrtho() from taking effect?
import org.lwjgl.BufferUtils;
import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import java.nio.FloatBuffer;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
public class GLOrthoTestDriver {
public static void main(String[] args) {
try {
Display.setDisplayMode(new DisplayMode(800, 600));
Display.setTitle("glOrtho Test");
Display.create();
} catch (LWJGLException e) {
e.printStackTrace();
Display.destroy();
System.exit(1);
}
// Initialization code for OpenGL
glMatrixMode(GL_PROJECTION);
glOrtho(-1, 1, -1, 1, -1, 1);
glLoadIdentity();
// Scene setup
int vertexBufferHandle;
int colorBufferHandle;
FloatBuffer vertexData = BufferUtils.createFloatBuffer(8);
vertexData.put(new float[]
{
-.5f, -.5f,
-.5f, .5f,
.5f, .5f,
.5f, -.5f
}
);
vertexData.flip();
FloatBuffer colorData = BufferUtils.createFloatBuffer(12);
colorData.put(new float[]
{
1, 1, 1,
1, 1, 1,
1, 1, 1,
1, 1, 1
}
);
colorData.flip();
vertexBufferHandle = glGenBuffers();
colorBufferHandle = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferHandle);
glBufferData(GL_ARRAY_BUFFER, vertexData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
glBufferData(GL_ARRAY_BUFFER, colorData, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
while (!Display.isCloseRequested()) {
glClear(GL_COLOR_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER, vertexBufferHandle);
glVertexPointer(2, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, colorBufferHandle);
glColorPointer(3, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
Display.update();
Display.sync(60);
}
glDeleteBuffers(vertexBufferHandle);
glDeleteBuffers(colorBufferHandle);
Display.destroy();
}
}
Ah - your problem is that you're calling glLoadIdentity after calling glOrtho. glLoadIdentity replaces the matrix in the current mode (GL_PROJECTION) with the identity matrix, thus wiping out any previous call to glOrtho.
Try calling glLoadIdentity before glOrtho.
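In other words, reset the projection matrix first and then multiply the orthographic projection onto it. A minimal sketch of the corrected initialization, including the aspect-ratio correction described in the question (ar is just 800/600, computed here for illustration):
float ar = 800f / 600f;          // window aspect ratio
glMatrixMode(GL_PROJECTION);
glLoadIdentity();                // start from the identity matrix...
glOrtho(-ar, ar, -1, 1, -1, 1);  // ...then apply the orthographic projection on top of it
glMatrixMode(GL_MODELVIEW);      // switch back so later calls affect the modelview matrix
glLoadIdentity();
With left = -ar and right = ar, one unit covers the same number of pixels horizontally and vertically, so the square from (-0.5, -0.5) to (0.5, 0.5) renders with equal width and height in the 800 x 600 window.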
I want to draw a cube and a sphere and apply a different texture to each.
I use Blender to create the scene and then export it to an obj file, which includes the vertices, normals, UVs and faces for both objects, as well as the textures.
I have created a routine which loads all the data from the obj file. This works: I can load the objects and display them, but only with one texture. I have gone through pages and pages of code and posts, and 99% of them only deal with one texture applied to one object; those that deal with multiple textures only deal with one object, or use a very old version of OpenGL.
The one thing I haven't tried is a uniform sampler2D array in the fragment shader, because I haven't found an explanation of that approach.
My code is below:
ObjLoader *obj = new ObjLoader();
string _filepath = "objects\\" + _filename;
//bool res = obj->loadObjWithStaticColor(_filepath.c_str(), _vertices, _normals, vertex_colors, _colors, 1.0);
bool res = obj->loadObjWithTextures(_filepath.c_str(), _objects, _textures);
program = InitShader("shaders\\vshader.glsl", "shaders\\fshader.glsl");
glUseProgram(program);
GLuint vao_world_objects;
glGenVertexArrays(1, &vao_world_objects);
glBindVertexArray(vao_world_objects);
//GLuint vbo_world_objects;
//glGenBuffers(1, &vbo_world_objects);
//glBindBuffer(GL_ARRAY_BUFFER, vbo_world_objects);
NumVertices = _objects[_objects.size() - 1]._stop + 1;
for (size_t i = 0; i < _objects.size(); i++)
{
_vertices.insert(_vertices.end(), _objects[i]._vertices.begin(), _objects[i]._vertices.end());
_normals.insert(_normals.end(), _objects[i]._normals.begin(), _objects[i]._normals.end());
_uvs.insert(_uvs.end(), _objects[i]._uvs.begin(), _objects[i]._uvs.end());
}
GLuint _vSize = _vertices.size() * sizeof(point4);
GLuint _nSize = _normals.size() * sizeof(point4);
GLuint _uSize = _uvs.size() * sizeof(point2);
GLuint _totalSize = _vSize + _uSize; // normals + vertices + uvs
GLuint vertexbuffer;
glGenBuffers(1, &vertexbuffer);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
glBufferData(GL_ARRAY_BUFFER, _vSize, &_vertices[0], GL_STATIC_DRAW);
GLuint uvbuffer;
glGenBuffers(1, &uvbuffer);
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
glBufferData(GL_ARRAY_BUFFER, _uSize, &_uvs[0], GL_STATIC_DRAW);
TextureID = glGetUniformLocation(program, "myTextureSampler");
TextureObjects = new GLuint[_textures.size()];
glGenTextures(_textures.size(), TextureObjects);
for (size_t i = 0; i < _textures.size(); i++)
{
// "Bind" the newly created texture : all future texture functions will modify this texture
glBindTexture(GL_TEXTURE_2D, TextureObjects[i]);
// Give the image to OpenGL
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, _textures[i].width, _textures[i].height, 0, GL_BGR, GL_UNSIGNED_BYTE, _textures[i]._tex_data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
}
for (size_t i = 0; i < _objects.size(); i++)
{
if (i == 0)
{
glActiveTexture(GL_TEXTURE0);
}
else
{
glActiveTexture(GL_TEXTURE1);
}
glBindTexture(GL_TEXTURE_2D, TextureObjects[i]);
GLuint _v_size = _objects[i]._vertices.size() * sizeof(point4);
GLuint _u_size = _objects[i]._uvs.size() * sizeof(point2);
GLuint vPosition = glGetAttribLocation(program, "vPosition");
glEnableVertexAttribArray(vPosition);
glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer);
if (i == 0)
{
glVertexAttribPointer(vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
}
else
{
glVertexAttribPointer(vPosition, 4, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(_v_size));
}
GLuint vUV = glGetAttribLocation(program, "vUV");
glEnableVertexAttribArray(vUV);
glBindBuffer(GL_ARRAY_BUFFER, uvbuffer);
if (i == 0)
{
glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(0));
}
else
{
glVertexAttribPointer(vUV, 2, GL_FLOAT, GL_FALSE, 0, BUFFER_OFFSET(_u_size));
}
if (i == 0)
{
glUniform1i(TextureID, 0);
}
else
{
glUniform1i(TextureID, 1);
}
}
_scale = Scale(zoom, zoom, zoom);
_projection = Perspective(45.0, 4.0 / 3.0, 0.1, 100.0);
_view = LookAt(point4(Camera.x, Camera.y, Camera.z, 0), point4(0, 0, 0, 0), point4(0, 1, 0, 0));
_model = mat4(1.0); // identity matrix
_mvp = _projection * _view * _model;
MVP = glGetUniformLocation(program, "MVP");
theta = glGetUniformLocation(program, "theta");
Zoom = glGetUniformLocation(program, "Zoom");
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glEnable(GL_CULL_FACE);
glClearColor(1.0, 1.0, 1.0, 1.0);
I understand that I have to switch between the active textures when drawing an object but I can't figure out how.
UPDATE
@immibis OK, I tried to do that yesterday but it didn't work; it was late and I was highly frustrated. So, just to get my thinking straight here: do I have to create a buffer every time (glGenBuffers), fill it, activate the texture and then call glDrawArrays? Or do I create the buffer once and then fill it each time with the different vertices and UVs for each object, set the offsets and then call glDrawArrays for each object?
When I tried this originally I didn't know where the
glGetAttribLocation / glEnableVertexAttribArray / glBindBuffer
calls should go. So if I understand correctly, every time I do a transformation like rotating around the x axis, the buffers have to be refilled, so the code needs to go in the display function. Is that correct?
SOLVED
OK, so thanks to immibis' comments, I got looking in a different direction. I had been so busy staring at how the data was pumped into the arrays that I never even looked at glDrawArrays. Searching the web again, I came across a piece of code in a tutorial where the author explained glDrawArrays, and I saw that you can tell it which range of vertices to draw.
So then this became as easy as I originally thought it was supposed to be. I changed my code back to pumping everything into the buffers, and since my loader returns a start and stop property on each object, it was really easy to tell glDrawArrays what to draw for each object.
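For reference, a minimal sketch of that per-object draw loop. The names here (objects, textureObjects, samplerLocation, start, stop) are placeholders for whatever the loader actually returns, and the GL calls are the same whether written in the question's C++ or in the LWJGL-style Java used elsewhere on this page:
for (int i = 0; i < objects.size(); i++) {
    // Bind this object's texture to unit 0 and point the sampler uniform at that unit.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, textureObjects[i]);
    glUniform1i(samplerLocation, 0);
    // Draw only this object's slice of the shared vertex/uv buffers.
    int first = objects.get(i).start;
    int count = objects.get(i).stop - objects.get(i).start + 1;
    glDrawArrays(GL_TRIANGLES, first, count);
}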
Thank you.
I'm trying to draw a continuous spline on every onTouchMoved call;
the functionality is supposed to be similar to the line drawing used in the iOS game Flight Control.
I'm using it in the following manner:
constructor init:
conPointsArray = new PointArray();
conPointsArray->initWithCapacity(DEF_ARRAY_SIZE);
lineDrawer = DrawNode::create();
the onTouchMoved callback usage:
void line::onTouchMoved(cocos2d::Touch *Touch, cocos2d::Event *Event)
{
conPointsArray->addControlPoint(Touch->getLocation());
lineDrawer->drawCardinalSpline((conPointsArray), 0.5f, 100, Color4F::BLUE);
CCLOG("on touch moved x: %f y %f", Touch->getLocation().x, Touch->getLocation().y);
}
But the app always crashes in a certain OpenGL function.
I assumed the reason is that the array is constantly changing and OpenGL is having trouble with it (and also that I keep passing the same array, which is a bad idea, but I just wanted to see how things work), so I moved the drawing call to onTouchEnded. And indeed, the line was drawn, but only (as expected) after the touch action had ended.
What is the solution/best practice for this issue?
EDIT:
This is the OpenGL function in which the code crashes:
void DrawNode::onDrawGLLine(const Mat4 &transform, uint32_t flags)
{
auto glProgram = GLProgramCache::getInstance()->getGLProgram(GLProgram::SHADER_NAME_POSITION_LENGTH_TEXTURE_COLOR);
glProgram->use();
glProgram->setUniformsForBuiltins(transform);
if (_dirtyGLLine)
{
glBindBuffer(GL_ARRAY_BUFFER, _vboGLLine);
glBufferData(GL_ARRAY_BUFFER, sizeof(V2F_C4B_T2F)*_bufferCapacityGLLine, _bufferGLLine, GL_STREAM_DRAW);
_dirtyGLLine = false;
}
if (Configuration::getInstance()->supportsShareableVAO())
{
GL::bindVAO(_vaoGLLine);
}
else
{
glBindBuffer(GL_ARRAY_BUFFER, _vboGLLine);
GL::enableVertexAttribs(GL::VERTEX_ATTRIB_FLAG_POS_COLOR_TEX);
// vertex
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_POSITION, 2, GL_FLOAT, GL_FALSE, sizeof(V2F_C4B_T2F), (GLvoid *)offsetof(V2F_C4B_T2F, vertices));
// color
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_COLOR, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(V2F_C4B_T2F), (GLvoid *)offsetof(V2F_C4B_T2F, colors));
// texcood
glVertexAttribPointer(GLProgram::VERTEX_ATTRIB_TEX_COORD, 2, GL_FLOAT, GL_FALSE, sizeof(V2F_C4B_T2F), (GLvoid *)offsetof(V2F_C4B_T2F, texCoords));
}
glLineWidth(2);
glDrawArrays(GL_LINES, 0, _bufferCountGLLine); <----------- CRASH HERE
glBindBuffer(GL_ARRAY_BUFFER, 0);
CC_INCREMENT_GL_DRAWN_BATCHES_AND_VERTICES(1,_bufferCountGLLine);
CHECK_GL_ERROR_DEBUG();
}
I've been having problems storing texture coordinates in a VBO and then telling OpenGL to use it at render time. In the code below, what I should be getting is a nice 16x16 texture on a square I am drawing with quads. However, what I get instead is just the top-left pixel of the image, which is red, so I end up with a big red square. Please tell me in detail what I am doing wrong.
public void start() {
try {
Display.setDisplayMode(new DisplayMode(800,600));
Display.create();
} catch (LWJGLException e) {
e.printStackTrace();
System.exit(0);
}
// init OpenGL
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GL11.glOrtho(0, 800, 0, 600, 1, -1);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
glEnable(GL_DEPTH_TEST);
glEnable(GL_TEXTURE_2D);
glLoadIdentity();
//loadTextures();
TextureManager.init();
makeCube();
// init OpenGL here
while (!Display.isCloseRequested()) {
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
// render OpenGL here
renderCube();
Display.update();
}
Display.destroy();
}
public static void main(String[] argv) {
Screen screen = new Screen();
screen.start();
}
int cube;
int texture;
private void makeCube() {
FloatBuffer cubeBuffer;
FloatBuffer textureBuffer;
//Tried using 0,0,16,0,16,16,0,16 for textureData did not work.
float[] textureData = new float[]{
0,0,
1,0,
1,1,
0,1};
textureBuffer = BufferUtils.createFloatBuffer(textureData.length);
textureBuffer.put(texture);
textureBuffer.flip();
texture = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, texture);
glBufferData(GL_ARRAY_BUFFER, textureBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
float[] cubeData = new float[]{
/*Front Face*/
100, 100,
100 + 200, 100,
100 + 200, 100 + 200,
100, 100 + 200};
cubeBuffer = BufferUtils.createFloatBuffer(cubeData.length);
cubeBuffer.put(cubeData);
cubeBuffer.flip();
cube = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, cube);
glBufferData(GL_ARRAY_BUFFER, cubeBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
}
private void renderCube(){
TextureManager.texture.bind();
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);
glBindBuffer(GL_ARRAY_BUFFER, texture);
glTexCoordPointer(2, GL_FLOAT, 0, 0);
glBindBuffer(GL_ARRAY_BUFFER, cube);
glVertexPointer(2, GL_FLOAT, 0, 0);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glDrawArrays(GL_QUADS, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
I believe your problem is in the argument to textureBuffer.put() in this code fragment:
textureBuffer = BufferUtils.createFloatBuffer(textureData.length);
textureBuffer.put(texture);
textureBuffer.flip();
texture is a variable of type int which has not been assigned a value yet at this point; a few lines later you use it to hold a buffer name. The argument should be textureData instead:
textureBuffer.put(textureData);
I normally try to focus on functionality over style when answering questions here, but I can't help it this time: IMHO, texture is a very unfortunate name for a buffer name. It's not only a style and readability question. If you used descriptive names for the variables, you most likely would have spotted this problem immediately.
Say you named the variable for the buffer name bufferId (I call object identifiers "id", even though the official OpenGL terminology is "name"), and the buffer holding the texture coordinates textureCoordBuf. The statement in question would then become:
textureCoordBuf.put(bufferId);
which would jump out as highly suspicious from even a very superficial look at the code.
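Putting the fix and the renaming together, a minimal sketch of what that buffer setup might look like (bufferId and textureCoordBuf are the hypothetical names from above):
int bufferId = glGenBuffers();                    // buffer object name (id)
FloatBuffer textureCoordBuf = BufferUtils.createFloatBuffer(textureData.length);
textureCoordBuf.put(textureData);                 // the coordinate data, not an int handle
textureCoordBuf.flip();
glBindBuffer(GL_ARRAY_BUFFER, bufferId);
glBufferData(GL_ARRAY_BUFFER, textureCoordBuf, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);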
I'm working on what will eventually be a terrain drawer; at the moment, however, I am just trying to get a basic pair of triangles to draw with glDrawElements. The code looks like this:
public class TerrainFlat extends AbstractTerrain {
IntBuffer ib = BufferUtils.createIntBuffer(3);
int vHandle = ib.get(0);
int cHandle = ib.get(1);
int iHandle = ib.get(2);
FloatBuffer vBuffer = BufferUtils.createFloatBuffer(19);
FloatBuffer cBuffer = BufferUtils.createFloatBuffer(18);
ShortBuffer iBuffer = BufferUtils.createShortBuffer(6);
@Override
public void initilize(){
float[] vertexData = {50, 20, 100, 50, -20, 100, 10, -20, 100, -10, -20, 100, -50, -20, 100, -50, 20, 100};
float[] colorData = {1, 0, 0, 0, 1, 0, 0, 0, 1, 0 , 0 , 1 , 0 , 1 , 0 , 1 , 0 , 0 };
short[] indexData = {0,1,2,3,4,5};
vBuffer.put(vertexData);
vBuffer.flip();
cBuffer.put(colorData);
cBuffer.flip();
iBuffer.put(indexData);
iBuffer.flip();
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, iHandle);
glBufferDataARB(GL_ELEMENT_ARRAY_BUFFER_ARB, iBuffer, GL_STATIC_DRAW_ARB);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vHandle);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, vBuffer, GL_STATIC_DRAW_ARB);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, cHandle);
glBufferDataARB(GL_ARRAY_BUFFER_ARB, cBuffer, GL_STATIC_DRAW_ARB);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
}
@Override
public void setUp(float posX, float posY, float posZ){
}
@Override
public void draw(){
glClear(GL_COLOR_BUFFER_BIT);
glGenBuffersARB(ib);
vHandle = ib.get(0);
cHandle = ib.get(1);
iHandle = ib.get(2);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_COLOR_ARRAY);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, vHandle);
glVertexPointer(3, GL_FLOAT, 3 << 2 , 0L);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, cHandle);
glColorPointer(3, GL_FLOAT, 3 << 2 , 0L);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, iHandle);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0L);
glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
glBindBufferARB(GL_ELEMENT_ARRAY_BUFFER_ARB, 0);
}
@Override
public void destroy(){
ib.put(0, vHandle);
ib.put(1, cHandle);
ib.put(2, iHandle);
glDeleteBuffersARB(ib);
}
}
This does not draw anything; the screen is entirely black. GL_CULL_FACE is not enabled, and there is no lighting (or lack thereof) to cause this. Moving the data binding into the draw() function makes it work (if you want I can add that code here). However, that means I create my buffers on every draw call, which is bad. I'm wondering what I'm doing wrong here, because I've moved the code around and slapped it silly for four days now.
You're setting vHandle, cHandle and iHandle to 0: ib is a freshly created IntBuffer, which is zero-filled, so all three get(...) calls return 0.
You never need to create the IntBuffer at the top of the class anyway. Replace
int vHandle = ib.get(0);
int cHandle = ib.get(1);
int iHandle = ib.get(2);
with
int vHandle = glGenBuffers();
int cHandle = glGenBuffers();
int iHandle = glGenBuffers();
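For illustration, a minimal sketch of the generate-and-upload sequence using the core (non-ARB) functions the fix above already uses, whether the glGenBuffers() calls live in the field initializers or at the top of initilize(). Either way, draw() should then just bind vHandle, cHandle and iHandle rather than calling glGenBuffersARB again each frame, which would replace them with new, empty buffers:
vHandle = glGenBuffers();   // vertex positions
cHandle = glGenBuffers();   // vertex colors
iHandle = glGenBuffers();   // triangle indices
glBindBuffer(GL_ARRAY_BUFFER, vHandle);
glBufferData(GL_ARRAY_BUFFER, vBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, cHandle);
glBufferData(GL_ARRAY_BUFFER, cBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, iHandle);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, iBuffer, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);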
I'm using Vuforia to place a 3D model on an image target. I have created a common C++ solution to work on both Android and iOS. It works on Android, but I can't get the 3D model to appear in iOS. It tracks the image target perfectly, but there's no sign of the 3D model. The 3D model I'm using can be found here.
This is how I'm doing it:
This method is called by Vuforia every time the screen needs to be rendered:
- (void)renderFrameQCAR
{
[self setFramebuffer];
[[ObjectController getInstance] getObjectInstance]->renderFrame();
[self presentFramebuffer];
}
This is the setFramebuffer method (Objective-C++):
- (void)setFramebuffer
{
if (context) {
[EAGLContext setCurrentContext:context];
if (!defaultFramebuffer) {
[self performSelectorOnMainThread:@selector(createFramebuffer) withObject:self waitUntilDone:YES];
}
#ifdef USE_OPENGL1
glBindFramebufferOES(GL_FRAMEBUFFER_OES, defaultFramebuffer);
#else
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
#endif
}
}
This is the renderFrame method (C++):
void IDRObject::renderFrame()
{
// Clear color and depth buffer
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Get the state from QCAR and mark the beginning of a rendering section
QCAR::State state = QCAR::Renderer::getInstance().begin();
// Explicitly render the Video Background
QCAR::Renderer::getInstance().drawVideoBackground();
#ifdef DEVICE_OPENGL_1
// Set GL11 flags:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glDisable(GL_LIGHTING);
#endif
glEnable(GL_DEPTH_TEST);
// We must detect if background reflection is active and adjust the culling direction.
// If the reflection is active, this means the post matrix has been reflected as well,
// therefore standard counter clockwise face culling will result in "inside out" models.
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
if(QCAR::Renderer::getInstance().getVideoBackgroundConfig().mReflection == QCAR::VIDEO_BACKGROUND_REFLECTION_ON)
glFrontFace(GL_CW); //Front camera
else
glFrontFace(GL_CCW); //Back camera
SampleUtils::checkGlError("gl start setup stuff");
// Did we find any trackables this frame?
for(int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
{
// Get the trackable:
const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
const QCAR::Trackable& trackable = result->getTrackable();
QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());
// Choose the texture based on the target name:
int textureIndex;
if (strcmp(trackable.getName(), "chips") == 0)
{
textureIndex = 0;
}
else if (strcmp(trackable.getName(), "stones") == 0)
{
textureIndex = 1;
}
else
{
textureIndex = 2;
}
const Texture* const thisTexture = textures[textureIndex];
#ifdef DEVICE_OPENGL_1
// Load projection matrix:
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(projectionMatrix.data);
// Load model view matrix:
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelViewMatrix.data);
glTranslatef(0.f, 0.f, kObjectScale);
glScalef(kObjectScale, kObjectScale, kObjectScale);
// Draw object:
glBindTexture(GL_TEXTURE_2D, thisTexture->mTextureID);
glTexCoordPointer(2, GL_FLOAT, 0, (const GLvoid*) &teapotTexCoords[0]);
glVertexPointer(3, GL_FLOAT, 0, (const GLvoid*) &teapotVertices[0]);
glNormalPointer(GL_FLOAT, 0, (const GLvoid*) &teapotNormals[0]);
glDrawElements(GL_TRIANGLES, NUM_TEAPOT_OBJECT_INDEX, GL_UNSIGNED_SHORT,
(const GLvoid*) &teapotIndices[0]);
#else
QCAR::Matrix44F modelViewProjection;
SampleUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale, &modelViewMatrix.data[0]);
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);
SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);
glUseProgram(shaderProgramID);
glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) &teapotVertices[0]);
glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) &teapotNormals[0]);
glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) &teapotTexCoords[0]);
glEnableVertexAttribArray(vertexHandle);
glEnableVertexAttribArray(normalHandle);
glEnableVertexAttribArray(textureCoordHandle);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, thisTexture->mTextureID);
glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE, (GLfloat*)&modelViewProjection.data[0] );
glUniform1i(texSampler2DHandle, 0 /*GL_TEXTURE0*/);
glDrawElements(GL_TRIANGLES, NUM_TEAPOT_OBJECT_INDEX, GL_UNSIGNED_SHORT, (const GLvoid*) &teapotIndices[0]);
LOG("Tracking awesome targets.\n");
SampleUtils::checkGlError("ImageTargets renderFrame\n");
#endif
}
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
#ifdef DEVICE_OPENGL_1
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
#else
glDisableVertexAttribArray(vertexHandle);
glDisableVertexAttribArray(normalHandle);
glDisableVertexAttribArray(textureCoordHandle);
#endif
QCAR::Renderer::getInstance().end();
}
And the last presentFrameBuffer (Objective-C++):
- (BOOL)presentFramebuffer
{
BOOL success = FALSE;
if (context) {
[EAGLContext setCurrentContext:context];
#ifdef USE_OPENGL1
glBindRenderbufferOES(GL_RENDERBUFFER_OES, colorRenderbuffer);
#else
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
#endif
success = [context presentRenderbuffer:GL_RENDERBUFFER];
}
return success;
}
It turned out that the projectionMatrix was never set.
I was missing the following calls:
// Cache the projection matrix:
const QCAR::CameraCalibration& cameraCalibration = QCAR::CameraDevice::getInstance().getCameraCalibration();
projectionMatrix = QCAR::Tool::getProjectionGL(cameraCalibration, 2.0f, 2500.0f);
These calls are originally placed in the startCamera method in the ImageTargets sample from Vuforia.