3D drawing in OpenGL

I'm trying to draw a chess board in OpenGL. I can draw the squares of the game board exactly as I want. But I also want to add a small border around the perimeter of the game board. Somehow, my border is way bigger than I want. In fact, each edge of the border is the exact width of the entire game board itself.
My approach is to draw a neutral gray rectangle to represent the entire "slab" of wood that would be cut to make the board. Then, inside this slab, I place the 64 game squares, which should be exactly centered and take up slightly less 2D space than the slab does. I'm open to better ways, but keep in mind that I'm not very bright.
EDIT: in the image below all that gray area should be about 1/2 the size of a single square. But as you can see, each edge is the size of the entire game board. Clearly I'm not understanding something.
Here is the display function that I wrote. Why is my "slab" so much too large?
void display()
{
// Clear the image
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Reset any previous transformations
glLoadIdentity();
// define the slab
float square_edge = 8;
float border = 4;
float slab_thickness = 2;
float slab_corner = 4*square_edge+border;
// Set the view angle
glRotated(ph,1,0,0);
glRotated(th,0,1,0);
glRotated(zh,0,0,1);
float darkSquare[3] = {0,0,1};
float lightSquare[3] = {1,1,1};
// Set the viewing matrix
glOrtho(-slab_corner, slab_corner, slab_corner, -slab_corner, -slab_corner, slab_corner);
GLfloat board_vertices[8][3] = {
{-slab_corner, slab_corner, 0},
{-slab_corner, -slab_corner, 0},
{slab_corner, -slab_corner, 0},
{slab_corner, slab_corner, 0},
{-slab_corner, slab_corner, slab_thickness},
{-slab_corner, -slab_corner, slab_thickness},
{slab_corner, -slab_corner, slab_thickness},
{slab_corner, slab_corner, slab_thickness}
};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_INT, 0, board_vertices);
// this defines each of the six faces in counter clockwise vertex order
GLubyte slabIndices[] = {0,3,2,1,2,3,7,6,0,4,7,3,1,2,6,5,4,5,6,7,0,1,5,4};
glColor3f(0.3,0.3,0.3); //upper left square is always light
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, slabIndices);
// draw the individual squares on top and centered inside of the slab
for(int x = -4; x < 4; x++) {
for(int y = -4; y < 4; y++) {
//set the color of the square
if ( (x+y)%2 ) glColor3fv(darkSquare);
else glColor3fv(lightSquare);
glBegin(GL_QUADS);
glVertex2i(x*square_edge, y*square_edge);
glVertex2i(x*square_edge+square_edge, y*square_edge);
glVertex2i(x*square_edge+square_edge, y*square_edge+square_edge);
glVertex2i(x*square_edge, y*square_edge+square_edge);
glEnd();
}
}
glFlush();
glutSwapBuffers();
}

glVertexPointer(3, GL_INT, 0, board_vertices);
specifies that board_vertices contains integers, but it is actually an array of GLfloat. Could this be the problem?
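If that mismatch is indeed the cause, the fix is simply to make the type argument match the array's actual element type:
// board_vertices is declared as GLfloat[8][3], so tell OpenGL to read floats:
glVertexPointer(3, GL_FLOAT, 0, board_vertices);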

Related

OpenGL overlapping ugly rendering

I'm trying to render a scene with OpenGL 2.1, but the borders of overlapping shapes look wrong. I tested some OpenGL initialisations but without any change. I reduced my issue to a simple test application with two spheres, with the same result.
I tried several things with GL_DEPTH_TEST and enabling/disabling smoothing, without success.
Here is my result with two gluSphere objects:
You can see some sort of aliasing where a clean line should be enough to separate the blue and red faces...
I use SharpGL, but I think that's not significant (I use it only as an OpenGL wrapper). Here is my simplest code to render the same thing (you can copy it into a Form to test it):
OpenGL gl;
IntPtr hdc;
int cpt;
private void Init()
{
cpt = 0;
hdc = this.Handle;
gl = new OpenGL();
gl.Create(SharpGL.Version.OpenGLVersion.OpenGL2_1, RenderContextType.NativeWindow, 500, 500, 32, hdc);
gl.Enable(OpenGL.GL_DEPTH_TEST);
gl.DepthFunc(OpenGL.GL_LEQUAL);
gl.ClearColor(1.0F, 1.0F, 1.0F, 0);
gl.ClearDepth(1);
gl.MatrixMode(OpenGL.GL_PROJECTION);
gl.Perspective(30, 1, 0.1F, 1.0E+7F);
gl.MatrixMode(OpenGL.GL_MODELVIEW);
gl.LookAt(0, 3000, 0, 0, 0, 0, 0, 0, 1);
}
private void Render(int angle)
{
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT | OpenGL.GL_STENCIL_BUFFER_BIT);
RenderSphere(gl, 0, 0, 0, 0, 300, Color.Red);
RenderSphere(gl, 0, 0, 100, angle, 300, Color.Blue);
gl.Blit(hdc);
}
private void RenderSphere(OpenGL gl, int x, int y, int z, int angle, int radius, Color col)
{
IntPtr obj = gl.NewQuadric();
gl.PushMatrix();
gl.Translate(x, y, z);
gl.Rotate(angle, 0, 0);
gl.Color(new float[] { col.R / 255f, col.G / 255f, col.B / 255f, col.A / 255f });
gl.QuadricDrawStyle(obj, OpenGL.GLU_FILL);
gl.Sphere(obj, radius, 20, 10);
gl.Color(new float[] { 0, 0, 0, 1 });
gl.QuadricDrawStyle(obj, OpenGL.GLU_SILHOUETTE);
gl.Sphere(obj, radius, 20, 10);
gl.DeleteQuadric(obj);
gl.PopMatrix();
}
Thanks in advance for your advice!
EDIT :
I tested this without success:
gl.Enable(OpenGL.GL_LINE_SMOOTH);
gl.Enable(OpenGL.GL_POLYGON_SMOOTH);
gl.ShadeModel(OpenGL.GL_SMOOTH);
gl.Hint(OpenGL.GL_LINE_SMOOTH_HINT, OpenGL.GL_NICEST);
gl.Hint(OpenGL.GL_POLYGON_SMOOTH_HINT, OpenGL.GL_NICEST);
gl.Hint(OpenGL.GL_PERSPECTIVE_CORRECTION_HINT, OpenGL.GL_NICEST);
EDIT2: With more faces, image with and without lines:
It is ... different... but not pleasing.
The issue has two causes.
The first one is indeed a Z-fighting issue, which is caused by the monstrous distance between the near and far planes
gl.Perspective(30, 1, 0.1F, 1.0E+7F);
and the fact that with a perspective projection the depth is not linear. See also How to render depth linearly ....
This can be improved by putting the near plane as close as possible to the geometry. Since the distance to the object is 3000.0 and the radius of the sphere is 300, the near plane has to be less than 2700.0:
e.g.
gl.Perspective(30, 1, 2690.0F, 5000.0F);
The second issue is caused by the fact that the sphere consists of triangle primitives. As you suggested in your answer, you can improve that by increasing the number of primitives.
I will provide an alternative solution using a clip plane: clip the red sphere at the bottom and the blue sphere at the top, exactly in the plane where the spheres intersect, so that a cap is cut off each sphere.
A clip plane can be set with glClipPlane and enabled with glEnable.
The parameters to the clipping plane are interpreted as a Plane Equation.
The first 3 components of the plane equation are the normal vector to the clipping plane. The 4th component is the distance to the origin.
So the clip plane equation for the red sphere has to be {0, 0, -1, 50} and for the blue sphere {0, 0, 1, -50}.
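As a quick cross-check (my own C-style illustration, not part of the answer): a point survives a clip plane exactly when plugging it into the plane equation gives a non-negative value, with the point and the plane expressed in the same coordinate space.
// Illustration only: GL_CLIP_PLANEi keeps the half-space where a*px + b*py + c*pz + d >= 0.
bool keptByClipPlane(const double plane[4], double px, double py, double pz)
{
    return plane[0] * px + plane[1] * py + plane[2] * pz + plane[3] >= 0.0;
}
// Red sphere, plane {0, 0, -1, 50}:  -pz + 50 >= 0, so only z <= 50 survives.
// Blue sphere, plane {0, 0, 1, -50}:  pz - 50 >= 0, so only z >= 50 survives.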
Note that when glClipPlane is called, the equation is transformed by the inverse of the modelview matrix. So the clip plane has to be set before the model transformations like rotation, translation and scale.
e.g.
private void Render(int angle)
{
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT | OpenGL.GL_STENCIL_BUFFER_BIT);
double[] plane1 = new double[] {0, 0, -1, 50};
RenderSphere(gl, 0, 0, 0, 0, 300, Color.Red, plane1);
double[] plane2 = new double[] {0, 0, 1, -50};
RenderSphere(gl, 0, 0, 100, angle, 300, Color.Blue, plane2);
gl.Blit(hdc);
}
private void RenderSphere(
OpenGL gl, int x, int y, int z, int angle, int radius,
Color col, double[] plane)
{
IntPtr obj = gl.NewQuadric();
gl.ClipPlane(OpenGL.GL_CLIP_PLANE0, plane);
gl.Enable(OpenGL.GL_CLIP_PLANE0);
gl.PushMatrix();
gl.Translate(x, y, z);
gl.Rotate(angle, 0, 0);
gl.Color(new float[] { col.R / 255f, col.G / 255f, col.B / 255f, col.A / 255f });
gl.QuadricDrawStyle(obj, OpenGL.GLU_FILL);
gl.Sphere(obj, radius, 20, 10);
gl.Color(new float[] { 0, 0, 0, 1 });
gl.QuadricDrawStyle(obj, OpenGL.GLU_SILHOUETTE);
gl.Sphere(obj, radius, 20, 10);
gl.DeleteQuadric(obj);
gl.PopMatrix();
gl.Disable(OpenGL.GL_CLIP_PLANE0);
}
Solution 1 (not a good one): Applying gl.Scale(0.0001, 0.0001, 0.0001); to the ModelView matrix
Solution 2: The near plane has to be as far away as possible, to avoid compressing the z values into a small range. In this case, using 10 instead of 0.1 is enough. The best approach is to compute an adapted value depending on object distance (in this case the nearest object is at 2700).
I think we can focus on the fact that z is stored non-linearly, as explained in the link #PikanshuKumar posted, and on its implicit consequences.
Result :
Only the faces are cut by a line: there is a straight separating line at the equator.
Those lines disappear as expected when we increase the number of faces.
You're killing depth buffer precision with the way you set up your projection matrix:
gl.MatrixMode(OpenGL.GL_PROJECTION);
gl.Perspective(30, 1, 0.1F, 1.0E+7F);
Essentially this compresses almost all of the depth buffer precision into the range 0.1 to 0.2 or so (I didn't do the math, just eyeballing it here).
In general you should choose the distance for the near clip plane to be as far away as possible, still keeping all the objects in your scene. The distance of the far plane doesn't matter that much (in fact, with the right matrix magic you can place it at infinity), but in general it's also a good idea to keep it as close as possible.
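To get a feel for the scale of the problem, here is a small standalone sketch (my own, using the standard glFrustum/gluPerspective depth mapping, not code from the answer) that prints the window-space depth produced for a few eye-space distances with the original near/far pair:
#include <cstdio>

// Window-space depth in [0,1] for an eye-space distance dist, using the
// standard perspective depth mapping of glFrustum/gluPerspective.
double windowDepth(double n, double f, double dist)
{
    double zNdc = (f + n) / (f - n) - 2.0 * f * n / ((f - n) * dist);
    return 0.5 * zNdc + 0.5;
}

int main()
{
    // With near = 0.1 and far = 1e7, about half of the depth range is already used
    // up between distances 0.1 and 0.2; at distance 3000 (where the spheres sit)
    // the depth is ~0.99997, leaving almost no precision to separate them.
    const double dists[] = { 0.1, 0.2, 1.0, 2700.0, 3000.0 };
    for (double d : dists)
        std::printf("dist %8.1f -> depth %.8f\n", d, windowDepth(0.1, 1.0e7, d));
    return 0;
}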

How to proper position skybox camera using openGL

I created a skybox for my project and it looks the way I wanted it to; however, there are a few issues I cannot figure out how to fix. I have read some tutorials on this subject, but I was not able to find anything that helped.
The first problem is that I don't know how to get the box to always move with my camera. In the image below you can see that I am able to zoom out and see the whole box, instead of only zooming in/out of the solar system and always having the stars on the background.
The other issue I have is that when I zoom in too close, my background disappears. The picture below illustrates what I mean.
I know if I can get the camera working properly I can get this fixed, but it goes back to my first problem. I don't know how to access the camera info.
I believe I would have to modify glTranslatef() and glScalef() in my code from a fixed number to a number that changes with the camera position.
Here is my code:
void Skybox::displaySkybox()
{
Images::RGBImage test[6]; //6 pictures for 6 sides
test[0]=Images::readImageFile(fileName); //Top
//test[1]=Images::readImageFile(fileName);//Back
//test[2]=Images::readImageFile(fileName);//Bottom
//test[3]=Images::readImageFile(fileName);//Right
//test[4]=Images::readImageFile(fileName); //Left
//test[5]=Images::readImageFile(fileName); //Front
glEnable(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
test[0].glTexImage2D(GL_TEXTURE_2D,0,GL_RGB);
// Save Current Matrix
glPushMatrix();
// Second Move the render space to the correct position (Translate)
glTranslatef(0,0,0);
// First apply scale matrix
glScalef(10000,10000,10000);
static const GLint faces[6][4] =
{
{5, 1, 2, 6}, // back
{5, 4, 0, 1}, // bottom
{0, 4, 7, 3}, // front
{4, 5, 6, 7}, // right ( 'left' in crinity's labeling )
{1, 0, 3, 2}, // left ( 'right' in crinity's labeling )
{2, 3, 7, 6} // top
};
GLfloat v[8][3];
GLint i;
v[0][0] = v[1][0] = v[2][0] = v[3][0] = -1; // min x
v[4][0] = v[5][0] = v[6][0] = v[7][0] = 1; // max x
v[0][1] = v[1][1] = v[4][1] = v[5][1] = -1; // min y
v[2][1] = v[3][1] = v[6][1] = v[7][1] = 1; // max y
v[0][2] = v[3][2] = v[4][2] = v[7][2] = -1; // min z
v[1][2] = v[2][2] = v[5][2] = v[6][2] = 1; // max z
for (i = 0; i < 6; i++) // one iteration per face
{
//
glBegin(GL_QUADS);
glTexCoord2f(0,1); glVertex3fv(&v[faces[i][0]][0]);
glTexCoord2f(1,1); glVertex3fv(&v[faces[i][1]][0]);
glTexCoord2f(1,0); glVertex3fv(&v[faces[i][2]][0]);
glTexCoord2f(0,0); glVertex3fv(&v[faces[i][3]][0]);
glEnd();
}
// Load Saved Matrix
glPopMatrix();
}
How can I get access to these variables? Does OpenGL already have a function that takes care of that?
I believe I would have to modify glTranslatef() and glScalef() in my code from a fixed number to a number that changes with the camera position.
You're close, but there's a simpler solution:
Draw the skybox first, before translating the camera, so that you don't have to translate the box. Don't forget to clear your depth buffer for each new frame (you'll see why in a second).
Disable writes to the depth buffer (call glDepthMask(GL_FALSE)). This will cause every other object you render to draw over it, making it always appear "behind" everything else.
Assuming your transform matrices were reset at the start of the frame, apply only the rotation of the camera. This way the camera will always be "centered" inside the box.
Draw the skybox. Since writes to the depth buffer are off, it doesn't matter how small it is as long as it's larger than your camera's near clip plane.
Re-enable writes to the depth buffer (call glDepthMask(GL_TRUE))
Render your scene normally.
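Putting those steps together, a rough frame sketch might look like the following (applyCameraRotation, applyCameraTranslation and drawScene are hypothetical placeholders for whatever your code already does, and skybox stands for your Skybox instance):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
applyCameraRotation();      // rotation only, so the camera stays centered inside the box
glDepthMask(GL_FALSE);      // stop the skybox from writing depth
skybox.displaySkybox();
glDepthMask(GL_TRUE);
applyCameraTranslation();   // now finish the camera transform
drawScene();                // everything else draws over the skybox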
I haven't worked with skyboxes before, but it would make sense that the camera should always be at the center of the box. So start by translating the box to center around the camera coordinates, something like glTranslate(camera.x, camera.y, camera.z);
Then I'd think the box should stay infinitely distant, so maybe set the vertices to INT_MAX or something ridonculously big.
v[0][0] = v[1][0] = v[2][0] = v[3][0] = -INT_MAX; // min x
v[4][0] = v[5][0] = v[6][0] = v[7][0] = INT_MAX; // max x ...etc
Then probably get rid of the call to glScalef(). Try that out.

Read different parts of an openGL framebuffer keeping overlap

I need to do some CPU operations on framebuffer data previously drawn by OpenGL. Sometimes the resolution at which I need to draw is higher than the texture resolution, so I thought about picking a SIZE for the viewport and the target FBO, drawing, reading back to a CPU buffer, then moving the viewport somewhere else in the space and repeating. In my CPU memory I will then have all the needed color data. Unfortunately, for my purposes I need to keep an overlap of 1 pixel between the vertical and horizontal borders of my tiles. So, imagining a situation with four tiles of size SIZE x SIZE:
0 1
2 3
I need the last column of tile 0 to hold the same data as the first column of tile 1, and the last row of tile 0 to hold the same data as the first row of tile 2, for example. Hence, the total resolution I will draw at will be
(SIZEW * ntilesHor - (ntilesHor - 1)) x (SIZEH * ntilesVer - (ntilesVer - 1))
For simplicity, SIZEW and SIZEH will be the same, and likewise ntilesVer and ntilesHor. My code now looks like
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glViewport(0, 0, tilesize, tilesize);
glPolygonMode(GL_FRONT, GL_FILL);
for (int i=0; i < ntiles; ++i)
{
for (int j=0; j < ntiles; ++j)
{
tileid = i * ntiles +j;
int left = max(0, (j*tilesize)- j);
int right = left + tilesize;
int bottom = max(0, (i*tilesize)- i);
int top = bottom + tilesize;
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(left, right, bottom, top, -1, 0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw display list
glCallList(DList);
// Texture target of the fbo
glReadBuffer(tex_render_target);
// Read to CPU to preallocated buffer
glReadPixels(0, 0, tilesize, tilesize, GL_BGRA, GL_UNSIGNED_BYTE, colorbuffers[tileid]);
}
}
The code runs, and in the various "colorbuffers" I seem to have what looks like color data, similar to what I should get given my drawing; but the overlap I need is not there: the last column of tile 0 and the first column of tile 1 hold different values.
Any idea?
int left = max(0, (j*tilesize)- j);
int right = left + tilesize;
int bottom = max(0, (i*tilesize)- i);
int top = bottom + tilesize;
I'm not sure about those margins. If your intention is a pixel-based mapping, as suggested by your viewport, with some constant overlap, then the -j and -i terms make no sense, as they're nonuniform; I think you want some constant value there. Also, you don't need that max. You want a 1-pixel overlap though, so your constant will be 0, because then you have
right_j == left_(j+1)
and the same for bottom and top, which is exactly what you intend.
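Read literally, that suggestion would turn the margin computation into something like this (my transcription of the answer's idea, not tested against the original scene):
// Constant offset between tiles, no max() needed; the right edge of tile j
// coincides with the left edge of tile j+1.
int left   = j * tilesize;
int right  = left + tilesize;
int bottom = i * tilesize;
int top    = bottom + tilesize;
glOrtho(left, right, bottom, top, -1, 0);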

OpenGL particles, help controlling direction

I am trying to modify this Digiben sample in order to get the effect of particles that are generated from a spot (impact point) and float upwards, kind of like the sparks of a fire. The sample has the particles rotating in a circle... I have tried removing the cosine/sine functions and replacing them with a plain glTranslate with an increasing Y value, but I just can't get any real results... could anyone please point out roughly where I should add/modify the translation in this code to obtain that result?
void ParticleMgr::init(){
tex.Load("part.bmp");
GLfloat angle = 0; // A particle's angle
GLfloat speed = 0; // A particle's speed
// Create all the particles
for(int i = 0; i < P_MAX; i++)
{
speed = float(rand()%50 + 450); // Make a random speed
// Init the particle with a random speed
InitParticle(particle[i],speed,angle);
angle += 360 / (float)P_MAX; // Increment the angle so when all the particles are
// initialized they will be equally positioned in a
// circular fashion
}
}
void ParticleMgr::InitParticle(PARTICLE &particle, GLfloat sss, GLfloat aaa)
{
particle.speed = sss; // Set the particle's speed
particle.angle = aaa; // Set the particle's current angle of rotation
// Randomly set the particles color
particle.red = rand()%255;
particle.green = rand()%255;
particle.blue = rand()%255;
}
void ParticleMgr::DrawParticle(const PARTICLE &particle)
{
tex.Use();
// Calculate the current x any y positions of the particle based on the particle's
// current angle -- This will make the particles move in a "circular pattern"
GLfloat xPos = sinf(particle.angle);
GLfloat yPos = cosf(particle.angle);
// Translate to the x and y position and the #defined PDEPTH (particle depth)
glTranslatef(xPos,yPos,PDEPTH);
// Draw the first quad
glBegin(GL_QUADS);
glTexCoord2f(0,0);
glVertex3f(-5, 5, 0);
glTexCoord2f(1,0);
glVertex3f(5, 5, 0);
glTexCoord2f(1,1);
glVertex3f(5, -5, 0);
glTexCoord2f(0,1);
glVertex3f(-5, -5, 0);
glEnd(); // Done drawing quad
// Draw the SECOND part of our particle
tex.Use();
glRotatef(particle.angle,0,0,1); // Rotate around the z-axis (depth axis)
//glTranslatef(0, particle.angle, 0);
// Draw the second quad
glBegin(GL_QUADS);
glTexCoord2f(0,0);
glVertex3f(-4, 4, 0);
glTexCoord2f(1,0);
glVertex3f(4, 4, 0);
glTexCoord2f(1,1);
glVertex3f(4, -4, 0);
glTexCoord2f(0,1);
glVertex3f(-4, -4, 0);
glEnd(); // Done drawing quad
// Translate back to where we began
glTranslatef(-xPos,-yPos,-PDEPTH);
}
void ParticleMgr::run(){
for(int i = 0; i < P_MAX; i++)
{
DrawParticle(particle[i]);
// Increment the particle's angle
particle[i].angle += ANGLE_INC;
}
}
For now I am adding a glPushMatrix() and glTranslate(x, y, z) in the run() function above, right before the loop, with x, y, z as the position of the enemy, so the particles appear on top of the enemy... is that the best place for that?
Thanks for any input!
Using glTranslate and glRotate that way will in fact decrease your program's performance. OpenGL is not a scene graph, so the matrix manipulation functions directly influence the drawing process, i.e. they don't set "object state". The issue you're running into is that a 4×4 matrix-matrix multiplication involves 64 multiplications and 48 additions, so you're spending roughly a hundred times the computing power to move a particle, compared with simply updating the vertex position directly.
Now to your problem: like I already told you, glTranslate operates on the (global) matrix state of one of four selectable matrices, and the effects accumulate, i.e. each glTranslate starts from the matrix the previous glTranslate left behind. OpenGL provides a matrix stack, where you can push a copy of the current matrix to work with, then pop to revert to the state before.
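Used that way, the per-particle transform in DrawParticle would be bracketed roughly like this (a sketch only; drawParticleQuads is a hypothetical helper standing in for the two glBegin/glEnd blocks already in the code):
glPushMatrix();                      // save the current modelview matrix
glTranslatef(xPos, yPos, PDEPTH);    // per-particle placement
glRotatef(particle.angle, 0, 0, 1);
drawParticleQuads();                 // draw the two textured quads
glPopMatrix();                       // restore, instead of manually un-translating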
However: matrix manipulation has been removed entirely from OpenGL 3 core and later. OpenGL matrix manipulation was never accelerated (except on one particular graphics workstation made by SGI around 1996). Today it is an anachronism, as every respectable program working with 3D geometry uses much more sophisticated matrix manipulation, either through its own implementation or a third-party library; OpenGL's matrix stack was simply redundant. So I strongly suggest you forget about OpenGL's matrix manipulation functionality and roll your own.
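For the upward-drifting sparks you describe, a minimal sketch of that advice (my own, not from the sample) is to keep a position and velocity per particle, integrate them on the CPU each frame, and emit the moved vertices directly, with no per-particle glTranslate/glRotate at all:
// Hypothetical struct for this sketch, separate from the sample's PARTICLE.
struct Spark { float x, y, z; float vx, vy, vz; };

// Bind the particle texture once (e.g. tex.Use()) before calling this.
void updateAndDraw(Spark* p, int count, float dt)
{
    glBegin(GL_QUADS);
    for (int i = 0; i < count; ++i)
    {
        p[i].x += p[i].vx * dt;
        p[i].y += p[i].vy * dt;   // a positive vy makes the particle float upward like a spark
        p[i].z += p[i].vz * dt;
        glTexCoord2f(0, 0); glVertex3f(p[i].x - 5, p[i].y + 5, p[i].z);
        glTexCoord2f(1, 0); glVertex3f(p[i].x + 5, p[i].y + 5, p[i].z);
        glTexCoord2f(1, 1); glVertex3f(p[i].x + 5, p[i].y - 5, p[i].z);
        glTexCoord2f(0, 1); glVertex3f(p[i].x - 5, p[i].y - 5, p[i].z);
    }
    glEnd();
}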

OpenGL - Position camera with 6 DOF vector

I work with an Augmented Reality framework on Android, and it gives me the camera pose as a 6-degrees-of-freedom vector that includes the estimated camera optical center and camera orientation.
Since I'm a complete newbie in OpenGL, I don't quite understand what that means. My question is: how do I use this 4x4 matrix to position my camera in OpenGL?
Below is a sample from Android SDK which renders a simple textured triangle (I didn't know which details are important so I included the whole two classes - the renderer and the triangle object).
My guess is that it positions the camera with gluLookAt in onDrawFrame(); this is what I want to adjust.
I receive these matrices from the framework (these are just samples):
When the camera should look directly at the triangle, I need to use a matrix of this type to somehow position my camera:
0.9930384 0.045179322 0.10878302 0.0
-0.018241059 0.9713616 -0.23690554 0.0
-0.11637083 0.23327199 0.9654233 0.0
21.803288 -14.920643 -150.6514 1.0
When I move the camera a bit far away:
0.9763242 0.041258257 0.21234424 0.0
0.014808476 0.96659267 -0.2558918 0.0
-0.21580763 0.25297752 0.94309634 0.0
17.665 -18.520836 -243.28784 1.0
When I tilt my camera a bit to the right:
0.8340566 0.0874321 0.5447095 0.0
0.054606464 0.96943074 -0.23921578 0.0
-0.5489726 0.22926341 0.8037848 0.0
-8.809776 -7.5869675 -244.01971 1.0
Any thoughts? My guess is that the only thing that matters is actually the last row; everything else is close to zero.
I'd be happy to get any advice on how to adjust this code to use those matrices, including any settings such as the perspective matrix or anything else (again, I'm a newbie).
public class TriangleRenderer implements GLSurfaceView.Renderer{
public TriangleRenderer(Context context) {
mContext = context;
mTriangle = new Triangle();
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
/*
* By default, OpenGL enables features that improve quality
* but reduce performance. One might want to tweak that
* especially on software renderer.
*/
gl.glDisable(GL10.GL_DITHER);
/*
* Some one-time OpenGL initialization can be made here
* probably based on features of this particular context
*/
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT,
GL10.GL_FASTEST);
gl.glClearColor(0,0,0,0);
gl.glShadeModel(GL10.GL_SMOOTH);
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glEnable(GL10.GL_TEXTURE_2D);
/*
* Create our texture. This has to be done each time the
* surface is created.
*/
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureID = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER,
GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MAG_FILTER,
GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE,
GL10.GL_REPLACE);
InputStream is = mContext.getResources()
.openRawResource(R.raw.robot);
Bitmap bitmap;
try {
bitmap = BitmapFactory.decodeStream(is);
} finally {
try {
is.close();
} catch(IOException e) {
// Ignore.
}
}
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
}
public void onDrawFrame(GL10 gl) {
/*
* By default, OpenGL enables features that improve quality
* but reduce performance. One might want to tweak that
* especially on software renderer.
*/
gl.glDisable(GL10.GL_DITHER);
gl.glTexEnvx(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE,
GL10.GL_MODULATE);
/*
* Usually, the first thing one might want to do is to clear
* the screen. The most efficient way of doing this is to use
* glClear().
*/
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
/*
* Now we're ready to draw some 3D objects
*/
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0, 0, -5, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glActiveTexture(GL10.GL_TEXTURE0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
GL10.GL_REPEAT);
gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
GL10.GL_REPEAT);
long time = SystemClock.uptimeMillis() % 4000L;
float angle = 0.090f * ((int) time);
gl.glRotatef(angle, 0, 0, 1.0f);
mTriangle.draw(gl);
}
public void onSurfaceChanged(GL10 gl, int w, int h) {
gl.glViewport(0, 0, w, h);
/*
* Set our projection matrix. This doesn't have to be done
* each time we draw, but usually a new projection needs to
* be set when the viewport is resized.
*/
float ratio = (float) w / h;
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-ratio, ratio, -1, 1, 3, 7);
}
private Context mContext;
private Triangle mTriangle;
private int mTextureID;
}

class Triangle {
public Triangle() {
// Buffers to be passed to gl*Pointer() functions
// must be direct, i.e., they must be placed on the
// native heap where the garbage collector cannot
// move them.
//
// Buffers with multi-byte datatypes (e.g., short, int, float)
// must have their byte order set to native order
ByteBuffer vbb = ByteBuffer.allocateDirect(VERTS * 3 * 4);
vbb.order(ByteOrder.nativeOrder());
mFVertexBuffer = vbb.asFloatBuffer();
ByteBuffer tbb = ByteBuffer.allocateDirect(VERTS * 2 * 4);
tbb.order(ByteOrder.nativeOrder());
mTexBuffer = tbb.asFloatBuffer();
ByteBuffer ibb = ByteBuffer.allocateDirect(VERTS * 2);
ibb.order(ByteOrder.nativeOrder());
mIndexBuffer = ibb.asShortBuffer();
// A unit-sided equalateral triangle centered on the origin.
float[] coords = {
// X, Y, Z
-0.5f, -0.25f, 0,
0.5f, -0.25f, 0,
0.0f, 0.559016994f, 0
};
for (int i = 0; i < VERTS; i++) {
for(int j = 0; j < 3; j++) {
mFVertexBuffer.put(coords[i*3+j] * 2.0f);
}
}
for (int i = 0; i < VERTS; i++) {
for(int j = 0; j < 2; j++) {
mTexBuffer.put(coords[i*3+j] * 2.0f + 0.5f);
}
}
for(int i = 0; i < VERTS; i++) {
mIndexBuffer.put((short) i);
}
mFVertexBuffer.position(0);
mTexBuffer.position(0);
mIndexBuffer.position(0);
}
public void draw(GL10 gl) {
gl.glFrontFace(GL10.GL_CCW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mFVertexBuffer);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTexBuffer);
gl.glDrawElements(GL10.GL_TRIANGLE_STRIP, VERTS,
GL10.GL_UNSIGNED_SHORT, mIndexBuffer);
}
private final static int VERTS = 3;
private FloatBuffer mFVertexBuffer;
private FloatBuffer mTexBuffer;
private ShortBuffer mIndexBuffer;
}
The "trick" is to understand, that OpenGL does not have a camera. What is does is transforming the whole world by a movement that's the exact opposite of what a camera would have to be moved from position (0,0,0).
Such transformations (=movements) are described in form of so called homogenous transformation matrices. Fixed Function OpenGL uses a combination of two matrices:
Modelview M, which describes placement of the world and view (and objects within the world to some degree).
Projection P, which could be seen as a kind of "lens" of the virtual camera (remember, there is no camera in OpenGL).
Any vertex position v is transformed by c = P * M * v (c is the transformed vertex coordinate in clip space, that is, screen space not in pixels but with the screen edges at -1 and 1; the viewport then maps from clip space to screen pixel space).
What Android gives you is such a transformation matrix. I'm not sure, but looking at the values it might be that you're given P * M. As long as there is no lighting involved, you can load that directly into the modelview matrix using glLoadMatrix, with the projection set to identity. You pass matrices to OpenGL as an array of 16 floats; the indexing order of OpenGL sometimes confuses people, but the way you dumped the Android matrices I think you already got them right (you printed them "wrong", i.e. transposed, which is the same pitfall people fall into with OpenGL's glLoadMatrix, but transposing twice is the identity, so it's probably right). If it doesn't work at first, flip columns and rows, i.e. "mirror" the matrix on its diagonal running from top-left to bottom-right.
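As a concrete sketch of that suggestion, written in desktop fixed-function terms (the GL10 equivalents should be gl.glLoadIdentity() and gl.glLoadMatrixf(m, 0)); the values are the first sample matrix from the question, copied row by row into the array, and the whole thing assumes the framework really hands you P * M:
// Assumption: the AR framework's matrix already contains projection * modelview.
GLfloat m[16] = {
     0.9930384f,    0.045179322f,  0.10878302f,  0.0f,
    -0.018241059f,  0.9713616f,   -0.23690554f,  0.0f,
    -0.11637083f,   0.23327199f,   0.9654233f,   0.0f,
    21.803288f,    -14.920643f,  -150.6514f,     1.0f   // translation lands in m[12..14], as OpenGL expects
};

glMatrixMode(GL_PROJECTION);
glLoadIdentity();            // projection stays identity because m is assumed to include it
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(m);            // if everything comes out mirrored or invisible, try the transposed matrix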