I want to understand how to create loads of similar 2-D objects and then animate each one separately, using OpenGL.
I have a feeling that it will be done using this and glfwGetTime().
Can anyone here help point me in the right direction?
OK, so here is the general approach I have tried so far:
We have an array of translation vectors, created by the following code, which I have modified slightly to shift each location based on time.
glm::vec2 translations[100];
int index = 0;
float offset = 0.1f;
float time = glfwGetTime(); // new code
for (int y = -10; y < 10; y += 2)
{
    for (int x = -10; x < 10; x += 2)
    {
        glm::vec2 translation;
        translation.x = (float)x / 10.0f + offset + time;      // new adjustment
        translation.y = (float)y / 10.0f + offset + time*time; // new adjustment
        translations[index++] = translation;
    }
}
Later, in the render loop,
while (!glfwWindowShouldClose(window))
{
    glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    shader.use();
    glBindVertexArray(quadVAO);
    glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 100); // 100 instances, 6 vertices (2 triangles) each
    glBindVertexArray(0);
    time = glfwGetTime(); // new adjustment
    glfwSwapBuffers(window);
    glfwPollEvents();
}
is what I have tried. I suppose I am misunderstanding the way the graphics pipeline works. As I mentioned earlier, my guess is that I need to use some glm matrices to make this work as I imagined, but I am not sure ...
The general direction would be, during initialization:
Allocate a buffer to hold the positions of your instances (glNamedBufferStorage).
Set up an instanced vertex attribute for your VAO that sources the data from that buffer (glVertexArrayBindingDivisor and others).
Update your vertex shader to apply the position of your instance (coming from the instanced attribute) to the total transformation calculated within the shader.
Then, once per frame (or when the position changes):
Calculate the positions of all your instances (the code you posted).
Submit those to the previously allocated buffer with glNamedBufferSubData.
So far you showed the code calculating the position. From here try to implement the rest, and ask a specific question if you have difficulties with any particular part of it.
I posted an example of using instancing with multidraw that you can use for reference. Note that in your case you don't need the multidraw, however, just the instancing part.
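To make the steps above concrete, here is a minimal sketch, assuming a GL 4.5 context with direct state access available (matching the glNamedBufferStorage suggestion); attribute location 2 and binding index 1 are arbitrary choices for illustration, and instanceVBO is a made-up name:
// Initialization: allocate a buffer for 100 instance offsets (updatable per frame).
GLuint instanceVBO;
glCreateBuffers(1, &instanceVBO);
glNamedBufferStorage(instanceVBO, sizeof(glm::vec2) * 100, nullptr, GL_DYNAMIC_STORAGE_BIT);

// Attach it to the VAO at binding index 1 and declare attribute 2 as instanced.
glVertexArrayVertexBuffer(quadVAO, 1, instanceVBO, 0, sizeof(glm::vec2));
glEnableVertexArrayAttrib(quadVAO, 2);
glVertexArrayAttribFormat(quadVAO, 2, 2, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(quadVAO, 2, 1);
glVertexArrayBindingDivisor(quadVAO, 1, 1); // advance once per instance, not per vertex

// Per frame, before the draw call: recompute and upload the translations.
glNamedBufferSubData(instanceVBO, 0, sizeof(translations), translations);
In the vertex shader the instanced attribute would then appear as, e.g., layout(location = 2) in vec2 aOffset; and be added to the quad's position before any further transformation.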
I'm attempting to create omnidirectional/point lighting in OpenGL version 3.3. I've searched around on the internet and this site, but so far I have not been able to accomplish this. From my understanding, I am supposed to
Generate a framebuffer using depth component
Generate a cubemap and bind it to said framebuffer
Draw to the individual faces of the cubemap as referenced by the GL_TEXTURE_CUBE_MAP_* enums
Draw the scene normally, and compare the depth value of the fragments against those in the cubemap
Now, I've read that it is better to store the distance from the light to the fragment, rather than the fragment's depth, as it allows for easier cubemap lookup (something about not needing to check each individual texture?)
My current issue is that the light that comes out is actually in a sphere, and does not generate shadows. Another issue is that the framebuffer complains of not being complete, although I was under the impression that a framebuffer does not need a renderbuffer if it renders to a texture.
Here is my framebuffer and cube map initialization:
framebuffer = 0;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glGenTextures(1, &shadowTexture);
glBindTexture(GL_TEXTURE_CUBE_MAP, shadowTexture);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
for(int i = 0; i < 6; i++){
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_DEPTH_COMPONENT16, 800, 800, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
}
glDrawBuffer(GL_NONE);
Shadow Vertex Shader
void main(){
    gl_Position = depthMVP * M * vec4(position, 1);
    pos = (M * vec4(position, 1)).xyz;
}
Shadow Fragment Shader
void main(){
    fragmentDepth = distance(lightPos, pos);
}
Vertex Shader (unrelated bits cut out)
uniform mat4 depthMVP;
void main() {
    PositionWorldSpace = (M * vec4(position, 1.0)).xyz;
    gl_Position = MVP * vec4(position, 1.0);
    ShadowCoord = depthMVP * M * vec4(position, 1.0);
}
Fragment Shader (unrelated code cut)
uniform samplerCube shadowMap;
void main(){
    float bias = 0.005;
    float visibility = 1;
    if(texture(shadowMap, ShadowCoord.xyz).x < distance(lightPos, PositionWorldSpace) - bias)
        visibility = 0.1;
}
Now, as you are probably wondering: what is depthMVP? The depth projection matrix is currently an orthographic projection with the range [-10, 10] in each direction.
Well, it is defined like so:
glm::mat4 depthMVP = depthProjectionMatrix* ??? *i->getModelMatrix();
The issue here is that I don't know what the ??? value is supposed to be. It used to be the camera matrix, however I am unsure if that is what it is supposed to be.
Then the draw code is done for the sides of the cubemap like so:
for(int loop = 0; loop < 6; loop++){
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_CUBE_MAP_POSITIVE_X + loop, shadowTexture, 0);
    glClear(GL_DEPTH_BUFFER_BIT);
    for(auto i : models){
        glUniformMatrix4fv(modelPos, 1, GL_FALSE, glm::value_ptr(i->getModelMatrix()));
        glm::mat4 depthMVP = depthProjectionMatrix * ??? * i->getModelMatrix();
        glUniformMatrix4fv(glGetUniformLocation(shadowProgram, "depthMVP"), 1, GL_FALSE, glm::value_ptr(depthMVP));
        glBindVertexArray(i->vao);
        glDrawElements(GL_TRIANGLES, i->triangles, GL_UNSIGNED_INT, 0);
    }
}
Finally the scene gets drawn normally (I'll spare you the details). Before the calls to draw onto the cubemap I set the framebuffer to the one that I generated earlier, and change the viewport to 800 by 800. I change the framebuffer back to 0 and reset the viewport to 800 by 600 before I do normal drawing. Any help on this subject will be greatly appreciated.
Update 1
After some tweaking and bug fixing, this is the result I get. I fixed an error with the depthMVP not working; what I am drawing here is the distance that is stored in the cubemap.
http://imgur.com/JekOMvf
Basically what happens is it draws the same one-sided projection on each side. This makes sense, since we use the same view matrix for each side; however, I am not sure what sort of view matrix I am supposed to use. I think they are supposed to be lookAt() matrices that are positioned at the center and look out in the cube map side's direction. However, the question that arises is how I am supposed to use these multiple projections in my main draw call.
Update 2
I've gone ahead and created these matrices, however I am unsure of how valid they are (they were ripped from a website about DX cubemaps, so I inverted the Z coordinate).
case 1: // Negative X
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(-1,0,0), glm::vec3(0,-1,0));
    break;
case 3: // Negative Y
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,-1,0), glm::vec3(0,0,-1));
    break;
case 5: // Negative Z
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,0,-1), glm::vec3(0,-1,0));
    break;
case 0: // Positive X
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(1,0,0), glm::vec3(0,-1,0));
    break;
case 2: // Positive Y
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,1,0), glm::vec3(0,0,1));
    break;
case 4: // Positive Z
    sideViews[i] = glm::lookAt(glm::vec3(0), glm::vec3(0,0,1), glm::vec3(0,-1,0));
    break;
The question still stands: what am I supposed to translate the view portion of depthMVP by, given that these are 6 individual matrices? Here is a screenshot of what it currently looks like, with the same frag shader (i.e. actually rendering shadows): http://i.imgur.com/HsOSG5v.png
As you can see, the shadows seem fine; however, the positioning is obviously an issue. The view matrix that I used to generate this was just an inverse translation of the position of the camera (as the lookAt() function would do).
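For what it's worth, the usual approach for a point light is to combine each face rotation with a translation by the negated light position; a sketch, assuming lightPos is the light's world-space position and depthProjectionMatrix is a 90° perspective projection (names taken from the question; the loop body is illustrative):
// View for each face = face rotation * translation that moves the light to the origin.
glm::mat4 lightTranslate = glm::translate(glm::mat4(1.0f), -lightPos);
for (int face = 0; face < 6; face++) {
    glm::mat4 depthView = sideViews[face] * lightTranslate;
    // Then, per model, the full transform for this face's shadow pass would be:
    // glm::mat4 depthMVP = depthProjectionMatrix * depthView * i->getModelMatrix();
}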
Update 3
Code, as it currently stands:
Shadow Vertex
void main(){
    gl_Position = depthMVP * vec4(position, 1);
    pos = (M * vec4(position, 1)).xyz;
}
Shadow Fragment
void main(){
    fragmentDepth = distance(lightPos, pos);
}
Main Vertex
void main(){
    PositionWorldSpace = (M * vec4(position, 1)).xyz;
    ShadowCoord = vec4(PositionWorldSpace - lightPos, 1);
}
Main Frag
void main(){
    float texDist = texture(shadowMap, ShadowCoord.xyz / ShadowCoord.w).x;
    float dist = distance(lightPos, PositionWorldSpace);
    if(texDist < dist)
        visibility = 0.1;
    outColor = vec3(texDist); // This is to visualize the depth maps
}
The perspective matrix being used
glm::mat4 depthProjectionMatrix = glm::perspective(90.f, 1.f, 1.f, 50.f);
Everything is currently working, sort of. The data that the texture stores (i.e. the distance) seems to be stored in a weird manner: it appears to be normalized, as all values are between 0 and 1. Also, there is a 1x1x1 area around the viewer that does not have a projection, but this is due to the frustum and I think will be easy to fix (e.g. offsetting the cameras back 0.5 into the center).
If you leave the fragment depth to OpenGL to determine, you can take advantage of hardware hierarchical Z optimizations. Basically, if you ever write to gl_FragDepth in a fragment shader (without using the newfangled conservative depth GLSL extension), it prevents a hardware optimization called hierarchical Z. Hi-Z, for short, is a technique where rasterization for some primitives can be skipped on the basis that the depth values for the entire primitive lie behind the values already in the depth buffer. But it only works if your shader never writes an arbitrary value to gl_FragDepth.
If instead of writing a fragment's distance from the light to your cube map, you stick with traditional depth you should theoretically get higher throughput (as occluded primitives can be skipped) when writing your shadow maps.
Then, in your fragment shader where you sample your depth cube map, you would convert the distance values into depth values by using a snippet of code like this (where f and n are the far and near plane distances you used when creating your depth cube map):
float VectorToDepthValue(vec3 Vec)
{
    vec3 AbsVec = abs(Vec);
    float LocalZcomp = max(AbsVec.x, max(AbsVec.y, AbsVec.z));
    const float f = 2048.0;
    const float n = 1.0;
    float NormZComp = (f + n) / (f - n) - (2 * f * n) / (f - n) / LocalZcomp;
    return (NormZComp + 1.0) * 0.5;
}
Code borrowed from SO question: Omnidirectional shadow mapping with depth cubemap
So applying that extra bit of code to your shader, it would work out to something like this:
void main () {
    float shadowDepth = texture(shadowMap, ShadowCoord.xyz / ShadowCoord.w).x;
    float testDepth = VectorToDepthValue(lightPos - PositionWorldSpace);
    if (shadowDepth < testDepth)
        visibility = 0.1;
}
So I decided to make a 3D Stars kinda thing in C++ with SDL and OpenGL. I created a Point class which holds x, y, and z values. I create an array of Points and fill it with random coordinates. This seems to work but when I do glTranslatef(0,0,0.1f) or something similar, the stars don't come close, they just disappear.
//OpenGL Initialization Code
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(30.0f,640.0/480.0,0.3f,500.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glEnable(GL_DEPTH_TEST);
//Random point generation
for(int i = 0; i < 200000; i++)
{
    float randomX = (float)rand()/((float)RAND_MAX/100.0f) - 50.0f;
    float randomY = (float)rand()/((float)RAND_MAX/20) - 10.0f;
    float randomZ = (float)rand()/((float)RAND_MAX/20) - 20.0f;
    points[i] = Point(randomX, randomY, randomZ);
}
//Render
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glBegin(GL_POINTS);
for(int i = 0; i < 200000; i++)
{
    glVertex3f(points[i]._x, points[i]._y, points[i]._z);
}
glEnd();
glTranslatef(0,0,0.1f);
SDL_GL_SwapBuffers();
SDL_Flip(screen);
What am I doing wrong?
Is that code your render function?
If it is, then you should move the point generation elsewhere so that they get generated only once.
If it is not, then you should move the initialization code to the beginning of the render function. The matrices should be cleared every frame.
If that doesn't work:
Try temporarily disabling depth testing. If they don't disappear and you see lines, you need to adjust your view frustum. Try using gluLookAt.
Also, try translating them along the negative z axis. If I remember correctly, in OpenGL eye space, coordinates are farther away as z decreases (0 is closer than -1).
I'm not sure why they'd be disappearing, but in your sample code the glTranslate should be inert: you load a new identity matrix at the top for the MODELVIEW matrix, and then translate it only after you've rendered. Try moving the glTranslate to right before your glBegin.
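If it helps, here is a minimal sketch of the reordered render function under the same setup as the question; the z offset accumulates in a variable so the modelview matrix can be reset to identity every frame:
static float zOffset = 0.0f; // accumulated forward movement

// Render one frame
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
zOffset += 0.1f;              // advance a little every frame
glTranslatef(0, 0, zOffset);  // apply the translation BEFORE drawing
glBegin(GL_POINTS);
for (int i = 0; i < 200000; i++)
{
    glVertex3f(points[i]._x, points[i]._y, points[i]._z);
}
glEnd();
SDL_GL_SwapBuffers();
Note that when rendering through OpenGL, SDL_GL_SwapBuffers alone presents the frame; SDL_Flip is for software surfaces and can be dropped.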
I moved the OpenGL initialization code to after the code where I init SDL with SetVideoMode, and it works as it should now.
I am trying to modify this Digiben sample in order to get the effect of particles that generate from a spot (impact point) and float upwards, kind of like the sparks of a fire. The sample has the particles rotating in a circle. I have tried removing the cosine/sine functions and replacing them with a normal glTranslate with increasing Y value, but I just can't get any real results. Could anyone please point out roughly where I should add/modify the translation in this code to obtain that result?
void ParticleMgr::init(){
    tex.Load("part.bmp");
    GLfloat angle = 0; // A particle's angle
    GLfloat speed = 0; // A particle's speed
    // Create all the particles
    for(int i = 0; i < P_MAX; i++)
    {
        speed = float(rand()%50 + 450); // Make a random speed
        // Init the particle with a random speed
        InitParticle(particle[i], speed, angle);
        angle += 360 / (float)P_MAX; // Increment the angle so when all the particles are
                                     // initialized they will be equally positioned in a
                                     // circular fashion
    }
}
}
void ParticleMgr::InitParticle(PARTICLE &particle, GLfloat sss, GLfloat aaa)
{
    particle.speed = sss; // Set the particle's speed
    particle.angle = aaa; // Set the particle's current angle of rotation
    // Randomly set the particle's color
    particle.red = rand()%255;
    particle.green = rand()%255;
    particle.blue = rand()%255;
}
void ParticleMgr::DrawParticle(const PARTICLE &particle)
{
    tex.Use();
    // Calculate the current x and y positions of the particle based on the particle's
    // current angle -- This will make the particles move in a "circular pattern"
    GLfloat xPos = sinf(particle.angle);
    GLfloat yPos = cosf(particle.angle);
    // Translate to the x and y position and the #defined PDEPTH (particle depth)
    glTranslatef(xPos, yPos, PDEPTH);
    // Draw the first quad
    glBegin(GL_QUADS);
        glTexCoord2f(0,0);
        glVertex3f(-5, 5, 0);
        glTexCoord2f(1,0);
        glVertex3f(5, 5, 0);
        glTexCoord2f(1,1);
        glVertex3f(5, -5, 0);
        glTexCoord2f(0,1);
        glVertex3f(-5, -5, 0);
    glEnd(); // Done drawing quad
    // Draw the SECOND part of our particle
    tex.Use();
    glRotatef(particle.angle, 0, 0, 1); // Rotate around the z-axis (depth axis)
    //glTranslatef(0, particle.angle, 0);
    // Draw the second quad
    glBegin(GL_QUADS);
        glTexCoord2f(0,0);
        glVertex3f(-4, 4, 0);
        glTexCoord2f(1,0);
        glVertex3f(4, 4, 0);
        glTexCoord2f(1,1);
        glVertex3f(4, -4, 0);
        glTexCoord2f(0,1);
        glVertex3f(-4, -4, 0);
    glEnd(); // Done drawing quad
    // Translate back to where we began
    glTranslatef(-xPos, -yPos, -PDEPTH);
}
void ParticleMgr::run(){
    for(int i = 0; i < P_MAX; i++)
    {
        DrawParticle(particle[i]);
        // Increment the particle's angle
        particle[i].angle += ANGLE_INC;
    }
}
For now I am adding a glPushMatrix() and glTranslate(x, y, z) in the run() function above, right before the loop, with x, y, z as the position of the enemy, for placing the particles on top of the enemy. Is that the best place for that?
Thanks for any input!
Using glTranslate and glRotate that way will in fact decrease your program's performance. OpenGL is not a scene graph, so the matrix manipulation functions directly influence the drawing process, i.e. they don't set "object state". The issue you're running into is that a 4×4 matrix-matrix multiplication involves 64 multiplications and 48 additions. So you're spending roughly a hundred times the computing power to move a particle, compared with simply updating the vertex position directly.
Now to your problem: like I already told you, glTranslate operates on the (global) matrix state of one of 4 selectable matrices, and the effects accumulate, i.e. each glTranslate will start from the matrix the previous glTranslate left. OpenGL provides a matrix stack, where one can push a copy of the current matrix to work with, then pop to revert to the state before.
However: matrix manipulation has been removed entirely from OpenGL-3 core and later. OpenGL matrix manipulation was never accelerated (except on one particular graphics workstation made by SGI around 1996). Today it is an anachronism, as every respectable program working with 3D geometry uses much more sophisticated matrix manipulation, through either its own implementation or a 3rd-party library; OpenGL's matrix stack was just redundant. So I strongly suggest you forget about OpenGL's matrix manipulation functionality and roll your own.
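As a rough sketch of what updating positions directly could look like for the upward-floating sparks the question asks about (the x/y members and the dt parameter are illustrative additions, not part of the Digiben sample):
// Hypothetical extension of the sample's PARTICLE struct: give each
// particle its own position and update it on the CPU each frame.
struct PARTICLE {
    GLfloat x, y;              // current position (illustrative additions)
    GLfloat speed;             // upward speed, units per second
    GLfloat red, green, blue;
};

void updateAndDraw(PARTICLE &p, float dt)
{
    p.y += p.speed * dt; // drift upwards, no matrix math involved

    glBegin(GL_QUADS);   // emit the quad at the particle's own position
        glTexCoord2f(0, 0); glVertex3f(p.x - 5, p.y + 5, PDEPTH);
        glTexCoord2f(1, 0); glVertex3f(p.x + 5, p.y + 5, PDEPTH);
        glTexCoord2f(1, 1); glVertex3f(p.x + 5, p.y - 5, PDEPTH);
        glTexCoord2f(0, 1); glVertex3f(p.x - 5, p.y - 5, PDEPTH);
    glEnd();
}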
I work with an Augmented Reality framework on Android, and it gives me the camera pose as a 6-degrees-of-freedom vector that includes the estimated camera optical center and camera orientation.
Since I'm a complete newbie in OpenGL, I don't quite understand what that means, and my question is: how do I use this 4x4 matrix to position my camera in OpenGL?
Below is a sample from Android SDK which renders a simple textured triangle (I didn't know which details are important so I included the whole two classes - the renderer and the triangle object).
My guess is that it positions the camera with gluLookAt in onDrawFrame(). I want to adjust this.
I receive these matrices from the framework (these are just samples):
When the camera should look directly at the triangle, I need to use a matrix of this type to somehow position my camera:
0.9930384 0.045179322 0.10878302 0.0
-0.018241059 0.9713616 -0.23690554 0.0
-0.11637083 0.23327199 0.9654233 0.0
21.803288 -14.920643 -150.6514 1.0
When I move the camera a bit far away:
0.9763242 0.041258257 0.21234424 0.0
0.014808476 0.96659267 -0.2558918 0.0
-0.21580763 0.25297752 0.94309634 0.0
17.665 -18.520836 -243.28784 1.0
When I tilt my camera a bit to the right:
0.8340566 0.0874321 0.5447095 0.0
0.054606464 0.96943074 -0.23921578 0.0
-0.5489726 0.22926341 0.8037848 0.0
-8.809776 -7.5869675 -244.01971 1.0
Any thoughts? My guess is that the only thing that matters is actually the last row, everything else is close to zero.
I'd be happy to get any advice on how to adjust this code to use those matrices, including any settings such as setting perspective matrices or whatsoever (again, a newbie).
public class TriangleRenderer implements GLSurfaceView.Renderer{
public TriangleRenderer(Context context) {
mContext = context;
mTriangle = new Triangle();
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
/*
* By default, OpenGL enables features that improve quality
* but reduce performance. One might want to tweak that
* especially on software renderer.
*/
gl.glDisable(GL10.GL_DITHER);
/*
* Some one-time OpenGL initialization can be made here
* probably based on features of this particular context
*/
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT,
GL10.GL_FASTEST);
gl.glClearColor(0,0,0,0);
gl.glShadeModel(GL10.GL_SMOOTH);
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glEnable(GL10.GL_TEXTURE_2D);
/*
* Create our texture. This has to be done each time the
* surface is created.
*/
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureID = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER,
GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MAG_FILTER,
GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE,
GL10.GL_REPLACE);
InputStream is = mContext.getResources()
.openRawResource(R.raw.robot);
Bitmap bitmap;
try {
bitmap = BitmapFactory.decodeStream(is);
} finally {
try {
is.close();
} catch(IOException e) {
// Ignore.
}
}
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
}
public void onDrawFrame(GL10 gl) {
/*
* By default, OpenGL enables features that improve quality
* but reduce performance. One might want to tweak that
* especially on software renderer.
*/
gl.glDisable(GL10.GL_DITHER);
gl.glTexEnvx(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE,
GL10.GL_MODULATE);
/*
* Usually, the first thing one might want to do is to clear
* the screen. The most efficient way of doing this is to use
* glClear().
*/
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
/*
* Now we're ready to draw some 3D objects
*/
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0, 0, -5, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glActiveTexture(GL10.GL_TEXTURE0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
GL10.GL_REPEAT);
gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
GL10.GL_REPEAT);
long time = SystemClock.uptimeMillis() % 4000L;
float angle = 0.090f * ((int) time);
gl.glRotatef(angle, 0, 0, 1.0f);
mTriangle.draw(gl);
}
public void onSurfaceChanged(GL10 gl, int w, int h) {
gl.glViewport(0, 0, w, h);
/*
* Set our projection matrix. This doesn't have to be done
* each time we draw, but usually a new projection needs to
* be set when the viewport is resized.
*/
float ratio = (float) w / h;
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-ratio, ratio, -1, 1, 3, 7);
}
private Context mContext;
private Triangle mTriangle;
private int mTextureID;
}

class Triangle {
public Triangle() {
// Buffers to be passed to gl*Pointer() functions
// must be direct, i.e., they must be placed on the
// native heap where the garbage collector cannot
// move them.
//
// Buffers with multi-byte datatypes (e.g., short, int, float)
// must have their byte order set to native order
ByteBuffer vbb = ByteBuffer.allocateDirect(VERTS * 3 * 4);
vbb.order(ByteOrder.nativeOrder());
mFVertexBuffer = vbb.asFloatBuffer();
ByteBuffer tbb = ByteBuffer.allocateDirect(VERTS * 2 * 4);
tbb.order(ByteOrder.nativeOrder());
mTexBuffer = tbb.asFloatBuffer();
ByteBuffer ibb = ByteBuffer.allocateDirect(VERTS * 2);
ibb.order(ByteOrder.nativeOrder());
mIndexBuffer = ibb.asShortBuffer();
// A unit-sided equilateral triangle centered on the origin.
float[] coords = {
// X, Y, Z
-0.5f, -0.25f, 0,
0.5f, -0.25f, 0,
0.0f, 0.559016994f, 0
};
for (int i = 0; i < VERTS; i++) {
for(int j = 0; j < 3; j++) {
mFVertexBuffer.put(coords[i*3+j] * 2.0f);
}
}
for (int i = 0; i < VERTS; i++) {
for(int j = 0; j < 2; j++) {
mTexBuffer.put(coords[i*3+j] * 2.0f + 0.5f);
}
}
for(int i = 0; i < VERTS; i++) {
mIndexBuffer.put((short) i);
}
mFVertexBuffer.position(0);
mTexBuffer.position(0);
mIndexBuffer.position(0);
}
public void draw(GL10 gl) {
gl.glFrontFace(GL10.GL_CCW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mFVertexBuffer);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTexBuffer);
gl.glDrawElements(GL10.GL_TRIANGLE_STRIP, VERTS,
GL10.GL_UNSIGNED_SHORT, mIndexBuffer);
}
private final static int VERTS = 3;
private FloatBuffer mFVertexBuffer;
private FloatBuffer mTexBuffer;
private ShortBuffer mIndexBuffer;
}
The "trick" is to understand, that OpenGL does not have a camera. What is does is transforming the whole world by a movement that's the exact opposite of what a camera would have to be moved from position (0,0,0).
Such transformations (=movements) are described in form of so called homogenous transformation matrices. Fixed Function OpenGL uses a combination of two matrices:
Modelview M, which describes placement of the world and view (and objects within the world to some degree).
Projection P, which could be seen as kind of "lens" of the virtual camera (remember, there is no camera in OpenGL).
Any vertex position v is transformed by c = P * M * v (c is the transformed vertex coordinate in clip space; that is, screen space not in pixels but with the screen edges at -1, 1 – the viewport then maps from clip space to screen pixel space).
What Android gives you is such a transformation matrix. I'm not sure, but looking at the values it might be that you're given P * M. As long as there is no lighting involved, you can load that directly into the modelview matrix using glLoadMatrix, with the projection set to identity. You pass matrices to OpenGL as an array of 16 floats; the indexing order of OpenGL sometimes confuses people, but the way you dumped the Android matrices I think you already got them right. (You printed them "wrong", i.e. transposed, which is the same pitfall people fall into with OpenGL glLoadMatrix, but transposing twice is the identity, so it's probably right. If it doesn't work at first, flip columns and rows, i.e. "mirror" the matrix on the diagonal running from top-left to bottom-right.)
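To illustrate the glLoadMatrix route, here is a sketch with plain C-style GL calls (the GL10 Java bindings above map one-to-one); it assumes, as suggested, that the printed rows are really columns, i.e. the values can be passed to glLoadMatrixf in exactly the order they were printed, using the first sample matrix from the question:
// The AR framework's pose, one column after another (column-major),
// which is exactly the layout glLoadMatrixf consumes.
const GLfloat pose[16] = {
     0.9930384f,   0.045179322f,  0.10878302f, 0.0f, // column 0
    -0.018241059f, 0.9713616f,   -0.23690554f, 0.0f, // column 1
    -0.11637083f,  0.23327199f,   0.9654233f,  0.0f, // column 2
    21.803288f,   -14.920643f,  -150.6514f,    1.0f  // column 3
};

glMatrixMode(GL_PROJECTION);
glLoadIdentity();     // projection set to identity, per the suggestion above

glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(pose);  // load the AR pose as the modelview matrix
// ... then draw the triangle as usual, without the gluLookAt call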