In a 2d platform game, I want to visualize the bounding box that I have created to aid debugging. How can I accomplish this in Visual C++ 2012?
First define a simple vertex structure:
struct Vertex
{
D3DXVECTOR3 position; //a 3D point even in 2D rendering
};
Now you can create vertex and index arrays:
Vertex *vertices;
unsigned long *indices;
D3D11_BUFFER_DESC vertexBufferDesc, indexBufferDesc;
D3D11_SUBRESOURCE_DATA vertexData, indexData;
//create the vertex array
vertices = new Vertex[5];
if(!vertices)
{
//handle error
}
//load the vertex array with data
vertices[0].position = D3DXVECTOR3(left, top, 0.0f);
vertices[1].position = D3DXVECTOR3(right, top, 0.0f);
vertices[2].position = D3DXVECTOR3(right, bottom, 0.0f);
vertices[3].position = D3DXVECTOR3(left, bottom, 0.0f);
vertices[4].position = D3DXVECTOR3(left, top, 0.0f);
//create the index array
indices = new unsigned long[5];
if(!indices)
{
//handle error
}
//load the index array with data
for(unsigned long i = 0; i < 5; i++)
indices[i] = i;
And load them into buffers:
ID3D11Buffer *vertexBuffer, *indexBuffer;
HRESULT result;
//set up the description of the dynamic vertex buffer
vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC; //allows the CPU to rewrite the vertices each frame
vertexBufferDesc.ByteWidth = sizeof(Vertex) * 5;
vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE; //required for D3D11_USAGE_DYNAMIC
vertexBufferDesc.MiscFlags = 0;
vertexBufferDesc.StructureByteStride = 0;
//give the subresource structure a pointer to the vertex data
vertexData.pSysMem = vertices;
vertexData.SysMemPitch = 0;
vertexData.SysMemSlicePitch = 0;
//now create the vertex buffer
result = device->CreateBuffer(&vertexBufferDesc, &vertexData, &vertexBuffer);
if(FAILED(result))
{
//handle error
}
//set up the description of the static index buffer
indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
indexBufferDesc.ByteWidth = sizeof(unsigned long) * 5;
indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
indexBufferDesc.CPUAccessFlags = 0;
indexBufferDesc.MiscFlags = 0;
indexBufferDesc.StructureByteStride = 0;
//give the subresource structure a pointer to the index data
indexData.pSysMem = indices;
indexData.SysMemPitch = 0;
indexData.SysMemSlicePitch = 0;
//create the index buffer
result = device->CreateBuffer(&indexBufferDesc, &indexData, &indexBuffer);
if(FAILED(result))
{
//handle error
}
Set up the rectangle to be rendered like this:
unsigned int stride = sizeof(Vertex);
unsigned int offset = 0;
deviceContext->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
deviceContext->IASetIndexBuffer(indexBuffer, DXGI_FORMAT_R32_UINT, 0);
deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINESTRIP);
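With the input assembler set up, the draw itself is a single indexed call; a minimal sketch, assuming your shaders and the constant buffer holding the orthographic matrix are already bound:
//draw the 5 indices as a connected line strip (the 4 edges of the rectangle)
deviceContext->DrawIndexed(5, 0, 0);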
Render with the shader of your choice, remembering to pass the orthographic matrix to the shader instead of the perspective matrix. Voilà! A rectangle. But you can't move it yet; you have to define another function for that:
bool UpdateRectBuffers(ID3D11Buffer *vertexBuffer, ID3D11DeviceContext *deviceContext, float top, float left, float bottom, float right)
{
Vertex *vertices;
D3D11_MAPPED_SUBRESOURCE mappedResource;
Vertex *verticesPtr;
HRESULT result;
//create a temporary vertex array to fill with the updated data
vertices = new Vertex[5];
if(!vertices)
{
return false;
}
vertices[0].position = D3DXVECTOR3(left, top, 0.0f);
vertices[1].position = D3DXVECTOR3(right, top, 0.0f);
vertices[2].position = D3DXVECTOR3(right, bottom, 0.0f);
vertices[3].position = D3DXVECTOR3(left, bottom, 0.0f);
vertices[4].position = D3DXVECTOR3(left, top, 0.0f);
//lock the vertex buffer so it can be written to
result = deviceContext->Map(vertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
if(FAILED(result))
{
return false;
}
verticesPtr = (Vertex*)mappedResource.pData;
//copy the data into the vertex buffer
memcpy(verticesPtr, (void*)vertices, (sizeof(Vertex) * 5));
deviceContext->Unmap(vertexBuffer, 0);
delete [] vertices;
vertices = 0;
return true;
}
The dependencies of this code are float top, float left, float bottom, float right, ID3D11DeviceContext *deviceContext, and ID3D11Device *device.
As you did not describe what you are already able to do, my answer is based on some assumptions.
Assumptions
So, I'm assuming that
you are able to draw a sprite (i.e. a colored/textured rectangle, i.e. a quad of vertices / a pair of triangles)
you already have the data that defines the bounding volume (in any of several ways)
and I will not explain how to do those things.
Possible solutions
Variant 1:
You don't need anything special to draw edges. "Edges" (straight lines) are just long, thin rectangles, so you need to put 4 thin rectangles in the places where your edges should be.
This way you can choose the thickness and color of the line, and even use a texture (dotted lines, dashed lines, lines with pink kittens, etc.) or shader effects such as procedural coloring, smoothing, or blur. You will likely need lines for your game anyway.
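A minimal sketch of the idea, assuming a hypothetical helper DrawQuad(left, top, right, bottom) that renders one filled rectangle (y grows downward, screen-style):
//draw the outline of (left, top, right, bottom) as four thin filled quads
void DrawRectOutline(float left, float top, float right, float bottom, float thickness)
{
    DrawQuad(left, top, right, top + thickness);                             //top edge
    DrawQuad(left, bottom - thickness, right, bottom);                       //bottom edge
    DrawQuad(left, top + thickness, left + thickness, bottom - thickness);   //left edge
    DrawQuad(right - thickness, top + thickness, right, bottom - thickness); //right edge
}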
Variant 2:
You can draw lines instead of triangles. Use the "line list" primitive topology instead of "triangle list". (See ID3D11DeviceContext::IASetPrimitiveTopology() and D3D11_PRIMITIVE_TOPOLOGY_LINELIST.)
This way you cannot customize much, but it is much easier.
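For example, a line list needs two indices per edge. A minimal sketch, assuming the 4 corner vertices from the first answer are in the bound vertex buffer and outlineIndices has been uploaded to the bound index buffer:
//each pair of indices describes one independent line segment
unsigned long outlineIndices[8] = { 0,1, 1,2, 2,3, 3,0 };
deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINELIST);
deviceContext->DrawIndexed(8, 0, 0);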
Variant 3:
Draw the rectangle in wireframe mode. Just set the rasterizer state's fill mode. See ID3D11DeviceContext::RSSetState, D3D11_RASTERIZER_DESC::FillMode, and D3D11_FILL_WIREFRAME.
You'll get the edges of the triangles, even the ones that are diagonals of the rectangle. This way you cannot set the thickness or the color, but it is really, really simple.
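A minimal sketch of such a rasterizer state (remaining fields left zeroed):
D3D11_RASTERIZER_DESC rd = {};
rd.FillMode = D3D11_FILL_WIREFRAME; //draw triangle edges only
rd.CullMode = D3D11_CULL_NONE;      //show the rectangle regardless of winding
rd.DepthClipEnable = TRUE;
ID3D11RasterizerState *wireframeState = nullptr;
device->CreateRasterizerState(&rd, &wireframeState);
deviceContext->RSSetState(wireframeState);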
Variant 4:
Use any 2D drawing library that will do Variant 1 for you. As D3DX is obsolete, using D3DXLine is no longer recommended. You can try the DirectX Tool Kit or any other library available on the web.
Obviously, you'll get an additional dependency.
P.S.
If my initial assumptions were incorrect, then you'll not get an answer here; nobody on Stack Overflow will explain such basic things to you. To correct the situation:
There are a zillion ways to draw a rectangle. Pick any tutorial online (e.g. rastertek, braynzarsoft) to get an idea of what's happening in the "Possible solutions" part of this answer.
There are a zillion ways to calculate a bounding rectangle, one for each of the zillion ways of defining it. Note that to define a rectangle in 2D space you need at least 2 points, and to define a 2D point you need 2 coordinate values, so 4 values per rectangle. Google or pick up a math book for additional info.
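For instance, a minimal sketch of computing an axis-aligned bounding rectangle from a set of 2D points (the Point2 and Rect types here are hypothetical; y grows downward, screen-style):
struct Point2 { float x, y; };
struct Rect { float left, top, right, bottom; };
//grow the rectangle until it encloses every point
Rect ComputeBounds(const Point2 *points, size_t count)
{
    Rect r = { points[0].x, points[0].y, points[0].x, points[0].y };
    for (size_t i = 1; i < count; ++i)
    {
        if (points[i].x < r.left)   r.left   = points[i].x;
        if (points[i].y < r.top)    r.top    = points[i].y;
        if (points[i].x > r.right)  r.right  = points[i].x;
        if (points[i].y > r.bottom) r.bottom = points[i].y;
    }
    return r;
}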
Hope it helps!
Related
I want to understand how to create loads of similar 2-D objects and then animate each one separately, using OpenGL.
I have a feeling that it will be done using this and glfwGetTime().
Can anyone here help point me in the right direction?
OK, so here is the general thing that I have tried so far:
We have this array of translations, created by the following code, which I have modified slightly to shift each location based on time.
glm::vec2 translations[100];
int index = 0;
float offset = 0.1f;
float time = (float)glfwGetTime(); // new code
for (int y = -10; y < 10; y += 2)
{
for (int x = -10; x < 10; x += 2)
{
glm::vec2 translation;
translation.x = (float)x / 10.0f + offset + time; // new adjustment
translation.y = (float)y / 10.0f + offset + time*time; // new adjustment
translations[index++] = translation;
}
}
Later, in the render loop,
while (!glfwWindowShouldClose(window))
{
glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
shader.use();
glBindVertexArray(quadVAO);
glDrawArraysInstanced(GL_TRIANGLES, 0, 6, 100); // 100 instances of a 6-vertex quad (two triangles)
glBindVertexArray(0);
time = glfwGetTime(); // new adjustment
glfwSwapBuffers(window);
glfwPollEvents();
}
is what I have tried. I suppose I am misunderstanding the way the graphics pipeline works. As I mentioned earlier, my guess is that I need to use some glm matrices to make this work as I imagined it, but am not sure ...
The general direction would be, during initialization:
Allocate a buffer to hold the positions of your instances (glNamedBufferStorage).
Set up an instanced vertex attribute for your VAO that sources the data from that buffer (glVertexArrayBindingDivisor and others).
Update your vertex shader to apply the position of your instance (coming from the instanced attribute) to the total transformation calculated within the shader.
Then, once per frame (or when the position changes):
Calculate the positions of all your instances (the code you posted).
Submit those to the previously allocated buffer with glNamedBufferSubData.
So far you have shown the code that calculates the positions. From here, try to implement the rest (a minimal sketch follows below), and ask a specific question if you have difficulties with any particular part of it.
I posted an example of using instancing with multidraw that you can use for reference. Note that in your case you don't need the multidraw, however, just the instancing part.
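As a rough illustration of the steps above, here is a sketch using the DSA functions mentioned; the buffer name instanceVBO and attribute location 2 are assumptions about your setup:
//allocate a buffer for 100 vec2 offsets, writable later via glNamedBufferSubData
GLuint instanceVBO;
glCreateBuffers(1, &instanceVBO);
glNamedBufferStorage(instanceVBO, sizeof(glm::vec2) * 100, nullptr, GL_DYNAMIC_STORAGE_BIT);
//attach it to binding point 1 of the VAO, advancing once per instance
glVertexArrayVertexBuffer(quadVAO, 1, instanceVBO, 0, sizeof(glm::vec2));
glVertexArrayBindingDivisor(quadVAO, 1, 1);
glEnableVertexArrayAttrib(quadVAO, 2);
glVertexArrayAttribFormat(quadVAO, 2, 2, GL_FLOAT, GL_FALSE, 0);
glVertexArrayAttribBinding(quadVAO, 2, 1);
//then, once per frame after recomputing translations on the CPU:
glNamedBufferSubData(instanceVBO, 0, sizeof(translations), translations);
In the vertex shader, the instanced attribute (e.g. layout(location = 2) in vec2 aOffset;) is then added to the vertex position before the usual transformation.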
I'm trying to write a general D3D11 line draw with variable width. It works, but only when the line is at about 45 degrees; otherwise it 'breaks up' as shown in the pic. Ignore the model and the triangle.
First, the calls to attempt to draw the lines, pretty basic:
g_UILineShader.SetActive();
for (float x = 0; x < 800; x = x + 10)
{
g_UILineShader.DrawUILine(pd3dDevice, x, 0, 800-x, 600, 3, XMFLOAT4(1.0f, 1.0f, 0.0f, 1.0f));
}
g_UILineShader.Render(pd3dDevice);
and ultimately, the render code for the triangle list:
HRESULT Render(ID3D11Device * pd3dDevice)
{
auto devcon = DXUTGetD3D11DeviceContext();
// Copy all of the vertices in the array to the vertex buffer
D3D11_MAPPED_SUBRESOURCE ms;
devcon->Map(_pVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms); // map the buffer
memcpy(ms.pData, &_vertices[0], _vertices.size() * sizeof(UILineVertex)); // copy the data
devcon->Unmap(_pVertexBuffer, 0); // unmap the buffer
// Set the vertex buffer on the device context
UINT stride = sizeof(UILineVertex);
UINT offset = 0;
ID3D11Buffer * pBuffer = _pVertexBuffer;
devcon->IASetVertexBuffers(0, 1, &pBuffer, &stride, &offset);
// Select which primitive type we are using
devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
// Draw the vertex buffer to the back buffer
devcon->Draw((UINT)_vertices.size(), 0);
// Once rendered we can discard all of the vertex data
_vertices.clear();
return S_OK;
}
Can anyone spot a bug or fundamental misunderstanding? If there's a better way to draw lines in screen space that allow for different angles and thicknesses, I'd rather not reinvent the wheel but haven't come across one.
The shader seems fine and merely divides the screen space coords by the screen size to get it into the -1,+1 space in both the X and Y dimensions. I don't think normals would matter, or could they?
Just in case someone runs into something vaguely similar, it turned out to be my rasterizer state; the cull mode was the opposite of the way I was winding my triangles. "Yay" for the VS2013 graphics debugger.
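For reference, a common way to build such a thick-line quad is to extrude the segment along its unit perpendicular. A sketch of what a hypothetical DrawUILine body might look like, assuming UILineVertex holds a 2D position plus a color; the winding of the two triangles must match your cull mode:
//append two triangles forming a quad of the given width around segment (x0,y0)-(x1,y1)
void DrawUILine(float x0, float y0, float x1, float y1, float width, XMFLOAT4 color)
{
    float dx = x1 - x0, dy = y1 - y0;
    float len = sqrtf(dx * dx + dy * dy);
    if (len <= 0.0f) return;
    float nx = -dy / len * width * 0.5f; //unit perpendicular scaled to half the width
    float ny =  dx / len * width * 0.5f;
    UILineVertex a = { x0 + nx, y0 + ny, color };
    UILineVertex b = { x1 + nx, y1 + ny, color };
    UILineVertex c = { x1 - nx, y1 - ny, color };
    UILineVertex d = { x0 - nx, y0 - ny, color };
    //two triangles with consistent winding: a-b-c and a-c-d
    _vertices.push_back(a); _vertices.push_back(b); _vertices.push_back(c);
    _vertices.push_back(a); _vertices.push_back(c); _vertices.push_back(d);
}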
I have a simple rendering program I just made, but it refuses to draw anything but what I set the initial vertices to. Even if I call Map() and Unmap(), the geometry doesn't seem to change. I have a feeling it has to do with Map() and Unmap(), but I'm not sure. Right now, my initial vertex data consists of one triangle. Then I map the vertex buffer with a new set of vertices which consists of two triangles, but they aren't rendered. Only one triangle is rendered even though I pass in 6 for the vertex count in the draw function. Here is my setup code:
VertexData vertexData[] =
{
{0.0f,0.0f,0.0f,0.0f,0.0f,1.0f,0.0f,0.0f},
{0.0f,1.0f,0.0f,0.0f,0.0f,1.0f,0.0f,0.0f},
{1.0f,1.0f,0.0f,0.0f,0.0f,1.0f,0.0f,0.0f}
};
D3D11_BUFFER_DESC bd;
ZeroMemory(&bd,sizeof(bd));
bd.Usage = D3D11_USAGE_DYNAMIC;
bd.ByteWidth = sizeof(VertexData)*256;
bd.BindFlags = D3D11_BIND_VERTEX_BUFFER;
bd.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
D3D11_SUBRESOURCE_DATA vertexBufferData;
vertexBufferData.pSysMem = vertexData;
vertexBufferData.SysMemPitch = 0;
vertexBufferData.SysMemSlicePitch = 0;
_device->CreateBuffer(&bd, &vertexBufferData, &_renderBuffer);
Mapping function:
data is a vector containing six VertexData structs, for six vertices
D3D11_MAPPED_SUBRESOURCE ms;
ZeroMemory(&ms,sizeof(D3D11_MAPPED_SUBRESOURCE));
_deviceContext->Map(_renderBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
memcpy(ms.pData, &data[0], sizeof(VertexData) * data.size());
_deviceContext->Unmap(_renderBuffer, 0);
And here is my rendering code:
_deviceContext->ClearRenderTargetView(_backBuffer,D3DXCOLOR(0.0f,0.0f,0.0f,1.0f));
_deviceContext->Draw(6,0);
_swapChain->Present(0,0);
EDIT: Disabled backface culling, but the triangle is still not appearing.
Mapping Function
void Render::CopyDataToBuffers(std::vector<VertexData> data)
{
D3D11_MAPPED_SUBRESOURCE ms;
ZeroMemory(&ms,sizeof(D3D11_MAPPED_SUBRESOURCE));
_deviceContext->Map(_renderBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &ms);
memcpy(ms.pData, &data[0], sizeof(VertexData) * data.size());
_deviceContext->Unmap(_renderBuffer, 0);
}
Calling of Mapping function
std::vector<VertexData> vertexDataVec;
vertexDataVec.push_back(vertexData[0]);
vertexDataVec.push_back(vertexData[1]);
vertexDataVec.push_back(vertexData[2]);
vertexDataVec.push_back(vertexData[3]);
vertexDataVec.push_back(vertexData[4]);
vertexDataVec.push_back(vertexData[5]);
Render::GetRender().CopyDataToBuffers(vertexDataVec);
To fix the problem, I just had to create a ID3D11RasterizerState and disable culling.
Here is the structure:
ID3D11RasterizerState* rasterizerState = NULL;
D3D11_RASTERIZER_DESC rd =
{
D3D11_FILL_SOLID,  //FillMode
D3D11_CULL_NONE,   //CullMode: disable culling
TRUE,              //FrontCounterClockwise
1,                 //DepthBias
10.0f,             //DepthBiasClamp
1.0f,              //SlopeScaledDepthBias
TRUE               //DepthClipEnable (remaining fields default to FALSE)
};
_device->CreateRasterizerState(&rd,&rasterizerState);
_deviceContext->RSSetState(rasterizerState);
I work with an Augmented Reality framework on Android, and it gives me the camera pose as a 6-degrees-of-freedom vector that includes the estimated camera optical center and camera orientation.
Since I'm a complete newbie in OpenGL, I don't quite understand what that means, and my question is: how do I use this 4x4 matrix to position my camera in OpenGL?
Below is a sample from Android SDK which renders a simple textured triangle (I didn't know which details are important so I included the whole two classes - the renderer and the triangle object).
My guess is that it positions the camera with gluLookAt in onDrawFrame(), and that is what I want to adjust.
I receive matrices like these from the framework (these are just samples).
When the camera should look directly at the triangle, I need to use a matrix of this type to somehow position my camera:
0.9930384 0.045179322 0.10878302 0.0
-0.018241059 0.9713616 -0.23690554 0.0
-0.11637083 0.23327199 0.9654233 0.0
21.803288 -14.920643 -150.6514 1.0
When I move the camera a bit far away:
0.9763242 0.041258257 0.21234424 0.0
0.014808476 0.96659267 -0.2558918 0.0
-0.21580763 0.25297752 0.94309634 0.0
17.665 -18.520836 -243.28784 1.0
When I tilt my camera a bit to the right:
0.8340566 0.0874321 0.5447095 0.0
0.054606464 0.96943074 -0.23921578 0.0
-0.5489726 0.22926341 0.8037848 0.0
-8.809776 -7.5869675 -244.01971 1.0
Any thoughts? My guess is that the only thing that actually matters is the last row; everything else is close to zero.
I'd be happy to get any advice on how to adjust this code to use those matrices, including any settings such as perspective matrices or whatever (again, I'm a newbie).
public class TriangleRenderer implements GLSurfaceView.Renderer{
public TriangleRenderer(Context context) {
mContext = context;
mTriangle = new Triangle();
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
/*
* By default, OpenGL enables features that improve quality
* but reduce performance. One might want to tweak that
* especially on software renderer.
*/
gl.glDisable(GL10.GL_DITHER);
/*
* Some one-time OpenGL initialization can be made here
* probably based on features of this particular context
*/
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT,
GL10.GL_FASTEST);
gl.glClearColor(0,0,0,0);
gl.glShadeModel(GL10.GL_SMOOTH);
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glEnable(GL10.GL_TEXTURE_2D);
/*
* Create our texture. This has to be done each time the
* surface is created.
*/
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureID = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER,
GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MAG_FILTER,
GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE,
GL10.GL_REPLACE);
InputStream is = mContext.getResources()
.openRawResource(R.raw.robot);
Bitmap bitmap;
try {
bitmap = BitmapFactory.decodeStream(is);
} finally {
try {
is.close();
} catch(IOException e) {
// Ignore.
}
}
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
}
public void onDrawFrame(GL10 gl) {
/*
* By default, OpenGL enables features that improve quality
* but reduce performance. One might want to tweak that
* especially on software renderer.
*/
gl.glDisable(GL10.GL_DITHER);
gl.glTexEnvx(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE,
GL10.GL_MODULATE);
/*
* Usually, the first thing one might want to do is to clear
* the screen. The most efficient way of doing this is to use
* glClear().
*/
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
/*
* Now we're ready to draw some 3D objects
*/
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0, 0, -5, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glActiveTexture(GL10.GL_TEXTURE0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
GL10.GL_REPEAT);
gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
GL10.GL_REPEAT);
long time = SystemClock.uptimeMillis() % 4000L;
float angle = 0.090f * ((int) time);
gl.glRotatef(angle, 0, 0, 1.0f);
mTriangle.draw(gl);
}
public void onSurfaceChanged(GL10 gl, int w, int h) {
gl.glViewport(0, 0, w, h);
/*
* Set our projection matrix. This doesn't have to be done
* each time we draw, but usually a new projection needs to
* be set when the viewport is resized.
*/
float ratio = (float) w / h;
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-ratio, ratio, -1, 1, 3, 7);
}
private Context mContext;
private Triangle mTriangle;
private int mTextureID;
}

class Triangle {
public Triangle() {
// Buffers to be passed to gl*Pointer() functions
// must be direct, i.e., they must be placed on the
// native heap where the garbage collector cannot
// move them.
//
// Buffers with multi-byte datatypes (e.g., short, int, float)
// must have their byte order set to native order
ByteBuffer vbb = ByteBuffer.allocateDirect(VERTS * 3 * 4);
vbb.order(ByteOrder.nativeOrder());
mFVertexBuffer = vbb.asFloatBuffer();
ByteBuffer tbb = ByteBuffer.allocateDirect(VERTS * 2 * 4);
tbb.order(ByteOrder.nativeOrder());
mTexBuffer = tbb.asFloatBuffer();
ByteBuffer ibb = ByteBuffer.allocateDirect(VERTS * 2);
ibb.order(ByteOrder.nativeOrder());
mIndexBuffer = ibb.asShortBuffer();
// A unit-sided equilateral triangle centered on the origin.
float[] coords = {
// X, Y, Z
-0.5f, -0.25f, 0,
0.5f, -0.25f, 0,
0.0f, 0.559016994f, 0
};
for (int i = 0; i < VERTS; i++) {
for(int j = 0; j < 3; j++) {
mFVertexBuffer.put(coords[i*3+j] * 2.0f);
}
}
for (int i = 0; i < VERTS; i++) {
for(int j = 0; j < 2; j++) {
mTexBuffer.put(coords[i*3+j] * 2.0f + 0.5f);
}
}
for(int i = 0; i < VERTS; i++) {
mIndexBuffer.put((short) i);
}
mFVertexBuffer.position(0);
mTexBuffer.position(0);
mIndexBuffer.position(0);
}
public void draw(GL10 gl) {
gl.glFrontFace(GL10.GL_CCW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mFVertexBuffer);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTexBuffer);
gl.glDrawElements(GL10.GL_TRIANGLE_STRIP, VERTS,
GL10.GL_UNSIGNED_SHORT, mIndexBuffer);
}
private final static int VERTS = 3;
private FloatBuffer mFVertexBuffer;
private FloatBuffer mTexBuffer;
private ShortBuffer mIndexBuffer;
}
The "trick" is to understand, that OpenGL does not have a camera. What is does is transforming the whole world by a movement that's the exact opposite of what a camera would have to be moved from position (0,0,0).
Such transformations (=movements) are described in form of so called homogenous transformation matrices. Fixed Function OpenGL uses a combination of two matrices:
The modelview matrix M, which describes the placement of the world and the view (and, to some degree, of objects within the world).
The projection matrix P, which can be seen as a kind of "lens" of the virtual camera (remember, there is no camera in OpenGL).
Any vertex position v is transformed by c = P * M * v, where c is the transformed vertex coordinate in clip space; that is, screen space measured not in pixels but with the screen edges at -1 and 1 (the viewport then maps from clip space to screen pixel space).
What Android gives you is such a transformation matrix. I'm not sure, but looking at the values it may be that you are given P * M. As long as there is no lighting involved, you can load that directly into the modelview matrix using glLoadMatrix, with the projection set to identity. You pass matrices to OpenGL as an array of 16 floats. The indexing order of OpenGL sometimes confuses people, but the way you dumped the Android matrices suggests you already have them right: you printed them "wrong" (transposed), which is the same pitfall people fall into with glLoadMatrix, but transposing twice is the identity, so it is probably correct. If it doesn't work at first, flip columns and rows, i.e. "mirror" the matrix on the diagonal running from top-left to bottom-right.
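For illustration, a minimal sketch of loading one of your sample matrices, written in C-style GL for brevity (the GL10 Java equivalents have the same names, e.g. gl.glLoadMatrixf(pose, 0)); the 16 floats are assumed to already be in the column-major order OpenGL expects:
//the 4x4 pose from the framework, flattened in OpenGL's column-major order
GLfloat pose[16] = {
     0.9930384f,   0.045179322f,  0.10878302f,  0.0f,
    -0.018241059f, 0.9713616f,   -0.23690554f,  0.0f,
    -0.11637083f,  0.23327199f,   0.9654233f,   0.0f,
     21.803288f,  -14.920643f,  -150.6514f,     1.0f
};
glMatrixMode(GL_PROJECTION);
glLoadIdentity();    //or your own perspective matrix, if the pose turns out to be M only
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(pose); //replaces the gluLookAt call entirely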