I had asked a question here:
Drawing a portion of a hemisphere airspace
It was probably more of a GIS question, so I have moved to a more basic and specific OpenGL implementation to get the desired output.
I have simply overridden/copied the functions that draw the hemisphere, altering the GL part to insert clipping. I am able to draw the hemisphere centred at a location (latitude, longitude) with a radius of, say, 2000. But when I cut it using a plane, nothing happens. Please check the equation of the plane (it is a plane parallel to the surface of the globe at a height of, say, 1000, so (0, 0, +1, 1000)).
The base class has drawUnitSphere, which might be causing some problems, so I am trying to use GLU's gluSphere() instead. But I can't even see the sphere on the globe. I used a translate to shift it to my location (lat/lon), but I still can't see it. There might be some issue with lat/lon versus Cartesian coordinates, or with the placement of my clipping code. Please check.
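For reference on the plane equation itself: glClipPlane interprets (a, b, c, d) as the plane a*x + b*y + c*z + d = 0 and keeps the half-space where a*x + b*y + c*z + d >= 0. The coefficients are interpreted in the object coordinates of the modelview matrix in effect at the time of the call (OpenGL stores the plane transformed into eye coordinates), so specifying the plane before or after glLoadMatrixd gives different results. A minimal sketch, assuming the sphere's modelview is loaded first; the 0.5 is a made-up height, and since the unit sphere is scaled by the radius, heights here are in radius units:
gl.glLoadMatrixd(matrixArray, 0);
// the plane is now given in the sphere's local coordinates
DoubleBuffer eqn = BufferUtils.createDoubleBuffer(4).put(new double[] {0, 0, 1, 0.5});
eqn.flip();
gl.glClipPlane(GL.GL_CLIP_PLANE0, eqn);
gl.glEnable(GL.GL_CLIP_PLANE0);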
Here is the code:
@Override
public void drawSphere(DrawContext dc)
{
double[] altitudes = this.getAltitudes(dc.getVerticalExaggeration());
boolean[] terrainConformant = this.isTerrainConforming();
int subdivisions = this.getSubdivisions();
if (this.isEnableLevelOfDetail())
{
DetailLevel level = this.computeDetailLevel(dc);
Object o = level.getValue(SUBDIVISIONS);
if (o != null && o instanceof Integer)
subdivisions = (Integer) o;
}
Vec4 centerPoint = this.computePointFromPosition(dc,
this.location.getLatitude(), this.location.getLongitude(), altitudes[0], terrainConformant[0]);
Matrix modelview = dc.getView().getModelviewMatrix();
modelview = modelview.multiply(Matrix.fromTranslation(centerPoint));
modelview = modelview.multiply(Matrix.fromScale(this.getRadius()));
double[] matrixArray = new double[16];
modelview.toArray(matrixArray, 0, false);
this.setExpiryTime(-1L); // Sphere geometry never expires.
GL gl = dc.getGL(); // GL initialization checks for GL2 compatibility.
gl.glPushAttrib(GL.GL_POLYGON_BIT | GL.GL_TRANSFORM_BIT);
try
{
gl.glEnable(GL.GL_CULL_FACE);
gl.glFrontFace(GL.GL_CCW);
// We're applying a scale transform on the modelview matrix, so the normal vectors must be re-normalized
// before lighting is computed. In this case we're scaling by a constant factor, so GL_RESCALE_NORMAL
// is sufficient and potentially less expensive than GL_NORMALIZE (or computing unique normal vectors
// for each value of radius). GL_RESCALE_NORMAL was introduced in OpenGL version 1.2.
gl.glEnable(GL.GL_RESCALE_NORMAL);
gl.glMatrixMode(GL.GL_MODELVIEW);
gl.glPushMatrix();
//clipping
DoubleBuffer eqn1 = BufferUtils.createDoubleBuffer(8).put(new double[] {0, 0, 1, 100});
eqn1.flip();
gl.glClipPlane(GL.GL_CLIP_PLANE0, eqn1);
gl.glEnable(GL.GL_CLIP_PLANE0);
try
{
gl.glLoadMatrixd(matrixArray, 0);
//this.drawUnitSphere(dc, subdivisions);
gl.glLoadIdentity();
gl.glTranslatef(75.2f, 32.5f, 0.0f);
gl.glColor3f(1.0f, 0.0f, 0.0f);
GLU glu = dc.getGLU();
GLUquadric qd=glu.gluNewQuadric();
glu.gluSphere(qd,3.0f,20,20);
}
finally
{
gl.glPopMatrix();
}
}
finally
{
gl.glPopAttrib();
}
}
@Override
public void drawUnitSphere(DrawContext dc, int subdivisions)
{
Object cacheKey = new Geometry.CacheKey(this.getClass(), "Sphere", subdivisions);
Geometry geom = (Geometry) this.getGeometryCache().getObject(cacheKey);
if (geom == null || this.isExpired(dc, geom))
{
if (geom == null)
geom = new Geometry();
this.makeSphere(1.0, subdivisions, geom);
this.updateExpiryCriteria(dc, geom);
this.getGeometryCache().add(cacheKey, geom);
}
this.getRenderer().drawGeometry(dc, geom);
}
@Override
public void makeSphere(double radius, int subdivisions, Geometry dest)
{
GeometryBuilder gb = this.getGeometryBuilder();
gb.setOrientation(GeometryBuilder.OUTSIDE);
GeometryBuilder.IndexedTriangleArray ita = gb.tessellateSphere((float) radius, subdivisions);
float[] normalArray = new float[3 * ita.getVertexCount()];
gb.makeIndexedTriangleArrayNormals(ita, normalArray);
dest.setElementData(GL.GL_TRIANGLES, ita.getIndexCount(), ita.getIndices());
dest.setVertexData(ita.getVertexCount(), ita.getVertices());
dest.setNormalData(ita.getVertexCount(), normalArray);
}
Related
I want to render my scene to a texture and apply a blur shader to that texture. The problem is that when I draw the texture back, the front faces of the cubes are invisible.
(Screenshots: without supersampling, and with supersampling.)
Ignore the opaque outline around the cube in both photos. I render the cube twice, once with lower alpha and larger scale; I disabled this, but I still have the same problem.
For some reason I am using y as z and z as y, so the front face of the cube has a smaller y than the back face (instead of z). I am guessing something is wrong with the z-buffer.
The render to texture code:
public class RenderOnTexture {
private float m_fboScaler = 1f;
private boolean m_fboEnabled = true;
private FrameBuffer m_fbo = null;
private TextureRegion m_fboRegion = null;
public RenderOnTexture(float scale) {
int width = (int) (Gdx.graphics.getWidth()*scale);
int height = (int) (Gdx.graphics.getHeight()*scale);
m_fbo = new FrameBuffer(Format.RGB565, (int)(width * m_fboScaler), (int)(height * m_fboScaler), false);
m_fboRegion = new TextureRegion(m_fbo.getColorBufferTexture());
m_fboRegion.flip(false,false);
}
public void begin(){
if(m_fboEnabled)
{
m_fbo.begin();
Gdx.gl.glClearColor(0, 0,0,0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
}
}
public TextureRegion end(){
if(m_fbo != null)
{
m_fbo.end();
return m_fboRegion;
}
return null;
}
}
The boolean argument of the FrameBuffer constructor enables a depth buffer attachment, and the depth buffer must be cleared along with the color buffer.
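A minimal sketch of the two changes against the class above. In the constructor:
m_fbo = new FrameBuffer(Format.RGB565, (int)(width * m_fboScaler), (int)(height * m_fboScaler), true); // true = attach a depth buffer
And inside begin():
Gdx.gl.glClearColor(0, 0, 0, 0);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT); // clear the depth buffer too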
I'm running into a problem and I don't know what the best practice for it is. I have a background that moves upward, which is in fact made of "slices" that move together, as if the screen were split horizontally into 4-5 parts. I need to be able to draw a hole (a see-through circle) in the background, at a specified position that changes dynamically every frame or so.
Here is how I generate a zone, I don't think there's much of a problem there:
// A 'zone' is simply the 'slice' of ground that moves upward. There's about 4 of
// them visible on screen at the same time, and they are automatically generated by
// a method irrelevant to the situation. Zones are Sprites.
// ---------
void LevelLayer::Zone::generate(LevelLayer *sender) {
// [...]
// Make a background for the zone
Sprite *background = this->generateBackgroundSprite();
background->setPosition(_contentSize.width / 2, _contentSize.height / 2);
this->addChild(background, 0);
}
This is the Zone::generateBackgroundSprite() method:
// generates dynamically a new background texture
Sprite *LevelLayer::Zone::generateBackgroundSprite() {
RenderTexture *rt = RenderTexture::create(_contentSize.width, _contentSize.height);
rt->retain();
Color4B dirtColorByte = Color4B(/*initialize the color with bytes*/);
Color4F dirtColor(dirtColorByte);
rt->beginWithClear(dirtColor.r, dirtColor.g, dirtColor.b, dirtColor.a);
// [Nothing here yet, gotta learn OpenGL m8]
rt->end();
// ++++++++++++++++++++
// I'm just testing clipping node, it works but the FPS get significantly lower.
// If I lock them to 60, they get down to 30, and if I lock them there they get
// to 20 :(
// Also for the test I'm drawing a square since ClippingNode doesn't seem to
// like circles...
DrawNode *square = DrawNode::create();
Point squarePoints[4] = { Point(-20, -20), Point(20, -20), Point(20, 20), Point(-20, 20) };
square->drawPolygon(squarePoints, 4, Color4F::BLACK, 0.0f, Color4F(0, 0, 0, 0));
square->setPosition(0, 0);
// Make a stencil
Node *stencil = Node::create();
stencil->addChild(square);
// Create a clipping node with the prepared stencil
ClippingNode *clippingNode = ClippingNode::create(stencil);
clippingNode->setInverted(true);
clippingNode->addChild(rt);
Sprite *ret = Sprite::create();
ret->addChild(clippingNode);
rt->release();
return ret;
}
So I'm asking you guys, what would you do in such a situation? Is what I am doing a good idea? Would you do it in another more imaginative way?
PS This is a rewrite of a little app I made for iOS (I want to port it to Android), and I was using MutableTextures in the Objective-C version (it was working). I'm just trying to see if there's a better way using RenderTexture, so I can dynamically create background images using OpenGL calls.
EDIT (SOLUTION)
I wrote my own simple fragment shader that "masks" the visible parts of a texture (the background) based on the visible parts of another texture (the mask). I have an array of points that determine where my circles are on the screen, and in the update method I draw them to a RenderTexture. I then take the generated texture and use it as the mask I pass to the shader.
This is my shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform sampler2D u_alphaMaskTexture;
void main() {
float maskAlpha = texture2D(u_alphaMaskTexture, v_texCoord).a;
float texAlpha = texture2D(u_texture, v_texCoord).a;
float blendAlpha = (1.0 - maskAlpha) * texAlpha; // Show only where mask is not visible
vec3 texColor = texture2D(u_texture, v_texCoord).rgb;
gl_FragColor = vec4(texColor, blendAlpha);
return;
}
init method:
bool HelloWorld::init() {
// [...]
Size visibleSize = Director::getInstance()->getVisibleSize();
// Load and cache the custom shader
this->loadCustomShader();
// 'generateBackgroundSlice()' creates a new RenderTexture and fills it with a
// color, nothing too complicated here so I won't copy-paste it in my edit
m_background = Sprite::createWithTexture(this->generateBackgroundSprite()->getSprite()->getTexture());
m_background->setPosition(visibleSize.width / 2, visibleSize.height / 2);
this->addChild(m_background);
m_background->setShaderProgram(ShaderCache::getInstance()->getProgram(Shader_AlphaMask_frag_key));
GLProgram *shader = m_background->getShaderProgram();
m_alphaMaskTextureUniformLocation = glGetUniformLocation(shader->getProgram(), "u_alphaMaskTexture");
glUniform1i(m_alphaMaskTextureUniformLocation, 1);
m_alphaMaskRender = RenderTexture::create(m_background->getContentSize().width,
m_background->getContentSize().height);
m_alphaMaskRender->retain();
// [...]
}
loadCustomShader method:
void HelloWorld::loadCustomShader() {
// Load the content of the vertex and fragment shader
FileUtils *fileUtils = FileUtils::getInstance();
string vertexSource = ccPositionTextureA8Color_vert;
string fragmentSource = fileUtils->getStringFromFile(
fileUtils->fullPathForFilename("Shader_AlphaMask_frag.fsh"));
// Init a shader and add its attributes
GLProgram *shader = new GLProgram;
shader->initWithByteArrays(vertexSource.c_str(), fragmentSource.c_str());
shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_POSITION, GLProgram::VERTEX_ATTRIB_POSITION);
shader->bindAttribLocation(GLProgram::ATTRIBUTE_NAME_TEX_COORD, GLProgram::VERTEX_ATTRIB_TEX_COORDS);
shader->link();
shader->updateUniforms();
ShaderCache::getInstance()->addProgram(shader, Shader_AlphaMask_frag_key);
// Trace OpenGL errors if any
CHECK_GL_ERROR_DEBUG();
}
update method:
void HelloWorld::update(float dt) {
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
// Create the mask texture from the points in the m_circlePos array
GLProgram *shader = m_background->getShaderProgram();
m_alphaMaskRender->beginWithClear(0, 0, 0, 0); // Begin with transparent mask
for (vector<Point>::iterator it = m_circlePos.begin(); it != m_circlePos.end(); it++) {
// draw a circle on the mask
const float radius = 40;
const int resolution = 20;
Point circlePoints[resolution];
Point center = *it;
center = Director::getInstance()->convertToUI(center); // OpenGL has a weird coordinates system
float angle = 0;
for (int i = 0; i < resolution; i++) {
float x = (radius * cosf(angle)) + center.x;
float y = (radius * sinf(angle)) + center.y;
angle += (2 * M_PI) / resolution;
circlePoints[i] = Point(x, y);
}
DrawNode *circle = DrawNode::create();
circle->retain();
circle->drawPolygon(circlePoints, resolution, Color4F::BLACK, 0.0f, Color4F(0, 0, 0, 0));
circle->setPosition(Point::ZERO);
circle->visit();
circle->release();
}
m_alphaMaskRender->end();
Texture2D *alphaMaskTexture = m_alphaMaskRender->getSprite()->getTexture();
alphaMaskTexture->setAliasTexParameters(); // Disable linear interpolation
// ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
shader->use();
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, alphaMaskTexture->getName());
glActiveTexture(GL_TEXTURE0);
}
What you might want to look at is framebuffers. I'm not too familiar with the mobile API for OpenGL, but I'm sure you have access to framebuffers.
The idea is to do a first pass where you render the circles you want to make transparent into a new framebuffer texture, and then use that texture as an alpha map in the pass that renders your background. When you render a circle, set the alpha value in the texture to 0.0, and leave it at 1.0 everywhere else; when rendering the background, set the alpha channel of each fragment to the value sampled from the first pass's texture.
You can think of it as the same idea as a mask, just using another texture.
Hope this helps :)
I want to add fog to a small 3D world. I tried fiddling with the arguments; however, the fog is not homogeneous.
I have two problems that are maybe linked :
Fog Homogeneity:
When I move or rotate my viewpoint with gluLookAt, the fog is too heavy and the whole world is grey. However, there are two angles where the rendering of the fog is nice.
The fog seems normal when the camera orientation on the Y axis is 45° or -135° (the opposite direction).
Fog centered on origin of the scene:
When my fog is displayed correctly, it is centered on the (0, 0, 0) of the scene.
Here is the code I use to initialise the fog, and the call to gluLookAt:
private static final float density = 1f;
private void initFog() {
float[] vertices = {0.8f, 0.8f, 0.8f, 1f};
ByteBuffer temp = ByteBuffer.allocateDirect(16);
temp.order(ByteOrder.nativeOrder());
FloatBuffer fogColor = temp.asFloatBuffer();
fogColor.put(vertices);
GL11.glClearColor(0.8f,0.8f,0.8f,1.0f);
GL11.glFogi(GL11.GL_FOG_MODE, GL11.GL_LINEAR);
GL11.glFog(GL11.GL_FOG_COLOR, temp.asFloatBuffer());
GL11.glFogf(GL11.GL_FOG_DENSITY, density);
GL11.glHint(GL11.GL_FOG_HINT, GL11.GL_FASTEST);
GL11.glFogf(GL11.GL_FOG_START, 1f);
GL11.glFogf(GL11.GL_FOG_END, 10000f);
}
private void initWindow() {
try {
Display.setDisplayMode(new DisplayMode(1600, 900));
Display.create();
GL11.glMatrixMode(GL11.GL_PROJECTION);
GL11.glLoadIdentity();
GLU.gluPerspective(60f, 1600f / 900f, 3, 100000);
GL11.glMatrixMode(GL11.GL_MODELVIEW);
GL11.glLoadIdentity();
GL11.glEnable(GL11.GL_FOG);
GL11.glEnable(GL11.GL_DEPTH_TEST);
initFog();
initParticles();
} catch (LWJGLException e) {
Display.destroy();
System.exit(1);
}
}
This is called from the updatePosition function inside the main loop. The angle parameter is the direction of the viewport on the y axis, and yCpos is a value between -1 and 1 that I use to look up or down.
GL11.glLoadIdentity();
GLU.gluLookAt(xpos, ypos, zpos, xpos + (float)Math.cos(angle), ypos+ yCpos, zpos+ (float)Math.sin(angle), 0, 1, 0);
I was drawing the ground as one giant quad; now I draw the ground as tiles, and the problem isn't happening any more. The cause remains mysterious, but the problem is solved.
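A plausible explanation (an assumption on my part, not verified against this code): fixed-function fog is typically evaluated per vertex and interpolated across each triangle. For GL_LINEAR the per-vertex factor is
f = (end - c) / (end - start), clamped to [0, 1]
where c is the eye-space distance of the vertex (often approximated by |z_eye|, which changes as the camera rotates); GL_FOG_DENSITY only affects the GL_EXP and GL_EXP2 modes. On one giant quad the factor is computed at four distant corners and interpolated across the whole ground, which looks wrong and varies with the view direction; tiling adds vertices, so the interpolation error shrinks.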
I'm fairly new to OpenGL and 3D programming but I've begun to implement camera rotation using quaternions based on the tutorial from http://www.cprogramming.com/tutorial/3d/quaternions.html . This is all written in Java using JOGL.
I realise these kind of questions get asked quite a lot but I've been searching around and can't find a solution that works so I figured it might be a problem with my code specifically.
So the problem is that there is jittering and odd rotation if I do two different successive rotations on one or more axes. The first rotation along an axis, whether negative or positive, works fine. However, if I rotate positively along an axis and then negatively on that same axis, the rotation jitters back and forth as if it were alternating between a positive and a negative rotation.
If I automate the rotation (e.g. rotate left 500 times, then rotate right 500 times), it appears to work properly, which led me to think this might be related to the key presses. However, the rotation is also incorrect (for lack of a better word) if I rotate around the x axis and then rotate around the y axis afterwards.
Anyway, I have a renderer class with the following display loop for drawing 'scene nodes':
private void render(GLAutoDrawable drawable) {
GL2 gl = drawable.getGL().getGL2();
gl.glClear(GL2.GL_COLOR_BUFFER_BIT | GL2.GL_DEPTH_BUFFER_BIT);
gl.glMatrixMode(GL2.GL_PROJECTION);
gl.glLoadIdentity();
glu.gluPerspective(70, Constants.viewWidth / Constants.viewHeight, 0.1, 30000);
gl.glScalef(1.0f, -1.0f, 1.0f); //flip the y axis
gl.glMatrixMode(GL2.GL_MODELVIEW);
gl.glLoadIdentity();
camera.rotateCamera();
glu.gluLookAt(camera.getCamX(), camera.getCamY(), camera.getCamZ(), camera.getViewX(), camera.getViewY(), camera.getViewZ(), 0, 1, 0);
drawSceneNodes(gl);
}
private void drawSceneNodes(GL2 gl) {
if (currentEvent != null) {
ArrayList<SceneNode> sceneNodes = currentEvent.getSceneNodes();
for (SceneNode sceneNode : sceneNodes) {
sceneNode.update(gl);
}
}
if (renderQueue.size() > 0) {
currentEvent = renderQueue.remove(0);
}
}
Rotation is performed in the camera class as follows:
public class Camera {
private double width;
private double height;
private double rotation = 0;
private Vector3D cam = new Vector3D(0, 0, 0);
private Vector3D view = new Vector3D(0, 0, 0);
private Vector3D axis = new Vector3D(0, 0, 0);
private Rotation total = new Rotation(0, 0, 0, 1, true);
public Camera(GL2 gl, Vector3D cam, Vector3D view, int width, int height) {
this.cam = cam;
this.view = view;
this.width = width;
this.height = height;
}
public void rotateCamera() {
if (rotation != 0) {
//generate local quaternion from new axis and new rotation
Rotation local = new Rotation(Math.cos(rotation/2), Math.sin(rotation/2 * axis.getX()), Math.sin(rotation/2 * axis.getY()), Math.sin(rotation/2 * axis.getZ()), true);
//multiply local quaternion and total quaternion
total = total.applyTo(local);
//rotate the position of the camera with the new total quaternion
cam = rotatePoint(cam);
//set next rotation to 0
rotation = 0;
}
}
public Vector3D rotatePoint(Vector3D point) {
//set world centre to origin, i.e. (width/2, height/2, 0) to (0, 0, 0)
point = new Vector3D(point.getX() - width/2, point.getY() - height/2, point.getZ());
//rotate point
point = total.applyTo(point);
//set point in world coordinates, i.e. (0, 0, 0) to (width/2, height/2, 0)
return new Vector3D(point.getX() + width/2, point.getY() + height/2, point.getZ());
}
public void setAxis(Vector3D axis) {
this.axis = axis;
}
public void setRotation(double rotation) {
this.rotation = rotation;
}
}
The method rotateCamera generates the new permanent quaternion from the new rotation and the previous rotations, while the method rotatePoint merely multiplies a point by the rotation matrix generated from the permanent quaternion.
The axis of rotation and the angle of rotation are set by simple key presses as follows:
@Override
public void keyPressed(KeyEvent e) {
if (e.getKeyCode() == KeyEvent.VK_W) {
camera.setAxis(new Vector3D(1, 0, 0));
camera.setRotation(0.1f);
}
if (e.getKeyCode() == KeyEvent.VK_A) {
camera.setAxis(new Vector3D(0, 1, 0));
camera.setRotation(0.1f);
}
if (e.getKeyCode() == KeyEvent.VK_S) {
camera.setAxis(new Vector3D(1, 0, 0));
camera.setRotation(-0.1f);
}
if (e.getKeyCode() == KeyEvent.VK_D) {
camera.setAxis(new Vector3D(0, 1, 0));
camera.setRotation(-0.1f);
}
}
I hope I've provided enough detail. Any help would be very much appreciated.
About the jittering: I don't see any render loop in your code. How is the render method triggered? By a timer or by an event?
Your messed up rotations when rotating about two axes are probably related to the fact that you need to rotate the axis of the second rotation along with the total rotation of the first axis. You cannot just apply the rotation about the X or Y axis of the global coordinate system. You must apply the rotation about the up and right axes of the camera.
I suggest that you create a camera class that stores the up, right and view direction vectors of the camera and apply your rotations directly to those axes. If this is an FPS like camera, then you'll want to rotate the camera horizontally (looking left / right) about the absolute Y axis and not the up vector. This will also result in a new right axis of the camera. Then, you rotate the camera vertically (looking up / down) about the new right axis. However, you must be careful when the camera looks directly up or down, as in this case you can't use the cross product of the view direction and up vectors to obtain the right vector.
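One more reference point, on the quaternion construction itself: the usual axis-angle mapping is q = (cos(θ/2), sin(θ/2)·ax, sin(θ/2)·ay, sin(θ/2)·az), with sin(θ/2) multiplying each axis component rather than the component sitting inside the sine; it is worth comparing this against the local quaternion built in rotateCamera above. A minimal sketch, assuming the Rotation class is Apache Commons Math (which the constructors in the question suggest):
// explicit axis-angle to quaternion construction (axis assumed normalized)
Rotation local = new Rotation(Math.cos(rotation / 2),
Math.sin(rotation / 2) * axis.getX(),
Math.sin(rotation / 2) * axis.getY(),
Math.sin(rotation / 2) * axis.getZ(),
true);
// or, equivalently, let the library build it from the axis and the angle
Rotation local2 = new Rotation(axis, rotation);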
I work with an augmented reality framework on Android, and it gives me the camera pose as a 6-degrees-of-freedom vector, that is, the estimated position of the camera's optical center plus the camera orientation.
Since I'm a complete newbie in OpenGL, I don't quite understand what that means, and my question is: how do I use this 4x4 matrix to position my camera in OpenGL?
Below is a sample from the Android SDK which renders a simple textured triangle (I didn't know which details were important, so I included both classes in full: the renderer and the triangle object).
My guess is that it positions the camera with gluLookAt in onDrawFrame(); I want to adjust this.
I receive these matrices from the framework (these are just samples) -
When the camera should look directly at the triangle, I need to use a matrix of this type to somehow position my camera:
0.9930384 0.045179322 0.10878302 0.0
-0.018241059 0.9713616 -0.23690554 0.0
-0.11637083 0.23327199 0.9654233 0.0
21.803288 -14.920643 -150.6514 1.0
When I move the camera a bit far away:
0.9763242 0.041258257 0.21234424 0.0
0.014808476 0.96659267 -0.2558918 0.0
-0.21580763 0.25297752 0.94309634 0.0
17.665 -18.520836 -243.28784 1.0
When I tilt my camera a bit to the right:
0.8340566 0.0874321 0.5447095 0.0
0.054606464 0.96943074 -0.23921578 0.0
-0.5489726 0.22926341 0.8037848 0.0
-8.809776 -7.5869675 -244.01971 1.0
Any thoughts? My guess is that the only thing that matters is actually the last row; everything else is close to zero.
I'd be happy to get any advice on how to adjust this code to use these matrices, including any settings such as perspective matrices or the like (again, I'm a newbie).
public class TriangleRenderer implements GLSurfaceView.Renderer{
public TriangleRenderer(Context context) {
mContext = context;
mTriangle = new Triangle();
}
public void onSurfaceCreated(GL10 gl, EGLConfig config) {
/*
* By default, OpenGL enables features that improve quality
* but reduce performance. One might want to tweak that
* especially on software renderer.
*/
gl.glDisable(GL10.GL_DITHER);
/*
* Some one-time OpenGL initialization can be made here
* probably based on features of this particular context
*/
gl.glHint(GL10.GL_PERSPECTIVE_CORRECTION_HINT,
GL10.GL_FASTEST);
gl.glClearColor(0,0,0,0);
gl.glShadeModel(GL10.GL_SMOOTH);
gl.glEnable(GL10.GL_DEPTH_TEST);
gl.glEnable(GL10.GL_TEXTURE_2D);
/*
* Create our texture. This has to be done each time the
* surface is created.
*/
int[] textures = new int[1];
gl.glGenTextures(1, textures, 0);
mTextureID = textures[0];
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER,
GL10.GL_NEAREST);
gl.glTexParameterf(GL10.GL_TEXTURE_2D,
GL10.GL_TEXTURE_MAG_FILTER,
GL10.GL_LINEAR);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
GL10.GL_CLAMP_TO_EDGE);
gl.glTexEnvf(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE,
GL10.GL_REPLACE);
InputStream is = mContext.getResources()
.openRawResource(R.raw.robot);
Bitmap bitmap;
try {
bitmap = BitmapFactory.decodeStream(is);
} finally {
try {
is.close();
} catch(IOException e) {
// Ignore.
}
}
GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
bitmap.recycle();
}
public void onDrawFrame(GL10 gl) {
/*
* By default, OpenGL enables features that improve quality
* but reduce performance. One might want to tweak that
* especially on software renderer.
*/
gl.glDisable(GL10.GL_DITHER);
gl.glTexEnvx(GL10.GL_TEXTURE_ENV, GL10.GL_TEXTURE_ENV_MODE,
GL10.GL_MODULATE);
/*
* Usually, the first thing one might want to do is to clear
* the screen. The most efficient way of doing this is to use
* glClear().
*/
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
/*
* Now we're ready to draw some 3D objects
*/
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadIdentity();
GLU.gluLookAt(gl, 0, 0, -5, 0f, 0f, 0f, 0f, 1.0f, 0.0f);
gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
gl.glActiveTexture(GL10.GL_TEXTURE0);
gl.glBindTexture(GL10.GL_TEXTURE_2D, mTextureID);
gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S,
GL10.GL_REPEAT);
gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T,
GL10.GL_REPEAT);
long time = SystemClock.uptimeMillis() % 4000L;
float angle = 0.090f * ((int) time);
gl.glRotatef(angle, 0, 0, 1.0f);
mTriangle.draw(gl);
}
public void onSurfaceChanged(GL10 gl, int w, int h) {
gl.glViewport(0, 0, w, h);
/*
* Set our projection matrix. This doesn't have to be done
* each time we draw, but usually a new projection needs to
* be set when the viewport is resized.
*/
float ratio = (float) w / h;
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
gl.glFrustumf(-ratio, ratio, -1, 1, 3, 7);
}
private Context mContext;
private Triangle mTriangle;
private int mTextureID;
}
class Triangle {
public Triangle() {
// Buffers to be passed to gl*Pointer() functions
// must be direct, i.e., they must be placed on the
// native heap where the garbage collector cannot
// move them.
//
// Buffers with multi-byte datatypes (e.g., short, int, float)
// must have their byte order set to native order
ByteBuffer vbb = ByteBuffer.allocateDirect(VERTS * 3 * 4);
vbb.order(ByteOrder.nativeOrder());
mFVertexBuffer = vbb.asFloatBuffer();
ByteBuffer tbb = ByteBuffer.allocateDirect(VERTS * 2 * 4);
tbb.order(ByteOrder.nativeOrder());
mTexBuffer = tbb.asFloatBuffer();
ByteBuffer ibb = ByteBuffer.allocateDirect(VERTS * 2);
ibb.order(ByteOrder.nativeOrder());
mIndexBuffer = ibb.asShortBuffer();
// A unit-sided equilateral triangle centered on the origin.
float[] coords = {
// X, Y, Z
-0.5f, -0.25f, 0,
0.5f, -0.25f, 0,
0.0f, 0.559016994f, 0
};
for (int i = 0; i < VERTS; i++) {
for(int j = 0; j < 3; j++) {
mFVertexBuffer.put(coords[i*3+j] * 2.0f);
}
}
for (int i = 0; i < VERTS; i++) {
for(int j = 0; j < 2; j++) {
mTexBuffer.put(coords[i*3+j] * 2.0f + 0.5f);
}
}
for(int i = 0; i < VERTS; i++) {
mIndexBuffer.put((short) i);
}
mFVertexBuffer.position(0);
mTexBuffer.position(0);
mIndexBuffer.position(0);
}
public void draw(GL10 gl) {
gl.glFrontFace(GL10.GL_CCW);
gl.glVertexPointer(3, GL10.GL_FLOAT, 0, mFVertexBuffer);
gl.glEnable(GL10.GL_TEXTURE_2D);
gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, mTexBuffer);
gl.glDrawElements(GL10.GL_TRIANGLE_STRIP, VERTS,
GL10.GL_UNSIGNED_SHORT, mIndexBuffer);
}
private final static int VERTS = 3;
private FloatBuffer mFVertexBuffer;
private FloatBuffer mTexBuffer;
private ShortBuffer mIndexBuffer;
}
The "trick" is to understand, that OpenGL does not have a camera. What is does is transforming the whole world by a movement that's the exact opposite of what a camera would have to be moved from position (0,0,0).
Such transformations (=movements) are described in form of so called homogenous transformation matrices. Fixed Function OpenGL uses a combination of two matrices:
Modelview M, which describes placement of the world and view (and objects within the world to some degree).
Projection P, which could be seen as kind of "lens" of the virtual camera (remember, there is no camera in OpenGL).
Any vertex position v is transformed by c = P * M * v (c is the transformed vertex coordinate in clip space, that is, screen space measured not in pixels but with the screen edges at -1 and 1; the viewport then maps from clip space to screen pixel space).
What Android gives you is such a transformation matrix. I'm not sure, but looking at the values it may be that you're given P * M. As long as there is no lighting involved, you can load that directly into the modelview matrix using glLoadMatrix, with the projection set to identity. You pass matrices to OpenGL as an array of 16 floats; the indexing order of OpenGL sometimes confuses people, but the way you dumped the Android matrices suggests you already have them right. (You printed them "wrong", i.e. transposed, which is the same pitfall people fall into with glLoadMatrix, but transposing twice is the identity, so it's probably right. If it doesn't work at first, flip columns and rows, i.e. "mirror" the matrix on the diagonal running from top-left to bottom-right.)
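A minimal sketch of that idea against the renderer above, with assumptions flagged: arMatrix is a hypothetical name for the 16 floats from the AR framework, taken to be in the column-major order glLoadMatrixf expects and to already contain the full P * M transform:
// hypothetical field: the 4x4 pose from the AR framework, flattened column-major
private float[] arMatrix = new float[16];
public void onDrawFrame(GL10 gl) {
gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT);
// projection left as identity, since the AR matrix is assumed to contain P * M already
gl.glMatrixMode(GL10.GL_PROJECTION);
gl.glLoadIdentity();
// load the framework's matrix instead of calling gluLookAt
gl.glMatrixMode(GL10.GL_MODELVIEW);
gl.glLoadMatrixf(arMatrix, 0);
mTriangle.draw(gl);
}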