I have an issue with rendering meshes with transparent vertices in LibGDX.
[Image: chunk on the left not rendering blocks under water properly]
As you can see from the image, some chunk meshes aren't properly rendering blocks under water. What could be the issue here?
These are my OpenGL settings:
Gdx.gl.glClearColor(0.4f, 0.4f, 0.4f, 1f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
Material:
MATERIAL = new Material(
        TextureAttribute.createDiffuse(texture),
        IntAttribute.createCullFace(GL20.GL_FRONT),        // culls front faces
        new BlendingAttribute(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA), // standard alpha blending
        FloatAttribute.createAlphaTest(0.5f));             // discard fragments with alpha below 0.5
According to this wikibook, it used to be possible to draw a simple rectangle as easily as this (after creating and initializing the window):
glColor3f(0.0f, 0.0f, 0.0f);
glRectf(-0.75f,0.75f, 0.75f, -0.75f);
This has been removed, however, in OpenGL 3.2 and later versions.
Is there some other simple, quick and dirty, way in OpenGL 4 to draw a rectangle with a fixed color (without using shaders or anything fancy)?
Is there some ... way ... to draw a rectangle ... without using shaders ...?
Yes. In fact, AFAIK, it is supported on all OpenGL versions in existence: you can draw a solid rectangle by enabling the scissor test and clearing the framebuffer:
glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);        // rectangle in window pixels
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);  // the fill color
glClear(GL_COLOR_BUFFER_BIT);          // clears only the scissor rectangle
This is different from glRect in multiple ways:
The coordinates are specified in pixels relative to the window origin.
The rectangle must be axis aligned and cannot be transformed in any way.
Most of the per-sample processing is skipped. This includes blending, depth and stencil testing.
However, I'd rather discourage you from doing this. You're likely to be better off building a VAO with all the rectangles you want to draw on the screen, then drawing them all with a very simple shader.
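For illustration, here is a minimal sketch of that VAO-plus-shader route, assuming a current OpenGL 3.3 core context and a function loader (glad here) are already set up; rect_init and rect_draw are hypothetical names, not part of any API:
#include <glad/glad.h>

static const char *vs_src =
    "#version 330 core\n"
    "layout(location = 0) in vec2 pos;\n"
    "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";
static const char *fs_src =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(0.0, 0.0, 0.0, 1.0); }\n"; // the fixed color

static GLuint prog, vao;

void rect_init(void) // call once after context creation
{
    // Two triangles covering the same area glRectf(-0.75f, 0.75f, 0.75f, -0.75f) drew.
    static const GLfloat quad[12] = {
        -0.75f, -0.75f,   0.75f, -0.75f,   0.75f,  0.75f,
        -0.75f, -0.75f,   0.75f,  0.75f,  -0.75f,  0.75f,
    };
    GLuint vbo, vs = glCreateShader(GL_VERTEX_SHADER);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL); glCompileShader(vs);
    glShaderSource(fs, 1, &fs_src, NULL); glCompileShader(fs);
    prog = glCreateProgram();
    glAttachShader(prog, vs); glAttachShader(prog, fs);
    glLinkProgram(prog);
    glDeleteShader(vs); glDeleteShader(fs);
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);
    glEnableVertexAttribArray(0);
}

void rect_draw(void) // call every frame
{
    glUseProgram(prog);
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 6);
}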
I'm working on a project that offloads some rendering to a native plugin I wrote for Unity, in order to make use of instancing and other advanced graphics features. I'm developing it for a cross-platform release, but I work with a Mac so testing is done primarily with OpenGL. At this point, the plugin only renders a quad to the center of the screen colored with hex values. The plugin works as expected in a blank Unity project, but as soon as I incorporate it into my Oculus project, it begins behaving erratically.
In the Rift, the plugin's geometry draws twice, one time stretching across both eyes and another time drawing only within the bounds of the right eye. Additionally, any primitive colors I apply to the geometry are lost and the geometry appears to pick up surrounding colors; on a black screen with red text, the geometry will be mostly black with some red bleeding into the lines. As soon as my green terrain is loaded, the geometry drawn by the plugin becomes green.
Below is a screenshot of the geometry being drawn in a blank Unity project with nothing else:
And here is a screenshot of the same geometry being drawn on top of my Oculus Rift application:
Here's the creation of the vertices that I'm rendering (three coordinates and color):
Vertex verts[4] =
{
{ -0.5f, 0.5f, 0, 0xFF0000ff },
{ 0.5f, 0.5f, 0, 0xFFff0000 },
{ 0.5f, -0.5f, 0, 0xFF00ff00 },
{ -0.5f, -0.5f, 0, 0xFFff0000 },
};
Here's the draw function, called every frame within the plugin:
// OpenGL case
if (g_DeviceType == kGfxRendererOpenGL)
{
//initialize model view matrices
glMatrixMode (GL_MODELVIEW);
float modelMatrix[16] =
{
1,0,0,0,
0,1,0,0,
0,0,1,0,
0,0,0,1,
};
glLoadMatrixf (modelMatrix); //assign our matrix to the current MatrixMode
//initialize projection matrix
glMatrixMode (GL_PROJECTION);
projectionMatrix[10] = 2.0f; //tweak projection matrix to match D3D
projectionMatrix[14] = -1.0f;
glLoadMatrixf (projectionMatrix);
// Vertex layout
glVertexPointer (3, GL_FLOAT, sizeof(verts[0]), &verts[0].x);
glEnableClientState (GL_VERTEX_ARRAY);
glColorPointer (4, GL_UNSIGNED_BYTE, sizeof(verts[0]), &verts[0].color);
glEnableClientState (GL_COLOR_ARRAY);
glDrawArrays(GL_LINE_LOOP, 0, 4);
}
Any insight from experienced native plugin/Rift graphics coders would be appreciated!
You can make use of Unity's UI system to render sprites according to your needs. Here is an article from Oculus describing how to tweak Unity's UI system for VR: https://developer.oculus.com/blog/unitys-ui-system-in-vr/
Outside Unity, you would use quad layers to render on top of the eyes' FOV. Here is how layers are described in the Oculus Rift documentation: https://developer.oculus.com/documentation/pcsdk/latest/concepts/dg-render/#dg_render_layers
Quad Layer:
A monoscopic image that is displayed as a rectangle at a given pose and size in the virtual world. This is useful for heads-up-displays, text information, object labels and so on. By default the pose is specified relative to the user's real-world space and the quad will remain fixed in space rather than moving with the user's head or body motion. For head-locked quads, use the ovrLayerFlag_HeadLocked flag as described below.
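For a concrete picture, here is a hedged sketch of such a quad layer using the LibOVR PC SDK (1.x); session, eyeLayer, hudChain, and frameIndex are assumed to come from the usual SDK setup and are not from the question:
// Sketch: a head-locked HUD quad submitted as a second layer next to the eye layer.
ovrLayerQuad hud = {0};
hud.Header.Type  = ovrLayerType_Quad;
hud.Header.Flags = ovrLayerFlag_HeadLocked;   // head-locked, per the quote above
hud.ColorTexture = hudChain;                  // assumed ovrTextureSwapChain holding the HUD image
hud.Viewport.Pos.x  = 0;    hud.Viewport.Pos.y  = 0;
hud.Viewport.Size.w = 512;  hud.Viewport.Size.h = 128;
hud.QuadPoseCenter.Orientation.w = 1.0f;      // identity orientation
hud.QuadPoseCenter.Position.z    = -1.0f;     // one meter in front of the user
hud.QuadSize.x = 0.50f;                       // quad width in meters
hud.QuadSize.y = 0.125f;                      // quad height in meters
const ovrLayerHeader *layers[2] = { &eyeLayer.Header, &hud.Header };
ovr_SubmitFrame(session, frameIndex, NULL, layers, 2);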
Here are 2 images of the same 3D scene, from the same point of view (virtual camera), with and without BackCulling. The images don't depict the final mesh, but the contents of the depth buffer.
Without BackCulling
glDisable(GL_CULL_FACE);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
With BackCulling
glEnable(GL_CULL_FACE);
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
Question
Why is there such a difference between the 2 images? Of course, one would expect some difference, but I don't get why the depth map is so smooth without BackCulling, while with BackCulling the depth values don't seem so continuous.
In the case without BackCulling, for a certain pixel, what is the value in the depth buffer? Is it the one closest to the camera (the smooth image looks like this), or, since many surfaces project to the same 2D pixel, should I expect something else?
Reference - 3D Scene - 2D Frame
They're different because you're culling the wrong faces. Your mesh data is clearly oriented so that the front face is different from OpenGL's defaults. OpenGL defaults to counter-clockwise being front. Your mesh data puts the clockwise face front (or you're inverting your mesh at some point).
So, in lieu of rearranging your vertex data, change your front face with the aptly named glFrontFace(GL_CW).
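In other words, the state would look something like this (a minimal sketch of the suggested fix):
glFrontFace(GL_CW);     // treat clockwise-wound triangles as front faces (default is GL_CCW)
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);    // the default; now culls the counter-clockwise side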
In my program, I am trying to use an image and draw it as a stationary background. The foreground has some models loaded inside a camera and runs fine.
However, when I apply a background image, the model and other objects don't appear; I can only see the background image covering the screen.
I did disable GL_DEPTH_TEST before drawing the background and then re-enabled it before drawing the model.
glDisable(GL_DEPTH_TEST);
bgImage.draw(0, 0); // draw the background image; width and height were set when the image was initialized
glEnable(GL_DEPTH_TEST);
cam.begin();
//stuff drawn inside
cam.end();
I also tried clearing the depth/color buffer bits after bgImage.draw, but nothing changed.
You need to disable depth writes so that the background doesn't hog the depth buffer.
glDepthMask(GL_FALSE); // stop writes to the depth buffer
background();
glDepthMask(GL_TRUE);  // re-enable depth writes for the scene
Or simply clear only the depth buffer after drawing the background:
glDisable(GL_DEPTH_TEST);
background(); // instead of clearing the color
glClear(GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
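Putting the first fix into the asker's frame order, one frame would look roughly like this (bgImage and cam are the objects from the question):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glDepthMask(GL_FALSE);   // background fragments must not write depth
bgImage.draw(0, 0);
glDepthMask(GL_TRUE);
cam.begin();
// models drawn here now pass the depth test as usual
cam.end();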
I have a problem when rendering cubes in OpenGL. I am drawing two cubes: one is a wire cube centered around the origin, while the other is offset from the origin and solid. I have mapped some keys to rotate the objects by some degrees with respect to the origin, so the whole scene can rotate around the origin.
The problem is that when I render the scene and the wire cube is supposed to be in front of the solid cube, it does not display correctly.
In the image above, the colored cube is supposed to be behind the wire cube. i.e. the green wire cube should be on top.
The cube is also not behaving properly: after I rotate it a little around the x axis (the current horizontal line), it has missing faces and does not render correctly.
What am I doing wrong?
I have coded the following:
Note that rotateX, rotateY, and rotateZ are mapped to keys and are my global rotation variables.
//The Initialize function, called once:
void Init(){
glEnable(GL_TEXTURE_2D);
glShadeModel(GL_SMOOTH); // Enable Smooth Shading
glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Black Background
glClearDepth(1.0f); // Depth Buffer Setup
glEnable(GL_DEPTH_TEST); // Enables Depth Testing
glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST); // Really Nice Perspective Calculations
glEnable(GL_LIGHTING);
}
void draw(){
//The main draw function
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glMatrixMode(GL_PROJECTION);
glLoadIdentity ();
gluPerspective(45, 640/480.0, .5, 100);
glMatrixMode(GL_MODELVIEW); //select the modelview matrix.
glLoadIdentity ();
gluLookAt(0,0,5,
0,0,0,
0,1,0);
glRotatef(rotateX,1,0,0);
glRotatef(rotateY,0,1,0);
glRotatef(rotateZ,0,0,1);
drawScene(); // this just draws the main axis lines,
glutWireCube(1);
glPopMatrix();
glPushMatrix();
glTranslatef(-2,1,0);
drawNiceCube();
glPopMatrix();
glutSwapBuffers();
}
The code for drawNiceCube() just uses GL_QUADS, while the wire cube is GLUT's built-in glutWireCube.
EDIT:
I have posted the full code at http://pastebin.com/p1kwPjEM, sorry if it is not well documented.
Did you also request a window with a depth buffer?
glutInitDisplayMode( ... | GLUT_DEPTH | ...);
Update:
Did you somewhere enable face culling?
glEnable(GL_CULL_FACE);
This may be caused by clockwise winding:
10.090 How does face culling work? Why doesn't it use the surface normal?
OpenGL face culling calculates the signed area of the filled primitive in window coordinate space. The signed area is positive when the window coordinates are in a counter-clockwise order and negative when clockwise. An app can use glFrontFace() to specify the ordering, counter-clockwise or clockwise, to be interpreted as a front-facing or back-facing primitive. An application can specify culling either front or back faces by calling glCullFace(). Finally, face culling must be enabled with a call to glEnable(GL_CULL_FACE).
OpenGL uses your primitive's window space projection to determine face culling for two reasons. To create interesting lighting effects, it's often desirable to specify normals that aren't orthogonal to the surface being approximated. If these normals were used for face culling, it might cause some primitives to be culled erroneously. Also, a dot-product culling scheme could require a matrix inversion, which isn't always possible (i.e., in the case where the matrix is singular), whereas the signed area in DC space is always defined.
However, some OpenGL implementations support the GL_EXT_cull_vertex extension. If this extension is present, an application may specify a homogeneous eye position in object space. Vertices are flagged as culled, based on the dot product of the current normal with a vector from the vertex to the eye. If all vertices of a primitive are culled, the primitive isn't rendered. In many circumstances, using this extension […]
from here
You can also read here.
datenwolf solved my problem. I quote him:
"#JonathanSimbahan: Parts of your code are redundant, but something is missing: You forgot to call Init(); after creating your GLUT window, hence depth testing and all the other state never get enabled. I for one suggest you don't use Init at all and move it's code into the drawing code, where it actually belongs."