In a minecraft-like game I'm making, I'm getting these weird lines on polygon edges:
I'm using a texture atlas, being clamped with GL_CLAMP_TO_EDGE.
I tried setting GL_TEXTURE_MIN_FILTER to GL_LINEAR, GL_NEAREST, and even using mipmaps, but it doesn't make a difference. I also tried insetting the texture coordinates by half a pixel and using 16x anisotropic filtering, with no success.
Any help?
Edit - The top face of the cubes is being rendered something like this:
glBegin(GL_QUADS);
glTexCoord2f(0f, 0f);          glVertex3f(x,      y + 1f, z);
glTexCoord2f(0f, 1 / 8f);      glVertex3f(x,      y + 1f, z + 1f);
glTexCoord2f(1 / 8f, 1 / 8f);  glVertex3f(x + 1f, y + 1f, z + 1f);
glTexCoord2f(1 / 8f, 0f);      glVertex3f(x + 1f, y + 1f, z);
glEnd();
This looks like the classic texture-pixel <-> screen-pixel fencepost problem to me. See my answer here: https://stackoverflow.com/a/5879551/524368
Another issue might be that the corner coordinates of the cubes are not exactly the same. Floating-point numbers have some intrinsic error, and if you arrange the cubes in a grid by repeatedly adding a floating-point offset, the vertex positions can drift slightly off, and the round-off error shows up as Z-fighting. Two things to solve this: first, if two cubes' faces touch, don't render those faces at all; second, use integer coordinates for laying out the cube grid and convert the vertices to floating point only when submitting to OpenGL, or don't convert at all.
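Related to the half-pixel inset the asker mentioned trying: the usual fix for atlas bleeding is to inset each tile's texture coordinates by half a texel, so bilinear filtering never reaches the neighbouring tile. A minimal sketch of that math, assuming a hypothetical 128x128 atlas with 16x16 tiles (these numbers are illustrative, not from the question):

```java
// Sketch: half-texel inset for an atlas tile, so bilinear filtering
// never samples a neighbouring tile. Atlas and tile sizes are
// hypothetical example values.
public class AtlasInset {
    // Returns {u0, v0, u1, v1} for tile (col, row), inset by half a texel.
    static float[] tileUV(int atlasSize, int tileSize, int col, int row) {
        float texel = 1f / atlasSize;               // one texel in UV space
        float tile  = (float) tileSize / atlasSize; // one tile in UV space
        float u0 = col * tile + 0.5f * texel;
        float v0 = row * tile + 0.5f * texel;
        float u1 = (col + 1) * tile - 0.5f * texel;
        float v1 = (row + 1) * tile - 0.5f * texel;
        return new float[] { u0, v0, u1, v1 };
    }

    public static void main(String[] args) {
        // 128x128 atlas, 16x16 tiles: tile (0, 0) spans 0 .. 0.125,
        // inset on each side by 0.5 / 128.
        float[] uv = tileUV(128, 16, 0, 0);
        System.out.println(uv[0] + " " + uv[2]); // 0.00390625 0.12109375
    }
}
```

Note the inset must be half a texel of the *atlas*, not of the tile; insetting by half a tile pixel computed against the wrong size is a common reason the trick appears not to work.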
Sorry if this question is a bit niche. I wrote some code a few years back that renders many meshes as one big mesh for performance.
What I am trying to do now is render meshes without textures, i.e. with a single colour.
boxModel = modelBuilder.createBox(
    10f, 10f, 10f,
    Material(
        ColorAttribute.createDiffuse(Color.WHITE),
        ColorAttribute.createSpecular(Color.RED),
        FloatAttribute.createShininess(15f)
    ),
    (VertexAttributes.Usage.Position or VertexAttributes.Usage.Normal or VertexAttributes.Usage.TextureCoordinates).toLong()
)
for (x in 1..10) {
    for (y in 1..10) {
        modelInstance = ModelInstance(boxModel, x * 15f, 0.0f, y * 15f)
        chunks2[0].addMesh(modelInstance.model.meshes, modelInstance.transform, btBoxShape(Vector3(10f, 10f, 10f)))
    }
}
chunks2[0].mergeBaby()
So I build up the giant mesh and then render it:
shaderProgram.begin()
texture.bind()
shaderProgram.setUniformMatrix("u_projTrans", camera.combined)
shaderProgram.setAttributef("a_color", 1f, 1f, 1f, 1f)
shaderProgram.setUniformi("u_texture", 0)
renderChunks()
shaderProgram.end()
This works great for textured meshes, and the right texture is shown, but the base colour (I guess that's "a_color", which is set to white) is used where I actually want the colour I supplied in the Material.
I'm trying to do a water simulation, but I'm restricted to 2D. I started by drawing the boundary of the sea as a sine wave using GL_LINE_LOOP, but I'm unable to fill it. I tried switching to GL_POLYGON, but then I don't get the proper shape. Here is the code:
Here is the image of the wave I want filled:
To tessellate the above, emit a top and a bottom vertex at each x position along the line, then draw a triangle strip. That is, for each (x, y) on the sine wave, emit two vertices with the same x: one at the wave's y and one at a constant y for the flat bottom.
Something like this:
k = 2 * M_PI / wavelength;   /* wave number; constant, so hoist it out of the loop (M_PI from <math.h>) */
glBegin(GL_TRIANGLE_STRIP);
for (x = -50; x <= 50; x += inc) {
    y = amplitude * sin(k * x);
    glVertex3f(x, -35, 0);   /* bottom edge: constant y */
    glVertex3f(x, y, 0);     /* top edge: on the sine wave */
}
glEnd();
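The same strip can be built on the CPU first, which also makes the vertex ordering easy to check. A minimal sketch, with made-up values for amplitude, wavelength, and the bottom y:

```java
// Sketch: build the triangle-strip vertex list for the filled wave,
// alternating bottom (flat y = bottomY) and top (sine) vertices.
// All parameter values here are illustrative.
import java.util.ArrayList;
import java.util.List;

public class WaveStrip {
    static List<float[]> build(float amplitude, float wavelength,
                               float bottomY, float xMin, float xMax, float inc) {
        List<float[]> verts = new ArrayList<>();
        float k = (float) (2.0 * Math.PI / wavelength); // wave number, hoisted
        for (float x = xMin; x <= xMax; x += inc) {
            float y = amplitude * (float) Math.sin(k * x);
            verts.add(new float[] { x, bottomY }); // bottom edge: constant y
            verts.add(new float[] { x, y });       // top edge: the sine wave
        }
        return verts;
    }

    public static void main(String[] args) {
        List<float[]> v = build(10f, 25f, -35f, -50f, 50f, 1f);
        // Every even-indexed vertex sits on the flat bottom.
        System.out.println(v.get(0)[1]); // -35.0
    }
}
```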
CMIIW: I've heard libgdx's ShapeRenderer is slow and that it's better to use a Batch.
I tried using Pixmap to produce 2x2 texture and rely on the linear blending:
public void rect(Batch batch, float x, float y, float w, float h, float rot,
                 Color c00, Color c01, Color c10, Color c11) {
    // Build a 2x2 texture, one corner colour per texel
    Pixmap pm = new Pixmap(2, 2, Format.RGBA4444);
    pm.drawPixel(0, 0, Color.rgba8888(c00));
    pm.drawPixel(0, 1, Color.rgba8888(c01));
    pm.drawPixel(1, 0, Color.rgba8888(c10));
    pm.drawPixel(1, 1, Color.rgba8888(c11));
    Texture tx = new Texture(pm);
    tx.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);
    // Flush pending geometry so the new texture can be bound
    batch.end();
    batch.begin();
    batch.draw(new TextureRegion(tx), x, y, w / 2f, h / 2f, w, h, 1f, 1f, rot, true);
    // Flush again before disposing the texture the draw call uses
    batch.end();
    batch.begin();
    tx.dispose();
    pm.dispose();
}
And it produces this:
It is not the effect I would want.
If I could throw away half a pixel from each side of the texture, I think it would be good.
I thought in order to do that I have to change the TextureRegion to this:
new TextureRegion(tx, 0.5f, 0.5f, 1f, 1f)
but this produces:
What is happening there?
Or is there a better way to efficiently draw a gradient rectangle?
EDIT:
Ouch! Thanks TenFour04 - I tried with
new TextureRegion(tx, 0.25f, 0.25f, 0.75f, 0.75f) but got this instead:
Weird, I got exactly what I want with
new TextureRegion(tx, 0.13f, 0.13f, 0.87f, 0.87f):
Looks like some rounding problem? 0.126f would (seemingly) still give me that, but 0.125f gives me something much closer to the very first image in the post.
@Pinkie Swirl: hmm, right. I wanted a method to draw gradient rectangles because I don't want to make textures, but in the end I do... actually, I can avoid making those 2x2 textures on the fly.
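For reference, the inset TenFour04's suggestion is based on is half a texel on each side: for an N x N texture that is 0.5 / N in UV space, which for the 2x2 gradient texture gives exactly the 0.25 .. 0.75 region. A minimal sketch of that computation:

```java
// Sketch: half-texel inset region for an N x N texture. For N = 2 this
// yields the 0.25 .. 0.75 region suggested in the comments, which puts
// the region edges exactly on the texel centres.
public class HalfTexel {
    // Returns {u, v, u2, v2} inset by half a texel on every side.
    static float[] insetRegion(int size) {
        float half = 0.5f / size; // half a texel in UV space
        return new float[] { half, half, 1f - half, 1f - half };
    }

    public static void main(String[] args) {
        float[] r = insetRegion(2);
        System.out.println(r[0] + " .. " + r[2]); // 0.25 .. 0.75
    }
}
```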
I am working on an OpenGL program. The viewing parameters are:
eye 0 -4 6
viewup 0 1 0
lookat 0 0 0
I want to draw a background rectangle (with a texture) such that I will be able to see it from the current eye location. Right now, the eye is looking from the negative Y direction. I want to draw a rectangle that covers the entire screen, but I don't understand what coordinates to give the rectangle or how to set up the texture mapping.
Currently I have this in my function:
glPushMatrix();
glLoadIdentity();
glBegin(GL_QUADS);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex2f(-1.0, -1.0);
glVertex2f(-1.0, 1.0);
glVertex2f(1.0, 1.0);
glVertex2f(1.0, -1.0);
glEnd();
glPopMatrix();
The easiest option for drawing a background image that is independent of the camera is to draw it in normalized device coordinates (NDC) and do not perform any transformations/projections on it.
To cover the whole screen, you have to draw a quad going from p = [-1, -1] to [1, 1]. The texture coordinates can then be found by tex = (p + 1) / 2.
Normalized device coordinates are the coordinates one would normally get after applying the projection and the perspective divide. They span a cube from [-1, -1, -1] to [1, 1, 1], where the near plane is mapped to z = -1 (at least in OpenGL; in DirectX the near plane is mapped to z = 0). In your special case, the depth should not matter, as long as you draw the background plane as the first element in each frame and disable the depth test.
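The tex = (p + 1) / 2 mapping from the answer can be sketched as a tiny helper, just to show which texture corner each NDC corner lands on:

```java
// Sketch of the answer's tex = (p + 1) / 2 mapping: the NDC corners of
// the fullscreen quad map to the corners of the texture.
public class NdcToTex {
    static float[] tex(float px, float py) {
        return new float[] { (px + 1f) / 2f, (py + 1f) / 2f };
    }

    public static void main(String[] args) {
        System.out.println(tex(-1f, -1f)[0]); // 0.0 (bottom-left -> (0, 0))
        System.out.println(tex( 1f,  1f)[0]); // 1.0 (top-right  -> (1, 1))
    }
}
```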
This is the code I use to draw a rectangle in my program:
glBegin(GL_QUADS);
glTexCoord2f(0.0f, maxTexCoordHeight); glVertex2i(pos.x, pos.y + height);
glTexCoord2f(0.0f, 0.0f); glVertex2i(pos.x, pos.y);
glTexCoord2f(maxTexCoordWidth, 0.0f); glVertex2i(pos.x + width, pos.y);
glTexCoord2f(maxTexCoordWidth, maxTexCoordHeight); glVertex2i(pos.x + width, pos.y + height);
glEnd();
It draws just a simple rectangle with specified texture, e.g. like this:
I'd like to ask if it's possible in OpenGL to achieve border effect like this:
As you can see, inside this tile there's just a plain blue background, which could be handled separately - just an automatically resized texture. That can be achieved easily with the code snippet I gave, but the problem is the border.
If the border were supposed to be one color, I could try drawing an empty, unfilled rectangle using GL_LINES around my texture, but it's not.
Also, if tiles always had a fixed size, I could prepare a texture to match it, but they HAVE TO be easily resizable without changing the bitmap file I use as a texture.
So if it's not possible with basic OpenGL functions, what approaches to achieve this effect would be most efficient and/or easiest?
EDIT: It has to be 2D.
This is a classic problem in GUIs with OpenGL and is often solved using the 9-cell pattern (also known as 9-slice or 9-patch). In this, you add the effect to the original image (or define it by other OpenGL parameters) and split the rendered quad into nine quads: three rows and three columns.
You then make the heights of the top and bottom rows fixed, as you make the widths of the left and right columns fixed. The center quad is scaled so that your object fits the rectangle you want to fill. You then map only the border parts of the texture to the quads forming the outer cells, while you map the center of the texture to the center quad.
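The column and row boundaries of such a 9-slice grid are easy to compute. A minimal sketch, assuming a fixed border width b (the sizes below are made-up examples):

```java
// Sketch: boundary positions for one axis of a 9-slice grid. The border b
// stays fixed while the centre cell absorbs all the stretch. Example sizes
// are illustrative only.
public class NineSlice {
    // x-coordinates of the four column boundaries for a rect at x, width w,
    // border b: left column [c0, c1], centre [c1, c2], right [c2, c3].
    static float[] cols(float x, float w, float b) {
        return new float[] { x, x + b, x + w - b, x + w };
    }

    public static void main(String[] args) {
        float[] c = cols(0f, 100f, 8f);
        // columns: 0 | 8 | 92 | 100 -> left/centre/right widths 8, 84, 8
        System.out.println(c[1] + " " + c[2]); // 8.0 92.0
    }
}
```

The same function applied to y and h gives the row boundaries; combining both yields the nine quads, and the matching texture coordinates come from applying it to the texture's own dimensions.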
Related to what was said in the comments, you could also use actual 3D effects by making the quad 3D. No one forces you to use perspective projection in that case; you can stay with orthographic projection (2D mode). OpenGL always does 3D calculations anyway.
Aside from Jonas's answer, which is excellent, I want to add two more options.
The first is to just make the texture look like your desired square - no fancy code necessary if you can do it in Photoshop ;).
The second is to complicate your drawing code a bit. If you look at your image, you can see that every "side slope" of your square can be drawn with two triangles. You can make your code draw ten triangles instead of one square and use a different color for each group of two triangles:
draw() {
    GLfloat i = <your_inset_here>; // border inset; placeholder as in the original
    glBegin(GL_TRIANGLES);
    // top border part, top-left triangle
    glColor3f(<color_0>);
    glVertex2f(pos.x, pos.y);
    glVertex2f(pos.x + w, pos.y);
    glVertex2f(pos.x + i, pos.y + i);
    // top border part, bottom-right triangle
    glVertex2f(pos.x + w, pos.y);
    glVertex2f(pos.x + w - i, pos.y + i);
    glVertex2f(pos.x + i, pos.y + i);
    // repeat this process with the other coordinates for the other three borders
    // draw the middle square using {(pos.x+i, pos.y+i), (pos.x+w-i, pos.y+i),
    //   (pos.x+w-i, pos.y+h-i), (pos.x+i, pos.y+h-i)} as coordinates
    glEnd();
}
You can improve this further by creating a function that draws an irregularly shaped quad with the given coordinates and a color, and calling that function five times.