I have a wall with a brick texture in my OpenGL 2 scene that keeps shimmering and flashing no matter what I set. When I'm zoomed in close (and can see the texture clearly), the flashing and shimmering stops. But when I'm zoomed out and moving around the scene, the flashing and shimmering is very pronounced. This is the texture code for the brick wall:
brickwall.setTexParameteri(gl, GL2.GL_TEXTURE_WRAP_S, GL2.GL_REPEAT);
brickwall.setTexParameteri(gl, GL2.GL_TEXTURE_WRAP_T, GL2.GL_REPEAT);
brickwall.setTexParameteri(gl, GL2.GL_TEXTURE_MAG_FILTER, GL2.GL_NEAREST);
brickwall.setTexParameteri(gl, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_LINEAR);
gl.glGenerateMipmap(GL2.GL_TEXTURE_2D);
brickwall.enable(gl);
brickwall.bind(gl);
//...
brickwall.disable(gl);
From what I've googled, it seems that this is a problem that mipmapping solves. But my question is: how does one do this? Do I have to create, load, and set parameters for all the various power-of-two-sized images? Can anyone give me an example of loading and displaying a JOGL2 texture using mipmaps that won't flicker and shimmer while zooming and moving about the scene?
You are generating the mipmap chain with glGenerateMipmap, but you didn't set an appropriate MIN filter:
brickwall.setTexParameteri(gl, GL2.GL_TEXTURE_MIN_FILTER, GL2.GL_LINEAR_MIPMAP_LINEAR);
Only the *MIPMAP* filters use mipmaps; the other texture filters don't.
I seem to have a problem with my rendering. When I render to a framebuffer and then to screen, the images just seem less vibrant and kind of faded. Even simple ones.
In the picture above, the pink box on the right is rendered directly onto the screen buffer and the ones on the left are first rendered onto a framebuffer and then onto the screen.
I am using a multisampled framebuffer and it seems to have made no difference. I also tried blending only once by using GL_RGB for the framebuffer's color texture, but that didn't help either. Any ideas?
The issue ended up being the size of the framebuffer. It was too small, which made the quality suffer. I multiplied the width and height by 6 and the quality went up.
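For reference, here is a minimal sketch of allocating the framebuffer's colour attachment at a multiple of the window size. The function name, the scale factor and the use of GLEW are just for illustration; pick whatever factor gives acceptable quality for the memory cost:

#include <GL/glew.h>

// Allocate the FBO's colour attachment at a multiple of the window size so the
// final draw to the screen downsamples a higher-resolution image.
GLuint createScaledColorAttachment(GLuint fbo, int windowWidth, int windowHeight, int scale)
{
    GLuint color = 0;
    glGenTextures(1, &color);
    glBindTexture(GL_TEXTURE_2D, color);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                 windowWidth * scale, windowHeight * scale, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, color, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return color;
}

Remember to call glViewport with the larger size while rendering into the FBO, and to restore the window-sized viewport before drawing the result to the screen.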
I started learning shaders, playing around on ShaderToy.com. For a project I want to make, I need to create an arbitrary glow filter in WebGL (not bloom). I want to calculate an alpha value that I can then use to draw a colored glow or use for some animated texture like fire, etc.
So far I thought of a few ideas:
Averaging alpha across some area near each pixel - obviously slow
Going in a circle around each pixel in one loop, then over distance in another, to calculate alpha based on how close the shape is to that pixel - probably just as slow
Blurring the entire shape - sounds like overkill since I just need the alpha
Are there other ideas for approaching this? All I can find are Gaussian blur techniques from bloom-like filters.
See this NVIDIA document on the simple glow effect.
The basic idea is to
render the scene into the back buffer
activate the effect
render some elements of the scene into an FBO
compute the glow effect
bind the final FBO as a texture and blend this effect with the previously rendered scene in the back buffer
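To illustrate the last step only (a rough fixed-function C++ sketch, not the NVIDIA document's code; glowTexture is assumed to be the colour texture of the glow FBO after the blur pass), the composite is just an additive blend of a full-screen quad over the scene that is already in the back buffer:

#include <GL/gl.h>

void compositeGlow(GLuint glowTexture)
{
    // set up a projection that maps the full-screen quad to [0,1] x [0,1]
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0.0, 1.0, 0.0, 1.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, glowTexture);

    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);              // additive: scene colour + glow colour

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f, 0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(1.0f, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(1.0f, 1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f, 1.0f);
    glEnd();

    glDisable(GL_BLEND);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}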
I am working on a game with a friend and we are using OpenGL, GLUT, DevIL, and C++ to render everything. Simply put, most of the .pngs we are using are rendering properly, but there are random pixels that are showing up as white.
These pixels fall into two categories. The first are pixels on the edge of the image. These result from the anti-aliasing applied by Photoshop's stroke feature (which I am trying to fix). The second is the more mysterious one. When the enemy is standing still, the texture looks fine, but as soon as it jumps, a random white line appears on top of it.
The line on top is of varying solidity (this shot is not the most solid)
It seems like a blending issue, but I am not as familiar with the way OpenGL handles transparency (our transparency code was learned from other questions on Stack Overflow, though I couldn't find anything on this particular issue). I am hoping something will fix both problems, but I am more worried about the second.
Our current setup code:
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_TEXTURE_2D);
glDisable(GL_DEPTH_TEST);
Transparent areas of a bitmap also have a color. If a pixel is 100% transparent you usually can't see that color, but it is still stored; Photoshop usually fills these areas with white.
If you are using minification or magnification filters other than GL_NEAREST, you will get interpolation. If you interpolate between two pixels, where one is blue and opaque and the other is white and transparent, you get something that is 50% transparent and light blue. You may also get the same problem with mipmaps, as interpolation is used when they are generated. If you use mipmaps, one solution is to generate them yourself; that way, you can ignore the transparent areas when doing the interpolation. See some good explanations here: http://answers.unity3d.com/questions/10302/messy-alpha-problem-white-around-edges.html
Why are you using PNG files? You save some disk space, but you need to include complex libraries like DevIL. You don't save any space in the delivery of an application, as most tools that create delivery packages have very efficient compression. And you don't save any memory on the GPU, which may be the most critical.
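A common CPU-side workaround in the spirit of the "generate them yourself" suggestion above (only a sketch; the function name and the simple 3x3 neighbourhood are assumptions) is to bleed the colour of nearby opaque pixels into the fully transparent ones before uploading the image, so that filtering and mipmap generation never mix in the hidden white:

#include <cstdint>
#include <vector>

// Replace the (invisible) colour of fully transparent pixels in an RGBA8 image
// with the average colour of their non-transparent neighbours. Alpha stays at 0.
void bleedTransparentPixels(std::vector<std::uint8_t>& pixels, int width, int height)
{
    const std::vector<std::uint8_t> src = pixels;   // read from a copy, write into 'pixels'
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            std::uint8_t* p = &pixels[(y * width + x) * 4];
            if (p[3] != 0) continue;                // only touch fully transparent pixels
            int r = 0, g = 0, b = 0, n = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    const int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                    const std::uint8_t* q = &src[(ny * width + nx) * 4];
                    if (q[3] == 0) continue;        // average only visible neighbours
                    r += q[0]; g += q[1]; b += q[2]; ++n;
                }
            }
            if (n > 0) {
                p[0] = static_cast<std::uint8_t>(r / n);
                p[1] = static_cast<std::uint8_t>(g / n);
                p[2] = static_cast<std::uint8_t>(b / n);
            }
        }
    }
}

A single pass only bleeds one texel deep; if you generate several mipmap levels you may want to repeat it a few times.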
This looks like an artifact in your source PNG. Are you sure there are no such light opaque pixels there?
The white line appearing on top could be a UV interpolation error from a neighboring texture in your texture atlas (or from the padding, if you pad your NPOT textures to POT with white opaque pixels). That's why you usually need to pad textures with at least one edge pixel in every direction. That won't help with mipmaps though, as Lars said - you might need to use custom mipmap generation or drop mipmaps altogether.
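To make the padding concrete, here is a small sketch (the names and layout are mine, not from the question) that copies a sprite into an atlas with a one-pixel gutter made of duplicated edge pixels:

#include <cstdint>

// Copy a w x h RGBA8 sprite into an RGBA8 atlas at (ax, ay), also filling a
// one-pixel border around it with the sprite's clamped edge colours, so bilinear
// filtering at the sprite's edges never samples a neighbouring sprite.
// ax and ay must be at least 1, and the cell reserved in the atlas must be
// (w + 2) x (h + 2) texels.
void blitWithGutter(const std::uint8_t* sprite, int w, int h,
                    std::uint8_t* atlas, int atlasWidth, int ax, int ay)
{
    for (int y = -1; y <= h; ++y) {
        for (int x = -1; x <= w; ++x) {
            const int sx = x < 0 ? 0 : (x >= w ? w - 1 : x);   // clamp into the sprite
            const int sy = y < 0 ? 0 : (y >= h ? h - 1 : y);
            const std::uint8_t* src = &sprite[(sy * w + sx) * 4];
            std::uint8_t* dst = &atlas[((ay + y) * atlasWidth + (ax + x)) * 4];
            dst[0] = src[0]; dst[1] = src[1]; dst[2] = src[2]; dst[3] = src[3];
        }
    }
}

When drawing, the quad's texture coordinates should cover only the inner w x h region, not the gutter.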
I have a 1024x1024 background texture and am trying to render a 100x100 sprite (also stored in a texture) to the bottom left corner of the background texture.
I want to render the sprite at 50% opacity. This needs to be done on the CPU side, not on the GPU with a shader. Most examples I've found use shaders to achieve this.
What's the best way to do this?
I suppose you mean using CPU-side OpenGL commands, i.e. the fixed-function pipeline; I deduce this from the "no shader" request.
Because literally "doing this on the CPU" would mean retrieving/mapping the texture to access it on the CPU, looping over the pixels, and copying the result back to the graphics card with glTexImage (or unmapping the texture afterward). That approach would be terribly inefficient.
So you just need to activate blending.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
and render in order: the background first, then a small quad with your 100x100 image. The blend will take the alpha channel from your 100x100 image; you could set it to a constant 50% in an image editing tool.
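If you would rather not bake the 50% into the image itself, the fixed-function pipeline can also scale the alpha at draw time by modulating the texture with the current vertex colour. A rough sketch, assuming an orthographic projection in pixel units with the origin at the bottom left (spriteTexture is a placeholder for your 100x100 sprite's texture id):

#include <GL/gl.h>

void drawSpriteAtHalfOpacity(GLuint spriteTexture)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, spriteTexture);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE); // texel colour * vertex colour

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glColor4f(1.0f, 1.0f, 1.0f, 0.5f); // the 0.5 alpha multiplies the texture's own alpha

    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(0.0f,   0.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f(100.0f, 0.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f(100.0f, 100.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(0.0f,   100.0f);
    glEnd();

    glColor4f(1.0f, 1.0f, 1.0f, 1.0f); // restore the default colour
}

Draw the background first with blending disabled (or with full alpha), then call this for the sprite.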
I've been working on a new game and finally reached the point where I started to code the motion of my main character, but I have a doubt about how to do it.
Previously, I made two games in Allegro, where sprite sheets are fairly easy to implement: I establish the frame and position on the image and save every frame as a separate bitmap. But I know that doing that in OpenGL is not necessary and costs a little bit more.
So I've been thinking about how to store my sprite sheet and use it in my program, and I have only one idea.
I load the image and turn it into a texture; then, in the function that handles the animation, I simply grab a portion of the texture to draw instead of storing every single frame as its own texture.
Is this the best way to do it?
Thanks in advance for the help.
You're on the right track.
Things to consider:
leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look.
if you want to be fancy and save some texture memory, there's no reason that the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer in the texture.
if your sprites are being rendered from 3D models, you could output normal & displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
You've got the right idea: if you have a bunch of sprites, it is much better to stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the frame of the sprite. You can do a few optimizations, but most of them revolve around getting the most out of your texture memory and packing the sprites closely together without blending issues.
I know that doing that in OpenGL is not necessary and costs a little bit more.
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
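For example, a fixed-function sketch (with assumed names, a sheet laid out as an evenly spaced grid, and an orthographic projection in pixel units) of what "change the texture coordinates" means:

#include <GL/gl.h>

// Draw frame 'frame' of a sprite sheet laid out as a framesX x framesY grid of
// equally sized cells. (x, y, w, h) is the destination rectangle on screen.
// The sheet is assumed to be bound as GL_TEXTURE_2D already.
void drawSpriteFrame(int frame, int framesX, int framesY,
                     float x, float y, float w, float h)
{
    const float du = 1.0f / framesX;                 // cell width in texture space
    const float dv = 1.0f / framesY;                 // cell height in texture space
    const float u0 = (frame % framesX) * du;
    const float v0 = (frame / framesX) * dv;
    const float u1 = u0 + du;
    const float v1 = v0 + dv;

    glBegin(GL_QUADS);
        glTexCoord2f(u0, v0); glVertex2f(x,     y);
        glTexCoord2f(u1, v0); glVertex2f(x + w, y);
        glTexCoord2f(u1, v1); glVertex2f(x + w, y + h);
        glTexCoord2f(u0, v1); glVertex2f(x,     y + h);
    glEnd();
}

Depending on whether your sheet's rows are stored top-to-bottom or bottom-to-top, you may need to flip the v coordinates.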