Rendering multiple 3D Objects - opengl

I'm learning how to render objects with libGdx. I have one square model, from which I create a few model instances. If I have only one instance it renders fine.
But with more instances it doesn't render properly. It looks like the front objects are drawn first and the background ones last, so the background objects are always visible and you can see through the front objects.
To render I use the following:
Gdx.gl.glViewport(0,0,Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
Gdx.gl20.glClearColor(1f, 1f, 1f, 1f);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
mb.begin(cam);
worldManager.render(mb, environment);
mb.end();
The mb variable is the ModelBatch instance, and inside worldManager.render each model instance is drawn as follows:
mb.render(model, environment);
I'm not sure what is happening, but I think there is some GL attribute that I need to enable.
This is not 100% related to the following post because, yes, it uses OpenGL like libgdx does, but the solution provided in that post is not working, and I think the problem comes from libgdx's ModelBatch.
Reproduction of the problem

You didn't set up your camera correctly. First of all, your camera's near plane is 0f, which means the view frustum is infinitely small. Set it to a value of at least 1f. Secondly, you set the camera to look at its own position, which is impossible (you can't look into your own eyes, can you ;)).
So it would look something like:
camera = new PerspectiveCamera(90, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
camera.position.set(0, 10, 0);
camera.lookAt(0,0,0);
camera.near = 1f;
camera.far = 100f;
camera.update();
You probably want to start here: https://xoppa.github.io/blog/basic-3d-using-libgdx/
For more information on how the camera works have a look at: http://www.badlogicgames.com/wordpress/?p=1550
Btw, calling Gdx.gl20.glEnable(GL20.GL_DEPTH_TEST); will not help at that location and should definitely not be done when mixed with ModelBatch. ModelBatch manages its own render context, see the documentation for more information: https://github.com/libgdx/libgdx/wiki/ModelBatch
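If you do need to influence depth testing while using ModelBatch, the idiomatic route is through material attributes rather than raw GL calls. A minimal sketch of that idea (the ModelBuilder box here is just an illustration, not the poster's model; by default opaque materials already have depth testing enabled, so this is only needed for a non-default depth function):
import com.badlogic.gdx.graphics.Color;
import com.badlogic.gdx.graphics.GL20;
import com.badlogic.gdx.graphics.VertexAttributes.Usage;
import com.badlogic.gdx.graphics.g3d.Material;
import com.badlogic.gdx.graphics.g3d.Model;
import com.badlogic.gdx.graphics.g3d.attributes.ColorAttribute;
import com.badlogic.gdx.graphics.g3d.attributes.DepthTestAttribute;
import com.badlogic.gdx.graphics.g3d.utils.ModelBuilder;

// DepthTestAttribute configures the depth function for this material;
// ModelBatch applies it through its own render context.
Model box = new ModelBuilder().createBox(1f, 1f, 1f,
        new Material(ColorAttribute.createDiffuse(Color.RED),
                     new DepthTestAttribute(GL20.GL_LEQUAL)),
        Usage.Position | Usage.Normal);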

There are a lot of possible answers, but I would say that
glEnable(GL_DEPTH_TEST);
could help if you haven't done it yet. Also, enabling the depth test only works if you actually have a depth buffer, which means you must make sure you have one; the method for this depends on your window context.
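For example, with libGDX on the LWJGL desktop backend (an assumption on my part, since the post doesn't say which backend is used), the depth buffer is requested in the application configuration. A sketch, where MyGame is a placeholder for your ApplicationListener:
import com.badlogic.gdx.backends.lwjgl.LwjglApplication;
import com.badlogic.gdx.backends.lwjgl.LwjglApplicationConfiguration;

public class DesktopLauncher {
    public static void main(String[] args) {
        LwjglApplicationConfiguration config = new LwjglApplicationConfiguration();
        config.depth = 16; // bits for the depth buffer; 0 would request none
        new LwjglApplication(new MyGame(), config);
    }
}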

To show the difference in depth between the objects, you can use fog.

Related

Opacity overlap within the same OpenGl TriStrip in LibGdx

While my problem lies strictly in the opacity of the tristrip, I'd like to give some context first.
Recently I started developing a game with LibGdx that involves 2D circles bouncing around the screen. To provide a neat graphical effect, I created a small system that gives each actor a "tail" which fades over time. Visually, it looks like this:
Nice Trail Example
Now that ended up looking satisfactory. My problem, however, lies in situations where parts of the "trail" effect overlap, creating an ugly artifact which I would guess is the sum of the opacities of the overlapping points.
Ugly Trail Example
I believe this problem lies in the way with which the tristrip is drawn, specifically with the blending methods used.
The code used to generate the trail is as follows:
Array<Vector2> tristrip = new Array<Vector2>(); // Contains the vector information for OpenGL to build the strip.
Array<Vector2> texcoord = new Array<Vector2>(); // Contains the opacity information for the corresponding tristrip point.

// ... Code Here.... //

gl20.begin(camera.combined, GL20.GL_TRIANGLE_STRIP);
for (int i = 0; i < tristrip.size; i++) {
    if (i == batchSize) {
        gl20.end();
        gl20.begin(camera.combined, GL20.GL_TRIANGLE_STRIP);
    }
    Vector2 point = tristrip.get(i);
    Vector2 textcoord = texcoord.get(i);
    gl20.color(color.r, color.g, color.b, color.a); // Color.WHITE
    gl20.texCoord(textcoord.x, 0f);
    gl20.vertex(point.x, point.y, 0);
}
gl20.end();
It is also important to note that the draw function for the strip is called within another class, in this fashion:
private void renderFX() {
    Gdx.gl.glEnable(GL20.GL_BLEND);
    Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
    Array<Ball> balls = mainstage.getBalls();
    for (int i = 0; i < balls.size; i++) { // Draws the trails for each actor
        balls.get(i).drawFX();
    }
}
Is this problem a rookie mistake on my part, or was my implementation of the drawing of the vector array tristrip flawed from the start? How can I fix the blending issue in order to create smoother trails even in the presence of sharp curves?
Thanks in advance...
Edit: Since originally asking this question, I've experimented with some possible solutions, also implementing Deniz Yılmaz's suggestion of using an FBO to facilitate blending. Given that, my render function currently looks like this:
private void renderFX() {
    frameBuffer.begin();
    Gdx.gl20.glDisable(GL20.GL_BLEND);
    Gdx.gl20.glClearColor(0f, 0f, 0f, 0);
    Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);
    Gdx.gl20.glEnable(GL20.GL_STENCIL_TEST);
    Gdx.gl20.glStencilOp(GL20.GL_KEEP, GL20.GL_INCR, GL20.GL_INCR);
    Gdx.gl20.glStencilMask(0xFF);
    Gdx.gl20.glClear(GL20.GL_STENCIL_BUFFER_BIT);
    Array<Ball> balls = mainstage.getBalls();
    for (int i = 0; i < balls.size; i++) {
        Gdx.gl20.glStencilFunc(GL20.GL_EQUAL, 0, 0xFF);
        balls.get(i).drawFX(1f, Color.RED);
    }
    frameBuffer.end();
}
As shown, I've also experimented with stencils so as to try and mask the overlapping portion of the trail. This approach, however, results in the following visuals:
Stenciled Version
Again, this is not ideal, and it has made me realize that approaching this problem by masking is not a good idea: the opacity gradient will never be smooth in the corners, as there will always be a sharp line between the two overlapping opacity values, even if the logic somehow prevents blending.
Given that, how else could I approach this problem? Should I scrap this method entirely if I plan to achieve a smooth gradient for this trail effect?
Thanks again.
glBlendFunc() alone is useless in this case because, by default, the values calculated by the blend function are added together.
So something like glBlendEquation(GL_MAX) is needed.
BUT
blending alone won't work, since it can't tell the difference between the background and the overlapping shapes.
Instead, use a FrameBuffer to draw the trail with that glBlendEquation.
https://github.com/mattdesl/lwjgl-basics/wiki/FrameBufferObjects
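A rough sketch of that combination (assuming GL_MAX is available: it is core in desktop GL and GLES 3, exposed as GL30.GL_MAX in libGDX; frameBuffer, batch, and drawTrails() are placeholders for your own objects):
private void renderTrailsToFbo() {
    frameBuffer.begin();
    Gdx.gl.glClearColor(0f, 0f, 0f, 0f);
    Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
    Gdx.gl.glEnable(GL20.GL_BLEND);
    // GL_MAX keeps the highest alpha where trail segments overlap
    // instead of summing them
    Gdx.gl.glBlendEquation(GL30.GL_MAX);
    drawTrails(); // your tristrip rendering from above
    Gdx.gl.glBlendEquation(GL20.GL_FUNC_ADD); // restore the default equation
    frameBuffer.end();

    // Composite the buffer over the scene with ordinary alpha blending
    TextureRegion region = new TextureRegion(frameBuffer.getColorBufferTexture());
    region.flip(false, true); // FBO color textures come out upside down
    batch.begin();
    batch.draw(region, 0, 0);
    batch.end();
}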

Changing the alpha texture format for a particle emitter

Using cocos2d-iphone 1.0.1, I have a continuous fire particle emitter. I would like to modify its alpha pixel format:
// Change format
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA4444];
// Make emitter
emitter = [CCParticleSystemQuad particleWithFile:file];
// Change back
[CCTexture2D setDefaultAlphaPixelFormat:kCCTexture2DPixelFormat_RGBA8888];
This doesn't work. I am well aware that RGBA4444 should make my particles look weird, but they don't look weird - so I know that RGBA4444 is not taking effect.
I suspect that it is because RGBA8888 is being applied to all newly created particles. If I remove the RGBA8888 line, it does work.
How can I make my emitter emit RGBA4444, regardless of the formats used in the rest of my game?
I don't know why, but it works if you modify CCParticleSystem.m.
That file loads the particle texture like this:
CCTexture2D *tex = [[CCTextureCache sharedTextureCache] addImage:textureName];
So if you change the format before and after that line, it works. Not sure why it didn't work in my example above, though.

changing textureRect of a CCSprite created by CCRenderTexture

I have a CCSprite which gradually needs to be exhausted linearly from one end, let's say from left to right. For this purpose, I am trying to change the textureRect property of the sprite so that the part that got exhausted from one end is 'outside' the displayed frame of the sprite.
I did this sort of thing before with a sprite that gets loaded from a spritesheet, and it worked perfectly. But when I created this CCSprite using CCRenderTexture and changed the textureRect property, the entire sprite disappeared.
The first image is the original CCSprite which I get from CCRenderTexture. The second image shows what I want to achieve. The black dotted rectangular portion of the sprite needs to be omitted; only the blue dotted portion of the sprite needs to be displayed. Essentially, this blue dotted rectangle is my textureRect.
Is there any way I could make my sprite reduce from one end?
Also, is there any difference between a sprite created normally and one created using CCRenderTexture?
I have done a similar thing before using a low-level hack.
There is a workaround if you use CCProgressTimer; that's very easy and I think it should be enough for your example.
But you said in a comment that you have some special requirements, like "exhaust it from both ends at once", so some low-level hacking is needed. The solution from my last project is:
1) Get the texture image's raw data. In cocos2d you can use CCRenderTexture and in cocos2d-x you can use CCImage.
2) CCRenderTexture has a method - (BOOL) saveToFile:(NSString *)name format:(tCCImageFormat)format. You can read its source code and then try to save the image into a 2D array instead, like byte raw[1024][768]. Each element in this array represents one pixel of your picture (the type may not be byte, I'm not sure, I nearly forget the details). The format MUST BE PNG, since transparency will be needed.
3) Modify the raw data directly: set the transparency of the pixels you want to disappear to 0x0.
4) Re-initialize a CCRenderTexture using the picture data you modified.
I can't provide the code directly since it is a trade secret and a core part of one of my projects, but I can share my solution. You will also need some knowledge about how the PNG file format works. Read:
https://en.wikipedia.org/wiki/Portable_Network_Graphics#File_header
Turns out I was making a silly mistake. While supplying values to the textureRect (CGRect), I was actually setting textureRect.origin.y to the height of the texture, which made my textureRect go beyond (above) the texture area. This explains why the sprite was disappearing.

What is the best way to detect mouse-location/clicks on object in OpenGL?

I am creating a simple 2D OpenGL game, and I need to know when the player clicks or mouses over an OpenGL primitive. (For example, on a GL_QUADS that serves as one of the tiles...) There doesn't seem to be a simple way to do this beyond brute force or opengl.org's suggestion of using a unique color for every one of my primitives, which seems a little hacky. Am I missing something? Thanks...
My advice: don't use OpenGL's selection mode or OpenGL rendering (the brute force method you are talking about); use a CPU-based ray picking algorithm if you're in 3D. For 2D, like in your case, it should be straightforward: it's just a test of whether a 2D point is inside a 2D rectangle.
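A generic sketch of that test (plain Java; converting the mouse position from window coordinates into the rectangle's space is up to you):
static boolean pointInRect(float px, float py,
                           float rx, float ry, float rw, float rh) {
    // true when the point lies inside the axis-aligned rectangle
    return px >= rx && px <= rx + rw
        && py >= ry && py <= ry + rh;
}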
I would suggest using the hacky method if you want a quick implementation (quick in coding time, I mean), especially if you don't want to implement a quadtree with moving objects. If you are using OpenGL immediate mode, it should be straightforward:
// Rendering part
glClearColor(0, 0, 0, 0);
glClear(GL_COLOR_BUFFER_BIT);
for (unsigned i = 0; i < tileCount; ++i) {
    unsigned tileId = i + 1; // we increment the tile ID so as not to pick up the black background
    glColor3ub(tileId & 0xFF, (tileId >> 8) & 0xFF, (tileId >> 16) & 0xFF);
    renderTileWithoutColorNorTextures(i);
}

// Let's retrieve the tile ID
unsigned tileId = 0;
glReadPixels(mouseX, mouseY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE,
             (unsigned char *)&tileId);
if (tileId != 0) { // if we didn't pick the black background
    tileId--;
    // we picked tile number tileId
}

// We don't want to show that to the user, so we clear the screen
glClearColor(...); // the color you want
glClear(GL_COLOR_BUFFER_BIT);
// Now, render your real scene
// ...
// And we swap
whateverSwapBuffers(); // might be glutSwapBuffers, glx, ...
You can use OpenGL's glRenderMode(GL_SELECT) mode. Here is some code that uses it, and it should be easy to follow (look for the _pick method)
(and here's the same code using GL_SELECT in C)
(There have been cases - in the past - of GL_SELECT being deliberately slowed down on 'non-workstation' cards in order to discourage CAD and modeling users from buying consumer 3D cards; that ought to be a bad habit of the past that ATI and NVidia have grown out of ;) )

OpenGL textures that are not 2^x in dimension

I'm trying to display a picture in an OpenGL environment. The picture's original dimensions are 3648x2432, and I want to display it as a 256x384 image. The problem is, 384 is not a power of 2, and when I try to display it, it looks stretched. How can I fix that?
There are three ways of doing this that I know of:
The one Albert suggested (resize it until it fits).
Subdivide the texture into 2**n-sized rectangles, and piece them together in some way.
See if you can use GL_ARB_texture_non_power_of_two. It's probably best to avoid it though, since it looks like it's an Xorg-specific extension.
You can resize your texture so it is a power of two (skew your texture so that when it is mapped onto the object it looks correct).
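A minimal sketch of that idea in Java with libGDX (an assumption on my part, since the question is plain OpenGL): instead of skewing, copy the image into the next power-of-two canvas and restrict the texture coordinates to the original pixels, so nothing gets stretched:
import com.badlogic.gdx.graphics.Pixmap;
import com.badlogic.gdx.graphics.Texture;
import com.badlogic.gdx.graphics.g2d.TextureRegion;
import com.badlogic.gdx.math.MathUtils;

public static TextureRegion loadNpotSafe(Pixmap source) {
    int potW = MathUtils.nextPowerOfTwo(source.getWidth());
    int potH = MathUtils.nextPowerOfTwo(source.getHeight());
    Pixmap padded = new Pixmap(potW, potH, source.getFormat());
    padded.drawPixmap(source, 0, 0); // copy into the top-left corner
    Texture texture = new Texture(padded);
    padded.dispose();
    // the region limits the UVs to the original pixels rather than the padded canvas
    return new TextureRegion(texture, 0, 0, source.getWidth(), source.getHeight());
}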
ARB_texture_rectangle is probably what you're looking for. It lets you bind to GL_TEXTURE_RECTANGLE_ARB instead of GL_TEXTURE_2D, and you can load an image with non power-of-2 dimensions. Be aware that your texture coordinates will range from [0..w]x[0..h] instead of [0..1]x[0..1].
If GL_EXT_texture_rectangle is supported, then use GL_TEXTURE_RECTANGLE_EXT for the first param in glEnable() and glBindTexture() calls.