I am writing a loader and renderer for Quake 3 *.bsp files for my 3D engine. I support format version 46 (0x2e). Everything renders well as long as I use very simple maps. The geometry of simple maps renders correctly both under my engine and under a renderer I found on the Internet (at http://www.paulsprojects.net/opengl/q3bsp/q3bsp.html). Here is the screenshot:
I then tried rendering more complicated maps (from http://lvlworld.com/) with both my renderer and the one I found, to compare the results. Both renderers suffer from the same problem: there are holes in the scene (missing triangles here and there).
I have no clue what may be causing these problems, as I checked the maps and they are all of the same version. Has anybody encountered this problem?
EDIT: Some of the very complicated maps render correctly. It confuses me even more :).
The creator of this BSP loader got something wrong. I fixed it.
Simply edit the LoadData function so that all face data (except meshes and patches) goes into one array, and render that. Works for me, no more "holes". Here's a piece of code:
int currentFace = 0;
for( int i = 0; i < facesCount; i++ ) {
    if( faceData[i].type != SW_POLYGON ) // skip meshes, patches, billboards
        continue;
    // Copy into the compacted array at currentFace, not i,
    // so the skipped faces leave no gaps.
    m_pFaces[currentFace].texture = faceData[i].texture;
    m_pFaces[currentFace].lightmapIndex = faceData[i].lightmapIndex;
    m_pFaces[currentFace].firstVertexIndex = faceData[i].firstVertexIndex;
    m_pFaces[currentFace].vertexCount = faceData[i].vertexCount;
    m_pFaces[currentFace].numMeshIndices = faceData[i].numMeshIndices;
    m_pFaces[currentFace].firstMeshIndex = faceData[i].firstMeshIndex;
    f_bspType[i].faceType = SW_FACE; // Custom one.
    f_bspType[i].typeFaceNumber = currentFace;
    currentFace++;
}
While my problem lies strictly in the opacity of the tristrip, I'd like to give some context first.
Recently I started developing a game through LibGdx which involves 2D circles which bounce around the screen. So as to provide a neat graphical effect, I created a small system that would provide a "tail" to the actors, which would fade over time. Visually, it looks like this:
Nice Trail Example
Now that ended up looking satisfactory. My problem, however, lies in situations where parts of the "trail" effect overlap, creating an ugly artifact which I would guess is the sum of the opacities of the points.
Ugly Trail Example
I believe this problem lies in the way the tristrip is drawn, specifically in the blending methods used.
The code used to generate the trail is as follows:
Array<Vector2> tristrip = new Array<Vector2>(); // Contains the vector information for OpenGL to build the strip.
Array<Vector2> texcoord = new Array<Vector2>(); // Contains the opacity information for the corresponding tristrip point.

// ... Code Here.... //

gl20.begin(camera.combined, GL20.GL_TRIANGLE_STRIP);
for (int i = 0; i < tristrip.size; i++) {
    if (i == batchSize) {
        gl20.end();
        gl20.begin(camera.combined, GL20.GL_TRIANGLE_STRIP);
    }
    Vector2 point = tristrip.get(i);
    Vector2 textcoord = texcoord.get(i);
    gl20.color(color.r, color.g, color.b, color.a); // Color.WHITE
    gl20.texCoord(textcoord.x, 0f);
    gl20.vertex(point.x, point.y, 0);
}
gl20.end();
It is also important to note that the draw function for the strip is called within another class, in this fashion:
private void renderFX() {
    Gdx.gl.glEnable(GL20.GL_BLEND);
    Gdx.gl.glBlendFunc(GL20.GL_SRC_ALPHA, GL20.GL_ONE_MINUS_SRC_ALPHA);
    Array<Ball> balls = mainstage.getBalls();
    for (int i = 0; i < balls.size; i++) { // Draws the trails for each actor
        balls.get(i).drawFX();
    }
}
Is this problem a rookie mistake on my part, or was my implementation of drawing the tristrip from the vector array flawed from the start? How can I fix the blending issue in order to create smoother trails, even in the presence of sharp curves?
Thanks in advance...
Edit: Since originally asking this question, I've experimented with some possible solutions, also implementing Deniz Yılmaz's suggestion of using a FBO to facilitate blending. Given that, my render function currently looks like this:
private void renderFX() {
    frameBuffer.begin();
    Gdx.gl20.glDisable(GL20.GL_BLEND);
    Gdx.gl20.glClearColor(0f, 0f, 0f, 0);
    Gdx.gl20.glClear(GL20.GL_COLOR_BUFFER_BIT);

    Gdx.gl20.glEnable(GL20.GL_STENCIL_TEST);
    Gdx.gl20.glStencilOp(GL20.GL_KEEP, GL20.GL_INCR, GL20.GL_INCR);
    Gdx.gl20.glStencilMask(0xFF);
    Gdx.gl20.glClear(GL20.GL_STENCIL_BUFFER_BIT);

    Array<Ball> balls = mainstage.getBalls();
    for (int i = 0; i < balls.size; i++) {
        Gdx.gl20.glStencilFunc(GL20.GL_EQUAL, 0, 0xFF);
        balls.get(i).drawFX(1f, Color.RED);
    }
    frameBuffer.end();
}
As shown, I've also experimented with stencils so as to try and mask the overlapping portion of the trail. This approach, however, results in the following visuals:
Stenciled Version
Again, this is not ideal, and it has made me realize that approaching this problem by masking is not a good idea: the opacity gradient will never be smooth in the corners, as there will always be a sharp line between the two overlapping opacity values, even if the logic somehow prevents blending.
Given that, how else could I approach this problem? Should I scrap this method entirely if I plan to achieve a smooth gradient for this trail effect?
Thanks again.
glBlendFunc() alone is useless in this case, because by default the values computed by the blend function are added.
So something like glBlendEquation(GL_MAX) is needed.
BUT
blending alone won't work, since it can't tell the difference between the background and the overlapping shapes.
Instead, use a FrameBuffer to draw the trail with that glBlendEquation, then composite the result onto the scene.
https://github.com/mattdesl/lwjgl-basics/wiki/FrameBufferObjects
In the current ColladaLoader.js I don't see anything that reads or applies the Collada standard's "weighting" value (0.0-1.0), which indicates bump intensity ("bumpScale" in the Three.js Phong material). I noticed that when I export my Collada from Blender, three.js picks up the bump materials instantly (which is amazingly simple - YAY!), but my materials always get the default bumpScale of 1.0, which gives them an exaggerated bumpiness.
I managed to edit my ColladaLoader a bit and try out my ideal value (0.05), but I wonder if I'm missing something or doing this wrong. Has anybody else tried this? Note that I've not had good luck with JSON exports, so I'm sticking with Collada for now.
Thanks
You can set custom properties in the Collada callback. Use a pattern like this one:
loader.load( 'collada.dae', function ( collada ) {
    var dae = collada.scene;
    dae.traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
            child.material.bumpScale = value; // e.g. your 0.05
        }
    } );
    scene.add( dae );
} );
three.js r.71
Sorry, I'm a bit new to SDL and C++ development. Right now I've created a tile mapper that reads from my map.txt file. So far it works, but now I want to add map editing.
SDL_Texture *texture;
texture = IMG_LoadTexture(G_Renderer, "assets/tile_1.png");
SDL_RenderCopy(G_Renderer, texture, NULL, &destination);
SDL_RenderPresent(G_Renderer);
The above is the basic way I'm showing my tiles, but when I try to change a texture in real time, it's buggy and doesn't work well. What is the best method for swapping textures? Thanks for the help, I appreciate everything.
The most basic way is to set up a storage container with some textures which you will use repeatedly, for example a vector or a dictionary/map. Using the map approach, you could do something like:
// remember to #include <map>
map<string, SDL_Texture*> myTextures; // IMG_LoadTexture returns a pointer

// assign using array-like notation:
myTextures["texture1"] = IMG_LoadTexture(G_Renderer, "assets/tile_1.png");
myTextures["texture2"] = IMG_LoadTexture(G_Renderer, "assets/tile_2.png");
myTextures["texture3"] = IMG_LoadTexture(G_Renderer, "assets/tile_3.png");
myTextures["texture4"] = IMG_LoadTexture(G_Renderer, "assets/tile_4.png");
then to utilise a different texture, all you have to do is use something along the lines of:
SDL_RenderCopy(G_Renderer, myTextures["texture1"], NULL, &destination);
SDL_RenderPresent(G_Renderer);
which can be further controlled by changing the first line to
SDL_RenderCopy(G_Renderer, myTextures[textureName], NULL, &destination);
where textureName is a string variable which you can alter in code in realtime.
This approach means you can load all the textures you will need beforehand and simply use them as needed later, so there's no loading from the file system whilst rendering :)
There is a nice explanation of map here.
Hopefully this gives you a nudge in the right direction. Let me know if you need more info:)
I have a Win32 application in which I want to use OpenGL just for its matrix stack, not for any rendering. That is, I want to use OpenGL to specify the camera, viewport, etc., so that I don't have to do the maths again. While creating the scene, I just want to project the points using gluProject and use the results. The projected points are passed to another library which creates the scene for me; all the window handles are created by the library itself and I don't have access to them.
The problem is, Windows needs a device context for initialization. But since I am not using OpenGL for any rendering, is there a way to use OpenGL without any window handle at all?
Without any explicit initialization, when I read back the matrices using glGet, I get garbage. Any thoughts on how to fix this?
I want to use openGL just for its matrix stack not for any rendering.
That's not what OpenGL is meant for. OpenGL is a drawing/rendering API, not a math library. In fact, the whole matrix math machinery has been stripped from the latest OpenGL versions (OpenGL-3 core and later) for that very reason.
Besides, this matrix math is simple enough that you can write it down in less than 1k lines of C code. There's absolutely no benefit in abusing OpenGL for it.
The Matrix stack could potentially live on graphics hardware in your implementation. OpenGL is quite reasonable therefore in insisting you have an OpenGL context in order to be able to use such functions. This is because the act of creating a context probably includes setting up the necessary implementation mechanics required to store the matrix stack.
Even in a purely software-based OpenGL implementation, one would still expect the act of creating a context to call some equivalent of malloc to secure the storage space for the stack. If you happened to find an OpenGL implementation where creating a context wasn't necessary, I'd still steer clear of relying on that behavior, since it's most likely undefined and could break in the next release of that implementation.
If it's C++ I'd just use std::stack with the Matrix class from your favorite linear algebra package if you're not using OpenGL for anything other than that.
I present to you my complete (open source) matrix class. Enjoy.
https://github.com/TheBuzzSaw/paroxysm/blob/master/newsource/CGE/Matrix4x4.h
I can recommend trying to implement those calls yourself. I did that once for a Palm app I wrote, tinyGL. What I learnt was that the documentation basically tells you, in plain text, what is done.
For example, the verbatim code for tglFrustum and tglOrtho is (note that I was using fixed-point math to get some performance):
void tglFrustum(fix_t w, fix_t h, fix_t n, fix_t f) {
    matrix_t fm, m;
    fix_t f_sub_n;

    f_sub_n = sub_fix_t(f,n);

    fm[0][0] = mult_fix_t(_two_,div_fix_t(n,w));
    fm[0][1] = 0;
    fm[0][2] = 0;
    fm[0][3] = 0;
    fm[1][0] = 0;
    fm[1][1] = mult_fix_t(_two_,div_fix_t(n,h));
    fm[1][2] = 0;
    fm[1][3] = 0;
    fm[2][0] = 0;
    fm[2][1] = 0;
    fm[2][2] = inv_fix_t(div_fix_t(add_fix_t(f,n),f_sub_n));
    f = mult_fix_t(_two_,f);
    fm[2][3] = inv_fix_t(div_fix_t(mult_fix_t(f,n),f_sub_n));
    fm[3][0] = 0;
    fm[3][1] = 0;
    fm[3][2] = _minus_one_;
    fm[3][3] = 0;

    set_matrix_t(m,_matrix_stack[_toms]);
    mult_matrix_t(_matrix_stack[_toms],m,fm);
}
void tglOrtho(fix_t w, fix_t h, fix_t n, fix_t f) {
    matrix_t om, m;
    fix_t f_sub_n;

    f_sub_n = sub_fix_t(f,n);

    MemSet(om,sizeof(matrix_t),0);
    om[0][0] = div_fix_t(_two_,w);
    om[1][1] = div_fix_t(_two_,h);
    om[2][2] = div_fix_t(inv_fix_t(_two_),f_sub_n);
    om[2][3] = inv_fix_t(div_fix_t(add_fix_t(f,n),f_sub_n));
    om[3][3] = _one_;

    set_matrix_t(m,_matrix_stack[_toms]);
    mult_matrix_t(_matrix_stack[_toms],m,om);
}
Compare those with the man pages for glFrustum and glOrtho.
I'm trying to extract vertex and UV map information from an FBX file created with 3ds Max 2010.
All I could get from the file is good vertex and polygon index data; the UV maps come out wrong.
Can someone point me in a good direction or give me a tutorial?
Note that when you load normals for an object that is perfectly smooth, they index differently than when it is not smooth.
Here is a link to some code I have made to load an FBX file into system memory; I thought it might help. You want to look at "MedelProcess_Mesh.cpp", by the way, to answer some questions you might have. Hope this helps; keep in mind I have no animation support in there.
SIMPLE ANSWER:
For UVs:
int uvIndex1 = mesh->GetTextureUVIndex(polyIndex, 0);
int uvIndex2 = mesh->GetTextureUVIndex(polyIndex, 1);
int uvIndex3 = mesh->GetTextureUVIndex(polyIndex, 2);
KFbxVector2 uv1 = uv->GetAt(uvIndex1);
KFbxVector2 uv2 = uv->GetAt(uvIndex2);
KFbxVector2 uv3 = uv->GetAt(uvIndex3);
For verts:
int vertexCount = mesh->GetControlPointsCount();
KFbxVector4* vertexArray = mesh->GetControlPoints();