I am using Cheetah3D, if it matters.
The UV coordinates I am reading from my object made in Cheetah3D are NOT between 0 and 1, unlike the example model provided with the 3DS model-loading code.
Some go above 1, as high as 1.56 or so, while others go below 0, as far as -4.56. This is causing extreme abnormalities when trying to map the texture to the object.
Any ideas? Should I contact the Cheetah3D folks, or is there a way to fix this dynamically in code myself? (That's my reason for posting on SO.)
Without seeing screenshots, the only thing that comes to mind is that the texture coordinates should be wrapped but you're clamping them. See the documentation for glTexParameter* on how to change that setting. A screenshot of a SIMPLE model would really help here.
I apologise for the limited code in this question, but it's tied into a personal project with much OpenGL functionality abstracted behind classes. I'm hoping someone visually recognises the problem and can offer direction.
During the first execution of my animation loop, I'm creating a GL_R32F (format: GL_RED, type: GL_FLOAT) texture and rendering an orthographic projection of a Utah teapot to it (for the purposes of debugging this, I'm writing the same float to every fragment).
The texture, however, renders incorrectly: it should be a solid silhouette.
Re-running the program causes the patches to move around.
I've spent a good few hours tweaking things trying to work out the cause. I've compared the code to my working shadow-mapping example, which similarly writes to a GL_R32F texture, yet I can't find a cause.
I've narrowed it down: this occurs only during the first render pass to the texture. That wouldn't be so much of an issue, except I don't require more than a single render (and looping the bindFB, setViewport, render, unbindFB sequence doesn't fix it).
If anyone has any suggestions for specific code extracts to provide, I'll try to edit the question.
This was caused by a rogue call to glEnable(GL_BLEND) during an earlier stage of the algorithm.
This makes sense: I was writing to a single channel, so the alpha channel contained random garbage, leading to the garbled texture.
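A toy model of why the blend state matters here, assuming a typical glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) setup (the values below are hypothetical, purely to illustrate):

```c
/* With GL_BLEND enabled, the value that lands in the R channel is
 *   result = src * srcAlpha + dst * (1 - srcAlpha)
 * When the fragment shader only writes .r, srcAlpha is undefined garbage,
 * so 'result' varies unpredictably per fragment -- hence the patchy texture.
 * The fix is simply glDisable(GL_BLEND) before this render pass. */
float blended_red(float src, float dst, float src_alpha)
{
    return src * src_alpha + dst * (1.0f - src_alpha);
}
```

The same source value comes out differently depending on whatever happens to be in the alpha channel, which is why the artifacts moved around between runs.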
I have inherited a DirectX project which I am trying to improve. The problem I am having is that I have 2 meshes and I want to move one independently of the other. At the moment I can manipulate the world matrix simply enough, but I am unable to rotate an individual mesh.
V( g_MeshLeftWing.Create( pd3dDevice, L"Media\\Wing\\Wing.sdkmesh", true));
loads the mesh, and later it is rendered:
renderMesh(pd3dDevice, &g_MeshLeftWing );
Is there a way I can rotate the mesh? I tried transforming it using a matrix, with no success:
g_MeshLeftWing.TransformMesh(&matLeftWingWorld,0);
Any help would be great.
Firstly, you appear to be loading a ".sdkmesh" file. It was documented heavily in the DirectX SDK that ".sdkmesh" was made for the SDK and should not be used as an actual mesh loading/drawing solution.
Therefore I would advise you to start looking at alternative means of loading and drawing your model; not only will that give you a greater understanding of DirectX, but it should ultimately answer your question in the long run!
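As a sketch of the kind of alternative I mean: the Wavefront .obj format is plain text and easy to parse by hand. Below is a minimal, hypothetical vertex-line parser (faces, normals, and UVs are omitted; real loader code would handle those too):

```c
#include <stdio.h>
#include <string.h>

/* Scan an .obj file held in memory and collect "v x y z" vertex lines
 * into 'out' (3 floats per vertex). Returns the number of vertices read.
 * Lines such as "vt ..." and "vn ..." fail the sscanf match and are skipped. */
int parse_obj_vertices(const char *text, float *out, int max_verts)
{
    int count = 0;
    const char *line = text;
    while (line && count < max_verts) {
        float x, y, z;
        if (sscanf(line, "v %f %f %f", &x, &y, &z) == 3) {
            out[count * 3 + 0] = x;
            out[count * 3 + 1] = y;
            out[count * 3 + 2] = z;
            count++;
        }
        line = strchr(line, '\n');
        if (line) line++;
    }
    return count;
}
```

Writing even a small loader like this shows you exactly which vertices end up in which buffers, which is the understanding .sdkmesh hides from you.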
I've been trying (for hours now) to get a b3d model loaded, shown, and animated properly.
The model has an animation between frames 0 and 45; it was made and painted (the whole kit) in Blender as a testing model. Only half of the model is shown, it's completely white, and it doesn't move.
I've been googling for information on loading b3d models into Irrlicht and on its animation system, but trying to load the texture from the b3d file failed; all the other information I have incorporated into the program.
Here is the link to the picture of the actual result.
Here is the link to the code (shortened with comments on the insignificant parts).
Did you correctly define 'node'? If not, try this:
IAnimatedMeshSceneNode* node = smgr->addAnimatedMeshSceneNode (smgr->getMesh ("mesh.b3d"));
If you added the node as an IMeshSceneNode, the animations will not show. As for the texturing, I believe with Irrlicht you must set all textures manually. Try this in the 'if (node)' block:
node->setMaterialTexture (0, driver->getTexture ("texture.bmp"));
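Putting it together, here is an untested sketch of the usual setup; the filenames and the 0-45 frame range come from your question, so adjust them to your project:

```cpp
scene::IAnimatedMeshSceneNode* node =
    smgr->addAnimatedMeshSceneNode(smgr->getMesh("mesh.b3d"));
if (node)
{
    node->setMaterialTexture(0, driver->getTexture("texture.bmp"));
    node->setMaterialFlag(video::EMF_LIGHTING, false); // unlit models render white otherwise
    node->setFrameLoop(0, 45);                         // your animation range
    node->setAnimationSpeed(15.f);                     // frames per second
}
```

The EMF_LIGHTING flag may also explain the "completely white" symptom if your scene has no light sources.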
Some painting tools' effects don't appear on the model when it is exported via Blender.
You may have used some; try looking up possible issues with the tools you used on the net.
Also, are you sure that you lit the model?
That may cause problems too.
I'm in a bit of a hurry at the moment and couldn't check the code; I may look more deeply later.
Hope this helps.
When I'm using the following code:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, 6);
and then I enable multisampling, I notice that my program no longer cares about the max mip level.
Edit: It renders the last mip levels as well; that is the problem, I don't want them being rendered.
Edit3:
I tested and confirmed that it doesn't forget the mip limits at all, so it does follow my GL_TEXTURE_MAX_LEVEL setting. ...So the problem isn't mipmap-related, I guess...
Edit2: Screenshots: this is the world map zoomed out a lot, at a low angle to show the effect in the worst possible way. There is also a water plane rendered under the map, so there's no possibility of picking up black pixels from anywhere other than the map textures:
alt text http://img511.imageshack.us/img511/6635/multisamplingtexturelim.png
Edit4: All those pics should look like the top-right corner pic (just with smoother edges, depending on multisampling). But apparently there's something horribly wrong in my code. I have to use mipmaps; the mipmaps aren't the problem, they work perfectly.
What am I doing wrong, and how can I fix this?
Ok. So the problem was not TEXTURE_MAX_LEVEL after all. Funny how a simple test helped figure that out.
I had 2 theories that were about the LOD being picked differently, and both of those seem to be disproved by the solid color test.
Onto a third theory then. If I understand correctly your scene, you have a model that's using a texture atlas, and what we're observing is that some polygons that should fetch from a specific item of the atlas actually fetch from a different one. Is that right ?
This can be explained by the fact that a multisampled fragment usually gets sampled at the middle of the pixel, even when that center is not inside the triangle that generated the sample. See the bottom of this page for an illustration.
The usual way to get around that is called centroid sampling (this page has nice illustrations of the issue too). It forces the sampling to bring back the sampling point inside the triangle.
Now the bad news: I'm not aware of any way to turn on centroid sampling outside of the programmable pipeline, and you're not using it. Do you think you want to switch in order to get access to that feature?
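For reference, in the programmable pipeline this is just a qualifier on the interpolated varying; a sketch, assuming a 'texCoord' varying carrying the atlas coordinates:

```glsl
// Declared identically in both the vertex and fragment shader
// (GLSL 1.20-era syntax; modern GLSL uses 'centroid out' / 'centroid in'):
centroid varying vec2 texCoord;

// The fragment shader then samples as usual, e.g.:
//   gl_FragColor = texture2D(atlas, texCoord);
// With 'centroid', the interpolated value is guaranteed to come from a
// point inside the covered part of the primitive, so an edge sample
// no longer fetches from a neighbouring atlas tile.
```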
Edit to add:
Also, not using texture atlases would be a way to work around this. The reason it is so visible is because you start fetching from another part of the atlas with the "out-of-triangle" sampling pattern.
Also check what you have set for the MIN_FILTER:
glTexParameterf( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, ... );
Try the different settings ( a list is here ).
However, if you're dissatisfied with the results of gluBuild2DMipmaps I advise you to look at alternatives:
glGenerateMipmap/glGenerateMipmapEXT (yes, it works without FBO's)
SGIS_generate_mipmap extension (widely supported)
The latter in particular is convenient, and something not mentioned above: the extension is enabled simply by setting GL_GENERATE_MIPMAP to true. It is automatic, so you don't need to recalculate the mipmaps when the data changes.
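A sketch of that automatic path (assumes a bound GL_TEXTURE_2D with hypothetical size and data variables; the parameter must be set before the upload):

```c
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE); /* auto-mips from now on */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);             /* mip chain built for you */
```

Any later glTexSubImage2D update to level 0 regenerates the chain automatically.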
You should enable multi-sampling through your application, not the nvidia control panel, if you want your rendering to always have it. That might even fix your issue.
As for the GL_TEXTURE_MAX_LEVEL setting being ignored when using the control panel multisampling, it sounds like a driver bug/feature. It's weird because this feature can be used to limit what you actually load in the texture (the so-called texture completeness criteria). What if you don't load the lowest mipmap levels at all ? What gets rendered ?
Edit: From the picture you're showing, it does not really look like it ignores the setting. For one thing, MAX_LEVEL=0 is different from MAX_LEVEL=6. Now, considering the noise in your textures, I don't even get why your MAX_LEVEL=6/MS off looks that clean. It should be noisy based on the MAX_LEVEL=16/MS off picture. At this point, I'd advise to put distinct solid colors in each mip level of your diffuse texture (and not rely on GL to build your mips), to see exactly which mip levels you're getting.
So I've got this class where I have to make a simple game in OpenGL.
I want to make Space Invaders (basically).
So how in the world should I make anything appear on my screen that looks decent at all? :(
I found some code, finally, that let me import a 3DS object. I thought it was sweet, and went and put it in a class to make it a little more modular and usable (http://www.spacesimulator.net/tut4_3dsloader.html).
However, either the program I use (Cheetah3D) is exporting the UV map incorrectly, and/or the code fails when reading a .bmp that ISN'T the one that came with the demo. The image comes out all weird; it's very hard to explain.
So I arrive at my question: what solution should I use to draw objects? Should I honestly expect to spend hours guessing at vertices to make a Space Invaders ship, and then also try to map a decent texture onto the object? The code I am using draws the untextured object just fine, but I can't begin to map the texture to it because I don't know which vertices correspond to which polygons, etc.
Thanks SO for any suggestions on what I should do. :D
You could draw textured quads, provided you have a texture loader.
I really wouldn't worry too much about your UV map: if you can get your vertices right, then you can generally kludge something anyway. That's what I'd do.
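For a 2D game like Space Invaders, a fixed-function textured quad really is enough. A sketch, assuming 'tex' was loaded elsewhere, an orthographic 2D projection is active, and x/y/w/h are the sprite's position and size (all hypothetical names):

```c
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);
glEnd();
```

Each invader is then just one quad with its own texture (or atlas region), no 3D modelling required.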