I'm currently writing a puzzle game in C++ with DirectX 9. Most of it hasn't been a problem, but some of the .x files I'm using (loaded through a mesh class that reads them in, etc.) seem to overwrite the colours of other objects.
For example, I have a green floor and a white pointer. On a level containing a Diglett-looking character that was modelled and textured in 3ds Max, then exported to .x with the Panda plugin, unrelated items start to change colour: the green floor becomes a lot darker and the white pointer turns brown.
Does anyone have any ideas? I'm not sure if it's texture overflow or something like that.
The most likely explanation, given the information here, is that the mesh changes some state (shaders, diffuse colour, render/texture-stage states, etc.) when it is drawn, and your other geometry is then affected by that state. Make sure that any state your geometry depends on is set to what you want before rendering it, so it isn't affected by previously changed state.
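As a rough illustration, something along these lines before drawing the floor and pointer would reset the states a textured .x mesh typically leaves behind (a sketch only; device is assumed to be your IDirect3DDevice9*, and floorMaterial is a hypothetical material you manage yourself):

// Reset state commonly changed by a textured mesh before drawing other geometry.
device->SetTexture(0, NULL);                                        // unbind the mesh's texture
device->SetRenderState(D3DRS_LIGHTING, TRUE);                       // restore lighting if the mesh disabled it
device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE); // default colour blending
device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
device->SetMaterial(&floorMaterial);                                // re-apply the material the floor expects

Alternatively, you could capture the device state before the mesh draws and restore it afterwards with a state block (IDirect3DStateBlock9), so each object starts from a known state.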
Related
Hi, I'm working on creating a space environment with a ship inside it. After creating the skybox (no errors) I put my ship in, but it has no colour; it comes out as a sort of white/black.
I modelled the ship with OpenSCAD and then exported it to .OBJ with MeshLab. I load it in the source code, but it has no texture/colour. This is my ship in MeshLab:
I just need to know whether I have to add something about colour in the code, or whether it's an error in the input. What is this?
If you need it I can post the code, but if this error is something normal, please explain it to me. Anyway, I'm a bit of a newbie in OpenGL, so be patient. Thank you.
EDIT:
This is my .obj file on Windows:
And the same project on Ubuntu:
What causes this difference?
Anyway, here is the OpenSCAD code:
module navicella(){
    $fn=100;
    rotate([0,180,270]){
        union(){
            rotate([270,180,0]){
                rotate([90,0,0])
                    cylinder(50,7,10,center=true);
                intersection(){
                    translate([0,-25,0])
                        sphere(10);
                    translate([0,-25,0])
                        cube(19,center=true);
                }
                difference(){
                    translate([0,35,0])
                        cube([10,15,15],center=true);
                    translate([0,40,0])
                        sphere(13);
                }
                translate([5,-10,0]) rotate([90,0,70])
                    cube([35,1,15],center=true);
                translate([-2,0,0]) rotate([90,0,95])
                    cube([50,1,10],center=true);
                translate([0,3,6]) rotate([0,-15,90])
                    cube([40,1,20],center=true);
                translate([0,3,-6]) rotate([0,15,90])
                    cube([40,1,20],center=true);
                translate([0,-35,0]) rotate([90,0,0])
                    cylinder(10,5,0,center=true);
                translate([0,20,0]) rotate([0,90,0])
                    cube([45,1,2],center=true);
                translate([0,25,0]) rotate([90,0,0])
                    cylinder(5,4,7,center=true);
            }
        }
    }
}
navicella();
It looks like you forgot to disable the texture targets that are not used by your model. That is a very common mistake (I still do it all the time today).
What is probably happening?
You rendered the skybox (or some other object) with textures enabled.
So, for example, GL_TEXTURE_CUBE_MAP and/or GL_TEXTURE_2D are enabled. When you then start rendering your mesh, which does not contain a texture, the textures are still enabled, so for every fragment/pixel GL will sample the last set texture coordinate from the last bound texture in each of the enabled texture targets and combine the colours according to the current GL settings.
So if your model does not contain texture coordinates, GL will use the last one that was set. That usually falls in a corner area where the black border is, or simply in some dark region of the texture. Also, unbinding a texture only means you bind the default texture 0, which is usually black.
To test/remedy this, just call glDisable(GL_....); for all previously used texture targets. If that helps, you know where the problem is.
Also, if your object does contain texture coordinates but the texture is not loaded properly (e.g. a wrong file name/path), then the result is usually black as well.
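A minimal sketch of that glDisable test, assuming the skybox used a cube map and a 2D texture (adjust to whichever targets you actually enable):

// Make sure no leftover texture state from the skybox affects the untextured ship.
glDisable(GL_TEXTURE_2D);
glDisable(GL_TEXTURE_CUBE_MAP);
glBindTexture(GL_TEXTURE_2D, 0);        // unbind explicitly, for good measure
glBindTexture(GL_TEXTURE_CUBE_MAP, 0);
// ... draw the ship here ...
glEnable(GL_TEXTURE_CUBE_MAP);          // re-enable whatever the skybox needs next frame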
Missing or negated normals while rendering with lighting enabled
It does not look like that here, but it could also be the reason. If your model has wrong normals, or none at all, the lighting computation gives wrong results. If the object is always dark, even when you rotate it, the normals are negated (facing the other way) and you should change the front face used for lighting/material, or negate the normals.
If the colour intensity changes as the object rotates, your object probably has no normals and, again, the last set normal (from the previous rendering) is used. In that case either compute normals for the object (via cross products) or disable lighting for that object.
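If it turns out the model needs normals, a per-face normal can be computed from a triangle's corners with a cross product, roughly like this (plain C++ sketch, no particular math library assumed):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

// Unit face normal of triangle (p0, p1, p2); the winding order decides which way it points.
static Vec3 faceNormal(const Vec3 &p0, const Vec3 &p1, const Vec3 &p2) {
    Vec3  n   = cross(sub(p1, p0), sub(p2, p0));
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}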
I've been playing around with DirectX for about a week now. I recently bumped into the default objects and played around with them. I can animate them and do a lot with them, but I have no idea how to give them vertex colours (again, not a material). "Default object" is not really the right term to use, so here is the list of functions that generate these "default objects":
D3DXCreateBox
D3DXCreateSphere
D3DXCreateCylinder
D3DXCreateTeapot
D3DXCreatePolygon
D3DXCreateTorus
So can someone lay out how to get at the vertex buffers and then fill them with colour data? I can do it with objects whose vertices I lay out manually, but not with these default meshes.
Try using HLSL:
High-Level Shader Language. This is a script used by DirectX to
program specific portions of the rendering pipeline, giving a graphics
programmer a wide range of flexibility in special effects.
I would like to know if there is a way to generate a single static image of a 3D object (a single object represented as a triangle list), using OpenGL or DirectX, that lets you know which specific triangles of the object were used to generate each of the pixels in the rendered image. I mention OpenGL and DirectX because they are widely used graphics APIs, but if somebody knows other ways of achieving this that work at high speed, I would be interested in those answers too. I currently use my own software implementation of the rendering pipeline to keep track of this relationship, but I would like to use the power and effects (mainly antialiasing, shadows and specific skin-rendering techniques) that graphics cards offer.
Thanks very much for your help.
Sure, just output a triangle identifier to a separate render target (using MRT). In GLSL terms this is gl_PrimitiveID, and in HLSL terms it's SV_PrimitiveID. If you are using multi-sampling, the multi-sample buffer for that render target becomes a list of the primitives that contribute to each pixel.
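On the OpenGL side the fragment-shader part is tiny. The sketch below keeps the GLSL source in a C++ string and assumes you have attached an integer texture (e.g. GL_R32I) as colour attachment 1 of your FBO, and that your vertex shader provides shadedColor:

// GLSL 330 fragment shader: the normal image goes to attachment 0,
// the triangle index (gl_PrimitiveID) goes to attachment 1.
const char *fragSrc = R"GLSL(
#version 330 core
layout(location = 0) out vec4 color;   // regular rendered image
layout(location = 1) out int  primId;  // which triangle covered this fragment
in vec3 shadedColor;                   // assumed output of the vertex shader
void main()
{
    color  = vec4(shadedColor, 1.0);
    primId = gl_PrimitiveID;
}
)GLSL";

// Afterwards, read attachment 1 back with glReadBuffer(GL_COLOR_ATTACHMENT1) and
// glReadPixels(..., GL_RED_INTEGER, GL_INT, ...) to get one triangle index per pixel.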
Draw each triangle in a different colour. R8G8B8 offers about 16.7 million possible colours, so you can index that many triangles with it. You don't have to draw to an on-screen buffer: render the picture as usual to one target, and index the triangles in a second, off-screen target.
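If you take the colour-ID route, the mapping between a triangle index and an R8G8B8 colour is just bit packing; an illustrative C++ helper (hypothetical names):

#include <cstdint>

// Pack a triangle index (0 .. 2^24 - 1) into an RGB colour, and back again.
static void indexToRGB(uint32_t index, uint8_t &r, uint8_t &g, uint8_t &b) {
    r =  index        & 0xFF;
    g = (index >> 8)  & 0xFF;
    b = (index >> 16) & 0xFF;
}

static uint32_t rgbToIndex(uint8_t r, uint8_t g, uint8_t b) {
    return uint32_t(r) | (uint32_t(g) << 8) | (uint32_t(b) << 16);
}

For this to work, disable lighting, texturing, blending and multisampling on the ID pass, so the colour you read back is exactly the colour you wrote.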
I've been working on a new game and finally reached the point where I started to code the motion of my main character, but I have a doubt about how to do it.
Previously I made two games in Allegro, where spritesheets are fairly easy to implement, because I set the frame and its position on the image and save every frame as a different bitmap; but I know that with OpenGL that isn't necessary and costs a bit more.
So I've been thinking about how to store my spritesheet and use it in my program, and I have only one idea:
load the image and turn it into a texture, and in the function that handles animation simply grab a portion of that texture to draw, instead of storing every single frame as its own texture.
Is this the best way to do it?
Thanks in advance for the help.
You're on the right track.
Things to consider:
leave enough dead space around each sprite so that the video card does not blend in texels from adjacent sprites at small scales.
set texture min/mag filtering appropriately. GL_NEAREST is OK if you're going for the blocky look (a sketch of the relevant calls follows this list).
if you want to be fancy and save some texture memory, there's no reason that the sprites have to be laid out in a regular grid. Smaller sprites can be packed closer in the texture.
if your sprites are being rendered from 3D models, you could output normal & displacement maps from the model into another texture, then combine them in a fragment shader for some awesome lighting and self-shadowing.
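For the filtering point above, a minimal sketch (legacy-style GL calls; atlasTexture is a hypothetical texture id):

// Filtering and wrapping settings for a sprite atlas.
glBindTexture(GL_TEXTURE_2D, atlasTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);   // blocky look; GL_LINEAR for smooth scaling
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // avoid wrapping into neighbouring sprites
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);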
You've got the right idea: if you have a bunch of sprites, it is much better to stick them all in one big texture. Just draw your sprites as textured quads whose texture coordinates index into the frame of the sprite. You can do a few optimizations, but most of them revolve around getting the most out of your texture memory and packing the sprites closely together without blending issues.
I know that with OpenGL that isn't necessary and costs a bit more.
Why not? There are no real downsides to putting a lot of sprites into a single texture. All you need to do is change the texture coordinates to pick the region in question out of the texture.
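As an illustration, computing the texture-coordinate rectangle of one frame in a regular grid atlas is just a couple of divisions (hypothetical C++ helper):

// Texture-coordinate rectangle of frame 'index' in an atlas laid out as columns x rows equal cells.
struct UVRect { float u0, v0, u1, v1; };

static UVRect frameUV(int index, int columns, int rows) {
    float w  = 1.0f / columns;
    float h  = 1.0f / rows;
    int   cx = index % columns;
    int   cy = index / columns;
    return { cx * w, cy * h, (cx + 1) * w, (cy + 1) * h };
}

// Use (u0,v0)-(u1,v1) as the texture coordinates of the quad for that frame.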
I'm creating a tile-based game in C# with OpenGL and I'm trying to optimize my code as best as possible.
I've read several articles and sections in books, and they all come to the same conclusion (as you may know): that using VBOs greatly increases performance.
I'm not quite sure, however, how they work exactly.
My game will have tiles on the screen, some will change and some will stay the same. To use a VBO for this, I would need to add the coordinates of each tile to an array, correct?
Also, to texture these tiles, would I have to create a separate VBO?
I'm not quite sure what the code would look like for these coordinates if I've got some tiles that are animated and some that stay static on the screen.
Could anyone give me a quick rundown of this?
I plan on using a texture atlas of all of my tiles. I'm not sure where to begin to use this atlas for the textured tiles.
Would I need to compute the coordinates of the tile in the atlas to be applied? Is there any way I could simply use the coordinates of the atlas to apply a texture?
If anyone could clear up these questions it would be greatly appreciated. I could even possibly reimburse someone for their time & help if wanted.
Thanks,
Greg
OK, so let's split this into parts. You didn't specify which version of OpenGL you want to use - I'll assume GL 3.3.
VBO
Vertex buffer objects, when considered as an alternative to client-side vertex arrays, mostly save GPU bandwidth, and a tile map is not really a lot of geometry. However, in recent GL versions vertex buffer objects are the only way of specifying vertices (which makes a lot of sense), so we cannot really talk about "increasing performance" here. If you mean "compared to deprecated vertex specification methods like immediate mode or client-side arrays", then yes, you'll get a performance boost, but you'd probably only feel it with 10k+ vertices per frame, I suppose.
Texture atlases
Texture atlases are indeed a nice way to save on texture switching. However, on GL3 (and DX10)-capable GPUs you can spare yourself a LOT of the trouble characteristic of this technique, because a more modern and convenient approach is available: check the GL reference docs for TEXTURE_2D_ARRAY - you'll like it. If GL3 cards are your target, forget texture atlases. If not, google which older cards support texture arrays as an extension; I'm not familiar with the details.
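A minimal sketch of allocating such an array, shown with plain C-style GL calls (your C# binding will have one-to-one equivalents); tileWidth, tileHeight, tileCount and tilePixels are assumptions:

// Allocate a texture array with one layer per tile image and upload each layer.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D_ARRAY, tex);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
             tileWidth, tileHeight, tileCount,          // width, height, number of layers
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
for (int layer = 0; layer < tileCount; ++layer)
    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                    tileWidth, tileHeight, 1,
                    GL_RGBA, GL_UNSIGNED_BYTE, tilePixels[layer]);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_NEAREST);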
Rendering
So how do you draw a tile map efficiently? Let's focus on the data. There are lots of tiles, and each tile carries the following information:
grid position (x,y)
material (let's call it "material" rather than "texture" because, as you said, the image might be animated and change over time; the "material" would then be interpreted as "one texture, or a set of textures changing in time", or anything you want).
That should be all the "per-tile" data you'd need to send to the GPU. You want to render each tile as a quad or triangle strip, so you have two alternatives:
send 4 vertices (x,y),(x+w,y),(x+w,y+h),(x,y+h) instead of (x,y) per tile,
use a geometry shader to calculate the 4 points along with texture coords for every 1 point sent.
Pick your favourite. Also note that this choice directly corresponds to what your VBO is going to contain - the latter solution would make it 4x smaller.
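For the first alternative, the per-vertex data and its VBO/attribute setup could look roughly like this (a sketch; names such as TileVertex and vertices are illustrative, and vertices is assumed to be a std::vector<TileVertex>):

#include <cstdint>
#include <cstddef>

// One corner of a tile quad: position plus a material index.
struct TileVertex {
    float   x, y;       // corner position
    int32_t material;   // which material this tile uses
};

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(TileVertex),
             vertices.data(), GL_STATIC_DRAW);                 // GL_DYNAMIC_DRAW if tiles change often

glVertexAttribPointer (0, 2, GL_FLOAT, GL_FALSE, sizeof(TileVertex), (void*)0);
glVertexAttribIPointer(1, 1, GL_INT,             sizeof(TileVertex),
                       (void*)offsetof(TileVertex, material)); // integer attribute, no float conversion
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);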
For the material, you can pass it as a symbolic integer, and in your fragment shader - based on the current time (passed as a uniform variable) and the material ID of the given tile - you can decide which texture from the texture array to use. In this way you can implement a simple texture animation.
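A sketch of what that fragment shader could look like (GLSL kept in a C++ string; the uniform names, the 4-frames-per-second rate and the layer layout are all assumptions):

// Each material owns 'framesPerMaterial' consecutive layers of the texture array.
const char *tileFragSrc = R"GLSL(
#version 330 core
uniform sampler2DArray tiles;            // the texture array with all tile images
uniform float time;                      // current time, for animated materials
uniform int   framesPerMaterial;         // consecutive layers per material

flat in int material;                    // material ID from the vertex shader
in vec2     uv;                          // tile-local texture coordinates
out vec4    fragColor;

void main()
{
    int frame = int(mod(time * 4.0, float(framesPerMaterial)));  // 4 frames per second
    int layer = material * framesPerMaterial + frame;
    fragColor = texture(tiles, vec3(uv, float(layer)));
}
)GLSL";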