Textured cube renders blank in DirectX (C++)

I am trying to apply a texture to a cube in DirectX 9. So far I have managed to draw it with vertex colors and a light, as well as with a material and a light, but with the texture I get a blank cube: in my application it renders white, in PIX it renders black. I know the texture is loaded correctly, as I can see it in PIX and I check every HRESULT the loading function returns. Edit: what I saw in PIX was the same image on a surface, but a breakpoint shows the texture has an address and thus exists; whether its contents are right I cannot tell. Should I be able to see it in PIX? I also know that I've got the UVs right, as they appear correct in PIX when I select the vertices, and the vertices are indeed in clockwise order (at least one face is, but that should be enough to draw something).
I have removed the lighting code, so the render code is now:
pd3dDevice->BeginScene();
setMatrices();
pd3dDevice->SetTexture(0, tex);
//pd3dDevice->SetRenderState(D3DRS_WRAP0, D3DWRAPCOORD_0);
pd3dDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
pd3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
pd3dDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
//pd3dDevice->SetRenderState(D3DRS_LIGHTING, FALSE);
// select which vertex format we are using
pd3dDevice->SetFVF(CUSTOMFVF);
// select the vertex buffer to display
pd3dDevice->SetStreamSource(0, v_buffer, 0, sizeof(CUSTOMVERTEX));
pd3dDevice->SetIndices(i_buffer);
// draw the indexed cube (12 triangles from 8 vertices)
pd3dDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                                 0,   // BaseVertexIndex
                                 0,   // MinVertexIndex
                                 8,   // NumVertices
                                 0,   // StartIndex
                                 12); // PrimitiveCount
pd3dDevice->EndScene();
setMatrices() does the transformations; it worked well before I added lights, so it should be fine. I have commented out the call to the light-handling function, so it shouldn't interfere with the code above.
I notice that if I remove pd3dDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_SELECTARG1); I get a black unshaded but transformed cube, which to me seems reasonable.
As for the commented-out lines, I have tried with and without them and they do not seem to affect the situation as far as I can see. I have also tried a couple more render-state settings, but with no luck; admittedly they were mostly educated guesses rather than something that was supposed to work.
The FVF has XYZ, normal vector and UV.
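For reference, a vertex layout like that typically looks like this in D3D9 (a sketch, not my exact declaration; the names are illustrative). The FVF flags and the struct members must appear in the same order, or the UVs are read from the wrong offsets:
struct CUSTOMVERTEX
{
    FLOAT x, y, z;    // position (D3DFVF_XYZ)
    FLOAT nx, ny, nz; // normal   (D3DFVF_NORMAL)
    FLOAT u, v;       // texture  (D3DFVF_TEX1)
};
#define CUSTOMFVF (D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_TEX1)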
Again, PIX shows the cube as black while my application shows it white. Perhaps there's a hint in that?
Thank you for any attempt to help out, much obliged.

Drawing the grid over the texture

Before diving into details: I have added the opengl tag because JoGL is a Java binding for OpenGL, so the question should be accessible to experts in either.
Basically, what I am trying to do is render a grid over a texture in JoGL using GLSL. My idea was to render the texture first and then draw the grid on top. So what I am doing is:
gl2.glBindTexture(GL2.GL_TEXTURE_2D, textureId);
// skipped the code where I setup the model-view matrix and where I do fill the buffers
gl2.glVertexAttribPointer(positionAttrId, 3, GL2.GL_FLOAT, false, 0, vertexBuffer.rewind());
gl2.glVertexAttribPointer(textureAttrId, 2, GL2.GL_FLOAT, false, 0, textureBuffer.rewind());
gl2.glDrawElements(GL2.GL_TRIANGLES, indices.length, GL2.GL_UNSIGNED_INT, indexBuffer.rewind());
And after I draw the grid, using:
gl2.glBindTexture(GL2.GL_TEXTURE_2D, 0);
gl2.glDrawElements(GL2.GL_LINE_STRIP, indices.length, GL2.GL_UNSIGNED_INT, indexBuffer.rewind());
Without the depth test enabled, the result looks pretty awesome.
But when I start updating the coordinates of the vertices (namely the axis that corresponds to height), the rendering comes out wrong: some things that should be in front appear behind, which makes sense with the depth test disabled. So I have enabled the depth test:
gl.glEnable(GL2.GL_DEPTH_TEST);
gl.glDepthMask(true);
And the result of the rendering is the following:
You can clearly see that the lines of the grid are blurred and some of them are displayed thinner than others. What I tried, to fix the problem, is some line smoothing:
gl2.glHint(GL2.GL_LINE_SMOOTH_HINT, GL2.GL_NICEST);
gl2.glEnable(GL2.GL_LINE_SMOOTH);
The result is better, but I am still not satisfied.
QUESTION: So basically, how can I improve the solution further, so that the lines stay solid and are displayed nicely when I start updating the vertex coordinates?
If required, I can provide the shader code (it is really simple: the vertex shader only calculates the position from the projection and model-view matrices and the vertex coordinates, and the fragment shader reads the color from the texture sampler).

OpenGL3 two sets of shaders, texture showing black

I've recently succeeded at making a small test app with a GL_TEXTURE_RECTANGLE. Now I'm trying to integrate it into my larger project, but when I call glBindTexture(GL_TEXTURE_RECTANGLE, _tex_id[0]) inside the render function, it causes a GL_INVALID_OPERATION error. The texture image sometimes shows for a fraction of a second, then turns black and stays black.
I am trying to do this by using two sets of vertex and fragment shaders, one set for the 3D scene, and one set for the 2D overlay, but I've never tried this before so I don't know if that's what's causing the error, or if I should be going about this a different way. The shaders are all compiling and linking fine.
Any insight would be much appreciated, and if it would help to see some code, let me know and I'll post some of it (although I think it may be too much for anyone to reasonably look through).
Edit: gDEBugger breaks at the call to glBindTexture(), and when I click on the breakpoint, the properties window shows a picture of one of my other textures (one that's being loaded by the 3D scene's shaders). It shows that it's trying to bind texture number 1, but I know that number is already being used to draw that same 3D scene texture shown in the properties window... why would glGenTextures() give me overlapping texture ID numbers? Is this normal, or maybe part of the problem?
The black texture was due to me not forwarding some vertex shader inputs (normals) through to the fragment shader, even though I'm not using normals for anything in the 2D overlay shaders. As soon as I added outputs for all the inputs and forwarded them along to the fragment shader, the texture was no longer black, but it was still disappearing after a fraction of a second.
That was because I was calling glBindTexture(GL_TEXTURE_RECTANGLE, 0) at the end of the render function in the hope that it would clean up some state. This was clearly the wrong thing to do, because removing that call made the 2D texture stay on-screen.
Furthermore, calling glBindTexture() with the GL_TEXTURE_RECTANGLE target seems to work during the texture setup stage, but during rendering the GL_TEXTURE_RECTANGLE target was causing the GL_INVALID_OPERATION error. Changing the target to GL_TEXTURE_2D only in the render function made the error go away, and everything seems to work nicely now.

OpenGL FBO - White pixels appear transparent

I'm making a 2D game using OpenGL. I recently tried implementing Framebuffer-objects, and I am having some problems regarding blending.
I'm creating an FBO (using GL_RGBA as format).
When I render to the FBO, I first clear it to fully transparent black and disable GL_BLEND. I then draw my textures, and afterwards I enable GL_BLEND again.
When I draw the FBO texture itself, I use GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA as the source and destination blend factors, rendering it as a textured quad.
This does not work properly: white pixels appear transparent. I have experimented with different blend-function values, but everything I have tried has had issues. I do not fully understand how blending works, so it's hard for me to wrap my head around this. Maybe I'm missing something obvious here?
Here's an image of how it looks right now. There is supposed to be a glow around the button when it is being highlighted, but instead the pixels around it appear transparent: http://i.snag.gy/RnV4s.jpg
You can also see two boxes of text in the image. The top one is drawn normally, without an FBO. The textures are also rendered normally without an FBO, so I know that the problem lies within my framebuffer-code somewhere.
I have pasted my "RenderTarget" class to pastebin (I'm used to calling it a rendertarget instead of FBO): http://pastebin.com/dBXgjrUX
This is how I use it:
RT->Begin();
// draw stuff
RT->End();
RT->Draw();
Can someone help me? Let me know if you need any more info about my issue.
Edit:
Here are the properties of OpenGL that I set on startup:
// Initialize shaders
shaderManager.InitializeStockShaders();
// Set some OpenGL properties
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glShadeModel(GL_SMOOTH);
glAlphaFunc(GL_GREATER, 0.0f);
// Enables/disables
glEnable(GL_ALPHA_TEST);
glEnable(GL_BLEND);
glDisable(GL_DEPTH_TEST);
glDisable(GL_DITHER);
glDisable(GL_LIGHTING);
It's a bit difficult to tell what your problem is exactly, because you didn't provide full source code. Still, I see several potential troublemakers:
First, you said that you want to draw a glow around the button. I presume that all the buttons are drawn into the FBO, merging them into a UI overlay. A glow sounds like something you want to blend, so you probably also want blending enabled while drawing into the FBO.
Next, be aware of depth-buffer issues: blending and depth buffering have peculiar interactions. In your case I suggest disabling depth testing and depth writes while rendering to the FBO (or not attaching a depth buffer to the FBO at all). Draw the glowing button last, so that it won't block the other buttons from being drawn. Also, you must make sure that your glow comes out with a nonzero alpha value, otherwise it will blend to transparent. This is something you control in your shaders or texture environment (depending on what you use).
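In rough code, what I'm suggesting looks like this (a sketch only; 'fbo' stands for whatever handle your RenderTarget class wraps, and glBlendFuncSeparate is an extra suggestion of mine to keep the FBO's destination alpha sensible, not something from your code):
glBindFramebuffer(GL_FRAMEBUFFER, fbo); // i.e. inside your RT->Begin()
glDisable(GL_DEPTH_TEST);               // sidestep blending/depth interactions
glEnable(GL_BLEND);
// blend color as usual, but let destination alpha accumulate toward opaque
// instead of being overwritten by the (possibly low) source alpha
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
                    GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// ... draw the buttons, glow last ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);   // back to the default framebuffer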
Update 1:
Your FBO class doesn't properly ensure that textures attached to a bound framebuffer are not bound as textures themselves. It's easy to fix, though, by moving the attachment code into the bind call, where the textures are also unbound appropriately. See my edited pastebin: http://pastebin.com/1uVT7VkR (I probably missed a few things).

Rendering 3D Models With Textures That Have Alpha In OpenGL

So I'm trying to figure out the best way to render a 3D model in OpenGL when some of the textures applied to it have alpha channels.
When I have the depth buffer enabled and start drawing the triangles of the model, a triangle behind an already-drawn front triangle is simply not rendered when it is reached. The problem is when the front triangle has alpha transparency and the triangle behind it should show through, but the back triangle is still not rendered.
Disabling the depth buffer eliminates that problem, but creates the obvious issue that if the front triangle IS opaque, triangles behind it that happen to be rendered later are still drawn on top of it.
For example, I am trying to render a pine tree that is basically some cones stacked on top of each other that have a transparent base. The following picture shows the problem that arises when the depth buffer is enabled:
You can see how you can still see the outline of the transparent triangles.
The next picture shows what it looks like when the depth buffer is disabled.
Here you can see how some of the triangles on the back of the tree are being rendered in front of the rest of the tree.
Any ideas how to address this issue, and render the pine tree properly?
P.S. I am using shaders to render everything.
If you're not using any partial transparency (every alpha value is either 0 or 255), you can glEnable(GL_ALPHA_TEST) and that should help you. The problem is that if you render the top cone first, it deposits the whole quad into the z-buffer (even the transparent parts), so the lower branches underneath get z-rejected when it's their turn to be drawn. With alpha testing enabled, pixels that fail the alpha test (set with glAlphaFunc) are not written to the z-buffer.
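In code, the fixed-function setup looks like this (the 0.5 threshold is just an example; tune it to your textures):
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.5f); // fragments with alpha <= 0.5 are rejected
                               // before they can write to the z-buffer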
If you want to use partial transparency, you'll need to sort the order of rendering objects from back to front, or bottom to top in your case.
You'll need to leave z-buffer enabled as well.
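A back-to-front sort could look roughly like this ('objects', 'cameraPos' and the 'distance' helper are placeholders, not code from this thread):
#include <algorithm>
std::sort(objects.begin(), objects.end(),
          [&](const Object& a, const Object& b) {
              // draw the farthest object first, so nearer transparent
              // objects blend over it
              return distance(cameraPos, a.center) > distance(cameraPos, b.center);
          });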
[edit] Whoops, I realized that I don't believe those functions work when you're using shaders. In the shader case, you want to discard the fragment in the fragment shader if the alpha value is close to zero:
if (color.a < 0.01) {
    discard;
} else {
    outcolor = color;
}
You need to implement a two-pass algorithm.
The first pass renders only the back faces, while the second pass renders only the front faces.
This way you don't need to sort the triangles, but some artifacts may occur depending on whether your geometry is convex or not.
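In sketch form, with drawTree() standing in for however you submit the model:
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT); // pass 1: cull front faces, so only back faces are drawn
drawTree();
glCullFace(GL_BACK);  // pass 2: cull back faces, so only front faces are drawn
drawTree();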
I may be wrong, but this is because with DirectX's default settings the back sides of triangles are not rendered. With the Z-buffer removed, it draws the triangles in order; with the Z-buffer on, it still doesn't draw the back sides of the triangles.
It is possible to show both sides of each triangle, even with Z enabled, though I'm thinking there might be a reason culling is normally on... such as speed.
Device->SetRenderState(D3DRS_CULLMODE, Value);
where Value can be:
D3DCULL_NONE - shows both sides of each triangle (no culling)
D3DCULL_CW - culls clockwise-wound triangles (the front faces, with the default winding)
D3DCULL_CCW - culls counterclockwise-wound triangles (the back faces); this is the default state

OpenGL draw GL_LINES on top of GL_QUADS

I am drawing a cube and a line grid in 3D in OpenGL:
glBegin(GL_QUADS);
...
glEnd();
glBegin(GL_LINES);
...
glEnd();
Now, independent of the order (whether I draw the lines first or the quads first) and independent of the position, it always happens that the lines are drawn over the cube. I thought OpenGL draws front to back.
What I tried is to use:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
which does work but part of the cube is transparent now.
I also tried glDepthFunc(GL_NEVER) together with disabling GL_DEPTH_TEST, but I get the same problem: the cube appears transparent.
Does anyone have a hint to overcome this problem?
If you want to draw the lines in the background, just draw them (and the rest of the background) first, clear the depth buffer, then render the rest of the scene.
Or you can just give the lines a depth such that they will always be behind everything else, but then you have to make sure that none of your game-world objects go behind them.
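The first approach, sketched (drawGrid() and drawScene() are placeholders for your GL_LINES and GL_QUADS drawing code):
drawGrid();                   // background lines first
glClear(GL_DEPTH_BUFFER_BIT); // discard their depth values
drawScene();                  // the cube now passes the depth test everywhere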
Now, independent of the order (whether I draw the lines first or the quads first) and independent of the position, it always happens that the lines are drawn over the cube.
If your lines have proper depth values, then you forgot to enable the depth buffer. If you did enable it, then you must make sure that the library you use to initialize OpenGL actually requested a depth buffer.
I thought OpenGL draws front to back.
It does not. OpenGL draws polygons in the same order you specify them. There is no automatic sorting of any kind.
Does anyone have a hint to overcome this problem?
Well, you could clear the depth buffer, but this will be slow and inefficient. Technically, you should almost never do that.
glDepthMask(GL_FALSE) will disable writing to the depth buffer, i.e. any object drawn after that call will not update the depth buffer, but will still be tested against the data already stored there. It is frequently used for particle systems. So call glDepthMask(GL_FALSE), draw the lines, call glDepthMask(GL_TRUE), then draw the cube.
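In sketch form (drawLines() and drawCube() stand in for your drawing code):
glDepthMask(GL_FALSE); // lines are depth-tested but don't write depth
drawLines();
glDepthMask(GL_TRUE);  // the cube writes depth normally and covers the lines
drawCube();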
If you combine glDepthMask(GL_FALSE) with glDepthFunc(GL_ALWAYS), the object will always be drawn, completely ignoring the depth buffer (but the depth buffer won't be changed).