OpenGL 3: two sets of shaders, texture showing black - C++

I recently succeeded in making a small test app with a GL_TEXTURE_RECTANGLE. Now I'm trying to integrate it into my larger project, but when I call glBindTexture(GL_TEXTURE_RECTANGLE, _tex_id[0]) inside the render function, it causes a GL_INVALID_OPERATION error. The texture image sometimes shows for a fraction of a second, then turns black and stays black.
I am trying to do this using two sets of vertex and fragment shaders: one set for the 3D scene and one set for the 2D overlay. I've never tried this before, so I don't know if that's what's causing the error or if I should be going about this a different way. The shaders all compile and link fine.
Any insight would be much appreciated, and if it would help to see some code, let me know and I'll post some of it (although I think it may be too much for anyone to reasonably look through).
Edit: gDEBugger breaks at the call to glBindTexture(), and when I click on the breakpoint, the properties window shows a picture of one of my other textures (one that's being loaded by the 3D scene's shaders). It shows that it's trying to load texture number 1, but I know this number is already being used to draw the 3D scene's texture shown in the properties window... why would glGenTextures() give me overlapping texture IDs? Is this normal, or could it be part of the problem?

The black texture was due to me not forwarding some vertex shader inputs (normals) through to the fragment shader, even though I'm not using normals for anything in the 2D overlay shaders. As soon as I added outputs for all the inputs and forwarded them along to the fragment shader, the texture was no longer black, but it was still disappearing after a fraction of a second.

That was because I was calling glBindTexture(GL_TEXTURE_RECTANGLE, 0) at the end of the render function in the hope that it would clean up some state. This was clearly the wrong thing to do, because removing that call made the 2D texture stay on-screen.

Furthermore, calling glBindTexture() with the GL_TEXTURE_RECTANGLE target seems to work during the texture setup stage, but during rendering the GL_TEXTURE_RECTANGLE target was causing the GL_INVALID_OPERATION error. Changing the target to GL_TEXTURE_2D only in the render function made the error go away, and everything seems to work nicely now.
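For reference, a minimal sketch of the corrected overlay pass; _overlay_prog, _overlay_vao and _overlay_sampler are hypothetical placeholder names (_tex_id is from the question):

void renderOverlay()
{
    glUseProgram(_overlay_prog);

    glActiveTexture(GL_TEXTURE0);
    // GL_TEXTURE_2D here; in this case binding GL_TEXTURE_RECTANGLE inside
    // the render function was what raised GL_INVALID_OPERATION
    glBindTexture(GL_TEXTURE_2D, _tex_id[0]);
    glUniform1i(_overlay_sampler, 0); // sampler uniform -> texture unit 0

    glBindVertexArray(_overlay_vao);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // note: no glBindTexture(..., 0) at the end; unbinding here was what
    // made the overlay vanish after a fraction of a second
}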

Related

White (maybe uncolored) OBJ in GLSL/C++

Hi, I'm working on creating a space environment with a ship inside. But after creating the skybox (no errors), I put my ship inside and it has no colour; it's something like white-black.
I modeled the ship with OpenSCAD and then exported it to .OBJ format with MeshLab. I load it in the source code, but it has no texture/colour. This is my ship in MeshLab:
I just need to know whether I have to add something about colour in the code, or whether it's an error in the input. What is this?
If you need it, I'll post the code, but if this error is something common, please explain it to me. Anyway, I'm a bit of a newbie in OpenGL, so be patient. Thank you.
EDIT:
This is my .obj file in Windows:
And the same project in Ubuntu:
What is this difference?
Anyway, here is the OpenSCAD code:
module navicella() {
    $fn = 100;
    rotate([0, 180, 270]) {
        union() {
            rotate([270, 180, 0]) {
                rotate([90, 0, 0])
                    cylinder(50, 7, 10, center = true);
                intersection() {
                    translate([0, -25, 0]) sphere(10);
                    translate([0, -25, 0]) cube(19, center = true);
                }
                difference() {
                    translate([0, 35, 0]) cube([10, 15, 15], center = true);
                    translate([0, 40, 0]) sphere(13);
                }
                translate([5, -10, 0]) rotate([90, 0, 70])
                    cube([35, 1, 15], center = true);
                translate([-2, 0, 0]) rotate([90, 0, 95])
                    cube([50, 1, 10], center = true);
                translate([0, 3, 6]) rotate([0, -15, 90])
                    cube([40, 1, 20], center = true);
                translate([0, 3, -6]) rotate([0, 15, 90])
                    cube([40, 1, 20], center = true);
                translate([0, -35, 0]) rotate([90, 0, 0])
                    cylinder(10, 5, 0, center = true);
                translate([0, 20, 0]) rotate([0, 90, 0])
                    cube([45, 1, 2], center = true);
                translate([0, 25, 0]) rotate([90, 0, 0])
                    cylinder(5, 4, 7, center = true);
            }
        }
    }
}
navicella();
Looks like you forgot to disable the texture targets that your model does not use. That is a very common mistake (I still do it all the time today).
What is probably happening?
You rendered the skybox, or some other object, with textures.
So, for example, GL_TEXTURE_CUBE_MAP and/or GL_TEXTURE_2D are enabled. Now when you start rendering your mesh, which has no texture, those targets are still enabled. So for every fragment/pixel, GL will sample the last set texture coordinate from the last bound texture in each of the enabled texture targets and combine the colors according to the GL settings.
So if your model has no texture coordinates, GL will use the last set one. That is usually in a corner area where the black border is... or you are just in some dark region of the texture. Also, unbinding a texture only means you bind the default texture 0, which is usually black.
To test/remedy this, just call glDisable(GL_...); for all previously used texture targets (see the sketch below). If it helps, you know where the problem is.
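A minimal sketch of that test, assuming the fixed-function pipeline and that the skybox used GL_TEXTURE_CUBE_MAP on unit 1 and GL_TEXTURE_2D on unit 0 (adjust the units/targets to whatever you actually enabled):

// disable every texture target the previous draw enabled
glActiveTexture(GL_TEXTURE1);
glDisable(GL_TEXTURE_CUBE_MAP);
glActiveTexture(GL_TEXTURE0);
glDisable(GL_TEXTURE_2D);

// ... render the untextured ship here ...

// re-enable the targets before drawing the skybox again
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_CUBE_MAP);
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);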
Also, if your object contains texture coordinates but the texture is not loaded properly (wrong file name/path, for example), then the result is usually black.
Missing or negative normals while rendering with lighting enabled
It does not look like it, but this could also be the reason. If your model has wrong normals or none at all, the lighting computation produces wrong lighting. If the object stays dark even as you rotate it, the normals are negated (facing the other way) and you should change the front face for lighting/material or negate the normals.
If the color intensity changes with rotation, then your object probably has no normals, and again the last set normal is used (from the previous rendering). In such a case, either compute normals for the object (via the cross product; see the sketch below) or disable lighting for that object.
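A self-contained sketch of the per-face cross-product approach (counter-clockwise winding assumed; the Vec3 type and names are illustrative):

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

static Vec3 cross(const Vec3 &a, const Vec3 &b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// face normal of triangle (p0, p1, p2), normalized
Vec3 faceNormal(const Vec3 &p0, const Vec3 &p1, const Vec3 &p2)
{
    Vec3 n = cross(sub(p1, p0), sub(p2, p0));
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
    return n;
}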

Can you use glColor3f() whilst bypassing the fragment shader? [duplicate]

Okay, so here is my problem. I have a framework used by my school for visualizations, and I've been trying to set it up to do 3D graphics. The problem is, the framework currently only uses the fixed-function pipeline to draw. Without messing that up, I've been trying to work around the old code, which still needs to use the fixed-function pipeline, and I have been setting up facilities to allow for the creation of shaders and shader programs. I've got a simple color shader to compile, and I've also made a test vertex array (a green triangle).
Now when I tried to render it, the screen went black. Beforehand, there were a lot of 2D sprites and whatnot moving about the screen, but stepping through the code I added to the render function, I found that the screen goes black the moment I call glUseProgram. If I comment out the glUseProgram and the parts where I set the uniforms and draw, everything works normally. Does glUseProgram disable the fixed-function pipeline? If so, is there any way to reactivate it, per se?
The moment you call glUseProgram, the fixed-function pipeline is replaced by the programmable pipeline. You can't have the fixed-function and programmable pipelines active at the same time. For example, suppose your scene contains fog; if you haven't taken care of that in your fragment shader, you won't see it in the final output.
Though in your render/draw function you can do something like this:
void draw()
{
    glUseProgram(program);
    // render stuff with the shader

    glUseProgram(0);
    // render stuff with the fixed-function pipeline
}

OpenGL strange behavior with shadow map (glBindTexture)

I have a strange bug in OpenGL that I cannot explain.
I have "programmed" a shadow map by copy-pasting from tutorials.
I wanted to display the shadow map, so I wrote a little routine that displays it. It doesn't work (I only get a white square).
Finally, by trying a lot, I got the shadow to be almost correct in the main rendering of the scene. But the shadow no longer renders when I un-comment the routine that displays the shadow map (which runs before the main rendering).
I guess it has to do with glBindTexture(GL_TEXTURE_2D, depthTex) (depthTex is the texture stored in the shadow buffer). If this depthTex is not referenced to display the shadow map, then somehow the shadow is built; but when I ask to display this depthTex, the computed shadow is nonsense.
I wonder if, once glBindTexture(GL_TEXTURE_2D, depthTex) has been called to display the shadow map, depthTex can no longer be bound for the shadow shader's computation.
I don't understand whether glBindTexture is for reading or for writing...
This depthTex is indeed defined as the texture in which the shadow map is stored.
To sum up: two pieces of code interfere, so that rendering a texture appears to be more complicated than simply issuing the regular OpenGL commands to display a texture. It is just as if glBindTexture(GL_TEXTURE_2D, depthTex) could only be called once.
If I ask the shadow-map display routine to display another texture from the program (such as "floor"), then the shadow is fine in the main rendered scene, but when I ask it to display "depthTex", the shadow doesn't work anymore.
I have a strange bug in OpenGL that I cannot explain. I have "programmed" a shadow map by copy-pasting from tutorials.
In short: you're cargo-culting. Read some good OpenGL tutorial(s), understand what they teach, and then you'll make some progress.
I don't understand whether glBindTexture is for reading or for writing... This depthTex is indeed defined as the texture in which the shadow map is stored.
It's for both. It selects the texture bound to the active texture unit. When texturing is enabled, or when a shader that samples from that texture unit is bound, the bound texture is read from. Calling one of the glTex…Image functions writes to it.
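A short sketch of both directions, reusing the depthTex name from the question (the size and format here are illustrative):

GLuint depthTex = 0;
glGenTextures(1, &depthTex);

// write: bind, then allocate/upload storage through glTexImage2D
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, 1024, 1024, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

// read: bind to the unit your sampler uses, then issue draw calls
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTex);
// ... glUseProgram(shadowProg); draw; the shader samples from unit 0 ...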

DirectX post-processing shader

I have a simple application in which I need to let the user select a shader (an .fx HLSL or assembly file, possibly with multiple passes, but all of them pixel shaders) and preview it.
The application runs, the list of shaders comes up, and a button launches the "preview window."
From this preview window (which has a DirectX viewport in it), the user selects an image, and the shader is run on that image and the result displayed. Only one frame needs to be rendered (it's not real-time).
I have a vertex/pixel shader combination set up that takes a quad and renders it to the screen, textured with the chosen image. This works perfectly.
I need to then run another effect, purely pixel shader, on the output from the first effect, and display the final image (post-processed) to the screen. This doesn't work at all.
I've tried for the past few days to get it working, but for no apparent reason, the identical code blocks used to render each effect only render the first. I can add the second shader file as a second pass in the first shader file and it runs perfectly (although that completely defeats my goal of previewing user-created shaders). When I try to use a second effect (which loads and compiles just fine), it does nothing.
I've taken the results of the first shader (with GetRenderTargetData) and placed them in a texture & surface (destTex and destSur), then set that texture as the input for the second pass (using dev->SetTexture and later effect->SetTexture("thisframe", destTex)).
All calls succeed, effects compile, textures load, quads are drawn, but the effect is not visible.
I suspected at first that the device (created with software vertex processing) was causing the issue, but that doesn't seem to be the case (I tried with hardware and mixed).
Additionally, with both a HAL and a REF device (not a problem, since the app isn't real-time anyway), the second shader isn't visible.
Everything is written in C++ for Direct3D 9.
Try clearing the depth-stencil buffer after each time you render the quad.
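For example, a sketch assuming dev is your IDirect3DDevice9* (drop D3DCLEAR_STENCIL if your depth-stencil surface has no stencil channel):

// clear depth (and stencil) between the two quad passes
dev->Clear(0, NULL, D3DCLEAR_ZBUFFER | D3DCLEAR_STENCIL,
           D3DCOLOR_XRGB(0, 0, 0), // color is ignored without D3DCLEAR_TARGET
           1.0f,                   // depth
           0);                     // stencil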
First, create a texture and render the first shader directly into that texture. Then render the second shader, with that texture as its input, to the back buffer.
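A sketch of that flow in D3D9, assuming dev is your IDirect3DDevice9* and both effects are already compiled (error checks and Release() calls omitted; names are illustrative):

UINT width = 512, height = 512; // match your image size
IDirect3DTexture9 *rtTex = NULL;
IDirect3DSurface9 *rtSur = NULL, *backBuf = NULL;

// render targets must live in D3DPOOL_DEFAULT
dev->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                   D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &rtTex, NULL);
rtTex->GetSurfaceLevel(0, &rtSur);
dev->GetRenderTarget(0, &backBuf);

// pass 1: first effect draws the textured quad into rtTex
dev->SetRenderTarget(0, rtSur);
// ... BeginScene / draw quad with the first effect / EndScene ...

// pass 2: post-process effect reads rtTex and writes to the back buffer
dev->SetRenderTarget(0, backBuf);
dev->SetTexture(0, rtTex);
// ... draw the quad again with the second effect ...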
There must be some kind of vertex input and vertex processing (either fixed-function or a shader) in order for the pixel shader to run. Are you supplying the vertex shader, and if so, are you sure it does what the pixel shader expects? What does your draw call look like?
It's probably worth looking at a PIX trace of your app to see what the device state is when trying to use the user effect.

OpenGL primitives too dark when multitexturing?

I'm having a problem getting accurate primitive colours when I'm using multitexturing elsewhere in the scene. Basically, I have some lines and polygons that I am trying to render over a video texture (I'm using 3-stage multitexturing to create the video texture)... Anyhow, I know the problem is not alpha-related. In fact, I know that if, in my texture update function, I just comment out the calls to glBindTexture() for texture units 1 and 2 (leaving texture unit 0), the primitive colour is fine. Is it trying to multitexture the primitives too (even though I'm obviously not setting texture coordinates for the primitives)?
Make sure to disable multitexturing when you're not using it. OpenGL is a state machine, so if you enable a texture target it will stay enabled until you explicitly disable it.
Just because you're not setting texture coordinates doesn't mean OpenGL will assume you're not using the texture.
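A minimal sketch, assuming the video pass enables GL_TEXTURE_2D on units 0-2; disable them before drawing the primitives and re-enable them afterwards:

// turn off all three units used by the video texture
for (int unit = 2; unit >= 0; --unit) {
    glActiveTexture(GL_TEXTURE0 + unit);
    glDisable(GL_TEXTURE_2D);
}

// ... draw the lines/polygons; glColor*() is no longer modulated ...

// re-enable the units before the next video texture draw
for (int unit = 0; unit <= 2; ++unit) {
    glActiveTexture(GL_TEXTURE0 + unit);
    glEnable(GL_TEXTURE_2D);
}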