glDrawBuffers usage for multiple render targets, under OS X

I'm having some very strange behaviour from my multiple-render-target code, and have started to wonder whether I'm catastrophically misunderstanding the way that this is supposed to work.
I'm running in a version 2.1 context. Here's the core bit of render setup code I'm executing:
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(2,buffers);
My shader then writes out color data to gl_FragData[0] and gl_FragData[1].
This is essentially the same situation as was discussed in this question. However, when I run this on OS X, my shader only outputs to the first render target. OpenGL throws no errors, either during the construction of the FBO with two color attachments, or at any point during the rendering process.
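For completeness, the FBO construction looks roughly like this (a simplified sketch of my setup; the m_colorTex0/m_colorTex1/m_depthTex names are placeholders for my texture ids):
glGenFramebuffersEXT(1, &m_fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, m_fbo);
// One texture per render target, plus a depth texture.
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, m_colorTex0, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT, GL_TEXTURE_2D, m_colorTex1, 0);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT, GL_TEXTURE_2D, m_depthTex, 0);
// This reports GL_FRAMEBUFFER_COMPLETE_EXT, and glGetError() stays clean.
GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);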
When I examine what's going on via the OSX 'OpenGL Profiler' "trace" view, it shows the driver's side of this code execution as being:
2.86 µs glBindFramebufferEXT(GL_FRAMEBUFFER, 1);
3.48 µs glDrawBuffersARB(2, {GL_COLOR_ATTACHMENT0, GL_ZERO});
Which perhaps explains why nothing was being written to GL_COLOR_ATTACHMENT1; it appears to have been replaced by GL_ZERO in the call to glDrawBuffers!
If I switch the order of the buffers in the buffers[] array to be GL_COLOR_ATTACHMENT1_EXT first and then GL_COLOR_ATTACHMENT0_EXT, then my shader only writes into GL_COLOR_ATTACHMENT1_EXT, and GL_COLOR_ATTACHMENT0_EXT appears to be replaced with GL_ZERO.
Here's where it gets weird. If I use the code:
GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(3, buffers);
Then the statistics view shows this:
0.46 µs glDrawBuffersARB(3, {GL_COLOR_ATTACHMENT0, GL_ZERO, GL_COLOR_ATTACHMENT1});
OpenGL still throws no errors, and my shader successfully writes out data to both color attachments, even though it's writing to gl_FragData[0] and gl_FragData[1].
So even though my program is now working, it seems to me like this shouldn't work. And I was curious to see how far I could push this, hoping that pushing OpenGL to an eventual failure would be educational. So I tried compiling with this code:
GLenum buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(4, buffers);
When running that, the OpenGL Profiler "trace" view shows this as being executed:
4.26 µs glDrawBuffersARB(4, {GL_COLOR_ATTACHMENT0, GL_ZERO, GL_COLOR_ATTACHMENT1, GL_ZERO});
And now OpenGL is throwing "invalid framebuffer operations" all over the place, but my shader is still successfully writing color data to both color attachment points.
Does all this make sense to anyone? Have I catastrophically misunderstood the way that glDrawBuffers is supposed to be called?
According to the OpenGL Profiler's "Resources" view, my framebuffer (number 1) looks fine; it does have two color attachments attached, as expected.
Attached Objects:
{
{
GL_FRAMEBUFFER_ATTACHMENT: GL_COLOR_ATTACHMENT0
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT: GL_TEXTURE
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME_EXT: 1
GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL_EXT: 0
GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_3D_ZOFFSET_EXT: 0
GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE_EXT: 0
}
{
GL_FRAMEBUFFER_ATTACHMENT: GL_COLOR_ATTACHMENT1
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT: GL_TEXTURE
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME_EXT: 2
GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL_EXT: 0
GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_3D_ZOFFSET_EXT: 0
GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE_EXT: 0
}
{
GL_FRAMEBUFFER_ATTACHMENT: GL_COLOR_ATTACHMENT2
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT: GL_NONE
}
{
GL_FRAMEBUFFER_ATTACHMENT: GL_COLOR_ATTACHMENT3
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT: GL_NONE
}
{
GL_FRAMEBUFFER_ATTACHMENT: GL_COLOR_ATTACHMENT4
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT: GL_NONE
}
{
GL_FRAMEBUFFER_ATTACHMENT: GL_COLOR_ATTACHMENT5
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT: GL_NONE
}
{
GL_FRAMEBUFFER_ATTACHMENT: GL_COLOR_ATTACHMENT6
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT: GL_NONE
}
{
GL_FRAMEBUFFER_ATTACHMENT: GL_COLOR_ATTACHMENT7
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT: GL_NONE
}
{
GL_FRAMEBUFFER_ATTACHMENT: GL_DEPTH_ATTACHMENT
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT: GL_TEXTURE
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_NAME_EXT: 3
GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_LEVEL_EXT: 0
GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_3D_ZOFFSET_EXT: 0
GL_FRAMEBUFFER_ATTACHMENT_TEXTURE_CUBE_MAP_FACE_EXT: 0
}
{
GL_FRAMEBUFFER_ATTACHMENT: GL_STENCIL_ATTACHMENT
GL_FRAMEBUFFER_ATTACHMENT_OBJECT_TYPE_EXT: GL_NONE
}
}

...and after banging my head against this for almost a week, I figured it out just five minutes after finally posting the question on StackOverflow. Posting my solution since it seems to be a header problem that's likely to affect other OSX folks.
For whatever reason, on my 64-bit OSX build, GLenum is defined as an 8-byte integer type, while the OpenGL drivers actually want 32-bit values in the array being passed to glDrawBuffers. If I rewrite the code as:
uint32_t buffers[] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
glDrawBuffers(2,(GLenum*)buffers);
Then everything works as expected. (The placement of the GL_ZERO entries was the hint that eventually led me to this answer)
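In hindsight, a compile-time check would have caught this much earlier; a minimal sketch, assuming a C++11 compiler:
#include <cstdint>
// If GLenum is not a 32-bit type, the tightly packed array that glDrawBuffers
// expects will be laid out wrong, producing the GL_ZERO gaps seen above.
static_assert(sizeof(GLenum) == sizeof(std::uint32_t),
              "GLenum is not 32 bits; glDrawBuffers will misread the buffer list");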

Related

Passing Texture through Shader DirectX 9

I am trying to render a texture that gets passed through a pixel shader.
Currently my shader is as follows:
float4 EffectProcess( float2 Tex : TEXCOORD0 ) : COLOR0
{
return float4(1,0,0,1);
}
technique MyTechnique
{
pass p0
{
VertexShader = null;
PixelShader = compile ps_2_0 EffectProcess();
}
}
As you can see, it is a very basic shader that forces the pixels to be red.
UINT uiPasses = 0;
res= g_lpEffect->Begin(&uiPasses, 0);
for (UINT uiPass = 0; uiPass < uiPasses; uiPass++)
{
res = g_lpEffect->BeginPass(uiPass);
res = sprite->Begin(D3DXSPRITE_SORT_TEXTURE);
res = sprite->Draw(tex, NULL, 0x0, 0x0, 0xFFFFFFFF);
res = sprite->End();
res = g_lpEffect->EndPass();
}
res = g_lpEffect->End();
That is how I am drawing the texture using the shader. I am not sure this is the correct way to do it, though, and have found very few resources on the subject.
The shader is created correctly and the texture as well; all calls return an HRESULT of S_OK. Yet when I run the code, the texture shows perfectly, without being overwritten by red.
Both the sprite and the effect, by default, store the initial pipeline state and set up their own when Begin is called, then restore it when End is called. So I suspect that sprite->Begin(D3DXSPRITE_SORT_TEXTURE) will disable effect processing, and your pixel shader is never called. You may try passing something like D3DXSPRITE_DONOTMODIFY_RENDERSTATE into Begin to prevent it from modifying pipeline state, though this may break sprite rendering. It would be better to get rid of the sprite altogether and write your own sprite shader (both vertex and pixel), because fixed-pipeline rendering is mostly deprecated these days.
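If you do want to keep ID3DXSprite, one ordering you could try is roughly the following (an untested sketch; the idea is that sprite draws are batched until Flush/End, so the flush has to happen while the effect pass is active):
UINT uiPasses = 0;
// Begin the sprite first, and stop it from touching render state, so the
// effect's ps_2_0 shader stays bound for the actual draw.
HRESULT res = sprite->Begin(D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DONOTMODIFY_RENDERSTATE);
res = g_lpEffect->Begin(&uiPasses, 0);
for (UINT uiPass = 0; uiPass < uiPasses; uiPass++)
{
    res = g_lpEffect->BeginPass(uiPass);
    res = sprite->Draw(tex, NULL, 0x0, 0x0, 0xFFFFFFFF);
    res = sprite->Flush();       // force the batched draw while the pass is active
    res = g_lpEffect->EndPass();
}
res = g_lpEffect->End();
res = sprite->End();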

Get texture target by texture id

glBindTextures is a nice function, not only because it binds multiple textures in one call, but also because it knows to bind each texture to "the target [...] that was specified when the object was created". This way I can specify the target only at texture creation and then forget about it, which helps in generic code.
Unfortunately, I must know the target when calling functions like glGetTexParameter. Is there a way to retrieve the texture target from the texture id? Widely supported extensions are also ok.
As far as I know, there isn't.
A possible workaround could be querying the current binding for every texture target used by your application and compare the current texture against the id you have.
GLuint currentTex;
glGetIntegerv(GL_TEXTURE_BINDING_1D, (GLint*)&currentTex);
if (currentTex == testTex)
{
target = GL_TEXTURE_1D;
return;
}
glGetIntegerv(GL_TEXTURE_BINDING_2D, (GLint*)&currentTex);
if (currentTex == testTex)
{
target = GL_TEXTURE_2D;
return;
}
// and so on ...
Of course, you must have a texture bound for this to work; and if you bind it with glBindTexture, you need the target anyway.
But this solution is so clumsy and non-scalable that it is generally much easier to just keep an extra int for the texture target together with the texture id, as sketched below.
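For example, a minimal sketch of that bookkeeping (the TextureHandle name is just illustrative):
// Remember the target alongside the id when the texture is created, so it
// never has to be queried back from GL.
struct TextureHandle {
    GLuint id;
    GLenum target;
};

TextureHandle createTexture(GLenum target)
{
    TextureHandle tex;
    tex.target = target;
    glGenTextures(1, &tex.id);
    glBindTexture(target, tex.id);   // first bind fixes the target for this object
    return tex;
}

// Later: glGetTexParameteriv(tex.target, GL_TEXTURE_MIN_FILTER, &value);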
Since OpenGL 4.5 this can be done by:
GLenum target;
glGetTextureParameteriv(textureId, GL_TEXTURE_TARGET, (GLint*)&target);
It's also true that since the introduction of the direct-state-access API (DSA) in OpenGL 4.5, knowing the target of a texture has become less important.
There really isn't a pretty way to do this that I could find, even after looking at the state tables in the specs. Two possibilities that are both far from attractive:
Try binding it to various targets, and see if you get a GL_INVALID_OPERATION error:
glBindTexture(GL_TEXTURE_1D, texId);
if (glGetError() != GL_INVALID_OPERATION) {
return GL_TEXTURE_1D;
}
glBindTexture(GL_TEXTURE_2D, texId);
if (glGetError() != GL_INVALID_OPERATION) {
return GL_TEXTURE_2D;
}
...
This is similar to what #glamplert suggested. Bind the texture to a given texture unit with glBindTextures(), and then query the textures bound to the various targets for that unit:
glBindTextures(texUnit, 1, &texId);
glActiveTexture(GL_TEXTURE0 + texUnit);
GLuint boundId = 0;
glGetIntegerv(GL_TEXTURE_BINDING_1D, (GLint*)&boundId);
if (boundId == texId) {
return GL_TEXTURE_1D;
}
glGetIntegerv(GL_TEXTURE_BINDING_2D, (GLint*)&boundId);
if (boundId == texId) {
return GL_TEXTURE_2D;
}
But I think you would be much happier if you simply store away which target is used for each texture when you first create it.

Can't generate mipmaps with off-screen OpenGL context on Linux

This question is a continuation of the problem I described here. This is one of the weirdest bugs I have ever seen. I have my engine running in 2 modes: display mode and offscreen. The OS is Linux. I generate mipmaps for the textures, and in display mode it all works fine. In that mode I use GLFW3 for context creation. Now, the funny part: in offscreen mode, the context for which I create manually with the code below, the mipmap generation fails OCCASIONALLY! That is, on some runs the resulting output looks ok, and on others the missing levels are clearly visible, as the frame is full of texture junk data or entirely empty.
At first I thought I had my mipmap gen routine wrong, which goes like this:
glGenTextures(1, &textureName);
glBindTexture(GL_TEXTURE_2D, textureName);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, imageInfo.Width, imageInfo.Height, 0, imageInfo.Format, imageInfo.Type, imageInfo.Data);
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0 );
glGenerateMipmap(GL_TEXTURE_2D);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
I also tried to play with this param:
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, XXX);
including the max level detection formula:
int numMipmaps = 1 + floor(log2(glm::max(imageInfoOut.width, imageInfoOut.height)));
But all this stuff didn't work consistently. Out of 10-15 runs, 3-4 come out with broken mipmaps. What I then found was that switching to GL_LINEAR solved it. Also, in mipmap mode, setting just 1 level worked as well. Finally I started thinking there could be a problem at the context level, because in screen mode it works! I switched context creation to GLFW3 and it works. So I wonder what's going on here? Am I missing something in the Pbuffer setup which breaks mipmap generation? I doubt it, because AFAIK that is done by the driver.
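For reference, this is roughly how I combined the formula with GL_TEXTURE_MAX_LEVEL (a sketch of the variant I tried, using the same imageInfo fields as above; numMipmaps counts the base level, so the last level index is numMipmaps - 1):
int numMipmaps = 1 + (int)floor(log2((double)glm::max(imageInfo.Width, imageInfo.Height)));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, numMipmaps - 1);
glGenerateMipmap(GL_TEXTURE_2D);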
Here is my custom off-screen context creation setup:
int visual_attribs[] = {
GLX_RENDER_TYPE,
GLX_RGBA_BIT,
GLX_RED_SIZE, 8,
GLX_GREEN_SIZE, 8,
GLX_BLUE_SIZE, 8,
GLX_ALPHA_SIZE, 8,
GLX_DEPTH_SIZE, 24,
GLX_STENCIL_SIZE, 8,
None
};
int context_attribs[] = {
GLX_CONTEXT_MAJOR_VERSION_ARB, vmaj,
GLX_CONTEXT_MINOR_VERSION_ARB, vmin,
GLX_CONTEXT_FLAGS_ARB,
GLX_CONTEXT_ROBUST_ACCESS_BIT_ARB
#ifdef DEBUG
| GLX_CONTEXT_DEBUG_BIT_ARB
#endif
,
GLX_CONTEXT_PROFILE_MASK_ARB, GLX_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
None
};
_xdisplay = XOpenDisplay(NULL);
int fbcount = 0;
_fbconfig = NULL;
// _render_context
if (!_xdisplay) {
throw();
}
/* get framebuffer configs, any is usable (might want to add proper attribs) */
if (!(_fbconfig = glXChooseFBConfig(_xdisplay, DefaultScreen(_xdisplay), visual_attribs, &fbcount))) {
throw();
}
/* get the required extensions */
glXCreateContextAttribsARB = (glXCreateContextAttribsARBProc) glXGetProcAddressARB((const GLubyte *) "glXCreateContextAttribsARB");
glXMakeContextCurrentARB = (glXMakeContextCurrentARBProc) glXGetProcAddressARB((const GLubyte *) "glXMakeContextCurrent");
if (!(glXCreateContextAttribsARB && glXMakeContextCurrentARB)) {
XFree(_fbconfig);
throw();
}
/* create a context using glXCreateContextAttribsARB */
if (!(_render_context = glXCreateContextAttribsARB(_xdisplay, _fbconfig[0], 0, True, context_attribs))) {
XFree(_fbconfig);
throw();
}
// GLX_MIPMAP_TEXTURE_EXT
/* create temporary pbuffer */
int pbuffer_attribs[] = {
GLX_PBUFFER_WIDTH, 128,
GLX_PBUFFER_HEIGHT, 128,
None
};
_pbuff = glXCreatePbuffer(_xdisplay, _fbconfig[0], pbuffer_attribs);
XFree(_fbconfig);
XSync(_xdisplay, False);
/* try to make it the current context */
if (!glXMakeContextCurrent(_xdisplay, _pbuff, _pbuff, _render_context)) {
/* some drivers do not support a context without a default framebuffer, so fall back on
* using the default window.
*/
if (!glXMakeContextCurrent(_xdisplay, DefaultRootWindow(_xdisplay),
DefaultRootWindow(_xdisplay), _render_context)) {
throw();
}
}
Almost forgot: my system and hardware:
Kubuntu 13.04 64-bit. GPU: NVIDIA GeForce GTX 680. The engine uses the OpenGL 4.2 API.
Full OpenGL info:
OpenGL vendor string: NVIDIA Corporation
OpenGL renderer string: GeForce GTX 680/PCIe/SSE2
OpenGL version string: 4.4.0 NVIDIA 331.49
OpenGL shading language version string: 4.40 NVIDIA via Cg compiler
Btw, I also tried older drivers and it makes no difference.
UPDATE:
Seems like my assumption regarding GLFW was wrong. When I compile the engine and run it from the terminal, the same thing happens. BUT - if I run the engine from the IDE (debug or release) there are no issues with the mipmaps. Is it possible the standalone app links against different SOs (shared libraries)?
To make it clear, I don't use Pbuffers to render into. I render into custom framebuffers.
UPDATE1:
I have read that non-power-of-2 textures can be tricky for automatic mipmap generation, and that if OpenGL fails to generate all the levels it turns off texture usage. Is it possible that's what I am experiencing here? Because once the mipmapped texture goes wrong, the rest of the textures (non-mipmapped) disappear too. But if this is the case, then why is the behavior inconsistent?
Uh, why are you using PBuffers in the first place? PBuffers have far too many caveats for there to be any good reason to use them in a new project.
You want offscreen rendering? Then use Framebuffer Objects (FBOs).
You need a purely off-screen context? Then create a normal window which you simply don't show, make its context current, and render into an FBO.
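A minimal sketch of that approach with GLFW3, which you already use for display mode (the exact hints here are an assumption on my part):
glfwInit();
glfwWindowHint(GLFW_VISIBLE, GL_FALSE);            // the window is never shown
GLFWwindow* window = glfwCreateWindow(8, 8, "offscreen", NULL, NULL);
glfwMakeContextCurrent(window);

// From here on, render into your own FBO exactly as in display mode.
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
// ... attach color/depth textures, check completeness, draw, glGenerateMipmap ...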

Driver error when using multiple shaders

I'm using 3 different shaders:
a tessellation shader to use the tessellation feature of DirectX11 :)
a regular shader to show how it would look without tessellation
and a text shader to display debug-info such as FPS, model count etc.
All of these shaders are initialized at the beginning.
Using the keyboard, I can switch between the tessellation shader and the regular shader to render the scene. Additionally, I also want to be able to toggle the display of debug info using the text shader.
Since implementing the tessellation shader, the text shader doesn't work anymore. When I activate the DebugText (rendered using the text shader), my screens go black for a while, and Windows displays the following message:
Display Driver stopped responding and has recovered
This happens with either of the two shaders used to render the scene.
Additionally:
I can start the application using the regular shader to render the scene and then switch to the tessellation shader. If I try to switch back to the regular shader I get the same error as with the text shader.
What am I doing wrong when switching between shaders?
What am I doing wrong when displaying text at the same time?
What file can I post to help you help me? :) thx
P.S. I already checked if my keyinputs interrupt at the wrong time (during render or so..), but that seems to be ok
Testing Procedure
Regular Shader without text shader
Add the text shader to the regular shader by key input (works now; I reduced the text shader back to only a vertex and pixel shader) (something with the z-buffer is still wrong...)
Remove text shader, then change shader to Tessellation Shader by key input
Then if I add the text shader or switch back to the regular shader, the driver error described above occurs.
Switching/Render Shader
Here is the code snippet from Renderer.cpp where I choose the shader according to the boolean "m_useTessellationShader":
if(m_useTessellationShader)
{
// Render the model using the tesselation shader
ecResult = m_ShaderManager->renderTessellationShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(),
worldMatrix, viewMatrix, projectionMatrix, textures, texturecount,
m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
(D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT);
} else {
// todo: loaded model depends on distance to camera
// Render the model using the light shader.
ecResult = m_ShaderManager->renderShader(m_D3D->getDeviceContext(),
meshes[lod_level]->getIndexCount(), lod_level, textures, texturecount,
m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
worldMatrix, viewMatrix, projectionMatrix);
}
And here is the code snippet from Mesh.cpp where I choose the topology according to the boolean "useTessellationShader":
// RenderBuffers is called from the Render function. The purpose of this function is to set the vertex buffer and index buffer as active on the input assembler in the GPU. Once the GPU has an active vertex buffer it can then use the shader to render that buffer.
void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader)
{
unsigned int stride;
unsigned int offset;
// Set vertex buffer stride and offset.
stride = sizeof(VertexType);
offset = 0;
// Set the vertex buffer to active in the input assembler so it can be rendered.
deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);
// Set the index buffer to active in the input assembler so it can be rendered.
deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);
// Check which Shader is used to set the appropriate Topology
// Set the type of primitive that should be rendered from this vertex buffer, in this case triangles.
if(useTessellationShader)
{
deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
}else{
deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
}
return;
}
RenderShader
Could there be a problem with sometimes using only a vertex and pixel shader, and after switching, using vertex, hull, domain and pixel shaders?
Here is a little overview of my architecture:
TextClass: uses font.vs and font.ps
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);
RegularShader: uses vertex.vs and pixel.ps
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);
TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->HSSetShader(m_hullShader, NULL, 0);
deviceContext->DSSetShader(m_domainShader, NULL, 0);
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);
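For reference, explicitly unbinding the tessellation stages when switching to a shader that doesn't use them would look roughly like this (a sketch; none of the paths above currently do this):
// When the regular or text shader is activated, also clear the stages that the
// tessellation shader left bound, so no hull/domain shader runs with a
// non-patch topology.
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->HSSetShader(NULL, NULL, 0);    // unbind hull stage
deviceContext->DSSetShader(NULL, NULL, 0);    // unbind domain stage
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);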
ClearState
I'd like to switch between 2 shaders, and it seems they have different context parameters, right? The documentation for the ClearState method says it resets the following parameters to NULL:
I found the following in my Direct3D class:
depth-stencil state -> m_deviceContext->OMSetDepthStencilState
rasterizer state -> m_deviceContext->RSSetState(m_rasterState);
blend state -> m_device->CreateBlendState
viewports -> m_deviceContext->RSSetViewports(1, &viewport);
I found the following in every shader class:
input/output resource slots -> deviceContext->PSSetShaderResources
shaders -> deviceContext->VSSetShader to - deviceContext->PSSetShader
input layouts -> device->CreateInputLayout
sampler state -> device->CreateSamplerState
These two I didn't understand; where can I find them?
predications -> ?
scissor rectangles -> ?
Do I need to store them all locally so I can switch between them? It doesn't feel right to reinitialize Direct3D and the shaders on every switch (key input).
Have you checked whether the device is being reset by the system? Check the return value of the Present() method. When switching shaders abruptly, DX tends to reset the device for some odd reason.
If this is the problem, just recreate the device and context and you should be good.
Right now you have
void Direct3D::endScene()
{
// Present the back buffer to the screen since rendering is complete.
if(m_vsync_enabled)
{
// Lock to screen refresh rate.
m_swapChain->Present(1, 0);
}
else
{
// Present as fast as possible.
m_swapChain->Present(0, 0);
}
return;
}
I would suggest doing something like this to catch the return value of Present():
HRESULT Direct3D::endScene()
{
int synch = 0;
if(m_vsync_enabled)
synch = 1;
// Present as fast as possible or synch it to 1 vertical blank
return m_swapChain->Present(synch, 0);
}
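The caller can then react to device-loss results, something like this (a sketch; m_device stands for your ID3D11Device):
HRESULT hr = m_D3D->endScene();
if (hr == DXGI_ERROR_DEVICE_REMOVED || hr == DXGI_ERROR_DEVICE_RESET)
{
    // The GPU hung or was reset (a TDR, i.e. the "driver stopped responding"
    // message); query the reason, then recreate the device, swap chain and
    // all device-dependent resources.
    HRESULT reason = m_device->GetDeviceRemovedReason();
    // ... tear down and reinitialize Direct3D here ...
}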
Of course, this is only MY way of doing it, and there are many. Also, I forgot to mention that when I had this issue in the past I was also using the Effects library. Have you recompiled it on your system? If not, then do so. Or even better, get rid of it; that's what I did when I solved my problem.

GLSL change uniform texture for each object

I'm currently trying to draw simple meshes using different textures (using C# and OpenTK). I read a lot about TextureUnit and bindings, and that's my current implementation (not working as expected) :
private void ApplyOpaquePass()
{
GL.UseProgram(this.shaderProgram);
GL.CullFace(CullFaceMode.Back);
while (this.opaqueNodes.Count > 0)
Draw(this.opaqueNodes.Pop());
GL.UseProgram(0);
}
And my draw method :
private void Draw(Assets.Model.Geoset geoset)
{
GL.ActiveTexture(TextureUnit.Texture1);
GL.BindTexture(TextureTarget.Texture2D, geoset.TextureId /*buffer id returned by GL.GenTextures*/ );
GL.Uniform1(GL.GetUniformLocation(this.shaderProgram, "Texture1"), 1 /*see note below*/ );
//Note: if I'm correct, it should be 1 when using TextureUnit.Texture1
// (2 for Texture2...), note that doesn't seem to work since no
// texture at all is sent to the shader; however, a texture
// is shown when specifying any other number (0, 2, 3...)
// Draw vertices & indices buffers...
}
And my shader code (that shouldn't be the problem since uv mapping is ok):
uniform sampler2D Texture1;
void main(void)
{
gl_FragColor = texture2D(Texture1, gl_TexCoord[0].st);
}
What's the problem :
Since geoset.TextureId can vary from one geoset to another, I'm expecting different texture to be sent to the shader.
Instead, always the same texture is applied to all objects (geosets).
Ideas :
Using a different TextureUnit for each texture (works well), but what happens if we have 2000 different textures? If my understanding is right, we only need to use multiple TextureUnits if we want to use multiple textures at the same time in the shader.
I first thought that uniforms couldn't be changed once defined, but a test with a boolean uniform told me that it was actually possible.
private void Draw(Assets.Model.Geoset geoset)
{
GL.ActiveTexture(TextureUnit.Texture1);
GL.BindTexture(TextureTarget.Texture2D, geoset.TextureId);
GL.Uniform1(GL.GetUniformLocation(this.shaderProgram, "Texture1"), 1 );
//added line...
GL.Uniform1(GL.GetUniformLocation(this.shaderProgram, "UseBaseColor"), (geoset.Material.FilterMode == Assets.Model.Material.FilterType.Blend) ? 1: 0);
// Draw vertices & indices buffers...
}
Shader code:
uniform sampler2D Texture1;
uniform bool UseBaseColor;
void main(void)
{
gl_FragColor = texture2D(Texture1, gl_TexCoord[0].st);
if (UseBaseColor)
gl_FragColor = mix(vec4(0,1,1,1), gl_FragColor, gl_FragColor.a);
}
This code works great, drawing some geosets with a base color instead of transparency, which (should?) prove that uniforms can be changed here. So why isn't this working with my textures?
Should I use a different shader program per geoset?
Thanks in advance for your answers :)
Regards,
Bruce
EDIT: that's how I generate textures in the renderer:
override public uint GenTexture(Bitmap bmp)
{
uint texture;
GL.GenTextures(1, out texture);
//I disabled this line because I now bind the texture before drawing a geoset
//Anyway, uncommenting this line doesn't show a better result
//GL.BindTexture(TextureTarget.Texture2D, texture);
System.Drawing.Imaging.BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
bmp.UnlockBits(data);
//temp settings
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
return texture;
}
I finally solved my problem !
All the answers perfected my understanding and lead me to the solution which lied on two major problems:
1) As Calvin1602 said, it is very important to bind a newly created texture before calling glTexImage2D.
2) UncleZeiv also drew my attention to the last parameter of GL.Uniform1. The OpenTK tutorial is very misleading because the author passes the id of the texture object to the function, which happens to work there only because the order of texture generation exactly matches the index of the TextureUnit used.
As I was unsure that my understanding was correct, I had wrongly changed this parameter back to geoset.TextureId.
Thanks !
You don't need multiple shader programs if the only thing you are changing is the texture. Also uniform locations are constant throughout the lifetime of a shader program, so there is no need to retrieve those each frame. However, you do need to rebind the texture each time you change it, and you will need to bind each distinct texture to a separate texture ID.
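The usual pattern, sketched here in raw OpenGL calls since the OpenTK methods map one-to-one (uniformLoc and geosetTextureId are placeholders):
// At initialization: point the sampler uniform at texture unit 0, once.
glUseProgram(shaderProgram);
GLint uniformLoc = glGetUniformLocation(shaderProgram, "Texture1");
glUniform1i(uniformLoc, 0);                    // 0 means GL_TEXTURE0, not a texture id

// Per geoset: bind that geoset's texture object to the same unit, then draw.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, geosetTextureId);
// ... draw the geoset's vertex/index buffers ...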
As a result, I would conclude that what you posted ought to work and so the problem is likely somewhere else in your code.
EDIT: After the updated version it should still work. However I am concerned about why the following line is commented out:
//GL.BindTexture(TextureTarget.Texture2D, texture);
This should be in there. Otherwise you will keep overwriting the same texture (which is ridiculous). You need to bind the texture before you initialize it. Now it is entirely conceivable that something else is broken, but given what I see now, this is the only error that jumps out at me.