I'm using 3 different shaders:
a tessellation shader to use the tessellation feature of DirectX11 :)
a regular shader to show how it would look without tessellation
and a text shader to display debug-info such as FPS, model count etc.
All of these shaders are initialized at the beginning.
Using the keyboard, I can switch between the tessellation shader and the regular shader to render the scene. Additionally, I also want to be able to toggle the display of debug-info using the text shader.
Since implementing the tessellation shader, the text shader doesn't work anymore. When I activate the DebugText (rendered using the text shader), my screens go black for a while, and Windows displays the following message:
Display Driver stopped responding and has recovered
This happens with either of the two shaders used to render the scene.
Additionally:
I can start the application using the regular shader to render the scene and then switch to the tessellation shader. If I try to switch back to the regular shader I get the same error as with the text shader.
What am I doing wrong when switching between shaders?
What am I doing wrong when displaying text at the same time?
What file can I post to help you help me? :) thx
P.S. I already checked if my key inputs interrupt at the wrong time (during rendering or so..), but that seems to be ok
Testing Procedure
Regular Shader without text shader
Add text shader to Regular Shader by key input (works now, I reverted the text shader to only vertex and pixel shader) (something with the z buffer is still wrong...)
Remove text shader, then change shader to Tessellation Shader by key input
Then if I add the Text Shader or switch back to the Regular Shader, the driver error described above occurs
Switching/Render Shader
Here is the code snippet from Renderer.cpp where I choose the shader according to the boolean "m_useTessellationShader":
if(m_useTessellationShader)
{
    // Render the model using the tessellation shader
    ecResult = m_ShaderManager->renderTessellationShader(m_D3D->getDeviceContext(), meshes[lod_level]->getIndexCount(),
        worldMatrix, viewMatrix, projectionMatrix, textures, texturecount,
        m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
        (D3DXVECTOR3)m_Camera->getPosition(), TESSELLATION_AMOUNT);
}
else
{
    // todo: loaded model depends on distance to camera
    // Render the model using the light shader.
    ecResult = m_ShaderManager->renderShader(m_D3D->getDeviceContext(),
        meshes[lod_level]->getIndexCount(), lod_level, textures, texturecount,
        m_Light->getDirection(), m_Light->getAmbientColor(), m_Light->getDiffuseColor(),
        worldMatrix, viewMatrix, projectionMatrix);
}
And here is the code snippet from Mesh.cpp where I choose the topology according to the boolean "useTessellationShader":
// RenderBuffers is called from the Render function. The purpose of this function is to set the vertex buffer and index buffer as active on the input assembler in the GPU. Once the GPU has an active vertex buffer it can then use the shader to render that buffer.
void Mesh::renderBuffers(ID3D11DeviceContext* deviceContext, bool useTessellationShader)
{
    unsigned int stride;
    unsigned int offset;

    // Set vertex buffer stride and offset.
    stride = sizeof(VertexType);
    offset = 0;

    // Set the vertex buffer to active in the input assembler so it can be rendered.
    deviceContext->IASetVertexBuffers(0, 1, &m_vertexBuffer, &stride, &offset);

    // Set the index buffer to active in the input assembler so it can be rendered.
    deviceContext->IASetIndexBuffer(m_indexBuffer, DXGI_FORMAT_R32_UINT, 0);

    // Check which Shader is used to set the appropriate Topology
    // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles.
    if(useTessellationShader)
    {
        deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_3_CONTROL_POINT_PATCHLIST);
    }
    else
    {
        deviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    }

    return;
}
RenderShader
Could there be a problem with sometimes using only a vertex and pixel shader, and after switching using vertex, hull, domain and pixel shaders? (See the sketch after the overview below.)
Here is a little overview of my architecture:
TextClass: uses font.vs and font.ps
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);
RegularShader: uses vertex.vs and pixel.ps
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);
TessellationShader: uses tessellation.vs, tessellation.hs, tessellation.ds, tessellation.ps
deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->HSSetShader(m_hullShader, NULL, 0);
deviceContext->DSSetShader(m_domainShader, NULL, 0);
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);
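If that is the problem, my guess is that the regular and text shaders never unbind the hull and domain stages the tessellation shader set, so the next draw would still go through the tessellator while the topology is a triangle list. A minimal sketch of what I could try in the VS/PS-only shader classes (same member names as above), without reinitializing anything on a switch:

deviceContext->VSSetShader(m_vertexShader, NULL, 0);
deviceContext->HSSetShader(NULL, NULL, 0);   // explicitly unbind the hull stage
deviceContext->DSSetShader(NULL, NULL, 0);   // explicitly unbind the domain stage
deviceContext->GSSetShader(NULL, NULL, 0);   // geometry stage is not used either
deviceContext->PSSetShader(m_pixelShader, NULL, 0);
deviceContext->PSSetSamplers(0, 1, &m_sampleState);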
ClearState
I'd like to switch between 2 shaders and it seems they have different context parameters, right? The documentation for the ClearState method says it resets the following parameters to NULL:
I found the following in my Direct3D class:
depth-stencil state -> m_deviceContext->OMSetDepthStencilState
rasterizer state -> m_deviceContext->RSSetState(m_rasterState);
blend state -> m_device->CreateBlendState
viewports -> m_deviceContext->RSSetViewports(1, &viewport);
I found the following in every shader class:
input/output resource slots -> deviceContext->PSSetShaderResources
shaders -> deviceContext->VSSetShader to - deviceContext->PSSetShader
input layouts -> device->CreateInputLayout
sampler state -> device->CreateSamplerState
These two I didn't understand, where can I find them?
predications -> ?
scissor rectangles -> ?
Do I need to store them all locally so I can switch between them? It doesn't feel right to reinitialize Direct3D and the shaders on every switch (key input)!
Have you checked if the device is being reset by the system? Check the return value of the Present() method. When switching shaders abruptly, DX tends to reset the device for some odd reason.
If this is the problem, just recreate the device and context and you should be good.
Right now you have
void Direct3D::endScene()
{
    // Present the back buffer to the screen since rendering is complete.
    if(m_vsync_enabled)
    {
        // Lock to screen refresh rate.
        m_swapChain->Present(1, 0);
    }
    else
    {
        // Present as fast as possible.
        m_swapChain->Present(0, 0);
    }

    return;
}
I would suggest doing something like so to catch the return value of Present()
HRESULT Direct3D::endScene()
{
    UINT synch = 0;
    if(m_vsync_enabled)
        synch = 1;

    // Present as fast as possible or synch it to 1 vertical blank
    return m_swapChain->Present(synch, 0);
}
Of course this is only MY way of doing it, and there are many. Also, I forgot to tell you that when I had this issue in the past I was also using the Effects library. Have you recompiled it on your system? If not, then do so. Or even better, get rid of it; that's what I did when I solved my problem.
Related
I'm currently working on a D3D project and want to implement directional shadow mapping. I set everything up according to the Microsoft Guide, but it just doesn't work.
I've created a 2D texture object, a depth stencil view and a shader resource view and set them up using the following descriptions:
D3D11_TEXTURE2D_DESC shadowMapDesc;
ZeroMemory(&shadowMapDesc, sizeof(D3D11_TEXTURE2D_DESC));
shadowMapDesc.Width = width;
shadowMapDesc.Height = height;
shadowMapDesc.MipLevels = 1;
shadowMapDesc.ArraySize = 1;
shadowMapDesc.Format = DXGI_FORMAT_R24G8_TYPELESS;
shadowMapDesc.SampleDesc.Count = 1;
shadowMapDesc.SampleDesc.Quality = 0;
shadowMapDesc.Usage = D3D11_USAGE_DEFAULT;
shadowMapDesc.BindFlags = D3D11_BIND_DEPTH_STENCIL | D3D11_BIND_SHADER_RESOURCE;
shadowMapDesc.CPUAccessFlags = 0;
shadowMapDesc.MiscFlags = 0;
ID3D11Device& d3ddev = dev.getD3DDevice();
uint32_t *initData = new uint32_t[width * height];
ZeroMemory(initData, sizeof(uint32_t) * width * height);
D3D11_SUBRESOURCE_DATA data;
ZeroMemory(&data, sizeof(D3D11_SUBRESOURCE_DATA));
data.pSysMem = initData;
data.SysMemPitch = sizeof(uint32_t) * width;
data.SysMemSlicePitch = 0;
HRESULT hr = d3ddev.CreateTexture2D(&shadowMapDesc, &data, &texture_);
D3D11_DEPTH_STENCIL_VIEW_DESC depthStencilViewDesc;
ZeroMemory(&depthStencilViewDesc, sizeof(D3D11_DEPTH_STENCIL_VIEW_DESC));
depthStencilViewDesc.Format = DXGI_FORMAT_D24_UNORM_S8_UINT;
depthStencilViewDesc.ViewDimension = D3D11_DSV_DIMENSION_TEXTURE2D;
depthStencilViewDesc.Texture2D.MipSlice = 0;
hr = d3ddev.CreateDepthStencilView(texture_, &depthStencilViewDesc, &stencilView_);
D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
ZeroMemory(&shaderResourceViewDesc, sizeof(D3D11_SHADER_RESOURCE_VIEW_DESC));
shaderResourceViewDesc.Format = DXGI_FORMAT_R24_UNORM_X8_TYPELESS;
shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE2D;
shaderResourceViewDesc.Texture2D.MipLevels = 1;
shaderResourceViewDesc.Texture2D.MostDetailedMip = 0;
hr = d3ddev.CreateShaderResourceView(texture_, &shaderResourceViewDesc, &shaderView_);
Between these steps there is additional error checking, but all the create-functions return successfully. I then bind the texture, render my scene and unbind the texture using the following functions:
void D3DDepthTexture2D::bindAsTarget(D3DDevice& dev)
{
    dev.getDeviceContext().ClearDepthStencilView(stencilView_, D3D11_CLEAR_DEPTH | D3D11_CLEAR_STENCIL, 1.0f, 0);

    // Bind target
    dev.getDeviceContext().OMSetRenderTargets(0, 0, stencilView_);

    // Set viewport
    dev.setViewport(static_cast<float>(width_), static_cast<float>(height_), 0.0f, 0.0f);
}

void D3DDepthTexture2D::unbindAsTarget(D3DDevice& dev, float width, float height)
{
    // Unbind target
    dev.resetRenderTarget();

    // Reset viewport
    dev.setViewport(width, height, 0.0f, 0.0f);
}
My render-to-depth-texture routine basically looks like this (removing all the unnecessary details):
camera = buildCameraFromLight(light);
setCameraCBuffer(camera);
bindTexture();
activateShader();
for(Object j : objects) {setTransformationCBuffer(j); renderObject(j);}
deactivateShader();
unbindTexture();
Rendering the scene from the light's perspective to the normal render target (screen) results in the proper image (both the actual image and just rendering the depth values). I use a simple vertex shader that just transforms the vertices and a pixel shader that does nothing at all OR returns the depth values (I tried both, doesn't change anything about the end result since we don't care about the color buffer).
After clearing the texture and rendering to it, I render it onto a quad to my screen, but all I get is a red square - so the depth value is 1.0f, the value I've cleared the texture to. I'm really at a loss for what to do, I tried everything, implemented every possible solution from online tutorials or changed things around on my own, but nothing helps. Here's a list of all the things I already checked:
All FAILED(hr)-calls return false, no error message is printed to the console
I tested whether the geometry gets transformed properly by rendering the geometry and their depth values (z / w) to screen, which worked and looked correct
I tested calculating the depth values in the fragment shader and rendering to a normal render target (basically trying to render my color buffer to texture) instead of a depth stencil texture, but that didn't work either, red square
I tested different formats and format combinations for the shadow map and the views, which either caused the creation to fail or didn't change a thing
I checked whether any call between setting and unsetting my texture as the render target during the render call reset the depth stencil target to something else - not the case
I debugged my texture-to-screen/quad rendering routine already and it works properly with other textures, so I am in fact seeing what the depth texture looks like
I changed the geometry and camera perspective around to see whether that makes anything visible in the depth texture - it doesn't
I came across this similar StackOverflow problem and checked whether my default depth stencil buffer had the same dimensions, AA settings etc. as my texture - and it does (count 1, quality 0)
I really don't know what's up, I've been trying to debug this for hours and hours. I hope someone here can give me any advice on what I'm doing wrong or what I could try to fix this. I'm using C++11 with Direct3D11.
Note: I can't debug any of this using NSight or any Visual Studio tools since they don't seem to work properly with my system right now and I don't have any administrative rights to fix any of it. I just have to deal with it for now. I hope the given information and code samples are enough to provide a rough idea of what I could also try to make this work.
Thanks in advance.
I got NSight to work and debugged the whole thing with that. Turns out the depth texture was properly created and filled with the depth and stencil data and I just forgot that all the depth information is stored in the first channel - so I ignored the g and b data and used 1.0 for a and it worked. Using the g and b channels somehow made the whole thing red (maybe someone wants to add to this and explain why).
Debugging this got much easier once I could observe the texture that is present in the shader - I should've used a debugging tool like NSight or RenderDoc way earlier. Thanks to #EgorShkorov for the advice.
I am trying to render a texture that gets passed through a pixel shader.
Currently my shader is as follows:
float4 EffectProcess( float2 Tex : TEXCOORD0 ) : COLOR0
{
    return float4(1,0,0,1);
}

technique MyTechnique
{
    pass p0
    {
        VertexShader = null;
        PixelShader  = compile ps_2_0 EffectProcess();
    }
}
As you can see, it is a very basic shader that forces the pixels to be red.
UINT uiPasses = 0;
res = g_lpEffect->Begin(&uiPasses, 0);

for (UINT uiPass = 0; uiPass < uiPasses; uiPass++)
{
    res = g_lpEffect->BeginPass(uiPass);

    res = sprite->Begin(D3DXSPRITE_SORT_TEXTURE);
    res = sprite->Draw(tex, NULL, 0x0, 0x0, 0xFFFFFFFF);
    res = sprite->End();

    res = g_lpEffect->EndPass();
}
res = g_lpEffect->End();
This is how I am drawing the texture using the shader. I am not sure this is the correct way to do it though, and I have found very few resources on the subject.
The shader is being created correctly, and the texture as well; all calls return an HRESULT of S_OK. Yet when I run the code, the texture shows perfectly, without being overwritten by red.
Both sprite and effects by default store initial pipeline state and set up their own when Begin is called and then restore it when End is called. So I suspect that sprite->Begin(D3DXSPRITE_SORT_TEXTURE); will disable effect processing and your pixel shader is never called. You may try to pass something like D3DXSPRITE_DONOTMODIFY_RENDERSTATE into Begin to prevent it from modifying pipeline state, though this may break sprite rendering. It would be better to get rid of sprite altogether and write your own sprite shader (both vertex and pixel) because fixed pipeline rendering is mostly deprecated these days.
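For example (an untested sketch using the same variables as in your snippet):

res = g_lpEffect->BeginPass(uiPass);

// Ask the sprite not to touch pipeline state, so the effect's pixel shader stays bound.
// You may then have to set texture stage and blend state yourself.
res = sprite->Begin(D3DXSPRITE_SORT_TEXTURE | D3DXSPRITE_DONOTMODIFY_RENDERSTATE);
res = sprite->Draw(tex, NULL, 0x0, 0x0, 0xFFFFFFFF);
res = sprite->End();

res = g_lpEffect->EndPass();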
I have a fairly simple DirectX 11 framework setup that I want to use for various 2D simulations. I am currently trying to implement the 2D Wave Equation on the GPU. It requires I keep the grid state of the simulation at 2 previous timesteps in order to compute the new one.
How I went about it was this - I have a class called FrameBuffer, which has the following public methods:
bool Initialize(D3DGraphicsObject* graphicsObject, int width, int height);
void BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const;
void EndRender() const;
// Return a pointer to the underlying texture resource
const ID3D11ShaderResourceView* GetTextureResource() const;
In my main draw loop I have an array of 3 of these buffers. Every loop I use the textures from the previous 2 buffers as inputs to the next frame buffer and I also draw any user input to change the simulation state. I then draw the result.
int nextStep = simStep+1;
if (nextStep > 2)
    nextStep = 0;

mFrameArray[nextStep]->BeginRender(0.0f,0.0f,0.0f,1.0f);
{
    mGraphicsObj->SetZBufferState(false);

    mQuad->GetRenderer()->RenderBuffers(d3dGraphicsObj->GetDeviceContext());

    ID3D11ShaderResourceView* texArray[2] = { mFrameArray[simStep]->GetTextureResource(),
                                              mFrameArray[prevStep]->GetTextureResource() };

    result = mWaveShader->Render(d3dGraphicsObj, mQuad->GetRenderer()->GetIndexCount(), texArray);
    if (!result)
        return false;

    // perform any extra input
    I_InputSystem *inputSystem = ServiceProvider::Instance().GetInputSystem();
    if (inputSystem->IsMouseLeftDown()) {
        int x,y;
        inputSystem->GetMousePos(x,y);

        int width,height;
        mGraphicsObj->GetScreenDimensions(width,height);

        float xPos = MapValue((float)x,0.0f,(float)width,-1.0f,1.0f);
        float yPos = MapValue((float)y,0.0f,(float)height,-1.0f,1.0f);

        mColorQuad->mTransform.position = Vector3f(xPos,-yPos,0);

        result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
        if (!result)
            return false;
    }

    mGraphicsObj->SetZBufferState(true);
}
mFrameArray[nextStep]->EndRender();

prevStep = simStep;
simStep = nextStep;

ID3D11ShaderResourceView* currTexture = mFrameArray[nextStep]->GetTextureResource();

// Render texture to screen
mGraphicsObj->SetZBufferState(false);
mQuad->SetTexture(currTexture);

result = mQuad->Render(&viewMatrix,&orthoMatrix);
if (!result)
    return false;

mGraphicsObj->SetZBufferState(true);
The problem is nothing is happening. Whatever I draw appears on the screen (I draw using a small quad), but no part of the simulation is actually run. I can provide the shader code if required, but I am certain it works since I've implemented this before on the CPU using the same algorithm. I'm just not certain how D3D render targets work and whether I'm simply drawing wrong every frame.
EDIT 1:
Here is the code for the begin and end render functions of the frame buffers:
void D3DFrameBuffer::BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const {
    ID3D11DeviceContext *context = pD3dGraphicsObject->GetDeviceContext();

    context->OMSetRenderTargets(1, &(mRenderTargetView._Myptr), pD3dGraphicsObject->GetDepthStencilView());

    float color[4];
    // Setup the color to clear the buffer to.
    color[0] = clearRed;
    color[1] = clearGreen;
    color[2] = clearBlue;
    color[3] = clearAlpha;

    // Clear the back buffer.
    context->ClearRenderTargetView(mRenderTargetView.get(), color);

    // Clear the depth buffer.
    context->ClearDepthStencilView(pD3dGraphicsObject->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);
}

void D3DFrameBuffer::EndRender() const {
    pD3dGraphicsObject->SetBackBufferRenderTarget();
}
Edit 2: Ok, after I set up the DirectX debug layer I saw that I was using an SRV as a render target while it was still bound to the Pixel stage in one of the shaders. I fixed that by setting the shader resources to NULL after I render with the wave shader, but the problem still persists - nothing actually gets run or updated. I took the render target code from here and slightly modified it, if it's any help: http://rastertek.com/dx11tut22.html
Okay, as I understand it, you need multipass rendering to texture.
Basically you do it like I've described here: link
You create the textures with both D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_RENDER_TARGET bind flags.
You create render target views from those textures.
You set the first texture as input (*SetShaderResources()) and the second texture as output (OMSetRenderTargets()).
You Draw().
Then you bind the second texture as input and the third as output.
Draw().
etc.
Additional advice:
If your target GPU is capable of writing to UAVs from non-compute shaders, you can use that. It is much simpler and less error prone.
If your target GPU supports it, consider using a compute shader. It is a pleasure.
Don't forget to enable the DirectX debug layer. Sometimes we make obvious errors and the debug output can point to them.
Use a graphics debugger to review your textures after each draw call.
Edit 1:
As far as I can see, you call BeginRender and OMSetRenderTargets only once, so all rendering goes into mRenderTargetView. But what you need is to interleave:
SetSRV(texture1);
SetRT(texture2);
Draw();
SetSRV(texture2);
SetRT(texture3);
Draw();
SetSRV(texture3);
SetRT(backBuffer);
Draw();
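In raw D3D11 calls one step of that interleaving looks roughly like this (just a sketch with placeholder names; the important detail, matching your Edit 2, is unbinding the SRVs before one of those textures becomes a render target again):

ID3D11ShaderResourceView* nullSRV[2] = { NULL, NULL };

// write into "next", read from the two previous simulation states
context->OMSetRenderTargets(1, &rtv[next], NULL);
ID3D11ShaderResourceView* inputs[2] = { srv[curr], srv[prev] };
context->PSSetShaderResources(0, 2, inputs);
context->DrawIndexed(quadIndexCount, 0, 0);

// unbind the SRVs again so the next pass can use these textures as render targets
context->PSSetShaderResources(0, 2, nullSRV);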
Also, we don't know what mRenderTargetView is yet.
So, before
result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
there must be an OMSetRenderTargets call somewhere.
It's probably better to review your Begin()/End() design to make the resource binding more clearly visible.
Happy coding! =)
In my OpenGL program I have two shaders. One renders with textures, and the other renders just solid colors. After compiling and linking a shader, I enable a texture coordinate vertex attribute array depending on whether or not the shader contains the attribute.
//This code is called after the shaders are compiled.
//Get the handles
textureU = glGetUniformLocation(program,"texture");
tintU = glGetUniformLocation(program,"tint");
viewMatrixU = glGetUniformLocation(program,"viewMatrix");
transformMatrixU = glGetUniformLocation(program,"transformMatrix");
positionA = glGetAttribLocation(program,"position");
texcoordA = glGetAttribLocation(program,"texcoord");
//Detect if this shader can handle textures
if(texcoordA < 0 || textureU < 0) hasTexture = false;
else hasTexture = true;
//Enable Attributes
glEnableVertexAttribArray(positionA);
if(hasTexture) glEnableVertexAttribArray(texcoordA);
If I am rendering an item that is textured, each element in verts consists of 5 values (x,y,z,tx,ty), but if the item isn't textured, each element in verts contains only 3 values (x,y,z).
Here is the problem: When the first item rendered in the GL context does not have a texture, glDrawElements segfaults! However, if the first item rendered does have a texture, it works fine, and any untextured items after the textured one work fine (that is, until a new context is created).
This chunk of code renders an item
glBindBuffer(GL_ARRAY_BUFFER,engine->vertBuffer);
glBufferData(GL_ARRAY_BUFFER,sizeof(GLfloat)*item->verts.size(),&item->verts[0],GL_DYNAMIC_DRAW);
item->shader->SetShader();
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER,engine->elementBuffer);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,sizeof(GLuint) * item->indicies.size(),&item->indicies[0],GL_DYNAMIC_DRAW);
if(item->usingTexture)
item->shader->SetTexture(item->texture->handle);
glUniformMatrix4fv(item->shader->transformMatrixU,1,GL_TRUE,&item->matrix.contents[0]);
glUniformMatrix4fv(item->shader->viewMatrixU,1,GL_TRUE,&item->batch->matrix.contents[0]);
glUniform4f(item->shader->tintU,item->color.x,item->color.y,item->color.z,item->color.w);
glDrawElements(GL_TRIANGLES,item->indicies.size(),GL_UNSIGNED_INT,0); //segfault
Here is the function seen above that sets the shader.
glUseProgram(program); currentShader = program;
GLsizei stride = 12;
if(hasTexture) stride = 20;
glVertexAttribPointer(positionA,3,GL_FLOAT,GL_FALSE,stride,0);
if(hasTexture)
glVertexAttribPointer(texcoordA,2,GL_FLOAT,GL_FALSE,stride,(void*)12);
As far as I know, this problem is not apparent on Intel Integrated Graphics, which seem to be quite lenient.
Edit: If it is useful to know, I am using GLFW and GLEW.
Try adding corresponding glDisableVertexAttribArray()s after your draw call:
glEnableVertexAttribArray(positionA);
if(hasTexture) glEnableVertexAttribArray(texcoordA);
// draw
glDisableVertexAttribArray(positionA);
if(hasTexture) glDisableVertexAttribArray(texcoordA);
The problem is that I enabled the vertex attribute arrays after compiling my shader. This is not where I should have been enabling them.
I enabled the texcoord attribute when compiling the shader, but since the first item didn't use it, glDrawElements would segfault.
I fixed this by enabling the attribute when setting the shader.
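For reference, the shader-setting function now looks roughly like this (a trimmed sketch; the class name and wrapping are mine, only the member names come from the code above):

void Shader::SetShader()
{
    glUseProgram(program);
    currentShader = program;

    GLsizei stride = hasTexture ? 20 : 12;   // 5 floats vs 3 floats per vertex, in bytes

    // Enable and point the attributes here, per shader, instead of once at compile time
    glEnableVertexAttribArray(positionA);
    glVertexAttribPointer(positionA, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);

    if(hasTexture)
    {
        glEnableVertexAttribArray(texcoordA);
        glVertexAttribPointer(texcoordA, 2, GL_FLOAT, GL_FALSE, stride, (void*)12);
    }
}

Combining this with the glDisableVertexAttribArray() calls from the other answer after drawing also keeps a stale texcoord array from leaking into the next untextured draw.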
I'm currently trying to draw simple meshes using different textures (using C# and OpenTK). I read a lot about TextureUnit and bindings, and that's my current implementation (not working as expected):
private void ApplyOpaquePass()
{
    GL.UseProgram(this.shaderProgram);
    GL.CullFace(CullFaceMode.Back);

    while (this.opaqueNodes.Count > 0)
        Draw(this.opaqueNodes.Pop());

    GL.UseProgram(0);
}
And my draw method:

private void Draw(Assets.Model.Geoset geoset)
{
    GL.ActiveTexture(TextureUnit.Texture1);
    GL.BindTexture(TextureTarget.Texture2D, geoset.TextureId /*buffer id returned by GL.GenTextures*/ );
    GL.Uniform1(GL.GetUniformLocation(this.shaderProgram, "Texture1"), 1 /*see note below*/ );

    //Note: if I'm correct, it should be 1 when using TextureUnit.Texture1
    // (2 for Texture2...). Note that this doesn't seem to work, since no
    // texture at all is sent to the shader; however, a texture
    // is shown when specifying any other number (0, 2, 3...)

    // Draw vertices & indices buffers...
}
And my shader code (that shouldn't be the problem since uv mapping is ok):
uniform sampler2D Texture1;

void main(void)
{
    gl_FragColor = texture2D(Texture1, gl_TexCoord[0].st);
}
What's the problem:
Since geoset.TextureId can vary from one geoset to another, I'm expecting different texture to be sent to the shader.
Instead, always the same texture is applied to all objects (geosets).
Ideas:
Using a different TextureUnit for each texture (works well), but what happens if we have 2000 different textures? If my understanding is right, we only need multiple TextureUnits if we want to use several textures at the same time in the shader.
I first thought that uniforms couldn't be changed once defined, but a test with a boolean uniform told me that it was actually possible.
private void Draw(Assets.Model.Geoset geoset)
{
    GL.ActiveTexture(TextureUnit.Texture1);
    GL.BindTexture(TextureTarget.Texture2D, geoset.TextureId);
    GL.Uniform1(GL.GetUniformLocation(this.shaderProgram, "Texture1"), 1);

    //added line...
    GL.Uniform1(GL.GetUniformLocation(this.shaderProgram, "UseBaseColor"), (geoset.Material.FilterMode == Assets.Model.Material.FilterType.Blend) ? 1 : 0);

    // Draw vertices & indices buffers...
}
Shader code:
uniform sampler2D Texture1;
uniform bool UseBaseColor;

void main(void)
{
    gl_FragColor = texture2D(Texture1, gl_TexCoord[0].st);

    if (UseBaseColor)
        gl_FragColor = mix(vec4(0,1,1,1), gl_FragColor, gl_FragColor.a);
}
This code works great, drawing some geosets with a base color instead of transparency, which (should?) prove that uniforms can be changed here. Why isn't this working with my textures?
Should I use a different shader program per geoset?
Thanks in advance for your answers :)
Regards,
Bruce
EDIT: that's how I generate textures in the renderer:
override public uint GenTexture(Bitmap bmp)
{
    uint texture;
    GL.GenTextures(1, out texture);

    //I disabled this line because I now bind the texture before drawing a geoset
    //Anyway, uncommenting this line doesn't show a better result
    //GL.BindTexture(TextureTarget.Texture2D, texture);

    System.Drawing.Imaging.BitmapData data = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);

    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
        OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);

    bmp.UnlockBits(data);

    //temp settings
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
    GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);

    return texture;
}
I finally solved my problem !
All the answers improved my understanding and led me to the solution, which lay in two major problems:
1) as Calvin1602 said, it is very important to bind a newly created texture before calling glTexImage2D.
2) UncleZeiv also drew my attention to the last parameter of GL.Uniform1. The OpenTK tutorial is very misleading, because the author passes the id of the texture object to the function, which happens to work there only because the order of texture generation exactly matches the index of the TextureUnit used.
As I was unsure that my understanding was correct, I had wrongly changed this parameter back to geoset.TextureId.
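For reference, the correct pairing of unit, binding and sampler uniform (written here as the raw OpenGL calls that the OpenTK GL.* methods wrap one-to-one; the variable names are placeholders) is:

glActiveTexture(GL_TEXTURE1);               // select texture unit 1
glBindTexture(GL_TEXTURE_2D, textureId);    // bind this geoset's texture to that unit
glUniform1i(samplerLocation, 1);            // the sampler uniform gets the unit index (1),
                                            // not the texture object id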
Thanks !
You don't need multiple shader programs if the only thing you are changing is the texture. Also uniform locations are constant throughout the lifetime of a shader program, so there is no need to retrieve those each frame. However, you do need to rebind the texture each time you change it, and you will need to bind each distinct texture to a separate texture ID.
As a result, I would conclude that what you posted ought to work and so the problem is likely somewhere else in your code.
EDIT: After the updated version it should still work. However I am concerned about why the following line is commented out:
//GL.BindTexture(TextureTarget.Texture2D, texture);
This should be in there. Otherwise you will keep overwriting the same texture (which is ridiculous). You need to bind the texture before you initialize it. Now it is entirely conceivable that something else is broken, but given what I see now, this is the only error that jumps out at me.