DirectX 11 Multiple Constant Buffers - c++

I'm trying to make my bounding boxes render on top of the models; however, I can't get both to render at the same time. I've narrowed it down to a constant buffer issue, so I've set them into 2 different slots, e.g.:
BoundingBox
//Constant Buffer Code!
CONSTANT_BUFFER0 cb0_values;
cb0_values.WorldViewProjection = (*world) * (*view) * (*proj);
//Draw the buffer
m_pImmediateContext->DrawIndexed(8, 0, 0);
m_pImmediateContext->UpdateSubresource(m_pConstantBuffer1, 0, 0, &cb0_values, 0, 0);
m_pImmediateContext->VSSetConstantBuffers(1, 1, &m_pConstantBuffer1);
Models
m_pImmediateContext->UpdateSubresource(m_pConstantBuffer, 0, 0, &modelValues, 0, 0);
m_pImmediateContext->VSSetConstantBuffers(0, 1, &m_pConstantBuffer);
m_pImmediateContext->VSSetShader(m_pVShader, 0, 0);
m_pImmediateContext->PSSetShader(m_pPShader, 0, 0);
m_pImmediateContext->IASetInputLayout(m_pInputLayout);
m_pImmediateContext->PSSetSamplers(0, 1, &m_pSampler0);
m_pImmediateContext->PSSetShaderResources(0, 1, &m_pTexture0);
However, this doesn't work: only the models appear correctly and my cube is nowhere to be seen. If I comment out the model draw and instead do:
m_pImmediateContext->UpdateSubresource(m_pConstantBuffer1, 0, 0, &cb0_values, 0, 0);
m_pImmediateContext->VSSetConstantBuffers(0, 1, &m_pConstantBuffer1);
I can see my cube but no models (obviously). There must be some simple concept I'm not grasping, but I can't work it out.
Thank you,

After more wall-headbutting I finally figured it out: the input layout and shaders were being set for my models but never set back when it came to render the box again.
TLDR - the BoundingBox draw needed:
UINT stride = sizeof(POS_COL_VERTEX);
UINT offset = 0;
m_pImmediateContext->IASetVertexBuffers(0, 1, &m_pVertexBuffer, &stride, &offset);
//Constant Buffer Code!
CONSTANT_BUFFER0 cb0_values;
cb0_values.WorldViewProjection = (*world) * (*view) * (*proj);
m_pImmediateContext->VSSetShader(m_pVertexShader, 0, 0);
m_pImmediateContext->PSSetShader(m_pPixelShader, 0, 0);
m_pImmediateContext->IASetInputLayout(m_pInputLayout);
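Put together, a complete per-draw sequence for the box might look like the sketch below (names as in the snippets above; the constant-buffer slot is an assumption). Note also that in the first snippet of the question, DrawIndexed is issued before UpdateSubresource and VSSetConstantBuffers, so the box would have been drawn with stale constant-buffer data in any case; the update and bind belong before the draw.

```cpp
// Sketch, not verbatim code from the answer: re-set ALL pipeline state the
// model pass may have changed, update the constant buffer, THEN draw.
UINT stride = sizeof(POS_COL_VERTEX);
UINT offset = 0;
m_pImmediateContext->IASetVertexBuffers(0, 1, &m_pVertexBuffer, &stride, &offset);
m_pImmediateContext->IASetInputLayout(m_pInputLayout);
m_pImmediateContext->VSSetShader(m_pVertexShader, 0, 0);
m_pImmediateContext->PSSetShader(m_pPixelShader, 0, 0);

CONSTANT_BUFFER0 cb0_values;
cb0_values.WorldViewProjection = (*world) * (*view) * (*proj);
m_pImmediateContext->UpdateSubresource(m_pConstantBuffer1, 0, 0, &cb0_values, 0, 0);
m_pImmediateContext->VSSetConstantBuffers(1, 1, &m_pConstantBuffer1); // slot must match the HLSL register
m_pImmediateContext->DrawIndexed(8, 0, 0);
```

D3D11 is a state machine: every draw inherits whatever was bound last, and nothing is reset between draws, which is why each object has to bind everything it depends on.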

Related

Issue using glTexCoordPointer()

I'm fairly new to OpenGL (and GLSL) and I have an issue using glTexCoordPointer().
I have the texture loaded in and it is rendering on the object correctly (a single quad) but I also get another quad appearing which is a single colour not a part of the loaded texture.
The arrays are defined as follows:
static const GLfloat obj_vert_buf[] = {
-1, 0, -1,
-1, 0, 1,
1, 0, 1,
1, 0, -1
};
static const GLfloat obj_tex_buf[] = {
0, 0,
0, 1,
1, 1,
1, 0
};
And the relevant excerpt from the draw function:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY_EXT);
glGenBuffers(1, &obj_id);
glTexCoordPointer(2, GL_FLOAT, 0, obj_tex_buf);
glVertexPointer(3, GL_FLOAT, 0, obj_vert_buf);
glDrawArrays(GL_QUADS, 0, sizeof(obj_vert_buf) / sizeof(GLfloat));
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY_EXT);
To my understanding, glTexCoordPointer()'s first argument specifies the number of elements per vertex, which would be two, as in:
glTexCoord2f(0.0, 0.0);
The second argument is the type, GLfloat.
The third argument is the stride between consecutive sets of coordinates, so zero for tightly packed data (I have also tried it with 2 * sizeof(GLfloat), to no change).
And the fourth argument is a pointer to the start of the data, i.e. obj_tex_buf.
The quad renders correctly and the texture is drawn on it correctly, but I also get another random shape coming off from its centre, textured incorrectly. Any thoughts would be great. The additional quad isn't visible without the glTexCoordPointer() line.
From the docs:
count
Specifies the number of indices to be rendered.
Thus you have to call glDrawArrays(GL_QUADS, 0, 4);
Please note that GL_QUADS isn't officially supported anymore as of OpenGL 3.1.

Getting exact pixel from texture

I have a question about textures in OpenGL. I am trying to use them for GPGPU operations, but I am stuck at the beginning. I have created a texture like this (a 4x4 matrix):
OGLTexImageFloat dataTexImage = new OGLTexImageFloat(4, 4, 4);
dataTexImage.setPixel(0, 0, 0, 0);
dataTexImage.setPixel(0, 1, 0, 10);
dataTexImage.setPixel(0, 2, 0, 5);
dataTexImage.setPixel(0, 3, 0, 15);
dataTexImage.setPixel(1, 0, 0, 10);
dataTexImage.setPixel(1, 1, 0, 0);
dataTexImage.setPixel(1, 2, 0, 2);
dataTexImage.setPixel(1, 3, 0, 1000);
dataTexImage.setPixel(2, 0, 0, 5);
dataTexImage.setPixel(2, 1, 0, 2);
dataTexImage.setPixel(2, 2, 0, 0);
dataTexImage.setPixel(2, 3, 0, 2);
dataTexImage.setPixel(3, 0, 0, 15);
dataTexImage.setPixel(3, 1, 0, 1000);
dataTexImage.setPixel(3, 2, 0, 2);
dataTexImage.setPixel(3, 3, 0, 0);
texture = new OGLTexture2D(gl, dataTexImage);
Now I would like to add the value at matrix position [1,1] to the value of every pixel (matrix entry). Since this applies to every pixel, I should probably do it in the fragment shader, but I don't know how to fetch an exact pixel (the [1,1] entry of the matrix) from the texture. Can someone explain how to do this?
If you are trying to add a single constant value (i.e. the value at [1,1]) to the entire image (every pixel of the rendered image), then you should pass that constant value as a separate uniform into your shader program.
Then, in the fragment shader, add this constant value to the current pixel colour (the vec4 you sample from the texture).

Swapping between DirectX11 Vertex and Pixel Shaders

So I have been following a tutorial in the Frank Luna book "3D Game Programming with DirectX 11" and have been working on a sky-box. The sky-box renders correctly apart from a small tweak needed to the texture. I have created a separate vertex and pixel shader for the sky-box, as it doesn't need so much work in the .fx file. When I draw my objects they all draw, but when I use the normal vertex and pixel shader (which otherwise works fine) my objects appear black (I think they are not able to get the colour from their shader).
_pImmediateContext->RSSetState(_solidFrame);
_pImmediateContext->VSSetShader(_pSkyVertexShader, nullptr, 0);
_pImmediateContext->PSSetShaderResources(3, 1, &_pTextureSkyMap);
_pImmediateContext->PSSetShaderResources(0, 1, &_pTextureRV);
_pImmediateContext->PSSetShaderResources(1, 1, &_pSpecTextureRV);
_pImmediateContext->PSSetShaderResources(2, 1, &_pNormTextureRV);
_pImmediateContext->PSSetSamplers(0, 1, &_pSamplerLinear);
_pImmediateContext->PSSetShader(_pSkyPixelShader, nullptr, 0);
//Imported Sky
world = XMLoadFloat4x4(&_sky.GetWorld());
cb.mWorld = XMMatrixTranspose(world);
_pImmediateContext->UpdateSubresource(_pConstantBuffer, 0, nullptr, &cb, 0, 0); //Copies the constant buffer to the shaders.
//Draw the Pitch
_sky.Draw(_pd3dDevice, _pImmediateContext);
_pImmediateContext->VSSetShader(_pVertexShader, nullptr, 0);
_pImmediateContext->VSSetConstantBuffers(0, 1, &_pConstantBuffer);
_pImmediateContext->PSSetConstantBuffers(0, 1, &_pConstantBuffer);
_pImmediateContext->PSSetShaderResources(0, 1, &_pTextureMetalRV);
_pImmediateContext->PSSetShaderResources(1, 1, &_pSpecTextureRV);
_pImmediateContext->PSSetShaderResources(2, 1, &_pNormTextureRV);
_pImmediateContext->PSSetSamplers(0, 1, &_pSamplerLinear);
_pImmediateContext->PSSetShader(_pPixelShader, nullptr, 0);
//Floor
// Render opaque objects //
// Set vertex buffer for the Floor
_pImmediateContext->IASetVertexBuffers(0, 1, &_pVertexBufferFloor, &stride, &offset);
// Set index buffer
_pImmediateContext->IASetIndexBuffer(_pIndexBufferFloor, DXGI_FORMAT_R16_UINT, 0);
world = XMLoadFloat4x4(&_worldFloor);
cb.mWorld = XMMatrixTranspose(world);
_pImmediateContext->UpdateSubresource(_pConstantBuffer, 0, nullptr, &cb, 0, 0); //Copies the constant buffer to the shaders.
_pImmediateContext->DrawIndexed(96, 0, 0);
//Imported Pitch
world = XMLoadFloat4x4(&_pitch.GetWorld());
cb.mWorld = XMMatrixTranspose(world);
_pImmediateContext->UpdateSubresource(_pConstantBuffer, 0, nullptr, &cb, 0, 0); //Copies the constant buffer to the shaders.
//Draw the Pitch
_pitch.Draw(_pd3dDevice, _pImmediateContext);
_pImmediateContext->PSSetShaderResources(0, 1, &_pTextureMetalRV);
_pImmediateContext->PSSetShaderResources(1, 1, &_pSpecTextureMetalRV);
_pImmediateContext->PSSetShaderResources(2, 1, &_pNormTextureMetalRV);
Am I missing a line of code to clear something between changing the shaders, so that I'm not using the wrong data?
It was a problem with a small edit to the .fx file that I hadn't noticed. It is now fixed by reverting that file.
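That said, the state-leak worry in the question is legitimate in general: D3D11 keeps shader resource views bound until they are replaced, so a later pass can read a stale SRV from an earlier one. A common pattern is to unbind them explicitly between passes (a sketch; the array size of 4 matches the slots used above and is otherwise an assumption):

```cpp
// Hypothetical sketch, not code from the question: unbind SRV slots 0-3 so
// the next shader cannot accidentally read resources left over from the sky
// pass. Passing null pointers to PSSetShaderResources clears the slots.
ID3D11ShaderResourceView* nullSRVs[4] = { nullptr, nullptr, nullptr, nullptr };
_pImmediateContext->PSSetShaderResources(0, 4, nullSRVs);
```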

Strange blending when rendering self-transparent texture to the framebuffer

I'm trying to render self-transparent textures to the framebuffer, but I'm not getting what I expected: everything previously rendered to the framebuffer gets ignored, and the texture blends with the colour I cleared my main canvas with.
That's what I would like to get, but without using framebuffers:
package test;
import com.badlogic.gdx.*;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.*;
public class GdxTest extends ApplicationAdapter {
SpriteBatch batch;
Texture img;
@Override
public void create () {
batch = new SpriteBatch();
Pixmap pixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
pixmap.setColor(1, 1, 1, 1);
pixmap.fillRectangle(0, 0, 1, 1);
// Generating a simple 1x1 white texture
img = new Texture(pixmap);
pixmap.dispose();
}
@Override
public void render () {
Gdx.gl.glClearColor(1, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.begin();
batch.setColor(1, 1, 1, 1);
batch.draw(img, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setColor(0, 0, 0, 0.5f);
batch.draw(img, 0, 0, 300, 300);
batch.end();
}
}
And it works as perfectly as it should do:
http://i.stack.imgur.com/wpFNg.png
And here's what I get when using the framebuffer (I can't understand why the second rendered texture doesn't blend with the previous one, as it does without the framebuffer):
package test;
import com.badlogic.gdx.*;
import com.badlogic.gdx.graphics.*;
import com.badlogic.gdx.graphics.g2d.*;
import com.badlogic.gdx.graphics.glutils.*;
public class GdxTest extends ApplicationAdapter {
SpriteBatch batch;
Texture img;
FrameBuffer buffer;
TextureRegion region;
@Override
public void create () {
batch = new SpriteBatch();
Pixmap pixmap = new Pixmap(1, 1, Pixmap.Format.RGBA8888);
pixmap.setColor(1, 1, 1, 1);
pixmap.fillRectangle(0, 0, 1, 1);
// Generating a simple 1x1 white texture
img = new Texture(pixmap);
pixmap.dispose();
// Generating a framebuffer
buffer = new FrameBuffer(Pixmap.Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
region = new TextureRegion(buffer.getColorBufferTexture());
region.flip(false, true);
}
@Override
public void render () {
// Filling with red shows the problem
Gdx.gl.glClearColor(1, 0, 0, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
buffer.begin();
batch.begin();
Gdx.gl.glClearColor(1, 1, 1, 1);
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT);
batch.setColor(1, 1, 1, 1);
batch.draw(img, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
batch.setColor(0, 0, 0, 0.5f);
batch.draw(img, 0, 0, 300, 300);
batch.end();
buffer.end();
batch.begin();
batch.setColor(1, 1, 1, 1);
batch.draw(region, 0, 0);
batch.end();
}
}
And an unpredictable result:
http://i.stack.imgur.com/UdDKD.png
So how could I make the framebuffer version work the way the first version does? ;)
The easy answer is to disable blending when rendering to the screen.
But I think it is good to understand why this is happening if you want to use FBO. So let's walk through what's actually going on.
First make sure to understand what the color of the texture and the color of the batch (the vertex color) does: they are multiplied. So when setting the batch color to 0,0,0,0.5 and the texture pixel (texel) is 1,1,1,1 this will result in a value of 1*0,1*0,1*0,1*0.5 = 0,0,0,0.5.
Next make sure to understand how blending works. Blending is enabled by default and uses the SRC_ALPHA and ONE_MINUS_SRC_ALPHA functions. This means that the source value (the texel) is multiplied by the source alpha and the destination value (the screen pixel) is multiplied by one minus the source alpha. So if your screen pixel has the value 1,1,1,1 and your texel has the value 0,0,0,0.5, then the screen pixel will be set to: (0.5*0, 0.5*0, 0.5*0, 0.5*0.5) + ((1-0.5)*1, (1-0.5)*1, (1-0.5)*1, (1-0.5)*1), which is (0,0,0,0.25) + (0.5, 0.5, 0.5, 0.5) = (0.5, 0.5, 0.5, 0.75).
So let's see how that works for you in your first code:
You clear the screen with 1, 0, 0, 1, in other words: every pixel of the screen contains the value 1, 0, 0, 1.
Then you render a full rectangle with each texel value 1,1,1,1, every pixel of the screen now contains the value 1, 1, 1, 1.
Then you render a smaller rectangle with each texel value 0,0,0,0.5, every pixel on that part of the screen now contains the value 0.5, 0.5, 0.5, 0.75.
Got a feeling about the issue already? Let's see what happens in your second code:
You clear the screen with 1, 0, 0, 1: every pixel of the screen contains the value 1, 0, 0, 1.
You bind the FBO and clear it with 1, 1, 1, 1: every pixel of the FBO contains the value 1, 1, 1, 1.
You render a full rectangle with each texel value 1,1,1,1 to the FBO: every pixel of the FBO now contains the value 1,1,1,1.
You render a smaller rectangle with each texel value 0,0,0,0.5, every pixel on that part of the FBO now contains the value 0.5, 0.5, 0.5, 0.75.
Then you bind the screen again as the render target of which each pixel still contains the value 1, 0, 0, 1.
Finally you render the FBO texture as a full rectangle to the screen, causing these texels to be blended with the screen pixels. For the smaller rectangle this means blending 0.5, 0.5, 0.5, 0.75 multiplied by 0.75 with 1, 0, 0, 1 multiplied by 1-0.75=0.25, which results in 0.375, 0.375, 0.375, 0.5625 plus 0.25, 0, 0, 0.25. So the final colour is 0.625, 0.375, 0.375, 0.8125.
Make sure to understand this process, otherwise it can cause some frustratingly weird issues. If you find it hard to follow, take pen and paper and manually calculate the value for each step.

Using multiple QGLWidgets (QT) to display (same) 3D Texture?

I have a question about OpenGL and Qt. I haven’t worked with OpenGL so far and got the code from somebody else. Unfortunately I can't ask him.
I create multiple CTAGLWidgets (ref. Constructor) to display CT image data from different perspectives (sagittal, axial, coronal). Therefore I want all OpenGL widgets to load the created 3D texture (see the last part of the source code). So far only the last widget loads the texture; the other widgets keep showing a black screen.
I can provide additional code if necessary (shader, ...), but I think I added all relevant parts.
What do I need to change? Or can you provide a link that could help me solve the problem? Anything will help!
Code snippet of the .cpp file:
CTAGLWidget::CTAGLWidget(QWidget* parent ) : QGLWidget (parent) {
}
void CTAGLWidget::initShaders() {
setlocale(LC_NUMERIC, "C");
if (!program.addShaderFromSourceFile(QGLShader::Vertex, ":/vshader.glsl"))
close();
if (!program.addShaderFromSourceFile(QGLShader::Fragment, ":/fshader.glsl"))
close();
if (!program.link())
close();
if (!program.bind())
close();
setlocale(LC_ALL, "");
}
void CTAGLWidget::initializeGL() {
initializeGLFunctions();
initShaders();
qglClearColor(Qt::black);
zoom = 1.0;
qNow = QQuaternion(1,0,0,0);
min = QVector3D( 1.0, 1.0, 1.0);
max = QVector3D(-1.0,-1.0,-1.0);
center = QVector3D(0,0,0);
glGenBuffers(1,&vboQuadId);
std::vector<QVector3D> vertex;
vertex.push_back(QVector3D(-2,-2, 0));
vertex.push_back(QVector3D( 0, 0, 0));
vertex.push_back(QVector3D( 2,-2, 0));
vertex.push_back(QVector3D( 1, 0, 0));
vertex.push_back(QVector3D( 2, 2, 0));
vertex.push_back(QVector3D( 1, 1, 0));
vertex.push_back(QVector3D(-2, 2, 0));
vertex.push_back(QVector3D( 0, 1, 0));
glBindBuffer(GL_ARRAY_BUFFER,vboQuadId);
glBufferData(GL_ARRAY_BUFFER,vertex.size()*sizeof(QVector3D),vertex.data(),GL_STATIC_DRAW);
}
void CTAGLWidget::paintGL() {
glClear(GL_COLOR_BUFFER_BIT);
QMatrix4x4 P(projection);
P.scale(zoom,zoom,zoom);
modelView.setToIdentity();
modelView.rotate(qNow.conjugate());
modelView.translate(-center);
program.bind();
program.setUniformValue("uPMat", P);
program.setUniformValue("uMVMat", modelView);
program.setUniformValue("uColor", QVector4D(1.0,0.0,0.0,1.0));
glBindBuffer(GL_ARRAY_BUFFER, vboQuadId);
int vertexLocation = program.attributeLocation("a_position");
program.enableAttributeArray(vertexLocation);
glVertexAttribPointer(vertexLocation, 3, GL_FLOAT, GL_FALSE,
2*sizeof(QVector3D), 0);
int texAttribLoc = program.attributeLocation("aTexCoord");
program.enableAttributeArray(texAttribLoc);
glVertexAttribPointer(texAttribLoc, 3, GL_FLOAT, GL_FALSE,
2*sizeof(QVector3D), (const void*) sizeof(QVector3D));
glDrawArrays(GL_QUADS,0,4);
}
Here the 3D texture is created from the QByteArray "texture":
void CTScanMain::setTexture() {
...
glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB8,
ctAnalyser->x(), ctAnalyser->y(), ctAnalyser->z(),
0, GL_RGB, GL_UNSIGNED_BYTE, texture);
...
}
There are no further OpenGL calls in the program.
One possible solution can be found looking at the constructor of the QGLWidget:
QGLWidget ( QWidget * parent = 0, const QGLWidget * shareWidget = 0, Qt::WindowFlags f = 0 )
Passing the first OpenGL widget object as the "shareWidget" argument makes further widgets share the same texture objects.
Qt documentation:
"If shareWidget is a valid QGLWidget, this widget will share OpenGL display lists and texture objects with shareWidget."
A similar question has been asked at:
http://qt-project.org/forums/viewthread/8265