How can the particle objects have different textures? - C++

I have a problem with my particle systems. When I create several particle systems, every one of them ends up rendering with the texture of the system that was created last. This is my code:
m_pParticleBuffer = new CStructuredBuffer;
m_pParticleBuffer->Create(sizeof(tParticle), m_iMaxParticle, nullptr);
m_pSharedBuffer = new CStructuredBuffer;
m_pSharedBuffer->Create(sizeof(tParticleShared), 1, nullptr);
m_pMesh = CResMgr::GetInst()->FindRes<CMesh>(L"PointMesh");
m_pMtrl = CResMgr::GetInst()->FindRes<CMaterial>(L"ParticleMtrl");
Ptr<CTexture> pParticle = _pTexture;
m_pMtrl->SetData(SHADER_PARAM::TEX_0, pParticle.GetPointer());
m_pUpdateMtrl = CResMgr::GetInst()->FindRes<CMaterial>(L"ParticleUpdateMtrl");
This is where the particles are initialized.
float fRatio = tData[_in.iInstID].m_fCurTime / tData[_in.iInstID].m_fLifeTime;
float4 vCurColor = (g_vec4_1 - g_vec4_0) * fRatio + g_vec4_0;
return vCurColor * g_tex_0.Sample(g_sam_0, _in.vUV);
This is the HLSL pixel shader for the particles.
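One likely cause, judging from this code alone: FindRes<CMaterial>(L"ParticleMtrl") returns a shared material resource, so every particle system's SetData(SHADER_PARAM::TEX_0, ...) call overwrites the same texture slot, and the last system initialized wins. A minimal sketch of one way out, assuming the engine can provide a per-system material copy (Clone() here is a hypothetical helper, not a confirmed engine function):

// Hypothetical fix: give each particle system its own material instance
// instead of writing the texture into the shared "ParticleMtrl" resource.
// Clone() is an assumed helper; the actual engine may expose material
// duplication under a different name.
Ptr<CMaterial> pSharedMtrl = CResMgr::GetInst()->FindRes<CMaterial>(L"ParticleMtrl");
m_pMtrl = pSharedMtrl->Clone();                                // per-system copy
m_pMtrl->SetData(SHADER_PARAM::TEX_0, _pTexture.GetPointer()); // affects only this system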


Vertex skinning artifacts with specific glTF models

I recently implemented vertex skinning in my own Vulkan engine. Pretty much all models render properly; however, with Mixamo models I get skinning artifacts. The main difference I found between "regular" glTF models and Mixamo models is that the Mixamo models share inverse bind matrices between multiple meshes, but I highly doubt that this causes the issue.
In the rendered result, the vertices are pulled towards one specific point, which seems to be at (0, 0, 0). I know for sure that this is not caused by the vertex and index loading, as the model renders properly without vertex skinning.
Calculation of joint matrices
void Object::updateJointsByNode(Node *node) {
    if (node->mesh && node->skinIndex > -1) {
        // Bring the joint matrices into the space of the skinned mesh's node.
        auto inverseTransform = glm::inverse(node->getLocalMatrix());
        auto skin = this->skinLookup[node->skinIndex];
        auto numberOfJoints = skin->jointsIndices.size();
        std::vector<glm::mat4> jointMatrices(numberOfJoints);
        for (size_t i = 0; i < numberOfJoints; i++) {
            // jointMatrix = jointTransform * inverseBindMatrix
            jointMatrices[i] =
                this->getNodeByIndex(skin->jointsIndices[i])->getLocalMatrix() * skin->inverseBindMatrices[i];
            jointMatrices[i] = inverseTransform * jointMatrices[i];
        }
        this->inverseBindMatrices = jointMatrices;
    }
    for (auto &child : node->children) {
        this->updateJointsByNode(child);
    }
}
Calculation of vertex displacement in GLSL Vertex Shader
mat4 skinMat =
inWeight0.x * model.jointMatrix[int(inJoint0.x)] +
inWeight0.y * model.jointMatrix[int(inJoint0.y)] +
inWeight0.z * model.jointMatrix[int(inJoint0.z)] +
inWeight0.w * model.jointMatrix[int(inJoint0.w)];
localPosition = model.model * model.local * skinMat * vec4(inPosition, 1.0);
So basically the mistake I made was relatively simple: I was reading the vertex joint indices with the wrong component type. When a glTF model stores the vertex joints with a byte stride of 4, each of the four joint indices occupies a single byte, so the implicit type is uint8_t (unsigned byte) and not the uint16_t (unsigned short) I was using.
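For reference, a minimal sketch of reading the joint indices based on the accessor's componentType (5121 = UNSIGNED_BYTE, 5123 = UNSIGNED_SHORT per the glTF spec); the helper name and the surrounding loader plumbing are assumptions, not code from the question:

#include <cstdint>
#include <glm/glm.hpp>

// Hypothetical helper: convert one vertex's JOINTS_0 data to a uvec4,
// branching on the glTF componentType instead of assuming uint16_t.
glm::uvec4 readJointIndices(const uint8_t *attributeData, int componentType) {
    switch (componentType) {
    case 5121: { // UNSIGNED_BYTE: 4 joints x 1 byte = byte stride 4
        const uint8_t *j = attributeData;
        return glm::uvec4(j[0], j[1], j[2], j[3]);
    }
    case 5123: { // UNSIGNED_SHORT: 4 joints x 2 bytes = byte stride 8
        const uint16_t *j = reinterpret_cast<const uint16_t *>(attributeData);
        return glm::uvec4(j[0], j[1], j[2], j[3]);
    }
    default:
        return glm::uvec4(0); // unsupported component type
    }
}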

Qt3D - Instance rendering of 2 squares. PrimitiveType? InstanceCount?

I'd like to display 2 squares on the screen using instanced rendering.
The problem is: when I draw 2 squares, they end up connected. I can't draw multiple separate squares.
I know it has to do with the PrimitiveType: I'm using Qt3DRender::QGeometryRenderer::LineLoop or Qt3DRender::QGeometryRenderer::TriangleFan. Is there a way to tell Qt to split my data buffer into multiple instances, so that the PrimitiveType applies to each instance rather than to all the vertices?
I have a function that creates the shapes:
Qt3DCore::QEntity* SceneBuilder::createShapeInstancing(const QList<QVector3D> &listVertices, int nbPointGeometry, const QColor color, Qt3DCore::QEntity *rootEntity)
{
    // Material
    Qt3DExtras::QPhongMaterial *material = new Qt3DExtras::QPhongMaterial(rootEntity);
    material->setAmbient(color);
    // Custom entity
    Qt3DCore::QEntity *customMeshEntity = new Qt3DCore::QEntity(rootEntity);
    // Custom mesh
    Qt3DRender::QGeometryRenderer *customMeshRenderer = new Qt3DRender::QGeometryRenderer(rootEntity);
    Qt3DRender::QGeometry *customGeometry = new Qt3DRender::QGeometry(customMeshRenderer);
    Qt3DRender::QBuffer *vertexDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
    // Copy all vertices (of all shapes) into one raw buffer.
    QByteArray vertexBufferData;
    vertexBufferData.resize(listVertices.size() * sizeof(QVector3D));
    QVector3D *posData = reinterpret_cast<QVector3D *>(vertexBufferData.data());
    QList<QVector3D>::const_iterator it;
    for (it = listVertices.begin(); it != listVertices.end(); it++)
    {
        *posData = *it;
        ++posData;
    }
    vertexDataBuffer->setData(vertexBufferData);
    // Attributes
    Qt3DRender::QAttribute *positionAttribute = new Qt3DRender::QAttribute();
    positionAttribute->setName(Qt3DRender::QAttribute::defaultPositionAttributeName());
    positionAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
    positionAttribute->setBuffer(vertexDataBuffer);
    positionAttribute->setVertexBaseType(Qt3DRender::QAttribute::Float);
    positionAttribute->setVertexSize(3);
    positionAttribute->setByteOffset(0);
    positionAttribute->setByteStride(3 * sizeof(float));
    positionAttribute->setCount(listVertices.size());
    //positionAttribute->setDivisor(1);
    customGeometry->addAttribute(positionAttribute);
    customMeshRenderer->setGeometry(customGeometry);
    //customMeshRenderer->setInstanceCount(listVertices.size() / nbPointGeometry);
    // Choose the primitive type from the number of points per shape.
    if (nbPointGeometry == 1)
        customMeshRenderer->setPrimitiveType(Qt3DRender::QGeometryRenderer::Points);
    else if (nbPointGeometry == 2)
        customMeshRenderer->setPrimitiveType(Qt3DRender::QGeometryRenderer::Lines);
    else if (nbPointGeometry == 3)
        customMeshRenderer->setPrimitiveType(Qt3DRender::QGeometryRenderer::Triangles);
    else
        customMeshRenderer->setPrimitiveType(Qt3DRender::QGeometryRenderer::LineLoop);
    customMeshEntity->addComponent(customMeshRenderer);
    customMeshEntity->addComponent(material);
    return customMeshEntity;
}
When I call this function, I put both squares into the listVertices variable, so that all my squares are displayed in one draw call.
But the shapes end up connected to each other one way or another. Is there a way to avoid that? I'm not using InstanceCount or Divisor because I don't understand how they work; I made some tests with them but nothing worked.
Actually, I just found an example of instanced rendering here.
They subclassed QSphereGeometry and performed the instanced rendering in a shader. You could create a RectangleGeometry which holds the vertices for a single square and equip it with the same functionality as the InstancedGeometry in the example.
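A minimal sketch of the instanceCount/divisor mechanics that approach relies on, assuming the base geometry holds the 4 vertices of a single square and a custom shader reads a per-instance offset (the attribute name instanceOffset and the offset values are illustrative):

// Per-instance data: one translation per square, advanced once per instance.
Qt3DRender::QBuffer *instanceBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
QByteArray instanceData;
instanceData.resize(2 * 3 * sizeof(float));  // 2 instances, one vec3 offset each
float *p = reinterpret_cast<float *>(instanceData.data());
p[0] = 0.0f; p[1] = 0.0f; p[2] = 0.0f;       // first square at the origin
p[3] = 2.0f; p[4] = 0.0f; p[5] = 0.0f;       // second square shifted along x
instanceBuffer->setData(instanceData);

Qt3DRender::QAttribute *offsetAttribute = new Qt3DRender::QAttribute();
offsetAttribute->setName("instanceOffset");  // must match the vertex shader input
offsetAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
offsetAttribute->setBuffer(instanceBuffer);
offsetAttribute->setVertexBaseType(Qt3DRender::QAttribute::Float);
offsetAttribute->setVertexSize(3);
offsetAttribute->setByteStride(3 * sizeof(float));
offsetAttribute->setCount(2);
offsetAttribute->setDivisor(1);              // advance once per instance, not once per vertex
customGeometry->addAttribute(offsetAttribute);

customMeshRenderer->setVertexCount(4);       // vertices of one square
customMeshRenderer->setInstanceCount(2);     // draw the square twice

Note that QPhongMaterial's built-in shader knows nothing about instanceOffset; as in the linked example, this needs a custom material whose vertex shader adds the offset to the vertex position.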

Access framebuffer in Qt3D

My task: Calculate the pixel coordinates (e.g. make a snapshot) of a 3D mesh to find the 2D shape of this mesh from a specific camera angle.
I'm currently using Qt3D with a QGeometryRenderer to render a scene containing a mesh to a QWidget which works fine.
I tried to render the content of the QWidget into a pixmap with QWidget::render(), as proposed in the post How to create screenshot of QWidget?. Saving the pixmap to a .jpg results in a blank image with the default background color, which makes sense because the QWidget does not itself hold the mesh object.
Here is how the scene is set up in my mainwindow.cpp:
// sets the scene objects, camera, lights,...
void MainWindow::setScene() {
    scene = custommesh->createScene(mesh->getVertices(),
                                    mesh->getVerticesNormals(),
                                    mesh->getFaceNormals(),
                                    mesh->getVerticesIndex(),
                                    mesh->getFacesIndex()); // QEntity*
    custommesh->setMaterial(scene);                         // CustomMeshRenderer object
    camera = custommesh->setCamera(view);
    custommesh->setLight(scene, camera);
    custommesh->setCamController(scene, camera);
    view->setRootEntity(scene);                             // Qt3DExtras::Qt3DWindow object
    // Setting up a QWidget working as a container for the view
    QWidget *container = QWidget::createWindowContainer(view);
    container->setMinimumSize(QSize(500, 500));
    QSizePolicy policy = QSizePolicy(QSizePolicy::Policy(5), QSizePolicy::Policy(5));
    policy.setHorizontalStretch(1);
    policy.setVerticalStretch(1);
    container->setSizePolicy(policy);
    container->setObjectName("meshWidget");
    this->ui->meshLayout->insertWidget(0, container);
}
As for the rendering, here is the custommeshrenderer class, where the QGeometryRenderer is defined and a QEntity* is returned when the mesh is initialized.
#include "custommeshrenderer.h"
#include <Qt3DRender/QAttribute>
#include <Qt3DExtras>
#include <Qt3DRender/QGeometryRenderer>
CustommeshRenderer::CustommeshRenderer()
{
rootEntity = new Qt3DCore::QEntity;
customMeshEntity = new Qt3DCore::QEntity(rootEntity);
transform = new Qt3DCore::QTransform;
customMeshRenderer = new Qt3DRender::QGeometryRenderer;
customGeometry = new Qt3DRender::QGeometry(customMeshRenderer);
m_pVertexDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
m_pNormalDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
m_pColorDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
m_pIndexDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::IndexBuffer, customGeometry);
}
/**
    Set vertices and their normals for the scene
    @param vertices List with all vertices of the mesh
    @param vertices_normals List with all vertex normals
    @param face_normals List with all face normals
    @param vertices_idx List with the indices for the vertices
    @param faces_idx List with all indices for the faces
    @return Entity to which the components were added
*/
Qt3DCore::QEntity *CustommeshRenderer::createScene(QList<QVector3D> vertices, QList<QVector3D> vertices_normals, QList<QVector3D> face_normals, QList<int> vertices_idx, QList<QVector3D> faces_idx) {
    // Setting scale to 8.0
    transform->setScale(8.0f);
    // Setting all the colors to (200, 0, 0)
    QList<QVector3D> color_list;
    for (int i = 0; i < vertices.length(); i++) {
        color_list.append(QVector3D(200.0f, 0.0f, 0.0f));
    }
    // Fill vertexBuffer with data which holds the vertices, normals and colors
    // Buffer structure: size of vertices list * 3 (x, y, z) * 4 (x, y, z are floats, 4 bytes each)
    vertexBufferData.resize(vertices.length() * 3 * (int)sizeof(float));
    float *rawVertexArray = reinterpret_cast<float *>(vertexBufferData.data());
    normalBufferData.resize(vertices_normals.length() * 3 * (int)sizeof(float));
    float *rawNormalArray = reinterpret_cast<float *>(normalBufferData.data());
    colorBufferData.resize(color_list.length() * 3 * (int)sizeof(float));
    float *rawColorArray = reinterpret_cast<float *>(colorBufferData.data());
    setRawVertexArray(rawVertexArray, vertices);
    setRawNormalArray(rawNormalArray, vertices_normals);
    setRawColorArray(rawColorArray, color_list);
    // Fill indexBufferData with data which holds the triangulation information (patches/tris/lines)
    indexBufferData.resize(faces_idx.length() * 3 * (int)sizeof(uint));
    uint *rawIndexArray = reinterpret_cast<uint *>(indexBufferData.data());
    setRawIndexArray(rawIndexArray, faces_idx);
    // Set data to buffers
    m_pVertexDataBuffer->setData(vertexBufferData);
    m_pNormalDataBuffer->setData(normalBufferData);
    m_pColorDataBuffer->setData(colorBufferData);
    m_pIndexDataBuffer->setData(indexBufferData);
    // Attributes
    Qt3DRender::QAttribute *positionAttribute = new Qt3DRender::QAttribute();
    positionAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
    positionAttribute->setBuffer(m_pVertexDataBuffer);
    // positionAttribute->setBuffer(m_pVertexDataBuffer.data());
    positionAttribute->setDataType(Qt3DRender::QAttribute::Float);
    positionAttribute->setDataSize(3);
    positionAttribute->setByteOffset(0);
    positionAttribute->setByteStride(3 * sizeof(float));
    positionAttribute->setCount(vertices.length());
    positionAttribute->setName(Qt3DRender::QAttribute::defaultPositionAttributeName());

    Qt3DRender::QAttribute *normalAttribute = new Qt3DRender::QAttribute();
    normalAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
    normalAttribute->setBuffer(m_pNormalDataBuffer);
    // normalAttribute->setBuffer(m_pNormalDataBuffer.data());
    normalAttribute->setDataType(Qt3DRender::QAttribute::Float);
    normalAttribute->setDataSize(3);
    normalAttribute->setByteOffset(0);
    normalAttribute->setByteStride(3 * sizeof(float));
    normalAttribute->setCount(vertices.length());
    normalAttribute->setName(Qt3DRender::QAttribute::defaultNormalAttributeName());

    Qt3DRender::QAttribute *colorAttribute = new Qt3DRender::QAttribute();
    colorAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
    colorAttribute->setBuffer(m_pColorDataBuffer);
    // colorAttribute->setBuffer(m_pColorDataBuffer.data());
    colorAttribute->setDataType(Qt3DRender::QAttribute::Float);
    colorAttribute->setDataSize(3);
    colorAttribute->setByteOffset(0);
    colorAttribute->setByteStride(3 * sizeof(float));
    colorAttribute->setCount(vertices.length());
    colorAttribute->setName(Qt3DRender::QAttribute::defaultColorAttributeName());

    Qt3DRender::QAttribute *indexAttribute = new Qt3DRender::QAttribute();
    indexAttribute->setAttributeType(Qt3DRender::QAttribute::IndexAttribute);
    indexAttribute->setBuffer(m_pIndexDataBuffer);
    // indexAttribute->setBuffer(m_pIndexDataBuffer.data());
    indexAttribute->setDataType(Qt3DRender::QAttribute::UnsignedInt);
    indexAttribute->setDataSize(3);
    indexAttribute->setByteOffset(0);
    indexAttribute->setByteStride(3 * sizeof(uint));
    indexAttribute->setCount(face_normals.length());

    customGeometry->addAttribute(positionAttribute);
    customGeometry->addAttribute(normalAttribute);
    /*customGeometry->addAttribute(colorAttribute);*/
    customGeometry->addAttribute(indexAttribute);
    // Set the final geometry and primitive type
    customMeshRenderer->setPrimitiveType(Qt3DRender::QGeometryRenderer::Triangles);
    customMeshRenderer->setVerticesPerPatch(3);
    customMeshRenderer->setGeometry(customGeometry);
    customMeshRenderer->setVertexCount(faces_idx.length() * 3);
    customMeshEntity->addComponent(customMeshRenderer);
    customMeshEntity->addComponent(transform);
    setMaterial(customMeshEntity);
    return rootEntity;
}
What is the best way to access the framebuffer, or is there any other method to take a snapshot of the mesh?
My last resort would be to implement the rendering pipeline (at least from projected coordinates to pixel coordinates) myself, but I would prefer another solution. Unfortunately I have to rely on Qt3D and can't switch to other classes like QOpenGLWidget; at least I haven't found a way to integrate it yet.
I'm pretty new to Qt3D, and the lack of detailed documentation doesn't make it easier.
You can use QRenderCapture for this. This essentially does a glReadPixels for you. The documentation is a bit sparse on this one, but there is an example online.
Alternatively, I implemented an offline renderer, which could help you in case that you don't want a whole 3D window.
I'm not sure what you mean by
Calculate the pixel coordinates (e.g. make a snapshot) of a 3D mesh to find the 2D shape of this mesh from a specific camera angle
but if you e.g. want to render the whole mesh in only one color (without highlights), you could try QPerVertexColorMaterial, which gave me exactly that result.
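For reference, a minimal sketch of the QRenderCapture route, assuming a Qt3DExtras::Qt3DWindow named view as in the question (the exact placement of the capture node in the framegraph may need adjusting to your setup):

#include <Qt3DRender/QRenderCapture>

// Attach a QRenderCapture node to the existing framegraph.
Qt3DRender::QRenderCapture *capture =
    new Qt3DRender::QRenderCapture(view->activeFrameGraph());

// Request a capture of an upcoming frame and save it once it completes.
Qt3DRender::QRenderCaptureReply *reply = capture->requestCapture();
QObject::connect(reply, &Qt3DRender::QRenderCaptureReply::completed, [reply]() {
    reply->image().save("snapshot.png"); // QImage with the rendered mesh
    reply->deleteLater();
});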

Rendering point cloud data with draw instancing from OSG Cookbook not working

I am rendering a point cloud using OSG. I followed the example in the OSG cookbook titled "Rendering point cloud data with draw instancing" that shows how to make one point with many instances and then transfer the point locations to the graphics card via a texture. It then uses a shader to pull the points out of the texture and move each instance to the right location. There appear to be two problems with what is getting rendered.
First, the points aren't in the right locations compared to a more straightforward, working rendering approach. They look as if they are scaled incorrectly about the origin, as though some multiplicative factor is applied to each position.
Second, the imagery is blurry. Points tend to be generally in the right place; there are many points where a large object should be. However, I can't tell what the object is. Data rendered with my working (but slower) rendering method looks sharp.
I have verified that I have the same input data going into the texture and draw list in both methods so it seems it has to be something with the rendering.
Here is the code that sets up the Geometry, copied nearly verbatim from the book.
osg::Geometry* geo = new osg::Geometry;
osg::ref_ptr<osg::Image> img = new osg::Image;
img->allocateImage(w, h, 1, GL_RGBA, GL_FLOAT);
osg::BoundingBox box;
float* data = (float*)img->data();
for (unsigned long int k = 0; k < NPoints; k++)
{
    // Pack x, y, z and one metadata channel into one RGBA texel per point.
    *(data++) = cloud->x[k];
    *(data++) = cloud->y[k];
    *(data++) = cloud->z[k];
    *(data++) = cloud->meta[0][k];
    box.expandBy(cloud->x[k], cloud->y[k], cloud->z[k]);
}
geo->setUseDisplayList(false);
geo->setUseVertexBufferObjects(true);
geo->setVertexArray(new osg::Vec3Array(1));
// A single point, instanced; the fourth argument is the number of instances.
geo->addPrimitiveSet(new osg::DrawArrays(GL_POINTS, 0, 1, stop));
geo->setInitialBound(box);
osg::ref_ptr<osg::Texture2D> tex = new osg::Texture2D;
tex->setImage(img);
tex->setInternalFormat(GL_RGBA32F_ARB);
tex->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
tex->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
And here is the shader code.
void main()
{
    // Map the instance ID to a texel in the point-position texture.
    float row = float(gl_InstanceID) / float(width);
    vec2 uv = vec2(fract(row), floor(row) / float(height));
    vec4 texValue = texture2D(defaultTex, uv);
    vec4 pos = gl_Vertex + vec4(texValue.xyz, 1.0);
    gl_Position = gl_ModelViewProjectionMatrix * pos;
}
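For completeness: the shader reads width, height and defaultTex uniforms that the excerpt above never binds. A sketch of that plumbing, under the assumption that the book's version does roughly the following with the texture and dimensions from the setup code:

// Bind the position texture and the uniforms the shader reads.
osg::StateSet *ss = geo->getOrCreateStateSet();
ss->setTextureAttributeAndModes(0, tex.get(), osg::StateAttribute::ON);
ss->addUniform(new osg::Uniform("defaultTex", 0)); // sampler on texture unit 0
ss->addUniform(new osg::Uniform("width", (int)w)); // texture dimensions used to
ss->addUniform(new osg::Uniform("height", (int)h)); // map gl_InstanceID to a texel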
After a bunch of experimenting, I found that the example code from the OSG Cookbook has some problems.
The scale issue (the first problem) is in the shader.
vec4 pos = gl_Vertex + vec4(texValue.xyz, 1.0);
Should be
vec4 pos = gl_Vertex + vec4(texValue.xyz, 0.0);
This is because gl_Vertex is a homogeneous 4-vector: x, y, z plus a w component that should always be 1 for a position. The example added another vector whose w was also 1, producing a w of 2; since the model-view-projection transform is linear in its input, the perspective divide by the doubled w then halves x, y and z, which is exactly the multiplicative scaling error. Replace the 1 with a 0 and the scale problem goes away.
The blurriness (the second problem) was caused by texture interpolation.
tex->setFilter( osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
tex->setFilter( osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
needs to be
tex->setFilter( osg::Texture2D::MIN_FILTER, osg::Texture2D::NEAREST);
tex->setFilter( osg::Texture2D::MAG_FILTER, osg::Texture2D::NEAREST);
so that the sampler takes each value straight from a single texel instead of blending it with neighboring texels, which may hold points from the other side of the point cloud. After fixing these two issues, the example works as advertised, and in my limited testing it even seems a bit faster.

Drawing a full screen Quad?

What's wrong with this:
pVertexBuffer[0].Position = D3DXVECTOR3(0.0f,0.0f,0.0f);
pVertexBuffer[0].TexCoord = D3DXVECTOR2(0.0f,0.0f);
pVertexBuffer[1].Position = D3DXVECTOR3(m_ScreenResolutionX,0.0f,0.0f);
pVertexBuffer[1].TexCoord = D3DXVECTOR2(1.0f,0.0f);
pVertexBuffer[2].Position = D3DXVECTOR3(0.0f,m_ScreenResolutionY,0.0f);
pVertexBuffer[2].TexCoord = D3DXVECTOR2(0.0f,1.0f);
pVertexBuffer[3].Position = D3DXVECTOR3(0.0f,m_ScreenResolutionY,0.0f);
pVertexBuffer[3].TexCoord = D3DXVECTOR2(0.0f,1.0f);
pVertexBuffer[4].Position = D3DXVECTOR3(m_ScreenResolutionX,0.0f,0.0f);
pVertexBuffer[4].TexCoord = D3DXVECTOR2(1.0f,0.0f);
pVertexBuffer[5].Position = D3DXVECTOR3(m_ScreenResolutionX,m_ScreenResolutionY,0.0f);
pVertexBuffer[5].TexCoord = D3DXVECTOR2(1.0f,1.0f);
If I try to render this, I don't see anything. In the vertex shader I use these vertex positions without transforming them.
Vertex shaders output vertices in homogeneous clip-space coordinates, which are independent of the screen resolution. In other words, you should output coordinates from (-1, -1, 0) to (1, 1, 0), not pixel coordinates.
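For instance, a corrected version of the buffer above using clip-space positions (a sketch: the V texture coordinate is flipped so the image stays upright, and depending on the cull mode the triangle winding may also need to be reversed):

// Full-screen quad in clip space: x and y run from -1 to 1, independent of resolution.
pVertexBuffer[0].Position = D3DXVECTOR3(-1.0f, -1.0f, 0.0f);
pVertexBuffer[0].TexCoord = D3DXVECTOR2(0.0f, 1.0f);
pVertexBuffer[1].Position = D3DXVECTOR3( 1.0f, -1.0f, 0.0f);
pVertexBuffer[1].TexCoord = D3DXVECTOR2(1.0f, 1.0f);
pVertexBuffer[2].Position = D3DXVECTOR3(-1.0f,  1.0f, 0.0f);
pVertexBuffer[2].TexCoord = D3DXVECTOR2(0.0f, 0.0f);
pVertexBuffer[3].Position = D3DXVECTOR3(-1.0f,  1.0f, 0.0f);
pVertexBuffer[3].TexCoord = D3DXVECTOR2(0.0f, 0.0f);
pVertexBuffer[4].Position = D3DXVECTOR3( 1.0f, -1.0f, 0.0f);
pVertexBuffer[4].TexCoord = D3DXVECTOR2(1.0f, 1.0f);
pVertexBuffer[5].Position = D3DXVECTOR3( 1.0f,  1.0f, 0.0f);
pVertexBuffer[5].TexCoord = D3DXVECTOR2(1.0f, 0.0f);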