I have two meshes. One of them is animated and the other one is not. When I render them, the inanimate mesh gets animated in exactly the same way as the animated mesh.
So if, say, the animated mesh moves to the left and then comes back, my inanimate mesh does the same thing!
This is the class I use for the inanimate meshes:
class StaticMesh
{
public:
    StaticMesh(LPDIRECT3DDEVICE9* device);
    ~StaticMesh(void);

    void Render(void);
    void Load(LPCWSTR fileName);
    void CleanUp(void);

private:
    LPDIRECT3DDEVICE9* d3ddev;      // pointer to the device interface pointer
    LPD3DXMESH mesh;                // the mesh object
    D3DMATERIAL9* material;         // array of materials, one per subset
    DWORD numMaterials;             // number of materials in the mesh
    LPDIRECT3DTEXTURE9* texture;    // array of texture pointers, one per subset
    LPD3DXBUFFER bufMeshMaterial;   // D3DX buffer holding the material data
};
#include "StdAfx.h"
#include "StaticMesh.h"
StaticMesh::StaticMesh(LPDIRECT3DDEVICE9* device)
{
    d3ddev = device;
}

StaticMesh::~StaticMesh(void)
{
}
void StaticMesh::Render(void)
{
    LPDIRECT3DDEVICE9 device = *d3ddev;

    for (DWORD i = 0; i < numMaterials; i++) // loop through each subset
    {
        device->SetMaterial(&material[i]);   // set the material for the subset
        device->SetTexture(0, texture[i]);   // ...then set the texture
        mesh->DrawSubset(i);                 // draw the subset
    }
}
void StaticMesh::Load(LPCWSTR fileName)
{
    LPDIRECT3DDEVICE9 device = *d3ddev;

    D3DXLoadMeshFromX(fileName,   // load this file
        D3DXMESH_SYSTEMMEM,       // load the mesh into system memory
        device,                   // the Direct3D Device
        NULL,                     // we aren't using adjacency
        &bufMeshMaterial,         // put the materials here
        NULL,                     // we aren't using effect instances
        &numMaterials,            // the number of materials in this model
        &mesh);                   // put the mesh here

    // retrieve the pointer to the buffer containing the material information
    D3DXMATERIAL* tempMaterials = (D3DXMATERIAL*)bufMeshMaterial->GetBufferPointer();

    // create a new material buffer and texture for each material in the mesh
    material = new D3DMATERIAL9[numMaterials];
    texture = new LPDIRECT3DTEXTURE9[numMaterials];

    for (DWORD i = 0; i < numMaterials; i++) // for each material...
    {
        // Copy the material
        material[i] = tempMaterials[i].MatD3D;

        // Set the ambient color for the material (D3DX does not do this)
        material[i].Ambient = material[i].Diffuse;

        // Create the texture if it exists - it may not
        texture[i] = NULL;
        if (tempMaterials[i].pTextureFilename)
        {
            D3DXCreateTextureFromFileA(device, tempMaterials[i].pTextureFilename, &texture[i]);
        }
    }
}
void StaticMesh::CleanUp(void)
{
    for (DWORD i = 0; i < numMaterials; i++)
        if (texture[i]) texture[i]->Release(); // release the per-subset textures
    delete[] texture;
    delete[] material;                         // arrays allocated in Load()
    mesh->Release();
}
The animation comes from a transformation matrix.
You either pass it directly to the device (if you are using the fixed-function pipeline) or to a shader/effect.
Every mesh you draw after the matrix has been set undergoes the same transform.
So if you want one mesh to stay inanimate, you have to change the transform before drawing that mesh.
Pseudo-code:
SetTransform( world* view * projection );
DrawMesh();
SetTransform( identity * view * projection );
DrawMesh();
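For the fixed-function D3D9 path the question is using, that means resetting the world matrix before the static mesh is drawn (view and projection are set separately through their own transform states). A minimal sketch, with illustrative matrix and mesh names:
D3DXMATRIX world, identity;
D3DXMatrixIdentity(&identity);
// ... build `world` from the animation each frame ...

device->SetTransform(D3DTS_WORLD, &world);     // animated transform
animatedMesh->Render();

device->SetTransform(D3DTS_WORLD, &identity);  // reset before the static mesh
staticMesh->Render();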
I'm trying to implement rendering with different cameras.
When I use only one camera, everything is fine. But when I have two cameras with different transformations, rendering doesn't work properly: the first scene is rendered with the second camera's transformation.
void Setup() {
    ...
    // create uniform buffer (vmaCreateBuffer)
    global_uniform_buffer = ...
    // allocate descriptor set
    global_descriptor_set = ...
    // write descriptor set
    ...
}
void SetCameraProperties(Camera* camera) {
    auto proj_view = camera->GetProjectionMatrix() * camera->GetViewMatrix();
    global_uniform_buffer->update(&proj_view, sizeof(glm::mat4));
}
void BindGlobalDescriptorSet(vk::CommandBuffer cmd, vk::PipelineLayout pipeline_layout) {
    cmd.bindDescriptorSets(
        vk::PipelineBindPoint::eGraphics,
        pipeline_layout,
        0,
        { global_descriptor_set },
        {}
    );
}
void DrawScene() {
    ...
    cmd.bindPipeline(vk::PipelineBindPoint::eGraphics, pipeline);
    BindGlobalDescriptorSet(cmd, pipeline_layout);
    cmd.bindVertexBuffers(0, vtx->handle, vk::DeviceSize{0});
    cmd.bindIndexBuffer(idx->handle, vk::DeviceSize{0}, vk::IndexType::eUint16);
    cmd.drawIndexed(3, 1, 0, 0, 0);
    ...
}
void Loop() {
    ... // begin render pass with scene framebuffer
    SetCameraProperties(Editor::Get()->GetEditorCamera());
    DrawScene();
    ... // end previous render pass
    ... // begin render pass with game framebuffer
    SetCameraProperties(CameraSystem::Get()->GetMainCamera());
    DrawScene();
    ... // end previous render pass
    ... // present
}
Looking at your code, you update the same uniform buffer between the two draw calls at command buffer recording time. But the contents of the bound uniform buffer are consumed at submission time, not at recording time. So by the time you submit your command buffer, both draws use the uniform buffer contents from your last update.
Your current setup therefore won't work in Vulkan. To make it work, you could use one UBO per camera, or one dynamic UBO with a separate range per camera.
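A minimal sketch of the dynamic-UBO variant, assuming the descriptor is created as vk::DescriptorType::eUniformBufferDynamic, min_align holds the device's minUniformBufferOffsetAlignment, and the buffer's update() helper accepts a destination offset (a hypothetical extension of the question's helper):
// One buffer holding both camera matrices in separately aligned slices.
vk::DeviceSize stride = (sizeof(glm::mat4) + min_align - 1) & ~(min_align - 1);
global_uniform_buffer->update(&editor_proj_view, sizeof(glm::mat4), 0 * stride); // slice 0
global_uniform_buffer->update(&game_proj_view,   sizeof(glm::mat4), 1 * stride); // slice 1

// At record time, pick the slice with a dynamic offset instead of rewriting the buffer.
void BindGlobalDescriptorSet(vk::CommandBuffer cmd, vk::PipelineLayout layout, uint32_t camera_index) {
    uint32_t offset = camera_index * static_cast<uint32_t>(stride);
    cmd.bindDescriptorSets(vk::PipelineBindPoint::eGraphics, layout, 0,
                           { global_descriptor_set }, { offset });
}
Both slices are written before the command buffer is submitted, so each pass merely selects which range the shader reads.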
My task: Calculate the pixel coordinates (e.g. make a snapshot) of a 3D mesh to find the 2D shape of this mesh from a specific camera angle.
I'm currently using Qt3D with a QGeometryRenderer to render a scene containing a mesh to a QWidget which works fine.
I tried to render the content of the QWidget into a pixmap with QWidget::render(), as proposed in this post: How to create screenshot of QWidget?. Saving the pixmap to a .jpg results in a blank image with the default background color, which makes sense because the QWidget does not hold the mesh object itself.
Here is how the scene is set up in my mainwindow.cpp:
// sets the scene objects, camera, lights, ...
void MainWindow::setScene() {
    scene = custommesh->createScene(mesh->getVertices(),
                                    mesh->getVerticesNormals(),
                                    mesh->getFaceNormals(),
                                    mesh->getVerticesIndex(),
                                    mesh->getFacesIndex()); // QEntity*
    custommesh->setMaterial(scene); // CustomMeshRenderer object
    camera = custommesh->setCamera(view);
    custommesh->setLight(scene, camera);
    custommesh->setCamController(scene, camera);
    view->setRootEntity(scene); // Qt3DExtras::Qt3DWindow object

    // Setting up a QWidget working as a container for the view
    QWidget *container = QWidget::createWindowContainer(view);
    container->setMinimumSize(QSize(500, 500));
    QSizePolicy policy = QSizePolicy(QSizePolicy::Policy(5), QSizePolicy::Policy(5));
    policy.setHorizontalStretch(1);
    policy.setVerticalStretch(1);
    container->setSizePolicy(policy);
    container->setObjectName("meshWidget");
    this->ui->meshLayout->insertWidget(0, container);
}
As for the rendering, here is the custommeshrenderer class, where the QGeometryRenderer is defined and a QEntity* is returned when initializing the mesh.
#include "custommeshrenderer.h"
#include <Qt3DRender/QAttribute>
#include <Qt3DExtras>
#include <Qt3DRender/QGeometryRenderer>
CustommeshRenderer::CustommeshRenderer()
{
    rootEntity = new Qt3DCore::QEntity;
    customMeshEntity = new Qt3DCore::QEntity(rootEntity);
    transform = new Qt3DCore::QTransform;
    customMeshRenderer = new Qt3DRender::QGeometryRenderer;
    customGeometry = new Qt3DRender::QGeometry(customMeshRenderer);
    m_pVertexDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
    m_pNormalDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
    m_pColorDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
    m_pIndexDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::IndexBuffer, customGeometry);
}
/**
    Set vertices and their normals for the scene
    @param vertices List with all vertices of the mesh
    @param vertices_normals List with all vertex normals
    @param face_normals List with all face normals
    @param vertices_idx List with the indices for the vertices
    @param faces_idx List with all indices for the faces
    @return Entity where some components were added
*/
Qt3DCore::QEntity *CustommeshRenderer::createScene(QList<QVector3D> vertices, QList<QVector3D> vertices_normals, QList<QVector3D> face_normals, QList<int> vertices_idx, QList<QVector3D> faces_idx) {
    // Setting scale to 8.0
    transform->setScale(8.0f);

    // Setting all the colors to (200, 0, 0)
    QList<QVector3D> color_list;
    for (int i = 0; i < vertices.length(); i++) {
        color_list.append(QVector3D(200.0f, 0.0f, 0.0f));
    }

    // Fill the vertex buffer with the data holding the vertices, normals and colors.
    // Buffer size: length of the vertices list * 3 (x,y,z) * 4 (x,y,z are floats, 4 bytes each)
    vertexBufferData.resize(vertices.length() * 3 * (int)sizeof(float));
    float *rawVertexArray = reinterpret_cast<float *>(vertexBufferData.data());

    normalBufferData.resize(vertices_normals.length() * 3 * (int)sizeof(float));
    float *rawNormalArray = reinterpret_cast<float *>(normalBufferData.data());

    colorBufferData.resize(color_list.length() * 3 * (int)sizeof(float));
    float *rawColorArray = reinterpret_cast<float *>(colorBufferData.data());

    setRawVertexArray(rawVertexArray, vertices);
    setRawNormalArray(rawNormalArray, vertices_normals);
    setRawColorArray(rawColorArray, color_list);

    // Fill indexBufferData with the triangulation information (patches/tris/lines)
    indexBufferData.resize(faces_idx.length() * 3 * (int)sizeof(uint));
    uint *rawIndexArray = reinterpret_cast<uint *>(indexBufferData.data());
    setRawIndexArray(rawIndexArray, faces_idx);

    // Set data to buffers
    m_pVertexDataBuffer->setData(vertexBufferData);
    m_pNormalDataBuffer->setData(normalBufferData);
    m_pColorDataBuffer->setData(colorBufferData);
    m_pIndexDataBuffer->setData(indexBufferData);
    // Attributes
    Qt3DRender::QAttribute *positionAttribute = new Qt3DRender::QAttribute();
    positionAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
    positionAttribute->setBuffer(m_pVertexDataBuffer);
    // positionAttribute->setBuffer(m_pVertexDataBuffer.data());
    positionAttribute->setDataType(Qt3DRender::QAttribute::Float);
    positionAttribute->setDataSize(3);
    positionAttribute->setByteOffset(0);
    positionAttribute->setByteStride(3 * sizeof(float));
    positionAttribute->setCount(vertices.length());
    positionAttribute->setName(Qt3DRender::QAttribute::defaultPositionAttributeName());

    Qt3DRender::QAttribute *normalAttribute = new Qt3DRender::QAttribute();
    normalAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
    normalAttribute->setBuffer(m_pNormalDataBuffer);
    // normalAttribute->setBuffer(m_pNormalDataBuffer.data());
    normalAttribute->setDataType(Qt3DRender::QAttribute::Float);
    normalAttribute->setDataSize(3);
    normalAttribute->setByteOffset(0);
    normalAttribute->setByteStride(3 * sizeof(float));
    normalAttribute->setCount(vertices.length());
    normalAttribute->setName(Qt3DRender::QAttribute::defaultNormalAttributeName());

    Qt3DRender::QAttribute *colorAttribute = new Qt3DRender::QAttribute();
    colorAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
    colorAttribute->setBuffer(m_pColorDataBuffer);
    // colorAttribute->setBuffer(m_pColorDataBuffer.data());
    colorAttribute->setDataType(Qt3DRender::QAttribute::Float);
    colorAttribute->setDataSize(3);
    colorAttribute->setByteOffset(0);
    colorAttribute->setByteStride(3 * sizeof(float));
    colorAttribute->setCount(vertices.length());
    colorAttribute->setName(Qt3DRender::QAttribute::defaultColorAttributeName());

    Qt3DRender::QAttribute *indexAttribute = new Qt3DRender::QAttribute();
    indexAttribute->setAttributeType(Qt3DRender::QAttribute::IndexAttribute);
    indexAttribute->setBuffer(m_pIndexDataBuffer);
    // indexAttribute->setBuffer(m_pIndexDataBuffer.data());
    indexAttribute->setDataType(Qt3DRender::QAttribute::UnsignedInt);
    indexAttribute->setDataSize(3);
    indexAttribute->setByteOffset(0);
    indexAttribute->setByteStride(3 * sizeof(uint));
    indexAttribute->setCount(face_normals.length());

    customGeometry->addAttribute(positionAttribute);
    customGeometry->addAttribute(normalAttribute);
    /*customGeometry->addAttribute(colorAttribute);*/
    customGeometry->addAttribute(indexAttribute);
    // Set the final geometry and primitive type
    customMeshRenderer->setPrimitiveType(Qt3DRender::QGeometryRenderer::Triangles);
    customMeshRenderer->setVerticesPerPatch(3);
    customMeshRenderer->setGeometry(customGeometry);
    customMeshRenderer->setVertexCount(faces_idx.length() * 3);

    customMeshEntity->addComponent(customMeshRenderer);
    customMeshEntity->addComponent(transform);
    setMaterial(customMeshEntity);

    return rootEntity;
}
What is the best way to access the framebuffer, or is there any other method to take a snapshot of the mesh?
My last hope would be to implement the rendering pipeline (at least from projected coordinates to pixel coordinates) myself, but I would prefer another solution. Unfortunately I have to rely on Qt3D and can't switch to other classes like QOpenGLWidget; at least I haven't found a way to integrate it yet.
I'm pretty new to Qt3D and the lack of detailed documentation doesn't make it easier.
You can use QRenderCapture for this. It essentially does a glReadPixels for you. The documentation is a bit sparse on this one, but there is an example online.
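A minimal sketch of that approach, assuming a recent Qt 5 release and the question's Qt3DWindow view (the capture node is attached as a leaf of the active framegraph; names follow the question's code):
Qt3DRender::QRenderCapture *capture =
        new Qt3DRender::QRenderCapture(view->activeFrameGraph());

// Request a capture of the next rendered frame and save it once it completes.
Qt3DRender::QRenderCaptureReply *reply = capture->requestCapture();
QObject::connect(reply, &Qt3DRender::QRenderCaptureReply::completed, [reply]() {
    reply->image().save("snapshot.png"); // reply->image() is a plain QImage
    reply->deleteLater();
});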
Alternatively, I implemented an offline renderer, which could help you in case you don't want a whole 3D window.
I'm not sure what you mean by
Calculate the pixel coordinates (e.g. make a snapshot) of a 3D mesh to find the 2D shape of this mesh from a specific camera angle
but if you, e.g., want to render the whole mesh in only one color (without highlights), you could try QPerVertexColorMaterial, which gave me exactly that result.
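A small sketch of that idea, reusing the question's own createScene() pieces (note it needs the color attribute that the question currently comments out):
customGeometry->addAttribute(colorAttribute); // re-enable the per-vertex color data
auto *material = new Qt3DExtras::QPerVertexColorMaterial(rootEntity);
customMeshEntity->addComponent(material);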
I am a newbie with gameplay3d and went through all the tutorials; however, I can't manage to display this simple model (not many polygons, one material) that I encoded from FBX. I checked the model with Unity3D and with a closed-source application that uses gameplay3d, and all seems fine. I guess I am missing some detail when loading the scene.
This is the model file, including the original FBX file. I suspect it has something to do with the light:
https://www.dropbox.com/sh/ohgpsfnkm3iv24s/AACApRcxwtbmpKu4_5nnp8rZa?dl=0
This is the class that loads the scene.
#include "Demo.h"
// Declare our game instance
Demo game;
Demo::Demo()
: _scene(NULL), _wireframe(false)
{
}
void Demo::initialize()
{
    // Load game scene from file
    Bundle* bundle = Bundle::create("KGN56AI30N.gpb");
    _scene = bundle->loadScene();
    SAFE_RELEASE(bundle);

    // Create a camera and attach it to a node. This determines the position of the camera.
    Camera* camera = Camera::createPerspective(45.0f, getAspectRatio(), 1.0f, 20.0f);
    Node* cameraNode = _scene->addNode("camera");
    cameraNode->setCamera(camera);

    // Make this the active camera of the scene.
    _scene->setActiveCamera(camera);
    SAFE_RELEASE(camera);

    // Move the camera to look at the origin.
    cameraNode->translate(0, 0, 10);
    cameraNode->rotateX(MATH_DEG_TO_RAD(0.25f));

    // Update the aspect ratio of the scene's camera to match the current device resolution.
    _scene->getActiveCamera()->setAspectRatio(getAspectRatio());

    Light* directionalLight = Light::createDirectional(Vector3::one());
    _directionalLightNode = Node::create("directionalLight");
    _directionalLightNode->setLight(directionalLight);
    SAFE_RELEASE(directionalLight);
    _scene->addNode(_directionalLightNode);

    _scene->setAmbientColor(1.0, 1.0, 1.0);
    _scene->visit(this, &Demo::initializeMaterials);
}
bool Demo::initializeMaterials(Node* node)
{
    Model* model = dynamic_cast<Model*>(node->getDrawable());
    if (model)
    {
        for (int i = 0; i < model->getMeshPartCount(); i++)
        {
            Material* material = model->getMaterial(i);
            if (material)
            {
                // For this sample we will only bind a single light to each object in the scene.
                MaterialParameter* colorParam = material->getParameter("u_directionalLightColor[0]");
                colorParam->setValue(Vector3(0.75f, 0.75f, 0.75f));
                MaterialParameter* directionParam = material->getParameter("u_directionalLightDirection[0]");
                directionParam->setValue(Vector3(1, 1, 1));
            }
        }
    }
    return true;
}
void Demo::finalize()
{
    SAFE_RELEASE(_scene);
}

void Demo::update(float elapsedTime)
{
    // Rotate model
    //_scene->findNode("box")->rotateY(MATH_DEG_TO_RAD((float)elapsedTime / 1000.0f * 180.0f));
}
void Demo::render(float elapsedTime)
{
    // Clear the color and depth buffers
    clear(CLEAR_COLOR_DEPTH, Vector4::zero(), 1.0f, 0);

    // Visit all the nodes in the scene for drawing
    _scene->visit(this, &Demo::drawScene);
}

bool Demo::drawScene(Node* node)
{
    // If the node visited contains a drawable object, draw it
    Drawable* drawable = node->getDrawable();
    if (drawable)
        drawable->draw(_wireframe);
    return true;
}
void Demo::keyEvent(Keyboard::KeyEvent evt, int key)
{
    if (evt == Keyboard::KEY_PRESS)
    {
        switch (key)
        {
        case Keyboard::KEY_ESCAPE:
            exit();
            break;
        }
    }
}

void Demo::touchEvent(Touch::TouchEvent evt, int x, int y, unsigned int contactIndex)
{
    switch (evt)
    {
    case Touch::TOUCH_PRESS:
        _wireframe = !_wireframe;
        break;
    case Touch::TOUCH_RELEASE:
        break;
    case Touch::TOUCH_MOVE:
        break;
    }
}
I can't download your dropbox .fbx file. How many models do you have in the scene? Here's a simple way of doing what you want to do -- not optimal, but it'll get you started...
So first off, I can't see where in your code you actually assign a Shader to be used with the material. I use something like this:
material = model->setMaterial("Shaders/Animation/ADSVertexViewAnim.vsh", "Shaders/Animation/ADSVertexViewAnim.fsh");
You need to assign a shader; the code above takes the vertex and fragment shaders and uses them whenever the object needs to be drawn.
I went about it a slightly different way: instead of loading the scene file automatically, I create an empty scene, extract my model from the bundle, and add it to the scene manually. That way I can see exactly what is happening and I'm in control of each step. GamePlay3D has some fancy property files, but use them only once you know how the process works manually.
Initially I created a simple cube in a scene; then I created the scene manually and added the monkey to the node graph, as follows:
void GameMain::ExtractFromBundle()
{
    /// Create a new empty scene.
    _scene = Scene::create();

    // Create the bundle from the GPB file
    Bundle* bundle = Bundle::create("res/monkey.gpb");

    /// Create the monkey's Model and Node
    {
        Mesh* meshMonkey = bundle->loadMesh("Character_Mesh"); // Load the mesh from the bundle
        Model* modelMonkey = Model::create(meshMonkey);
        Node* nodeMonkey = _scene->addNode("Monkey");
        nodeMonkey->setTranslation(0, 0, 0);
        nodeMonkey->setDrawable(modelMonkey);
    }
}
Then I want to search the scene graph and only assign a material to the object that I want to draw (the monkey). Use this if you want to assign different materials to different objects manually...
bool GameMain::initializeScene(Node* node)
{
    Material* material;

    std::cout << node->getId() << std::endl;

    // find the node in the scene
    if (strcmp(node->getId(), "Monkey") != 0)
        return false;

    Model* model = dynamic_cast<Model*>(node->getDrawable());
    if (!model)
        return false;

    material = model->setMaterial("Shaders/Animation/ADSVertexViewAnim.vsh", "Shaders/Animation/ADSVertexViewAnim.fsh");
    material->getStateBlock()->setCullFace(true);
    material->getStateBlock()->setDepthTest(true);
    material->getStateBlock()->setDepthWrite(true);

    // The world-view-projection matrix is needed to view the 3D world through the camera
    material->setParameterAutoBinding("u_worldViewProjectionMatrix", "WORLD_VIEW_PROJECTION_MATRIX");

    // This matrix is necessary to calculate normals properly, but the WORLD_MATRIX would also work
    material->setParameterAutoBinding("u_worldViewMatrix", "WORLD_VIEW_MATRIX");
    material->setParameterAutoBinding("u_viewMatrix", "VIEW_MATRIX");

    return true;
}
Now the object is ready to be drawn.... so I use these functions:
void GameMain::render(float elapsedTime)
{
    // Clear the color and depth buffers
    clear(CLEAR_COLOR_DEPTH, Vector4(0.0, 0.0, 0.0, 0.0), 1.0f, 0);

    // Visit all the nodes in the scene for drawing
    _scene->visit(this, &GameMain::drawScene);
}

bool GameMain::drawScene(Node* node)
{
    // If the node visited contains a drawable object, draw it
    Drawable* drawable = node->getDrawable();
    if (drawable)
        drawable->draw(_wireframe);
    return true;
}
I use my own shaders, so I don't have to worry about Light and DirectionalLight and all that stuff. Once I can see the object, I'll add dynamic lights, etc., but for starters, keep it simple.
Regards.
I've been stuck on this problem for quite a while now.
My code looks something like this:
enum class LayerType {Foreground, Menu, Terrain, Background, Target};

class Texture
{
private:
    ...
    Texture * target;
    LayerType targetLayer;
    ...
public:
    ...
    Texture & getTarget();
    void setTarget(Texture & texture);
    ...
};
class Renderer
{
private:
    ...
    map<GAME::LayerType, Texture> layers;
    ...
public:
    ...
    void drawTexture(Texture & texture);
    ...
};
When the renderer's constructor is called, I create a map entry for each enum type, so each layer has its own texture (and its targetLayer is set to LayerType::Target). The idea is that I don't have to draw all textures in the correct order, only onto the correct target texture.
The present function of the renderer looks like this.
void Renderer::present()
{
    this->setRendererEmptyTarget();
    for (map<LayerType, Texture>::reverse_iterator it = this->layers.rbegin(); it != this->layers.rend(); it++)
    {
        this->drawTexture(it->second);
    }
    SDL_RenderPresent(this->ren);
}
and the drawing function like this:
void Renderer::drawTexture(Texture & texture)
{
    if (texture.getTargetLayer() != LayerType::Target)
    {
        this->setRendererTarget(this->layers.find(texture.getTargetLayer())->second);
    }
    SDL_RenderCopy(this->ren, texture.getTexture(), NULL, NULL);
}
The problem is that all textures are drawn in the order of the function calls and not in the order of the target textures. I think the problem might be missing references to the target textures.
EDIT:
The left image shows how it should look and the right one shows how it actually looks:
http://imgur.com/QafTPuT
The code for the left image:
// s1, s2 are surfaces
// the targetTexture contains the sdl flags for a target texture
Texture targetTexture = Texture(this->renderer, LayerType::Target);
Texture t1 = Texture(this->renderer, s1, LayerType::Menu);
Texture t2 = Texture(this->renderer, s2, LayerType::Terrain);

while (running)
{
    this->renderer.setTarget(targetTexture);
    this->renderer.drawTexture(t1);
    this->renderer.setEmptyTarget();
    this->renderer.drawTexture(t2);
    this->renderer.present();
}
And the code for the right image:
// s1, s2 are surfaces
Texture t1 = Texture(this->renderer, s1, LayerType::Menu);
Texture t2 = Texture(this->renderer, s2, LayerType::Terrain);

while (running)
{
    this->renderer.drawTexture(t1);
    this->renderer.drawTexture(t2);
    this->renderer.present();
}
I solved my problem with pointers. I replaced
map<GAME::LayerType, Texture> layers;
with
map<GAME::LayerType, Texture*> layers;
So now I have my reference :-)
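One way to see why the value map can bite, assuming Texture wraps an SDL_Texture* (illustrative code, not the original): std::map stores copies, so references obtained outside the map may point at different Texture objects than the ones the renderer actually draws onto. With pointers, every user shares one instance:
map<GAME::LayerType, Texture*> layers;
layers[GAME::LayerType::Terrain] = new Texture(renderer, LayerType::Terrain);

// Every caller now draws onto, and presents from, the very same Texture object.
this->setRendererTarget(*layers.at(texture.getTargetLayer()));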
I'm trying to create a 2D background for my Ogre scene that renders the camera frames from the QCAR SDK. This is on an iPad with iOS 6.
At the moment I'm retrieving the pixel data like so in renderFrameQCAR:
const QCAR::Image *image = camFrame.getImage(1);
if (image) {
    pixels = (unsigned char *)image->getPixels();
}
This returns pixels in the RGB888 format; I then pass them to my Ogre scene in the renderOgre() function:
if (isUpdated)
    scene.setCameraFrame(pixels);
scene.m_pRoot->renderOneFrame();
The setCameraFrame(pixels) function consists of:
void CarScene::setCameraFrame(const unsigned char *pixels)
{
    HardwarePixelBufferSharedPtr pBuffer = m_pBackgroundTexture->getBuffer();
    pBuffer->lock(HardwareBuffer::HBL_DISCARD);
    const PixelBox& pBox = pBuffer->getCurrentLock();
    PixelBox *tmp = new PixelBox(screenWidth, screenHeight, 0, PF_R8G8B8, &pixels);
    pBuffer->blit(pBuffer, *tmp, pBox);
    pBuffer->unlock();
    delete tmp;
}
In this function I'm attempting to create a new PixelBox, copy the pixels into it, and then copy that over to the pixel buffer.
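For reference, the standard Ogre call for uploading raw memory into a texture is HardwarePixelBuffer::blitFromMemory(), which takes a memory-backed PixelBox (note depth 1, and the data pointer itself rather than its address). A minimal sketch using the question's names:
void CarScene::setCameraFrame(const unsigned char *pixels)
{
    // Wrap the raw RGB888 camera data; PF_R8G8B8 must match the source layout.
    PixelBox src(screenWidth, screenHeight, 1, PF_R8G8B8,
                 const_cast<unsigned char *>(pixels));

    // Copy from system memory into the texture's hardware buffer.
    m_pBackgroundTexture->getBuffer()->blitFromMemory(src);
}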
When I first create my Ogre3D scene, I set up the m_pBackgroundTexture and the background Rectangle2D like so:
void CarScene::createBackground()
{
    m_pBackgroundTexture = TextureManager::getSingleton().createManual("DynamicTexture", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, TEX_TYPE_2D, m_pViewport->getActualWidth(), m_pViewport->getActualHeight(), 0, PF_R8G8B8, TU_DYNAMIC_WRITE_ONLY_DISCARDABLE);

    m_pBackgroundMaterial = MaterialManager::getSingleton().create("Background", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
    m_pBackgroundMaterial->getTechnique(0)->getPass(0)->createTextureUnitState("DynamicTexture");
    m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setSceneBlending(SBT_TRANSPARENT_ALPHA);
    m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setDepthCheckEnabled(false);
    m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setDepthWriteEnabled(false);
    m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setLightingEnabled(false);

    m_pBackgroundRect = new Rectangle2D(true);
    m_pBackgroundRect->setCorners(-1.0, 1.0, 1.0, -1.0);
    m_pBackgroundRect->setMaterial("Background");
    m_pBackgroundRect->setRenderQueueGroup(RENDER_QUEUE_BACKGROUND);

    AxisAlignedBox aabInf;
    aabInf.setInfinite();
    m_pBackgroundRect->setBoundingBox(aabInf);

    SceneNode* node = m_pSceneManager->getRootSceneNode()->createChildSceneNode();
    node->attachObject(m_pBackgroundRect);
}
After all this, I just get a white background with no texture, and I have no idea why the output is not displaying! My goal is simply to have the camera feed rendering in the background so I can project my 3D model onto it.
Thanks,
Harry.