I'm trying to create a 2D background for my ogre scene that renders the camera frames for the QCAR SDK. This is on an iPad with iOS 6.
At the moment I'm retrieving the pixel data like so in renderFrameQCAR:
const QCAR::Image *image = camFrame.getImage(1);
if(image) {
pixels = (unsigned char *)image->getPixels();
}
This returns pixels in the RGB888 format, which I then pass to my Ogre scene in the renderOgre() function:
if(isUpdated)
scene.setCameraFrame(pixels);
scene.m_pRoot->renderOneFrame();
The setCameraFrame(pixels) function consists of:
void CarScene::setCameraFrame(const unsigned char *pixels)
{
HardwarePixelBufferSharedPtr pBuffer = m_pBackgroundTexture->getBuffer();
pBuffer->lock(HardwareBuffer::HBL_DISCARD);
const PixelBox& pBox = pBuffer->getCurrentLock();
PixelBox *tmp = new PixelBox(screenWidth, screenHeight, 0, PF_R8G8B8, &pixels);
pBuffer->blit(pBuffer, *tmp, pBox);
pBuffer->unlock();
delete tmp;
}
In this function I'm attempting to create a new PixelBox, copy the pixels into it, and then copy that over to the pixel buffer.
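For comparison, the usual Ogre idiom for uploading raw pixels is HardwarePixelBuffer::blitFromMemory, which locks the buffer for you; a minimal sketch (untested, assuming the same members as above):
void CarScene::setCameraFrame(const unsigned char *pixels)
{
    // Wrap the camera bytes; depth is 1 for a 2D image, and the PixelBox
    // takes the pixel pointer itself rather than its address.
    PixelBox src(screenWidth, screenHeight, 1, PF_R8G8B8,
                 const_cast<unsigned char *>(pixels));
    m_pBackgroundTexture->getBuffer()->blitFromMemory(src);
}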
When I first create my Ogre3D scene, I set up the m_pBackgroundTexture & background rect2d like so:
void CarScene::createBackground()
{
m_pBackgroundTexture = TextureManager::getSingleton().createManual(
    "DynamicTexture", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
    TEX_TYPE_2D, m_pViewport->getActualWidth(), m_pViewport->getActualHeight(),
    0, PF_R8G8B8, TU_DYNAMIC_WRITE_ONLY_DISCARDABLE);
m_pBackgroundMaterial = MaterialManager::getSingleton().create("Background", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->createTextureUnitState("DynamicTexture");
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setSceneBlending(SBT_TRANSPARENT_ALPHA);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setDepthCheckEnabled(false);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setDepthWriteEnabled(false);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setLightingEnabled(false);
m_pBackgroundRect = new Rectangle2D(true);
m_pBackgroundRect->setCorners(-1.0, 1.0, 1.0, -1.0);
m_pBackgroundRect->setMaterial("Background");
m_pBackgroundRect->setRenderQueueGroup(RENDER_QUEUE_BACKGROUND);
AxisAlignedBox aabInf;
aabInf.setInfinite();
m_pBackgroundRect->setBoundingBox(aabInf);
SceneNode* node = m_pSceneManager->getRootSceneNode()->createChildSceneNode();
node->attachObject(m_pBackgroundRect);
}
After this, all I get is a white background with no texture, and I have no idea why it is not displaying the output! My goal is just to have the camera feed rendering in the background so I can project my 3D model onto it.
Thanks,
Harry.
I use Qt3D in my project, and I need to change the texture of a plane dynamically. For this I use my own implementation of QAbstractTextureImage.
I do:
auto planeMaterial = new Qt3DExtras::QTextureMaterial();
Qt3DRender::QTexture2D *planeTexture = new Qt3DRender::QTexture2D();
auto *planeTextureImage = new PaintedTextureImage();
planeTextureImage->update();
planeTexture->addTextureImage(planeTextureImage);
planeMaterial->setTexture(planeTexture);
Qt3DCore::QTransform *planeTransform = new Qt3DCore::QTransform();
planeTransform->setRotationX(90);
planeTransform->setTranslation(QVector3D(0.0f, 0.0f, 15.0f));
auto planeEntity = new Qt3DCore::QEntity(this->rootEntity);
planeEntity->addComponent(mPlane);
planeEntity->addComponent(planeMaterial);
planeEntity->addComponent(planeTransform);
planeEntity->setEnabled(true);
This is in my scene modifier, so it adds the plane to the scene with a material using the texture. mPlane has width 4.0 and height 3.0. The image for the texture has a resolution of 640x480, so it is 4:3 as well.
void PaintedTextureImage::paint(QPainter *painter)
{
...
current = QImage((uchar*)color.data, color.cols, color.rows, color.step, QImage::Format_RGB888);
painter->drawImage(0, 0, current);
}
This is how 'current' looks if I save it to file:
And this is how it looks when painted as a texture:
So the image quality became VERY bad and I can't understand why.
Solution:
planeTextureImage->setWidth(640);
planeTextureImage->setHeight(480);
Default was 256x256.
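For context, the fix slots into the construction code from the first snippet; a minimal sketch, assuming PaintedTextureImage derives from Qt3DRender::QPaintedTextureImage (which its paint(QPainter*) override suggests):
auto *planeTextureImage = new PaintedTextureImage();
planeTextureImage->setWidth(640);   // size the paint surface to the source image
planeTextureImage->setHeight(480);  // before the first update(); otherwise the
planeTextureImage->update();        // default 256x256 surface is used and upscaled
planeTexture->addTextureImage(planeTextureImage);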
My task: Calculate the pixel coordinates (e.g. make a snapshot) of a 3D mesh to find the 2D shape of this mesh from a specific camera angle.
I'm currently using Qt3D with a QGeometryRenderer to render a scene containing a mesh to a QWidget which works fine.
I tried to render the content of the QWidget into a pixmap with QWidget::render() as proposed in this post: How to create screenshot of QWidget?. Saving the pixmap to a .jpg results in a blank image with the default background color, which makes sense because the QWidget does not hold the mesh object itself.
Here is how the scene is set in my mainwindow.cpp
// sets the scene objects, camera, lights,...
void MainWindow::setScene() {
scene = custommesh->createScene(mesh->getVertices(),
mesh->getVerticesNormals(),
mesh->getFaceNormals(),
mesh->getVerticesIndex(),
mesh->getFacesIndex()); // QEntity*
custommesh->setMaterial(scene); // CustomMeshRenderer object
camera = custommesh->setCamera(view);
custommesh->setLight(scene, camera);
custommesh->setCamController(scene, camera);
view->setRootEntity(scene); // Qt3DExtras::Qt3DWindow object
// Setting up a QWiget working as a container for the view
QWidget *container = QWidget::createWindowContainer(view);
container->setMinimumSize(QSize(500, 500));
QSizePolicy policy = QSizePolicy(QSizePolicy::Policy(5), QSizePolicy::Policy(5));
policy.setHorizontalStretch(1);
policy.setVerticalStretch(1);
container->setSizePolicy(policy);
container->setObjectName("meshWidget");
this->ui->meshLayout->insertWidget(0, container);
}
As for the rendering here is the custommeshrenderer class where the QGeometryRenderer is defined and a QEntity* is returned when initializing the mesh.
#include "custommeshrenderer.h"
#include <Qt3DRender/QAttribute>
#include <Qt3DExtras>
#include <Qt3DRender/QGeometryRenderer>
CustommeshRenderer::CustommeshRenderer()
{
rootEntity = new Qt3DCore::QEntity;
customMeshEntity = new Qt3DCore::QEntity(rootEntity);
transform = new Qt3DCore::QTransform;
customMeshRenderer = new Qt3DRender::QGeometryRenderer;
customGeometry = new Qt3DRender::QGeometry(customMeshRenderer);
m_pVertexDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
m_pNormalDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
m_pColorDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::VertexBuffer, customGeometry);
m_pIndexDataBuffer = new Qt3DRender::QBuffer(Qt3DRender::QBuffer::IndexBuffer, customGeometry);
}
/**
Set vertices and their normals for the scene
@param vertices List with all vertices of the mesh
@param vertices_normals List with all vertex normals
@param face_normals List with all face normals
@param vertices_idx List with the indices for the vertices
@param faces_idx List with all indices for the faces
@return Entity to which the components were added
*/
Qt3DCore::QEntity *CustommeshRenderer::createScene(QList<QVector3D> vertices, QList<QVector3D> vertices_normals, QList<QVector3D> face_normals, QList<int> vertices_idx, QList<QVector3D> faces_idx) {
// Setting scale to 8.0
transform->setScale(8.0f);
// Setting all the colors to (200, 0, 0)
QList<QVector3D> color_list;
for(int i = 0; i < vertices.length(); i++) {
color_list.append(QVector3D(200.0f, 0.0f, 0.0f));
}
// Fill vertexBuffer with data which hold the vertices, normals and colors
// Buffer size: size of vertices list * 3 (x,y,z) * 4 (since x, y, z are floats, which need 4 bytes each)
vertexBufferData.resize(vertices.length() * 3 * (int)sizeof(float));
float *rawVertexArray = reinterpret_cast<float *>(vertexBufferData.data());
normalBufferData.resize(vertices_normals.length() * 3 * (int)sizeof(float));
float *rawNormalArray = reinterpret_cast<float *>(normalBufferData.data());
colorBufferData.resize(color_list.length() * 3 * (int)sizeof(float));
float *rawColorArray = reinterpret_cast<float *>(colorBufferData.data());
setRawVertexArray(rawVertexArray, vertices);
setRawNormalArray(rawNormalArray, vertices_normals);
setRawColorArray(rawColorArray, color_list);
//Fill indexBufferData with data which holds the triangulation information (patches/tris/lines)
indexBufferData.resize(faces_idx.length() * 3 * (int)sizeof(uint));
uint *rawIndexArray = reinterpret_cast<uint *>(indexBufferData.data());
setRawIndexArray(rawIndexArray, faces_idx);
//Set data to buffers
m_pVertexDataBuffer->setData(vertexBufferData);
m_pNormalDataBuffer->setData(normalBufferData);
m_pColorDataBuffer->setData(colorBufferData);
m_pIndexDataBuffer->setData(indexBufferData);
// Attributes
Qt3DRender::QAttribute *positionAttribute = new Qt3DRender::QAttribute();
positionAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
positionAttribute->setBuffer(m_pVertexDataBuffer);
// positionAttribute->setBuffer(m_pVertexDataBuffer.data());
positionAttribute->setDataType(Qt3DRender::QAttribute::Float);
positionAttribute->setDataSize(3);
positionAttribute->setByteOffset(0);
positionAttribute->setByteStride(3 * sizeof(float));
positionAttribute->setCount(vertices.length());
positionAttribute->setName(Qt3DRender::QAttribute::defaultPositionAttributeName());
Qt3DRender::QAttribute *normalAttribute = new Qt3DRender::QAttribute();
normalAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
normalAttribute->setBuffer(m_pNormalDataBuffer);
//normalAttribute->setBuffer(m_pNormalDataBuffer.data());
normalAttribute->setDataType(Qt3DRender::QAttribute::Float);
normalAttribute->setDataSize(3);
normalAttribute->setByteOffset(0);
normalAttribute->setByteStride(3 * sizeof(float));
normalAttribute->setCount(vertices.length());
normalAttribute->setName(Qt3DRender::QAttribute::defaultNormalAttributeName());
Qt3DRender::QAttribute* colorAttribute = new Qt3DRender::QAttribute();
colorAttribute->setAttributeType(Qt3DRender::QAttribute::VertexAttribute);
colorAttribute->setBuffer(m_pColorDataBuffer);
//colorAttribute->setBuffer(m_pColorDataBuffer.data());
colorAttribute->setDataType(Qt3DRender::QAttribute::Float);
colorAttribute->setDataSize(3);
colorAttribute->setByteOffset(0);
colorAttribute->setByteStride(3 * sizeof(float));
colorAttribute->setCount(vertices.length());
colorAttribute->setName(Qt3DRender::QAttribute::defaultColorAttributeName());
Qt3DRender::QAttribute *indexAttribute = new Qt3DRender::QAttribute();
indexAttribute->setAttributeType(Qt3DRender::QAttribute::IndexAttribute);
indexAttribute->setBuffer(m_pIndexDataBuffer);
//indexAttribute->setBuffer(m_pIndexDataBuffer.data());
indexAttribute->setDataType(Qt3DRender::QAttribute::UnsignedInt);
indexAttribute->setDataSize(3);
indexAttribute->setByteOffset(0);
indexAttribute->setByteStride(3 * sizeof(uint));
indexAttribute->setCount(face_normals.length());
customGeometry->addAttribute(positionAttribute);
customGeometry->addAttribute(normalAttribute);
/*customGeometry->addAttribute(colorAttribute);*/
customGeometry->addAttribute(indexAttribute);
//Set the final geometry and primitive type
customMeshRenderer->setPrimitiveType(Qt3DRender::QGeometryRenderer::Triangles);
customMeshRenderer->setVerticesPerPatch(3);
customMeshRenderer->setGeometry(customGeometry);
customMeshRenderer->setVertexCount(faces_idx.length()*3);
customMeshEntity->addComponent(customMeshRenderer);
customMeshEntity->addComponent(transform);
setMaterial(customMeshEntity);
return rootEntity;
}
What is the best way to access the framebuffer, or is there any other method to take a snapshot of the mesh?
My last hope would be to implement the rendering pipeline (at least from projected coordinates to pixel coordinates) myself, but I would prefer another solution. Unfortunately I have to rely on Qt3D and can't switch to other classes like QOpenGLWidget; at least I haven't found a possibility to integrate it yet.
I'm pretty new to Qt3D and the lack of detailed documentation doesn't make it easier.
You can use QRenderCapture for this. This essentially does a glReadPixels for you. The documentation is a bit sparse on this one, but there is an example online.
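A minimal sketch of that approach, assuming "view" is the Qt3DExtras::Qt3DWindow from the question; one commonly shown arrangement puts the QRenderCapture above the existing frame graph:
#include <Qt3DRender/QRenderCapture>

// Insert the capture node into the frame graph (it must be part of it).
Qt3DRender::QRenderCapture *capture = new Qt3DRender::QRenderCapture;
view->activeFrameGraph()->setParent(capture);
view->setActiveFrameGraph(capture);

// Later: request a capture; the reply completes asynchronously once a
// frame has been rendered.
Qt3DRender::QRenderCaptureReply *reply = capture->requestCapture();
QObject::connect(reply, &Qt3DRender::QRenderCaptureReply::completed, [reply]() {
    reply->image().save("snapshot.png"); // the rendered frame as a QImage
    reply->deleteLater();
});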
Alternatively, I implemented an offline renderer, which could help you in case you don't want a whole 3D window.
I'm not sure what you mean by
Calculate the pixel coordinates (e.g. make a snapshot) of a 3D mesh to find the 2D shape of this mesh from a specific camera angle
but if you want, for example, to render the whole mesh in only one color (without highlights), you could try QPerVertexColorMaterial, which gave me exactly that result.
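A minimal sketch of that route, assuming the commented-out colorAttribute from the question is added back to the geometry so each vertex carries a color:
#include <Qt3DExtras/QPerVertexColorMaterial>

// Material that takes its color from the per-vertex color attribute,
// without specular highlights.
auto *material = new Qt3DExtras::QPerVertexColorMaterial(rootEntity);
customMeshEntity->addComponent(material);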
I am trying to display a point cloud, consisting of vertices and color, with OSG. Displaying a static point cloud is rather easy with this guide.
But I have not been able to update such a point cloud. My intention is to create a geometry and attach it to my viewer class once.
This is the mentioned method, which is called once at the beginning.
The OSGWidget strongly depends on this OpenGLWidget-based approach.
void OSGWidget::attachGeometry(osg::ref_ptr<osg::Geometry> geom)
{
osg::Geode* geode = new osg::Geode;
geom->setDataVariance(osg::Object::DYNAMIC);
geom->setUseDisplayList(false);
geom->setUseVertexBufferObjects(true);
bool addDrawSuccess = geode->addDrawable(geom.get()); // Adding Drawable Shape to the geometry node
if (!addDrawSuccess)
{
throw "Adding Drawable failed!";
}
{
osg::StateSet* stateSet = geode->getOrCreateStateSet();
stateSet->setMode(GL_LIGHTING, osg::StateAttribute::OFF);
}
float aspectRatio = static_cast<float>(this->width()) / static_cast<float>(this->height());
// Setting up the camera
osg::Camera* camera = new osg::Camera;
camera->setViewport(0, 0, this->width(), this->height());
camera->setClearColor(osg::Vec4(0.f, 0.f, 0.f, 1.f)); // Kind of Backgroundcolor, clears the buffer and sets the default color (RGBA)
camera->setProjectionMatrixAsPerspective(30.f, aspectRatio, 1.f, 1000.f); // Create perspective projection
camera->setGraphicsContext(graphicsWindow_); // embed
osgViewer::View* view = new osgViewer::View;
view->setCamera(camera); // Set the defined camera
view->setSceneData(geode); // Set the geometry
view->addEventHandler(new osgViewer::StatsHandler);
osgGA::TrackballManipulator* manipulator = new osgGA::TrackballManipulator;
manipulator->setAllowThrow(false);
view->setCameraManipulator(manipulator);
///////////////////////////////////////////////////
// Set the viewer
//////////////////////////////////////////////////
viewer_->addView(view);
viewer_->setThreadingModel(osgViewer::CompositeViewer::SingleThreaded);
viewer_->realize();
this->setFocusPolicy(Qt::StrongFocus);
this->setMinimumSize(100, 100);
this->setMouseTracking(true);
}
After I have 'attached' the geometry, I try to update it like this:
void PointCloudViewOSG::processData(DepthDataSet depthData)
{
if (depthData.points()->empty())
{
return; // empty cloud, cannot do anything
}
const DepthDataSet::IndexPtr::element_type& index = *depthData.index();
const size_t nPixel = depthData.points().get()->points.size();
if (depthData.intensity().isValid() && !index.empty() )
{
for (int i = 0; i < nPixel; i++)
{
float x = depthData.points().get()->points[i].x;
float y = depthData.points().get()->points[i].y;
float z = depthData.points().get()->points[i].z;
m_vertices->push_back(osg::Vec3(x
, y
, z));
// 32 bit integer variable containing the rgb (8 bit per channel) value
uint32_t rgb_val_;
memcpy(&rgb_val_, &(depthData.points().get()->points[i].rgb), sizeof(uint32_t));
uint32_t red, green, blue;
blue = rgb_val_ & 0x000000ff;
rgb_val_ = rgb_val_ >> 8;
green = rgb_val_ & 0x000000ff;
rgb_val_ = rgb_val_ >> 8;
red = rgb_val_ & 0x000000ff;
m_colors->push_back(
osg::Vec4f((float)red / 255.0f,
(float)green / 255.0f,
(float)blue / 255.0f,
1.0f)
);
}
m_geometry->setVertexArray(m_vertices.get());
m_geometry->setColorArray(m_colors.get());
m_geometry->setColorBinding(osg::Geometry::BIND_PER_VERTEX);
m_geometry->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::POINTS, 0, m_vertices->size()));
}
}
My guess is that
addPrimitiveSet(...)
should not be called every time I update the geometry.
Or could it be the attachment of the geometry, so that I have to reattach it every time?
The Point Cloud Library (PCL) is unfortunately not an alternative because of some incompatibilities with my application.
Update: When I reattach the geometry to the OSGWidget class,
calling
this->attachGeometry(m_geometry)
after
m_geometry->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::POINTS, 0, m_vertices->size()));
I get my point cloud visible, but this procedure is definitely wrong since I lose way too much performance and the display driver crashes.
You need to set the array and add the primitive set only once; after that you can update the vertices like this:
osg::Vec3Array* vx = static_cast<osg::Vec3Array*>(m_vertices);
for (int i = 0; i < nPixel; i++)
{
float x, y, z;
// fill with your data...
(*vx)[i].set(x, y, z);
}
m_vertices->dirty();
The same goes for colors and other arrays.
As you're using VBOs, you don't need to call dirtyDisplayList().
If you instead need to recompute the bounding box of the geometry, call
m_geometry->dirtyBound()
In case the number of points changes between updates, you can push new vertices into the array if its size is too small, and update the PrimitiveSet count like this:
osg::DrawArrays* drawArrays = static_cast<osg::DrawArrays*>(m_geometry->getPrimitiveSet(0));
drawArrays->setCount(nPixel);
drawArrays->dirty();
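And for the first half, growing the vertex array before refilling it; a hedged sketch reusing the member names from the question:
// Grow the array only when the cloud gets bigger; existing slots are
// overwritten in place as in the loop above.
if (m_vertices->size() < nPixel)
    m_vertices->resize(nPixel);
// ... fill (*m_vertices)[i].set(x, y, z) as above ...
m_vertices->dirty(); // schedule the VBO re-upload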
rickvikings solution works - I only had one issue... (OSG 3.6.1 on OSX)
I had to modify the m_vertices array directly; OSG would crash if I used the static_cast method above to modify the vertices array:
osg::Vec3Array* vx = static_cast<osg::Vec3Array*>(m_vertices);
For some reason OSG would not create a buffer object in the vertices array class when using the static_cast approach.
I am a newbie with gameplay3d and went through all the tutorials; however, I can't manage to display this simple model (few polygons, one material) that I encoded from FBX. I checked the model with Unity3D and with a closed-source application that uses gameplay3d, and all seems to be fine. I guess I am missing some detail when loading the scene.
Here is the model file, including the original FBX file; I suspect it has something to do with lighting:
https://www.dropbox.com/sh/ohgpsfnkm3iv24s/AACApRcxwtbmpKu4_5nnp8rZa?dl=0
This is the class that loads the scene.
#include "Demo.h"
// Declare our game instance
Demo game;
Demo::Demo()
: _scene(NULL), _wireframe(false)
{
}
void Demo::initialize()
{
// Load game scene from file
Bundle* bundle = Bundle::create("KGN56AI30N.gpb");
_scene = bundle->loadScene();
SAFE_RELEASE(bundle);
// Create a perspective camera for the scene
Camera* camera = Camera::createPerspective(45.0f, getAspectRatio(), 1.0f, 20.0f);
Node* cameraNode = _scene->addNode("camera");
// Attach the camera to a node. This determines the position of the camera.
cameraNode->setCamera(camera);
// Make this the active camera of the scene.
_scene->setActiveCamera(camera);
SAFE_RELEASE(camera);
// Move the camera to look at the origin.
cameraNode->translate(0,0, 10);
cameraNode->rotateX(MATH_DEG_TO_RAD(0.25f));
// Update the aspect ratio for our scene's camera to match the current device resolution
_scene->getActiveCamera()->setAspectRatio(getAspectRatio());
Light* directionalLight = Light::createDirectional(Vector3::one());
_directionalLightNode = Node::create("directionalLight");
_directionalLightNode->setLight(directionalLight);
SAFE_RELEASE(directionalLight);
_scene->addNode(_directionalLightNode);
_scene->setAmbientColor(1.0, 1.0, 1.0);
_scene->visit(this, &Demo::initializeMaterials);
}
bool Demo::initializeMaterials(Node* node)
{
Model* model = dynamic_cast<Model*>(node->getDrawable());
if (model)
{
for(int i=0;i<model->getMeshPartCount();i++)
{
Material* material = model->getMaterial(i);
if(material)
{
// For this sample we will only bind a single light to each object in the scene.
MaterialParameter* colorParam = material->getParameter("u_directionalLightColor[0]");
colorParam->setValue(Vector3(0.75f, 0.75f, 0.75f));
MaterialParameter* directionParam = material->getParameter("u_directionalLightDirection[0]");
directionParam->setValue(Vector3(1, 1, 1));
}
}
}
return true;
}
void Demo::finalize()
{
SAFE_RELEASE(_scene);
}
void Demo::update(float elapsedTime)
{
// Rotate model
//_scene->findNode("box")->rotateY(MATH_DEG_TO_RAD((float)elapsedTime / 1000.0f * 180.0f));
}
void Demo::render(float elapsedTime)
{
// Clear the color and depth buffers
clear(CLEAR_COLOR_DEPTH, Vector4::zero(), 1.0f, 0);
// Visit all the nodes in the scene for drawing
_scene->visit(this, &Demo::drawScene);
}
bool Demo::drawScene(Node* node)
{
// If the node visited contains a drawable object, draw it
Drawable* drawable = node->getDrawable();
if (drawable)
drawable->draw(_wireframe);
return true;
}
void Demo::keyEvent(Keyboard::KeyEvent evt, int key)
{
if (evt == Keyboard::KEY_PRESS)
{
switch (key)
{
case Keyboard::KEY_ESCAPE:
exit();
break;
}
}
}
void Demo::touchEvent(Touch::TouchEvent evt, int x, int y, unsigned int contactIndex)
{
switch (evt)
{
case Touch::TOUCH_PRESS:
_wireframe = !_wireframe;
break;
case Touch::TOUCH_RELEASE:
break;
case Touch::TOUCH_MOVE:
break;
};
}
I can't download your dropbox .fbx file. How many models do you have in the scene? Here's a simple way of doing what you want to do -- not optimal, but it'll get you started...
So first off, I can't see where in your code you actually assign a Shader to be used with the material. I use something like this:
material = model->setMaterial("Shaders/Animation/ADSVertexViewAnim.vsh", "Shaders/Animation/ADSVertexViewAnim.fsh");
You need to assign a Shader, and the above code will take the vertex and fragment shaders and use that when the object needs to be drawn.
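For a first sanity check, a hedged sketch using gameplay3d's stock shaders (the paths assume the standard res/shaders layout shipped with the framework); with no light defines this draws a flat solid color, which is the simplest way to verify that the mesh itself renders:
// Stock colored shader; u_diffuseColor is its color uniform.
Material* material = model->setMaterial("res/shaders/colored.vert",
                                        "res/shaders/colored.frag");
material->setParameterAutoBinding("u_worldViewProjectionMatrix",
                                  "WORLD_VIEW_PROJECTION_MATRIX");
material->getParameter("u_diffuseColor")->setValue(Vector4(0.8f, 0.8f, 0.8f, 1.0f));
material->getStateBlock()->setDepthTest(true);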
I went about it a slightly different way: instead of loading the scene file automatically, I create an empty scene and then extract my model from the bundle and add it to the scene manually. That way, I can see exactly what is happening and I'm in control of each step. GamePlay3D has some fancy property files, but use them only once you know how the process works manually.
Initially, I created a simple cube in a scene; then I created a scene manually and added the monkey to the node graph, as follows:
void GameMain::ExtractFromBundle()
{
/// Create a new empty scene.
_scene = Scene::create();
// Create the Model and its Node
Bundle* bundle = Bundle::create("res/monkey.gpb"); // Create the bundle from GPB file
/// Create the Monkey
{
Mesh* meshMonkey = bundle->loadMesh("Character_Mesh"); // Load the mesh from the bundle
Model* modelMonkey = Model::create(meshMonkey);
Node* nodeMonkey = _scene->addNode("Monkey");
nodeMonkey->setTranslation(0,0,0);
nodeMonkey->setDrawable(modelMonkey);
}
}
Then I want to search the scene graph and only assign a material to the object that I want to draw (the monkey). Use this if you want to assign different materials to different objects manually...
bool GameMain::initializeScene(Node* node)
{
Material* material;
std::cout << node->getId() << std::endl;
// find the node in the scene
if (strcmp(node->getId(), "Monkey") != 0)
return false;
Model* model = dynamic_cast<Model*>(node->getDrawable());
if( !model )
return false;
material = model->setMaterial("Shaders/Animation/ADSVertexViewAnim.vsh", "Shaders/Animation/ADSVertexViewAnim.fsh");
material->getStateBlock()->setCullFace(true);
material->getStateBlock()->setDepthTest(true);
material->getStateBlock()->setDepthWrite(true);
// The World-View-Projection Matrix is needed to be able to see view the 3D world thru the camera
material->setParameterAutoBinding("u_worldViewProjectionMatrix", "WORLD_VIEW_PROJECTION_MATRIX");
// This matrix is necessary to calculate normals properly, but the WORLD_MATRIX would also work
material->setParameterAutoBinding("u_worldViewMatrix", "WORLD_VIEW_MATRIX");
material->setParameterAutoBinding("u_viewMatrix", "VIEW_MATRIX");
return true;
}
Now the object is ready to be drawn.... so I use these functions:
void GameMain::render(float elapsedTime)
{
// Clear the color and depth buffers
clear(CLEAR_COLOR_DEPTH, Vector4(0.0, 0.0, 0.0, 0.0), 1.0f, 0);
// Visit all the nodes in the scene for drawing
_scene->visit(this, &GameMain::drawScene);
}
bool GameMain::drawScene(Node* node)
{
// If the node visited contains a drawable object, draw it
Drawable* drawable = node->getDrawable();
if (drawable)
drawable->draw(_wireframe);
return true;
}
I use my own shaders, so I don't have to worry about Light and DirectionalLight and all that stuff. Once I can see the object, then I'll add dynamic lights, etc, but for starters, start simple.
Regards.
So I have some original image.
I need to get and display some part of this image using a specific mask. The mask is not a rectangle or a single shape; it contains different polygons and shapes.
Are there any methods or tutorials on how to implement that, or where should I start? Should I write a small shader, compare the corresponding pixels of the images, or something like that?
I will be glad for any advice.
Thank you
You can put your image in a UIImageView and add a mask to that image view with something like
myImageView.layer.mask = myMaskLayer;
To draw your custom shapes, myMaskLayer should be an instance of your own subclass of CALayer that implements the drawInContext method. The method should draw the areas to be shown in the image with full alpha and the areas to be hidden with zero alpha (and can also use in between alphas if you like). Here's an example implementation of a CALayer subclass that, when used as a mask on the image view, will cause only an oval area in the top left corner of the image to be shown:
@interface MyMaskLayer : CALayer
@end
@implementation MyMaskLayer
- (void)drawInContext:(CGContextRef)ctx {
static CGColorSpaceRef rgbColorSpace;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
rgbColorSpace = CGColorSpaceCreateDeviceRGB();
});
const CGRect selfBounds = self.bounds;
CGContextSetFillColorSpace(ctx, rgbColorSpace);
CGContextSetFillColor(ctx, (CGFloat[4]){0.0, 0.0, 0.0, 0.0});
CGContextFillRect(ctx, selfBounds);
CGContextSetFillColor(ctx, (CGFloat[4]){0.0, 0.0, 0.0, 1.0});
CGContextFillEllipseInRect(ctx, selfBounds);
}
@end
To apply such a mask layer to an image view, you might use code like this
MyMaskLayer *maskLayer = [[MyMaskLayer alloc] init];
maskLayer.bounds = self.view.bounds;
[maskLayer setNeedsDisplay];
self.imageView.layer.mask = maskLayer;
Now, if you want to get pixel data from a rendering like this it's best to render into a CGContext backed by a bitmap where you control the pixel buffer. You can inspect the pixel data in your buffer after rendering. Here's an example where I create a bitmap context and render a layer and mask into that bitmap context. Since I supply my own buffer for the pixel data, I can then go poking around in the buffer to get RGBA values after rendering.
const CGRect imageViewBounds = self.imageView.bounds;
const size_t imageWidth = CGRectGetWidth(imageViewBounds);
const size_t imageHeight = CGRectGetHeight(imageViewBounds);
const size_t bytesPerPixel = 4;
// Allocate your own buffer for the bitmap data. You can pass NULL to
// CGBitmapContextCreate and then Quartz will allocate the memory for you, but
// you won't have a pointer to the resulting pixel data you want to inspect!
unsigned char *bitmapData = (unsigned char *)malloc(imageWidth * imageHeight * bytesPerPixel);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(bitmapData, imageWidth, imageHeight, 8, bytesPerPixel * imageWidth, rgbColorSpace, kCGImageAlphaPremultipliedLast);
// Render the image into the bitmap context.
[self.imageView.layer renderInContext:context];
// Set the blend mode to destination in, pixels become "destination * source alpha"
CGContextSetBlendMode(context, kCGBlendModeDestinationIn);
// Render the mask into the bitmap context.
[self.imageView.layer.mask renderInContext:context];
// Whatever you like here, I just chose the middle of the image.
const size_t x = imageWidth / 2;
const size_t y = imageHeight / 2;
const size_t pixelIndex = (y * imageWidth + x) * bytesPerPixel; // byte offset: 4 bytes per pixel
const unsigned char red = bitmapData[pixelIndex];
const unsigned char green = bitmapData[pixelIndex + 1];
const unsigned char blue = bitmapData[pixelIndex + 2];
const unsigned char alpha = bitmapData[pixelIndex + 3];
NSLog(@"rgba: {%u, %u, %u, %u}", red, green, blue, alpha);