Qt3D QAbstractTextureImage pixelated - c++

I use Qt3D in my project, and I need to change the texture of a plane dynamically. For this I use my own implementation of QAbstractTextureImage.
I do:
auto planeMaterial = new Qt3DExtras::QTextureMaterial();
Qt3DRender::QTexture2D *planeTexture = new Qt3DRender::QTexture2D();
auto *planeTextureImage = new PaintedTextureImage();
planeTextureImage->update();
planeTexture->addTextureImage(planeTextureImage);
planeMaterial->setTexture(planeTexture);
Qt3DCore::QTransform *planeTransform = new Qt3DCore::QTransform();
planeTransform->setRotationX(90);
planeTransform->setTranslation(QVector3D(0.0f, 0.0f, 15.0f));
auto planeEntity = new Qt3DCore::QEntity(this->rootEntity);
planeEntity->addComponent(mPlane);
planeEntity->addComponent(planeMaterial);
planeEntity->addComponent(planeTransform);
planeEntity->setEnabled(true);
This runs in my scene modifier; it adds a plane to the scene with a material that uses the texture. mPlane has width 4.0 and height 3.0. The image for the texture has a resolution of 640x480, so it is 4:3 as well.
void PaintedTextureImage::paint(QPainter *painter)
{
...
current = QImage((uchar*)color.data, color.cols, color.rows, color.step, QImage::Format_RGB888);
painter->drawImage(0, 0, current);
}
This is how 'current' looks if I save it to a file:
And this is how it looks when painted as a texture:
So the image quality becomes VERY bad and I can't understand why.

Solution:
planeTextureImage->setWidth(640);
planeTextureImage->setHeight(480);
The default was 256x256.

Related

Qt3d: Artifacts of displaying when applied Qt3DRender::QLayerFilter

I am trying to use layer filtering as shown in this answer. For this I wrote a simple test (see below). This is a continuation of that question.
At a certain position of the red sphere, an artifact appears, which looks like the view from another camera located at (0.0, 0.0, 0.0).
See screen:
In my example, the red sphere can be moved with the WASD keys.
See the red sphere at position (-7, 0, -14). How can I remove these artifacts?
The full test project can be viewed here.
main.cpp
int main(int argc, char *argv[])
{
QGuiApplication application(argc, argv);
My3DWindow window;
auto sphere1 = new Qt3DCore::QEntity(window.Scene());
auto sphere2 = new Qt3DCore::QEntity(window.Scene());
// material, transform, mesh initialisation
sphere1->addComponent(material1);
sphere1->addComponent(spheremesh1);
sphere1->addComponent(transform1);
sphere1->addComponent(window.OpaqueLayer());
sphere2->addComponent(material2);
sphere2->addComponent(spheremesh2);
sphere2->addComponent(transform2);
sphere2->addComponent(window.TransparentLayer());
window.show();
return application.exec();
}
My3DWindow class:
My3DWindow::My3DWindow(QScreen *screen):
Qt3DExtras::Qt3DWindow(screen)
{
m_Scene = new Qt3DCore::QEntity;
setRootEntity(m_Scene);
auto renderSurfaceSelector = new Qt3DRender::QRenderSurfaceSelector(m_Scene);
renderSurfaceSelector->setSurface(this);
auto clearBuffers = new Qt3DRender::QClearBuffers(renderSurfaceSelector);
clearBuffers->setBuffers(Qt3DRender::QClearBuffers::AllBuffers);
clearBuffers->setClearColor(Qt::gray);
auto viewport = new Qt3DRender::QViewport(renderSurfaceSelector);
auto cameraSelector = new Qt3DRender::QCameraSelector(viewport);
m_Camera = new Qt3DRender::QCamera(cameraSelector);
m_Camera->lens()->setPerspectiveProjection(45.0f, 16.0f/9.0f, 0.1f, 1000.0f);
m_Camera->setPosition(QVector3D(0.0f, 0.0f, 100.0f));
m_Camera->setViewCenter(QVector3D(0.0f, 0.0f, 0.0f));
cameraSelector->setCamera(m_Camera);
auto cameraController = new Qt3DExtras::QFirstPersonCameraController(m_Scene);
cameraController->setCamera(m_Camera);
m_OpaqueLayer = new Qt3DRender::QLayer;
auto opaqueFilter = new Qt3DRender::QLayerFilter(m_Camera);
opaqueFilter->addLayer(m_OpaqueLayer);
m_TransparentLayer = new Qt3DRender::QLayer;
auto transparentFilter = new Qt3DRender::QLayerFilter(m_Camera);
transparentFilter->addLayer(m_TransparentLayer);
setActiveFrameGraph(renderSurfaceSelector);
}
You can fix that by adding a QNoDraw node as a child of clearBuffers, as shown in this answer. The "artifact" is not caused by the layer filters, it is a problem of QClearBuffers itself.
Making clearBuffers a child of cameraSelector may seem to work on the surface, but what's actually happening is that everything is being rendered twice, so the transparent sphere will appear darker. You can verify this by commenting out either one of the filters: the objects in the corresponding layer will get rendered anyway.
By leaving clearBuffers as a child of renderSurfaceSelector and adding the QNoDraw, you don't get undesired stuff drawn on top of your viewport, and the filters behave as expected.
Fixed. I found an error in my original example, though I don't fully understand why this is the right thing to do:
auto clearBuffers = new Qt3DRender::QClearBuffers(cameraSelector);
instead of
auto clearBuffers = new Qt3DRender::QClearBuffers(renderSurfaceSelector);

How to create polygons to display running number in cocos2dx

I'm trying to create a node that is simply a rectangle with a number in it. And this is how I'm doing it now:
int size = 100, fontSize = 64;
auto node = DrawNode::create();
Vec2 vertices[] =
{
Vec2(0,size),
Vec2(size,size),
Vec2(size,0),
Vec2(0,0)
};
node->drawPolygon(vertices, 4, Color4F(1.0f,0.3f,0.3f,1), 0, Color4F(1.0f,1.0f,1.0f,1));
auto texture = new Texture2D();
int numberToDisplay = 2000;
std::string s = std::to_string(numberToDisplay);
texture->initWithString(s.c_str(), "fonts/Marker Felt.ttf", fontSize, Size(size, size), TextHAlignment::CENTER, TextVAlignment::CENTER);
auto textSprite = Sprite::createWithTexture(texture);
node->addChild(textSprite);
textSprite->setPosition(size/2, size/2);
Every time I want to change the number I have to re-create a textureSprite, remove the current child and add the new one. Is there a better way to do it?
I wonder whether you need any special features; if not, why not use LayerColor and LabelTTF?
LayerColor* node = LayerColor::create(Color4B(255, 85, 85, 255), 100, 100);
LabelTTF* label = LabelTTF::create(s, "fonts/Marker Felt.ttf", fontSize);
node->addChild(label);
Just change the content of the LabelTTF; there is no need to re-create the sprite.
You could use two different techniques to achieve this; both work well in my experience.
1. Use the texture cache to cache textures and change the sprite's texture at run time (good if you know exactly how many textures there are and they all have the same size). In your .h file, declare the textures:
Texture2D *startTexture, *endTexture, *midTexture;
In your .cpp file, initialize them:
startTexture = Director::getInstance()->getTextureCache()->addImage("start.png");
endTexture = Director::getInstance()->getTextureCache()->addImage("end.png");
midTexture = Director::getInstance()->getTextureCache()->addImage("middle.png");
After that, whenever you want to change the texture of a sprite, simply do:
textSprite->setTexture(startTexture);
For this to work, declare "textSprite" in your .h file as well for quick access.
Pitfall: changing the texture doesn't change the sprite's initial bounding box. If the initial texture was 32x32 and the new texture is 50x50, the extra 18 pixels in each dimension will be cropped automatically starting from the origin point, which might look bad. To overcome this, you also need to update the texture rect:
textSprite->setTextureRect(
Rect(0, 0, startTexture->getContentSize().width,
startTexture->getContentSize().height));
2. Use the sprite frame cache: put all your textures in a sprite sheet and load it into memory like this:
SpriteFrameCache *spriteCache = SpriteFrameCache::getInstance();
spriteCache->addSpriteFramesWithFile("test.plist", "test.png");
Now whenever you want to change the texture, do:
testSprite->setSpriteFrame(
(SpriteFrameCache::getInstance())->getSpriteFrameByName(
"newImage.png"));
This will first check the sprite frame cache for an image named "newImage.png"; if it is found in memory the cached frame is returned, otherwise getSpriteFrameByName returns nullptr.

Ambiguous results with Frame Buffers in libgdx

I am getting the following weird results with the FrameBuffer class in libgdx.
Here is the code that is producing this result:
// This is the rendering code
@Override
public void render(float delta) {
Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
stage.act();
stage.draw();
fbo.begin();
batch.begin();
batch.draw(heart, 0, 0);
batch.end();
fbo.end();
test = new Image(fbo.getColorBufferTexture());
test.setPosition(256, 256);
stage.addActor(test);
}
//This is the initialization code
@Override
public void show() {
stage = new Stage(Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);
atlas = Assets.getAtlas();
batch = new SpriteBatch();
background = new Image(atlas.findRegion("background"));
background.setFillParent(true);
heart = atlas.findRegion("fluttering");
fbo = new FrameBuffer(Pixmap.Format.RGBA8888, heart.getRegionWidth(), heart.getRegionHeight(), false);
stage.addActor(background);
Image temp = new Image(new TextureRegion(heart));
stage.addActor(temp);
}
Why is the heart I drew on the frame buffer flipped and smaller than the original, even though the frame buffer's width and height are the same as the image's (71 x 72)?
Your SpriteBatch is using the wrong projection matrix. Since you are rendering to a custom sized FrameBuffer you will have to manually set one.
projectionMatrix = new Matrix4();
projectionMatrix.setToOrtho2D(0, 0, heart.getRegionWidth(), heart.getRegionHeight());
batch.setProjectionMatrix(projectionMatrix);
Alternatively, you can give the frame buffer a width and height equal to that of the stage, like this:
fbo = new FrameBuffer(Pixmap.Format.RGBA8888, Gdx.graphics.getWidth(), Gdx.graphics.getHeight(), false);

Dynamic 2D Ogre3D Texture

I'm trying to create a 2D background for my Ogre scene that renders the camera frames from the QCAR SDK. This is on an iPad with iOS 6.
At the moment I'm retrieving the pixel data like so in renderFrameQCAR:
const QCAR::Image *image = camFrame.getImage(1);
if(image) {
pixels = (unsigned char *)image->getPixels();
}
This returns pixels in the RGB888 format, which I then pass to my Ogre scene in the renderOgre() function:
if(isUpdated)
scene.setCameraFrame(pixels);
scene.m_pRoot->renderOneFrame();
The setCameraFrame(pixels) function consists of:
void CarScene::setCameraFrame(const unsigned char *pixels)
{
HardwarePixelBufferSharedPtr pBuffer = m_pBackgroundTexture->getBuffer();
pBuffer->lock(HardwareBuffer::HBL_DISCARD);
const PixelBox& pBox = pBuffer->getCurrentLock();
PixelBox *tmp = new PixelBox(screenWidth, screenHeight, 0, PF_R8G8B8, &pixels);
pBuffer->blit(pBuffer, *tmp, pBox);
pBuffer->unlock();
delete tmp;
}
In this function I'm attempting to create a new PixelBox, copy the pixels into it, and then copy that over to the pixel buffer.
When I first create my Ogre3D scene, I set up the m_pBackgroundTexture & background rect2d like so:
void CarScene::createBackground()
{
m_pBackgroundTexture = TextureManager::getSingleton().createManual("DynamicTexture", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME, TEX_TYPE_2D, m_pViewport->getActualWidth(), m_pViewport->getActualHeight(), 0, PF_R8G8B8, TU_DYNAMIC_WRITE_ONLY_DISCARDABLE);
m_pBackgroundMaterial = MaterialManager::getSingleton().create("Background", ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->createTextureUnitState("DynamicTexture");
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setSceneBlending(SBT_TRANSPARENT_ALPHA);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setDepthCheckEnabled(false);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setDepthWriteEnabled(false);
m_pBackgroundMaterial->getTechnique(0)->getPass(0)->setLightingEnabled(false);
m_pBackgroundRect = new Rectangle2D(true);
m_pBackgroundRect->setCorners(-1.0, 1.0, 1.0, -1.0);
m_pBackgroundRect->setMaterial("Background");
m_pBackgroundRect->setRenderQueueGroup(RENDER_QUEUE_BACKGROUND);
AxisAlignedBox aabInf;
aabInf.setInfinite();
m_pBackgroundRect->setBoundingBox(aabInf);
SceneNode* node = m_pSceneManager->getRootSceneNode()->createChildSceneNode();
node->attachObject(m_pBackgroundRect);
}
After all this, I just get a white background with no texture, and I have no idea why the output is not displayed. My goal is simply to have the camera feed rendering in the background so I can project my 3D model onto it.
Thanks,
Harry.

OpenSceneGraph, HUDs, & Textured Qt Widget Problems

So I have a program which is a Qt app. I have some basic Qt GUI on the outside, but then I have a Qt widget that uses OpenSceneGraph to render a 3D scene. To make things more complicated, inside that screen I have a HUD that toggles on and off. This HUD consists of some graphic elements and then a Qt widget rendered to a texture.
I basically have that working, but I am having some size issues within the HUD/Qt widget. When I first toggle the HUD to visible, the Qt widget is there but way too big. I can change the size, but regardless, the first time I press a key the Qt widget is automatically resized to fit the textured area I give it (which is what I expect); however, this resized widget doesn't fit the area correctly. It's impossible to read the table that the widget contains.
To help, I have two screenshots. The first is before I type a key:
http://cerrnim.com/wp-content/uploads/2012/12/before.png
And the second is after I type a key:
http://cerrnim.com/wp-content/uploads/2012/12/after.png
Additionally, here are some code fragments showing how I create the HUD. It's of course part of a much larger program, but hopefully this is enough information.
/* AUX function to create HUD geo. */
osg::Geode* HUDGeometry( int x1, int y1, int x2, int y2,
std::string model, HUDEvents event, NetworkViewer* viewer ) {
osg::Geometry* quad = osg::createTexturedQuadGeometry(osg::Vec3(x1,y1,0),
osg::Vec3(x2-x1,0,0), osg::Vec3(0,y2-y1,0), 1, 1);
osg::Geode* geode = new osg::Geode( ) ;
geode->setName( model ) ;
geode->setUserData( new HUDEvent( event, viewer ) ) ;
osg::Texture2D* HUDTexture = new osg::Texture2D();
HUDTexture->setDataVariance(osg::Object::DYNAMIC);
osg::Image* hudImage = osgDB::readImageFile(model);
HUDTexture->setImage(hudImage);
geode->getOrCreateStateSet()->setTextureAttributeAndModes(
0, HUDTexture, osg::StateAttribute::ON);
geode->addDrawable( quad ) ;
return geode ;
}
/* Creates the HUD but does not display it yet. */
void NetworkViewer::initHUD( ) {
osg::MatrixTransform* mt = new osg::MatrixTransform;
osg::Camera* hudCamera = new osg::Camera;
hudCamera->setReferenceFrame(osg::Transform::ABSOLUTE_RF);
hudCamera->setViewMatrix(osg::Matrix::identity());
//hudCamera->setProjectionResizePolicy(osg::Camera::FIXED);
hudCamera->setProjectionMatrixAsOrtho2D(0,100,0,100);
hudCamera->setClearMask(GL_DEPTH_BUFFER_BIT);
hudCamera->setRenderOrder(osg::Camera::POST_RENDER);
hudCamera->setAllowEventFocus(true);
QWidget* widget = new QWidget;
layout = new QVBoxLayout( ) ;
widget->setLayout(layout);
widget->layout()->addWidget(((GUI*)view)->getTabs( ));
//widget->setGeometry(0, 0, 500, 400);
osg::ref_ptr<osgQt::QWidgetImage> widgetImage = new osgQt::QWidgetImage(widget);
osg::Geometry* quad = osg::createTexturedQuadGeometry(osg::Vec3(30,32,0),
osg::Vec3(40,0,0), osg::Vec3(0,35,0), 1, 1);
osg::Geode* geode = new osg::Geode;
geode->addDrawable(quad);
osg::Texture2D* texture = new osg::Texture2D(widgetImage.get());
texture->setResizeNonPowerOfTwoHint(false);
texture->setFilter(osg::Texture::MIN_FILTER,osg::Texture::LINEAR);
texture->setWrap(osg::Texture::WRAP_S, osg::Texture::CLAMP_TO_EDGE);
texture->setWrap(osg::Texture::WRAP_T, osg::Texture::CLAMP_TO_EDGE);
mt->getOrCreateStateSet()->setTextureAttributeAndModes(0, texture, osg::StateAttribute::ON);
mt->addChild(hudCamera);
osgViewer::InteractiveImageHandler* handler =
new osgViewer::InteractiveImageHandler(widgetImage.get(), texture, hudCamera);
mt->getOrCreateStateSet()->setMode(GL_LIGHTING, osg::StateAttribute::OFF);
mt->getOrCreateStateSet()->setMode(GL_BLEND, osg::StateAttribute::ON);
mt->getOrCreateStateSet()->setRenderingHint(osg::StateSet::TRANSPARENT_BIN);
mt->getOrCreateStateSet()->setAttribute(new osg::Program);
quad->setEventCallback(handler);
quad->setCullCallback(handler);
hudCamera->addChild( geode ) ;
hudCamera->addChild( HUDGeometry(73,73,75,75,
"models/quit.png", EXIT_OBJ, this ));
hudCamera->addChild( HUDGeometry(25,25,75,75,
"models/hud.png", NO_EVENT, this ));
osg::Group* overlay = new osg::Group;
overlay->addChild(mt);
root->addChild(overlay);
HUD = hudCamera ;
disableHUD( ) ;
}
I think the main problem is that you do not adapt the dimensions of the widget to the dimensions of the quad you render it on.
I'm not sure what QWidgetImage does internally, but I guess it just renders the widget onto a canvas of an appropriate size and converts the result into an image. In your code you map the complete image (regardless of its dimensions or aspect ratio) onto the quad. If you want the widget to fit the quad, you need to resize the widget before creating an image of it.