VTK: rotate a 3D model and capture its depth map

Currently, based on the VTK ZBuffer example, I iteratively rotate the 3D model and capture the depth map each time. The issue is that although the model rotates, the output images all contain the first depth map.
int main(int argc, char *argv[]){
    // ... variable declaration/initialization
    // read off file
    offReader->SetFileName(argv[1]);
    offReader->Update();
    int step = 30; std::string out;
    for (int i = 0; i < 3; i++) {
        mapper->NewInstance();
        actor->NewInstance();
        renWin->NewInstance();
        renderer->NewInstance();
        mapper->SetInputData(polyData);
        actor->SetMapper(mapper);
        out = std::to_string(i);
        actor->RotateZ(step*i);
        renWin->AddRenderer(renderer);
        renderer->AddActor(actor);
        renderer->SetBackground(1, 1, 1);
        renWin->Render();
        // Create depth map
        filter->NewInstance();
        scale->NewInstance();
        imageWriter->NewInstance();
        filter->SetInput(renWin);
        filter->SetMagnification(3);
        filter->SetInputBufferTypeToZBuffer(); // Extract z-buffer values
        filter->Update();
        scale->SetOutputScalarTypeToUnsignedChar();
        scale->SetInputConnection(filter->GetOutputPort());
        scale->SetShift(0);
        scale->SetScale(-255);
        scale->Update();
        std::string out1 = out + "_depth.bmp";
        std::cout << " " << out1 << std::endl;
        // Write depth map as a .bmp image
        imageWriter->SetFileName(out1.c_str());
        imageWriter->SetInputConnection(scale->GetOutputPort());
        imageWriter->Update();
        imageWriter->Write();
        filter->RemoveAllInputs();
        scale->RemoveAllInputs();
        imageWriter->RemoveAllInputs();
        renderer->RemoveActor(actor);
        renWin->RemoveRenderer(renderer);
        // ... remaining script
    }
The output depth maps, 0_depth.bmp, 1_depth.bmp, and 2_depth.bmp, are all identical.
Has anyone encountered the same issue? If yes, what could be a potential solution?

Problem solved by introducing a function within which the rotation took place. Apparently it was a matter of a stale variable not being updated, which could also be solved in a more straightforward way.
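For anyone hitting the same thing, here is a minimal sketch of that more straightforward route (assuming the same reader/mapper/actor variables as in the question): build the pipeline once, then rotate the actor and call Modified() on the window-to-image filter each iteration so it grabs a fresh capture instead of reusing its cached output.

    // Build the pipeline once, outside the loop.
    mapper->SetInputData(polyData);
    actor->SetMapper(mapper);
    renderer->AddActor(actor);
    renderer->SetBackground(1, 1, 1);
    renWin->AddRenderer(renderer);
    filter->SetInput(renWin);
    filter->SetInputBufferTypeToZBuffer();
    scale->SetInputConnection(filter->GetOutputPort());
    scale->SetOutputScalarTypeToUnsignedChar();
    scale->SetShift(0);
    scale->SetScale(-255);
    imageWriter->SetInputConnection(scale->GetOutputPort());

    for (int i = 0; i < 3; i++) {
        actor->RotateZ(step);  // RotateZ accumulates: 30, 60, 90 degrees
        renWin->Render();
        filter->Modified();    // force the filter to re-capture the window
        filter->Update();
        imageWriter->SetFileName((std::to_string(i) + "_depth.bmp").c_str());
        imageWriter->Write();
    }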

Related

Copy Freetype Bitmap into Magick::Image at specific offset

In my game engine, I have a texture loading API which wraps low-level libraries like OpenGL, DirectX, etc. This API uses Magick++ because I found it to be a convenient cross-platform solution that also lets me create procedural textures fairly easily.
I'm now adding a text rendering system using FreeType, where I want to use this texture API to dynamically generate a texture atlas for any given font, with all the glyphs stored horizontally adjacent.
I have been able to get this to work in the past by buffering the bitmaps directly into OpenGL. But now I want to accomplish this in a platform-independent way, using this API.
I've looked around for a few examples but I can't find anything quite like what I'm after, so if there are any Magick++ experts around, I'd really appreciate some pointers.
So in simple terms: I've got a FreeType bitmap and I want to be able to copy its pixel buffer to a specific offset inside a Magick::Image.
This code might help to clarify:
auto texture = e6::textures->create(e6::texture::specification{}, [name, totalWidth, maxHeight](){
    // Initialise FreeType
    FT_Face face;
    FT_Library ft;
    if (FT_Init_FreeType(&ft)) {
        std::cout << "ERROR::FREETYPE: Could not init FreeType Library" << std::endl;
    }
    if (int error = FT_New_Face(ft, path(name.c_str()).c_str(), 0, &face)) {
        std::cout << "Failed to initialise fonts: " << name << std::endl;
        throw std::exception();
    }
    // Set the size of the font
    FT_Set_Pixel_Sizes(face, 0, 100);
    unsigned int cursor = 0; // Keeps track of the horizontal offset.
    // Preallocate an image buffer;
    // totalWidth and maxHeight give the size of the entire atlas.
    Magick::Image image(Magick::Geometry(totalWidth, maxHeight), "BLACK");
    image.type(Magick::GrayscaleType);
    image.magick("BMP");
    image.depth(8);
    image.modifyImage();
    Magick::Pixels view(image);
    // Loop through a subset of the ASCII codes
    for (uint8_t c = 32; c < 128; c++) {
        if (FT_Load_Char(face, c, FT_LOAD_RENDER)) {
            std::cout << "Failed to load glyph: " << c << std::endl;
            continue;
        }
        // Just for clarification...
        unsigned int width = face->glyph->bitmap.width;
        unsigned int height = face->glyph->bitmap.rows;
        unsigned char* image_data = face->glyph->bitmap.buffer;
        // This is the problem part.
        // How can I copy the image_data into `image` at the cursor position?
        cursor += width; // Advance the cursor
    }
    image.write(std::string(TEXTURES) + "font-test.bmp"); // Write to filesystem
    // Clean up FreeType
    FT_Done_Face(face);
    FT_Done_FreeType(ft);
    return image;
}, "font-" + name);
I tried using a pixel cache, as the documentation demonstrates:
Magick::Quantum *pixels = view.get(cursor, 0, width, height);
*pixels = *image_data;
view.sync();
But this leaves me with a completely black image, I think because the image_data goes out of scope.
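For what it's worth, a single dereference like *pixels = *image_data only assigns the first pixel of the region. A direct copy needs a per-pixel loop; here is a rough sketch, assuming the cache exposes one Quantum per pixel for this grayscale image (the channel layout can differ per ImageMagick build) and using bitmap.pitch for the source row stride:

    Magick::Quantum *pixels = view.get(cursor, 0, width, height);
    for (unsigned int y = 0; y < height; ++y) {
        for (unsigned int x = 0; x < width; ++x) {
            // bitmap.pitch is the byte count per bitmap row, which may
            // differ from width; scale each 0-255 byte to the Quantum range.
            unsigned char value = image_data[y * face->glyph->bitmap.pitch + x];
            pixels[y * width + x] = value * (QuantumRange / 255);
        }
    }
    view.sync();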
I was hoping there'd be a way to modify the image data directly but after a lot of trial and error, I ended up just creating an image for each glyph and compositing them together:
...
Magick::Image glyph (Magick::Geometry(), "BLACK");
glyph.type(MagickCore::GrayscaleType);
glyph.magick("BMP");
glyph.depth(8);
glyph.read(width, height, "R", Magick::StorageType::CharPixel, image_data);
image.composite(glyph, cursor, 0);
cursor += width;
At the very least, I hope this helps to prevent someone else going down the same rabbit hole I did.

OSG: Why is there a texture coordinate array but not the texture itself?

I am trying to get the texture file name from an osg::Geometry. I get the texture coordinates like this:
osg::Geometry* geom = dynamic_cast<osg::Geometry*>(drawable);
const osg::Geometry::ArrayList& texCoordArrayList = dynamic_cast<const osg::Geometry::ArrayList&>(geom->getTexCoordArrayList());
auto texCoordArrayListSize = texCoordArrayList.size();
auto sset = geom->getOrCreateStateSet();
processStateSet(sset);
for (size_t k = 0; k < texCoordArrayListSize; k++)
{
    const osg::Vec2Array* texCoordArray = dynamic_cast<const osg::Vec2Array*>(geom->getTexCoordArray(k));
    // doing sth with vertexarray, normalarray and texCoordArray
}
But I am not able to get the texture file name in the processStateSet() function. I took the processStateSet code from the OSG examples (specifically the osganalysis example). Even though there is a texture file, sometimes it works and gets the name, and sometimes it does not. Here is my processStateSet function:
void processStateSet(osg::StateSet* stateset)
{
    if (!stateset) return;
    for (unsigned int ti = 0; ti < stateset->getNumTextureAttributeLists(); ++ti)
    {
        osg::StateAttribute* sa = stateset->getTextureAttribute(ti, osg::StateAttribute::TEXTURE);
        osg::Texture* texture = dynamic_cast<osg::Texture*>(sa);
        if (texture)
        {
            LOG("texture! ");
            // TODO: something with this.
            for (unsigned int i = 0; i < texture->getNumImages(); ++i)
            {
                auto img(texture->getImage(i));
                auto texturefname(img->getFileName());
                LOG("image ! image no: " + IntegerToStr(i) + " file: " + texturefname);
            }
        }
    }
}
EDIT:
I just realized that if the model I load is ".3ds", texturefname exists, but if the model is ".flt" there is no texture name.
Is it about loading different types? But I know that they both have textures. What is the difference? I am confused.
Some 3D models don't have texture names. Your choices are to deal with it, or use model files that do. It also depends on the format. Some formats can't have texture names. Some Blender export scripts can't write texture names even though the format supports it. And so on.
3D model formats are not interchangeable - every one is different.
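If a fallback is acceptable, one workaround (a sketch, not part of the original answer; the generated file name is illustrative) is to dump the in-memory image whenever no name was stored in the model file:

    #include <osgDB/WriteFile>

    // Inside the image loop of processStateSet():
    osg::Image* img = texture->getImage(i);
    std::string texturefname = img ? img->getFileName() : std::string();
    if (img && texturefname.empty())
    {
        // No name stored in the model file (as with some .flt exporters):
        // write the pixels out under a generated name so they stay recoverable.
        texturefname = "texture_" + std::to_string(i) + ".png";
        osgDB::writeImageFile(*img, texturefname);
    }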

My code adds the same frame to a vector, but it doesn't when the frame is rotated first

I have an extremely strange situation in which the code adds the same frame to a vector, while it doesn't when there is a rotation before the addition. Let me show you:
#include <chrono>
#include <opencv2/opencv.hpp>
#include <vector>

/* Write all the images in a certain directory. All the images with the same name present
   in the directory will be overwritten. */
void CameraThread::writeAllFrames(std::vector<cv::Mat> vectorFrame) {
    std::string path;
    for (size_t i = 0; i < vectorFrame.size(); ++i) {
        path = "./Images/image" + std::to_string(i) + ".png";
        imwrite(path, vectorFrame.at(i));
    }
    capturing = 0;
}

int main(){
    std::string window_name = "Left Cam";
    cv::VideoCapture* videoCapture = new cv::VideoCapture(0);
    cv::namedWindow(window_name, CV_WINDOW_NORMAL); // create a window for the camera
    cv::Mat frame; // current camera frame
    std::vector<cv::Mat> capturedFrame; // Vector in which the frames are going to be saved
    int i = 0; // Counts how many images are saved
    bool capturing = 0;
    int amountFramesCaptured = 10;
    int periodCapture = 250; // ms
    while (1) {
        bool bSuccess = videoCapture->read(frame); // It captures the frame.
        /* The next 2 lines take around 25 ms. They turn the frame 90° to the left. */
        cv::transpose(frame, frame);
        cv::flip(frame, frame, 0);
        if (capturing == 0) {
            /* If there is no frame capture, we just display the frames in a window. */
            imshow(window_name, frame);
        } else if (capturing == 1) { // We capture the frames here.
            capturedFrame.push_back(frame);
            Sleep(periodCapture);
            ++i;
            if (i == amountFramesCaptured) {
                writeAllFrames(capturedFrame); // Write all frames in a directory.
                puts("Frames copied in the directory.");
                capturedFrame.clear(); // Clear the vector in case we recapture another time.
                i = 0;
                capturing = 0;
            }
        }
    }
    return 0;
}
Here, we capture a frame with videoCapture->read(frame). I wanted to rotate the frame, so I used the next two lines. Then I tested the capture of the images and it worked well (I know because I have a moving object in front of the camera). Lastly, after some tests, I decided not to rotate the frames, because the rotation takes too many resources (around 25 ms) and I needed to synchronize the capture with some blinking LEDs. So I took out the two lines that performed the rotation, and that's when, suddenly, the code started adding the same frame to the vector.
In conclusion, the writing to the hard drive works well when there is a rotation, and it doesn't when there isn't (because of the vector). It confuses me so much; tell me if you see something I don't.
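One detail worth checking (an observation consistent with the symptoms, not from the original post): cv::Mat is a reference-counted header, so push_back(frame) stores headers that all share the one buffer videoCapture->read() keeps overwriting. transpose and flip allocate a fresh buffer each pass, which would explain why the rotated version happened to work. A deep copy per captured frame avoids the aliasing:

    // Deep-copy so each vector entry owns its own pixel buffer; otherwise
    // every entry aliases the single buffer that read() overwrites.
    capturedFrame.push_back(frame.clone());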

How can I draw a triangle on the osgEarth with the OSG API?

I want to draw a triangle on the earth.
If I draw the triangle with the class osgEarth::Features::Feature, there is no problem.
For example:
void DrawGeometryByFeature(ListVec3d& vecList, std::vector<unsigned int>& lstIndices)
{
    osgEarth::Symbology::Style shapeStyle;
    shapeStyle.getOrCreate<osgEarth::Symbology::PolygonSymbol>()->fill()->color() = osgEarth::Symbology::Color::Green;
    _polyFeature = new osgEarth::Features::Feature(new osgEarth::Symbology::MultiGeometry, s_mapNode->getMapSRS(), shapeStyle);
    _polyNode = new osgEarth::Annotation::FeatureNode(s_mapNode, _polyFeature);
    osgEarth::Symbology::MultiGeometry* pGeometry = (MultiGeometry*)_polyNode->getFeature()->getGeometry();
    pGeometry->clear();
    _polyNode->setStyle(shapeStyle);
    int index = 0;
    for (std::vector<unsigned int>::iterator iit = lstIndices.begin();
         iit != lstIndices.end(); iit++) {
        index++;
        if ((index + 1) % 3 == 0) {
            osgEarth::Symbology::Geometry* polygen = new osgEarth::Symbology::Geometry();
            polygen->push_back(vecList[lstIndices[index - 2]]);
            polygen->push_back(vecList[lstIndices[index - 1]]);
            polygen->push_back(vecList[lstIndices[index]]);
            pGeometry->add(polygen);
        }
    }
    _polyNode->init();
    BBoxNodes.push_back(_polyNode);
    s_mapNode->addChild(_polyNode);
}
But I want to draw it more efficiently, so I tried to draw it with the OSG API.
For example:
void DrawGeometryByOsg(std::vector<osg::Vec3d> vecList, std::vector<unsigned int>& lstIndices, int color, long type)
{
    // create Geometry object to store all the vertices and lines primitive.
    osg::Geometry* polyGeom = new osg::Geometry();
    // note, first coord at top, second at bottom, reverse to that buggy OpenGL image..
    const size_t numCoords = lstIndices.size();
    osg::Vec3* myCoords = new osg::Vec3[numCoords];
    unsigned int index = 0;
    osg::Vec3Array* normals = new osg::Vec3Array(/*numCoords/3*/);
    for (std::vector<unsigned int>::iterator it = lstIndices.begin(); it != lstIndices.end(); it++) {
        myCoords[index++] = vecList[*it];
        if (index % 3 == 2) {
            osg::Vec3d kEdge1 = myCoords[index - 1] - myCoords[index - 2];
            osg::Vec3d kEdge2 = myCoords[index] - myCoords[index - 2];
            osg::Vec3d normal = kEdge1 ^ kEdge2;
            //normal.normalize();
            normals->push_back(normal);
        }
    }
    osg::Vec3Array* vertices = new osg::Vec3Array(numCoords, myCoords);
    polyGeom->setVertexArray(vertices);
    osg::Vec4Array* colors = new osg::Vec4Array;
    colors->push_back(osg::Vec4(0.0f, 0.8f, 0.0f, 1.0f));
    polyGeom->setColorArray(colors, osg::Array::BIND_OVERALL);
    polyGeom->addPrimitiveSet(new osg::DrawArrays(osg::PrimitiveSet::TRIANGLES, 0, numCoords));
    osg::Geode* geode = new osg::Geode();
    geode->addDrawable(polyGeom);
    s_mapNode->addChild(geode);
}
But the geometry I draw with the OSG API is always shaking... ( ̄﹏ ̄;)
Could you tell me where the mistake is in my code?
Any time you have "shaking" geometry you are likely running into a floating-point precision problem. OpenGL deals in 32-bit floating point coordinates. So if your geometry uses large coordinate values (as it does in a geocentric map like osgEarth), the values will get cropped when they are sent to the GPU and you get shaking/jittering when the camera moves.
To solve this problem, express your data relative to a local origin. Pick a double-precision point somewhere -- the centroid of the geometry is usually a good place -- and make that your local origin. Then translate all your double-precision coordinates so they are relative to that origin. Finally, parent the geometry with a MatrixTransform that translates the localized data to the actual double-precision location.
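For illustration, a minimal sketch of that localization applied to the question's DrawGeometryByOsg (computeCentroid is a hypothetical helper; geode, myCoords, and vecList come from the code above; requires #include <osg/MatrixTransform>):

    // Pick a double-precision local origin, e.g. the centroid of the input.
    osg::Vec3d origin = computeCentroid(vecList); // hypothetical helper

    // Localize: store small float offsets instead of huge geocentric values.
    for (size_t n = 0; n < numCoords; ++n)
        myCoords[n] = osg::Vec3(vecList[lstIndices[n]] - origin);

    // Put the geometry back in place with a double-precision transform.
    osg::MatrixTransform* xform = new osg::MatrixTransform();
    xform->setMatrix(osg::Matrixd::translate(origin));
    xform->addChild(geode);
    s_mapNode->addChild(xform); // instead of adding the geode directly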
Hope this helps!

OpenCV video stabilization

I'm trying to implement video stabilization using the OpenCV videostab module. I need to do it on a stream, so I'm trying to get the motion between two frames. After reading the documentation, I decided to do it this way:
estimator = new cv::videostab::MotionEstimatorRansacL2(cv::videostab::MM_TRANSLATION);
keypointEstimator = new cv::videostab::KeypointBasedMotionEstimator(estimator);
bool res;
auto motion = keypointEstimator->estimate(this->firstFrame, thisFrame, &res);
std::vector<float> matrix(motion.data, motion.data + (motion.rows*motion.cols));
where firstFrame and thisFrame are fully initialized frames. The problem is that the estimate method always returns a matrix in which only the last value (matrix[8]) changes from frame to frame. Am I using the videostab objects correctly, and how can I apply this matrix to a frame to get the result?
I am new to OpenCV, but here is how I solved this issue.
The problem lies in the line:
std::vector<float> matrix(motion.data, motion.data + (motion.rows*motion.cols));
For me, the motion matrix is of type 64-bit double (check yours from here), and copying it into a std::vector<float> of 32-bit floats messes up the values. (Note also that motion.data is a uchar* pointer, so that constructor walks the matrix byte by byte rather than element by element.)
To solve this issue, try replacing the above line with:
std::vector<float> matrix;
for (auto row = 0; row < motion.rows; row++) {
    for (auto col = 0; col < motion.cols; col++) {
        matrix.push_back(motion.at<float>(row, col));
    }
}
I have tested it by running the estimator on a duplicate set of points, and it gives the expected results, with most entries close to 0.0 and matrix[0], matrix[4], and matrix[8] being 1.0 (using the author's code with this setting gave the same erroneous values as the author's picture displays).
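As for applying the motion to a frame (not covered above, so treat this as a hedged sketch): the videostab motions are 3x3 homogeneous matrices, so a frame can be warped with cv::warpPerspective. Depending on which direction estimate() maps (firstFrame to thisFrame or the reverse), the WARP_INVERSE_MAP flag may or may not be needed:

    // Warp thisFrame back toward firstFrame using the estimated 3x3 motion.
    cv::Mat stabilized;
    cv::warpPerspective(thisFrame, stabilized, motion, thisFrame.size(),
                        cv::INTER_LINEAR | cv::WARP_INVERSE_MAP);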