I am trying to use the PCL library (which I am new to) to get an image out of an unorganized point cloud obtained from a .las file, which is then converted to a .pcd file for PCL to use. For visualizing the point cloud I used the sample code found here: https://github.com/UnaNancyOwen/Tutorials/blob/master/tutorials/range_image_border_extraction/range_image_border_extraction.cpp
The point cloud can be retrieved from: https://senseflycom.s3.amazonaws.com/datasets/concrete-bridge/concrete-bridge-column_densified_point_cloud.las
For the .las to .pcd conversion I used https://github.com/murtiad/las2pcd
The camera position is not right by default (I would need to interact with the visualizer with the mouse to reach the correct position), but the quality is correct and I am able to dump a photo using the saveScreenshot method.
I would appreciate any advice. I am running Ubuntu 18.04 and PCL 1.8, and I have gone through every example and existing post I could find on pcl-users.org.
I have already tried OpenCV, but as far as I can tell its quality is not good for unorganized point clouds.
The situation I am facing is:
a. If I modify even slightly any of the camera parameters by calling any (or all) of these functions, the quality drops and the render looks unfocused:
pcl::visualization::PCLVisualizer viewer ("3D Viewer");
viewer.setBackgroundColor (1, 1, 1);
viewer.addCoordinateSystem (1.0f, "reference");
/* Evil functions */
viewer.initCameraParameters();
viewer.setCameraPosition(0, -30, 0, 0, 0, 0, 0, 0, 1);
viewer.setCameraFieldOfView(0.523599);
viewer.setCameraClipDistances(0.00522511, 50);
b. If I don't modify any of the parameters, the quality remains, but I need to interact with the mouse, which I intend to avoid.
c. Even after interacting with the mouse and modifying the view, the camera parameters printed below remain unchanged (for the loop I used this post: PCL: Visualize a point cloud):
std::vector<pcl::visualization::Camera> cam; // declaration added; getCameras fills this vector
viewer.getCameras(cam);
//--------------------
// -----Main loop-----
//--------------------
while (!viewer.wasStopped()) {
    // range_image_borders_widget->spinOnce ();
    viewer.spinOnce();
    pcl_sleep(0.5);
    cout << "Cam: " << endl
         << " - pos:   (" << cam[0].pos[0] << ", " << cam[0].pos[1] << ", " << cam[0].pos[2] << ")" << endl
         << " - view:  (" << cam[0].view[0] << ", " << cam[0].view[1] << ", " << cam[0].view[2] << ")" << endl
         << " - focal: (" << cam[0].focal[0] << ", " << cam[0].focal[1] << ", " << cam[0].focal[2] << ")" << endl
         << " - fovy: " << cam[0].fovy << " - clip: (" << cam[0].clip[0] << ", " << cam[0].clip[1] << ")" << endl;
}
The problem was that the camera was positioned 180° around (on the wrong side of the cloud), and since the point cloud is unorganized (hence height = 1), the view showed effectively the same colors and shape from both angles.
Effectively, modifying the "evil functions" to the code below fixed the issue:
viewer.initCameraParameters();
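// note: Y = +30 here instead of -30, i.e. the camera moves to the opposite side of the cloud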
viewer.setCameraPosition(0, 30, 0, 0, 0, 0, 0, 0, 1);
viewer.setCameraFieldOfView(0.523599);
viewer.setCameraClipDistances(0.00522511, 50);
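For completeness, here is a minimal end-to-end sketch of the working sequence. It assumes cloud is an already-loaded pcl::PointCloud<pcl::PointXYZ>::Ptr and an arbitrary output filename; the argument comments follow setCameraPosition's position / focal point / up-vector signature:
#include <pcl/visualization/pcl_visualizer.h>

pcl::visualization::PCLVisualizer viewer("3D Viewer");
viewer.setBackgroundColor(1, 1, 1);
viewer.addPointCloud<pcl::PointXYZ>(cloud, "cloud");
viewer.initCameraParameters();
viewer.setCameraPosition(0, 30, 0,    // camera position
                         0, 0, 0,     // focal point
                         0, 0, 1);    // up vector
viewer.setCameraFieldOfView(0.523599);          // ~30 degrees, in radians
viewer.setCameraClipDistances(0.00522511, 50);  // near / far clip planes
viewer.spinOnce();                              // render one frame without blocking
viewer.saveScreenshot("screenshot.png");        // dump the rendered view to disk
With this sequence there is no need to touch the mouse at all before saving the screenshot.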
I am trying to convert an RGB value to the LAB color space using OpenCV. While doing some research I found someone with a similar goal here and have tried to replicate that approach.
After some messing around I was able to get the following code to compile and run:
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    int r_a = 168, g_a = 93, b_a = 201, r_b = 60, g_b = 117, b_b = 59;
    cv::Mat3f rgb_a(cv::Vec3f(r_a, g_a, b_a));
    cv::Mat3f rgb_b(cv::Vec3f(r_b, g_b, b_b));
    cv::Mat3f lab_a;
    cv::Mat3f lab_b;
    cv::cvtColor(rgb_a, lab_a, cv::COLOR_RGB2Lab);
    cv::cvtColor(rgb_b, lab_b, cv::COLOR_RGB2Lab);
    std::cerr << ">> rgb_a = " << rgb_a << "\n";
    std::cerr << ">> rgb_b = " << rgb_b << "\n";
    std::cerr << ">> lab_a = " << lab_a << "\n";
    std::cerr << ">> lab_b = " << lab_b << "\n";
    return 0;
}
When I run this, both LAB values are calculated as [100, 0, 0].
After a bit more browsing I found someone else had a similar issue when using OpenCV in python, see this question.
I was able to replicate that working solution in Python, but am still unable to find a fix for C++.
Any idea how I can fix this? Is it a matrix shape issue? I am quite unfamiliar with the exact image formats for OpenCV in C++.
Posting an answer here in case anyone in the future runs into the same issue.
As @M. Spiller pointed out, I needed to scale my vector.
I divided each value by 255.0 and then the conversion was able to execute correctly!
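In code, the fix is just the scaling; a minimal sketch reusing the variables from the question (for float images OpenCV expects RGB values in the range [0, 1]):
// scale 0-255 integer RGB into the 0-1 range before converting
cv::Mat3f rgb_a(cv::Vec3f(r_a / 255.0f, g_a / 255.0f, b_a / 255.0f));
cv::Mat3f lab_a;
cv::cvtColor(rgb_a, lab_a, cv::COLOR_RGB2Lab);
// lab_a now holds the expected L*a*b* values instead of [100, 0, 0]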
I'm trying to run the example in the apriltags library, and I keep getting this error:
OpenCV Error: Assertion failed (mtype == type0 || (CV_MAT_CN(mtype) == 1 && ((1 << type0) & fixedDepthMask) != 0)) in create, file /Users/Vijin/PersInq/opencv-3.2.0/modules/core/src/matrix.cpp, line 2559
I narrowed it down to this function call:
detection.getRelativeTranslationRotation(m_tagSize, m_fx, m_fy, m_px, m_py,
                                         translation, rotation);
I'm not an expert in OpenCV, so I'd appreciate some help with this. The exception is thrown the moment a marker is detected; otherwise, it runs fine. Here's the whole function:
void print_detection(AprilTags::TagDetection& detection) const {
  cout << " Id: " << detection.id
       << " (Hamming: " << detection.hammingDistance << ")";

  // recovering the relative pose of a tag:
  // NOTE: for this to be accurate, it is necessary to use the
  // actual camera parameters here as well as the actual tag size
  // (m_fx, m_fy, m_px, m_py, m_tagSize)
  Eigen::Vector3d translation;
  Eigen::Matrix3d rotation;
  try {
    detection.getRelativeTranslationRotation(m_tagSize, m_fx, m_fy, m_px, m_py,
                                             translation, rotation);
  }
  catch (const std::exception& e) {
    cout << "print_detection failing";
  }

  Eigen::Matrix3d F;
  F <<
    1,  0, 0,
    0, -1, 0,
    0,  0, 1;
  Eigen::Matrix3d fixed_rot = F * rotation;
  double yaw, pitch, roll;
  wRo_to_euler(fixed_rot, yaw, pitch, roll);

  cout << " distance=" << translation.norm()
       << "m, x=" << translation(0)
       << ", y=" << translation(1)
       << ", z=" << translation(2)
       << ", yaw=" << yaw
       << ", pitch=" << pitch
       << ", roll=" << roll
       << endl;

  // Also note that for SLAM/multi-view application it is better to
  // use reprojection error of corner points, because the noise in
  // this relative pose is very non-Gaussian; see iSAM source code
  // for suitable factors.
}
This issue persists with newer versions of OpenCV. It can be fixed by changing line 95 of src/TagDetection.cc from cv::Matx33f cameraMatrix( to cv::Matx33d cameraMatrix(.
Note that this simply converts the matrix from float to double. Alternatively, you can use this fork (https://github.com/PrieureDeSion/apriltags-cpp), which I have made changes to and tested with Ubuntu 16 and OpenCV.
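In code, the change looks like this (a sketch: I am assuming the usual pinhole intrinsics built from m_fx, m_fy, m_px, m_py, so the surrounding lines in TagDetection.cc may differ slightly):
// before (single precision; trips the create() assertion in newer OpenCV):
//   cv::Matx33f cameraMatrix(...);
// after (double precision, matching the double-typed output arrays):
cv::Matx33d cameraMatrix(m_fx, 0,    m_px,
                         0,    m_fy, m_py,
                         0,    0,    1);
As far as I can tell, the assertion fires because the float matrix conflicts with the type OpenCV wants for the output arrays, so promoting it to double makes the types agree.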
I'm trying to get the color of a pixel in a JPG image, and all the tutorials I can find online say to use PixelPacket, which doesn't exist anymore. Searching through Google, I found that GetVirtualPixels might be the new way, but that doesn't seem to exist in the newest version either. So how do I get the color of pixels from an image with an image cache? I do not need to set any pixels, by the way.
Using Windows 10 and Visual Studio 2015.
PixelPacket structures have been replaced with a pointer to the Quantum data type. As described in Porting to ImageMagick Version 7, the key thing to understand is that a pixel can now have dynamic parts (or "traits", previously called "channels").
ImageMagick version 7 supports any number of channels from 1 to 32 (and beyond), and simplifies access with a single method that returns an array of pixel channels of type Quantum.
You are now responsible for writing code that handles a dynamic number of pixel channels. For example...
Magick::Image rose("rose:");
const Magick::Quantum * q = rose.getConstPixels(5, 5, 1, 1);
size_t channel_length = rose.channels();
for (size_t k = 0; k < channel_length; k++) {
    std::cout << "Channel " << k << " = ";
    std::cout << q[k] << " of " << QuantumRange << std::endl;
}
/** Outputs
-------
Channel 0 = 13107 of 65535
Channel 1 = 12079 of 65535
Channel 2 = 11308 of 65535
*/
Without seeing your code, or knowing what your final goal is, I would guess that you really just want the RGB info. If that's the case, Magick::Image::pixelColor will work:
Magick::Image rose("rose:");
Magick::Color pixelColor = rose.pixelColor(5, 5);
std::cout << "Channel 0 = ";
std::cout << pixelColor.quantumRed() << " of " << QuantumRange << std::endl;
std::cout << "Channel 1 = ";
std::cout << pixelColor.quantumGreen() << " of " << QuantumRange << std::endl;
std::cout << "Channel 2 = ";
std::cout << pixelColor.quantumBlue() << " of " << QuantumRange << std::endl;
/** Outputs
-------
Channel 0 = 13107 of 65535
Channel 1 = 12079 of 65535
Channel 2 = 11308 of 65535
*/
But! Sometimes it's just easier to export the image data into a format you're already working with. For example, I might only need the green values as a list of single-precision floats.
Magick::Image rose("rose:");
Magick::PixelData pixelBlob(rose, "G", Magick::FloatPixel);
float * buffer = (float *)pixelBlob.data();
// ...
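Reading the exported buffer back would then look something like this (a sketch; if I remember the StorageType semantics correctly, FloatPixel values come back normalized to [0.0, 1.0]):
size_t count = rose.columns() * rose.rows(); // one float per pixel for the single "G" channel
for (size_t i = 0; i < count; i++) {
    std::cout << buffer[i] << std::endl;     // green value of pixel i
}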
I have a large, complicated graphics view that takes a lot of processing to draw, so I only want to redraw part of it (an OpenGL video display) each frame, leaving the rest (various child widgets etc.) unchanged.
However, when I call update() with a QRegion, the paintEvent's region is the full size of the widget, so the whole screen gets redrawn every time. There doesn't appear to be any call to update() with no parameters (at least, not to my override of it).
What might be causing the paint event to have the full screen in it?
void GLGraphicsView::updateVideoRegion(const QRegion & r)
{
    //this function called from outside...
    LOG4CXX_DEBUG(_logger, "updateVideoRegion " << r.boundingRect().x() << " " << r.boundingRect().y() << " " << r.boundingRect().width() << " " << r.boundingRect().height());
    update(r);
}

void GLGraphicsView::update()
{
    LOG4CXX_DEBUG(_logger, "update(all)");
    QGraphicsView::update();
}

void GLGraphicsView::update(const QRegion& region)
{
    LOG4CXX_DEBUG(_logger, "update(region) " << region.boundingRect().x() << " " << region.boundingRect().y() << " " << region.boundingRect().width() << " " << region.boundingRect().height());
    QGraphicsView::update(region);
}

void GLGraphicsView::paintEvent(QPaintEvent *e)
{
    LOG4CXX_DEBUG(_logger, "repaint region " << e->region().boundingRect().x() << " " << e->region().boundingRect().y() << " " << e->region().boundingRect().width() << " " << e->region().boundingRect().height());
    /* Do the rest of the painting */
}
Output:
updateVideoRegion 19 19 1446 568
update(region) 19 19 1446 568
repaint region 0 0 1920 1201
Edit:
I've also tried using updateScene(QList<QRectF>) instead of update(QRegion), with the viewportUpdateMode set to SmartViewportUpdate, but the paintEvent's region is still the full size of the screen. The CacheMode is set to CacheNone; setting it to CacheBackground causes the OpenGL content not to be displayed.
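For reference, this is roughly how the view is configured (a sketch using standard QGraphicsView calls; the construction details are illustrative, GLGraphicsView being my QGraphicsView subclass from above):
GLGraphicsView * view = new GLGraphicsView;
view->setViewport(new QGLWidget);                                 // OpenGL viewport
view->setViewportUpdateMode(QGraphicsView::SmartViewportUpdate);  // also tried this
view->setCacheMode(QGraphicsView::CacheNone);                     // CacheBackground hides the GL content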
After further investigation, it appears that this is because I am using OpenGL: the fullUpdate flag in updateScene is always set in this case. QGraphicsView has "accelerateScrolling = !isGlWidget" [in setupViewport, line 2759] and "fullUpdate = !d->accelerateScrolling" [in updateScene, line 2675].
Sorry for the basic question; I am just starting to use QImage for reading pixel data from an image file.
To understand the member functions, I tried to load an image file and output the functions' return values:
QString fileName = "pic1.bmp";
QImage myImage;
myImage.load( fileName );
std::cout << "width = " << myImage.width() << std::endl;
std::cout << "height = " << myImage.height() << std::endl;
std::cout << "dotspermeterX = " << myImage.dotsPerMeterX() << std::endl;
std::cout << "dotspermeterY = " << myImage.dotsPerMeterY() << std::endl;
QRectF myRect = myImage.rect();
std::cout << "rect = " << myRect.bottomLeft().x() << "," << myRect.bottomLeft().y()
<< " " << myRect.topRight().x() << "," << myRect.topRight().y() << std::endl;
The output I got was:
width = 858
height = 608
dotspermeterX = 4724
dotspermeterY = 4724
rect = 0,608 858,0
My questions are:
1. What is the difference between dots and pixels?
2. Does QImage work only with int pixels? Can't I read sub-pixel data for better precision?
To clarify my question: below is a zoomed bitmap image of a diagonal line, and I want to read all the small pixels/dots in this line. Is this possible?
As for the "dots per meter": you have probably heard of "dots per inch" (DPI); it's the same idea. If, for example, you have a 20 inch monitor with a horizontal resolution of X pixels, you get some value Y of "dots per inch" (or pixels per inch). If you then switch to a 40 inch monitor with the same horizontal resolution X, you have half the DPI, as the screen is now twice as wide. So DPI (or PPI) can be seen as a measure of the size of the pixels. In your case, 4724 dots per meter divided by 39.37 inches per meter comes out to about 120 DPI.
And no, I seriously doubt that QImage has any support for sub-pixel data.
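For reading the pixels themselves, integer coordinates are as fine-grained as QImage goes. A minimal sketch using the standard QImage API:
QImage myImage("pic1.bmp");
for (int y = 0; y < myImage.height(); ++y) {
    for (int x = 0; x < myImage.width(); ++x) {
        QRgb px = myImage.pixel(x, y);                   // one whole pixel
        int r = qRed(px), g = qGreen(px), b = qBlue(px); // 0-255 components
        // ... use r, g, b; (x, y) are whole-pixel indices, no sub-pixel access ...
    }
}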