I am trying to convert an RGB value to the LAB color space using OpenCV. While doing some research I found someone with a similar goal here and have tried to replicate that approach.
After some messing around I was able to get the following code to compile and run:
#include <opencv2/opencv.hpp>
#include <iostream>

int main(){
    int r_a = 168, g_a = 93, b_a = 201, r_b = 60, g_b = 117, b_b = 59;
    cv::Mat3f rgb_a(cv::Vec3f(r_a, g_a, b_a));
    cv::Mat3f rgb_b(cv::Vec3f(r_b, g_b, b_b));
    cv::Mat3f lab_a;
    cv::Mat3f lab_b;
    cv::cvtColor(rgb_a, lab_a, cv::COLOR_RGB2Lab);
    cv::cvtColor(rgb_b, lab_b, cv::COLOR_RGB2Lab);
    std::cerr << ">> rgb_a = " << rgb_a << "\n";
    std::cerr << ">> rgb_b = " << rgb_b << "\n";
    std::cerr << ">> lab_a = " << lab_a << "\n";
    std::cerr << ">> lab_b = " << lab_b << "\n";
    return 0;
}
When I run this, both LAB values are calculated as [100, 0, 0].
After a bit more browsing I found someone else who had a similar issue when using OpenCV in Python; see this question.
I was able to replicate that working solution in Python, but I am still unable to find a fix for C++.
Any idea how I can fix this? Is it a matrix shape issue? I am quite unfamiliar with the exact image formats for OpenCV in C++.
Posting an answer here in case anyone in the future runs into the same issue.
As @M. Spiller pointed out, I needed to scale my vector.
I divided each value by 255.0 and then the conversion executed correctly!
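For reference, here is a minimal sketch of the corrected version (only the first color from the question is shown; the sole change is scaling each channel into [0, 1], since for floating-point input cv::cvtColor expects RGB values in that range):

#include <opencv2/opencv.hpp>
#include <iostream>

int main(){
    int r_a = 168, g_a = 93, b_a = 201;
    // Scale each channel from [0, 255] down to [0, 1] before converting
    cv::Mat3f rgb_a(cv::Vec3f(r_a / 255.0f, g_a / 255.0f, b_a / 255.0f));
    cv::Mat3f lab_a;
    cv::cvtColor(rgb_a, lab_a, cv::COLOR_RGB2Lab);
    std::cerr << ">> lab_a = " << lab_a << "\n";  // L in [0, 100], a/b roughly [-127, 127]
    return 0;
}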
I am trying to use the PCL library (which I am new to) to get an image out of an unorganized point cloud obtained from a .las file, which is then converted to a .pcd file for the PCL library to use. For visualizing the point cloud I used the sample code found here: https://github.com/UnaNancyOwen/Tutorials/blob/master/tutorials/range_image_border_extraction/range_image_border_extraction.cpp
The point cloud can be retrieved from: https://senseflycom.s3.amazonaws.com/datasets/concrete-bridge/concrete-bridge-column_densified_point_cloud.las
For .las to .pcd I used https://github.com/murtiad/las2pcd
The camera position is not right by default (I would need to interact with the visualizer with the mouse to reach the correct positioning), but the quality is correct and I can dump a photo using the saveScreenshot method.
I would appreciate any advice. I am running Ubuntu 18.04 and PCL 1.8, and I have also gone through every example and existing post I could find on pcl-users.org.
I have already tried OpenCV but its quality is not good for unorganized point clouds as far as I can tell.
The situation I am facing is:
a. If I modify even the slightest camera parameter by calling any (or all) of these functions, the quality drops and the render looks unfocused:
pcl::visualization::PCLVisualizer viewer ("3D Viewer");
viewer.setBackgroundColor (1, 1, 1);
viewer.addCoordinateSystem (1.0f, "reference");
/* Evil functions */
viewer.initCameraParameters();
viewer.setCameraPosition(0, -30, 0, 0, 0, 0, 0, 0, 1);
viewer.setCameraFieldOfView(0.523599);
viewer.setCameraClipDistances(0.00522511, 50);
b. If I don't modify any of the parameters, the quality remains, but I need to interact with the mouse, which I intend to avoid.
c. Even after interacting with the mouse and modifying the view, the camera parameters printed in the loop remain unchanged (I based this loop on this post: PCL: Visualize a point cloud):
std::vector<pcl::visualization::Camera> cam;
viewer.getCameras(cam);
//--------------------
// -----Main loop-----
//--------------------
while (!viewer.wasStopped()) {
    // range_image_borders_widget->spinOnce ();
    viewer.spinOnce();
    pcl_sleep(0.5);
    cout << "Cam: " << endl
         << " - pos: (" << cam[0].pos[0] << ", " << cam[0].pos[1] << ", " << cam[0].pos[2] << ")" << endl
         << " - view: (" << cam[0].view[0] << ", " << cam[0].view[1] << ", " << cam[0].view[2] << ")" << endl
         << " - focal: (" << cam[0].focal[0] << ", " << cam[0].focal[1] << ", " << cam[0].focal[2] << ")" << endl
         << " - fovy: " << cam[0].fovy << " - clip: (" << cam[0].clip[0] << ", " << cam[0].clip[1] << ")" << endl;
}
The problem was positioning the camera 180° off, combined with having an unorganized point cloud, which led to height = 1 and, in effect, the same colors and shape from both angles.
Modifying the "evil functions" to the code below fixed the issue:
viewer.initCameraParameters();
viewer.setCameraPosition(0, 30, 0, 0, 0, 0, 0, 0, 1);
viewer.setCameraFieldOfView(0.523599);
viewer.setCameraClipDistances(0.00522511, 50);
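For completeness, a minimal sketch of the full no-mouse flow as I understand it (the cloud loading and addPointCloud call are elided; the camera values are the corrected ones above, and "dump.png" is just an arbitrary output name):

pcl::visualization::PCLVisualizer viewer ("3D Viewer");
viewer.setBackgroundColor (1, 1, 1);
// ... load the cloud and add it to the viewer here ...
viewer.initCameraParameters();
viewer.setCameraPosition(0, 30, 0, 0, 0, 0, 0, 0, 1);  // +30, not -30
viewer.setCameraFieldOfView(0.523599);
viewer.setCameraClipDistances(0.00522511, 50);
viewer.spinOnce();                   // render one frame without any mouse interaction
viewer.saveScreenshot("dump.png");   // dump the rendered view to disk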
Good day everyone,
I have a problem where I need to convert a Magick++ Image to an OpenCV Mat. I managed the conversion, but the colors of the resulting Mat are not correct (please see the attached image). In my case, I convert a PDF file to Magick++ Images page by page, and to use my image-processing methods I need to convert each page to an OpenCV Mat. When I use another PDF file, the colors are correct. I would really appreciate any help with this issue. Thanks.
My working code is displayed below, thanks.
Note: I use ImageMagick-7.0.6-Q16 and OpenCV 2.4.11
try {
    string fName = "";
    vector<Magick::Image> imageList;
    cout << "Please Input the File name of the PDF." << endl;
    cin >> fName;
    cout << "Please wait while converting the PDF to Images...." << endl;
    readImages(&imageList, fName);
    bool isDecoded = false;
    for (size_t i = 0; i < imageList.size(); i++){
        if (!isDecoded){
            int w = imageList[i].columns();
            int h = imageList[i].rows();
            Mat opencvImage(h, w, CV_8UC4);
            // Export the Magick++ pixels directly into the Mat's buffer
            imageList[i].write(0, 0, w, h, "RGBA", Magick::CharPixel, opencvImage.data);
            string decoded = QRScanner(opencvImage);
            imshow("opencvImage", opencvImage);
            if (decoded != ""){
                cout << "Result: " << decoded << endl;
                isDecoded = true;
            }
            waitKey();
        }
    }
}
catch (Magick::Exception &error_)
{
    cout << "Caught exception: " << error_.what() << endl;
}
cout << "Convert Complete!" << endl;
system("pause");
I think the majority of the problem has been addressed in the comments above, but I believe this can be quickly fixed by adjusting the channel map to what OpenCV expects.
imageList[i].write(0, 0, w, h, "BGRA" , Magick::CharPixel, opencvImage.data);
^^^^
No need to worry about cvtColor or negation, since you can already control how the data stream is exported.
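In the context of the question's loop, the only line that changes is the write call (a sketch using the same variables as above):

int w = imageList[i].columns();
int h = imageList[i].rows();
Mat opencvImage(h, w, CV_8UC4);
// OpenCV stores pixels as interleaved BGR(A), so export them in that order
imageList[i].write(0, 0, w, h, "BGRA", Magick::CharPixel, opencvImage.data);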
I'm a beginner in OpenCV. I'm currently using Visual Studio 2013 (64-bit) and OpenCV 3.2 (C++) to construct a two-view geometry and trying to display the matched 3D points in MeshLab. I use triangulatePoints() to get Points4D, a 4×N matrix that contains the homogeneous coordinates of the points matched between the two images. This is the documentation of writeCloud().
triangulatePoints(CameraMatrix_1, CameraMatrix_2, matchpoints_1, matchpoints_2, Points4D);
writeCloud("twoview.ply", cloud, noArray(), noArray(), false);
My question is, what should be the cloud input of writeCloud() so that I could save those 3D points into a .ply file and display them? Assume that I do not assign color to the point cloud first.
Also, I have tried using MATLAB to generate a pointcloud.ply file and analyzing it with readCloud(); the following code successfully reads a point cloud and saves it into another file. But strangely, the cv::Mat twoviewcloud here is a 1×N matrix. How can a point cloud be constructed from a one-dimensional array? I am totally confused.
Mat twoviewcloud = readCloud("pointcloud.ply");
writeCloud("trial.ply", twoviewcloud, noArray(), noArray(), false);
I would sincerely appreciate it if someone could give me a hint!
OK, so I am still confused about how to use the original OpenCV function writeCloud(); however, I could just implement my own function to write the .ply file. Here is the code. It is quite simple, actually, and you can read the wiki page for the detailed .ply format.
struct dataType { Point3d point; int red; int green; int blue; };
typedef dataType SpacePoint;
vector<SpacePoint> pointCloud;

// ... fill pointCloud with the triangulated 3D points (and colors) here ...

ofstream outfile("pointcloud.ply");
// ASCII .ply header; "element vertex" must match the number of points written below
outfile << "ply\n" << "format ascii 1.0\n" << "comment VTK generated PLY File\n";
outfile << "obj_info vtkPolyData points and polygons : vtk4.0\n" << "element vertex " << pointCloud.size() << "\n";
outfile << "property float x\n" << "property float y\n" << "property float z\n" << "element face 0\n";
outfile << "property list uchar int vertex_indices\n" << "end_header\n";
for (size_t i = 0; i < pointCloud.size(); i++)
{
    Point3d point = pointCloud.at(i).point;
    outfile << point.x << " ";
    outfile << point.y << " ";
    outfile << point.z << " ";
    outfile << "\n";
}
outfile.close();
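That said, if someone does want writeCloud() itself: my understanding (an assumption from the viz module documentation, which I have not verified on 3.2) is that it accepts a Mat of 3-channel points, which would also explain the 1×N readCloud result, since each element of that matrix is one CV_32FC3 point. A sketch:

// Requires the viz module (#include <opencv2/viz.hpp>)
// Points4D is the 4xN homogeneous output of triangulatePoints()
cv::Mat points3d;
// Transpose to Nx4, then divide out the homogeneous coordinate -> Nx1, 3-channel
cv::convertPointsFromHomogeneous(Points4D.t(), points3d);
cv::viz::writeCloud("twoview.ply", points3d, cv::noArray(), cv::noArray(), false);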
I notice that this simple ArrayFire program causes loaded TIFF images to be heavily distorted:
#include <iostream>
#include <arrayfire.h>
int main( int argc, char** argv ) {
    af::array img = af::loadImage( argv[1] );
    double mn, mx;
    unsigned idxn, idxx;
    af::min( &mn, &idxn, img );
    af::max( &mx, &idxx, img );
    std::cout << "Image size = " << img.dims()[0] << ", " << img.dims()[1] << '\n';
    std::cout << "Data type = " << img.type() << '\n';
    std::cout << "Min = " << mn << " (at " << idxn << ")\n";
    std::cout << "Max = " << mx << " (at " << idxx << ")\n";
    af::saveImage( argv[2], img );
    return 0;
}
I then compile and run on a simple (monochrome) image:
./a.out orig.tif out.tif
with the following output:
Image size = 256, 256
Data type = 0
Min = 0 (at 65535)
Max = 81.5025 (at 31356)
When I visualize these images, the result is heavily distorted, which of course is not what ArrayFire is expected to do; I would expect it to dump the exact same image back out, since I didn't make any changes to it. Unfortunately I don't know enough about the TIFF image format or the graphics backend of ArrayFire to understand what is going on. Am I doing something wrong while loading the image? (I followed the ArrayFire documentation for loadImage and saveImage.)
I also tried using loadImageNative and saveImageNative as an alternative, but the latter returns a 4-layer TIFF image while the original image is only a 1-layer TIFF.
Any help at all from ArrayFire experts would be great.
Thanks!
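Update: one workaround I am considering (a guess on my part, based on the Max = 81.5025 printout above and the assumption that saveImage expects floating-point data in the [0, 255] range) is stretching the array before saving:

// Hypothetical workaround: rescale the float data to [0, 255] before saving
af::array img = af::loadImage( argv[1] );
double mn = af::min<double>(img);
double mx = af::max<double>(img);
af::array scaled = 255.0 * (img - mn) / (mx - mn);
af::saveImage( argv[2], scaled );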
I'm trying to get the color of a pixel in a JPEG image, and all the tutorials I can find online say to use PixelPacket, which doesn't exist anymore. Searching through Google, I found that GetVirtualPixels might be the new way, but that doesn't seem to exist in the newest version either. So how do I get the color of pixels from an image with an image cache? I do not need to set any pixels, by the way.
Using Windows 10 and Visual Studio 2015.
PixelPacket structures have been replaced with a pointer to the Quantum data type. From Porting to ImageMagick Version 7, the key thing to understand is that a pixel can now have dynamic parts, or "traits" (previously called "channels").
ImageMagick version 7 supports any number of channels from 1 to 32 (and beyond) and simplifies access with a single method that returns an array of pixel channels of type Quantum.
You are now responsible for authoring code that handles a variable number of pixel channels. For example...
Magick::Image rose("rose:");
const Magick::Quantum * q = rose.getConstPixels(5, 5, 1, 1);
size_t channel_length = rose.channels();
for (size_t k = 0; k < channel_length; k++) {
    std::cout << "Channel " << k << " = ";
    std::cout << q[k] << " of " << QuantumRange << std::endl;
}
/** Outputs
-------
Channel 0 = 13107 of 65535
Channel 1 = 12079 of 65535
Channel 2 = 11308 of 65535
*/
Without seeing your code, or understanding what your final goal is, I would guess that you really just want the RGB info. If that's the case, Magick::Image::pixelColor will work,
Magick::Image rose("rose:");
Magick::Color pixelColor = rose.pixelColor(5, 5);
std::cout << "Channel 0 = ";
std::cout << pixelColor.quantumRed() << " of " << QuantumRange << std::endl;
std::cout << "Channel 1 = ";
std::cout << pixelColor.quantumGreen() << " of " << QuantumRange << std::endl;
std::cout << "Channel 2 = ";
std::cout << pixelColor.quantumBlue() << " of " << QuantumRange << std::endl;
/** Outputs
-------
Channel 0 = 13107 of 65535
Channel 1 = 12079 of 65535
Channel 2 = 11308 of 65535
*/
But! Sometimes it's just easier to export the image data into a format you're already working with. For example, I might only need the green values as a single-precision float list.
Magick::Image rose("rose:");
Magick::PixelData pixelBlob(rose, "G", Magick::FloatPixel);
float * buffer = (float *)pixelBlob.data();
// ...
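From there, assuming the export above behaves as I expect (one float per pixel in row-major order for a single-channel "G" map), reading a value back out is just pointer arithmetic; for example, the green value of the pixel at (5, 5) would be:

// Hypothetical continuation: index the single-channel float buffer directly
size_t w = rose.columns();
std::cout << "G(5,5) = " << buffer[5 * w + 5] << std::endl;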