Magick++ issue when converting to OpenCV Mat - C++

Good day everyone,
I need to convert a Magick++ Image to an OpenCV Mat. The conversion itself succeeds, but the colors of the resulting Mat are wrong (please see the attached image). In my case, I convert a PDF file to Magick++ Images page by page, and to run my image processing methods I need to convert each page to an OpenCV Mat. With other PDF files the colors come out correct. I really appreciate any help with this issue, thanks.
My working code is displayed below, thanks.
Note: I use ImageMagick-7.0.6-Q16 and OpenCV 2.4.11
try {
    string fName = "";
    vector<Magick::Image> imageList;
    cout << "Please Input the File name of the PDF." << endl;
    cin >> fName;
    cout << "Please wait while converting the PDF to Images...." << endl;
    readImages(&imageList, fName);
    bool isDecoded = false;
    for (int i = 0; i < imageList.size(); i++) {
        if (!isDecoded) {
            int w = imageList[i].columns();
            int h = imageList[i].rows();
            Mat opencvImage(h, w, CV_8UC4);
            imageList[i].write(0, 0, w, h, "RGBA", Magick::CharPixel, opencvImage.data);
            string decoded = QRScanner(opencvImage);
            imshow("opencvImage", opencvImage);
            if (decoded != "") {
                cout << "Result: " << decoded << endl;
                isDecoded = true;
            }
            waitKey();
        }
    }
}
catch (Magick::Exception &error_)
{
    cout << "Caught exception: " << error_.what() << endl;
}
cout << "Convert Complete!" << endl;
system("pause");

I think the majority of the problem has been addressed in the comments above, but I believe this can be quickly fixed by adjusting the channel map to what OpenCV is expecting.
imageList[i].write(0, 0, w, h, "BGRA" , Magick::CharPixel, opencvImage.data);
^^^^
No need to worry about cvtColor or negation, since you can already control how the data stream is exported.

Related

Converting single pixel from RGB to LAB with OpenCV (C++)

I am trying to convert an RGB value to the LAB color space using OpenCV. While doing some research I found someone with a similar goal here and have tried to replicate that approach.
After some messing around I was able to get the following code to compile and run:
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    int r_a = 168, g_a = 93, b_a = 201, r_b = 60, g_b = 117, b_b = 59;
    cv::Mat3f rgb_a(cv::Vec3f(r_a, g_a, b_a));
    cv::Mat3f rgb_b(cv::Vec3f(r_b, g_b, b_b));
    cv::Mat3f lab_a;
    cv::Mat3f lab_b;
    cv::cvtColor(rgb_a, lab_a, cv::COLOR_RGB2Lab);
    cv::cvtColor(rgb_b, lab_b, cv::COLOR_RGB2Lab);
    std::cerr << ">> rgb_a = " << rgb_a << "\n";
    std::cerr << ">> rgb_b = " << rgb_b << "\n";
    std::cerr << ">> lab_a = " << lab_a << "\n";
    std::cerr << ">> lab_b = " << lab_b << "\n";
    return 0;
}
When I run this, both LAB values are calculated as [100, 0, 0].
After a bit more browsing I found someone else who had a similar issue when using OpenCV in Python; see this question.
I was able to replicate the working solution in Python, but am still unable to find a fix for C++.
Any idea on how I can fix this? Is it a matrix shape issue? I am quite unfamiliar with the exact image formats for OpenCV in c++.
Posting an answer here in case anyone in the future runs into the same issue.
As @M. Spiller pointed out, I needed to scale my vector.
I divided each value by 255.0 and then the conversion was able to execute correctly!

Thin Plate Spline shape transformation run-time error [exited with code -1073741819]

I have been trying to warp an image using OpenCV 3.1.0 and its shape transformation classes, specifically the Thin Plate Spline algorithm.
(I actually tried a block of code from Shape Transformers and Interfaces OpenCV3.0.)
But the problem is that I keep getting a runtime error, with the console saying
D:\Project\TPS_Transformation\x64\Debug\TPS_Transformation.exe (process 13776) exited with code -1073741819
I figured out the code that caused the error is
tps->estimateTransformation(source, target, matches);
which is the part that executes the transformation algorithm for the first time.
I searched for this runtime error and found it could be a DLL problem, but I have no problem running OpenCV in general. I get the error only when I run the shape transformation algorithm, specifically the estimateTransformation function.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc.hpp>
#include "opencv2/shape/shape_transformer.hpp"

using namespace std;
using namespace cv;

int main()
{
    Mat img1 = imread("D:\\Project\\library\\opencv_3.1.0\\sources\\samples\\data\\graf1.png");
    std::vector<cv::Point2f> sourcePoints, targetPoints;
    sourcePoints.push_back(cv::Point2f(0, 0));
    sourcePoints.push_back(cv::Point2f(399, 0));
    sourcePoints.push_back(cv::Point2f(0, 399));
    sourcePoints.push_back(cv::Point2f(399, 399));
    targetPoints.push_back(cv::Point2f(100, 0));
    targetPoints.push_back(cv::Point2f(399, 0));
    targetPoints.push_back(cv::Point2f(0, 399));
    targetPoints.push_back(cv::Point2f(399, 399));
    Mat source(sourcePoints, CV_32FC1);
    Mat target(targetPoints, CV_32FC1);
    Mat respic, resmat;
    std::vector<cv::DMatch> matches;
    for (unsigned int i = 0; i < sourcePoints.size(); i++)
        matches.push_back(cv::DMatch(i, i, 0));
    Ptr<ThinPlateSplineShapeTransformer> tps = createThinPlateSplineShapeTransformer(0);
    tps->estimateTransformation(source, target, matches);
    std::vector<cv::Point2f> transPoints;
    tps->applyTransformation(source, target);
    cout << "sourcePoints = " << endl << " " << sourcePoints << endl << endl;
    cout << "targetPoints = " << endl << " " << targetPoints << endl << endl;
    //cout << "transPos = " << endl << " " << transPoints << endl << endl;
    cout << img1.size() << endl;
    imshow("img1", img1); // Just to see if I have a good picture
    tps->warpImage(img1, respic);
    imshow("Transformed", respic); // Always completely grey?
    waitKey(0);
    return 0;
}
I just want to be able to run the algorithm so that I can check if it is the algorithm that I want.
Please help.
Thank you.
opencv-version 3.1.0
IDE: Visual Studio 2015
OS : Windows 10
Try adding
transpose(source, source);
transpose(target, target);
before estimateTransformation().
See https://answers.opencv.org/question/69384/shape-transformers-and-interfaces/.

Find the minimum value and its location in depth images in OpenCV C++

I am working with depth data which is in the 16UC1 format. I want to find the minimum value (greater than 0) and its location in the image. I am using the minMaxLoc function but I am getting an error, possibly because of the short values. It would be great if you could suggest a way.
int main()
{
    Mat abc = imread("depth272.tiff");
    cout << abc.size() << endl;
    imshow("depth_image", abc);
    Mat xyz = abc > 0;
    cout << "abc type: " << abc.type() << " xyz type: " << xyz.type() << endl;
    double rmin, rmax;
    Point rMinPoint, rMaxPoint;
    minMaxLoc(abc, &rmin, &rmax, &rMinPoint, &rMaxPoint, xyz);
    int row = rMinPoint.y; // Point::y is the row
    int col = rMinPoint.x; // Point::x is the column
    waitKey(0);
    return 0;
}
The image is loaded as a 3-channel 8UC3 image.
The function minMaxLoc() only works on single channel images.
As @Miki suggests, you should use imread(..., IMREAD_UNCHANGED) to load the image as CV_16UC1.

Storing output data (numbers) in XML/JSON file format to develop a graph

I am working in the computer vision domain, coding entirely in C++ with the OpenCV API. I have computed results and am printing the values to the command prompt. I want to save these results (basically integers and floating-point numbers) to an XML file and then develop a graph (bar charts or line graphs), basically a web dashboard (GUI). At the moment I use ofstream and save the data to a CSV/XML file, but it just prints the values exactly as they appear in the command prompt.
Can someone kindly help me with a technique to store the values in an XML tree structure, so that I can create a web dashboard (bar graphs) from that XML data?
I have also come across msxml6, tinyxml, and libxml++ but have not had any fruitful results.
Thanks in advance; please provide a link to the other question if this is a duplicate.
The code sample :
#include <opencv2/opencv.hpp>
#include <iostream>
#include <fstream>

int main()
{
    cv::VideoCapture capVideo;
    capVideo.open("video.mp4");
    cv::Mat imgFrame1;
    cv::Mat imgFrame2;
    double fps = capVideo.get(CV_CAP_PROP_FPS);
    std::cout << "FPS = " << fps << std::endl;
    double fc = capVideo.get(CV_CAP_PROP_FRAME_COUNT);
    std::cout << "Total Framecount = " << fc << std::endl;
    std::ofstream outfile;
    outfile.open("theBigDataSheet.xml");
    capVideo.read(imgFrame1);
    int frameCount = 1;
    while (true)
    {
        int divisor = fps * 15;
        if (frameCount % divisor == 0 || frameCount == fc - 1)
        {
            outfile << frameCount / fps << std::endl;
            outfile << frameCount << std::endl;
        }
        if ((capVideo.get(CV_CAP_PROP_POS_FRAMES) + 1) <
            capVideo.get(CV_CAP_PROP_FRAME_COUNT))
        {
            capVideo.read(imgFrame2);
            frameCount++;
        }
        else
        {
            std::cout << "end of video\n";
            break;
        }
        cv::waitKey(33);
    }
    outfile.close();
    return 0;
}
See the code: every 15 seconds it writes the frame count and the elapsed seconds, and at the end it writes the final number of frames and seconds. I need to plot this as a graph (which will be a straight line).

How to use the writeCloud() OpenCV function to construct a point cloud given 3D point coordinates?

I'm a beginner in OpenCV and currently I'm using Visual Studio 2013 (64-bit) and OpenCV 3.2 (C++) to construct a two-view geometry and try to display the matched 3D points in MeshLab. I use triangulatePoints() to get Points4D, which is a 4*N matrix that contains the homogeneous coordinates of the matched points from the two images. This is the documentation of writeCloud().
triangulatePoints(CameraMatrix_1, CameraMatrix_2, matchpoints_1, matchpoints_2, Points4D);
writeCloud("twoview.ply", cloud, noArray(), noArray(), false);
My question is, what should be the cloud input of writeCloud() so that I could save those 3D points into a .ply file and display them? Assume that I do not assign color to the point cloud first.
Also, I have tried to use MATLAB to generate a pointcloud.ply file and analyze it with readCloud(); I found that the following code successfully reads a point cloud and saves it into another file. But strangely, the cv::Mat twoviewcloud here is a 1*N matrix. How can a point cloud be constructed from a one-dimensional array? I am totally confused.
Mat twoviewcloud = readCloud("pointcloud.ply");
writeCloud("trial.ply", twoviewcloud, noArray(), noArray(), false);
I would sincerely appreciate it if someone could give me a hint!
OK, so I am still confused about how to use the original OpenCV function writeCloud(); however, I can just implement my own function to write the .ply file. Here is the code; it is quite simple, and you can read the wiki page for the detailed .ply format.
struct dataType { Point3d point; int red; int green; int blue; };
typedef dataType SpacePoint;
vector<SpacePoint> pointCloud;

ofstream outfile("pointcloud.ply");
outfile << "ply\n" << "format ascii 1.0\n" << "comment VTK generated PLY File\n";
outfile << "obj_info vtkPolyData points and polygons : vtk4.0\n" << "element vertex " << pointCloud.size() << "\n";
outfile << "property float x\n" << "property float y\n" << "property float z\n" << "element face 0\n";
outfile << "property list uchar int vertex_indices\n" << "end_header\n";
for (int i = 0; i < pointCloud.size(); i++)
{
    Point3d point = pointCloud.at(i).point;
    outfile << point.x << " ";
    outfile << point.y << " ";
    outfile << point.z << " ";
    outfile << "\n";
}
outfile.close();