Convert Magick::Image to cv::Mat - c++

I am trying to convert an image loaded from a GIF via Magick++ into a cv::Mat. I have already converted from cv::Mat to Magick::Image but cannot seem to find how to pull the data out of a Magick::Image in order to load it into a Mat. What's the best way to do this?
For reference, in reverse: Convert cv::Mat to Magick::Image

Updated Answer
This is the best I can get it, I think!
#include <opencv2/opencv.hpp>
#include <Magick++.h>
#include <iostream>
using namespace std;
using namespace Magick;
using namespace cv;
int main(int argc,char **argv)
{
    // Initialise ImageMagick library
    InitializeMagick(*argv);
    // Create Magick++ Image object and read image file
    Image image("image.gif");
    // Get dimensions of Magick++ Image
    int w=image.columns();
    int h=image.rows();
    // Make OpenCV Mat of same size with 8-bit and 3 channels
    Mat opencvImage(h,w,CV_8UC3);
    // Unpack Magick++ pixels into OpenCV Mat structure
    image.write(0,0,w,h,"BGR",Magick::CharPixel,opencvImage.data);
    // Save opencvImage
    imwrite("result.png",opencvImage);
}
For my own future reference, the other Magick++ StorageTypes and my assumed OpenCV equivalents in brackets are:
Magick::CharPixel (CV_8UC3)
Magick::ShortPixel (CV_16UC3)
Magick::IntegerPixel (CV_32SC3)
Magick::FloatPixel (CV_32FC3)
Magick::DoublePixel (CV_64FC3)
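So, for example, pulling the pixels out at 16 bits per sample instead would presumably look like this (a sketch along the same lines as the code above, not separately tested):
// Make a 16-bit, 3-channel OpenCV Mat and unpack the Magick++ pixels into it
Mat opencvImage16(h,w,CV_16UC3);
image.write(0,0,w,h,"BGR",Magick::ShortPixel,opencvImage16.data);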
Previous Answer
This is a work in progress - it works but may not be optimal as I am still learning myself.
#include <opencv2/opencv.hpp>
#include <Magick++.h>
#include <iostream>
#include <cstring>   // for std::memcpy
using namespace std;
using namespace Magick;
using namespace cv;
int main(int argc,char **argv)
{
    // Initialise ImageMagick library
    InitializeMagick(*argv);
    // Create Magick++ Image object and read image file
    Image image("image.gif");
    // Extract the Magick++ pixel data into a buffer, in OpenCV's "BGR" channel order
    Magick::PixelData pData(image,"BGR",Magick::CharPixel);
    // Get dimensions of the Magick++ image
    int w=image.columns();
    int h=image.rows();
    // Make OpenCV Mat of same size with 8-bit and 3 channels
    Mat opencvImage(h,w,CV_8UC3);
    // Copy Magick++ data into OpenCV Mat
    std::memcpy(opencvImage.data,pData.data(),w*h*3);
    // Save opencvImage
    imwrite("result.png",opencvImage);
}
Actually, Magick++ has the ability to write a buffer of pixels to some memory you have already allocated, which we could do if we declared the Mat sooner.
It looks like this:
image.write(const ssize_t x_,
const ssize_t y_,
const size_t columns_,
const size_t rows_,
const std::string &map_,
const StorageType type_, void *pixels_)
At the moment we are, temporarily at least, using double the memory, because we copy the pixel data out of Magick++ into a buffer and then from the buffer into the Mat. So we can skip the intermediate buffer and write straight into the Mat, which is exactly what the updated answer above now does:
// Create Magick++ Image object and read image file
Image image("image.gif");
// Get dimensions of the Magick++ image
int w=image.columns();
int h=image.rows();
// Make OpenCV Mat of same size with 8-bit and 3 channels
Mat opencvImage(h,w,CV_8UC3);
// Write the Magick++ image data straight into the Mat's pixel buffer
image.write(0,0,w,h,"BGR",Magick::CharPixel,opencvImage.data);

Complementing Mark's fantastic answer (which should be accepted).
cv::Mat has a constructor for byte arrays.
Mat(int rows,
int cols,
int type,
void* data,
size_t step=AUTO_STEP)
This requires you to allocate a byte array yourself, as opposed to having Magick::Image::write fill the cv::Mat's buffer directly.
#include <Magick++.h>
#include <opencv2/opencv.hpp>
#include <vector>
bool copyImageToMat(Magick::Image & im_image, cv::Mat & cv_image)
{
    // Get size of image.
    size_t
        w = im_image.columns(),
        h = im_image.rows();
    // Allocate enough bytes for the image data (a std::vector avoids a
    // non-standard variable-length array on the stack).
    std::vector<unsigned char> blob(w * h * 3);
    // Write image data to blob.
    im_image.write(0, 0, w, h, "BGR", Magick::CharPixel, blob.data());
    // Construct new Mat image around the blob (no copy yet).
    cv::Mat cv_temp((int)h, (int)w, CV_8UC3, blob.data());
    // Was any work done?
    bool dataWasCopied = !cv_temp.empty();
    if (dataWasCopied) {
        // Copy data to destination.
        cv_image = cv_temp.clone();
    }
    return dataWasCopied;
}
int main(int argc, const char * argv[]) {
    cv::Mat destination;
    Magick::Image source("rose:");
    if (copyImageToMat(source, destination)) {
        cv::imwrite("/tmp/rose.png", destination);
    }
    return 0;
}

Related

How to copy a rectangular area of a Mat to a new Mat of the same size?

How can I save an area of one image in a new image with the same size as the first image?
For example if I had an image like this:
I want to create another image like this:
This is what I tried:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main()
{
    Mat src = imread("1.png");
    Mat dst;
    src(Rect(85, 45, 100, 100)).copyTo(dst);
    imshow("tmask", dst);
    waitKey(0);
    return 0;
}
But the result will be like this:
which is not what I wanted.
For reasons too long to explain here, the program must not initialize the size of Mat dst.
How can I generate the second image above (dst) without initializing its size?
Create a new image and copy the subimage into the corresponding ROI:
cv::Mat img = cv::imread(...);
cv::Rect roi(x,y,w,h);
cv::Mat subimage = img(roi);               // refers to the same data (no copy)
cv::Mat subimageCopied = subimage.clone(); // deep copy
cv::Mat newImage = cv::Mat::zeros(img.size(), img.type());
img(roi).copyTo(newImage(roi));            // this line is what you want
If you have access to the original image but are not allowed to use its size information, you can use .copyTo with a mask, but then you have to use the size information to create the mask...
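A sketch of that mask-based variant (assuming, as above, that roi is already known and the source image may only be consulted to build the mask):
// Build a mask that is white inside the ROI and black elsewhere
cv::Mat mask = cv::Mat::zeros(img.size(), CV_8U);
mask(roi).setTo(255);
// Copy only the masked pixels into a same-sized blank image
cv::Mat newImage = cv::Mat::zeros(img.size(), img.type());
img.copyTo(newImage, mask);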

How to transform a 64-bit floats image into unsigned char using OpenCV

I have a single-channel 64-bit float image that I am trying to transform into unsigned char using OpenCV. I can successfully visualize the image and resize it, as it is too big. However, when I try to transform the resized image into unsigned char, nothing shows up.
I am doing the transformation using the following function as advised here.
I initially tried const uchar* inBuffer = desc.data; but according to the same source that seems to be unsafe, so I opted for a recasting method instead. That didn't work either, although to my best understanding it seemed the better choice. The code is below:
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
using namespace cv;
using namespace std;
int main(int argc, char** argv)
{
    String imageName( "/home/to/Desktop/Myexample.tif" );
    if( argc > 1)
    {
        imageName = argv[1];
    }
    Mat image;
    Mat outImage;
    Mat corrected;
    // Read the file
    image = cv::imread( imageName, IMREAD_UNCHANGED );
    // Check for invalid input
    if(image.empty())
    {
        cout << "Could not open or find the image" << std::endl ;
        return -1;
    }
    cv::resize(image, outImage, Size(800,800));
    cv::namedWindow("Resized", WINDOW_AUTOSIZE);
    cv::imshow("Resized", outImage+220);
    // Transformation of the resized image into a unsigned char for better visualization
    cv::resize(outImage, corrected, Size(800,800));
    cv::namedWindow("Corrected", WINDOW_AUTOSIZE);
    // From here nothing is showing up
    unsigned char const* inBuffer = reinterpret_cast<unsigned char const*>(outImage.data);
    cv::imshow("Corrected", *inBuffer);
    cv::waitKey(0);
    return 0;
}
Another thing I thought could be useful comes from the following source, where a conversion via double was advised. I understand it is fast in terms of computation, but it didn't give me any useful result either.
Thank you in advance for shedding light on this matter.
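For reference, the usual way to get a displayable 8-bit image from a CV_64F Mat is cv::Mat::convertTo with a scale and offset (or cv::normalize). A minimal sketch, assuming outImage above really is a single-channel CV_64F Mat:
// Map the 64-bit float range [minVal, maxVal] onto [0, 255] and convert to 8-bit
double minVal, maxVal;
cv::minMaxLoc(outImage, &minVal, &maxVal);
cv::Mat corrected8u;
outImage.convertTo(corrected8u, CV_8U,
                   255.0 / (maxVal - minVal),             // scale (alpha)
                   -minVal * 255.0 / (maxVal - minVal));  // offset (beta)
cv::imshow("Corrected", corrected8u);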

Blurring images of an image pyramid - Vector subscript out of range

I am trying to load an image, calculate the image pyramid (saving every level) and then blur every single image of the pyramid with OpenCV 3.2 in C++. When I run my program I receive the error:
vector Line:1740 Expression: vector subscript out of range
Here is my code:
#include "stdafx.h"
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/xfeatures2d.hpp>
#include <iostream>
#include <opencv2/shape/shape.hpp>
using namespace std;
using namespace cv;
using namespace cv::xfeatures2d;
int main(int argc, char** argv)
{
    // Read the image
    Mat img_1;
    img_1 = imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    // Build an image pyramid and save it in a vector of Mat
    vector<Mat> img_1_pyramid;
    int pyramid_octaves = 3;
    buildPyramid(img_1, img_1_pyramid, pyramid_octaves);
    /* void cv::buildPyramid (InputArray src, OutputArrayOfArrays dst, int
       maxlevel, int borderType = BORDER_DEFAULT) */
    // Initialize parameters for the first image pyramid
    vector<Mat> reduced_noise_1;
    blur(img_1_pyramid[0], reduced_noise_1[0], Size(3,3));
    /* void cv::blur (InputArray src, OutputArray dst, Size ksize, Point
       anchor=Point(-1,-1), int borderType=BORDER_DEFAULT)*/
    return 0;
}
I also tried it with a Mat object: Mat reduced_noise_1; or a vector of predefined size vector<Mat> reduced_noise(4); and I can draw img_1_pyramid[0] with imshow and receive the right image...
When I debug the program I receive an error in Line 621 of cvstd.hpp:
String::String(const char* s)
: cstr_(0), len_(0)
{
if (!s) return;
size_t len = strlen(s); // Here appears the error (only in German;))
memcpy(allocate(len), s, len);
}
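For what it's worth, the "vector subscript out of range" assertion comes from indexing reduced_noise_1[0] while the vector is still empty; blur does not grow the output vector for you. A minimal sketch of one fix, sizing the output vector to match the pyramid first:
// Give the output vector one Mat per pyramid level, then blur level by level
vector<Mat> reduced_noise_1(img_1_pyramid.size());
for (size_t i = 0; i < img_1_pyramid.size(); ++i)
    blur(img_1_pyramid[i], reduced_noise_1[i], Size(3,3));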

watershed segmentation opencv xcode

I am now working through code from the OpenCV cookbook (OpenCV 2 Computer Vision Application Programming Cookbook): Chapter 5, Segmenting images using watersheds, page 131.
Here is my main code:
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
using namespace std;
class WatershedSegmenter {
private:
    cv::Mat markers;
public:
    void setMarkers(const cv::Mat& markerImage){
        markerImage.convertTo(markers, CV_32S);
    }
    cv::Mat process(const cv::Mat &image){
        cv::watershed(image,markers);
        return markers;
    }
};
int main ()
{
    cv::Mat image = cv::imread("/Users/yaozhongsong/Pictures/IMG_1648.JPG");
    // Eliminate noise and smaller objects
    cv::Mat fg;
    cv::erode(binary,fg,cv::Mat(),cv::Point(-1,-1),6);
    // Identify image pixels without objects
    cv::Mat bg;
    cv::dilate(binary,bg,cv::Mat(),cv::Point(-1,-1),6);
    cv::threshold(bg,bg,1,128,cv::THRESH_BINARY_INV);
    // Create markers image
    cv::Mat markers(binary.size(),CV_8U,cv::Scalar(0));
    markers= fg+bg;
    // Create watershed segmentation object
    WatershedSegmenter segmenter;
    // Set markers and process
    segmenter.setMarkers(markers);
    segmenter.process(image);
    imshow("a",image);
    std::cout<<".";
    cv::waitKey(0);
}
However, it doesn't work. How could I initialize a binary image? And how could I make this segmentation code work?
I am not very clear about this part of the book.
Thanks in advance!
There are a couple of things that should be mentioned about your code:
Watershed expects the input and the output image to have the same size;
You probably want to get rid of the const parameters in the methods;
Notice that the result of watershed is actually markers and not image as your code suggests, so you need to grab the return value of process()!
This is your code, with the fixes above:
// Usage: ./app input.jpg
#include "opencv2/opencv.hpp"
#include <string>
using namespace cv;
using namespace std;
class WatershedSegmenter{
private:
cv::Mat markers;
public:
void setMarkers(cv::Mat& markerImage)
{
markerImage.convertTo(markers, CV_32S);
}
cv::Mat process(cv::Mat &image)
{
cv::watershed(image, markers);
markers.convertTo(markers,CV_8U);
return markers;
}
};
int main(int argc, char* argv[])
{
cv::Mat image = cv::imread(argv[1]);
cv::Mat binary;// = cv::imread(argv[2], 0);
cv::cvtColor(image, binary, CV_BGR2GRAY);
cv::threshold(binary, binary, 100, 255, THRESH_BINARY);
imshow("originalimage", image);
imshow("originalbinary", binary);
// Eliminate noise and smaller objects
cv::Mat fg;
cv::erode(binary,fg,cv::Mat(),cv::Point(-1,-1),2);
imshow("fg", fg);
// Identify image pixels without objects
cv::Mat bg;
cv::dilate(binary,bg,cv::Mat(),cv::Point(-1,-1),3);
cv::threshold(bg,bg,1, 128,cv::THRESH_BINARY_INV);
imshow("bg", bg);
// Create markers image
cv::Mat markers(binary.size(),CV_8U,cv::Scalar(0));
markers= fg+bg;
imshow("markers", markers);
// Create watershed segmentation object
WatershedSegmenter segmenter;
segmenter.setMarkers(markers);
cv::Mat result = segmenter.process(image);
result.convertTo(result,CV_8U);
imshow("final_result", result);
cv::waitKey(0);
return 0;
}
I took the liberty of using Abid's input image for testing and this is what I got:
Below is a simplified version of your code, and it works fine for me. Check it out:
#include "opencv2/objdetect/objdetect.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
int main ()
{
    Mat image = imread("sofwatershed.jpg");
    Mat binary = imread("sofwsthresh.png",0);
    // Eliminate noise and smaller objects
    Mat fg;
    erode(binary,fg,Mat(),Point(-1,-1),2);
    // Identify image pixels without objects
    Mat bg;
    dilate(binary,bg,Mat(),Point(-1,-1),3);
    threshold(bg,bg,1,128,THRESH_BINARY_INV);
    // Create markers image
    Mat markers(binary.size(),CV_8U,Scalar(0));
    markers= fg+bg;
    markers.convertTo(markers, CV_32S);
    watershed(image,markers);
    markers.convertTo(markers,CV_8U);
    imshow("a",markers);
    waitKey(0);
}
Below is my input image :
Below is my output image :
See the code explanation here : Simple watershed Sample in OpenCV
I had the same problem as you, following the exact same code sample of the cookbook (great book btw).
Just to set the context: I was coding under Visual Studio 2013 and OpenCV 2.4.8. After a lot of searching and no solutions, I decided to change the IDE.
It's still Visual Studio BUT it's 2010!!!! And boom, it works!
Be careful of how you configure Visual Studio with OpenCV. There's a great tutorial for the installation here
Good day to all

WebP encoding - Segmentation Fault

So I'm trying to use the WebP API to encode images. Right now I'm going to be using OpenCV to open and manipulate the images, then I want to save them off as WebP. Here's the source I'm using:
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <cv.h>
#include <highgui.h>
#include <webp/encode.h>
int main(int argc, char *argv[])
{
    IplImage* img = 0;
    int height,width,step,channels;
    uchar *data;
    int i,j,k;
    if (argc<2) {
        printf("Usage:main <image-file-name>\n\7");
        exit(0);
    }
    // load an image
    img=cvLoadImage(argv[1]);
    if(!img){
        printf("could not load image file: %s\n",argv[1]);
        exit(0);
    }
    // get the image data
    height   = img->height;
    width    = img->width;
    step     = img->widthStep;
    channels = img->nChannels;
    data     = (uchar *)img->imageData;
    printf("processing a %dx%d image with %d channels \n", width, height, channels);
    // create a window
    cvNamedWindow("mainWin", CV_WINDOW_AUTOSIZE);
    cvMoveWindow("mainWin",100,100);
    // invert the image
    for (i=0;i<height;i++) {
        for (j=0;j<width;j++) {
            for (k=0;k<channels;k++) {
                data[i*step+j*channels+k] = 255-data[i*step+j*channels+k];
            }
        }
    }
    // show the image
    cvShowImage("mainWin", img);
    // wait for a key
    cvWaitKey(0);
    // release the image
    cvReleaseImage(&img);
    float qualityFactor = .9;
    uint8_t** output;
    FILE *opFile;
    size_t datasize;
    printf("encoding image\n");
    datasize = WebPEncodeRGB((uint8_t*)data,width,height,step,qualityFactor,output);
    printf("writing file out\n");
    opFile=fopen("output.webp","w");
    fwrite(output,1,(int)datasize,opFile);
}
When I execute this, I get this:
nato#ubuntu:~/webp/webp_test$ ./helloWorld ~/Pictures/mars_sunrise.jpg
processing a 2486x1914 image with 3 channels
encoding image
Segmentation fault
It displays the image just fine, but segfaults on the encoding. My initial guess was that it's because I'm releasing the img before I try to write out the data, but it doesn't seem to matter whether I release it before or after I try the encoding. Is there something else I'm missing that might cause this problem? Do I have to make a copy of the image data or something?
The WebP api docs are... sparse. Here's what the README says about WebPEncodeRGB:
The main encoding functions are available in the header src/webp/encode.h
The ready-to-use ones are:
size_t WebPEncodeRGB(const uint8_t* rgb, int width, int height,
int stride, float quality_factor, uint8_t** output);
The docs specifically do not say what the 'stride' is, but I'm assuming that it's the same as the 'step' from opencv. Is that reasonable?
Thanks in advance!
First, don't release the image if you use it later. Second, your output argument points to an uninitialized address. This is how to pass a properly initialized output address:
uint8_t* output;
datasize = WebPEncodeRGB((uint8_t*)data, width, height, step, qualityFactor, &output);
You release the image with cvReleaseImage before you try to use the pointer to the image data for the encoding. That release function most likely frees the image buffer, so your data pointer no longer points to valid memory.
This might be the reason for your segfault.
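Putting the two points above together, the tail end of main might look roughly like this (a sketch reusing the question's variables; note that the quality factor is on a 0..100 scale in libwebp, and that OpenCV stores pixels in BGR order, so WebPEncodeBGR may be what you actually want):
// Encode first, while the IplImage pixel data is still valid
float qualityFactor = 90.0f;   // libwebp expects 0..100, not 0..1
uint8_t* output = NULL;        // WebPEncodeRGB allocates this buffer for us
size_t datasize = WebPEncodeRGB((uint8_t*)data, width, height, step,
                                qualityFactor, &output);
if (datasize > 0) {
    FILE *opFile = fopen("output.webp", "wb");
    fwrite(output, 1, datasize, opFile);
    fclose(opFile);
}
free(output);                  // newer libwebp versions provide WebPFree() for this
// Only release the image once the pixel data has been encoded
cvReleaseImage(&img);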
So it looks like the problem was here:
// load an image
img=cvLoadImage(argv[1]);
The function cvLoadImage takes an extra parameter
cvLoadImage(const char* filename, int iscolor=CV_LOAD_IMAGE_COLOR)
and when I changed to
img=cvLoadImage(argv[1],1);
the segfault went away.