Fast pixel access in OpenCV - C++

I use this code to convert an image to a matrix. Does anyone have an idea how I can convert this matrix to a 1D one, i.e. a vector?
I want to have the image data as a 1D array in row-major order, that is, all pixel values in the first row are listed first, followed by the pixel values in the second row, and so on.
IplImage *img = cvLoadImage("lena.jpg", CV_LOAD_IMAGE_COLOR);
CvMat *mat = cvCreateMat(img->height, img->width, CV_32FC3);
cvConvert(img, mat);
for (int i = 0; i < 10; i++)
{
    for (int j = 0; j < 10; j++) {
        CvScalar scal = cvGet2D(mat, j, i);
        printf("(%.f,%.f,%.f) ", scal.val[0], scal.val[1], scal.val[2]);
    }
    printf("\n");
}
cvNamedWindow("une_window");
cvShowImage("une_window", img);
cvWaitKey();
cvDestroyWindow("une_window");

Using the C++ API:
cv::Mat img = cv::imread("a.jpg");
std::vector<uchar> pixels;
pixels.reserve(img.rows * img.cols * 3);

if (img.isContinuous()) {
    // Single block of memory: copy everything at once
    pixels = std::vector<uchar>(img.ptr(0), img.ptr(0) + img.rows * img.cols * 3);
}
else {
    // Rows are not back-to-back in memory: copy row by row
    for (int i = 0; i != img.rows; ++i) {
        uchar* p = img.ptr(i);
        for (int j = 0; j != img.cols * 3; ++j) {
            pixels.push_back(p[j]);
        }
    }
}
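A Mat is non-continuous when it is, for example, a ROI into a larger image or has row padding; in that case the rows are not laid out back-to-back in memory, which is why the row-by-row fallback is needed.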

I believe the fastest way for continuous Mats is to use the reshape command:
Mat colVec = img.reshape(1, img.rows*img.cols); // change to a (rows*cols) x 3 matrix, one row per pixel
The reshape command just changes the header, so it does not require pixel access and therefore runs in O(1) time.
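To go all the way to a flat 1D vector, the same trick applies; a minimal sketch, assuming a continuous 8-bit BGR image such as the one imread returns:
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("lena.jpg", cv::IMREAD_COLOR);

    // reshape needs contiguous data; freshly loaded images are continuous
    CV_Assert(img.isContinuous());

    // O(1): only the header changes, to 1 row x (rows*cols*3) cols, 1 channel
    cv::Mat flat = img.reshape(1, 1);

    // O(n): the one unavoidable copy, in row-major order (B,G,R per pixel)
    std::vector<uchar> vec(flat.begin<uchar>(), flat.end<uchar>());
    return 0;
}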


In C++ this takes just a few lines:
cv::Mat img = cv::imread("a.jpg", cv::IMREAD_COLOR);
img.convertTo(img, CV_32F);       // 8-bit BGR -> 32-bit float
cv::Mat flat = img.reshape(1, 1); // one row, one channel
std::vector<float> dest;
std::copy(flat.begin<float>(), flat.end<float>(), std::back_inserter(dest));
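Here std::back_inserter grows dest as the copy proceeds; equivalently, dest.assign(flat.begin<float>(), flat.end<float>()) performs the same copy in one call.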

Related

Matrix assignment value error in OpenCV C++ with mat.at<uchar>(i,j)

I am learning image processing with OpenCV in C++. To implement a basic down-sampling algorithm I need to work at the pixel level, removing rows and columns. However, when I assign values with mat.at<>(i,j), other values get assigned instead - things like 1e-38.
Here is the code:
Mat src, dst;
src = imread("diw3.jpg", CV_32F); // src is a 479x359 grayscale image
// dst will contain src low-pass-filtered; I checked by displaying it, it works fine
Mat kernel;
kernel = Mat::ones(3, 3, CV_32F) / (float)(9);
filter2D(src, dst, -1, kernel, Point(-1, -1), 0, BORDER_DEFAULT);
// Now I try to remove half the rows/columns; the result is stored in downsampled
Mat downsampled = Mat::zeros(240, 180, CV_32F);
for (int i = 0; i < downsampled.rows; i++) {
    for (int j = 0; j < downsampled.cols; j++) {
        downsampled.at<uchar>(i,j) = dst.at<uchar>(2*i, 2*j);
    }
}
Since I read here (OpenCV outputting odd pixel values) that for cout I needed to cast, I wrote downsampled.at<uchar>(i,j) = (int)dst.at<uchar>(2*i,2*j), but that does not work either.
The second argument to cv::imread is cv::ImreadModes, so the line:
src = imread("diw3.jpg", CV_32F);
is not correct; it should probably be:
cv::Mat src_8u = imread("diw3.jpg", cv::IMREAD_GRAYSCALE);
src_8u.convertTo(src, CV_32FC1);
which will read the image as an 8-bit grayscale image and then convert it to floating-point values.
The loop should look something like this:
Mat downsampled = Mat::zeros(240, 180, CV_32FC1);
for (int i = 0; i < downsampled.rows; i++) {
    for (int j = 0; j < downsampled.cols; j++) {
        downsampled.at<float>(i,j) = dst.at<float>(2*i, 2*j);
    }
}
note that the argument to cv::Mat::zeros is CV_32FC1 (1 channel of 32-bit floating-point values), so the Mat::at<float> method should be used. Calling at<uchar> on CV_32F data reinterprets individual bytes of the float representation, which is exactly why apparently random values like 1e-38 appear.
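As an aside, OpenCV can do the filter-and-decimate in a single call; a minimal sketch using cv::pyrDown, which applies a Gaussian (rather than the 3x3 box) filter before dropping every other row and column:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src_8u = cv::imread("diw3.jpg", cv::IMREAD_GRAYSCALE);
    cv::Mat src;
    src_8u.convertTo(src, CV_32FC1);

    // Gaussian low-pass filter + 2x decimation in one step
    cv::Mat downsampled;
    cv::pyrDown(src, downsampled); // result is about half the size in each dimension
    return 0;
}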

How to pass an image buffer to an OpenCV Mat object?

I am currently programming with a PixeLINK USB3 machine vision camera along with OpenCV in C. I have had some success passing camera images in Mat format with the following code:
PXL_RETURN_CODE rc = PxLInitialize(0, &hCamera);
if (!API_SUCCESS(rc))
{
    printf("Error: Unable to initialize a camera.\n");
    return EXIT_FAILURE;
}

vector<U8> frameBuffer(3000 * 3000 * 2);
FRAME_DESC frameDesc;

if (API_SUCCESS(PxLSetStreamState(hCamera, START_STREAM)))
{
    while (true)
    {
        frameDesc.uSize = sizeof(frameDesc);
        rc = GetNextFrame(hCamera, (U32)frameBuffer.size(), &frameBuffer[0],
                          &frameDesc, 5);

        Mat image(2592, 2048, CV_8UC1);
        Mat imageCopy;

        // Where passing of image data occurs
        int k = 0;
        for (int row = 0; row < 2048; row++)
        {
            for (int col = 0; col < 2592; col++)
            {
                image.at<uchar>(row, col) = frameBuffer[k];
                k++;
            }
        } ...
As I mentioned this works, but it seems very sloppy. I have looked online but haven't found too much detail.
I have tried:
Mat image(2592, 2048, CV_8UC1, &frameBuffer, size_t step=AUTO_STEP);
as well as,
Mat image(2592, 2048, CV_8UC1, frameBuffer, size_t step=AUTO_STEP).
The former is the only one that compiles successfully, but it displays gibberish - I mean, it doesn't form an image.
Have you tried switching the rows and cols of your Mat?
You initialized your Mat with rows = 2592, cols = 2048,
but you use them the other way around in your for() loops.
I think this code should work properly:
Mat image(2048, 2592, CV_8UC1, &frameBuffer[0]);
Or, if you're using C++11,
Mat image(2048, 2592, CV_8UC1, frameBuffer.data());
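Note that this constructor only wraps the existing buffer; no pixel data is copied, so the Mat is valid only while frameBuffer is alive and not being overwritten by the next frame. A minimal sketch of taking a private copy when needed, assuming the SDK's U8 is an 8-bit unsigned type:
#include <opencv2/opencv.hpp>
#include <vector>

cv::Mat frameToMat(std::vector<unsigned char>& frameBuffer)
{
    // Wrap the raw frame: header only, no pixel copy
    cv::Mat view(2048, 2592, CV_8UC1, frameBuffer.data());

    // Deep copy so the Mat owns its pixels and survives buffer reuse
    return view.clone();
}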

OpenCV Image Mat to 1D CHW(RR...R, GG..G, BB..B) vector

Nvidia's cuDNN for deep learning has a rather interesting format for images called CHW. I have a cv::Mat img; that I want to convert to a one-dimensional vector of floats. The problem that I'm having is that the format of the 1D vector for CHW is (RR...R, GG..G,BB..B).
So I'm curious as to how I can extract the channel values for each pixel and order them for this format.
I faced the same problem and solved it this way:
#include <opencv2/opencv.hpp>

cv::Mat hwc2chw(const cv::Mat &image) {
    std::vector<cv::Mat> rgb_images;
    cv::split(image, rgb_images);

    // Stretch each one-channel image to a single row
    cv::Mat m_flat_r = rgb_images[0].reshape(1, 1);
    cv::Mat m_flat_g = rgb_images[1].reshape(1, 1);
    cv::Mat m_flat_b = rgb_images[2].reshape(1, 1);

    // Now we can rearrange the channels if needed
    cv::Mat matArray[] = { m_flat_r, m_flat_g, m_flat_b };

    // Concatenate the three rows into one
    cv::Mat flat_image;
    cv::hconcat(matArray, 3, flat_image);
    return flat_image;
}
P.S. If the input image isn't in RGB order, you can change the channel order on the matArray creation line.
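A short usage sketch (hypothetical file name), converting an 8-bit image to float first and then flattening into a std::vector:
cv::Mat img3f;
cv::imread("a.jpg").convertTo(img3f, CV_32FC3, 1.0 / 255.0);

cv::Mat flat = hwc2chw(img3f); // 1 x (3*rows*cols), CV_32FC1
std::vector<float> chw(flat.begin<float>(), flat.end<float>());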
Use cv::dnn::blobFromImage:
cv::Mat bgr_image = cv::imread(imageFileName);
cv::Mat chw_image = cv::dnn::blobFromImage
(
    bgr_image,
    1.0,          // scale factor
    cv::Size(),   // spatial size for output image
    cv::Scalar(), // mean
    true,         // swapRB: BGR to RGB
    false,        // crop
    CV_32F        // depth of output blob: CV_32F or CV_8U
);
const float* data = reinterpret_cast<const float*>(chw_image.data);
int data_length = 1 * 3 * bgr_image.rows * bgr_image.cols;
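The returned blob is a 4D Mat in NCHW order (N = 1 here), so data points at three consecutive planes of rows*cols floats each; a one-line sketch of copying it into a flat vector:
std::vector<float> chw(data, data + data_length);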
You can either iterate over the image manually and copy the values into the right place, or you can use something like cv::extractChannel to copy the channels one by one like so:
#include <opencv2/opencv.hpp>

int main()
{
    // create a dummy 3-channel float image
    cv::Mat sourceRGB(cv::Size(100, 100), CV_32FC3);
    auto size = sourceRGB.size();
    for (int y = 0; y < size.height; ++y)
    {
        for (int x = 0; x < size.width; ++x)
        {
            // pointer to the three floats of pixel (row y, column x)
            float* pxl = sourceRGB.ptr<float>(y) + 3 * x;
            *pxl = x / 100.0f;
            *(pxl + 1) = y / 100.0f;
            *(pxl + 2) = (y / 100.0f) * (x / 100.0f);
        }
    }
    cv::imshow("test", sourceRGB);
    cv::waitKey(0);

    // create a single image with all 3 channels one after the other
    cv::Size newsize(size.width, size.height * 3);
    cv::Mat destination(newsize, CV_32FC1);

    // copy each channel from the source image into its slice of the destination
    for (int i = 0; i < sourceRGB.channels(); ++i)
    {
        cv::extractChannel(
            sourceRGB,
            cv::Mat(
                size.height,
                size.width,
                CV_32FC1,
                &(destination.at<float>(size.height * size.width * i))),
            i);
    }
    cv::imshow("test", destination);
    cv::waitKey(0);
    return 0;
}
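The trick here is that the cv::Mat passed as the output already has the right size, type, and data pointer, so cv::extractChannel writes each channel straight into destination's memory instead of allocating a new buffer.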

Outputting an image back to MATLAB from a MEX file

I am trying to output an image from my MEX file back to my MATLAB script, but when I open it in MATLAB it is not correct.
The output image within the MEX file is correct.
I have tried switching the orientation of the mwSize, as well as swapping i and j in new_img.at<int>(j, i).
Mat image = imread(mxArrayToString(prhs[0]));
Mat new_img(H, W, image.type(), Scalar(0));
// some operations on new_img
imshow("gmm image", image);    // shows the original image
imshow("gmm1 image", new_img); // shows the output image
waitKey(200);                  // both images are the same size, as desired

mwSize nd = 2;
mwSize dims[] = {W, H};
plhs[0] = mxCreateNumericArray(nd, dims, mxUINT8_CLASS, mxREAL);
if (plhs == NULL) {
    mexErrMsgTxt("Could not create mxArray.\n");
}
char* outMat = (char*) mxGetData(plhs[0]);
for (int i = 0; i < H; i++)
{
    for (int j = 0; j < W; j++)
    {
        outMat[i + j*image.rows] = new_img.at<int>(j, i);
    }
}
This is in the MATLAB script:
gmmMask = GmmMex2(imgName, rect);
imshow(gmmMask); % not the same as the output image; it somewhat resembles it, but is not correct
Because you have alluded to this being a colour image, this means that you have three slices of the matrix to consider. Your code only considers one slice. First off you need to make sure that you declare the right size of the image. In MATLAB, the first dimension is always the number of rows while the second dimension is the number of columns. Now you have to add the number of channels too on top of this. I'm assuming this is an RGB image so there are three channels.
Therefore, change your dims to:
mwSize nd = 3;
mwSize dims[] = {H, W, nd};
Changing nd to 3 is important, as this will allow you to create a 3D matrix; you currently only have a 2D matrix.

Next, make sure that you are accessing the image pixels at the right location in the cv::Mat object. The way you are accessing the pixels in the nested pair of for loops assumes row-major ordering (iterating over the columns first, then the rows). As such, you need to interchange i and j, as i accesses the rows and j accesses the columns. You will also need to access the channel of the colour image, so you'll need another for loop to compensate.

For the grayscale case, you have properly compensated for the column-major memory layout of the MATLAB MEX matrix: j accesses the columns, and you need to skip over by rows to get to the next column. However, to accommodate a colour image, you must also skip over by image.rows*image.cols to get to the next layer of pixels.
Therefore your for loop should now be:
for (int k = 0; k < nd; k++) {
    for (int i = 0; i < H; i++) {
        for (int j = 0; j < W; j++) {
            outMat[k*image.rows*image.cols + i + j*image.rows] = new_img.at<cv::Vec3b>(i, j)[k];
        }
    }
}
Take note that the container of pixels is most likely 8-bit unsigned character, so you must access them as cv::Vec3b (a triple of uchar) rather than with the int template. Reading int from 8-bit pixel data may also explain why your program is crashing.
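One more caveat: OpenCV stores colour pixels in BGR order while MATLAB expects RGB, so the channel index likely needs flipping during the copy as well; a sketch of the same loop with the flip applied (same variables as above):
for (int k = 0; k < nd; k++) {
    for (int i = 0; i < H; i++) {
        for (int j = 0; j < W; j++) {
            // OpenCV channel k is B,G,R; MATLAB slice nd-1-k is R,G,B
            outMat[(nd - 1 - k)*image.rows*image.cols + i + j*image.rows] =
                new_img.at<cv::Vec3b>(i, j)[k];
        }
    }
}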

OpenCV-2.4.8.2: imshow differs from imwrite

I'm using OpenCV2.4.8.2 on Mac OS 10.9.5.
I have the following snippet of code:
static void compute_weights(const vector<Mat>& images, vector<Mat>& weights)
{
    weights.clear();
    for (int i = 0; i < images.size(); i++) {
        Mat image = images[i];
        Mat mask = Mat::zeros(image.size(), CV_32F);
        int x_start = (i == 0) ? 0 : image.cols/2;
        int y_start = 0;
        int width = image.cols/2;
        int height = image.rows;
        Mat roi = mask(Rect(x_start, y_start, width, height)); // set ROI
        roi.setTo(1);
        weights.push_back(mask);
    }
}

static void blend(const vector<Mat>& inputImages, Mat& outputImage)
{
    int maxPyrIndex = 6;
    vector<Mat> weights;
    compute_weights(inputImages, weights);

    // Find the fused pyramid:
    vector<Mat> fused_pyramid;
    for (int i = 0; i < inputImages.size(); i++) {
        Mat image = inputImages[i];

        // Build Gaussian pyramid for the weights
        vector<Mat> weight_gaussian_pyramid;
        buildPyramid(weights[i], weight_gaussian_pyramid, maxPyrIndex);

        // Build Laplacian pyramid for the original image
        Mat float_image;
        inputImages[i].convertTo(float_image, CV_32FC3, 1.0/255.0);
        vector<Mat> orig_gaussian_pyramid;
        vector<Mat> orig_laplacian_pyramid;
        buildPyramid(float_image, orig_gaussian_pyramid, maxPyrIndex);
        for (int j = 0; j < orig_gaussian_pyramid.size() - 1; j++) {
            Mat sized_up;
            pyrUp(orig_gaussian_pyramid[j+1], sized_up,
                  Size(orig_gaussian_pyramid[j].cols, orig_gaussian_pyramid[j].rows));
            orig_laplacian_pyramid.push_back(orig_gaussian_pyramid[j] - sized_up);
        }
        // The last Laplacian layer is the same as the Gaussian layer
        orig_laplacian_pyramid.push_back(orig_gaussian_pyramid[orig_gaussian_pyramid.size()-1]);

        // Multiply the Laplacian layers with the Gaussian weights
        vector<Mat> convolved;
        for (int j = 0; j < maxPyrIndex + 1; j++) {
            // Create 3 channels for the weight Gaussian pyramid as well
            vector<Mat> gaussian_3d_vec;
            for (int k = 0; k < 3; k++) {
                gaussian_3d_vec.push_back(weight_gaussian_pyramid[j]);
            }
            Mat gaussian_3d;
            merge(gaussian_3d_vec, gaussian_3d);
            Mat convolved_result = gaussian_3d.clone();
            multiply(gaussian_3d, orig_laplacian_pyramid[j], convolved_result);
            convolved.push_back(convolved_result);
        }
        if (i == 0) {
            fused_pyramid = convolved;
        } else {
            for (int j = 0; j < maxPyrIndex + 1; j++) {
                fused_pyramid[j] += convolved[j];
            }
        }
    }

    // Blending: collapse the fused pyramid
    for (int i = (int)fused_pyramid.size()-1; i > 0; i--) {
        Mat sized_up;
        pyrUp(fused_pyramid[i], sized_up, Size(fused_pyramid[i-1].cols, fused_pyramid[i-1].rows));
        fused_pyramid[i-1] += sized_up;
    }
    Mat final_color_bgr;
    fused_pyramid[0].convertTo(final_color_bgr, CV_32F, 255);
    final_color_bgr.copyTo(outputImage);
    imshow("final", outputImage);
    waitKey(0);
    imwrite(outputImagePath, outputImage);
}
This code is doing basic pyramid blending for 2 images. The key issue is that the imshow and imwrite calls in the last lines give me drastically different results. I apologize for posting such long/messy code, but I am afraid the difference is coming from some other part of the code that subsequently affects the imshow and imwrite.
The first image shows the result from imwrite and the second image shows the result from imshow, based on the code given. I'm quite confused about why this is the case.
I also noticed that when I do these:
Mat float_image;
inputImages[i].convertTo(float_image, CV_32FC3, 1.0/255.0);
imshow("float image", float_image);
imshow("orig image", image);
They show exactly the same thing, i.e. both windows display the original RGB picture (the one in image).
IMWRITE functionality
Only 8-bit (or 16-bit unsigned, CV_16U, in the case of PNG, JPEG 2000, and TIFF) single-channel or 3-channel images (with BGR channel order) can be saved by imwrite. So whatever format you feed to imwrite, it converts it into CV_8U, with the range 0 (black) to 255 (white), in BGR order.
IMSHOW - problem
Now look at your line fused_pyramid[0].convertTo(final_color_bgr, CV_32F, 255); - fused_pyramid is already of Mat type 21 (floating point, CV_32F), and you converted it to floating point again with a scale factor of 255. That scale factor of 255 is what breaks imshow: for floating-point images, imshow maps 0.0 to black and 1.0 to white, so after scaling almost every pixel saturates to white on screen, while imwrite (after its internal 8-bit conversion) saves the image correctly. To visualize, feed fused_pyramid[0] into imshow directly, without the conversion, as it is already scaled to the 0.0-1.0 range.
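A minimal sketch of the corresponding fix at the end of blend(), keeping the float image for display and an 8-bit copy for saving:
imshow("final", fused_pyramid[0]); // CV_32F is displayed as 0.0 (black) to 1.0 (white)
waitKey(0);

Mat final_8u;
fused_pyramid[0].convertTo(final_8u, CV_8U, 255); // scale to 0-255 for saving
imwrite(outputImagePath, final_8u);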
Hope it helps.