Question: How to load an image in OpenCV using a pointer.
Input: pointer to image data
Required output: cv::Mat image
Explanation: you can do this (below) if the picture is in a directory:
String imageName("C:/Images/1.jpg");
Mat image;
image = imread(samples::findFile(imageName), IMREAD_COLOR);
I'm trying to achieve the same, but starting from a pointer.
Thank you in advance for your attention to my question :)
There are a few cv::Mat constructors that create/initialize matrix headers for already existing user data, e.g. this one:
cv::Mat::Mat(int rows, int cols, int type, void* data, size_t step = AUTO_STEP)
Please see the documentation for the complete parameter description. Nevertheless, regarding the data parameter, you must pay attention to the following:
Pointer to the user data. Matrix constructors that take data and step parameters do not allocate matrix data. Instead, they just initialize the matrix header that points to the specified data, which means that no data is copied. This operation is very efficient and can be used to process external data using OpenCV functions. The external data is not automatically deallocated, so you should take care of it.
A very small example is given by this code snippet:
// Set up byte array: 3 rows, 3 columns, BGR values.
uint8_t data[3][3][3] = {
{
{ 100, 110, 120 }, // row 0, col 0, BGR
{ 130, 140, 150 }, // row 0, col 1, BGR
{ 160, 170, 180 } // row 0, col 2, BGR
},
{
{ 190, 200, 210 }, // row 1, col 0, BGR
{ 220, 230, 240 }, // row 1, col 1, BGR
{ 100, 120, 140 } // row 1, col 2, BGR
},
{
{ 160, 180, 200 }, // row 2, col 0, BGR
{ 100, 130, 160 }, // row 2, col 1, BGR
{ 190, 220, 250 } // row 2, col 2, BGR
},
};
// Create cv::Mat header pointing to the image data.
cv::Mat image(3, 3, CV_8UC3, data);
Inspecting image at runtime (here: Visual Studio, Image Watch) shows the expected result.
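Since this constructor does not copy, the Mat is only valid while the external buffer is. If the Mat has to outlive the buffer, take a deep copy, e.g. (a minimal sketch):
cv::Mat owned = image.clone(); // allocates and manages its own copy of the data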
I need to use OpenCV in order to read an image, convert it into a vector of Vec3f, work with the pixels and then convert it back to Mat in order to visualize it.
I'm using C++17.
Here's the code so far:
Mat* in = new Mat;
*in = imread(filepath);

int rows = in->rows;
int cols = in->cols;

// MAT -> VECTOR
vector<Vec3f>* src = new vector<Vec3f>(rows * cols);
if (in->isContinuous()) {
    src->assign(in->datastart, in->dataend);
}
else {
    for (int i = 0; i < rows; ++i) {
        src->insert(src->end(), in->ptr<Vec3f>(i), in->ptr<Vec3f>(i) + cols);
    }
}

// ---USE THE VECTOR TO TRANSFORM EVERY PIXEL TO GRAY---

// SHOW
imshow("out", cv::Mat(rows, cols, CV_8U, src, cv::Mat::AUTO_STEP));
The result is a corrupted image, like TV static noise, even if I skip the pixel processing phase.
Thanks for any help.
Let's use a small random image for demonstration:
// Generate random input image
cv::Mat image(5, 5, CV_8UC3);
cv::randu(image, 0, 256);
Option 1
Since the input is CV_8UC3 (i.e. each element is a cv::Vec3b) and we want the elements as cv::Vec3f, we first need to use convertTo to convert the Mat to CV_32FC3. We store the result in a temporary matrix, and for convenience (since we know the element type) we can explicitly use cv::Mat3f.
// First convert to 32bit floats
cv::Mat3f temp;
image.convertTo(temp, CV_32FC3);
Now we can just use Mat iterators to initialize the vector.
// Use Mat iterators to construct the vector.
std::vector<cv::Vec3f> v1(temp.begin(), temp.end());
Option 2
The previous option ends up allocating a temporary array. With a little creativity, we can avoid this.
As it turns out, it is possible to create a cv::Mat header wrapping a vector, sharing the underlying data storage.
We begin by creating an adequately sized vector:
std::vector<cv::Vec3f> v2(image.total());
The Mat created from such a vector will have 1 column and as many rows as there are elements. Therefore, we'll reshape our input matrix to an identical shape, and then use convertTo to write directly to the vector.
image.reshape(3, static_cast<int>(image.total())).convertTo(v2, CV_32FC3);
Whole program:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <vector>

template<typename T>
void dump(std::string const& label, T const& data)
{
    std::cout << label << ":\n";
    for (auto const& v : data) {
        std::cout << v << " ";
    }
    std::cout << "\n";
}
int main()
{
    // Generate random input image
    cv::Mat image(5, 5, CV_8UC3);
    cv::randu(image, 0, 256);

    // Option 1
    // ========

    // First convert to 32bit floats
    cv::Mat3f temp;
    image.convertTo(temp, CV_32FC3);

    // Use Mat iterators to construct the vector.
    std::vector<cv::Vec3f> v1(temp.begin(), temp.end());

    // Option 2
    // ========

    std::vector<cv::Vec3f> v2(image.total());
    image.reshape(3, static_cast<int>(image.total())).convertTo(v2, CV_32FC3);

    // Output
    // ======

    dump("Input", cv::Mat3b(image));
    dump("Vector 1", v1);
    dump("Vector 2", v2);

    return 0;
}
Sample output:
Input:
[246, 156, 192] [7, 165, 166] [2, 179, 231] [212, 171, 230] [93, 138, 123] [80, 105, 242] [231, 239, 174] [174, 176, 191] [134, 54, 234] [69, 25, 147] [24, 67, 124] [158, 203, 206] [89, 144, 210] [51, 31, 132] [123, 250, 234] [246, 204, 74] [111, 208, 249] [149, 234, 37] [55, 147, 143] [29, 214, 169] [215, 84, 190] [204, 110, 239] [216, 103, 137] [248, 173, 53] [221, 251, 29]
Vector 1:
[246, 156, 192] [7, 165, 166] [2, 179, 231] [212, 171, 230] [93, 138, 123] [80, 105, 242] [231, 239, 174] [174, 176, 191] [134, 54, 234] [69, 25, 147] [24, 67, 124] [158, 203, 206] [89, 144, 210] [51, 31, 132] [123, 250, 234] [246, 204, 74] [111, 208, 249] [149, 234, 37] [55, 147, 143] [29, 214, 169] [215, 84, 190] [204, 110, 239] [216, 103, 137] [248, 173, 53] [221, 251, 29]
Vector 2:
[246, 156, 192] [7, 165, 166] [2, 179, 231] [212, 171, 230] [93, 138, 123] [80, 105, 242] [231, 239, 174] [174, 176, 191] [134, 54, 234] [69, 25, 147] [24, 67, 124] [158, 203, 206] [89, 144, 210] [51, 31, 132] [123, 250, 234] [246, 204, 74] [111, 208, 249] [149, 234, 37] [55, 147, 143] [29, 214, 169] [215, 84, 190] [204, 110, 239] [216, 103, 137] [248, 173, 53] [221, 251, 29]
Issues with your Code
In src->assign(in->datastart, in->dataend);
Elements of src are Vec3f, however datastart and dataend are pointers to uchar.
This has several consequences. First of all, since in is CV_8UC3, there will be 3x as many elements. Also, each of the Vec3f instances will only have the first entry set; the other two will be 0, since the single-argument cv::Vec constructor fills only the first component.
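A quick check makes that single-entry behaviour visible (a tiny illustration, not part of the original code):
std::cout << cv::Vec3f(5.0f) << std::endl; // prints [5, 0, 0]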
In src->insert(src->end(), in->ptr<Vec3f>(i), in->ptr<Vec3f>(i)+cols);
Recall that you have already initialized src as vector<Vec3f>(rows * cols); -- i.e. the vector already has as many elements as there are pixels in the source image. However, in the loop you keep adding further elements at the end. This means that the resulting vector will have twice as many elements, with the first half of them being zeros.
Furthermore, in is CV_8UC3, but you interpret the data as cv::Vec3f. This means you take the byte values of 4 consecutive pixels and interpret them as a sequence of 3 32-bit floating point numbers (12 bytes). The result can't be anything but garbage.
It also means that you end up accessing data outside the valid area, potentially past the end of the buffer.
In cv::Mat(rows, cols, CV_8U, src, cv::Mat::AUTO_STEP)...
First of all, src holds Vec3f elements, but you're creating the Mat as CV_8U (which is also an issue, since you need to provide the channel count here as well, so it's actually interpreted as CV_8UC1). So not only would you have the wrong number of channels, they would contain garbage due to the type mismatch.
An even bigger issue is that you pass src as the 4th parameter. This is a pointer to the std::vector instance, not to the actual data it holds. (It compiles, since the 4th parameter is void*.) That means you're actually interpreting the metadata of the vector, along with a lot of other unknown data. The result is garbage at best (or, as you found out, SEGFAULTs, or potentially nasty security bugs).
Back to Mat
Note that it is possible to imshow a floating point Mat, assuming the values are normalized in the range [0,1].
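For instance, a minimal sketch that wraps the vector, restores the 2D shape, and scales the 0-255 float values into [0,1] just for display:
cv::Mat preview;
cv::Mat(v2).reshape(3, image.rows).convertTo(preview, CV_32FC3, 1.0 / 255.0); // scale into [0,1]
cv::imshow("float preview", preview);
cv::waitKey(0);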
We can take advantage of the Mat constructor that takes a vector, and just reshape the resulting matrix back to the original shape.
cv::Mat result(cv::Mat(v2).reshape(3, image.rows));
Note that in this case, the underlying data storage is shared with the source vector, hence you need to ensure it remains in scope as long as the Mat does. If you do not wish to share the data, simply pass true as a second parameter to the constructor.
cv::Mat result(cv::Mat(v2, true).reshape(3, image.rows));
Of course, if you want to go back to CV_8UC3, that's as simple as adding a convertTo. In this case there's no need to copy the vector data, since the data type changes and a new storage array will be allocated automatically.
cv::Mat result;
cv::Mat(v2).reshape(3, image.rows).convertTo(result, CV_8UC3);
Here's a version with .assign and .insert, similar to your given code. It also includes a unit test, the way back from vector to Mat, and a way to test with non-continuous Mats.
I don't know which version is faster, this one or the one from Dan Masek. Feel free to try.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main()
{
    cv::Mat in = cv::imread("C:/StackOverflow/Input/Lenna.png"); // this is a CV_8UC3 image, i.e. cv::Vec3b format
    cv::Mat inFloat;
    in.convertTo(inFloat, CV_32F);

    // choose this line if you want to test non-continuous:
    //inFloat = inFloat(cv::Rect(0, 0, 100, 100));

    int rows = inFloat.rows;
    int cols = inFloat.cols;

    std::vector<cv::Vec3f> src;
    if (inFloat.isContinuous())
    {
        std::cout << "continuous image data" << std::endl;
        src.assign((cv::Vec3f*)inFloat.datastart, (cv::Vec3f*)inFloat.dataend);
    }
    else
    {
        std::cout << "non-continuous image data" << std::endl;
        for (int i = 0; i < inFloat.rows; ++i)
        {
            src.insert(src.end(), inFloat.ptr<cv::Vec3f>(i), inFloat.ptr<cv::Vec3f>(i) + inFloat.cols);
        }
    }

    // UNIT TEST:
    bool testSuccess = true;
    //const float epsilon = 0.01;
    for (int j = 0; j < rows; ++j)
    {
        for (int i = 0; i < cols; ++i)
        {
            cv::Vec3f & pixelInFloat = inFloat.at<cv::Vec3f>(j, i);
            cv::Vec3f & pixelSrc = src.at(j*cols + i);
            if (pixelInFloat != pixelSrc)
            {
                std::cout << "different values in: [" << i << "," << j << "]: " << pixelInFloat << " vs. " << pixelSrc << std::endl;
                testSuccess = false;
            }
        }
    }

    if (testSuccess)
    {
        std::cout << "conversion from imread to vector<cv::Vec3f> successful." << std::endl;
    }
    else
    {
        std::cout << "Conversion failed." << std::endl;
    }

    // now test converting the vector back to a cv::Mat:
    cv::Mat outFloat = cv::Mat(rows, cols, CV_32FC3, src.data());
    // if you want to free the vector memory later, choose this deep-copy version instead:
    // cv::Mat outFloat = cv::Mat(rows, cols, CV_32FC3, src.data()).clone();

    cv::Mat out;
    outFloat.convertTo(out, CV_8U);

    cv::imshow("out", out);
    cv::imshow("in", in);
    cv::waitKey(0);
    //std::cin.get();
    return 0;
}
I want to implement the SVM algorithm using OpenCV in iOS, but I cannot call some methods in Objective-C. How do I call OpenCV's CvSVMParams in Objective-C? When I tried this, it showed the error 'Unknown type name CvSVMParams'.
Edit: I understand my mistake; I was using an old version of OpenCV, and I've now fixed that. But now I get this error:
OpenCV Error: Assertion failed (samples.cols == var_count && samples.type() == CV_32F) in predict, file /Volumes/Linux/builds/precommit_ios/opencv/modules/ml/src/svm.cpp, line 1919
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Volumes/Linux/builds/precommit_ios/opencv/modules/ml/src/svm.cpp:1919: error: (-215) samples.cols == var_count && samples.type() == CV_32F in function predict
#import "CustomObject.h"
#import <opencv2/opencv.hpp>
#import <CoreGraphics/CoreGraphics.h>
#import <UIKit/UIKit.h>
using namespace cv;
#implementation CustomObject
- (void) supportVectorMachine {
float labels[10] = { 0.0, 0.0, 1.0, 1.0, 0.0, 1.0, 1.0, 1.0, 1.0, 0.0 };
cv::Mat labelsMat(10, 1, CV_32FC1, labels);
float trainingData[10][2] = { { 100, 10 }, { 150, 10 }, { 600, 200 }, { 600, 10 }, { 10, 100 }, { 455, 10 }, { 345, 255 }, { 10, 501 }, { 401, 255 }, { 30, 150 } };
cv::Mat trainDataMat(10, 2, CV_32FC1, trainingData);
//opencv 3.0
Ptr<ml::SVM> svm = ml::SVM::create();
// edit: the params struct got removed,
// we use setter/getter now:
svm->setType(ml::SVM::C_SVC);
svm->setKernel(ml::SVM::LINEAR);
svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
Ptr<TrainData> td = TrainData::create(trainDataMat, ROW_SAMPLE, labelsMat);
//Create test features
float testData[2] = { 150, 15 };
cv::Mat testDataMat(2, 1, CV_32FC1, testData);
//Predict the class labele for test data sample
float predictLable = svm->predict(testDataMat);
NSLog(#"%f", predictLable);
}
end
Problem solved. I forgot to call the train method on the training data and to change the labels from float to integers.
- (void) supportVectorMachine {
    int labels[3] = { 1, 1, 1 }; // one integer label per training row
    cv::Mat labelsMat(3, 1, CV_32S, labels);

    float trainingData[3][3] = { { 84, 191, 19 }, { 24, 186, 17 }, { 22, 157, 21 } };
    //float trainingData[10][1] = { 100, 150, 600, 600, 100, 455, 345, 501, 401, 150};
    cv::Mat trainDataMat(3, 3, CV_32FC1, trainingData);

    // opencv 3.0
    Ptr<ml::SVM> svm = ml::SVM::create();
    // edit: the params struct got removed,
    // we use setter/getter now:
    svm->setType(ml::SVM::C_SVC);
    svm->setKernel(ml::SVM::LINEAR);
    svm->setTermCriteria(TermCriteria(TermCriteria::MAX_ITER, 100, 1e-6));
    svm->setGamma(3.0);

    Ptr<ml::TrainData> td = ml::TrainData::create(trainDataMat, ml::ROW_SAMPLE, labelsMat);
    svm->train(td);

    // Create a test sample: it must be one row of CV_32F with the same number
    // of columns (features) as the training data; the values here are illustrative.
    float testData[3] = { 500, 100, 20 };
    cv::Mat testDataMat(1, 3, CV_32FC1, testData);

    // Predict the class label for the test data sample
    float predictLabel = svm->predict(testDataMat);
    NSLog(@"%f", predictLabel);
}
In OpenCV, I can multiply an RGB 1920 x 1080 Mat by a 3 x 3 Mat to change the color composition of my source Mat. Once my source Mat is properly reshaped, I can use the '*' operator to perform the multiplication. This operator is not available when using a cv::gpu::GpuMat.
My question is: how would I format my input source Mat to use cv::gpu::gemm? Can I even use cv::gpu::gemm?
This is the only call that performs matrix multiplication in the OpenCV GPU module, from what I can tell. cv::gpu::gemm wants to see a CV_32FC1 or CV_64FC1 type Mat. The type I normally use with the CPU is CV_32FC3.
//sourceMat is CV_32FC3 1920 x 1080 Mat
Mat sourceMat = matFromBuffer(data->bufferA, data->widthA, data->heightA);
//This is the color Matrix
float matrix[3][3] = {{1.057311, -0.204043, 0.055648},
{ 0.041556, 1.875992, -0.969256},
{-0.498535,-1.537150, 3.240479}};
Mat colorMatrixMat = Mat(3, 3, CV_32FC1, matrix).t();
//Color Correct the Mat
Mat linearSourceMat = sourceMat.reshape(1, 1080*1920);
Mat multipliedMatrix = linearSourceMat * colorMatrixMat;
Mat recoloredMat = multipliedMatrix.reshape(3, 1080);
Update:
As a test, I created the following routine:
static int gpuTest()
{
    float matrix[9] = { 1.057311, -0.204043, 0.055648, 0.041556, 1.875992, -0.969256, -0.498535, -1.537150, 3.240479 };
    Mat matrixMat = Mat(1, 9, CV_32FC1, matrix).t();

    cv::gpu::GpuMat gpuMatrixMat;
    gpuMatrixMat.upload(matrixMat);

    float matrixDest[9] = { 1, 1, 1, 1, 1, 1, 1, 1, 1 };
    Mat matrixDestMat = Mat(1, 9, CV_32FC1, matrixDest).t();

    cv::gpu::GpuMat destMatrixMat;
    destMatrixMat.upload(matrixDestMat);

    cv::gpu::GpuMat nextMat;
    cv::gpu::gemm(gpuMatrixMat, destMatrixMat, 1, cv::gpu::GpuMat(), 0, nextMat);

    return 0;
}
and the error I receive is:
OpenCV Error: Assertion failed (src1Size.width == src2Size.height) in gemm, file /Users/myuser/opencv-2.4.12/modules/gpu/src/arithm.cpp, line 109
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Users/myuser/opencv-2.4.12/modules/gpu/src/arithm.cpp:109: error: (-215) src1Size.width == src2Size.height in function gemm
Now how can the src1Size.width be equal to src2Size.height? The width and height are different.
Here's a minimum working example using OpenCV 3.1. (The assertion is the standard matrix multiplication rule: for A * B, the column count of A must equal the row count of B. Both matrices in your test routine are 9 x 1 after the transpose, so 1 != 9.)
#include <opencv2/opencv.hpp>
#include <opencv2/cudaarithm.hpp>
#include <iostream>

int main()
{
    cv::Mat sourceMat = cv::Mat::ones(1080, 1920, CV_32FC3);

    // This is the color matrix
    float matrix[3][3] = {
        { 1.057311, -0.204043, 0.055648 }
        , { 0.041556, 1.875992, -0.969256 }
        , { -0.498535, -1.537150, 3.240479 }
    };
    cv::Mat colorMatrixMat = cv::Mat(3, 3, CV_32FC1, matrix).t();

    cv::Mat linearSourceMat = sourceMat.reshape(1, 1080 * 1920);
    cv::Mat multipliedMatrix = linearSourceMat * colorMatrixMat;

    try {
        cv::Mat dummy, gpuMultipliedMatrix;
        // Regular gemm
        cv::gemm(linearSourceMat, colorMatrixMat, 1.0, dummy, 0.0, gpuMultipliedMatrix);
        // CUDA gemm
        // cv::cuda::gemm(linearSourceMat, colorMatrixMat, 1.0, dummy, 0.0, gpuMultipliedMatrix);
        std::cout << (cv::countNonZero(multipliedMatrix != gpuMultipliedMatrix) == 0);
    } catch (cv::Exception& e) {
        std::cerr << e.what();
        return -1;
    }
}
Note that when the beta parameter to gemm(...) is zero, the third input matrix is ignored (based on the code).
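As a side note, instead of declaring a dummy Mat, it should also be possible to pass cv::noArray() for the unused delta when beta is zero (a sketch, untested here):
cv::gemm(linearSourceMat, colorMatrixMat, 1.0, cv::noArray(), 0.0, gpuMultipliedMatrix);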
Unfortunately I don't have a build of OpenCV compiled with CUBLAS available to try it, but it should work.
Following is somewhat speculative...
To make this work with OpenCV 2.4, you will need to add a little bit more. Before calling gemm(...), you need to create GpuMat objects and upload the data.
cv::gpu::GpuMat gpuLinSrc, gpuColorMat, dummy, gpuResult;
gpuLinSrc.upload(linearSourceMat);
gpuColorMat.upload(colorMatrixMat);
Then...
cv::gpu::gemm(gpuLinSrc, gpuColorMat, 1.0, cv::gpu::GpuMat(), 0.0, gpuResult);
and finally download the data back from the GPU.
cv::Mat resultFromGPU;
gpuResult.download(resultFromGPU);
Update
Here's a more detailed example to show you what's happening:
#include <opencv2/opencv.hpp>
#include <iostream>
#include <numeric>
#include <string>
#include <vector>

// ============================================================================
// Make a 3 channel test image with 5 rows and 4 columns
cv::Mat make_image()
{
    std::vector<float> v(5 * 4);
    std::iota(std::begin(v), std::end(v), 1.0f); // Fill with 1..20

    cv::Mat seq(5, 4, CV_32FC1, v.data()); // 5 rows, 4 columns, 1 channel

    // Create 3 channels, each with a different offset, so we can tell them apart
    cv::Mat chans[3] = {
        seq, seq + 100, seq + 200
    };

    cv::Mat merged;
    cv::merge(chans, 3, merged); // 5 rows, 4 columns, 3 channels
    return merged;
}

// Make a transposed color correction matrix.
cv::Mat make_color_mat()
{
    float color_in[3][3] = {
        { 0.1f, 0.2f, 0.3f } // Coefficients for channel 0
        , { 0.4f, 0.5f, 0.6f } // Coefficients for channel 1
        , { 0.7f, 0.8f, 0.9f } // Coefficients for channel 2
    };
    return cv::Mat(3, 3, CV_32FC1, color_in).t();
}

void print_mat(cv::Mat m, std::string const& label)
{
    std::cout << label << ":\n size=" << m.size()
        << "\n channels=" << m.channels()
        << "\n" << m << "\n" << std::endl;
}

// Perform matrix multiplication to obtain result point (r,c)
float mm_at(cv::Mat a, cv::Mat b, int r, int c)
{
    return a.at<float>(r, 0) * b.at<float>(0, c)
        + a.at<float>(r, 1) * b.at<float>(1, c)
        + a.at<float>(r, 2) * b.at<float>(2, c);
}

// Perform matrix multiplication to obtain result row r
cv::Vec3f mm_test(cv::Mat a, cv::Mat b, int r)
{
    return cv::Vec3f(
        mm_at(a, b, r, 0)
        , mm_at(a, b, r, 1)
        , mm_at(a, b, r, 2)
    );
}

// ============================================================================

int main()
{
    try {
        // Step 1
        cv::Mat source_image(make_image());
        print_mat(source_image, "source_image");
        std::cout << "source pixel at (0,0): " << source_image.at<cv::Vec3f>(0, 0) << "\n\n";

        // Step 2
        cv::Mat color_mat(make_color_mat());
        print_mat(color_mat, "color_mat");

        // Step 3
        // Reshape the source matrix to obtain a matrix:
        // * with only one channel (CV_32FC1)
        // * where each row corresponds to a single pixel from source
        // * where each column corresponds to a single channel from source
        cv::Mat reshaped_image(source_image.reshape(1, source_image.rows * source_image.cols));
        print_mat(reshaped_image, "reshaped_image");

        // Step 4
        cv::Mat corrected_image;
        // corrected_image = 1.0 * reshaped_image * color_mat
        cv::gemm(reshaped_image, color_mat, 1.0, cv::Mat(), 0.0, corrected_image);
        print_mat(corrected_image, "corrected_image");

        // Step 5
        // Reshape back to the original format
        cv::Mat result_image(corrected_image.reshape(3, source_image.rows));
        print_mat(result_image, "result_image");
        std::cout << "result pixel at (0,0): " << result_image.at<cv::Vec3f>(0, 0) << "\n\n";

        // Step 6
        // Calculate one pixel manually...
        std::cout << "check pixel (0,0): " << mm_test(reshaped_image, color_mat, 0) << "\n\n";
    } catch (cv::Exception& e) {
        std::cerr << e.what();
        return -1;
    }
}

// ============================================================================
// ============================================================================
Step 1
First we create a small test input image:
The image contains 3 channels of float values, i.e. the data type is CV_32FC3. Let's treat the channels as red, green, blue in that order.
The image contains 5 rows of pixels.
The image contains 4 columns of pixels.
Values in each channel are sequential, green = red + 100 and blue = red + 200.
source_image:
size=[4 x 5]
channels=3
[1, 101, 201, 2, 102, 202, 3, 103, 203, 4, 104, 204;
5, 105, 205, 6, 106, 206, 7, 107, 207, 8, 108, 208;
9, 109, 209, 10, 110, 210, 11, 111, 211, 12, 112, 212;
13, 113, 213, 14, 114, 214, 15, 115, 215, 16, 116, 216;
17, 117, 217, 18, 118, 218, 19, 119, 219, 20, 120, 220]
We can print out a single pixel, to make the structure clearer:
source pixel at (0,0): [1, 101, 201]
Step 2
Create a sample colour correction matrix (transposed) such that:
First column contains coefficients used to determine the red value
Second column contains coefficients used to determine the green value
Third column contains coefficients used to determine the blue value
color_mat:
size=[3 x 3]
channels=1
[0.1, 0.40000001, 0.69999999;
0.2, 0.5, 0.80000001;
0.30000001, 0.60000002, 0.89999998]
Sidenote: Color Correction Algorithm
We want to transform source pixel S to pixel T using coefficients C
S = [ sr, sg, sb ]
T = [ tr, tg, tb ]
C = [ cr1, cr2, cr3;
      cg1, cg2, cg3;
      cb1, cb2, cb3 ]
Such that
tr = cr1 * sr + cr2 * sg + cr3 * sb
tg = cg1 * sr + cg2 * sg + cg3 * sb
tb = cb1 * sr + cb2 * sg + cb3 * sb
Which can be represented by the following matrix expression
T = S * C_transpose
Step 3
In order to be able to use the above algorithm, we first need to reshape our image into a matrix that:
Contains a single channel, so that value at each point is just a float
Has one pixel per row.
Has 3 columns representing red, green, blue
In this shape, matrix multiplication will mean that each pixel/row from input gets multiplied by the coefficient matrix to determine one pixel/row in the output.
The reshaped matrix looks as follows:
reshaped_image:
size=[3 x 20]
channels=1
[1, 101, 201;
2, 102, 202;
3, 103, 203;
4, 104, 204;
5, 105, 205;
6, 106, 206;
7, 107, 207;
8, 108, 208;
9, 109, 209;
10, 110, 210;
11, 111, 211;
12, 112, 212;
13, 113, 213;
14, 114, 214;
15, 115, 215;
16, 116, 216;
17, 117, 217;
18, 118, 218;
19, 119, 219;
20, 120, 220]
Step 4
We perform the multiplication, for example using gemm, to get the following matrix:
corrected_image:
size=[3 x 20]
channels=1
[80.600006, 171.5, 262.39999;
81.200005, 173, 264.79999;
81.800003, 174.5, 267.20001;
82.400002, 176, 269.60001;
83, 177.5, 272;
83.600006, 179, 274.39999;
84.200005, 180.5, 276.79999;
84.800003, 182, 279.20001;
85.400002, 183.5, 281.60001;
86, 185, 284;
86.600006, 186.5, 286.39999;
87.200005, 188, 288.79999;
87.800003, 189.5, 291.20001;
88.400009, 191, 293.60001;
89, 192.5, 296;
89.600006, 194, 298.39999;
90.200005, 195.50002, 300.79999;
90.800003, 197, 303.20001;
91.400009, 198.5, 305.60001;
92, 200, 308]
Step 5
Now we can reshape the image back to the original shape. The result is:
result_image:
size=[4 x 5]
channels=3
[80.600006, 171.5, 262.39999, 81.200005, 173, 264.79999, 81.800003, 174.5, 267.20001, 82.400002, 176, 269.60001;
83, 177.5, 272, 83.600006, 179, 274.39999, 84.200005, 180.5, 276.79999, 84.800003, 182, 279.20001;
85.400002, 183.5, 281.60001, 86, 185, 284, 86.600006, 186.5, 286.39999, 87.200005, 188, 288.79999;
87.800003, 189.5, 291.20001, 88.400009, 191, 293.60001, 89, 192.5, 296, 89.600006, 194, 298.39999;
90.200005, 195.50002, 300.79999, 90.800003, 197, 303.20001, 91.400009, 198.5, 305.60001, 92, 200, 308]
Let's have a look at one pixel from the result:
result pixel at (0,0): [80.6, 171.5, 262.4]
Step 6
Now we can double check our result by performing the appropriate calculations manually (functions mm_test and mm_at).
check pixel (0,0): [80.6, 171.5, 262.4]
I'm trying to perform basic JPEG compression (DCT + quantization + IDCT) using OpenCV, without the entropy-encoding/Huffman-coding stage. The problem is that after I decompress the compressed image, it is not even close in appearance to the original one.
I'm following these tutorials:
Basic JPEG Compressing/Decompressing Simulation
Basic JPEG Compression using OpenCV
[Images: original, compressed, and decompressed versions]
I'm using the following quantization matrices for luminance and chrominance:
double dataLuminance[8][8] = {
{16, 11, 10, 16, 24, 40, 51, 61},
{12, 12, 14, 19, 26, 58, 60, 55},
{14, 13, 16, 24, 40, 57, 69, 56},
{14, 17, 22, 29, 51, 87, 80, 62},
{18, 22, 37, 56, 68, 109, 103, 77},
{24, 35, 55, 64, 81, 104, 113, 92},
{49, 64, 78, 87, 103, 121, 120, 101},
{72, 92, 95, 98, 112, 100, 103, 99}
};
double dataChrominance[8][8] = {
{17, 18, 24, 27, 99, 99, 99, 99},
{18, 21, 26, 66, 99, 99, 99, 99},
{24, 26, 56, 99, 99, 99, 99, 99},
{47, 66, 99, 99, 99, 99, 99, 99},
{99, 99, 99, 99, 99, 99, 99, 99},
{99, 99, 99, 99, 99, 99, 99, 99},
{99, 99, 99, 99, 99, 99, 99, 99},
{99, 99, 99, 99, 99, 99, 99, 99}
};
EDIT 1: @Micka pointed out the problem of using imread/imwrite, so I edited my code to use the compressed image directly from my program.
The compression method is:
void ImageCompression::compression(){
    // Getting original image size
    int height = imgOriginal.size().height;
    int width = imgOriginal.size().width;

    // Converting image color
    Mat imgColorConverted;
    cvtColor(imgOriginal, imgColorConverted, CV_BGR2YCrCb);

    // Transforming the 2D arrays into image matrices
    Mat luminance = Mat(8, 8, CV_64FC1, &dataLuminance);
    Mat chrominance = Mat(8, 8, CV_64FC1, &dataChrominance);
    cout << "Luminance: " << luminance << endl << endl;
    cout << "Chrominance: " << chrominance << endl << endl;

    // Splitting the image into 3 planes
    vector<Mat> planes;
    split(imgColorConverted, planes);

    // Downsampling chrominance to 1/4 of the original area
    resize(planes[1], planes[1], Size(width/2, height/2));
    resize(planes[2], planes[2], Size(width/2, height/2));
    // Resizing back to the original image size
    resize(planes[1], planes[1], Size(width, height));
    resize(planes[2], planes[2], Size(width, height));

    // Dividing the image into 8x8 blocks
    for ( int i = 0; i < height; i += 8 ){
        for ( int j = 0; j < width; j += 8 ){
            // For each plane
            for ( int plane = 0; plane < imgColorConverted.channels(); plane++ ){
                // Creating a block
                Mat block = planes[plane](Rect(j, i, 8, 8));
                // Converting the block to float
                block.convertTo( block, CV_64FC1 );
                // Subtracting 128 from the block
                subtract( block, 128.0, block );
                // DCT
                dct( block, block );
                // Applying quantization
                if ( plane == 0 ){
                    divide( block, luminance, block );
                }
                else {
                    divide( block, chrominance, block );
                }
                // Converting it back to unsigned int
                block.convertTo( block, CV_8UC1 );
                // Copying the block back into the plane
                block.copyTo( planes[plane](Rect(j, i, 8, 8)) );
            }
        }
    }

    merge( planes, finalImage );
}
And my decompression method:
void ImageCompression::decompression(){
    // Getting the size of the image
    int height = finalImage.size().height;
    int width = finalImage.size().width;

    // Transforming the 2D arrays into image matrices
    Mat luminance = Mat(8, 8, CV_64FC1, &dataLuminance);
    Mat chrominance = Mat(8, 8, CV_64FC1, &dataChrominance);

    // Splitting the image into 3 planes
    vector<Mat> planes;
    split(finalImage, planes);

    // Dividing the image into 8x8 blocks
    for ( int i = 0; i < height; i += 8 ){
        for ( int j = 0; j < width; j += 8 ){
            // For each plane
            for ( int plane = 0; plane < finalImage.channels(); plane++ ){
                // Creating a block
                Mat block = planes[plane](Rect(j, i, 8, 8));
                // Converting the block to float
                block.convertTo( block, CV_64FC1 );
                // Applying dequantization
                if ( plane == 0 ){
                    multiply( block, luminance, block );
                }
                else {
                    multiply( block, chrominance, block );
                }
                // IDCT
                idct( block, block );
                // Adding 128 to the block
                add( block, 128.0, block );
                // Converting it back to unsigned int
                block.convertTo( block, CV_8UC1 );
                // Copying the block back into the plane
                block.copyTo( planes[plane](Rect(j, i, 8, 8)) );
            }
        }
    }

    merge(planes, finalImage);
    cvtColor( finalImage, finalImage, CV_YCrCb2BGR );

    imshow("Decompressed image", finalImage);
    waitKey(0);
    imwrite(".../finalResult.jpg", finalImage);
}
Does anyone have any idea why I'm getting that resulting image?
Thank you.
You need to add 128 back to the block before converting it back to unsigned int in compression, and then subtract it again in decompression. Otherwise the quantized DCT coefficients, which can be negative, are clipped to 0 by the CV_8UC1 conversion.
In compression(), before the conversion to unsigned int:
add(block, 128.0, block);
// Converting it back to unsigned int
block.convertTo(block, CV_8UC1);
In decompression(), right after converting the block back to float:
// Converting the block to float
block.convertTo(block, CV_64FC1);
subtract(block, 128.0, block);
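If it helps to verify the fix in isolation, here is a self-contained sketch (a random 8x8 block standing in for one luminance tile, single plane, luminance table only) that runs one block through the corrected pipeline so the input and the restored output can be compared:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // Luminance quantization table from the question
    double dataLuminance[8][8] = {
        {16, 11, 10, 16, 24, 40, 51, 61},
        {12, 12, 14, 19, 26, 58, 60, 55},
        {14, 13, 16, 24, 40, 57, 69, 56},
        {14, 17, 22, 29, 51, 87, 80, 62},
        {18, 22, 37, 56, 68, 109, 103, 77},
        {24, 35, 55, 64, 81, 104, 113, 92},
        {49, 64, 78, 87, 103, 121, 120, 101},
        {72, 92, 95, 98, 112, 100, 103, 99}
    };
    cv::Mat luminance(8, 8, CV_64FC1, dataLuminance);

    // Random 8-bit block standing in for one image tile
    cv::Mat block(8, 8, CV_8UC1);
    cv::randu(block, 0, 256);

    // Compression: shift, DCT, quantize, shift back, store as 8-bit
    cv::Mat work;
    block.convertTo(work, CV_64FC1);
    cv::subtract(work, 128.0, work);
    cv::dct(work, work);
    cv::divide(work, luminance, work);
    cv::add(work, 128.0, work);        // the fix: keep coefficients non-negative
    work.convertTo(work, CV_8UC1);

    // Decompression: undo the shift, dequantize, inverse DCT, shift back
    work.convertTo(work, CV_64FC1);
    cv::subtract(work, 128.0, work);   // the fix: restore the signed coefficients
    cv::multiply(work, luminance, work);
    cv::idct(work, work);
    cv::add(work, 128.0, work);
    cv::Mat restored;
    work.convertTo(restored, CV_8UC1);

    std::cout << "original:\n" << block << "\nrestored:\n" << restored << std::endl;
    return 0;
}
Since the scheme is lossy, the restored values only approximate the originals; the match is much closer for smooth blocks than for random ones like this.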