Applying SVD to a YCbCr image in OpenCV - C++

I am trying to watermark an image into a video sequence. The process requires an SVD decomposition of each frame, which I am trying to achieve with the partial code below. The SVD constructor (the marked line below) fails with a segmentation fault.
gdb reports the following error:
"Program received signal SIGSEGV, Segmentation fault.
0xb5d31ada in dlange_ () from /usr/lib/liblapack.so.3gf"
#include <iostream>
#include <stdio.h>
#include "cv.h"
#include "highgui.h"

const unsigned int MAX = 10000;

using namespace cv;
using namespace std;

int NO_FRAMES;

bool check_exit()
{
    return (waitKey(27) > 0) ? true : false;
}

int main(int argc, char ** argv)
{
    Mat rgb[MAX];
    Mat ycbcr[MAX];
    Mat wm_rgb[MAX];

    namedWindow("watermark", 1);
    namedWindow("RGB", 1);
    namedWindow("YCBCR", 1);

    VideoCapture capture(argv[1]);
    Mat watermark = imread(argv[2]);

    int i = 0;
    capture >> rgb[i];
    imshow("watermark", watermark);

    while(!rgb[i].empty())
    {
        imshow("RGB", rgb[i]);
        cvtColor(rgb[i], ycbcr[i], CV_RGB2YCrCb);
        imshow("YCBCR", ycbcr[i]);
        i++;
        capture >> rgb[i];
        cout << "frame " << i << endl;
        if(check_exit())
            exit(0);
    }

    //This line creates Segmentation fault
    SVD temp(rgb[0]);

    capture.release();
    return 0;
}

Being more familiar with the C interface, I'll just describe a few things that seem out of place in your C++ code:
The SVD() function expects the input to be a single-channel floating-point image, so you may need to convert it from the standard 8-bit, 3-channel format to 32-bit float. Here's some very basic (and not very efficient) code for illustration purposes:
int N = img->width;
IplImage* W = cvCreateImage( cvSize(N, 1), IPL_DEPTH_32F, 1 );
IplImage* A = cvCreateImage( cvGetSize(img), IPL_DEPTH_32F, 1 );
cvConvertScale(img, A);   // img must already be single-channel here
cvSVD(A, W, NULL, NULL, CV_SVD_MODIFY_A);
The singular values are stored in the Nx1 matrix (an IplImage in this case) named W. The input image img is converted to 32-bit float in A. We used the CV_SVD_MODIFY_A flag to make the computation faster by letting cvSVD modify A in place. The other outputs were left blank (NULL), but you can supply U and V as needed; please check the OpenCV docs for those.
Hopefully you'll be able to figure out from this working code what was wrong in your C++ code.
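In the C++ interface the equivalent fix is to pull out a single plane and convert it to float before constructing the SVD. A minimal sketch against the question's code, assuming the luma (Y) plane is the one to be decomposed (the question doesn't say which plane the watermark goes into):

vector<Mat> planes;
split(ycbcr[0], planes);             // after CV_RGB2YCrCb, planes[0] is Y
Mat yF;
planes[0].convertTo(yF, CV_32F);     // SVD wants single-channel floating point
SVD svd(yF);                         // svd.u, svd.w, svd.vt hold U, the singular values, and V^T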

Related

How to transform a 64-bit floats image into unsigned char using OpenCV

I have a single-channel image of 64-bit floats that I am trying to transform into unsigned chars using OpenCV. I can successfully visualize the image and resize it, as it is too big. However, when I try to transform the resized image into unsigned chars, nothing shows up.
I am doing the transformation using the function below, as advised here.
I initially tried const uchar* inBuffer = desc.data; to do the transformation, but according to the same source that seems to be unsafe, so I opted for a recasting method instead. That didn't work either, though to my best understanding it was the better choice. The code is below:
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>

using namespace cv;
using namespace std;

int main(int argc, char** argv)
{
    String imageName( "/home/to/Desktop/Myexample.tif" );
    if( argc > 1)
    {
        imageName = argv[1];
    }

    Mat image;
    Mat outImage;
    Mat corrected;

    // Read the file
    image = cv::imread( imageName, IMREAD_UNCHANGED );

    // Check for invalid input
    if(image.empty())
    {
        cout << "Could not open or find the image" << std::endl;
        return -1;
    }

    cv::resize(image, outImage, Size(800,800));
    cv::namedWindow("Resized", WINDOW_AUTOSIZE);
    cv::imshow("Resized", outImage+220);

    // Transformation of the resized image into an unsigned char for better visualization
    cv::resize(outImage, corrected, Size(800,800));
    cv::namedWindow("Corrected", WINDOW_AUTOSIZE);

    // From here nothing is showing up
    unsigned char const* inBuffer = reinterpret_cast<unsigned char const*>(outImage.data);
    cv::imshow("Corrected", *inBuffer);

    cv::waitKey(0);
    return 0;
}
Another thing I thought could be useful comes from the following source, where it was advised to use a double conversion. I understand that it is fast in terms of computation, but it didn't give me any useful result either.
Thank you in advance for shedding light on this matter.
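For what it's worth, the usual way to view a CV_64F image is to rescale it into the 0-255 range with convertTo rather than reinterpret the raw buffer. A minimal sketch, assuming a linear mapping of the image's actual value range is acceptable:

double minVal, maxVal;
cv::minMaxLoc(outImage, &minVal, &maxVal);   // outImage is the resized CV_64F image
cv::Mat display;
// map [minVal, maxVal] linearly onto [0, 255] and truncate to 8 bits
// (guard against maxVal == minVal in real code)
outImage.convertTo(display, CV_8U, 255.0 / (maxVal - minVal), -minVal * 255.0 / (maxVal - minVal));
cv::imshow("Corrected", display);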

OpenCV C++ Mat to Integer

I'm pretty new to OpenCV, so bear with me. I'm running a Mac Mini with OS X 10.8. I have a program that recognizes colors and displays them as a binary (black and white) picture. However, I want to store the number of white pixels as an integer (or float, etc.) to compare with other pixel counts. How can I do this? Here is my current code:
#include <iostream>
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/core/core.hpp"

using namespace cv;
using namespace std;

int main( int argc, char** argv )
{
    VideoCapture cap(0); //capture the video from webcam
    if ( !cap.isOpened() ) // if not success, exit program
    {
        cout << "Cannot open the web cam" << endl;
        return -1;
    }

    namedWindow("HSVLeftRed", CV_WINDOW_AUTOSIZE);
    namedWindow("HSVLeftGreen", CV_WINDOW_AUTOSIZE);

    while (true) {
        Mat image;
        cap.read(image);
        Mat HSV;
        Mat leftgreen;
        Mat leftred;

        //Left Cropping
        Mat leftimg = image(Rect(0, 0, 640, 720));

        //Left Red Detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(0,0,150), Scalar(0,0,255), leftgreen);
        //imshow("HSVLeftRed", leftgreen);
        //print pixel type

        //Left Green Detection
        cvtColor(leftimg, HSV, CV_BGR2HSV);
        inRange(HSV, Scalar(still need to find proper min values), Scalar(still need to find proper max values), leftgreen);
        //imshow("HSVLeftGreen", leftgreen);
        //compare pixel types
    }
    return 0;
}
Thanks in advance!
To count the non-zero pixels, OpenCV has the function cv::countNonZero. It takes as input the image whose non-zero pixels we want to count, and returns that count as an int. Here is the documentation.
In your case, since all the pixels are either black or white, all the non-zero pixels will be white pixels.
This is how to use it:
int cal = countNonZero(image);
Change image as per your code.
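Applied to the loop in the question, the comparison might look like this (a sketch that assumes the red mask is stored in leftred rather than reusing leftgreen, and that both inRange calls have proper bounds):

int redCount = countNonZero(leftred);
int greenCount = countNonZero(leftgreen);
if (redCount > greenCount)
    cout << "more red than green: " << redCount << " vs " << greenCount << endl;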

OpenCV - C++ Code runs in Eclipse but not in terminal?

I am trying to make the following code by Mohammad Reza Mostajabi (http://alum.sharif.ir/~mostajabi/Tutorial.html) run under Ubuntu 12.04 with OpenCV 2.4.6.1. I made some minor changes to the included libraries and added cv::initModule_nonfree() right at the start of main.
#include "cv.h"
#include "highgui.h"
#include "ml.h"
#include <stdio.h>
#include <iostream>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/nonfree/nonfree.hpp>
#include <vector>
using namespace cv;
using namespace std;
using std::cout;
using std::cerr;
using std::endl;
using std::vector;
char ch[30];
//--------Using SURF as feature extractor and FlannBased for assigning a new point to the nearest one in the dictionary
Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create("FlannBased");
Ptr<DescriptorExtractor> extractor = new SurfDescriptorExtractor();
SurfFeatureDetector detector(500);
//---dictionary size=number of cluster's centroids
int dictionarySize = 1500;
TermCriteria tc(CV_TERMCRIT_ITER, 10, 0.001);
int retries = 1;
int flags = KMEANS_PP_CENTERS;
BOWKMeansTrainer bowTrainer(dictionarySize, tc, retries, flags);
BOWImgDescriptorExtractor bowDE(extractor, matcher);
void collectclasscentroids() {
IplImage *img;
int i,j;
for(j=1;j<=4;j++)
for(i=1;i<=60;i++){
sprintf( ch,"%s%d%s%d%s","train/",j," (",i,").jpg");
const char* imageName = ch;
img = cvLoadImage(imageName,0);
vector<KeyPoint> keypoint;
detector.detect(img, keypoint);
Mat features;
extractor->compute(img, keypoint, features);
bowTrainer.add(features);
}
return;
}
int main(int argc, char* argv[])
{
cv::initModule_nonfree();
int i,j;
IplImage *img2;
cout<<"Vector quantization..."<<endl;
collectclasscentroids();
vector<Mat> descriptors = bowTrainer.getDescriptors();
int count=0;
for(vector<Mat>::iterator iter=descriptors.begin();iter!=descriptors.end();iter++)
{
count+=iter->rows;
}
cout<<"Clustering "<<count<<" features"<<endl;
//choosing cluster's centroids as dictionary's words
Mat dictionary = bowTrainer.cluster();
bowDE.setVocabulary(dictionary);
cout<<"extracting histograms in the form of BOW for each image "<<endl;
Mat labels(0, 1, CV_32FC1);
Mat trainingData(0, dictionarySize, CV_32FC1);
int k=0;
vector<KeyPoint> keypoint1;
Mat bowDescriptor1;
//extracting histogram in the form of bow for each image
for(j=1;j<=4;j++)
for(i=1;i<=60;i++){
sprintf( ch,"%s%d%s%d%s","train/",j," (",i,").jpg");
const char* imageName = ch;
img2 = cvLoadImage(imageName,0);
detector.detect(img2, keypoint1);
bowDE.compute(img2, keypoint1, bowDescriptor1);
trainingData.push_back(bowDescriptor1);
labels.push_back((float) j);
}
//Setting up SVM parameters
CvSVMParams params;
params.kernel_type=CvSVM::RBF;
params.svm_type=CvSVM::C_SVC;
params.gamma=0.50625000000000009;
params.C=312.50000000000000;
params.term_crit=cvTermCriteria(CV_TERMCRIT_ITER,100,0.000001);
CvSVM svm;
printf("%s\n","Training SVM classifier");
bool res=svm.train(trainingData,labels,cv::Mat(),cv::Mat(),params);
cout<<"Processing evaluation data..."<<endl;
Mat groundTruth(0, 1, CV_32FC1);
Mat evalData(0, dictionarySize, CV_32FC1);
k=0;
vector<KeyPoint> keypoint2;
Mat bowDescriptor2;
Mat results(0, 1, CV_32FC1);;
for(j=1;j<=4;j++)
for(i=1;i<=60;i++){
sprintf( ch,"%s%d%s%d%s","eval/",j," (",i,").jpg");
const char* imageName = ch;
img2 = cvLoadImage(imageName,0);
detector.detect(img2, keypoint2);
bowDE.compute(img2, keypoint2, bowDescriptor2);
evalData.push_back(bowDescriptor2);
groundTruth.push_back((float) j);
float response = svm.predict(bowDescriptor2);
results.push_back(response);
}
//calculate the number of unmatched classes
double errorRate = (double) countNonZero(groundTruth- results) / evalData.rows;
printf("%s%f","Error rate is ",errorRate);
return 0;
}
After doing this I can compile the code without problems. I can also run it within Eclipse, but once I try to make it work in a terminal I get the following error message:
" OpenCV Error: Assertion failed (!_descriptors.empty()) in add, file /home/mark/Downloads/FP/opencv-2.4.6.1/modules/features2d/src/bagofwords.cpp, line 57
terminate called after throwing an instance of 'cv::Exception'
what(): /home/mark/Downloads/FP/opencv-2.4.6.1/modules/features2d/src/bagofwords.cpp:57: error: (-215) !_descriptors.empty() in function add "
I've been trying to solve the problem for a few days now, but I just cannot get rid of this error. I also tried to do it with CodeBlocks, which gives me the same error. I would appreciate some help very much!
Thanks!
It's possible that your program fails to load the input images (when launched from the terminal) because it can't find them. Make sure your input images are copied to the directory from which you run the application. Eclipse may use a different working directory, which is why it sees the images when the program is started from Eclipse.
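A quick way to confirm this is to guard the image loads: cvLoadImage returns NULL on failure, and a frame that fails to load contributes no descriptors, which is exactly what the assertion in bowTrainer.add complains about. A sketch of a defensive loop body, reusing the question's variables:

img = cvLoadImage(imageName, 0);
if (!img) {
    cerr << "could not load " << imageName << endl;
    continue;               // skip missing files instead of passing NULL to detect()
}
detector.detect(img, keypoint);
extractor->compute(img, keypoint, features);
if (!features.empty())      // bowTrainer.add() asserts on empty descriptor matrices
    bowTrainer.add(features);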

OpenCV accumulatedWeight error (assertion fails on comparison of channels and size)

I am running the following code to calculate the running average of a collection of images read from a video in OpenCV.
EDIT: (Code updated)
#include <iostream>
#include <string>
#include <iomanip>
#include <sstream>
#include <cstdio>
#include "opencv2/core/core.hpp"
#include "opencv2/opencv.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/video/video.hpp"

using namespace std;
using namespace cv;

int main(int argc, char *argv[]) {
    if(argc < 2) {
        printf("Quitting. Insufficient parameters\n");
        return 0;
    }

    char c;
    int frameNum = -1;
    const char* WIN_MAIN = "Main Window";
    namedWindow(WIN_MAIN, CV_WINDOW_AUTOSIZE);

    VideoCapture capture;
    capture.open(argv[1]);

    Mat acc, img;
    capture.retrieve(img, 3);
    acc = Mat::zeros(img.size(), CV_32FC3);

    for(;;) {
        if(!capture.grab()) {
            printf("End of frame\n");
            break;
        }
        capture.retrieve(img, 3);
        Mat floating;
        img.convertTo(floating, CV_32FC3);
        accumulateWeighted(floating, acc, 0.01);
        imshow(WIN_MAIN, img);
        waitKey(10);
    }
    return 0;
}
On running the code with a sample video, the following error pops up:
OpenCV Error: Assertion failed (dst.size == src.size && dst.channels() == cn) in accumulateWeighted, file /usr/lib/opencv/modules/imgproc/src/accum.cpp, line 430
terminate called after throwing an instance of 'cv::Exception'
what(): /usr/lib/opencv/modules/imgproc/src/accum.cpp:430: error: (-215) dst.size == src.size && dst.channels() == cn in function accumulateWeighted
Aborted (core dumped)
What could be the possible reason for the error? Could you please guide me in the right direction?
Compiler used : g++
OpenCV version : 2.4.5
Thanks!
from the refman:
src – Input image as 1- or 3-channel, 8-bit or 32-bit floating point.
dst – Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.
so, your camera input img is CV_8UC3, and your acc img is (currently) single-channel CV_32F. That's a misfit: the channel counts must match, and the accumulator must be floating point.
You want 3-channel floating point for acc, so that's:
acc = Mat::zeros(img.size(), CV_32FC3); // note: 3 channels now
For more precision, you want to convert your img to float type too, so the loop becomes:
for (;;) {
    if(!capture.grab()) {
        printf("End of frame\n");
        break;
    }
    capture.retrieve(img); // video probably has 1 stream only
    Mat floatimg;
    img.convertTo(floatimg, CV_32FC3);
    accumulateWeighted(floatimg, acc, 0.01);
}
EDIT:
try to replace your grab/retrieve sequence by:
for(;;) {
    capture >> img;
    if ( img.empty() )
        break;
As I read in the OpenCV documentation of retrieve:
C++: bool VideoCapture::retrieve(Mat& image, int channel=0);
the second argument is the channel.
And in the documentation of accumulateWeighted it says:
C++: void accumulateWeighted(InputArray src, InputOutputArray dst, double alpha, InputArray mask=noArray() )
Parameters: src – Input image as 1- or 3-channel, 8-bit or 32-bit floating point.
But in your code:
capture.retrieve(img, 3);
I guess you have the wrong channel parameter.
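With the default channel the call would simply be:

capture.grab();
capture.retrieve(img);  // channel defaults to 0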
I had the same problem and here's my solution.

Mat frame, acc;
// little hack: read the first frame to get the size
capture >> frame;
acc = Mat::zeros(frame.size(), CV_32FC3);
for(;;) {
    ...
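Putting the fixes together, a minimal complete loop might look like this (a sketch using the variable names from the answers above; the window name is taken from the question):

VideoCapture capture(argv[1]);
Mat frame, acc;
capture >> frame;                         // read one frame up front to learn the size
if (frame.empty())
    return -1;
acc = Mat::zeros(frame.size(), CV_32FC3); // 3-channel float accumulator
for (;;) {
    capture >> frame;
    if (frame.empty())
        break;
    Mat floatimg;
    frame.convertTo(floatimg, CV_32FC3);  // match the accumulator's type
    accumulateWeighted(floatimg, acc, 0.01);
    imshow("Main Window", frame);
    if (waitKey(10) >= 0)
        break;
}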

Knowing a pixel value after making an RGB-to-HSV conversion in OpenCV

I'm trying to get the H, S and V values of an image, so I convert an RGB image to HSV and then just read and print the desired values. I'm not sure I'm doing this right, because when printing the value (the V of HSV) I get values of 100+, and I understood that V only goes from 0 to 100. Maybe I'm not using a correct method. Here's the code:
#include "opencv/highgui.h"
#include "opencv/cv.h"
#include <cstdlib>
#include <iostream>
#include <stdio.h>
using namespace std;
int main(int argc, char** argv) {
int i=0,total=0;
IplImage* img = cvLoadImage( argv[1] );
IplImage* hsv;
CvSize size;
int key = 0, depth;
size = cvGetSize(img);
depth = img->depth;
hsv = cvCreateImage(size, depth, 3);
cvCvtColor( img, hsv, CV_BGR2HSV );
for(i=0;i<480;i++){ //asking for the values in \ form (1,1)(2,2),...(480,480)
CvScalar s;
s = cvGet2D(hsv,i,i);
printf("s=%f\n,s.val[2]); //s.val[2] equals to hs**V** right?
}
cvReleaseImage(&img);
cvReleaseImage(&val);
return 0;
}
The other answer here is correct, but here is a code snippet that I use to calculate the V channel in OpenCV. I take the value from the GIMP app, and this function gives me the corresponding OpenCV value.
//Max values: App HSV H=360 S=100 V=100 OpenCV H=180 S=255 V=255
double newHSV(double value)
{
    //new_val = value * opencv_max_range / other_app_max_range
    double newValue = value * 255 / 100;
    return newValue;
}
To check your OpenCV HSV values in another application like GIMP, just invert the formula:
gimp_value = opencv_value * other_app_max_range / opencv_max_range
The way you're doing it is correct; it's just that the values use different ranges.
H ideally goes from 0-360, but because a byte can only hold 0-255, H values are halved, so the range is 0-180.
S and V use the full range of 0-255 to express saturation and value.
You can read more about it here: http://opencv.willowgarage.com/documentation/python/miscellaneous_image_transformations.html#cvtcolor
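To print values in the more familiar 0-360 / 0-100 ranges, the loop in the question could rescale on the fly. A sketch, reusing the question's hsv image inside its loop:

CvScalar s = cvGet2D(hsv, i, i);
double h = s.val[0] * 2.0;                // OpenCV stores H halved: 0-180 -> 0-360
double sat = s.val[1] * 100.0 / 255.0;    // 0-255 -> 0-100
double v = s.val[2] * 100.0 / 255.0;      // 0-255 -> 0-100
printf("H=%.1f S=%.1f V=%.1f\n", h, sat, v);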