I'm trying a very simple thing with OpenCV but getting an error.
I'm just trying to read a 16-bit PNG image and access a specific pixel value. I tried many ways but couldn't manage to get the value. I'm using OpenCV 3.0 on Windows 8 64-bit.
NOTE: reading the image with CV_LOAD_IMAGE_GRAYSCALE works fine, but CV_LOAD_IMAGE_ANYDEPTH raises an error. And when I use CV_LOAD_IMAGE_GRAYSCALE, my highest pixel value is 9, when it should be around 2000.
I uploaded an example image.
My example code:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
int _tmain(int argc, _TCHAR* argv[])
{
cv::Mat frame = cv::imread("filepath", CV_LOAD_IMAGE_ANYDEPTH );//using CV_LOAD_IMAGE_GRAYSCALE is fine, but CV_LOAD_IMAGE_ANYDEPTH rising error
frame.convertTo(frame, CV_16U);// to be sure... i omitted this part also and same error
double min, max;
cv::Point mloc, mxloc;
cv::minMaxLoc(frame, &min, &max, &mloc, &mxloc);
//i can access min and max values but not the specific pixel value
float zmx = frame.at<unsigned char>(118, 38);//rise error
float zm = frame.at<float>(30,40);//rise error
return 0;
}
Error message:
Unhandled exception at 0x00007FF8EB288A5C in OpenCVTest.exe: Microsoft C++ exception: cv::Exception at memory location 0x000000A47F40F230.
But I think this error is misleading; I checked that my image is 320x240, so I'm sure there is a pixel at that location.
I tried with Scalar as well, but I get the same error.
Your biggest problem is that you're accessing a 16 bpp image, i.e. a Mat of type CV_16U, with the wrong data type. You should use frame.at<ushort>(...) for a single-channel 16 bpp image (I suppose this is the case here), or frame.at<Vec3w>(...) for a 3-channel image.
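For reference, a short sketch of how the at<> element type must match the Mat type (these are the standard OpenCV mappings, not specific to this question):

cv::Mat m8u   = cv::Mat::zeros(2, 2, CV_8UC1);
cv::Mat m16u  = cv::Mat::zeros(2, 2, CV_16UC1);
cv::Mat m32f  = cv::Mat::zeros(2, 2, CV_32FC1);
cv::Mat m16u3 = cv::Mat::zeros(2, 2, CV_16UC3);
uchar     a = m8u.at<uchar>(0, 0);       // CV_8UC1  -> uchar
ushort    b = m16u.at<ushort>(0, 0);     // CV_16UC1 -> ushort
float     c = m32f.at<float>(0, 0);      // CV_32FC1 -> float
cv::Vec3w d = m16u3.at<cv::Vec3w>(0, 0); // CV_16UC3 -> Vec3w (three ushorts)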
You should also make sure that you're loading the image properly. Calling imread with the IMREAD_GRAYSCALE flag converts your image to 8 bpp, which is not what you want. You should use IMREAD_ANYDEPTH or IMREAD_UNCHANGED.
Take a look at this code:
#include<opencv2/opencv.hpp>
int main()
{
// Read the image as original bpp
cv::Mat frame = cv::imread("path_to_image", cv::IMREAD_ANYDEPTH);
// Be sure that the image is loaded
if (frame.empty())
{
// No image loaded
return -1;
}
// Be sure that the image is 16bpp and single channel
if (frame.type() != CV_16U || frame.channels() != 1)
{
// Wrong image depth or channels
return -1;
}
double min_val, max_val;
cv::Point min_loc, max_loc;
cv::minMaxLoc(frame, &min_val, &max_val, &min_loc, &max_loc);
// Access values with correct data type
ushort zmx = frame.at<ushort>(max_loc);
return 0;
}
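If your PNG turns out to be 3-channel 16 bpp instead, a minimal sketch of the access (my assumption here is the loading pattern from above; IMREAD_UNCHANGED keeps both the depth and the channel count):

cv::Mat frame = cv::imread("path_to_image", cv::IMREAD_UNCHANGED);
if (!frame.empty() && frame.type() == CV_16UC3)
{
    cv::Vec3w px = frame.at<cv::Vec3w>(118, 38);     // row 118, column 38
    ushort blue = px[0], green = px[1], red = px[2]; // OpenCV stores channels as BGR
}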
A couple of things you should notice.
First, you are defining frame twice.
Second, the line float zmx = frame.at<unsigned char>(118, 38); has a couple of issues: you are assigning an unsigned char to a float, and the index order is reversed. To access the pixel at (x, y) you call frame.at<unsigned char>(y, x). The best way is to assign to a Scalar instead, like this:
Scalar fmx = frame.at<uchar>(118, 38);
or use a Point to avoid confusion:
Scalar fmx = frame.at<uchar>(Point(38, 118));
Last thing, make sure you loaded the image properly and that frame actually holds the image data.
UPDATE
I just tested your code and it worked fine (check below). I can't think of anything other than the image not being found at the path provided.
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
int main(int argc, char** argv)
{
cv::Mat frame = cv::imread("0FD0X.png", CV_LOAD_IMAGE_GRAYSCALE);
frame.convertTo(frame, CV_16U);// to be sure... i omitted this part also and same error
double min, max;
cv::Point mloc, mxloc;
cv::minMaxLoc(frame, &min, &max, &mloc, &mxloc);
//i can access min and max values but not the specific pixel value
float zmx = frame.at<unsigned char>(118, 38);// no error
float zm = frame.at<float>(30, 40);// no error
std::cout << zmx << std::endl; // out 0
std::cout << min << std::endl; // out 0
std::cout << max << std::endl; // out 9
std::cout << mloc << std::endl; // out [0,0]
std::cout << mxloc << std::endl; // out [125,30]
return 0;
}
Update #2: to access a multichannel image you need to use the Vec3b data type. Also notice the order of the point coordinates. Check the following code:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include "opencv2/highgui/highgui.hpp"
int main(int argc, char** argv)
{
cv::Mat frame = cv::imread("0FD0X.png", CV_LOAD_IMAGE_ANYDEPTH);
frame.convertTo(frame, CV_16U);// to be sure... i omitted this part also and same error
double min, max;
cv::Point mloc, mxloc;
cv::minMaxLoc(frame, &min, &max, &mloc, &mxloc);
//i can access min and max values but not the specific pixel value
ushort pValShort = frame.at<ushort>(38, 118);// no error
Vec3b pValVec = frame.at<Vec3b>(38, 118);// no error
Vec3b pValVecPoint = frame.at<Vec3b>(Point(118,38));// no error
std::cout << pValShort << std::endl; // out 2423
std::cout << pValVec << std::endl; // out [166,8,165]
std::cout << pValVecPoint << std::endl; // out [166,8,165]
std::cout << min << std::endl; // out 0
std::cout << max << std::endl; // out 2423
std::cout << mloc << std::endl; // out [0,0]
std::cout << mxloc << std::endl; // out [118,38]
return 0;
}
I'm currently trying to use a monochrome camera with the aruco and OpenCV libraries in order to speed up the computation and get better marker capture. The problem I am having is that the monochrome feed is tripled on screen when running the aruco_test program, so the resolution is diminished by two thirds and each marker is detected three times instead of once.
I've seen threads discussing similar problems with monochrome cameras in OpenCV. Some answers suggested cropping the image (which fixes the tripling problem but not the smaller resolution), but it all seems to be caused by the conversion from either BGR2GRAY or GRAY2BGR.
Any help on what exactly is causing the images to be tripled, and how to bypass that part in either the aruco source code or the OpenCV source code, would be appreciated.
INFO :
Driver Info (not using libv4l2):
Driver name : uvcvideo
Card type : oCam-1MGN-U
Bus info : usb-0000:00:1d.0-1.5
Driver version: 3.13.11
Capabilities : 0x84000001
Video Capture
Streaming
Device Capabilities
Device Caps : 0x04000001
Video Capture
Streaming
Priority: 2
Video input : 0 (Camera 1: ok)
Format Video Capture:
Width/Height : 1280/960
Pixel Format : 'GREY'
Field : None
Bytes per Line: 1280
Size Image : 1228800
Colorspace : Unknown (00000000)
Crop Capability Video Capture:
Bounds : Left 0, Top 0, Width 1280, Height 960
Default : Left 0, Top 0, Width 1280, Height 960
Pixel Aspect: 1/1
Streaming Parameters Video Capture:
Capabilities : timeperframe
Frames per second: 30.000 (30/1)
Read buffers : 0
brightness (int) : min=0 max=127 step=1 default=64 value=64
exposure_absolute (int) : min=1 max=625 step=1 default=39 value=39
Using Aruco 2.0.19 and OpenCV 3.2
Since the pixel format is not YUYV, I cannot simply take the Y channel from the camera feed.
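One thing I might try (an untested assumption on my part, not a confirmed fix) is asking the capture backend not to convert to BGR at all, so the GREY frames arrive as single-channel data. Whether the V4L2 backend honors CAP_PROP_CONVERT_RGB for the 'GREY' pixel format is not guaranteed:

cv::VideoCapture cap(0);
cap.set(cv::CAP_PROP_CONVERT_RGB, false); // request raw, unconverted frames
cv::Mat gray;
cap >> gray; // if the property took effect, this should be 1280x960 CV_8UC1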
Code executed:
#include <string>
#include <iostream>
#include <fstream>
#include <sstream>
#include "aruco.h"
#include "cvdrawingutils.h"
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace cv;
using namespace std;
using namespace aruco;
MarkerDetector MDetector;
VideoCapture TheVideoCapturer;
vector< Marker > TheMarkers;
Mat TheInputImage, TheInputImageCopy;
CameraParameters TheCameraParameters;
void cvTackBarEvents(int pos, void *);
pair< double, double > AvrgTime(0, 0); // determines the average time required for detection
int iThresParam1, iThresParam2;
int waitTime = 0;
class CmdLineParser {
    int argc;
    char **argv;
public:
    CmdLineParser(int _argc, char **_argv) : argc(_argc), argv(_argv) {}
    bool operator[](string param) {
        int idx = -1;
        for (int i = 0; i < argc && idx == -1; i++)
            if (string(argv[i]) == param) idx = i;
        return (idx != -1);
    }
    string operator()(string param, string defvalue = "-1") {
        int idx = -1;
        for (int i = 0; i < argc && idx == -1; i++)
            if (string(argv[i]) == param) idx = i;
        if (idx == -1) return defvalue;
        else return (argv[idx + 1]);
    }
};
cv::Mat resize(const cv::Mat &in, int width) {
    if (in.size().width <= width) return in;
    float yf = float(width) / float(in.size().width);
    cv::Mat im2;
    cv::resize(in, im2, cv::Size(width, float(in.size().height) * yf));
    return im2;
}
int main(int argc, char **argv) {
    try {
        CmdLineParser cml(argc, argv);
        if (argc < 2 || cml["-h"]) {
            cerr << "Invalid number of arguments" << endl;
            cerr << "Usage: (in.avi|live[:idx_cam=0]) [-c camera_params.yml] [-s marker_size_in_meters] [-d dictionary:ARUCO by default] [-h]" << endl;
            cerr << "\tDictionaries: ";
            for (auto dict : aruco::Dictionary::getDicTypes()) cerr << dict << " ";
            cerr << endl;
            cerr << "\t Instead of these, you can directly indicate the path to a file with your own generated dictionary" << endl;
            return false;
        }
        /////////// PARSE ARGUMENTS
        string TheInputVideo = argv[1];
        // read camera parameters if passed
        if (cml["-c"]) TheCameraParameters.readFromXMLFile(cml("-c"));
        float TheMarkerSize = std::stof(cml("-s", "-1"));
        //aruco::Dictionary::DICT_TYPES TheDictionary = Dictionary::getTypeFromString(cml("-d", "ARUCO"));
        /////////// OPEN VIDEO
        // read from camera or from file
        if (TheInputVideo.find("live") != string::npos) {
            int vIdx = 0;
            // check if the :idx is here
            char cad[100];
            if (TheInputVideo.find(":") != string::npos) {
                std::replace(TheInputVideo.begin(), TheInputVideo.end(), ':', ' ');
                sscanf(TheInputVideo.c_str(), "%s %d", cad, &vIdx);
            }
            cout << "Opening camera index " << vIdx << endl;
            TheVideoCapturer.open(vIdx);
            waitTime = 10;
        }
        else TheVideoCapturer.open(TheInputVideo);
        // check video is open
        if (!TheVideoCapturer.isOpened()) throw std::runtime_error("Could not open video");
        ///// CONFIGURE DATA
        // read first image to get the dimensions
        TheVideoCapturer >> TheInputImage;
        if (TheCameraParameters.isValid())
            TheCameraParameters.resize(TheInputImage.size());
        MDetector.setDictionary(cml("-d", "ARUCO")); // sets the dictionary to be employed (ARUCO, APRILTAGS, ARTOOLKIT, etc.)
        MDetector.setThresholdParams(7, 7);
        MDetector.setThresholdParamRange(2, 0);
        // MDetector.setCornerRefinementMethod(aruco::MarkerDetector::SUBPIX);
        // gui requirements: the trackbars to change these parameters
        iThresParam1 = MDetector.getParams()._thresParam1;
        iThresParam2 = MDetector.getParams()._thresParam2;
        cv::namedWindow("in");
        cv::createTrackbar("ThresParam1", "in", &iThresParam1, 25, cvTackBarEvents);
        cv::createTrackbar("ThresParam2", "in", &iThresParam2, 13, cvTackBarEvents);
        // go!
        char key = 0;
        int index = 0;
        // capture until ESC is pressed or until the end of the video
        do {
            TheVideoCapturer.retrieve(TheInputImage);
            // copy image
            double tick = (double)getTickCount(); // for checking the speed
            // Detection of markers in the image passed
            TheMarkers = MDetector.detect(TheInputImage, TheCameraParameters, TheMarkerSize);
            // check the speed by calculating the mean speed of all iterations
            AvrgTime.first += ((double)getTickCount() - tick) / getTickFrequency();
            AvrgTime.second++;
            cout << "\rTime detection=" << 1000 * AvrgTime.first / AvrgTime.second << " milliseconds nmarkers=" << TheMarkers.size() << std::endl;
            // print marker info and draw the markers in image
            TheInputImage.copyTo(TheInputImageCopy);
            for (unsigned int i = 0; i < TheMarkers.size(); i++) {
                cout << TheMarkers[i] << endl;
                TheMarkers[i].draw(TheInputImageCopy, Scalar(0, 0, 255));
            }
            // draw a 3d cube in each marker if there is 3d info
            if (TheCameraParameters.isValid() && TheMarkerSize > 0)
                for (unsigned int i = 0; i < TheMarkers.size(); i++) {
                    CvDrawingUtils::draw3dCube(TheInputImageCopy, TheMarkers[i], TheCameraParameters);
                    CvDrawingUtils::draw3dAxis(TheInputImageCopy, TheMarkers[i], TheCameraParameters);
                }
            // DONE! Easy, right?
            // show input with augmented information and the thresholded image
            cv::imshow("in", resize(TheInputImageCopy, 1280));
            cv::imshow("thres", resize(MDetector.getThresholdedImage(), 1280));
            key = cv::waitKey(waitTime); // wait for key to be pressed
            if (key == 's') waitTime = waitTime == 0 ? 10 : 0;
            index++; // number of images captured
        } while (key != 27 && (TheVideoCapturer.grab()));
    } catch (std::exception &ex)
    {
        cout << "Exception :" << ex.what() << endl;
    }
}
void cvTackBarEvents(int pos, void *) {
    (void)(pos);
    if (iThresParam1 < 3) iThresParam1 = 3;
    if (iThresParam1 % 2 != 1) iThresParam1++;
    if (iThresParam1 < 1) iThresParam1 = 1;
    MDetector.setThresholdParams(iThresParam1, iThresParam2);
    // recompute
    MDetector.detect(TheInputImage, TheMarkers, TheCameraParameters);
    TheInputImage.copyTo(TheInputImageCopy);
    for (unsigned int i = 0; i < TheMarkers.size(); i++)
        TheMarkers[i].draw(TheInputImageCopy, Scalar(0, 0, 255));
    // draw a 3d cube in each marker if there is 3d info
    if (TheCameraParameters.isValid())
        for (unsigned int i = 0; i < TheMarkers.size(); i++)
            CvDrawingUtils::draw3dCube(TheInputImageCopy, TheMarkers[i], TheCameraParameters);
    cv::imshow("in", resize(TheInputImageCopy, 1280));
    cv::imshow("thres", resize(MDetector.getThresholdedImage(), 1280));
}
I'm trying to use an OpenCV SVM classifier in a cocos2d-x game. Here's a simple test function:
void HelloWorld::testOpenCV(){
    // Load SVM classifier
    auto classifierPath = FileUtils::getInstance()->fullPathForFilename("classifier.yml");
    cv::Ptr<cv::ml::SVM> svm = cv::ml::StatModel::load<cv::ml::SVM>(classifierPath);
    string filename = "test.jpg";
    auto img = new Image();
    img->initWithImageFile(filename);
    int imageSize = (int)img->getDataLen();
    int imageXW = img->getWidth();
    int imageYW = img->getHeight();
    unsigned char * srcData = img->getData();
    CCLOG("imageXW=%d, imageYW=%d", imageXW, imageYW);
    int ch = imageSize / (imageXW * imageYW);
    CCLOG("image=%dch raw data...", ch);
    cv::Mat testMat = createCvMatFromRaw(srcData, imageXW, imageYW, ch);
    testMat.convertTo(testMat, CV_32F);
    // try to predict which number has been drawn
    try {
        int predicted = svm->predict(testMat);
        CCLOG("Recognizing following number -> %d", predicted);
    } catch (cv::Exception ex) {
    }
}
And it gives an output:
imageXW=28, imageYW=28
image=3ch raw data...
OpenCV Error: Assertion failed (samples.cols == var_count && samples.type() == CV_32F) in predict, file /Volumes/build-storage/build/master_iOS-mac/opencv/modules/ml/src/svm.cpp, line 1930
It is based on this tutorial:
https://www.simplicity.be/article/recognizing-handwritten-digits/
Especially on this method:
// Standard library
#include <iostream>
#include <vector>
#include <string>
#include <cstdlib>
// OpenCV
#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/ml.hpp>
// POSIX
#include <unistd.h>
/**
 * main
 **/
int main(int argc, char** argv)
{
    // Load SVM classifier
    cv::Ptr<cv::ml::SVM> svm = cv::ml::StatModel::load<cv::ml::SVM>("classifier.yml");
    // read image file (grayscale)
    cv::Mat imgMat = cv::imread("test.jpg", 0);
    // convert 2d to 1d
    cv::Mat testMat = imgMat.clone().reshape(1, 1);
    testMat.convertTo(testMat, CV_32F);
    // try to predict which number has been drawn
    try {
        int predicted = svm->predict(testMat);
        std::cout << std::endl << "Recognizing following number -> " << predicted << std::endl << std::endl;
        std::string notifyCmd = "notify-send -t 1000 Recognized: " + std::to_string(predicted);
        system(notifyCmd.c_str());
    } catch (cv::Exception ex) {
    }
}
I ran it in the terminal and it worked.
Here's an implementation of createCvMatFromRaw:
cv::Mat HelloWorld::createCvMatFromRaw(unsigned char *rawData, int rawXW, int rawYW, int ch)
{
    cv::Mat cvMat(rawYW, rawXW, CV_8UC4); // 8 bits per component, 4 channels
    for (int py = 0; py < rawYW; py++) {
        for (int px = 0; px < rawXW; px++) {
            int nBasePos = ((rawXW * py) + px) * ch;
            cvMat.at<cv::Vec4b>(py, px) = cv::Vec4b(rawData[nBasePos + 0],
                                                    rawData[nBasePos + 1],
                                                    rawData[nBasePos + 2],
                                                    0xFF);
        }
    }
    return cvMat;
}
I've found it here:
http://blog.szmake.net/archives/845
What does this assert mean? Can someone explain it to me? How can I fix this?
The assertion says
OpenCV Error: Assertion failed (samples.cols == var_count && samples.type() == CV_32F)
which means that the sample either doesn't have the right number of columns or doesn't have type CV_32F.
It looks like you forgot the reshape call, so your data violates the first condition. In order to apply the SVM, each sample needs to be a single row vector, i.e. a 1 x n matrix.
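A minimal sketch of the missing step, adapted from the question's code (the grayscale conversion is my assumption, matching the tutorial's grayscale training images; note that createCvMatFromRaw returns a 4-channel Mat, so reshaping alone would still give the wrong column count):

cv::Mat testMat = createCvMatFromRaw(srcData, imageXW, imageYW, ch); // CV_8UC4
cv::cvtColor(testMat, testMat, cv::COLOR_RGBA2GRAY); // 1 channel, like the training data
testMat = testMat.reshape(1, 1);                     // flatten to a 1 x (28*28) row vector
testMat.convertTo(testMat, CV_32F);                  // satisfy samples.type() == CV_32F
float predicted = svm->predict(testMat);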
When I run this program and adjust the slider multiple times, the image appears different even though the slider is at the same position. If you try this code, move the slider from the minimum to the maximum position back and forth several times and you can see a slight alteration to the image each time.
I have traced the point at which this happens to the line calling the add function in my onProgram6Trackbar1 function. Removing it removes the variations between slide movements. Why is this happening?
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <string>
using namespace cv;
using namespace std;
#include <cmath>
class ProgramData {
public:
    ProgramData() {
        k1 = 0;
        k2 = 0;
        k3 = 0;
        k4 = 0;
        k5 = 0;
    }
    int k1;
    int k2;
    int k3;
    int k4;
    int k5;
    Mat * source_U8C3;
    Mat * temp1_U8C3;
    Mat * temp2_U8C3;
    Mat * temp3_U8C1;
    Mat * temp4_U8C1;
    Mat * temp5_U8C1;
    Mat * temp6_U8C1;
    Mat * temp7_U8C1;
    vector<Mat> tempv1_U8C1;
    vector<Mat> tempv2_U8C1;
    Mat * output_U8C1;
    Mat * output_U8C3;
    Mat * dim1by1;
};
static void onProgram6Trackbar1(int v, void* vp) {
    ProgramData * pd = (ProgramData *) vp;
    *(pd->temp3_U8C1) = pd->tempv1_U8C1[2].clone();
    inRange(*(pd->temp3_U8C1), pd->k1, 255, *(pd->temp4_U8C1));
    bitwise_not(*(pd->temp4_U8C1), *(pd->temp5_U8C1));
    bitwise_and(*(pd->temp5_U8C1), *(pd->temp3_U8C1), *(pd->temp6_U8C1));
    bitwise_or(*(pd->temp6_U8C1), Scalar(pd->k1), *(pd->temp7_U8C1), *(pd->temp4_U8C1));
    imshow("Glare Reduction 4", *(pd->temp7_U8C1));
}
void program6(char * argv) {
    ProgramData pd;
    pd.k1 = 0;
    Mat source = imread(argv, IMREAD_COLOR); // Read the file
    pd.source_U8C3 = &source;
    Size s(pd.source_U8C3->size().width / 1.3, pd.source_U8C3->size().height / 1.3);
    resize(*(pd.source_U8C3), *(pd.source_U8C3), s, 0, 0, CV_INTER_AREA);
    pd.output_U8C3 = new Mat(pd.source_U8C3->rows, pd.source_U8C3->cols, pd.source_U8C3->type());
    pd.output_U8C1 = new Mat(pd.source_U8C3->rows, pd.source_U8C3->cols, CV_8UC1);
    //pd.temp1_U8C3 = new Mat(pd.source_U8C3->rows, pd.source_U8C3->cols, pd.source_U8C3->type());
    pd.temp2_U8C3 = new Mat(pd.source_U8C3->rows, pd.source_U8C3->cols, pd.source_U8C3->type());
    pd.temp3_U8C1 = new Mat(pd.source_U8C3->rows, pd.source_U8C3->cols, CV_8UC1);
    pd.temp4_U8C1 = new Mat(pd.source_U8C3->rows, pd.source_U8C3->cols, CV_8UC1);
    pd.temp5_U8C1 = new Mat(pd.source_U8C3->rows, pd.source_U8C3->cols, CV_8UC1);
    pd.temp6_U8C1 = new Mat(pd.source_U8C3->rows, pd.source_U8C3->cols, CV_8UC1);
    pd.temp7_U8C1 = new Mat(pd.source_U8C3->rows, pd.source_U8C3->cols, CV_8UC1);
    pd.dim1by1 = new Mat(100, 800, CV_8UC1);
    cout << "source type = " << pd.source_U8C3->type() << endl;
    if (!pd.source_U8C3->data) { cout << "Could not open image" << std::endl; return; }
    cvtColor(*(pd.source_U8C3), *(pd.temp2_U8C3), CV_BGR2HSV); // original to hsv
    split(*(pd.temp2_U8C3), pd.tempv1_U8C1);
    namedWindow("Glare Reduction - Controls", WINDOW_AUTOSIZE); // Create a window for display.
    onProgram6Trackbar1(0, &pd);
    createTrackbar("k1", "Glare Reduction - Controls", &(pd.k1), 255, &onProgram6Trackbar1, &pd);
    imshow("Glare Reduction - Controls", *(pd.dim1by1)); // Show our image inside it.
    waitKey(0); // Wait for a keystroke in the window
}
int main(int argc, char** argv)
{
    program6("Blocks1.jpg");
}
Update 1:
New code is posted below. I tried changing the code to not use any Mat pointers. It still does exactly the same thing.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>
#include <string>
using namespace cv;
using namespace std;
#include <cmath>
class ProgramData {
public:
    ProgramData() {
        k1 = 0;
    }
    int k1;
    Mat source_U8C3;
    Mat temp1_U8C3;
    Mat temp2_U8C3;
    Mat temp3_U8C1;
    Mat temp4_U8C1;
    Mat temp5_U8C1;
    Mat temp6_U8C1;
    Mat temp7_U8C1;
    vector<Mat> tempv1_U8C1;
    vector<Mat> tempv2_U8C1;
    Mat output_U8C1;
    Mat output_U8C3;
    Mat dim1by1;
};
static void onProgram6Trackbar1(int v, void* vp) {
    ProgramData * pd = (ProgramData *) vp;
    pd->temp3_U8C1 = pd->tempv1_U8C1[2].clone();
    inRange(pd->temp3_U8C1, Scalar(pd->k1), Scalar(255), pd->temp4_U8C1);
    bitwise_not(pd->temp4_U8C1, pd->temp5_U8C1); // Note for Monday: here does not work, below works. Why?
    bitwise_and(pd->temp5_U8C1, pd->temp3_U8C1, pd->temp6_U8C1);
    bitwise_or(pd->temp6_U8C1, Scalar(pd->k1), pd->temp7_U8C1, pd->temp4_U8C1);
    imshow("Glare Reduction 4", pd->temp7_U8C1);
}
int main(int argc, char** argv) {
    ProgramData pd;
    pd.k1 = 0;
    pd.source_U8C3 = imread("Photo Examples/Blocks1.jpg", IMREAD_COLOR); // Read the file
    Size s(pd.source_U8C3.size().width / 1.3, pd.source_U8C3.size().height / 1.3);
    resize(pd.source_U8C3, pd.source_U8C3, s, 0, 0, CV_INTER_AREA);
    pd.dim1by1.create(100, 800, CV_8UC1);
    cout << "source type = " << pd.source_U8C3.type() << endl;
    if (!pd.source_U8C3.data) { cout << "Could not open image" << std::endl; return 0; }
    cvtColor(pd.source_U8C3, pd.temp2_U8C3, CV_BGR2HSV); // original to hsv
    split(pd.temp2_U8C3, pd.tempv1_U8C1);
    namedWindow("Glare Reduction - Controls", WINDOW_AUTOSIZE); // Create a window for display.
    onProgram6Trackbar1(0, &pd);
    createTrackbar("k1", "Glare Reduction - Controls", &(pd.k1), 255, &onProgram6Trackbar1, &pd);
    imshow("Glare Reduction - Controls", pd.dim1by1); // Show our image inside it.
    waitKey(0); // Wait for a keystroke in the window
    return 0;
}
Update 2:
I think I may have found the source of the problem. When I add the marked line,
static void onProgram6Trackbar1(int v, void* vp) {
    ProgramData * pd = (ProgramData *) vp;
    pd->temp3_U8C1 = pd->tempv1_U8C1[2].clone();
    inRange(pd->temp3_U8C1, Scalar(pd->k1), Scalar(255), pd->temp4_U8C1);
    bitwise_not(pd->temp4_U8C1, pd->temp5_U8C1);
    bitwise_and(pd->temp5_U8C1, pd->temp3_U8C1, pd->temp6_U8C1);
    pd->temp7_U8C1 = pd->tempv1_U8C1[2].clone(); // <----
    bitwise_or(pd->temp6_U8C1, Scalar(pd->k1), pd->temp7_U8C1, pd->temp4_U8C1);
    imshow("Glare Reduction 4", pd->temp7_U8C1);
}
to onProgram6Trackbar1, it suddenly works as expected. I thought that since OpenCV 2 does its own memory allocation, I didn't have to initialize pd->temp7_U8C1, which serves as the output matrix in the call to bitwise_or. It's almost as if the underlying memory of pd->temp7_U8C1 was pointing to memory belonging to one of the buffers used as output by the image processing done in main (pd.tempv1_U8C1 or pd.source_U8C3). Or the line I added did something else that I have not thought of.
So my new question is: why did this line fix it, and what is going on underneath? Is the behavior of using an uninitialized Mat as an output defined somewhere in the documentation? It was my understanding that you don't have to initialize the size or type of a matrix that you are using as an output Mat.
Maybe a bit too old, anyway: first check the slightly cleaned code below. I removed everything that's redundant and moved the actual work of the trackbar callback into a member function of your class. This way, you can operate directly on the members.
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <iostream>

using namespace cv;
using namespace std;

class ProgramData
{
public:
    ProgramData()
    {
        k1 = 0;
    }
    int k1;
    Mat source_U8C3,
        temp2_U8C3, temp4_U8C1,
        temp5_U8C1, temp6_U8C1,
        temp7_U8C1;
    vector<Mat> tempv1_U8C1;

    void reduce_glare(void)
    {
        // sets elements in temp4 to 255 if within range
        inRange(tempv1_U8C1[2], Scalar(k1), Scalar(255), temp4_U8C1);
        // bitwise_not(InputArray src, OutputArray dst)
        bitwise_not(temp4_U8C1, temp5_U8C1);
        // bitwise_and(InputArray src1, InputArray src2, OutputArray dst)
        bitwise_and(temp5_U8C1, tempv1_U8C1[2], temp6_U8C1);
        // watch out here:
        temp7_U8C1 = Mat::ones(tempv1_U8C1[2].size(), CV_8UC1);
        Mat x = Mat::ones(tempv1_U8C1[2].size(), CV_8UC1) * k1;
        // bitwise_or(InputArray src1, InputArray src2, OutputArray dst, InputArray mask)
        bitwise_or(temp6_U8C1, x, temp7_U8C1, temp4_U8C1);
        cout << "source type = " << temp7_U8C1.type() << endl;
        cout << "source channels = " << temp7_U8C1.channels() << endl;
        cout << "source depth = " << temp7_U8C1.depth() << endl;
    }
};

void onProgram6Trackbar1(int v, void *vp)
{
    ProgramData *pd = static_cast<ProgramData *>(vp);
    (*pd).reduce_glare();
    imshow("Glare Reduction 4", pd->temp7_U8C1);
}

int main(int argc, char **argv)
{
    ProgramData pd;
    pd.source_U8C3 = imread("CutDat.jpeg", IMREAD_COLOR);
    Size s(pd.source_U8C3.size().width / 1.3, pd.source_U8C3.size().height / 1.3);
    resize(pd.source_U8C3, pd.source_U8C3, s, 0, 0, CV_INTER_AREA);
    cout << "source type = " << pd.source_U8C3.type() << endl;
    cvtColor(pd.source_U8C3, pd.temp2_U8C3, CV_BGR2HSV);
    split(pd.temp2_U8C3, pd.tempv1_U8C1);
    namedWindow("Glare Reduction - Controls", WINDOW_AUTOSIZE);
    imshow("Glare Reduction - Controls", Mat(100, 800, CV_8UC1));
    createTrackbar("k1", "Glare Reduction - Controls", &(pd.k1), 255, &onProgram6Trackbar1, &pd);
    waitKey(0);
    return 0;
}
Important is the line where temp7_U8C1 is initialized, though not with the original data. The result you get is still not what you want, but it highlights that the issue lies within the call to bitwise_or. Your question regarding the Scalar bug doesn't apply here, as I've shown in the code.
The code was tested on Windows with OpenCV 2.4.10 and on Ubuntu with 2.4.8, both giving the same results. Running the code under valgrind comes out clean.
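A hedged note on why the initialization matters: for masked operations such as bitwise_or, OpenCV only writes destination elements where the mask is non-zero and leaves the remaining elements untouched, so a freshly allocated destination holds whatever was in that memory. A minimal sketch (the sizes and values here are made up for illustration):

#include <opencv2/core/core.hpp>
int main()
{
    cv::Mat src1 = cv::Mat::ones(4, 4, CV_8UC1) * 8;
    cv::Mat src2 = cv::Mat::ones(4, 4, CV_8UC1) * 3;
    cv::Mat mask = cv::Mat::zeros(4, 4, CV_8UC1);
    mask(cv::Rect(0, 0, 2, 2)) = 255;            // operate only on the top-left block
    cv::Mat dst = cv::Mat::zeros(4, 4, CV_8UC1); // define every pixel up front
    cv::bitwise_or(src1, src2, dst, mask);       // pixels outside the mask stay 0
    return 0;
}

Without the zeros initialization, the pixels outside the mask would contain whatever the freshly allocated buffer happened to hold, which matches the slider-dependent artifacts described in the question.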
I'm trying to get the value of the standard deviation of a frame saved into a floating-point variable. Here's what I've tried:
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>
#include <iostream>
int main (){
cv::Mat frame,thresholdResult,contours, stddev , mean ;
cv::Mat meanResult;
cv::Scalar meanScalar,devScalar;
double meanValue,stddevValue;
cv::VideoCapture cap(2);
cap >> frame;
meanResult= cv::Mat::zeros (frame.rows,frame.cols,CV_32FC1);
while (cv::waitKey(10)!=27){
cap >> frame;
cv::cvtColor(frame,frame,CV_BGR2GRAY);
cv::imshow("frame", frame);
meanScalar = cv::mean(frame);
meanValue = meanScalar[0];
std::cout <<meanValue << std::endl;
cv::threshold(frame,thresholdResult,meanValue,255,CV_THRESH_BINARY);
cv::imshow("threshold result " , thresholdResult);
cv::blur(thresholdResult,thresholdResult,cv::Size(3,3));
cv::Canny(thresholdResult,contours,125,350); // canny
cv::imshow(" canny " , contours);
cv::meanStdDev(frame,mean,stddev); // stddev number of channels is
stddevValue= stddev.at<uchar>(0,0);// here the program crashes
if(!stddev.empty())
cv::imshow("stddev",stddev);
}
return 0;
}
Any idea how I can solve this?
I've found the error:
Instead of using stddevValue = stddev.at<uchar>(0,0); // here the program crashes
I had to use stddevValue = stddev.at<double>(0,0); and this works!
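For context, a short sketch of why double is the correct type: cv::meanStdDev writes its outputs as CV_64F (one value per channel) regardless of the input depth, so the elements must be read as double.

#include <opencv2/core/core.hpp>
int main()
{
    cv::Mat frame = cv::Mat::ones(4, 4, CV_8UC1) * 7; // stand-in for a grabbed frame
    cv::Mat mean, stddev;
    cv::meanStdDev(frame, mean, stddev);          // outputs are CV_64F
    double stddevValue = stddev.at<double>(0, 0); // first (here: only) channel
    return 0;
}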
I have been porting some video processing code to C++ using OpenCV 2.4.3. The following test program closely mimics how my code will read each frame from a video, operate on its contents, and then write new frames to a new video file.
Strangely, the output frames are entirely black when the pixels are set individually, but are written correctly when the entire frame is cloned.
In practice, I'd use the two macros to access and assign desired values, but the sequential scan used in the example shows the idea more clearly.
Does anyone know where I'm going wrong?
test.cpp:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <iostream>
#include <string>
#include <cstdlib> // for malloc
using namespace std;
using namespace cv;
#define RGB_REF(PR,NC,R,C,CH) (*((PR) + ((3*(NC)*(R)+(C))+(CH))))
#define GRAY_REF(PR,NC,R,C) (*((PR) + (NC)*(R)+(C)))
int main(int argc, char* argv[])
{
    string video_path(argv[1]);
    cerr << "Video path is " + video_path + "\n";
    VideoCapture capture(video_path);
    if (!capture.isOpened())
    {
        cerr << "Input file could not be opened\n";
        return 1;
    } else
    {
        string output_path(argv[2]);
        VideoWriter output;
        int ex = (int)capture.get(CV_CAP_PROP_FOURCC);
        Size S = Size((int) capture.get(CV_CAP_PROP_FRAME_WIDTH),
                      (int) capture.get(CV_CAP_PROP_FRAME_HEIGHT));
        output.open(output_path, ex, capture.get(CV_CAP_PROP_FPS), S, true);
        if (!output.isOpened())
        {
            cerr << "Output file could not be opened\n";
            return 1;
        }
        unsigned int numFrames = (unsigned int) capture.get(CV_CAP_PROP_FRAME_COUNT);
        unsigned int m = (unsigned int) capture.get(CV_CAP_PROP_FRAME_HEIGHT);
        unsigned int n = (unsigned int) capture.get(CV_CAP_PROP_FRAME_WIDTH);
        unsigned char* im = (unsigned char*) malloc(m*n*3*sizeof(unsigned char));
        unsigned char* bw = (unsigned char*) malloc(m*n*3*sizeof(unsigned char));
        Mat frame(m, n, CV_8UC3, im);
        Mat outputFrame(m, n, CV_8UC3, bw);
        for (size_t i = 0; i < numFrames; i++)
        {
            capture >> frame;
            for (size_t x = 0; x < (3*m*n); x++)
            {
                bw[x] = im[x];
            }
            output << outputFrame; // blank frames
            // output << frame; // works
            // output << (outputFrame = frame); // works
        }
    }
}
When you query a frame from VideoCapture with capture >> frame;, frame is modified: it may receive a brand-new data buffer. So im no longer points to the buffer of frame.
Try
bw[x] = frame.ptr()[x];
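For completeness, a minimal sketch of the adjusted loop (under the assumption that the decoded frames are continuous in memory, which is typical for 3-channel video frames):

for (size_t i = 0; i < numFrames; i++)
{
    capture >> frame;
    const unsigned char* src = frame.ptr(); // frame's current buffer, which may have moved
    for (size_t x = 0; x < (3*m*n); x++)
    {
        bw[x] = src[x]; // copy into the buffer that outputFrame wraps
    }
    output << outputFrame;
}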