OpenCV QR code is detected but can't be decoded - C++

I wrote a simple QR code detection and decoding program with OpenCV.
The problem I'm facing is that the QR code gets detected but can't be decoded for the following image (see bottom).
The code I wrote looks like this:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    cv::Mat src = imread("scaled.png");
    if (src.empty())
    {
        cout << "can not open " << "Picture" << endl;
        return -1;
    }

    QRCodeDetector qrDecoder = QRCodeDetector();
    std::string data;
    data = qrDecoder.detectAndDecode(src);
    if (data.length() > 0)
    {
        cout << "data: " << data; // data should be STOP
    }
    return 0;
}
Does somebody know why the QR code can be detected but not decoded?
Here is the image I used:
Edit:
I've searched a little more about QR code detection with OpenCV and found these two methods in the documentation: https://docs.opencv.org/3.4.9/de/dc3/classcv_1_1QRCodeDetector.html
setEpsX(double epsX)
setEpsY(double epsY)
Unfortunately the documentation is very sparse, so does somebody know what these two parameters do and whether they can fix my problem?
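From what I can tell, these are tolerance (epsilon) parameters used during the horizontal and vertical scans for the QR finder patterns. A minimal sketch of how they would be called (the 0.1 values are just an illustration, not values I have verified to help with this image):

    QRCodeDetector qrDecoder;
    qrDecoder.setEpsX(0.1); // horizontal scan tolerance (illustrative value)
    qrDecoder.setEpsY(0.1); // vertical scan tolerance (illustrative value)
    std::string data = qrDecoder.detectAndDecode(src);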

I think I found the problem:
The image I use has a size of 2400x1600, which is too big to decode. Therefore I resize the image before decoding it, so my code now looks like this:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main(int argc, char* argv[])
{
    cv::Mat src = imread("scaled.png");
    if (src.empty())
    {
        cout << "can not open " << "Picture" << endl;
        return -1;
    }

    std::string data;
    cv::resize(src, src, cv::Size(1600, 1200));

    QRCodeDetector qrDecoder = QRCodeDetector();
    data = qrDecoder.detectAndDecode(src);
    if (data.length() > 0)
    {
        cout << "data: " << data; // data should be STOP
    }
    return 0;
}
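As a side note, splitting detection and decoding can show which step actually fails; a rough, untested sketch using QRCodeDetector's separate detect() and decode() methods:

    QRCodeDetector qrDecoder;
    std::vector<cv::Point2f> points;            // corners of the detected QR code
    bool found = qrDecoder.detect(src, points); // detection step only
    cout << "detected: " << found << endl;
    if (found)
    {
        std::string data = qrDecoder.decode(src, points); // decoding step only
        cout << "decoded: " << data << endl;
    }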

Related

How to use caffemodel with OpenCV on iOS?

I am trying to use a .caffemodel alongside OpenCV on iOS devices. I found this GitHub repository, but it can only be built with Xcode 6. I am working with Xcode 7, but I also downloaded Xcode 6 and still had no success building it.
How can I use a caffemodel with OpenCV on iOS 9?
P.S.: The alternative would be this, but it's written in Swift & Metal, and I need to be able to use it with OpenCV.
You can use the OpenCV DNN contrib module.
First you need to build OpenCV with the contrib modules; you can find the steps here.
Then you can import and use the .caffemodel by following this tutorial.
Here is an updated version of the tutorial code, since it does not work as is:
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
using namespace cv;
using namespace cv::dnn;

#include <fstream>
#include <iostream>
#include <cstdlib>
using namespace std;

/* Find best class for the blob (i.e. class with maximal probability) */
void getMaxClass(dnn::Blob &probBlob, int *classId, double *classProb)
{
    Mat probMat = probBlob.matRefConst().reshape(1, 1); //reshape the blob to 1x1000 matrix
    Point classNumber;
    minMaxLoc(probMat, NULL, classProb, NULL, &classNumber);
    *classId = classNumber.x;
}

std::vector<String> readClassNames(const char *filename = "synset_words.txt")
{
    std::vector<String> classNames;
    std::ifstream fp(filename);
    if (!fp.is_open())
    {
        std::cerr << "File with classes labels not found: " << filename << std::endl;
        exit(-1);
    }
    std::string name;
    while (!fp.eof())
    {
        std::getline(fp, name);
        if (name.length())
            classNames.push_back(name.substr(name.find(' ') + 1));
    }
    fp.close();
    return classNames;
}

int main(int argc, char **argv)
{
    cv::dnn::initModule();
    String modelTxt = "bvlc_googlenet.prototxt";
    String modelBin = "bvlc_googlenet.caffemodel";
    String imageFile = (argc > 1) ? argv[1] : "space_shuttle.jpg";
    Ptr<dnn::Importer> importer;
    try //Try to import Caffe GoogleNet model
    {
        importer = dnn::createCaffeImporter(modelTxt, modelBin);
    }
    catch (const cv::Exception &err) //Importer can throw errors, we will catch them
    {
        std::cerr << err.msg << std::endl;
    }
    if (!importer)
    {
        std::cerr << "Can't load network by using the following files: " << std::endl;
        std::cerr << "prototxt: " << modelTxt << std::endl;
        std::cerr << "caffemodel: " << modelBin << std::endl;
        std::cerr << "bvlc_googlenet.caffemodel can be downloaded here:" << std::endl;
        std::cerr << "http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel" << std::endl;
        exit(-1);
    }
    dnn::Net net;
    importer->populateNet(net);
    importer.release(); //We don't need importer anymore

    Mat img = imread(imageFile);
    if (img.empty())
    {
        std::cerr << "Can't read image from the file: " << imageFile << std::endl;
        exit(-1);
    }
    resize(img, img, Size(224, 224)); //GoogLeNet accepts only 224x224 RGB-images
    dnn::Blob inputBlob = dnn::Blob(img); //Convert Mat to dnn::Blob batch of images
    net.setBlob(".data", inputBlob); //set the network input
    net.forward(); //compute output
    dnn::Blob prob = net.getBlob("prob"); //gather output of "prob" layer

    int classId;
    double classProb;
    getMaxClass(prob, &classId, &classProb); //find the best class
    std::vector<String> classNames = readClassNames();
    std::cout << "Best class: #" << classId << " '" << classNames.at(classId) << "'" << std::endl;
    std::cout << "Probability: " << classProb * 100 << "%" << std::endl;
    return 0;
} //main
I will post another answer because with recent versions there are some differences.
First of all, dnn is now part of the standard OpenCV library, so you do not have to build it from the contrib modules.
The function to load the network is readNetFromCaffe.
For example, the following code loads the NN:
std::string modelName = "path/to/mymodel.caffemodel";
std::string protoName = "path/to/deploy.prototxt";
cv::dnn::Net net;
try
{
    net = cv::dnn::readNetFromCaffe(protoName, modelName);
}
catch (cv::Exception& e)
{
    std::cerr << "Exception: " << e.what() << std::endl;
    if (net.empty())
    {
        std::cerr << "Can't load network by using the following files: " << std::endl;
        std::cerr << "prototxt: " << protoName << std::endl;
        std::cerr << "caffemodel: " << modelName << std::endl;
        std::cerr << "bvlc_googlenet.caffemodel can be downloaded here:" << std::endl;
        std::cerr << "http://dl.caffe.berkeleyvision.org/bvlc_googlenet.caffemodel" << std::endl;
        exit(-1);
    }
}
Then you can run the NN:
cv::Mat res_mat;
// roi is the input image (a cv::Mat) prepared earlier
cv::Mat inputBlob = cv::dnn::blobFromImage(roi, 1.0f, cv::Size(227, 227),
                                           cv::Scalar(0, 0, 0), false);
net.setInput(inputBlob);
// During the forward pass the output of each network layer is computed,
// but in this example we only need the output of the "score" layer.
res_mat = net.forward("score");
std::cout << res_mat << std::endl;
res_mat = res_mat.reshape(1, 1); // reshape the blob to a 1x2 matrix
return res_mat.at<float>(0);
The function cv::dnn::blobFromImage resizes the image to the network input size specified by the third argument (in my case cv::Size(227, 227)). The cv::Scalar(0, 0, 0) argument is the mean value subtracted from the three BGR channels.
score is the name of the output layer in the NN I used. You can find this information in the prototxt file.
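If you are not sure which name to use, you can also list every layer the prototxt defines directly from the loaded net; a small sketch (assuming net was loaded as above):

    // Print all layer names known to the network.
    std::vector<cv::String> layerNames = net.getLayerNames();
    for (const auto& name : layerNames)
        std::cout << name << std::endl;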

PCL OpenNI2Grabber get Point Cloud without viewer

I have an Asus Xtion camera. On my PC I have Windows 7 with Visual Studio 2013 installed. I also have PCL and OpenCV compiled for VS2013, and the OpenNI driver is installed. Now I want to get a point cloud from the camera. With the sample below I can view the current frames and save one:
// Original code by Geoffrey Biggs, taken from the PCL tutorial in
// http://pointclouds.org/documentation/tutorials/pcl_visualizer.php
// Simple OpenNI viewer that also allows to write the current scene to a .pcd
// when pressing SPACE.
#define _CRT_SECURE_NO_WARNINGS
#include <pcl/io/openni2_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/console/parse.h>
#include <pcl/filters/passthrough.h>
#include <iostream>
using namespace std;
using namespace pcl;
PointCloud<PointXYZRGBA>::Ptr cloudptr(new PointCloud<PointXYZRGBA>); // A cloud that will store color info.
PointCloud<PointXYZ>::Ptr fallbackCloud(new PointCloud<PointXYZ>); // A fallback cloud with just depth data.
PointCloud<PointXYZRGBA>::Ptr filteredCloud(new PointCloud<PointXYZRGBA>);
boost::shared_ptr<visualization::CloudViewer> viewer; // Point cloud viewer object.
Grabber* openniGrabber; // OpenNI grabber that takes data from the device.
unsigned int filesSaved = 0; // For the numbering of the clouds saved to disk.
bool saveCloud(false), noColor(false); // Program control.
void
printUsage(const char* programName)
{
cout << "Usage: " << programName << " [options]"
<< endl
<< endl
<< "Options:\n"
<< endl
<< "\t<none> start capturing from an OpenNI device.\n"
<< "\t-v FILE visualize the given .pcd file.\n"
<< "\t-h shows this help.\n";
}
// This function is called every time the device has new data.
void
grabberCallback(const PointCloud<PointXYZRGBA>::ConstPtr& cloud)
{
if (!viewer->wasStopped())
viewer->showCloud(cloud);
if (saveCloud)
{
stringstream stream;
stream << "inputCloud" << filesSaved << ".pcd";
string filename = stream.str();
// Filter object.
PassThrough<pcl::PointXYZRGBA> filter;
filter.setInputCloud(cloud);
// Filter out all points with Z values not in the [0, 1.5] range.
filter.setFilterFieldName("z");
filter.setFilterLimits(0.0, 1.5);
filter.filter(*filteredCloud);
if (io::savePCDFile(filename, *filteredCloud, true) == 0)
{
filesSaved++;
cout << "Saved " << filename << "." << endl;
}
else PCL_ERROR("Problem saving %s.\n", filename.c_str());
saveCloud = false;
}
}
// For detecting when SPACE is pressed.
void
keyboardEventOccurred(const visualization::KeyboardEvent& event,
void* nothing)
{
if (event.getKeySym() == "space" && event.keyDown())
saveCloud = true;
}
// Creates, initializes and returns a new viewer.
boost::shared_ptr<visualization::CloudViewer>
createViewer()
{
boost::shared_ptr<visualization::CloudViewer> v
(new visualization::CloudViewer("OpenNI viewer"));
v->registerKeyboardCallback(keyboardEventOccurred);
return (v);
}
int
main(int argc, char** argv)
{
if (console::find_argument(argc, argv, "-h") >= 0)
{
printUsage(argv[0]);
return -1;
}
bool justVisualize(false);
string filename;
if (console::find_argument(argc, argv, "-v") >= 0)
{
if (argc != 3)
{
printUsage(argv[0]);
return -1;
}
filename = argv[2];
justVisualize = true;
}
else if (argc != 1)
{
printUsage(argv[0]);
return -1;
}
// First mode, open and show a cloud from disk.
if (justVisualize)
{
// Try with color information...
try
{
io::loadPCDFile<PointXYZRGBA>(filename.c_str(), *cloudptr);
}
catch (PCLException e1)
{
try
{
// ...and if it fails, fall back to just depth.
io::loadPCDFile<PointXYZ>(filename.c_str(), *fallbackCloud);
}
catch (PCLException e2)
{
return -1;
}
noColor = true;
}
cout << "Loaded " << filename << "." << endl;
if (noColor)
cout << "This cloud has no RGBA color information present." << endl;
else cout << "This cloud has RGBA color information present." << endl;
}
// Second mode, start fetching and displaying frames from the device.
else
{
openniGrabber = new pcl::io::OpenNI2Grabber();
if (openniGrabber == 0)
return -1;
boost::function<void(const PointCloud<PointXYZRGBA>::ConstPtr&)> f =
boost::bind(&grabberCallback, _1);
openniGrabber->registerCallback(f);
}
viewer = createViewer();
if (justVisualize)
{
if (noColor)
viewer->showCloud(fallbackCloud);
else viewer->showCloud(cloudptr);
}
else openniGrabber->start();
// Main loop.
while (!viewer->wasStopped())
boost::this_thread::sleep(boost::posix_time::seconds(1));
if (!justVisualize)
openniGrabber->stop();
}
But with this code I need the viewer. How can I get the point cloud without the viewer? It should work without the Kinect SDK, only with PCL and OpenNI.
I solved the problem like this:
#include <pcl/io/openni2_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/console/parse.h>
#include <iostream>
using namespace std;
using namespace pcl;

int main(int argc, char** argv)
{
    PointCloud<PointXYZRGBA>::Ptr sourceCloud(new PointCloud<PointXYZRGBA>);
    boost::function<void(const PointCloud<PointXYZRGBA>::ConstPtr&)> function =
        [&sourceCloud](const PointCloud<PointXYZRGBA>::ConstPtr& cloud)
    {
        copyPointCloud(*cloud, *sourceCloud);
    };

    // Create the OpenNI2 grabber
    Grabber* grabber = new io::OpenNI2Grabber();
    // Register the callback function
    grabber->registerCallback(function);
    // Start retrieving data
    grabber->start();
    boost::this_thread::sleep(boost::posix_time::seconds(1));
    // Stop retrieving data
    grabber->stop();

    cout << "The cloud size: " << sourceCloud->size() << " points ..." << endl;
}
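If you also want to persist the grabbed cloud (as the viewer-based sample did when pressing SPACE), a minimal sketch might look like this (the filename is just an example):

    // Write the grabbed cloud to disk once the grabber has stopped.
    // Assumes the callback ran at least once and filled sourceCloud.
    if (!sourceCloud->empty())
        io::savePCDFileBinary("grabbed_cloud.pcd", *sourceCloud);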

Reading images in directory using VideoCapture not working

I am trying to read multiple image files named like 001.jpg, 002.jpg, 003.jpg and onwards using OpenCV on Linux, and to display them in a window after some delay. Here is my code:
int main(int argc, char ** argv)
{
    cv::VideoCapture cam("/home/bikz05/Desktop/dataset/%03d.jpg");
    if (!cam.isOpened())
        cout << "Capture is not opened";

    cv::Mat image;
    const string Window = "Image Number";
    namedWindow(Window, CV_WINDOW_NORMAL);

    while (true)
    {
        cam >> image;
        if (!image.data)
        {
            std::cout << "Preview Ends" << std::endl;
            break;
        }
        cv::imshow(Window, image);
        waitKey(1000);
    }
    return 0;
}
In the first line of the main function, I even tried using
cv::VideoCapture cam("/home/bikz05/Desktop/dataset/001.jpg");
I even tried placing the images in the same folder as the binary, but even that didn't help.
In both cases it displays only the first image and then the program exits; it never gets to the remaining images.
P.S. - The answers provided on Stack Exchange and opencv.org did not produce the intended result.
The code as you have written it has several problems, but if you are going to read images from disk, use cv::imread() instead of cv::VideoCapture:
std::string filename = "/home/bikz05/Desktop/dataset/001.jpg";
const std::string Window = "Image Number";
cv::namedWindow(Window, CV_WINDOW_NORMAL);

while (true)
{
    cv::Mat image = cv::imread(filename);
    if (!image.data)
    {
        std::cout << "!!! File not found: " << filename << std::endl;
        break;
    }

    cv::imshow(Window, image);
    cv::waitKey(1000);

    // And before the loop ends, update filename with the next image:
    // filename = ???;
}
I haven't tested this, but I'm sure you get the idea.
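One possible way to build the next filename (the directory and the three-digit zero padding are assumptions based on the question):

    #include <iomanip>
    #include <sstream>

    // Returns e.g. ".../001.jpg" for index 1, ".../002.jpg" for index 2, ...
    std::string makeFilename(int index)
    {
        std::ostringstream oss;
        oss << "/home/bikz05/Desktop/dataset/"
            << std::setw(3) << std::setfill('0') << index << ".jpg";
        return oss.str();
    }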
Use cv::glob, as in glob(folderpath, vectorOfimages), and then access each of the images as vectorOfimages[i]. Here vectorOfimages is a vector<String> and folderpath is a String.
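A small self-contained sketch of that approach (the folder path is just an example):

    #include <opencv2/opencv.hpp>

    int main()
    {
        std::vector<cv::String> images;
        cv::glob("/home/bikz05/Desktop/dataset/*.jpg", images); // collect matching files
        for (size_t i = 0; i < images.size(); ++i)
        {
            cv::Mat image = cv::imread(images[i]);
            if (image.empty())
                continue;
            cv::imshow("Image Number", image);
            cv::waitKey(1000);
        }
        return 0;
    }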
I had the same problem, and VideoCapture did not solve it.

The imshow() call in OpenCV is not creating any output

I am trying to read data from an industrial camera using the V4L Linux driver and C++. I would like to display the result using OpenCV. I read the buffer and create a Mat object, which actually contains values in the range 0...255.
The problem seems to be the imshow() call. When I comment this line out, a window without an image is displayed. Once it is uncommented, no window is displayed at all, and no terminal output after this line is shown. I am not able to find a solution on my own; all the examples I found look the same as my code to me.
Here is the code:
#include <fcntl.h>
#include "opencv/cv.h"
#include "opencv/highgui.h"
#include <libv4l2.h>
#include <libv4l1.h>
#include <linux/videodev2.h>
#include <sys/ioctl.h>

#define BUFFERSIZE 357120 // 744 * 480

using namespace cv;
using namespace std;

int main(int argc, char **argv) {
    int cameraHandle, i;
    unsigned char pictureBuffer[BUFFERSIZE];
    char cameraDevice[] = "/dev/video0";
    struct v4l2_control V4L2_control;

    /* open camera device */
    if ((cameraHandle = v4l1_open(cameraDevice, O_RDONLY)) == -1) {
        printf("Unable to open the camera");
        return -1;
    }

    // disable auto exposure
    V4L2_control.id = V4L2_CID_EXPOSURE_AUTO;
    V4L2_control.value = V4L2_EXPOSURE_SHUTTER_PRIORITY;
    ioctl(cameraHandle, VIDIOC_S_CTRL, &V4L2_control);

    // set exposure time
    V4L2_control.id = V4L2_CID_EXPOSURE_ABSOLUTE;
    V4L2_control.value = 2;
    ioctl(cameraHandle, VIDIOC_S_CTRL, &V4L2_control);

    // get 5 pictures to warm up the camera
    for (i = 0; i <= 5; i++) {
        v4l1_read(cameraHandle, pictureBuffer, BUFFERSIZE);
    }

    // show pictures
    Mat mat = Mat(744, 480, CV_8UC3, (void*)pictureBuffer);
    cout << "M = " << endl << " " << mat << endl << endl; // display the image data
    namedWindow("imagetest", CV_WINDOW_AUTOSIZE);
    imshow("imagetest", mat);
    waitKey(30);
    cout << "test output" << endl;

    // cleanup
    v4l1_close(cameraHandle);
    destroyWindow("imagetest");
    return 0;
}
EDIT:
Well, after running the code in a terminal instead of Eclipse, I saw a segmentation fault. Even commenting out everything after the
cout << "M = " << endl << " " << mat << endl << endl;
line gives me this error.
Solved. The problem was the wrong pixel format: CV_8UC1 (or CV_8U) instead of CV_8UC3 produced an output. The difference between those formats is described here: In OpenCV, what's the difference between CV_8U and CV_8UC1?
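For reference, a sketch of the corrected construction (assuming the camera really delivers an 8-bit mono image of width 744 and height 480; note that the Mat constructor takes rows first, then columns):

    // 744 * 480 bytes match exactly one CV_8UC1 plane; no data is copied.
    Mat mat = Mat(480, 744, CV_8UC1, (void*)pictureBuffer);
    imshow("imagetest", mat);
    waitKey(30);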

Video display window in OpenCV not sizing up to video

I have written the code below to display a video in OpenCV. It compiles fine, but when I run it, the window that is supposed to show the video opens but is too small to actually see whether the video is playing. Everything else seems to be working fine: the width, height and number of frames are printed on the command line as coded. Does anyone know what the problem is? Check it out.
void info()
{
    cout << "This program will accept input video with fixed lengths and produce video textures" << endl;
}

int main(int argc, char *argv[])
{
    info();
    if (argc != 2)
    {
        cout << "Please enter more parameters" << endl;
        return -1;
    }

    const string source = argv[1];
    VideoCapture input_vid(source);
    if (!input_vid.isOpened())
    {
        cout << "Error: Could not find input video file" << source << endl;
        return -1;
    }

    Size S = Size((int) input_vid.get(CV_CAP_PROP_FRAME_WIDTH),   // Acquire size of input video
                  (int) input_vid.get(CV_CAP_PROP_FRAME_HEIGHT));
    cout << "Width: = " << S.width << " Height: = " << S.height << " Number of frames: " << input_vid.get(CV_CAP_PROP_FRAME_COUNT) << endl;

    const char* PLAY = "Video player";
    namedWindow(PLAY, CV_WINDOW_AUTOSIZE);
    //imshow(PLAY,100);
    char c;
    c = (char)cvWaitKey(27);
    //if ( c == 27)break;
    return 0;
}
Assuming the video is from a webcam:
VideoCapture capture(0);
capture.set(CV_CAP_PROP_FRAME_WIDTH, 640);
capture.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
This should fix your problem.
another simple tweak could be using CV_WINDOW_NORMAL instead of CV_WINDOW_AUTOSIZE
namedWindow(PLAY, CV_WINDOW_NORMAL);
which lets you resize the window manually