I am trying to write a program that captures images from two different cameras in two different threads. I want to do this because when both captures run in the same thread I have to wait twice as long for cvQueryFrame, so I cannot grab images at 30 fps (I get 15 fps from each camera).
I have taken a look at this SO post, but it only works for one camera: Using cvQueryFrame and boost::thread together
My current program gives varying results: sometimes it leaks memory, usually I just don't see anything happening, and sometimes it works for a few seconds before the image freezes again. The strange thing is that earlier, when I didn't call cvShowImage but had my imageProcessing function do something useful instead, I could see that I was getting real-time results from both cameras. I assume this means it is possible to make this work, but that I made a stupid mistake somewhere. My OS is Linux and I am using OpenCV 2.4.
My code:
#include <iostream>
#include <cstdio>
#include <cv.h>
#include <ml.h>
#include <cvaux.h>
#include <highgui.h>
#include <vector>
#include <stdio.h>
#include "producer_consumer_queue.hpp"
//Camera settings
int cameraWidth = 1280;
int cameraHeight = 720;
int waitKeyValue = 5;
bool threads_should_exit = false;
CvCapture * capture;
CvCapture * capture2;
using namespace std;
using namespace cv;
void grabFrame(concurrent_queue<IplImage*>* frame_queue, int camNumber) {
    try {
        //Load first frames
        cout << "grabFrame: " << camNumber << " init with " << cameraWidth << " x " << cameraHeight << endl;
        IplImage* frame;
        if (camNumber == 0) frame = cvQueryFrame(capture);
        if (camNumber == 1) frame = cvQueryFrame(capture2);
        while (frame && !threads_should_exit) {
            if (camNumber == 0) frame = cvQueryFrame(capture);
            if (camNumber == 1) frame = cvQueryFrame(capture2);
            IplImage* frame_copy = cvCloneImage(frame);
            if (camNumber == 0) cvShowImage("NE", frame);
            cout << "grabFrame: " << camNumber << " pushing back to queue" << endl;
            frame_queue->push(frame_copy);
            int k = cvWaitKey(waitKeyValue);
            if (k == 1048603 || k == 27 || k == '\r') {
                cout << "grabFrame: Process killed" << endl;
                //release memory
                threads_should_exit = true;
            }
        }
    } catch (const concurrent_queue<IplImage*>::Canceled& e) {
        cout << "grabFrame: Show thread is canceled" << endl;
        return;
    }
}
void processFrames(concurrent_queue<IplImage*>* frame_queue0, concurrent_queue<IplImage*>* frame_queue1) {
    try {
        do {
            cout << "processFrames: Processing two frames" << endl;
            IplImage* frm = NULL;
            frame_queue0->wait_and_pop(frm);
            IplImage* frm2 = NULL;
            frame_queue1->wait_and_pop(frm2);
            cvReleaseImage(&frm);
            cvReleaseImage(&frm2);
        } while (!threads_should_exit);
    } catch (const concurrent_queue<IplImage*>::Canceled& e) {
        cout << "processFrames: Processing thread is canceled" << endl;
        return;
    }
}
int main() {
    capture = cvCreateCameraCapture(0);
    capture2 = cvCreateCameraCapture(1);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, cameraWidth);
    cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, cameraHeight);
    cvSetCaptureProperty(capture2, CV_CAP_PROP_FRAME_WIDTH, cameraWidth);
    cvSetCaptureProperty(capture2, CV_CAP_PROP_FRAME_HEIGHT, cameraHeight);
    boost::thread_group frame_workers;
    boost::thread_group frame_workers2;
    concurrent_queue<IplImage*> frame_queue(&frame_workers);
    concurrent_queue<IplImage*> frame_queue2(&frame_workers2);
    boost::thread* query_thread = new boost::thread(processFrames, &frame_queue, &frame_queue2);
    boost::thread* cam0_thread = new boost::thread(grabFrame, &frame_queue, 0);
    usleep(10000);
    boost::thread* cam1_thread = new boost::thread(grabFrame, &frame_queue2, 1);
    frame_workers.add_thread(query_thread);
    frame_workers.add_thread(cam0_thread);
    frame_workers2.add_thread(query_thread);
    frame_workers2.add_thread(cam1_thread);
    while (true) {
        if (threads_should_exit) {
            cout << "Main: threads should be killed" << endl;
            while (!frame_queue.empty()) {
                usleep(10000);
            }
            frame_workers.remove_thread(query_thread);
            frame_workers2.remove_thread(query_thread);
            frame_workers.remove_thread(cam0_thread);
            frame_workers2.remove_thread(cam1_thread);
            frame_workers.join_all();
            break;
        }
        usleep(10000);
    }
    return 0;
}
EDIT:
I added a simple function that detects a piece of paper, to check whether everything works when I don't call cvShowImage(). My program detects the paper fine as long as cvShowImage() is not called. If I do call it, the program shows the strange behavior again and freezes.
There should only be one thread manipulating the GUI (this is true for just about any GUI framework). You should organize your code so that cvShowImage is only invoked from the main "GUI thread".
It seems that the work that's being done in query_thread could just as easily be done inside the main thread.
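For illustration, here is a minimal sketch of that structure (my own names and a plain std::queue guarded by a mutex rather than the poster's producer_consumer_queue, and the C++ API instead of the deprecated C one): the worker threads only grab and enqueue frames, and every imshow/waitKey call stays in the main thread.

#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <queue>
#include <thread>
#include <utility>

std::mutex queue_mutex;
std::queue<std::pair<int, cv::Mat>> frame_queue; // (camera id, frame)
std::atomic<bool> should_exit{false};

void grabLoop(int camId) { // grab only; no GUI calls in here
    cv::VideoCapture cap(camId);
    cv::Mat frame;
    while (!should_exit && cap.read(frame)) {
        std::lock_guard<std::mutex> lock(queue_mutex);
        frame_queue.push({camId, frame.clone()}); // clone: cap reuses its buffer
    }
}

int main() {
    std::thread t0(grabLoop, 0), t1(grabLoop, 1);
    while (!should_exit) {
        std::pair<int, cv::Mat> item{-1, cv::Mat()};
        {
            std::lock_guard<std::mutex> lock(queue_mutex);
            if (!frame_queue.empty()) {
                item = frame_queue.front();
                frame_queue.pop();
            }
        }
        if (!item.second.empty())
            cv::imshow(item.first == 0 ? "cam0" : "cam1", item.second);
        if (cv::waitKey(5) == 27) should_exit = true; // GUI stays in main thread
    }
    t0.join();
    t1.join();
    return 0;
}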
I trained a simple object detection model on Google Cloud with the Vision API. After exporting it as a tflite model, I tried to run it on a Raspberry Pi 3B+ with the simple starter code below and TensorFlow Lite 2.6.0-rc2. The code runs the standard MobileNet model fine, but gives a bus error while allocating the tensors with my custom model. I then tried running the same code with my model on WSL Debian, and it worked. The Vision API says it supports ARM edge devices, so I don't understand why it's not working. Does the Raspberry Pi just not have enough memory? If so, why does it run the more complex MobileNet model?
Test code modified from https://github.com/Qengineering/TensorFlow_Lite_SSD_RPi_32-bits
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>
#include <opencv2/highgui.hpp>
#include <fstream>
#include <iostream>
#include <opencv2/core/ocl.hpp>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/string_util.h"
#include "tensorflow/lite/model.h"
#include <cmath>
#include <chrono>
#include <cstring>
using namespace cv;
using namespace std;
const size_t width = 192;
const size_t height = 192;
std::vector<std::string> Labels;
std::unique_ptr<tflite::Interpreter> interpreter;
static bool getFileContent(std::string fileName)
{
    // Open the file
    std::ifstream in(fileName.c_str());
    // Check if the stream is valid
    if(!in.is_open()) return false;
    std::string str;
    // Read the file line by line until it reaches the end
    while (std::getline(in, str))
    {
        // If the line is non-empty, save it in the vector
        if(str.size() > 0) Labels.push_back(str);
    }
    // Close the file
    in.close();
    return true;
}
void detect_from_video(Mat &src)
{
    Mat image;
    int cam_width = src.cols;
    int cam_height = src.rows;
    // copy image to input as input tensor
    cv::resize(src, image, Size(width, height));
    memcpy(interpreter->typed_input_tensor<uchar>(0), image.data, image.total() * image.elemSize());
    interpreter->SetAllowFp16PrecisionForFp32(true);
    interpreter->SetNumThreads(4); // quad core
    // cout << "tensors size: " << interpreter->tensors_size() << "\n";
    // cout << "nodes size: " << interpreter->nodes_size() << "\n";
    // cout << "inputs: " << interpreter->inputs().size() << "\n";
    // cout << "input(0) name: " << interpreter->GetInputName(0) << "\n";
    // cout << "outputs: " << interpreter->outputs().size() << "\n";
    interpreter->Invoke(); // run your model
    const float* detection_locations = interpreter->tensor(interpreter->outputs()[0])->data.f;
    const float* detection_classes = interpreter->tensor(interpreter->outputs()[1])->data.f;
    const float* detection_scores = interpreter->tensor(interpreter->outputs()[2])->data.f;
    const int num_detections = *interpreter->tensor(interpreter->outputs()[3])->data.f;
    // there are ALWAYS 10 detections no matter how many objects are detectable
    // cout << "number of detections: " << num_detections << "\n";
    const float confidence_threshold = 0.5;
    for(int i = 0; i < num_detections; i++){
        if(detection_scores[i] > confidence_threshold){
            int det_index = (int)detection_classes[i] + 1;
            float y1 = detection_locations[4*i    ] * cam_height;
            float x1 = detection_locations[4*i + 1] * cam_width;
            float y2 = detection_locations[4*i + 2] * cam_height;
            float x2 = detection_locations[4*i + 3] * cam_width;
            Rect rec((int)x1, (int)y1, (int)(x2 - x1), (int)(y2 - y1));
            rectangle(src, rec, Scalar(0, 0, 255), 1, 8, 0);
            putText(src, format("%s", Labels[det_index].c_str()), Point(x1, y1 - 5), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 255), 1, 8, 0);
        }
    }
}
int main(int argc, char** argv)
{
    float f;
    float FPS[16];
    int i;
    int Fcnt = 0;
    Mat frame;
    chrono::steady_clock::time_point Tbegin, Tend;
    for(i = 0; i < 16; i++) FPS[i] = 0.0;
    // Load model
    std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("detect.tflite");
    // Build the interpreter
    tflite::ops::builtin::BuiltinOpResolver resolver;
    tflite::InterpreterBuilder(*model.get(), resolver)(&interpreter);
    interpreter->AllocateTensors();
    // Get the names
    bool result = getFileContent("labels.txt");
    if(!result)
    {
        cout << "loading labels failed";
        exit(-1);
    }
    // VideoCapture cap("James.mp4");
    VideoCapture cap(0);
    if (!cap.isOpened()) {
        cerr << "ERROR: Unable to open the camera" << endl;
        return 0;
    }
    cout << "Start grabbing, press ESC on Live window to terminate" << endl;
    while(1){
        // frame = imread("Traffic.jpg"); // need to refresh frame before dnn class detection
        cap >> frame;
        if (frame.empty()) {
            cerr << "End of movie" << endl;
            break;
        }
        detect_from_video(frame);
        Tend = chrono::steady_clock::now();
        // calculate frame rate
        f = chrono::duration_cast<chrono::milliseconds>(Tend - Tbegin).count();
        Tbegin = chrono::steady_clock::now();
        FPS[((Fcnt++) & 0x0F)] = 1000.0 / f;
        for(f = 0.0, i = 0; i < 16; i++){ f += FPS[i]; }
        putText(frame, format("FPS %0.2f", f / 16), Point(10, 20), FONT_HERSHEY_SIMPLEX, 0.6, Scalar(0, 0, 255));
        // show output
        imshow("RPi 4 - 2.0 GHz - 2 Mb RAM", frame);
        char esc = waitKey(5);
        if(esc == 27) break;
    }
    cout << "Closing the camera" << endl;
    // When everything is done, release the video capture and write object
    cap.release();
    destroyAllWindows();
    cout << "Bye!" << endl;
    return 0;
}
Stack trace of the bus error, which occurs during tensor allocation:
Program terminated with signal SIGBUS, Bus error.
#0 0x00134dd0 in tflite::ops::builtin::broadcastto::ResizeOutputTensor(TfLiteContext*, tflite::ops::builtin::broadcastto::BroadcastToContext*) ()
(gdb) bt
#0 0x00134dd0 in tflite::ops::builtin::broadcastto::ResizeOutputTensor(TfLiteContext*, tflite::ops::builtin::broadcastto::BroadcastToContext*) ()
#1 0x00135194 in tflite::ops::builtin::broadcastto::Prepare(TfLiteContext*, TfLiteNode*) ()
#2 0x000d36c4 in tflite::Subgraph::PrepareOpsStartingAt(int, std::vector<int, std::allocator<int> > const&, int*) ()
#3 0x000d386c in tflite::Subgraph::PrepareOpsAndTensors() ()
#4 0x000d5c64 in tflite::Subgraph::AllocateTensors() ()
#5 0x0001b2cc in tflite::Interpreter::AllocateTensors() ()
#6 0x000161d8 in main(int, char**) (argc=1, argv=0x7ebd0644)
at MobileNetV1.cpp:106
TFLite object detection model, trained for a single label type with 50 images (I want to get the model working before I add more):
https://storage.googleapis.com/ml-data-storage-bucket/models/model-export/iod/tflite-ping_pong_ball_1-2021-08-02T19%3A46%3A09.324008Z/model.tflite
I am trying to use OpenCV to stream video from two cameras continuously via separate threads. The following code produces Segmentation fault (core dumped).
What is the reason for this, and how do I fix it?
main.cpp
#include <iostream>
#include <pthread.h>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
using namespace std;
struct thread_data {
    string path;
    int thread_id;
};

void *capture(void *threadarg)
{
    struct thread_data *data;
    data = (struct thread_data *) threadarg;
    cv::VideoCapture cap(data->path);
    if (!cap.isOpened())
    {
        std::cout << "Not good, open camera failed" << std::endl;
    }
    std::cout << "Opened IP camera successfully!" << std::endl;
    cv::Mat frame;
    string ext = ".jpg";
    string result;
    while (true) {
        cap >> frame;
        cv::imshow("Frame", frame);
        cv::waitKey(1);
    }
    pthread_exit(NULL);
}

int main(void) {
    pthread_t threads[2];
    struct thread_data td[2];
    int rc = 0;
    for (int i = 0; i < 2; i++) {
        cout << "main() : creating thread, " << i << endl;
        td[i].thread_id = i;
        td[0].path = "rtsp://admin:opencv123#192.168.1.23:554/Streaming/Channels/101/";
        td[1].path = "rtsp://admin:opencv123#192.168.1.24:554/Streaming/Channels/101/";
        rc = pthread_create(&threads[i], NULL, capture, (void *)&td[i]);
        if (rc) {
            cout << "Error:unable to create thread," << rc << endl;
            exit(-1);
        }
    }
    pthread_exit(NULL);
    return 0;
}
log:
main() : creating thread, 0
main() : creating thread, 1
Segmentation fault (core dumped)
When I tried running it multiple times, I was able to open only one camera, and even that one didn't stream continuously; it started and stopped within a few seconds.
Sometimes I get an error that says:
OpenCV Error: Insufficient memory (Failed to allocate 140703464366800 bytes) in OutOfMemoryError
I have gone through various Q&As on Stack Overflow, but none helped.
The problem here is that the code has race conditions. I was able to replicate the issue on my system and identified the following problems:
OpenCV window titles are not unique.
Spawned threads are not joined.
Race condition while opening the video stream.
Let's look at these issues in detail.
1.
OpenCV windows are uniquely identified by their title. In the current code, the title is the hardcoded string "Frame", so both threads create/update/destroy the same window in unknown order. That is a race condition, which can be fixed by adding a string field to struct thread_data that serves as a unique window identifier.
2.
In the main thread, the child threads are created asynchronously, so the for loop exits immediately after creating them and the program exits prematurely, without waiting for the spawned threads to complete execution. This can be solved by waiting for the threads before the program exits. This process is called joining and is achieved by calling pthread_join for each spawned thread.
3.
This issue was a bit trickier to track down. For some reason, the backend library OpenCV uses for video stream capture is not behaving in a thread-safe manner. The video capture opening process appears prone to a race condition and requires a synchronization lock. The lock can easily be implemented by calling pthread_mutex_lock and pthread_mutex_unlock just before and after opening the VideoCapture object.
Here is the modified code demonstrating the solution for all of the issues mentioned above:
#include <iostream>
#include <pthread.h>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
using namespace std;
//Mutex for thread synchronization
static pthread_mutex_t foo_mutex = PTHREAD_MUTEX_INITIALIZER;
struct thread_data
{
    string path;
    int thread_id;
    string window_title; // Unique window title for each thread
};

void *capture(void *threadarg)
{
    struct thread_data *data;
    data = (struct thread_data *) threadarg;
    cv::VideoCapture cap;
    // Safely open video stream
    pthread_mutex_lock(&foo_mutex);
    cap.open(data->path);
    pthread_mutex_unlock(&foo_mutex);
    if (!cap.isOpened())
    {
        std::cout << "Not good, open camera failed" << std::endl;
    }
    std::cout << "Opened IP camera successfully!" << std::endl;
    cv::Mat frame;
    string ext = ".jpg";
    string result;
    // Create window with unique title
    cv::namedWindow(data->window_title);
    while (true)
    {
        cap >> frame;
        cv::imshow(data->window_title, frame);
        cv::waitKey(10);
    }
    // Release VideoCapture object
    cap.release();
    // Destroy previously created window
    cv::destroyWindow(data->window_title);
    // Exit thread
    pthread_exit(NULL);
}

int main(void)
{
    const int thread_count = 2;
    pthread_t threads[thread_count];
    struct thread_data td[thread_count];
    // Initialize thread data beforehand
    td[0].path = "rtsp://admin:opencv123#192.168.1.23:554/Streaming/Channels/101/";
    td[0].window_title = "First Window";
    td[1].path = "rtsp://admin:opencv123#192.168.1.24:554/Streaming/Channels/101/";
    td[1].window_title = "Second Window";
    int rc = 0;
    for (int i = 0; i < thread_count; i++)
    {
        cout << "main() : creating thread, " << i << endl;
        td[i].thread_id = i;
        rc = pthread_create(&(threads[i]), NULL, capture, (void *)&(td[i]));
        if (rc)
        {
            cout << "Error:unable to create thread," << rc << endl;
            exit(-1);
        }
    }
    // Wait for the previously spawned threads to complete execution
    for (int i = 0; i < thread_count; i++)
        pthread_join(threads[i], NULL);
    pthread_exit(NULL);
    return 0;
}
#include <opencv2/opencv.hpp>
#include <iostream>
#include <string>
#include <thread>

void writeFrametoDisk(const cv::Mat *frame, std::string path, int frameNum, std::string windowName)
{
    cv::imwrite(path.append(std::to_string(frameNum)).append(".png"), *frame);
    return;
}

void openCameraStream(int deviceId, std::string dirName)
{
    cv::VideoCapture cap;
    cap.open(deviceId);
    if(!cap.isOpened()){ std::cerr << "Unable to open the camera " << std::endl; }
    std::cout << cap.get(cv::CAP_PROP_FRAME_WIDTH) << " " << cap.get(cv::CAP_PROP_FRAME_HEIGHT) << std::endl;
    std::string windowName = deviceId == 0 ? "Cam 0" : "Cam 1";
    std::string outputDir = dirName;
    bool frameStFg = false;
    while(!frameStFg)
    {
        cv::Mat frame;
        cap.read(frame);
        if(frame.empty()){ std::cerr << "frame buffer empty " << std::endl; }
        else
        {
            frameStFg = true;
        }
    }
    cv::TickMeter timer;
    timer.start();
    int frameCount = 0;
    while(1)
    {
        cv::Mat frame;
        cap.read(frame);
        if(frame.empty()){ std::cerr << "frame buffer empty " << std::endl; }
        frameCount++;
        std::thread th(writeFrametoDisk, &frame, outputDir, frameCount, windowName);
        th.join();
        // a simple way to exit the loop
        if(frameCount > 500)
        {
            break;
        }
    }
    timer.stop();
    std::cout << "Device id " << deviceId << " Capture ran for " << timer.getTimeSec() << " seconds" << std::endl;
    std::cout << "Device id " << deviceId << " Number of frames captured should be " << timer.getTimeSec() * 30 << std::endl;
    std::cout << "Device id " << deviceId << " Number of frames captured " << frameCount << std::endl;
    cap.release();
}

int main(int argc, char * argv[])
{
    std::string outputDir1 = "";
    std::string outputDir2 = "";
    std::thread camera1Thread(openCameraStream, 0, outputDir1);
    std::thread camera2Thread(openCameraStream, 1, outputDir2);
    camera1Thread.join();
    camera2Thread.join();
}
As the comments mention, streaming the cameras without any imshow() of the image streams works well without any problems. My setup runs two threads, one per USB camera, and each of those threads launches a new thread to save the image it read from its camera. I didn't observe any frame drops or errors, and I was able to capture and write at approximately 30 fps.
After a little debugging of the cause, my recommendation is to call imshow() only from the main thread, i.e. the main() function. Hope this helps anyone.
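A minimal sketch of that recommendation (hypothetical names, not the exact code from above): each capture thread only overwrites a shared "latest frame" slot, and main() alone makes the HighGUI calls.

#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

struct LatestFrame { // one slot per camera, guarded by its own mutex
    std::mutex m;
    cv::Mat frame;
};

std::atomic<bool> stop{false};

void captureLoop(int deviceId, LatestFrame* slot) {
    cv::VideoCapture cap(deviceId);
    cv::Mat local;
    while (!stop && cap.read(local)) {
        std::lock_guard<std::mutex> lock(slot->m);
        local.copyTo(slot->frame); // keep only the newest frame
    }
}

int main() {
    LatestFrame cam0, cam1;
    std::thread t0(captureLoop, 0, &cam0), t1(captureLoop, 1, &cam1);
    cv::Mat show0, show1;
    while (!stop) {
        { std::lock_guard<std::mutex> l(cam0.m); cam0.frame.copyTo(show0); }
        { std::lock_guard<std::mutex> l(cam1.m); cam1.frame.copyTo(show1); }
        if (!show0.empty()) cv::imshow("Cam 0", show0);
        if (!show1.empty()) cv::imshow("Cam 1", show1);
        if (cv::waitKey(5) == 27) stop = true; // all imshow()/waitKey() in main()
    }
    t0.join();
    t1.join();
    return 0;
}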
I'm trying to get the color (RGB value) under my cursor. When my code is compiled and I run my program, a box appears that says "Unhandled exception at 0x00007FFBF64B3C58 in thing_1.exe: Microsoft C++ exception: cv::Exception at memory location 0x0000001DA30FEFB0.". When I press continue, the box just comes back. I'm new to coding, so it might be a newbie mistake, and sorry for my messy code...
#include "opencv2/highgui/highgui.hpp"
#include <iostream>
#include <Windows.h>
using namespace cv;
using namespace std;
boolean k = true;
POINT cursorPosition;
HWND hWnd;
int main(int argc, char* argv[]){
    VideoCapture cap(0);
    while (1){
        Mat frame;
        bool bSuccess = cap.read(frame);
        imshow("Video", frame);
        if (waitKey(10) == 27){
            break;
        }
        if (k == true){
            hWnd = GetActiveWindow();
            k = false;
        }
        GetCursorPos(&cursorPosition);
        ScreenToClient(hWnd, &cursorPosition);
        if (cursorPosition.x < 0) { cursorPosition.x = 0; }
        if (cursorPosition.x > 640) { cursorPosition.x = 640; }
        if (cursorPosition.y < 0) { cursorPosition.y = 0; }
        if (cursorPosition.y > 479) { cursorPosition.y = 479; }
        Vec4b intensity = frame.at<Vec4b>(cursorPosition.x, cursorPosition.y);
        uchar blue = intensity.val[0];
        uchar green = intensity.val[1];
        uchar red = intensity.val[2];
        std::cout << "red = " << int(red) << " | green = " << int(green) << " | blue = " << int(blue) << " | X: " << cursorPosition.x << " |Y: " << cursorPosition.y << std::endl;
    }
    return 0;
}
The key to finding the error was getting the complete error message:
OpenCV Error: Assertion failed (((((sizeof(size_t)<<28)|0x8442211)>>((DataType<_Tp>::depth) & ((1 << 3) - 1))*4) & 15) == elemSize1()) in cv::Mat::at, file c:\users\tika\documents\visual studio projects\3rdparty\opencv-3.2\opencv\build\include\opencv2\core\mat.inl.hpp, line 957
The most important parts are the file name and line number. At line 957 of mat.inl.hpp we find this interesting bit of code:
CV_DbgAssert(CV_ELEM_SIZE1(DataType<_Tp>::depth) == elemSize1());
This is where the error comes from. I'll let you investigate, but it really means that Mat expects the template parameter in the call to frame.at<>() to be a BYTE, as confirmed by a quick printout on stdout: std::cout << frame.elemSize1().
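To make that concrete, here is a small example of my own (assuming the usual CV_8UC3 BGR camera frame; it is not code from the original post) showing accesses that satisfy the assertion, including the (row, col) argument order of at<>:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat frame(480, 640, CV_8UC3, cv::Scalar(10, 20, 30)); // dummy BGR image
    std::cout << frame.elemSize1() << "\n"; // prints 1: one byte per channel

    int x = 100, y = 50; // cursor position: x is the column, y is the row
    // at<> takes (row, col); the element type must match the Mat's layout:
    cv::Vec3b bgr = frame.at<cv::Vec3b>(y, x);
    // per-byte access with a BYTE-sized template parameter, as suggested above:
    uchar blue = frame.at<uchar>(y, 3 * x); // 3 bytes per pixel in CV_8UC3
    std::cout << "B=" << (int)bgr[0] << " G=" << (int)bgr[1]
              << " R=" << (int)bgr[2] << " (blue again: " << (int)blue << ")\n";
    return 0;
}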
Here is my version of your program; it was a cinch to write after installing OpenCV. It's C++14.
// ConsoleApplication1.cpp : Defines the entry point for the console application.
//
//#include "stdafx.h"
#include <Windows.h>
#include <opencv2/opencv.hpp>
#include <conio.h>
#include <iomanip> // for std::setw
#include <iostream>

int main()
{
    HWND hwnd = ::GetConsoleWindow();
    if (hwnd == NULL)
    {
        std::cout << "Error ! hwnd is NULL\n";
        return 3;
    }
    auto cam = cv::VideoCapture(0);
    auto frame = cv::Mat();
    POINT pnt = {};
    for (;;)
    {
        cam.grab();
        cam >> frame;
        ::GetCursorPos(&pnt);
        ::ScreenToClient(hwnd, &pnt);
        std::cout << frame.elemSize1() << " cx: " << frame.cols << " cy: " << frame.rows << " x: " << pnt.x << " y: " << pnt.y;
        if (0 < pnt.x && pnt.x < frame.cols
            && 0 < pnt.y && pnt.y < frame.rows)
        {
            // each pixel is 3 bytes wide, so the byte column is 3 * pnt.x
            const RGBTRIPLE& rgb = *reinterpret_cast<const RGBTRIPLE*>(&frame.at<BYTE>(pnt.y, 3 * pnt.x));
            std::cout << " color:"
                      << std::setw(4)
                      << (unsigned)rgb.rgbtRed
                      << std::setw(4)
                      << (unsigned)rgb.rgbtGreen
                      << std::setw(4)
                      << (unsigned)rgb.rgbtBlue;
        }
        std::cout << "\n";
    }
    return 0;
}
Hope this helps you move forward.
Are you sure that GetActiveWindow returns a valid HWND? Your console application does not have a window message queue.
A NULL hwnd will make your call to ScreenToClient fail, and cursorPosition will still be in screen coordinates. To make your program work, you could start with the following:
Call GetConsoleWindow() to get the HWND; the returned value stays valid for the lifetime of your app, so call it only once. Check that the HWND you get is not NULL!
Get the x, y size of frame, so you can check that your client-coordinate cursorPosition is within bounds. That is very likely what causes the crash.
Always check that cursorPosition is within the bounds of Mat frame before accessing it. Bounds checking is something you should always do when accessing a table with coordinates that come from a third party.
I don't really know much about OpenCV, but it seems your matrix needs to be initialized...
How about moving the declaration of frame out of the loop and specifying its size and depth? You could also move the call that fetches the HWND out of the loop. The top of your main() should then look like:
int main()
{
    VideoCapture cap(0);
    HWND hwnd = GetConsoleWindow();
    if (!hwnd)
    {
        std::cout << "Error!! hwnd is NULL\n";
        return 3;
    }
    Mat frame(480, 640, CV_8UC3); // rows, cols, type
    while (1)
    {
        cap.grab();
        cap >> frame;
        // grab cursor pos
        etc...
    ...
I have an Asus XTION camera. On my PC there is Windows 7 with Visual Studio 2013 installed. I also have PCL and OpenCV compiled for VS2013, and the OpenNI driver is installed. Now I want to get a point cloud from the camera. With the sample below I can view the current frames and save one:
// Original code by Geoffrey Biggs, taken from the PCL tutorial in
// http://pointclouds.org/documentation/tutorials/pcl_visualizer.php
// Simple OpenNI viewer that also allows to write the current scene to a .pcd
// when pressing SPACE.
#define _CRT_SECURE_NO_WARNINGS
#include <pcl/io/openni2_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/visualization/cloud_viewer.h>
#include <pcl/console/parse.h>
#include <pcl/filters/passthrough.h>
#include <iostream>
using namespace std;
using namespace pcl;
PointCloud<PointXYZRGBA>::Ptr cloudptr(new PointCloud<PointXYZRGBA>); // A cloud that will store color info.
PointCloud<PointXYZ>::Ptr fallbackCloud(new PointCloud<PointXYZ>); // A fallback cloud with just depth data.
PointCloud<PointXYZRGBA>::Ptr filteredCloud(new PointCloud<PointXYZRGBA>);
boost::shared_ptr<visualization::CloudViewer> viewer; // Point cloud viewer object.
Grabber* openniGrabber; // OpenNI grabber that takes data from the device.
unsigned int filesSaved = 0; // For the numbering of the clouds saved to disk.
bool saveCloud(false), noColor(false); // Program control.
void
printUsage(const char* programName)
{
    cout << "Usage: " << programName << " [options]"
         << endl
         << endl
         << "Options:\n"
         << endl
         << "\t<none>    start capturing from an OpenNI device.\n"
         << "\t-v FILE   visualize the given .pcd file.\n"
         << "\t-h        shows this help.\n";
}
// This function is called every time the device has new data.
void
grabberCallback(const PointCloud<PointXYZRGBA>::ConstPtr& cloud)
{
    if (!viewer->wasStopped())
        viewer->showCloud(cloud);
    if (saveCloud)
    {
        stringstream stream;
        stream << "inputCloud" << filesSaved << ".pcd";
        string filename = stream.str();
        // Filter object.
        PassThrough<pcl::PointXYZRGBA> filter;
        filter.setInputCloud(cloud);
        // Filter out all points with Z values not in the [0, 1.5] range.
        filter.setFilterFieldName("z");
        filter.setFilterLimits(0.0, 1.5);
        filter.filter(*filteredCloud);
        if (io::savePCDFile(filename, *filteredCloud, true) == 0)
        {
            filesSaved++;
            cout << "Saved " << filename << "." << endl;
        }
        else PCL_ERROR("Problem saving %s.\n", filename.c_str());
        saveCloud = false;
    }
}
// For detecting when SPACE is pressed.
void
keyboardEventOccurred(const visualization::KeyboardEvent& event,
                      void* nothing)
{
    if (event.getKeySym() == "space" && event.keyDown())
        saveCloud = true;
}

// Creates, initializes and returns a new viewer.
boost::shared_ptr<visualization::CloudViewer>
createViewer()
{
    boost::shared_ptr<visualization::CloudViewer> v
        (new visualization::CloudViewer("OpenNI viewer"));
    v->registerKeyboardCallback(keyboardEventOccurred);
    return (v);
}
int
main(int argc, char** argv)
{
    if (console::find_argument(argc, argv, "-h") >= 0)
    {
        printUsage(argv[0]);
        return -1;
    }
    bool justVisualize(false);
    string filename;
    if (console::find_argument(argc, argv, "-v") >= 0)
    {
        if (argc != 3)
        {
            printUsage(argv[0]);
            return -1;
        }
        filename = argv[2];
        justVisualize = true;
    }
    else if (argc != 1)
    {
        printUsage(argv[0]);
        return -1;
    }
    // First mode, open and show a cloud from disk.
    if (justVisualize)
    {
        // Try with color information...
        try
        {
            io::loadPCDFile<PointXYZRGBA>(filename.c_str(), *cloudptr);
        }
        catch (PCLException e1)
        {
            try
            {
                // ...and if it fails, fall back to just depth.
                io::loadPCDFile<PointXYZ>(filename.c_str(), *fallbackCloud);
            }
            catch (PCLException e2)
            {
                return -1;
            }
            noColor = true;
        }
        cout << "Loaded " << filename << "." << endl;
        if (noColor)
            cout << "This cloud has no RGBA color information present." << endl;
        else cout << "This cloud has RGBA color information present." << endl;
    }
    // Second mode, start fetching and displaying frames from the device.
    else
    {
        openniGrabber = new pcl::io::OpenNI2Grabber();
        if (openniGrabber == 0)
            return -1;
        boost::function<void(const PointCloud<PointXYZRGBA>::ConstPtr&)> f =
            boost::bind(&grabberCallback, _1);
        openniGrabber->registerCallback(f);
    }
    viewer = createViewer();
    if (justVisualize)
    {
        if (noColor)
            viewer->showCloud(fallbackCloud);
        else viewer->showCloud(cloudptr);
    }
    else openniGrabber->start();
    // Main loop.
    while (!viewer->wasStopped())
        boost::this_thread::sleep(boost::posix_time::seconds(1));
    if (!justVisualize)
        openniGrabber->stop();
}
But with this code I need the viewer. How can I get the point cloud without the viewer? It should work without the Kinect SDK, only with PCL and OpenNI.
I solved the problem like this:
#include <pcl/io/openni2_grabber.h>
#include <pcl/io/pcd_io.h>
#include <pcl/console/parse.h>
#include <iostream>
using namespace std;
using namespace pcl;
int main(int argc, char** argv)
{
    PointCloud<PointXYZRGBA>::Ptr sourceCloud(new PointCloud<PointXYZRGBA>);
    boost::function<void(const PointCloud<PointXYZRGBA>::ConstPtr&)> function =
        [&sourceCloud](const PointCloud<PointXYZRGBA>::ConstPtr& cloud)
        {
            copyPointCloud(*cloud, *sourceCloud);
        };
    // Create OpenNI2Grabber
    Grabber* grabber = new io::OpenNI2Grabber();
    // Register callback function
    grabber->registerCallback(function);
    // Start retrieving data
    grabber->start();
    boost::this_thread::sleep(boost::posix_time::seconds(1));
    // Stop retrieving data
    grabber->stop();
    cout << "The Cloud size: " << sourceCloud->size() << " points ..." << endl;
}
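One caveat with this approach: the fixed one-second sleep only works if the device delivers a cloud within that window. A variant I would consider (my own sketch against the same PCL API, not part of the original solution) blocks on a condition variable until the callback has actually fired:

#include <pcl/common/io.h>
#include <pcl/io/openni2_grabber.h>
#include <boost/thread/condition_variable.hpp>
#include <boost/thread/mutex.hpp>
#include <iostream>

using namespace std;
using namespace pcl;

int main(int argc, char** argv)
{
    PointCloud<PointXYZRGBA>::Ptr sourceCloud(new PointCloud<PointXYZRGBA>);
    boost::mutex m;
    boost::condition_variable cond;
    bool received = false;
    boost::function<void(const PointCloud<PointXYZRGBA>::ConstPtr&)> function =
        [&](const PointCloud<PointXYZRGBA>::ConstPtr& cloud)
        {
            boost::mutex::scoped_lock lock(m);
            copyPointCloud(*cloud, *sourceCloud);
            received = true;
            cond.notify_one(); // wake the main thread
        };
    Grabber* grabber = new io::OpenNI2Grabber();
    grabber->registerCallback(function);
    grabber->start();
    // Block until the first cloud arrives instead of sleeping a fixed time.
    {
        boost::mutex::scoped_lock lock(m);
        while (!received)
            cond.wait(lock);
    }
    grabber->stop();
    cout << "The Cloud size: " << sourceCloud->size() << " points ..." << endl;
}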
I'm trying to display a video file at 25 fps smoothly, without any lag. The code below does this, but only achieves about 10 fps, taking about 0.1 s per frame. With cvWaitKey(1) I get around 0.03 to 0.04 s, which would be perfect, but the named window just stays grey and doesn't show the video!
Is this because cvShowImage() is too slow? Is there any other way to speed up the code and output the video smoothly?
See my code below.
Thanks a lot in advance,
Adrian
#include <cv.h>
#include <iostream>
#include <highgui.h>
#include <cxcore.h>
#include <cvaux.h>
#include <sstream>
#include <time.h>
using namespace std;
using namespace cv;
int main(int argc, char** argv)
{
    CvCapture* vid = 0;
    IplImage* input; // Input image
    int fps;
    int i = 0;
    clock_t start, end;
    /* Creates the GUI to output the processed images */
    cvNamedWindow("Video input", 0);
    /* Open input video */
    if (argc != 2)
    {
        cout << "Please specify an input video." << endl;
        return -1;
    }
    else
    {
        vid = cvCreateFileCapture(argv[1]);
        if (!vid)
        {
            cout << "Could not extract frame." << endl;
            return -1;
        }
    }
    input = cvQueryFrame(vid);
    fps = (int)cvGetCaptureProperty(vid, CV_CAP_PROP_FPS);
    cout << fps << endl;
    cout << "Video found." << endl;
    /* Extraction loop */
    while (input)
    {
        start = clock();
        cout << flush;
        cout << i << "\r";
        i++;
        /* Show image */
        cvShowImage("Video input", input);
        cvWaitKey(2); // Wait is needed or else we see a grey box
        input = cvQueryFrame(vid); // Read next frame from video
        end = clock();
        cout << (double)(end - start) / CLOCKS_PER_SEC << " s" << endl;
    };
    /* Release the allocated memory for the frames */
    cvReleaseImage(&input);
    cvDestroyWindow("Video input");
    return 1;
}
Have you tried this without all the cout stuff?
The debug build of the Microsoft STL has cout handling which is almost unbelievably slow.
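For example, a hypothetical rework of the extraction loop above (reusing the poster's input and vid variables) that keeps the timing but prints a summary only every 25 frames:

/* Extraction loop with the per-frame printing removed */
double elapsed = 0.0;
int frames = 0;
while (input)
{
    clock_t start = clock();
    cvShowImage("Video input", input);
    cvWaitKey(2);
    input = cvQueryFrame(vid); // read next frame from video
    elapsed += (double)(clock() - start) / CLOCKS_PER_SEC;
    if (++frames % 25 == 0) // report roughly once a second, not every frame
        cout << frames << " frames, avg " << elapsed / frames << " s" << endl;
}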
Try calling cvWaitKey with 1000 divided by the fps you want; in your case:
cvWaitKey(1000/25)
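To hold the rate more steadily, you can also subtract the time each iteration spends on work from the delay. A sketch under the same old C API as the question (my own pacing logic, not a guaranteed fix):

#include <cv.h>
#include <highgui.h>
#include <time.h>

int main(int argc, char** argv)
{
    if (argc != 2) return -1;
    CvCapture* vid = cvCreateFileCapture(argv[1]);
    if (!vid) return -1;
    int fps = (int)cvGetCaptureProperty(vid, CV_CAP_PROP_FPS);
    if (fps <= 0) fps = 25; // some containers report 0
    const int frame_ms = 1000 / fps;
    cvNamedWindow("Video input", 0);
    IplImage* input = cvQueryFrame(vid);
    while (input)
    {
        clock_t start = clock();
        cvShowImage("Video input", input);
        int spent_ms = (int)((clock() - start) * 1000 / CLOCKS_PER_SEC);
        int delay = frame_ms - spent_ms; // budget left for this frame
        if (cvWaitKey(delay > 0 ? delay : 1) == 27) break; // ESC quits
        input = cvQueryFrame(vid);
    }
    cvDestroyWindow("Video input");
    return 0;
}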
You could try something like:
char key = cvWaitKey(10); // waits 10 milliseconds
if (key == 27)            // and if ESC is pressed, get out of the loop
    break;