I am taking input from the camera. To be more clear, I added a photo: two cameras connected to the same USB port. I capture from them with OpenCV as follows:
#include <opencv2/opencv.hpp>

using namespace cv;

#define CamLeft 2
#define CamRight 0
#define WIN_L "win_l"
#define WIN_R "win_r"

int main(int argc, const char * argv[])
{
    VideoCapture capLeft(CamLeft);
    bool opened = capLeft.isOpened();
    if (!opened /*|| !capRight.isOpened()*/) // check if we succeeded
        return -1;

    Mat edges;
    namedWindow(WIN_L, 1);
    for (;;)
    {
        Mat frameL;
        Mat frameR;
        capLeft >> frameL; // get a new frame from the left camera
        cvtColor(frameL, edges, CV_RGB2RGBA);
        imshow(WIN_L, edges);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
So I am creating a window named "win_l" (short for "window left") and processing the video capture. It works well. Now I have upgraded my code to support a second camera, like this:
int main(int argc, const char * argv[])
{
    VideoCapture capLeft(CamLeft);
    VideoCapture capRight(CamRight);
    bool opened = capLeft.isOpened();
    if (!opened /*|| !capRight.isOpened()*/) // check if we succeeded
        return -1;

    Mat edges;
    namedWindow(WIN_L, 1);
    namedWindow(WIN_R, 1);
    for (;;)
    {
        Mat frameL;
        Mat frameR;
        capLeft >> frameL; // get a new frame from the left camera
        cvtColor(frameL, edges, CV_RGB2RGBA);
        imshow(WIN_L, edges);
        imshow(WIN_R, edges);
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
But then I don't see the debugger hit this line: bool opened = capLeft.isOpened();. Is this the correct way to capture from two cameras?
I suspect a USB bandwidth allocation problem. I am not sure that is the cause, but it usually is when two separate cameras each have their own USB cable.
Still, do give these a shot. Some methods to overcome the problem: 1) put a Sleep(ms) between your capture lines; 2) use a lower resolution, which reduces the bandwidth used by each camera; 3) use the MJPEG format (compressed frames). A sketch of 2) and 3) follows.
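A minimal sketch of methods 2) and 3), using the old OpenCV 2.x CV_CAP_PROP_* constants to match the code above; whether the camera honors these requests depends entirely on the driver:

VideoCapture capLeft(CamLeft);
VideoCapture capRight(CamRight);

// Method 2: request a lower resolution to reduce per-camera USB bandwidth.
capLeft.set(CV_CAP_PROP_FRAME_WIDTH, 320);
capLeft.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
capRight.set(CV_CAP_PROP_FRAME_WIDTH, 320);
capRight.set(CV_CAP_PROP_FRAME_HEIGHT, 240);

// Method 3: request MJPEG (compressed) frames instead of raw ones.
capLeft.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));
capRight.set(CV_CAP_PROP_FOURCC, CV_FOURCC('M', 'J', 'P', 'G'));

The set() calls return a bool, so you can check whether the driver accepted each request.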
EDIT:
I came across this website, WEBCAM AND USB DRIVERS, which you might like to read.
Anyway, I am not sure whether both cameras are able to run concurrently through one USB port, for the reason mentioned there (I had actually forgotten about it, my bad):
USB: Universal Serial Bus. As the name suggests, it uses serial data transmission, meaning it can only move data to and from one device at a time.
What is happening in your code is a deadlock, which stops your program from progressing, even with one camera.
Two reasons could cause this:
1) I suspect the drivers for the two cameras are consistently overwriting each other: each sends an I/O Request Packet (IRP), the "pipe" keeps receiving and processing each request, and the system keeps deallocating each resource immediately after it is allocated. This goes on and on.
2) There is starvation of the second camera: if it is not allocated the resource, the program cannot progress, so the system is stuck in a loop where the second camera keeps requesting the resource.
However, it would be odd for somebody to build this two-camera device over a single serial bus when each camera needs a channel/pipe of its own to communicate with the system. Maybe you should check with the manufacturer directly? Perhaps they came up with a workaround, which is why they built this camera.
If not, the purpose of the device seems fairly redundant. I can only think of a security application, with one camera configured for night view and one for day view.
Related
I want to receive frames from a UDP port and run face recognition algorithms on them with the OpenCV cv::dnn framework. A Tello drone is sending the frames over UDP.
#include <opencv2/opencv.hpp>
#include <opencv2/dnn.hpp>

/* load the dnn face-detection model */
cv::dnn::Net net = cv::dnn::readNetFromCaffe("dnnmodel/deploy.prototxt.txt",
                                             "dnnmodel/res10_300x300_ssd_iter_140000.caffemodel");
cv::VideoCapture cap("udp://@0.0.0.0:11111?overrun_nonfatal=1&fifo_size=50000000");
cv::Mat frame;
float confidenceThreshold = 0.2;

while (true)
{
    if (!cap.read(frame))
        break;

    // build a 300x300 blob and run the SSD face detector on it
    cv::Mat inputBlob = cv::dnn::blobFromImage(frame, 1, cv::Size(300, 300),
                                               cv::Scalar(104.0, 177.0, 123.0), false, false);
    net.setInput(inputBlob, "data");
    cv::Mat detection = net.forward("detection_out");
    cv::Mat detectionMat(detection.size[2], detection.size[3], CV_32F, detection.ptr<float>());

    cv::imshow("window", frame);
    char key = cv::waitKey(10);
    if (key == 27) // ESC
        break;
}
The camera response time is very high, around 10-20 seconds: when I move the camera, I get the new frame about 20 seconds later.
But if I use my laptop's webcam instead of the UDP port in VideoCapture, with this call:
cv::VideoCapture cap;
cap.open(0);
the result is perfect; there is no delay when I am using the webcam.
What is the reason for this delay?
With unreliable protocols like UDP, where the comms stack can, and will, discard data that is not promptly moved to user space, it is important to treat reading the data as high priority, even at the expense of added complexity in the receive code.
In this case, a separate thread can extract datagrams as soon as they are available and queue the buffers (pointers to buffers, anyway) off to the processing code that would otherwise use excessive time and cause dropped datagrams.
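A minimal sketch of that pattern in C++11; the single-slot "latest frame" buffer and the names here are illustrative choices, not part of the original answer:

#include <opencv2/opencv.hpp>
#include <atomic>
#include <mutex>
#include <thread>

int main()
{
    cv::VideoCapture cap("udp://@0.0.0.0:11111?overrun_nonfatal=1&fifo_size=50000000");
    std::mutex mtx;
    cv::Mat latest;                 // most recent frame; older ones are overwritten
    std::atomic<bool> done(false);

    // Reader thread: drain the UDP stream as fast as possible so the
    // protocol stack never has to discard datagrams on our behalf.
    std::thread reader([&]() {
        cv::Mat frame;
        while (!done && cap.read(frame)) {
            std::lock_guard<std::mutex> lock(mtx);
            frame.copyTo(latest);
        }
        done = true;
    });

    cv::Mat work;
    while (!done) {
        {
            std::lock_guard<std::mutex> lock(mtx);
            if (latest.empty()) continue;
            latest.copyTo(work);
        }
        // ... run the (slow) dnn detection on 'work' here ...
        cv::imshow("window", work);
        if (cv::waitKey(10) == 27) done = true; // ESC
    }
    reader.join();
    return 0;
}

This way the slow dnn forward pass only limits how often you sample frames, not how quickly datagrams are taken off the socket.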
Hey - it worked!
I'm running into an odd problem with OpenCV on Linux, Ubuntu 16.04 specifically. If I use the usual code to show a webcam stream, like this, it works fine:
// WebcamTest.cpp

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // declare a VideoCapture object and associate to webcam, 1 => use 2nd webcam,
    // the 0th webcam is the one integral to the TX2 development board
    cv::VideoCapture capWebcam(1);

    // check if VideoCapture object was associated to webcam successfully, if not, show error message and bail
    if (capWebcam.isOpened() == false)
    {
        std::cout << "error: capWebcam not accessed successfully\n\n";
        return (0);
    }

    cv::Mat imgOriginal;         // input image
    cv::Mat imgGrayscale;        // grayscale of input image
    cv::Mat imgBlurred;          // intermediate blurred image
    cv::Mat imgCanny;            // Canny edge image
    char charCheckForEscKey = 0;

    // while the Esc key has not been pressed and the webcam connection is not lost . . .
    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal); // get next frame

        // if frame was not read successfully, print error message and jump out of while loop
        if (!blnFrameReadSuccessfully || imgOriginal.empty())
        {
            std::cout << "error: frame not read from webcam\n";
            break;
        }

        cv::cvtColor(imgOriginal, imgGrayscale, CV_BGR2GRAY);          // convert to grayscale
        cv::GaussianBlur(imgGrayscale, imgBlurred, cv::Size(5, 5), 0); // blur image
        cv::Canny(imgBlurred, imgCanny, 75, 150);                      // get Canny edges

        cv::imshow("imgOriginal", imgOriginal);
        cv::imshow("imgCanny", imgCanny);

        charCheckForEscKey = cv::waitKey(1); // delay (in ms) and get key press, if any
    } // end while

    return (0);
}
This example shows the webcam stream in one imshow window and a Canny edge image in a second window. Both windows update and show the images as expected, with very little if any perceptible flicker.
If you're wondering why I'm using camera 1 instead of the usual camera 0: I'm running this on a Jetson TX2, where camera 0 is the one integral to the development board, and I'd prefer to use an additional external webcam. For the same reason I have to use Ubuntu 16.04, but I suspect the result would be the same with Ubuntu 18.04 (I have not tested this, however).
If instead I call a function that takes significant processing time in place of the simple Canny edges, i.e.:
int main(void)
{
    .
    .
    .
    // declare a VideoCapture object and associate to webcam, 1 => use 2nd webcam,
    // the 0th webcam is the one integral to the TX2 development board
    cv::VideoCapture capWebcam(1);

    // check if VideoCapture object was associated to webcam successfully, if not, show error message and bail
    if (capWebcam.isOpened() == false)
    {
        std::cout << "error: capWebcam not accessed successfully\n\n";
        return (0);
    }

    cv::namedWindow("imgOriginal");
    cv::Mat imgOriginal;
    char charCheckForEscKey = 0;

    // while the Esc key has not been pressed and the webcam connection is not lost . . .
    while (charCheckForEscKey != 27 && capWebcam.isOpened())
    {
        bool blnFrameReadSuccessfully = capWebcam.read(imgOriginal); // get next frame

        // if frame was not read successfully, print error message and jump out of while loop
        if (!blnFrameReadSuccessfully || imgOriginal.empty())
        {
            std::cout << "error: frame not read from webcam\n";
            break;
        }

        detectLicensePlate(imgOriginal);

        cv::imshow("imgOriginal", imgOriginal);

        charCheckForEscKey = cv::waitKey(1); // delay (in ms) and get key press, if any
    } // end while
    .
    .
    .
    return (0);
}
The detectLicensePlate() function takes about a second to run.
The problem I'm having is that when running this program, the window only appears for the slightest amount of time, usually not long enough to even be perceptible, and never long enough to actually see the result.
The strange thing is that the window disappears, then the second or so of delay occurs while detectLicensePlate() does its thing, then the window appears again for a very short time, then disappears again, and so on. It's almost as though, just after cv::imshow("imgOriginal", imgOriginal);, cv::destroyAllWindows(); were implicitly being called.
The behavior I'm attempting to achieve is for the window to stay open and continue to show the previous result while processing the next. From what I recall this was the default behavior on Windows.
I should mention that I'm explicitly declaring the window with cv::namedWindow("imgOriginal"); before the while loop, in an attempt to not let it go out of scope, but this does not seem to help.
Of course I can make the delay longer, e.g.
charCheckForEscKey = cv::waitKey(1500);
to wait for 1.5 seconds, but then the application becomes very unresponsive.
Based on this post, c++ opencv image not display inside the boost thread, I tried declaring the window outside the while loop and putting detectLicensePlate() and cv::imshow() on a separate thread, as follows:
.
.
.
cv::namedWindow("imgOriginal");
boost::thread myThread;

// while the Esc key has not been pressed and the webcam connection is not lost . . .
while (charCheckForEscKey != 27 && capWebcam.isOpened())
{
    // if frame was not read successfully, print error message and jump out of while loop
    if (!blnFrameReadSuccessfully || imgOriginal.empty())
    {
        std::cout << "error: frame not read from webcam\n";
        break;
    }

    myThread = boost::thread(&preDetectLicensePlate, imgOriginal);
    myThread.join();
    .
    .
    .
} // end while

// separate function
void preDetectLicensePlate(cv::Mat &imgOriginal)
{
    detectLicensePlate(imgOriginal);
    cv::imshow("imgOriginal", imgOriginal);
}
I even tried putting detectLicensePlate() on a separate thread but not cv::imshow(), and the other way around, still with the same result. No matter how I change the order or use threading, I can't get the window to stay open while the next round of processing runs.
I realize I could use an entirely different windowing environment, such as Qt or something else, and that may or may not solve the problem, but I'd really prefer to avoid that for various reasons.
Does anybody have any other suggestions to get an OpenCV imshow window to stay open until the window is next updated or cv::destroyAllWindows() is called explicitly?
So I'm currently working on a project that needs to do facial recognition on an RTSP IP cam. I managed to get the RTSP feed with no problems, but when it comes to applying the face recognition, the video feed gets too slow and shows a great delay. I even used multithreading to make it better, but with no success. Here is my code; I'm still a beginner in multithreading matters, so any help would be appreciated.
#include <iostream>
#include <thread>
#include <vector>
#include "opencv2/opencv.hpp"

using namespace std;
using namespace cv;

void detect(Mat img, String strCamera) {
    string cascadeName1 = "C:\\ocv3.2\\Build\\install\\etc\\haarcascades\\haarcascade_frontalface_alt.xml";
    CascadeClassifier facedetect;
    bool loaded1 = facedetect.load(cascadeName1);

    Mat original;
    img.copyTo(original);

    vector<Rect> human;
    cvtColor(img, img, CV_BGR2GRAY);
    equalizeHist(img, img);
    facedetect.detectMultiScale(img, human, 1.1, 2, 0 | 1, Size(40, 80), Size(400, 480));

    // draw a rectangle around each detected face
    for (size_t gg = 0; gg < human.size(); gg++)
    {
        rectangle(original, human[gg].tl(), human[gg].br(), Scalar(0, 0, 255), 2, 8, 0);
    }

    imshow("Detect " + strCamera, original);
    int key6 = waitKey(40);
    // end of detect
}

void stream(String strCamera) {
    VideoCapture cap(strCamera);
    if (cap.isOpened()) {
        while (true) {
            Mat frame;
            cap >> frame;
            resize(frame, frame, Size(640, 480));
            detect(frame, strCamera);
        }
    }
}

int main() {
    thread cam1(stream, "rtsp://admin:password@ipaddress:554/live2.sdp?tcp");
    thread cam2(stream, "rtsp://admin:password@ipaddress/live2.sdp?tcp");
    cam1.join();
    cam2.join();
    return 0;
}
I had similar issues and was able to resolve them by completely isolating the frame capturing from the processing of the images. I also updated OpenCV to the latest version available (3.2.0), but I think this will also resolve problems with earlier versions.
void StreamLoop(String strCamera, LFQueue1P1C<Mat> *imageQueue, bool *shutdown) {
    VideoCapture cap(strCamera, CV_CAP_FFMPEG);
    Mat image;
    while (!(*shutdown) && cap.isOpened()) {
        cap >> image;
        imageQueue->Produce(image);
    }
}

int main() {
    Mat aImage1;
    bool shutdown(false);
    LFQueue1P1C<Mat> imageQueue;
    string rtsp("rtsp://admin:password@ipaddress:554/live2.sdp?tcp");

    thread streamThread(StreamLoop, rtsp, &imageQueue, &shutdown);
    ...
    while (!shutdownCondition) {
        if (imageQueue.Consume(aImage1)) {
            // process the image
            resize(aImage1, aImage1, Size(640, 480));
            detect(aImage1, rtsp);
        }
    }
    shutdown = true;
    if (streamThread.joinable()) streamThread.join();
    ...
    return 0;
}
It seems there is some issue with RTSP in OpenCV where it easily hangs up if there are even slight pauses while picking up the frames. As long as I pick up frames without much pause, I have not seen a problem.
Also, I did not have this issue when the video cameras were directly connected to my local network. It was not until we deployed them at a remote site that I started getting the hang-ups. Separating frame retrieval and processing into separate threads resolved my issues; hopefully someone else finds this solution useful.
Note: the queue I used is a custom queue for passing items from one thread to another. The code I posted is modified from my original code to make it more readable and applicable to this problem; a stand-in implementation is sketched below.
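For reference, a minimal mutex-based stand-in for the LFQueue1P1C used above. This is an assumption about its interface (Produce/Consume, one producer and one consumer), not the author's lock-free implementation:

#include <mutex>
#include <queue>

// Minimal single-producer/single-consumer queue with the Produce/Consume
// interface assumed above. A real lock-free queue would avoid the mutex.
template <typename T>
class LFQueue1P1C {
public:
    void Produce(const T &item) {
        std::lock_guard<std::mutex> lock(mtx_);
        queue_.push(item);
    }

    // Returns false if no item is available, so the consumer can poll.
    bool Consume(T &item) {
        std::lock_guard<std::mutex> lock(mtx_);
        if (queue_.empty()) return false;
        item = queue_.front();
        queue_.pop();
        return true;
    }

private:
    std::mutex mtx_;
    std::queue<T> queue_;
};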
I'm still a beginner in multithreading matters so any help would be appreciated
Having threads that have no way of exiting will cause you issues in the future. Even if it is test code, get in the habit of making sure the code has an exit path. For example, you might copy and paste a section of code later on, forget there is an infinite loop in there, and then suffer great grief trying to track down mysterious crashes or locked-up resources. A minimal exit path is sketched below.
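A minimal sketch of such an exit path for the stream() loop above, using std::atomic<bool> (the flag name is my own, not from the original code):

#include <atomic>
#include <thread>
#include "opencv2/opencv.hpp"

std::atomic<bool> stopRequested(false); // set to true from main() to stop all streams

void stream(cv::String strCamera) {
    cv::VideoCapture cap(strCamera);
    cv::Mat frame;
    // the loop now exits when the flag is set or the camera stops delivering frames
    while (!stopRequested && cap.isOpened() && cap.read(frame)) {
        // ... resize/detect as before ...
    }
}

main() can then set stopRequested = true and join the threads cleanly instead of killing the process.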
I am not a C++ developer, but I had the same problem in Java. I solved my issue by calling the VideoCapture.grab() function before reading the camera frame. According to the OpenCV documentation:
The primary use of the function is in multi-camera environments,
especially when the cameras do not have hardware synchronization.
Besides that, in a Java application you should release your frames' Mat objects every time you read new frames.
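For C++ readers, the equivalent grab/retrieve split looks roughly like this (a sketch with two assumed camera indices; grab() captures the frame quickly, retrieve() does the slower decode afterwards):

cv::VideoCapture cap1(0), cap2(1);
cv::Mat frame1, frame2;

for (;;) {
    // grab both frames first, as close together in time as possible
    if (!cap1.grab() || !cap2.grab()) break;

    // then do the slower decode/copy step for each camera
    cap1.retrieve(frame1);
    cap2.retrieve(frame2);

    // ... process frame1 and frame2 ...
    if (cv::waitKey(10) == 27) break; // ESC
}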
I'm trying to set the camera parameters using the following code, and it is not working at all.
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char *argv[])
{
    VideoCapture cap(0); // open the default camera
    if (!cap.isOpened()) // check if we succeeded
        return -1;

    bool value = cap.set(CV_CAP_PROP_FRAME_WIDTH, 10); // request a 10-pixel-wide frame

    for (;;)
    {
        Mat frame;
        cap >> frame; // get a new frame from camera
        imshow("frame", frame);
        unsigned char *dad = (unsigned char *)frame.data;
        if (waitKey(30) >= 0) break;
    }
    // the camera will be deinitialized automatically in the VideoCapture destructor
    return 0;
}
OpenCV tries to set this size directly in the camera, so that it does not need to resize the frame itself.
The problem with this approach is that if your camera does not support the requested size natively, OpenCV will fail to set the value, leaving you the task of resizing the frame after it is retrieved.
cap.set() returns whether the call succeeded, so I suggest you check it; a sketch follows.
I also recommend taking a look at another thread: how to change the capture resolution in OpenCV.
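A minimal sketch of that check-and-fallback, using the old CV_CAP_PROP_* constants to match the question (the 320x240 target is only an example, and since some backends report success even when the size is not applied, the actual frame size is checked as well):

cv::VideoCapture cap(0);
const int wantedWidth = 320, wantedHeight = 240;

// ask the camera for the size; set() reports whether the call succeeded
bool widthOk  = cap.set(CV_CAP_PROP_FRAME_WIDTH,  wantedWidth);
bool heightOk = cap.set(CV_CAP_PROP_FRAME_HEIGHT, wantedHeight);

cv::Mat frame;
cap >> frame;

// if the camera does not support the size natively, resize after retrieval
if (!widthOk || !heightOk || frame.cols != wantedWidth || frame.rows != wantedHeight)
    cv::resize(frame, frame, cv::Size(wantedWidth, wantedHeight));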
OpenCV is using DirectShow for video capture here. However, your camera likely supports only a few resolution settings, such as 480x320, 640x480, 720p, and 1080p; if you set anything else, it will not work at all.
If you want to check which resolutions your camera supports, download GraphEdit and check the capture pin properties.
The above code is not for changing the camera parameters; I think it is only useful for showing the video on your machine. Maybe this link is useful to you: http://opencv.willowgarage.com/wiki/CameraCapture
Hey guys,
I'm using OpenCV with the C++ API, and to make my project more reliable I need proper camera connection/disconnection handling.
I have searched for how-tos, but I could only find answers that require an ugly hack to do it.
Can you suggest a cleaner way?
Thanks
Detecting camera connection/disconnection may require some tricks.
I suggest starting another thread that checks the success of cvCreateCameraCapture() in a loop while your application is running.
Something like the following:
while (run_detection_thread) // global variable controlled by the main thread
{
    CvCapture *capture = cvCreateCameraCapture(-1); // -1, or whatever camera index works for you
    if (capture) // camera is connected
    {
        cvReleaseCapture(&capture); // release it so the rest of the application can use the camera
        sleep(1);
    }
    else
    {
        // camera was disconnected
    }
}
I think I have a good workaround for this problem. I create an auxiliary Mat of zeros with the same resolution as the camera output. I copy it into the Mat that the captured frame is about to be assigned to, and afterwards I check the norm of that Mat. If the norm equals zero, no new frame was captured from the camera.
VideoCapture cap(0);
if (!cap.isOpened()) return -1;

Mat frame;
cap >> frame;

// zero image with the same resolution and type as the camera output
// (note: rows come from the height property, cols from the width property)
Mat emptyFrame = Mat::zeros((int)cap.get(CV_CAP_PROP_FRAME_HEIGHT),
                            (int)cap.get(CV_CAP_PROP_FRAME_WIDTH), CV_8UC3);

for (;;)
{
    emptyFrame.copyTo(frame); // deep copy, so the zero template is never overwritten
    cap >> frame;
    if (norm(frame) == 0) break; // frame is still all zeros: no new frame arrived
}