I want to capture images on a Raspberry Pi at 60 Hz or more. My code is in C++, and we have a C++ interface library here, but that library maxes out at 30 Hz.
My target is a minimum of 60 Hz.
So far I have found that raspistill can reach up to 90 Hz, so I am trying to interface my C++ program with the raspistill code.
I found a library here, PiCam, that interfaces directly with raspistill. I am not sure it can reach 60 Hz; I am still testing it and have a few issues.
My queries are:
(1) How can I achieve 60 fps on the RPi using C++?
(2) To interface with PiCam, I have compiled, built and installed the library with no issues.
But I get a black image when I capture. What could be the issue? Part of my code is shown below.
CCamera* cam = StartCamera(640, 480, 60, 1, true);
char mybuffer[640 * 480 * 4];
int ret = cam->ReadFrame(0, mybuffer, sizeof(mybuffer));
cout << " ret " << ret << endl;
Mat img(480, 640, CV_8UC4, mybuffer);
imwrite("img.jpg", img);
img.jpg comes out completely black.
(3) Using PiCam, how can I change to a grayscale image?
I use Raspicam from here on a Raspberry Pi 3 and get around 90 fps in black and white mode.
I am currently re-purposing the code for something else, so it is not 100% tailored to your needs, but it should get you going.
#include <ctime>
#include <fstream>
#include <iostream>
#include <raspicam/raspicam.h>
#include <unistd.h> // for usleep()
using namespace std;
#define NFRAMES 1000
#define WIDTH 1280
#define HEIGHT 960
int main ( int argc, char **argv ) {
    raspicam::RaspiCam Camera;

    // Allowable values: RASPICAM_FORMAT_GRAY, RASPICAM_FORMAT_RGB, RASPICAM_FORMAT_BGR, RASPICAM_FORMAT_YUV420
    Camera.setFormat(raspicam::RASPICAM_FORMAT_GRAY);

    // Allowable widths: 320, 640, 1280
    // Allowable heights: 240, 480, 960
    // setCaptureSize(width,height)
    Camera.setCaptureSize(WIDTH, HEIGHT);

    // Open camera
    cout << "Opening Camera..." << endl;
    if (!Camera.open()) { cerr << "Error opening camera" << endl; return -1; }

    // Wait until camera stabilizes
    cout << "Sleeping for 3 secs" << endl;
    usleep(3000000);

    cout << "Grabbing " << NFRAMES << " frames" << endl;

    // Allocate memory for camera buffer
    unsigned long bytes = Camera.getImageBufferSize();
    cout << "Width: " << Camera.getWidth() << endl;
    cout << "Height: " << Camera.getHeight() << endl;
    cout << "ImageBufferSize: " << bytes << endl;
    unsigned char *data = new unsigned char[bytes];

    for (int frame = 0; frame < NFRAMES; frame++) {
        // Capture frame
        Camera.grab();
        // Extract the image
        Camera.retrieve(data, raspicam::RASPICAM_FORMAT_IGNORE);
    }

    delete[] data;
    return 0;
}
By the way, it works very nicely with CImg.
Also, I haven't yet had time to see whether it is faster to create a new thread to process each frame, or to start a few threads at the beginning and use a condition variable to wake one after each frame is acquired.
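If you want to check the grayscale frames without pulling in an image library, each buffer can be dumped as a binary PGM file. A minimal sketch, assuming the `data`/`WIDTH`/`HEIGHT` from the code above; `writePGM` is my own helper, not part of Raspicam:

```cpp
#include <fstream>
#include <string>

// Write an 8-bit grayscale buffer as a binary PGM (P5) file.
// Returns true on success. (Hypothetical helper, not part of raspicam.)
bool writePGM(const std::string &path, const unsigned char *data,
              int width, int height) {
    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    out << "P5\n" << width << " " << height << "\n255\n";
    out.write(reinterpret_cast<const char *>(data),
              static_cast<std::streamsize>(width) * height);
    return static_cast<bool>(out);
}
```

Inside the grab loop you would then call `writePGM("frame" + std::to_string(frame) + ".pgm", data, WIDTH, HEIGHT);` — PGM files open directly in most image viewers.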
What Mark Setchell responded is correct.
But I found that the frame-rate parameter is not exposed at the API level, so the frame rate cannot be set there; the default is 30 Hz.
It can be changed in the src/private/private_impl.cpp file. I set it to 60 Hz and now it works.
Related
I've been having trouble since I changed from OpenCV 3.x to 4.x (compiled from source) in my C++ project. I've reproduced the behaviour in a small example that just opens a webcam and records for 5 seconds.
With 3.x I am able to set the webcam frame rate to 30 at full HD, but the same code with 4.x just ignores camera.set(cv::CAP_PROP_FPS, 30) and sets it to 5 instead. If I use 720p, the fps is set to 10.
Maybe the code is not relevant here, as it's a classic example, but I'll leave it here just in case.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <thread>
#include <unistd.h>
using namespace cv;

VideoCapture camera(0);
bool stop = false;
int fc = 0;

void saveFrames()
{
    while (!stop)
    {
        Mat frame;
        camera >> frame;
        cv::imwrite("/tmp/frames/frame" + std::to_string(fc) + ".jpg", frame);
        fc++;
    }
}

int main()
{
    if (!camera.isOpened())
        return -1;

    camera.set(cv::CAP_PROP_FRAME_WIDTH, 1920);
    camera.set(cv::CAP_PROP_FRAME_HEIGHT, 1080);
    camera.set(cv::CAP_PROP_FPS, 30);

    double fps = camera.get(cv::CAP_PROP_FPS);
    std::cout << "FPS setting: " << fps << std::endl; // 5 with OCV4, 30 with OCV3

    std::thread tr(saveFrames);
    int waitSeconds = 5;
    usleep(waitSeconds * 1e6);
    stop = true;
    tr.join();

    std::cout << "Written " << fc << " frames of " << fps * waitSeconds << std::endl;
    return 0;
}
Edit: more tests on other computers yield the same result, except on a MacBook Pro (running the same distribution), where OpenCV 4.3 seems to work. The other two computers are desktops with USB webcams.
Edit 2: same problem with version 3.4 built from source. For now, only 3.2 from the repo works correctly on the two computers with USB webcams.
This is a known bug that affects OpenCV > 3.3
The operations I did were quite simple: I read an .avi file with dimensions of 1280x720, stored one frame of the video in a Mat object, and displayed it.
Here is part of the code:
VideoCapture capL;
capL.open("F:/renderoutput/cube/left.avi");
Mat frameL;
cout << capL.get(CAP_PROP_FRAME_WIDTH) << ", " << capL.get(CAP_PROP_FRAME_HEIGHT) << endl;
for (;;)
{
    capL.read(frameL);
    cout << frameL.size() << endl;
    if (frameL.empty())
        break;
    imshow("Output", frameL);
    waitKey(200);
}
......
But the dimensions of capL and frameL are not the same: the former is 1280x720 and the latter 1280x360. Why is this happening? I have been using OpenCV 3.3.1 in Visual Studio for quite a long time, and one day this happened.
Most likely the video is interlaced, so each frame contains only half of its height.
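If you need full-height frames from such a file, the simplest fix is to deinterlace by line doubling: repeat each row of the half-height field to rebuild the full height (`cv::resize` on `frameL` achieves the same thing with interpolation). A minimal sketch on a raw 8-bit single-channel buffer; `lineDouble` is my own name, not an OpenCV function:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Double every row of a fieldHeight x width 8-bit image, producing a
// (2*fieldHeight) x width image. The simplest possible deinterlace.
std::vector<unsigned char> lineDouble(const std::vector<unsigned char> &field,
                                      int width, int fieldHeight) {
    std::vector<unsigned char> full(static_cast<size_t>(width) * fieldHeight * 2);
    for (int y = 0; y < fieldHeight; ++y) {
        const unsigned char *src = &field[static_cast<size_t>(y) * width];
        // Copy the source row into two consecutive destination rows.
        std::copy(src, src + width, &full[static_cast<size_t>(2 * y) * width]);
        std::copy(src, src + width, &full[static_cast<size_t>(2 * y + 1) * width]);
    }
    return full;
}
```

Line doubling halves the vertical resolution visually; a weave or bob deinterlacer gives better quality if both fields are available.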
I am working on a real-time image-processing project using a Basler acA1300-200uc camera connected over USB 3. I am having trouble with the frame rate of my C++ program: the camera supports over 200 fps, but my program only runs at around 30 fps and I don't know how to increase it. My project needs approximately 100 fps.
This is my code. I hope you can help me; thanks in advance.
#include <Windows.h>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\video\video.hpp>
#include <pylon\PylonIncludes.h>
#include <time.h>
using namespace Pylon;
// Settings for using Basler USB cameras.
#include <pylon/usb/BaslerUsbInstantCamera.h>
typedef Pylon::CBaslerUsbInstantCamera Camera_t;
using namespace Basler_UsbCameraParams;
using namespace cv;
using namespace std;
static const uint32_t c_countOfImagesToGrab = 1000;
int main(int argc, char* argv[]) {
    int frames = 0;
    double seconds = 0, fps;
    time_t start, end;
    Pylon::PylonAutoInitTerm autoInitTerm;
    try
    {
        CDeviceInfo info;
        info.SetDeviceClass(Camera_t::DeviceClass());
        Camera_t camera(CTlFactory::GetInstance().CreateFirstDevice(info));
        cout << "Device used: " << camera.GetDeviceInfo().GetModelName() << endl;
        camera.Open();
        camera.MaxNumBuffer = 10;

        CImageFormatConverter formatConverter;
        formatConverter.OutputPixelFormat = PixelType_BGR8packed;
        CPylonImage pylonImage;
        Mat openCvImage, gray_img;
        vector<Vec3f> circles;

        int64_t W = 800, H = 600;
        camera.Width.SetValue(W);
        camera.Height.SetValue(H);

        camera.StartGrabbing(c_countOfImagesToGrab, GrabStrategy_LatestImageOnly);
        CGrabResultPtr ptrGrabResult;
        camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
        cout << "SizeX: " << ptrGrabResult->GetWidth() << endl;
        cout << "SizeY: " << ptrGrabResult->GetHeight() << endl;

        cvNamedWindow("OpenCV Display Window", CV_WINDOW_AUTOSIZE);
        time(&start);
        while (camera.IsGrabbing())
        {
            camera.RetrieveResult(5000, ptrGrabResult, TimeoutHandling_ThrowException);
            if (ptrGrabResult->GrabSucceeded())
            {
                formatConverter.Convert(pylonImage, ptrGrabResult);
                openCvImage = Mat(ptrGrabResult->GetHeight(), ptrGrabResult->GetWidth(), CV_8UC3, (uint8_t *)pylonImage.GetBuffer());
                imshow("OpenCV Display Window", openCvImage);
                frames++;
                if (waitKey(30) >= 0) break;
            }
        }
        time(&end);
    }
    catch (...) { cout << "error" << endl; }
    seconds = difftime(end, start);
    fps = frames / seconds;
    cout << "fps: " << fps;
    Sleep(1000);
}
Frame rate is affected by many parameters. If the manufacturer specifies 200 fps as the maximum at full resolution, that is the absolute maximum with:
minimum exposure time (too short for most applications)
nothing else going on on the USB bus
optimal transfer and acquisition parameters (maximum acquisition frame rate, no bandwidth limitations, fast readout mode)
...
In case you hadn't noticed, that's the marketing guy dangling big, juicy bait. 200 fps cannot be achieved in most applications due to many factors.
You can read out the resulting frame rate for your current configuration like this:
// Get the resulting frame rate
double d = camera.ResultingFrameRate.GetValue();
Refer to the camera's user manual; there's an entire chapter on frame rate, frame-rate limitations and frame-rate optimization.
I also see a waitKey(30) call inside your fps measurement. That call delays your grab loop for at least 30 ms per frame unless you press a key. If you display each frame for 30 milliseconds (at least, that's how I understand the waitKey documentation), how are you supposed to reach 100 fps? 1 frame / 0.03 s = 33.33 fps.
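To measure the achievable rate, drop the display delay to `waitKey(1)` (or display only every Nth frame) and time the loop with `std::chrono` instead of the one-second resolution of `time()`. A sketch of such a timing helper; `FpsMeter` is my own class, not part of Pylon or OpenCV:

```cpp
#include <chrono>

// Simple frame-rate meter: call tick() once per frame, read fps() at the end.
class FpsMeter {
    std::chrono::steady_clock::time_point start_ = std::chrono::steady_clock::now();
    long frames_ = 0;
public:
    void tick() { ++frames_; }
    long frames() const { return frames_; }
    double fps() const {
        // Elapsed wall-clock time in seconds since construction.
        double seconds = std::chrono::duration<double>(
            std::chrono::steady_clock::now() - start_).count();
        return seconds > 0 ? frames_ / seconds : 0.0;
    }
};
```

In the grab loop above you would call `meter.tick()` after each successful `RetrieveResult` and print `meter.fps()` at the end, which isolates the camera's throughput from the display overhead.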
How can I print all the colors in an image using C++, as hexadecimal codes such as #FFFFFF?
What kind of library do I need to extract the colors pixel by pixel, and how do I loop over all the pixels one by one using that library?
Sorry for my bad English.
Thanks.
If I may be so bold as to quote Sir Isaac Newton, the easiest way to dump an image's pixels in hex is by "standing on the shoulders of giants". The giant in this case is ImageMagick, which is installed on most Linux distros and is available for free, for macOS and Windows.
At the command line, simply type:
convert picture.jpg txt:
Output
# ImageMagick pixel enumeration: 300,168,65535,srgb
0,0: (64507,56283,34952) #FBDB88 srgb(251,219,136)
1,0: (65535,58596,37779) #FFE493 srgb(255,228,147)
2,0: (65535,56026,36237) #FFDA8D srgb(255,218,141)
3,0: (62708,51400,33153) #F4C881 srgb(244,200,129)
4,0: (62965,49858,33153) #F5C281 srgb(245,194,129)
5,0: (63222,48830,33153) #F6BE81 srgb(246,190,129)
6,0: (63479,48316,33924) #F7BC84 srgb(247,188,132)
7,0: (64250,48830,34952) #FABE88 srgb(250,190,136)
The second-easiest option is the CImg C++ library, which you can obtain from here. I believe it is significantly simpler than OpenCV, which on a Raspberry Pi, for example, takes around 1 GB of space and over an hour to build, whereas CImg is a single header-only file that you include in your code, with no libraries needing to be installed, and it can do what you ask.
////////////////////////////////////////////////////////////////////////////////
// main.cpp
//
// CImg example of dumping pixels in hex
//
// Build with: g++-6 -std=c++11 main.cpp -lpng -lz -o main
////////////////////////////////////////////////////////////////////////////////
#include <iostream>
#include <cstdio>   // for snprintf()
#include <cstdlib>
#define cimg_use_png // Do this if you want CImg to process PNGs itself without resorting to ImageMagick
#define cimg_use_jpeg // Do this if you want CImg to process JPEGs itself without resorting to ImageMagick
#define cimg_use_tiff // Do this if you want CImg to process TIFFs itself without resorting to ImageMagick
#define cimg_use_curl // Do this if you want CImg to be able to load images via a URL
#define cimg_display 0 // Do this if you don't need a GUI and don't want to link to GDI32 or X11
#include "CImg.h"
using namespace cimg_library;
using namespace std;
int main() {
    // Load image
    CImg<unsigned char> img("input.png");

    // Get width, height, number of channels
    int w = img.width();
    int h = img.height();
    int c = img.spectrum();
    cout << "Dimensions: " << w << "x" << h << " " << c << " channels" << endl;

    // Dump all pixels
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            char hex[16];
            snprintf(hex, sizeof(hex), "#%02x%02x%02x", img(x,y,0), img(x,y,1), img(x,y,2));
            cout << y << "," << x << " " << hex << endl;
        }
    }
}
We can test that by creating a small gradient image from red-blue with ImageMagick like this:
convert -size 1x10 gradient:red-blue input.png
Here it is enlarged:
and running the program like this - hopefully you can see the hex goes from ff0000 (full red) to 0000ff (full blue):
./main
Sample Output
Dimensions: 1x10 3 channels
0,0 #ff0000
1,0 #8d0072
2,0 #1c00e3
3,0 #aa0055
4,0 #3800c7
5,0 #c70038
6,0 #5500aa
7,0 #e3001c
8,0 #72008d
9,0 #0000ff
All you need is OpenCV. To load a picture, read each pixel and print its hex value:
#include <opencv2/opencv.hpp>
#include <iomanip>
//....
Mat img = imread(filename);
for (int y = 0; y < img.rows; y++) {
    for (int x = 0; x < img.cols; x++) {
        Vec3b color = img.at<Vec3b>(y, x);
        // Cast to int first: streaming a uchar with std::hex prints a character, not a number
        cout << "#" << std::hex << std::setfill('0')
             << std::setw(2) << (int)color.val[2]  // red
             << std::setw(2) << (int)color.val[1]  // green
             << std::setw(2) << (int)color.val[0]  // blue
             << std::dec << endl;
    }
}
Note that OpenCV stores pixels in BGR order, so the channels are printed in reverse to get #RRGGBB.
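If you only need the `#RRGGBB` string, a small helper keeps the zero-padding right (a sketch; `rgbToHex` is my own function, not an OpenCV one). Remember that OpenCV stores pixels as BGR, so the channels must be swapped when calling it:

```cpp
#include <cstdio>
#include <string>

// Format an RGB triple as "#RRGGBB" (uppercase, zero-padded).
std::string rgbToHex(unsigned char r, unsigned char g, unsigned char b) {
    char buf[8];
    std::snprintf(buf, sizeof(buf), "#%02X%02X%02X", r, g, b);
    return std::string(buf);
}
```

With the `Vec3b color` from a loop like the one above, you would call `rgbToHex(color.val[2], color.val[1], color.val[0])`.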
I have the code below. It is an open real-time edge-detection example, but I get an error on the line: pProcessedFrame = cvCreateImage(cvSize(pFrame->width, pFrame->height), IPL_DEPTH_8U, 1);
"Unhandled exception at 0x00007FF6CAF1284C in opencv2.exe: 0xC0000005: Access violation reading location 0x000000000000002C."
Can anybody resolve this issue?
My configuration is Visual Studio 2013 and OpenCV 2.4.10.
#include <iostream>
#include "opencv/cv.h"
#include "opencv/highgui.h"
using namespace std;
// Define the IplImage pointers we're going to use as globals
IplImage* pFrame;
IplImage* pProcessedFrame;
IplImage* tempFrame;
// Slider for the low threshold value of our edge detection
int maxLowThreshold = 1024;
int lowSliderPosition = 150;
// Slider for the high threshold value of our edge detection
int maxHighThreshold = 1024;
int highSliderPosition = 250;
// Function to find the edges of a given IplImage object
IplImage* findEdges(IplImage* sourceFrame, double theLowThreshold, double theHighThreshold, double theAperture)
{
    // Convert source frame to greyscale version (tempFrame has already been initialised to use greyscale colour settings)
    cvCvtColor(sourceFrame, tempFrame, CV_RGB2GRAY);

    // Perform canny edge finding on tempFrame, and push the result back into itself!
    cvCanny(tempFrame, tempFrame, theLowThreshold, theHighThreshold, theAperture);

    // Pass back our now processed frame!
    return tempFrame;
}

// Callback function to adjust the low threshold on slider movement
void onLowThresholdSlide(int theSliderValue)
{
    lowSliderPosition = theSliderValue;
}

// Callback function to adjust the high threshold on slider movement
void onHighThresholdSlide(int theSliderValue)
{
    highSliderPosition = theSliderValue;
}
int main(int argc, char** argv)
{
    // Create two windows
    cvNamedWindow("WebCam", CV_WINDOW_AUTOSIZE);
    cvNamedWindow("Processed WebCam", CV_WINDOW_AUTOSIZE);

    // Create the low threshold slider
    // Format: Slider name, window name, reference to variable for slider, max value of slider, callback function
    cvCreateTrackbar("Low Threshold", "Processed WebCam", &lowSliderPosition, maxLowThreshold, onLowThresholdSlide);

    // Create the high threshold slider
    cvCreateTrackbar("High Threshold", "Processed WebCam", &highSliderPosition, maxHighThreshold, onHighThresholdSlide);

    // Create CvCapture object to grab data from the webcam
    CvCapture* pCapture;

    // Start capturing data from the webcam
    pCapture = cvCaptureFromCAM(CV_CAP_V4L2);

    // Display image properties
    cout << "Width of frame: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_FRAME_WIDTH) << endl; // Width of the frames in the video stream
    cout << "Height of frame: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_FRAME_HEIGHT) << endl; // Height of the frames in the video stream
    cout << "Image brightness: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_BRIGHTNESS) << endl; // Brightness of the image (only for cameras)
    cout << "Image contrast: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_CONTRAST) << endl; // Contrast of the image (only for cameras)
    cout << "Image saturation: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_SATURATION) << endl; // Saturation of the image (only for cameras)
    cout << "Image hue: " << cvGetCaptureProperty(pCapture, CV_CAP_PROP_HUE) << endl; // Hue of the image (only for cameras)

    // Grab an initial frame so we know the capture size
    pFrame = cvQueryFrame(pCapture);

    // Create a greyscale image which is the size of our captured image
    pProcessedFrame = cvCreateImage(cvSize(pFrame->width, pFrame->height), IPL_DEPTH_8U, 1);

    // Create a frame to use as our temporary copy of the current frame but in grayscale mode
    tempFrame = cvCreateImage(cvSize(pFrame->width, pFrame->height), IPL_DEPTH_8U, 1);

    // Loop control vars
    char keypress;
    bool quit = false;

    while (quit == false)
    {
        // Make an image from the raw capture data
        // Note: cvQueryFrame is a combination of cvGrabFrame and cvRetrieveFrame
        pFrame = cvQueryFrame(pCapture);

        // Draw the original frame in our window
        cvShowImage("WebCam", pFrame);

        // Process the frame to find the edges
        pProcessedFrame = findEdges(pFrame, lowSliderPosition, highSliderPosition, 3);

        // Show the processed output in our other window
        cvShowImage("Processed WebCam", pProcessedFrame);

        // Wait 20 milliseconds
        keypress = cvWaitKey(20);

        // Set the flag to quit if escape was pressed
        if (keypress == 27)
        {
            quit = true;
        }
    } // End of while loop

    // Release our stream capture object to free up any resources it has been using and release any file/device handles
    cvReleaseCapture(&pCapture);

    // Release our images. Note: do NOT release pFrame - it is owned by the capture
    // and was already freed by cvReleaseCapture.
    cvReleaseImage(&pProcessedFrame);

    // tempFrame was returned by findEdges and assigned to pProcessedFrame, which we have
    // already released, so set it to NULL to avoid releasing the same image twice.
    tempFrame = NULL;
    cvReleaseImage(&tempFrame);

    // Destroy all windows
    cvDestroyAllWindows();
}
Thank you all. I found the solution: my camera was not capturing images at all. I switched to another camera and now the code runs fine.