Extract frames from a video for detection with OpenCV - C++

I'm developing a program to detect objects in videos or images.
It works with images, but now I want to use it with video. I pick the images from a specific folder, so I wanted to save the frames from the video in that folder before running the detection.
The variables video and salvataggio are already set.
In the following code I iterate through the folder to analyze the videos:
DIR *dir;
dir = opendir(video.c_str());
string vidName;
struct dirent *ent;
if (dir != NULL) {
    while ((ent = readdir(dir)) != NULL) {
        vidName = ent->d_name;
        if (vidName.compare(".") != 0 && vidName.compare("..") != 0)
        {
            //string vidPath(neg + vidName);
            estraiframe(video, vidName, salvataggio);
        }
    }
    closedir(dir);
}
else {
    cout << "directory " << video << " not present" << endl;
}
}
The function estraiframe saves the frames in the output folder.
void estraiframe(string path, string vidName, string output){
    string vidPath(path + vidName);
    VideoCapture cap(vidPath);
    if (!cap.isOpened()){
        cout << "Cannot open the video file" << endl;
        return;
    }
    double count = cap.get(CV_CAP_PROP_FRAME_COUNT);
    double rate = cap.get(CV_CAP_PROP_FPS);
    int counter = 0;
    for (int i = 1; i < count; i += rate*5)
    {
        cap.set(CV_CAP_PROP_POS_FRAMES, i);
        Mat frame;
        cap.read(frame);
        counter++;
        string nomeframe = to_string(counter) + "-frame_from" + vidName + ".jpg";
        string percorso(output + nomeframe);
        cout << percorso;
        imwrite(percorso, frame);
    }
}
Apparently it works, but after the last frame it gives me the following error:
Assertion stream_index < ogg->nstreams failed at libavformat/oggdec.c:898
I looked for it but I didn't find where the error is.

Your video contains several stream indexes; e.g. 3 audio streams and 1 video stream would result in 4 stream indexes. I recommend that you check how many streams your video contains and print ogg->nstreams to see whether this corresponds. Looking at the source code in oggdec.c at line 898:
static int ogg_read_seek(AVFormatContext *s, int stream_index,
                         int64_t timestamp, int flags)
{
    struct ogg *ogg = s->priv_data;
    struct ogg_stream *os = ogg->streams + stream_index;
    int ret;
    av_assert0(stream_index < ogg->nstreams); /* Line 898 */
You're clearly going out of bounds here.
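For example, here is a minimal sketch of such a check using libavformat directly (the function name and the path handling are mine, not from your code; very old FFmpeg versions also need av_register_all() first):
extern "C" {
#include <libavformat/avformat.h>
}
#include <cstdio>

// Print how many streams the container actually holds, so you can compare
// it with the stream index that triggers the assertion.
int countStreams(const char* url)
{
    AVFormatContext* fmt = NULL;
    if (avformat_open_input(&fmt, url, NULL, NULL) < 0)
        return -1;
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return -1;
    }
    int n = (int)fmt->nb_streams;
    std::printf("container has %d stream(s)\n", n);
    avformat_close_input(&fmt);
    return n;
}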

Apparently I managed to solve it; I modified the function that extracts the frames.
void estraiframe(string path, string vidName, string output){
    string vidPath(path + vidName);
    VideoCapture cap(vidPath);
    if (!cap.isOpened()){
        cout << "Cannot open the video file" << endl;
        return;
    }
    double rate = cap.get(CV_CAP_PROP_FPS);
    int counter = 0;
    Mat frame;
    int i = 1;
    while (1)
    {
        cap.read(frame);
        if (frame.empty()) break;
        counter++;
        if (counter == rate*5*i){
            i++;
            string nomeframe = to_string(counter) + "-frame_from" + vidName + ".jpg";
            string percorso(output + nomeframe);
            imwrite(percorso, frame);
        }
        char key = waitKey(10);
        if (key == 27) break;
    }
}
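One caveat: rate*5 is a double, so the equality test counter == rate*5*i can silently never fire if the camera FPS is not a whole number. A minimal sketch of a more robust condition (same idea, just comparing against a precomputed integer step; the function name is mine):
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <string>
using namespace cv;
using namespace std;

// Sketch only: save roughly one frame every 5 seconds without relying
// on an exact floating-point match against the frame counter.
void estraiframeOgni5Secondi(string path, string vidName, string output)
{
    VideoCapture cap(path + vidName);
    if (!cap.isOpened()) return;
    double rate = cap.get(CV_CAP_PROP_FPS);
    int step = max(1, (int)(rate * 5 + 0.5));   // frames in a 5-second interval
    int counter = 0;
    Mat frame;
    while (cap.read(frame) && !frame.empty())
    {
        counter++;
        if (counter % step == 0)
        {
            string nomeframe = to_string(counter) + "-frame_from" + vidName + ".jpg";
            imwrite(output + nomeframe, frame);
        }
    }
}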

Related

input parameters for av_read_frame() method

I want to open and read a video in the encoded domain, without decoding. I have written the code below and it runs without errors, but the output of the method av_read_frame() just gives a number of zeros, and then the same negative integer value keeps repeating.
I'm not sure whether I passed the parameters to the method correctly. Please help.
void CFfmpegmethods::VideoRead(){
    av_register_all();
    const char *url = "H:\\Sanduni_projects\\ad_1.mp4";
    AVDictionary *options = NULL;
    AVFormatContext *s = avformat_alloc_context(); //NULL;
    //AVFormatContext *avfmt = NULL;
    //avformat_alloc_context();
    AVPacket pkt;
    //AVFormatContext *avformat_alloc_context();
    //AVIOContext *avio_alloc_context();
    //open an input stream and read the header
    int ret = avformat_open_input(&s, url, NULL, NULL);
    //avformat_find_stream_info(s, &options); //finding the missing information
    if (ret < 0)
        abort();
    av_dict_set(&options, "video_size", "640x480", 0);
    av_dict_set(&options, "pixel_format", "rgb24", 0);
    if (avformat_open_input(&s, url, NULL, &options) < 0){
        abort();
    }
    av_dict_free(&options);
    AVDictionaryEntry *e;
    if (e = av_dict_get(options, "", NULL, AV_DICT_IGNORE_SUFFIX)) {
        fprintf(stderr, "Option %s not recognized by the demuxer.\n", e->key);
        abort();
    }
    //int i = 0;
    while (1){
        //Split what is stored in the file into frames and return one for each call
        //returns the next frame of the stream
        int frame = av_read_frame(s, &pkt);
        //cout <<i << " " << frame << endl;
        waitKey(30);
        //i++;
    }
    //make the packet free
    av_packet_unref(&pkt);
    //Close the file after reading
    avformat_close_input(&s);
}
The method av_read_frame() outputs zeros while reading the packets and after that gives negative values. In my code the loop runs infinitely, so it prints an infinite number of negative values.
This is the modified code
while (1){
    //Split what is stored in the file into frames and return one for each call
    //returns the next frame of the stream
    int frame = av_read_frame(s, pkt);
    duration = pkt->duration;
    size = pkt->size;
    total_size = total_size + size;
    total_duration = total_duration + duration;
    i++;
    if (frame < 0) break;
    cout << "frame" << i << " " << size << " " << duration << endl;
}
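For comparison, here is a minimal sketch of a read loop that opens the file once, stops on a negative return value, and releases each packet after use (the function name is mine; av_packet_alloc()/av_packet_free() assume FFmpeg 3.0 or newer):
extern "C" {
#include <libavformat/avformat.h>
}
#include <cstdio>

int readPackets(const char* url)
{
    AVFormatContext* s = NULL;
    if (avformat_open_input(&s, url, NULL, NULL) < 0)
        return -1;
    avformat_find_stream_info(s, NULL);

    AVPacket* pkt = av_packet_alloc();
    int i = 0;
    long long total_size = 0;
    while (av_read_frame(s, pkt) >= 0)          // 0 on success, negative on EOF or error
    {
        total_size += pkt->size;
        std::printf("packet %d: stream %d, size %d, duration %lld\n",
                    i++, pkt->stream_index, pkt->size, (long long)pkt->duration);
        av_packet_unref(pkt);                   // release the buffer held by this packet
    }
    av_packet_free(&pkt);
    avformat_close_input(&s);
    std::printf("total size: %lld bytes in %d packets\n", total_size, i);
    return 0;
}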

Debug glibc free(): invalid pointer

I'm trying to debug code that eventually throws
*** glibc detected *** ./build/smonitor: free(): invalid pointer:
It's challenging because I don't use free... I've seen the other SO posts that have examples replicating the issue. I need help on how to debug. First off, I'm a C/C++ n00b, so my pointer skills are in development, but I'm not doing much dynamic memory allocation (I think).
I'm starting to write my own 'security' application where I take snapshots from cameras and write them to an NFS share; I'll eventually have a display of each camera's snapshot. Right now, I take captures from 1 camera and cycle them through my display window (using OpenCV). It can run for a while (~an hour) before I get the core dump. I create a thread to run the window; it should loop until my run flag gets set to false, and then I call cvReleaseImage. I have no idea why this is failing, any guidance is greatly appreciated!
// will be replaced with camera X filename on NFS share
std::string generate_filename()
{
    static const char alphanum[] =
        "0123456789"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        "abcdefghijklmnopqrstuvwxyz";
    std::string filename = "";
    std::stringstream ss;
    for (int i = 0; i < 10; i++)
    {
        ss << alphanum[rand() % (sizeof(alphanum) - 1)];
    }
    ss << ".jpg";
    printf("Generated filename: %s\n", ss.str().c_str());
    return ss.str();
}
std::string generate_file_path()
{
    std::stringstream ss;
    ss << CAPTURES_PATH << generate_filename();
    return ss.str();
}
void capture_photo(std::string& filepath)
{
    time_t now;
    time_t end;
    double seconds;
    bool cancelCapture = false;
    int count = 0;
    CvCapture* capture = cvCreateCameraCapture(0);
    printf("Opened camera capture\n");
    IplImage* frame;
    while (1)
    {
        frame = cvQueryFrame(capture);
        if (!frame)
        {
            fprintf(stderr, "Could not read frame from video stream\n\n");
        } else
        {
            cvShowImage(WINDOW, frame);
            cvWaitKey(100);
            if (get_snapshot_enabled()) // simulate delay between snapshots
            {
                filepath = generate_file_path();
                printf("Saving image\n");
                cvSaveImage(filepath.c_str(), frame);
                break;
            }
        }
    }
    printf("Ending camera capture\n");
    cvReleaseCapture(&capture);
}
void* manage_window(void* arg)
{
    time_t now;
    time_t end;
    double seconds = 0;
    double stateSec;
    int i = 0;
    int rem = 0;
    IplImage* images[10];
    time_t lastUpdate;
    time_t tDiff; // time diff
    cvNamedWindow(WINDOW, CV_WINDOW_FREERATIO);
    cvSetWindowProperty(WINDOW, CV_WND_PROP_FULLSCREEN, CV_WINDOW_FULLSCREEN);
    std::string filepath;
    time(&now);
    int lastPos = 0;
    while (1)
    {
        if (get_snapshot_enabled())
        {
            write_console_log("Going to capture photo\n");
            // camera was selected
            filepath = generate_file_path();
            printf("Generated filepath: %s\n", filepath.c_str());
            capture_photo(filepath);
            if (!filepath.empty())
            {
                printf("Received filepath: %s\n", filepath.c_str());
                time(&now);
                images[lastPos] = cvLoadImage(filepath.c_str());
                cvShowImage(WINDOW, images[lastPos]);
                cvWaitKey(100);
                if (lastPos == 10) lastPos = 0;
                else lastPos++;
            }
        }
        time(&end);
        seconds = difftime(end, now);
        if (seconds >= 5)
        {
            cvShowImage(WINDOW, images[i % 10]);
            cvWaitKey(100);
            i++;
            time(&now);
        }
        // check if we're running
        if (!get_running())
        {
            // log some error we're not running...
            write_logs("Window thread exiting, not running...");
            break;
        }
    }
    for (i = 0; i < 10; i++)
        cvReleaseImage(&images[i]);
    pthread_exit(NULL);
}
In void* manage_window(void* arg) there are lines
IplImage* images[10];
and
images[lastPos] = cvLoadImage(filepath.c_str());
if (lastPos == 10) lastPos = 0;
else lastPos++;
where lastPos can be incremented to 10, leading to
images[10] = cvLoadImage(filepath.c_str());
i.e. an invalid write beyond the end of the array. I think something like valgrind would have detected this.
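A minimal sketch of the fix (wrapping lastPos with modulo so it never reaches 10; initializing the slots and releasing a slot before reusing it are my additions, assuming the OpenCV 2.x C API headers you are already using):
#include <opencv2/highgui/highgui.hpp>
#include <string>

static IplImage* images[10] = { 0 };   // all slots start empty
static int lastPos = 0;

// Store a freshly captured snapshot in the ring buffer.
void storeSnapshot(const std::string& filepath)
{
    if (images[lastPos] != 0)
        cvReleaseImage(&images[lastPos]);          // free the slot we are about to reuse
    images[lastPos] = cvLoadImage(filepath.c_str());
    lastPos = (lastPos + 1) % 10;                  // wraps 9 -> 0, never reaches 10
}

// At shutdown, release whatever is still loaded.
void releaseSnapshots()
{
    for (int i = 0; i < 10; i++)
        if (images[i] != 0)
            cvReleaseImage(&images[i]);
}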

How can I know if an image is RGB in OpenCV?

I've made a program in C++ using the OpenCV library. The program records video from the webcam and then splits it into frames. I want to know if the frames are in RGB because I want to access the RGB properties of every pixel. The codec for capture is CV_FOURCC('M','J','P','G'). How can I get the frames in the RGB color space?
int main() {
    Mat image;
    VideoCapture cap(0);
    cap.set(CV_CAP_PROP_FPS, 10);
    if (!cap.isOpened()) {
        cout << "ERROR : Cannot open the video file" << endl;
        return -1;
    }
    namedWindow("MyWindow", CV_WINDOW_AUTOSIZE);
    double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH);
    double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT);
    cout << "Frame size :" << dWidth << "x" << dHeight << endl;
    Size frameSize(static_cast<int>(dWidth), static_cast<int>(dHeight));
    VideoWriter oVideoWriter("E:/myVideo.avi", CV_FOURCC('M', 'J', 'P', 'G'), 10, frameSize, true);
    if (!oVideoWriter.isOpened()) {
        cout << "ERROR : Failed to write the video" << endl;
        return -1;
    }
    while (1) {
        Mat image;
        bool bSuccess = cap.read(image);
        if (!bSuccess) {
            cout << "ERROR : Cannot read a frame from video file" << endl;
            break;
        }
        oVideoWriter.write(image);
        imshow("MyWindow", image);
        if (waitKey(10) == 27) {
            saveImages();
            cout << "ESC key is pressed by user" << endl;
            break;
        }
    }
    return 0;
}
int saveImages() {
    CvCapture *capture = cvCaptureFromFile("E:/myVideo.avi");
    if (!capture)
    {
        cout << "!!! cvCaptureFromAVI failed (file not found?)" << endl;
        return -1;
    }
    int fps = (int) cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);
    IplImage* frame = NULL;
    int frame_number = 0;
    char key = 0;
    while (key != 'q')
    {
        frame = cvQueryFrame(capture);
        if (!frame)
        {
            cout << "!!! cvQueryFrame failed: no frame" << endl;
            break;
        }
        char filename[100];
        strcpy(filename, "frame_");
        char frame_id[30];
        _itoa(frame_number, frame_id, 10);
        strcat(filename, frame_id);
        strcat(filename, ".jpg");
        printf("* Saving: %s\n", filename);
        if (!cvSaveImage(filename, frame))
        {
            cout << "!!! cvSaveImage failed" << endl;
            break;
        }
        frame_number++;
        key = cvWaitKey(1000 / fps);
    }
    cvReleaseCapture(&capture);
    return 0;
}
When OpenCV loads colored images (i.e. 3-channel) from the disk, camera, or a video file, the image data is stored in the BGR format. This is a simple test that you can do:
/* Code using the C++ API */
cv::VideoCapture cap(0);
if (!cap.isOpened()) {
    std::cout << "!!! Failed to open webcam" << std::endl;
    return -1;
}
cv::Mat frame;
if (!cap.read(frame)) {
    std::cout << "!!! Failed to read a frame from the camera" << std::endl;
    return -1;
}
bool is_colored = false;
if (frame.channels() == 3) {
    is_colored = true;
}
// Do something with is_colored
// ...
Unless you have a weird camera, the frames will always be colored (and, as a result, stored as BGR).
When cv::imwrite() (C++ API) or cvSaveImage() (C API) is called, OpenCV does the proper magic tricks to ensure the data is saved in a way compatible with the requested output format (JPG, PNG, AVI, etc.), and during this process it automatically converts the data to RGB if it needs to.
Nevertheless, if for some reason you need to convert the image to RGB you can call:
cv::Mat img_rgb;
cv::cvtColor(frame, img_rgb, CV_BGR2RGB);
Please note that OpenCV has a C API and also a C++ API, and they shouldn't be mixed:
If you use IplImage then stick with the rest of the C API.
If you decide to go with cv::Mat, then keep using the C++ API.
There are different ways to access the pixels of a cv::Mat, here is one of them:
unsigned char* pixels = (unsigned char*)(frame.data);
for (int i = 0; i < frame.rows; i++)
{
    for (int j = 0; j < frame.cols; j++)
    {
        unsigned char b = pixels[frame.step * i + j * frame.channels() + 0];
        unsigned char g = pixels[frame.step * i + j * frame.channels() + 1];
        unsigned char r = pixels[frame.step * i + j * frame.channels() + 2];
    }
}
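Alternatively, here is a sketch using cv::Mat::at with cv::Vec3b, which avoids the manual step arithmetic (the function name is mine; it assumes an 8-bit, 3-channel BGR image):
#include <opencv2/core/core.hpp>

void scanPixels(const cv::Mat& frame)
{
    for (int i = 0; i < frame.rows; i++)
    {
        for (int j = 0; j < frame.cols; j++)
        {
            cv::Vec3b bgr = frame.at<cv::Vec3b>(i, j);   // channel order is B, G, R
            unsigned char b = bgr[0];
            unsigned char g = bgr[1];
            unsigned char r = bgr[2];
            (void)b; (void)g; (void)r;                   // replace with your per-pixel work
        }
    }
}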

Get depth map from Kinect depth data with OpenNI and OpenCV

I'm a beginner in programming with OpenNI and OpenCV, and I'm working with the Kinect.
Here is part of the code that I'm using to get the depth map data (it's working).
Now my question is:
How could I get the depth map back as an image?
My data is in DepthPixel* pDepth1 and I want to display the depth map image (because I want to save it).
Thank you.
VideoFrameRef frame;
DepthPixel* pDepth1 = NULL;
DepthPixel* pDepth2 = NULL;
for (int i = 0; i < 2; i++)
{
    int changedStreamDummy;
    VideoStream* pStream = &depth;
    rc = OpenNI::waitForAnyStream(&pStream, 1, &changedStreamDummy, SAMPLE_READ_WAIT_TIMEOUT);
    if (rc != STATUS_OK)
    {
        printf("Wait failed! (timeout is %d ms)\n%s\n", SAMPLE_READ_WAIT_TIMEOUT, OpenNI::getExtendedError());
        continue;
    }
    rc = depth.readFrame(&frame);
    if (rc != STATUS_OK)
    {
        printf("Read failed!\n%s\n", OpenNI::getExtendedError());
        continue;
    }
    if (frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_1_MM && frame.getVideoMode().getPixelFormat() != PIXEL_FORMAT_DEPTH_100_UM)
    {
        printf("Unexpected frame format\n");
        continue;
    }
    if (i == 0){
        int dummy;
        cout << "Press any key to take first pic: ";
        cin >> dummy;
        pDepth1 = (DepthPixel*)frame.getData();
    }
    else{
        int dummy;
        cout << "Press any key to take second pic: ";
        cin >> dummy;
        pDepth2 = (DepthPixel*)frame.getData();
    }
Assuming that you have your dependencies sorted out, the following has worked for me in the past:
//set up the capture with the correct settings
VideoCapture* capture = new VideoCapture;
capture->open(CV_CAP_OPENNI);
capture->set(CV_CAP_OPENNI_IMAGE_GENERATOR_OUTPUT_MODE, CV_CAP_OPENNI_VGA_30HZ);
//get an image and display it
Mat image;
capture->grab();
capture->retrieve(image, CV_CAP_OPENNI_DEPTH_MAP);
Then work with the image as you would with any other matrix. Remember that the image is CV_16U, as the capture records depths in 16-bit unsigned integers.
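If you then want to save or display the depth map, a minimal sketch of scaling it down to 8 bits for viewing (the file names and the 4096 mm range used for the scale factor are my assumptions; PNG can also store the raw 16-bit values directly):
#include <opencv2/opencv.hpp>

// depth is the CV_16UC1 matrix retrieved with CV_CAP_OPENNI_DEPTH_MAP.
void saveDepth(const cv::Mat& depth)
{
    cv::imwrite("depth_raw.png", depth);              // PNG keeps the full 16-bit values

    cv::Mat depth8;
    depth.convertTo(depth8, CV_8UC1, 255.0 / 4096.0); // map ~4 m of range into 0..255
    cv::imshow("depth", depth8);
    cv::imwrite("depth_view.jpg", depth8);
    cv::waitKey(0);
}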

Write/save video stream if detection is true

I want to write a video stream after a detection is found to be true.
I use this link as a VideoWriter example.
My code implementation looks like this:
int main(int argc, const char** argv) {
    bool detection = false;
    VideoCapture cap(-1);
    if (!cap.isOpened())
    {
        printf("ERROR: Cannot open the video file");
    }
    namedWindow("MyVideo", CV_WINDOW_AUTOSIZE);
    double dWidth = cap.get(CV_CAP_PROP_FRAME_WIDTH);
    double dHeight = cap.get(CV_CAP_PROP_FRAME_HEIGHT);
    cout << "Frame Size = " << dWidth << "x" << dHeight << endl;
    Size frameSize(static_cast<int>(dWidth), static_cast<int>(dHeight));
    VideoWriter record("/home/hacker/MyVideo.avi", CV_FOURCC('P','I','M','1'),
                       30, frameSize, true);
    if (!record.isOpened())
    {
        printf("Error: Failed to write the video");
        return -1;
    }
    while (true)
    {
        Mat frame;
        if (!frame.empty())
        {
            detectAndDisplay(frame);
        }
        else
        {
            printf(" --(!) No captured frame -- Break!"); break;
        }
        if (detection == true)
        {
            record.write(frame);
        }
        char c = cvWaitKey(33);
        if (c == 27) { break; }
    }
    return 0;
}
In my home directory I can see MyVideo.avi, but it's empty.
I got the following errors on the command line:
VIDIOC_QUERMENU: Invalid argument
VIDIOC_QUERMENU: Invalid argument
Frame size: 640x480 Output #0, avi, to '/home/hacker/MyVideo.avi":
Stream #0.0: Video: mpeg1video (hq), yvu420p, 640x480, q=2-31, 19660
kb/s, 9ok tbn, 23,98 tbc
--(!) No captured frame -- Break! Process returned 0 (0x0) execution time: 0,75 s
You should release the VideoWriter (record.release();). It closes the file.
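In other words, a minimal sketch of the intended structure (not your full program, and the function name is mine): create the writer once, write inside the loop, and release it after the loop so the AVI is finalized:
#include <opencv2/opencv.hpp>
using namespace cv;

int recordWebcam(const char* outPath)
{
    VideoCapture cap(-1);
    if (!cap.isOpened()) return -1;
    Size frameSize((int)cap.get(CV_CAP_PROP_FRAME_WIDTH),
                   (int)cap.get(CV_CAP_PROP_FRAME_HEIGHT));
    VideoWriter record(outPath, CV_FOURCC('P','I','M','1'), 30, frameSize, true);
    if (!record.isOpened()) return -1;
    Mat frame;
    while (cap.read(frame) && !frame.empty())
    {
        record.write(frame);
        if (waitKey(33) == 27) break;   // ESC stops recording
    }
    record.release();   // finalizes the AVI so it is not left empty
    return 0;
}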
I tried to solve it like this:
But I have 2 problems:
I want to save MyVideo.avi only if detectAndDisplay(frame) == true, but it saves it anyway (with an empty video recording). And when it does save, the recorded video plays back faster.
int main(int argc, const char** argv) {
    Mat frame;
    VideoCapture capture(-1); // open the default camera
    if (!capture.isOpened())
    {
        printf("Camera failed to open!\n");
        return -1;
    }
    capture >> frame; // get first frame for size
    for (;;)
    {
        // get a new frame from camera
        capture >> frame;
        //-- 3. Apply the classifier to the frame
        if (!frame.empty())
        {
            detectAndDisplay(frame);
        }
        if (detectAndDisplay(frame) == true)
        {
            // record video
            VideoWriter record("MyVideo.avi", CV_FOURCC('D','I','V','X'), 30, frame.size(), true);
            if (!record.isOpened())
            {
                printf("VideoWriter failed to open!\n");
                return -1;
            }
            // add frame to recorded
            record << frame;
        }
        if (waitKey(30) >= 0) break;
    }
    return 0;
}
/** #function detectAndDisplay */
/** this function detects a face, draws a rectangle around it, detects the eyes & mouth and draws circles around them */
bool detectAndDisplay(Mat frame) {
    ...
    ...
    return true;
}
This might be why your video file was empty.
bool detection = false;
...
if (detection == true)
{
    record.write(frame);
}
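detection is initialized to false and never updated, so record.write(frame) never runs. A minimal sketch of wiring the flag to the detector's result (assuming your detectAndDisplay returns true on a hit, as shown above; the VideoWriter should be created once, outside the loop, and the detector called only once per frame):
#include <opencv2/opencv.hpp>
using namespace cv;

bool detectAndDisplay(Mat frame);   // your existing detector, assumed to return true on a hit

// Sketch: call the detector once per frame, remember its result,
// and write only the frames where it reported a detection.
void recordDetections(VideoCapture& cap, VideoWriter& record)
{
    Mat frame;
    while (cap.read(frame) && !frame.empty())
    {
        bool detection = detectAndDisplay(frame);   // update the flag every frame
        if (detection)
            record.write(frame);                    // only detected frames go into the AVI
        if (waitKey(30) == 27) break;
    }
    record.release();                               // finalize the file when done
}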