I have a series of images saved on my system according to their time stamps.
For example, the images are named:
20140305180348.jpg
20140305180349.jpg
I have 100 such images and I want to open them in OpenCV one after the other. I have tried cvCaptureFromFile(), but with it I can only open a single image at a time. I want to stitch/join them so that I can make a video.
I am sorry I cannot post the code, as I am not allowed to. How do I proceed?
In OpenCV you can write images to a video with VideoWriter, reading your image sequence in a loop:
VideoWriter outputVideo; // the output video container
// open the output: filename, FOURCC codec, frames per second, frame size
// (outputName, fps and frameSize are whatever fits your sequence)
outputVideo.open(outputName, CV_FOURCC('M','J','P','G'), fps, frameSize, true);
if (!outputVideo.isOpened())
{
    cout << "Could not open the output video for write: " << outputName << endl;
    return -1;
}
for (...)
{
    // read your frame, e.g. to Mat img
    // outputVideo.write(img); // either write() ...
    outputVideo << img;        // ... or the stream operator
}
cout << "Finished writing" << endl;
Check out here for more info.
This tutorial contains an example of how to write a video. Just modify the for-loop at the end.
pseudocode:
open videocontainer
int i = 0;
while (i < 100) {
    Mat img = imread("path" + to_string(i) + ".jpg");
    outputvideo << img;
    i++; // don't forget to advance the counter, otherwise this loops forever
}
close videocontainer
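Since the filenames in the original question are timestamps rather than a simple counter, you can also collect them with cv::glob instead of building the names by hand. A minimal, self-contained sketch (the directory, output name and FPS are placeholders; this assumes OpenCV 3+ for VideoWriter::fourcc):
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>
using namespace cv;

int main()
{
    std::vector<String> files;
    glob("images/*.jpg", files, false);   // matching paths, sorted, so timestamp names keep their order
    if (files.empty()) { std::cerr << "no images found" << std::endl; return -1; }

    Mat first = imread(files[0]);
    VideoWriter out("timelapse.avi", VideoWriter::fourcc('M', 'J', 'P', 'G'), 25, first.size(), true);
    if (!out.isOpened()) { std::cerr << "could not open the writer" << std::endl; return -1; }

    for (const auto& f : files)
    {
        Mat img = imread(f);
        if (img.empty() || img.size() != first.size())
            continue;                      // every frame must match the size given to the writer
        out << img;
    }
    return 0;                              // the writer is released in its destructor
}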
Related
I'm learning CUDA and came across a course that is helping me, even though the code is very old and I'm having problems running it; I'm trying to understand it. The teacher reads images using OpenCV imread, which gives a Mat object I guess, but the data is then kept as a uchar*:
cv::Mat image = cv::imread(filename.c_str(), CV_LOAD_IMAGE_COLOR);
but after that I was stuck on converting uchar to uchar4, and reading the teacher's code I found he wrote:
cv::Mat image = cv::imread(filename.c_str(), CV_LOAD_IMAGE_COLOR);
if (image.empty()) {
    std::cerr << "Couldn't open file: " << filename << std::endl;
    exit(1);
}
cv::cvtColor(image, imageInputRGBA, CV_BGR2RGBA);
// allocate memory for the output
imageOutputRGBA.create(image.rows, image.cols, CV_8UC4);
// This shouldn't ever happen given the way the images are created
// at least based upon my limited understanding of OpenCV, but better to check
if (!imageInputRGBA.isContinuous() || !imageOutputRGBA.isContinuous()) {
    std::cerr << "Images aren't continuous!! Exiting." << std::endl;
    exit(1);
}
*h_inputImageRGBA = (uchar4 *)imageInputRGBA.ptr<unsigned char>(0);
*h_outputImageRGBA = (uchar4 *)imageOutputRGBA.ptr<unsigned char>(0);
Are those two last lines where he subtly converts from uchar to uchar4?
h_inputImageRGBA and h_outputImageRGBA are both uchar4**.
Can somebody help me understand the code? Here is the link to the source; the function name is Preprocess.
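For what it's worth, those last two lines do not convert anything element by element. After cvtColor to RGBA, the Mat stores each pixel as four contiguous uchars (R, G, B, A); since uchar4 is exactly four bytes, casting the uchar* returned by ptr<unsigned char>(0) to uchar4* simply reinterprets the same buffer as an array with one uchar4 per pixel, which is also why the code insists on isContinuous(). A small sketch of the same idea (the names here are illustrative, not from the course code):
#include <opencv2/opencv.hpp>
#include <cuda_runtime.h>   // defines uchar4
#include <iostream>

int main()
{
    // a tiny 2x2 RGBA image: four unsigned chars per pixel
    cv::Mat rgba(2, 2, CV_8UC4, cv::Scalar(10, 20, 30, 255));

    // a Mat created like this is continuous, so the whole buffer can be
    // viewed as rows*cols uchar4 values laid out back to back
    if (!rgba.isContinuous()) return 1;

    uchar4* pixels = (uchar4*)rgba.ptr<unsigned char>(0);
    std::cout << (int)pixels[0].x << " " << (int)pixels[0].y << " "
              << (int)pixels[0].z << " " << (int)pixels[0].w << std::endl;   // prints 10 20 30 255
    return 0;
}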
I'm currently using the OpenCV (4.5.1-dev) C++ API to try to read frames from an .mp4 video file. I keep getting the error "can not open video file". I compiled OpenCV with
-DWITH_FFMPEG=OFF
as suggested in the OpenCV readme.txt. The minimum reproducible code is:
cv::VideoCapture cap;
cap.open(video_path, cv::CAP_V4L);
if (!cap.isOpened())
{
    std::cerr << "trying to open video file at: " << video_path << std::endl;
    CV_Error(cv::Error::StsError, "Can not open Video file");
}
std::vector<cv::Mat> frames;
for (int frameNum = 0; frameNum < cap.get(cv::CAP_PROP_FRAME_COUNT); frameNum++)
{
    cv::Mat frame;
    cap >> frame; // get the next frame from video
    frames.push_back(frame);
}
For video_path I tried both an absolute and a relative path; neither works. Any suggestions?
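For reference, cv::CAP_V4L is the Video4Linux backend, which is meant for camera devices rather than files; a file is normally opened with the default backend (cv::CAP_ANY) or cv::CAP_FFMPEG, and with -DWITH_FFMPEG=OFF some other backend such as GStreamer has to be present to decode .mp4. A minimal sketch of the difference (only the backend flag changes):
cv::VideoCapture cap;
cap.open(video_path, cv::CAP_ANY);        // let OpenCV pick any backend that can decode the file
// cap.open(video_path, cv::CAP_FFMPEG);  // or request FFmpeg explicitly when it is built in
if (!cap.isOpened())
{
    CV_Error(cv::Error::StsError, "Can not open Video file");
}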
The operations I did were quite simple:
I read an .avi file with a dimension of 1280x720, stored one frame of the video in a Mat object and displayed it.
Here is part of the code:
VideoCapture capL;
capL.open("F:/renderoutput/cube/left.avi");
Mat frameL;
cout << capL.get(CAP_PROP_FRAME_WIDTH) << ", " << capL.get(CAP_PROP_FRAME_HEIGHT) << endl;
for (;;)
{
    capL.read(frameL);
    cout << frameL.size() << endl;
    if (frameL.empty())
        break;
    imshow("Output", frameL);
    waitKey(200);
}
......
But the dimensions of capL and frameL are not the same: the former is 1280x720 and the latter 1280x360. Why is this happening? I have been using OpenCV 3.3.1 in Visual Studio for quite a long time, and one day this started happening.
Most likely the video is interlaced, so each frame you get contains only half of its height.
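If full-height frames are needed, one crude workaround is to stretch each half-height field back up to the size the capture reports (a real deinterlacing filter would of course do a better job); a small sketch:
Mat full;
resize(frameL, full,
       Size((int)capL.get(CAP_PROP_FRAME_WIDTH),
            (int)capL.get(CAP_PROP_FRAME_HEIGHT)),
       0, 0, INTER_LINEAR);   // stretch the 1280x360 field back to 1280x720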
I'm using OpenCV and v4l2loopback library to emulate video devices:
modprobe v4l2loopback devices=2
Then I check what devices I have:
root@blah:~$ v4l2-ctl --list-devices
Dummy video device (0x0000) (platform:v4l2loopback-000):
/dev/video1
Dummy video device (0x0001) (platform:v4l2loopback-001):
/dev/video2
XI100DUSB-SDI (usb-0000:00:14.0-9):
/dev/video0
video0 is my actual camera that I grab frames from; I then plan to process them via OpenCV and write them to video2 (which is a sink, I believe).
Here is how I attempt to do so:
int width = 320;
int height = 240;
Mat frame(height, width, CVX_8UC3, Scalar(0, 0, 255));
cvtColor(frame, frame, CVX_BGR2YUV);
int fourcc = CVX_FOURCC('Y', 'U', 'Y', '2');
cout << "Trying to open video for write: " << FLAGS_out_video << endl;
VideoWriter outputVideo = VideoWriter(
FLAGS_out_video, fourcc, 30, frame.size());
if (!outputVideo.isOpened()) {
cerr << "Could not open the output video for write: " << FLAGS_out_video
<< endl;
}
As far as I know, the video output format should be YUYV (which corresponds to YUY2 in OpenCV); please correct me if I'm wrong. In my code I'm not writing anything into outputVideo yet, just trying to open it for write, but I keep getting outputVideo.isOpened() == false for some reason, with no additional errors/info in the output:
root@blah:~$ main --uid='' --in_video='0' --out_video='/dev/video2'
Trying to open video for write: /dev/video2
Could not open the output video for write: /dev/video2
I'd appreciate any advice or help on how to debug/resolve this issue. Thank you in advance!
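One approach that is sometimes used for v4l2loopback sinks is to let OpenCV's GStreamer backend push the frames into the device, instead of asking VideoWriter to guess a container format for /dev/video2. This assumes an OpenCV build with GStreamer support and is only a sketch; the pipeline string is illustrative:
VideoWriter out(
    "appsrc ! videoconvert ! video/x-raw,format=YUY2 ! v4l2sink device=/dev/video2",
    CAP_GSTREAMER,
    0,                    // fourcc is ignored when a GStreamer pipeline is given
    30,                   // fps
    Size(320, 240),
    true);                // we hand over ordinary BGR frames; videoconvert does the conversion
if (!out.isOpened()) {
    cerr << "could not open the GStreamer pipeline" << endl;
}
// out.write(bgrFrame);   // then write BGR Mats in the processing loop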
I am trying to create some video from a set of large raw images that I have.
The code is as follows:
int imageWidth=687;
int imageHeight=916;
int fps=3;
int ex=-1;
CvSize size = cvSize(imageWidth,imageHeight);
VideoWriter outputVideo;
outputVideo.open(MovieOutput, ex, fps, size, true);
if (!outputVideo.isOpened())   // report failure to open
{
    cout << "error opening output video";
}
for (int frameNo = 0; frameNo < 58; frameNo++)
{
    ostringstream outfilename;
    outfilename << InputDir << (frameNo + 1) << ".jpg";
    rawimages.Read(frameNo);
    Mat image = rawimages.ToOpencvImage();
    imwrite(outfilename.str(), image);
    outputVideo << image;
    imshow("Image", image);
    if (waitKey(30) >= 0) break;
}
I can see the images being shown on screen, and the individual jpg files are saved to disk.
I can also see that the output avi is created, but its size is zero.
What is the problem with this code?
One note: the output frame size is quite big. Can it generate a movie at that size?
To summarize the comments: you pass the second parameter to the VideoWriter open command with a value of -1. This is supposed to open a codec selection dialogue in Windows, but as of OpenCV 2.4.5, the dialogue seems to be bugged - it appears, but I couldn't manage to write to a file afterwards.
Selecting a codec directly works fine and makes more sense in my opinion. More info about this command and available codecs can be found here.
outputVideo.open("example.avi", CV_FOURCC('M','J','P','G'), fps, size, true);