So I'm writing a C++ program that takes a wav file, generates a visualization, and exports the video alongside the audio through an ffmpeg pipe. I've been able to get output to ffmpeg just fine, and ffmpeg creates a video with both the visualization and the audio.
The problem is that the video and audio desync. The video is simply too fast and ends before the song is finished (the video file itself is the correct length; the waveform just flatlines and stops, which tells me ffmpeg ran out of video frames and held the last frame it received until the audio ended). So I'm not sending enough frames to ffmpeg.
Below is a truncated version of the source code:
// Example code
int main()
{
    // LoadAudio();

    uint32_t frame_max = audio.sample_rate / 24; // 24 frames per second
    uint32_t frame_counter = 0;

    // InitializePipe2FFMPEG();

    // (Left channel and right channel are always equal in size)
    for (uint32_t i = 0; i < audio.left_channel.size(); ++i)
    {
        // UpdateWaveform4Image();

        if (frame_counter % frame_max == 0)
        {
            // DrawImageAndSend2Pipe();
            frame_counter = 1;
        }
        else
        {
            ++frame_counter;
        }
    }

    // FlushAndClosePipe();
    return 0;
}
The commented-out functions are placeholders and irrelevant to the problem. I know they're irrelevant because "UpdateWaveform4Image()" updates the waveform used to generate the image on every sample. (I know that's inefficient, but I'll worry about optimization later.) The waveform is a std::vector in which each element stores the y-coordinate of one sample, and it has no effect on when the program generates a new frame for the video.
Also, ffmpeg is set to output 24 frames per second. Trust me, I thought that was the problem too, because by default ffmpeg outputs at 25 fps.
My line of thinking for the modulus check is that frame_counter is incremented on every sample. frame_max equals 2000 because 48000 / 24 = 2000, and I know the audio is clocked at 48 kHz because I created the file myself. So it SHOULD generate a new image every 2000 samples.
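To make the arithmetic concrete, here is a small self-contained simulation of the counter logic above (the numbers are assumed for illustration: a 3-minute clip at 48 kHz, 24 fps). It prints how many frames the loop emits versus how many frames a 24 fps video of that length needs.
#include <cstdint>
#include <iostream>

int main()
{
    // Assumed numbers for illustration only: 48 kHz audio, 24 fps, 3-minute clip.
    const uint64_t sample_rate   = 48000;
    const uint64_t fps           = 24;
    const uint64_t total_samples = sample_rate * 180;
    const uint64_t frame_max     = sample_rate / fps; // 2000, as in the question

    // Frames a 24 fps video of the same duration needs:
    const uint64_t expected_frames = total_samples * fps / sample_rate;

    // Frames the loop above would emit with the same counter logic:
    uint64_t sent = 0, frame_counter = 0;
    for (uint64_t i = 0; i < total_samples; ++i)
    {
        if (frame_counter % frame_max == 0) { ++sent; frame_counter = 1; }
        else                                { ++frame_counter; }
    }

    std::cout << "expected " << expected_frames
              << " frames, loop emits " << sent << "\n";
    return 0;
}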
Here is a link to the output video: [output]
Any advice would be helpful.
EDIT: Skip to 01:24 to see the waveform flatline.
Related
I am working with RTMP and ffmpeg/libav. I have a clip at 60 fps that is made of 2 consecutive video frames followed by 1 audio frame. The 2 consecutive video frames correspond to left and right images taken from 2 cameras (already synced).
I use av_read_frame to read a frame, and if it is a video frame I decode it with avcodec_decode_video2 (I know it is deprecated, but it still works and I will replace it later). Since I have to put the 2 video frames side by side, I wait until both frames have been read and then join them with OpenCV. The problem is that I seem to read only left frames, interrupted occasionally by right frames. I searched the problem a bit and it seems to be related to flushing of the decoder, so I tried flushing the decoder with empty packets and with avcodec_flush_buffers, but that made things even worse: I get both left and right images, but only a few frames.
Below is a sample of my code:
bool left = false;
while(true)
{
    ret = av_read_frame(fmt_ctx, &pkt);
    if(ret < 0)
        break;

    if(pkt.stream_index == video_index)
    {
        ret = avcodec_decode_video2(video_dec_ctx, frame, &got_frame, &pkt);
        if(got_frame)
        {
            if(!left)
            {
                //I SAVE THE LEFT FRAME
                left = true;
            }
            else
            {
                //I PUT THE LEFT AND RIGHT FRAME TOGETHER
                left = false;
            }
        }
    }
}
UPDATE
I figured out what the problem is: basically, the clip/stream I am trying to read is made of 1 I-frame followed by P-frames that are actually I-frames, because each frame represents Left or Right.
I cannot force anything with av_read_frame, so I have to find a workaround.
I'm guessing adjacent frames (left and right) may have the same PTS values. You could first buffer the frames and then fetch them according to PTS value (look for matching values); once the frames are fetched correctly, it should be trivial to adjoin them.
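A rough sketch of that buffering idea (untested; onDecodedFrame and the side-by-side join are hypothetical names, and it assumes the decoder fills in pts on the frames):
#include <cstdint>
#include <map>
extern "C" {
#include <libavutil/frame.h>
}

// Keep the first frame seen for each PTS; when a second frame arrives with the
// same PTS, treat the two as a left/right pair and hand them off to be joined.
static std::map<int64_t, AVFrame*> pending;

void onDecodedFrame(AVFrame* frm)            // called for every got_frame
{
    const int64_t pts = frm->pts;            // assumes the decoder filled pts
    auto it = pending.find(pts);
    if (it == pending.end()) {
        pending[pts] = av_frame_clone(frm);  // first of the pair: buffer a copy
    } else {
        // joinSideBySide(it->second, frm);  // hypothetical OpenCV-based helper
        av_frame_free(&it->second);
        pending.erase(it);
    }
}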
Hope that helps.
I would like to ask how to split a video (.mp4) into several videos by time using OpenCV (in C++). For example, I have a video 10 seconds long and I want to create two videos from it: the first captures the frames of the original video between 0 and 5 seconds, and the second captures the frames between 6 and 10 seconds.
Does anybody know the answer?
Just read the inputVideo and calculate how many frames you need.
Then write that many frames to the first output video and the rest to the second.
Something like this should work:
int countFirstVideo = 0;
for(;;) // read frames and split them between the two output videos
{
    inputVideo >> src;            // read
    if (src.empty()) { break; }   // check if at end

    countFirstVideo++;
    if (countFirstVideo <= myDesignatedSize)
    {
        outputVideo1 << src;
    }
    else
    {
        outputVideo2 << src;
    }
}
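To get myDesignatedSize for a 5-second first clip, one option is to take the frame rate the container reports (with the caveat that CV_CAP_PROP_FPS can come back as 0 for some files):
// frames in the first clip = reported fps * seconds (5 s assumed here)
double fps = inputVideo.get(CV_CAP_PROP_FPS);       // may be 0 for some containers
int myDesignatedSize = static_cast<int>(fps * 5.0); // e.g. 30 fps -> 150 frames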
I am having issues reading a recording I made using the Recorder class in OpenNI2 with the ASUS Xtion PRO LIVE. The problem is that roughly once every ~50 frames a wrong frame is read from the recording. I verified this by storing the generated image (converted to an OpenCV matrix) as a .png with an index (using OpenCV's imwrite function). My code is shown below; I also tried the code posted in the "OpenNI2: Reading in .oni Recording" question, but that doesn't work either. The video streams, device, VideoFrameRefs and the playback controller are all initialized in an initialization function, which I can post as well if necessary.
openni::VideoStream depthStream;
openni::VideoStream colorStream;
openni::Device device;
openni::VideoFrameRef m_depthFrame;
openni::VideoFrameRef m_colorFrame;
openni::PlaybackControl *controler;

typedef struct _ImagePair {
    cv::Mat GrayscaleImage;
    cv::Mat DepthImage;
    cv::Mat RGBImage;
    float AverageDistance;
    std::string FileName;
    int index;
} ImagePair;

ImagePair SensorData::getNextRawImagePair(){
    ImagePair result;
    openni::Status rc;

    std::cout<<"Getting next raw image pair...";
    rc = controler->seek(depthStream, rawPairIndex);
    if(!correctStatus(rc,"Seek depth"))return result;

    openni::VideoStream** m_streams = new openni::VideoStream*[2];
    m_streams[0] = &depthStream;
    m_streams[1] = &colorStream;

    bool newDepth=false,newColor=false;
    while(!newDepth || !newColor){
        int changedIndex;
        openni::Status rc = openni::OpenNI::waitForAnyStream(m_streams, 2, &changedIndex);
        if (rc != openni::STATUS_OK)
        {
            printf("Wait failed\n");
            //return 1;
        }
        switch (changedIndex)
        {
        case 0:
            rc = depthStream.readFrame(&m_depthFrame);
            if(!correctStatus(rc,"Read depth")){
                return result;
            }
            newDepth = true;
            break;
        case 1:
            rc = colorStream.readFrame(&m_colorFrame);
            if(!correctStatus(rc,"Read color")){
                return result;
            }
            newColor = true;
            break;
        default:
            printf("Error in wait\n");
        }
    }

    //convert rgb to bgr cv::matrix
    cv::Mat cv_image;
    const openni::RGB888Pixel* colorImageBuffer = (const openni::RGB888Pixel*)m_colorFrame.getData();
    cv_image.create(m_colorFrame.getHeight(), m_colorFrame.getWidth(), CV_8UC3);
    memcpy( cv_image.data, colorImageBuffer, 3*m_colorFrame.getHeight()*m_colorFrame.getWidth()*sizeof(uint8_t) );
    //convert to BGR opencv color order
    cv::cvtColor(cv_image,cv_image,cv::COLOR_RGB2BGR);
    result.RGBImage = cv_image.clone();

    //convert to grayscale
    cv::cvtColor(cv_image,cv_image,cv::COLOR_BGR2GRAY);
    result.GrayscaleImage = cv_image.clone();

    //convert depth to cv::Mat INT16
    const openni::DepthPixel* depthImageBuffer = (const openni::DepthPixel*)m_depthFrame.getData();
    cv_image.create(m_depthFrame.getHeight(), m_depthFrame.getWidth(), CV_16UC1);
    memcpy( cv_image.data, depthImageBuffer, m_depthFrame.getHeight()*m_depthFrame.getWidth()*sizeof(INT16) );
    result.DepthImage = cv_image.clone();

    result.index = rawPairIndex;
    rawPairIndex++;

    std::cout<<"done"<<std::endl;
    return result;
}
I also tried it without using the waitForAnyStream part, but that only made it worse. The file I am trying to load is over 1 GB; I am not sure if that is a problem. The behaviour seems random, because the indices of the wrong images are not always the same.
UPDATE:
I changed the seek function to seek in the color stream instead of the depth stream. I also ensured that each stream is only waited for once in the waitForAnyStream function by setting the corresponding pointer in m_streams to null.
I found out that it is possible to get the actual frame index for each frame (.getFrameIndex), so I was able to compare the indices. After getting 1000 images there were 18 wrong color images, 17 of which had an index error of 53 and 1 had an error of 46. The depth images had an almost constant index error of 9.
The second thing I found was that after adding a sleep of 1 ms (I also tried 10 ms, but the results were the same) before the waitForAnyStream call, the returned indices change significantly. The indices don't make large jumps anymore, but the color images have a constant offset of 53 and the depth images have an offset of 9.
When I change the seek function to seek in the depth stream instead, then with the delay the color stream has a constant error of 46 and the depth stream has an error of 1. Without the delay the color stream has an error of 45 and the depth stream has no error, with occasional spikes of 1.
I also looked at the indices when I don't use seek and waitForAnyStream but just do it as proposed in the answer to "OpenNI2: Reading in .oni Recording". This shows that when the file is read by just calling m_readframe multiple times, the first color frame has index 59 instead of 0. After checking, it turns out that frame 0 does exist and is an earlier frame than frame 59. So just opening the file and using readframe doesn't work at all. There are, however, no index errors, just like when I added the sleep to my own implementation.
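The index comparison boils down to something like this (a sketch, assuming rawPairIndex is the index that was passed to seek()):
// How far the frame index stored in the recording is from the requested index.
int colorError = m_colorFrame.getFrameIndex() - (int)rawPairIndex;
int depthError = m_depthFrame.getFrameIndex() - (int)rawPairIndex;
std::cout << "color off by " << colorError
          << ", depth off by " << depthError << std::endl;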
UPDATE2:
I have come to the conclusion that either the OpenNI2 library doesn't work properly or I have installed it incorrectly, because I am also having problems setting the Xtion to a resolution of 640x480 for both streams: when I do, the depth image only gets updated once every ~100 seconds. I have written a quick fix for my original problem by simply filtering out the frames whose indices are wrong and continuing with the next image.
I would still like to know how to fix this, so if you know please tell me.
UPDATE 3:
The code I am using to set the resolution and FPS for recording is:
//colorstream
openni::VideoMode color_videoMode = m_colorStream.getVideoMode();
color_videoMode.setResolution(WIDTH,HEIGHT);
color_videoMode.setFps(30);
m_colorStream.setVideoMode(color_videoMode);
//depth stream
openni::VideoMode depth_videoMode = m_depthStream.getVideoMode();
depth_videoMode.setResolution(WIDTH,HEIGHT);
depth_videoMode.setFps(30);
m_depthStream.setVideoMode(depth_videoMode);
I forgot to mention that I also tested the resolution by running the sampleviewer example program (I think; it was a few months ago) and changing the resolution in the XML config file. There the color images were shown fine, but the depth images updated very rarely.
In OpenCV, is there a way to dramatically increase the frame rate of a video (.mp4) during playback? I have tried two methods to speed up playback, including:
Increasing the frame rate:
cv.SetCaptureProperty(cap, cv.CV_CAP_PROP_FPS, int XFRAMES)
Skipping frames:
for( int i = 0; i < playbackSpeed; i++ ){originalImage = frame.grab();}
&
video.set (CV_CAP_PROP_POS_FRAMES, (double)nextFrameNumber);
Is there another way to achieve the desired result? Any suggestions would be greatly appreciated.
Update
Just to clarify, the playback speed is NOT slow; I am just searching for a way to make it much faster.
You're using the old API (cv.CaptureFromFile) to capture from a video file.
If you use the new C++ API, you can grab frames at the rate you want. Here is a very simple code sample:
#include "opencv2/opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap("filename"); // put your filename here
namedWindow("Playback",1);
int delay = 15; // 15 ms between frame acquisitions = 2x fast-forward
while(true)
{
Mat frame;
cap >> frame;
imshow("Playback", frame);
if(waitKey(delay) >= 0) break;
}
return 0;
}
Basically, you just grab a frame on each loop iteration and wait between frames using waitKey(). The number of milliseconds that you wait between frames sets your speedup. Here I put 15 ms, so this example will play a 30 fps video at approximately twice the speed (minus the time necessary to grab a frame).
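If you want to choose a speed-up factor instead of hard-coding 15 ms, you can derive the delay from the FPS the file reports (assuming the container reports it at all):
// delay in ms = 1000 / (fps * speedup); waitKey() needs at least 1 ms.
double fps = cap.get(CV_CAP_PROP_FPS); // e.g. 30
double speedup = 2.0;                  // 2x fast-forward
int delay = (int)(1000.0 / (fps * speedup));
if (delay < 1) delay = 1;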
Another option, if you really want control over how and what you grab from video files, is to use the GStreamer API to grab the images and then convert them to OpenCV for image processing. You can find some info about this in this post: MJPEG streaming and decoding
Marked question as outdated since it uses the deprecated avcodec_decode_video2.
I'm currently experiencing artifacts when decoding video using ffmpeg's API. On what I assume to be intermediate frames, artifacts build up slowly, and only from active movement in the frame. These artifacts build for 50-100 frames until (I assume) a keyframe resets them. Frames are then decoded correctly and the artifacts proceed to build up again.
One thing that is bothering me is that I have a few video samples at 30 fps (h264) that work correctly, but all of my 60 fps videos (h264) exhibit the problem.
I don't currently have enough reputation to post an image, so hopefully this link will work.
http://i.imgur.com/PPXXkJc.jpg
int numBytes;
int frameFinished;
AVFrame* decodedRawFrame;
AVFrame* rgbFrame;

//Enum class for decoding results, used to break decode loop when a frame is gathered
DecodeResult retResult = DecodeResult::Fail;

decodedRawFrame = av_frame_alloc();
rgbFrame = av_frame_alloc();
if (!decodedRawFrame) {
    fprintf(stderr, "Could not allocate video frame\n");
    return DecodeResult::Fail;
}

numBytes = avpicture_get_size(PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height);
uint8_t* buffer = (uint8_t *)av_malloc(numBytes*sizeof(uint8_t));
avpicture_fill((AVPicture *) rgbFrame, buffer, PIX_FMT_RGBA, mCodecCtx->width, mCodecCtx->height);

AVPacket packet;

while(av_read_frame(mFormatCtx, &packet) >= 0 && retResult != DecodeResult::Success)
{
    // Is this a packet from the video stream?
    if (packet.stream_index == mVideoStreamIndex)
    {
        // Decode video frame
        int decodeValue = avcodec_decode_video2(mCodecCtx, decodedRawFrame, &frameFinished, &packet);

        // Did we get a video frame?
        if (frameFinished)// && rgbFrame->pict_type != AV_PICTURE_TYPE_NONE )
        {
            // Convert the image from its native format to RGB
            int SwsFlags = SWS_BILINEAR;
            // Accurate round clears up a problem where the start
            // of videos have green bars on them
            SwsFlags |= SWS_ACCURATE_RND;
            struct SwsContext *ctx = sws_getCachedContext(NULL, mCodecCtx->width, mCodecCtx->height, mCodecCtx->pix_fmt,
                                                          mCodecCtx->width, mCodecCtx->height, PIX_FMT_RGBA,
                                                          SwsFlags, NULL, NULL, NULL);

            sws_scale(ctx, decodedRawFrame->data, decodedRawFrame->linesize, 0, mCodecCtx->height, rgbFrame->data, rgbFrame->linesize);

            //if(count%5 == 0 && count < 105)
            //    DebugSavePPMImage(rgbFrame, mCodecCtx->width, mCodecCtx->height, count);

            ++count;

            // Viewable frame is a struct to hold buffer and frame together in a queue
            ViewableFrame frame;
            frame.buffer = buffer;
            frame.frame = rgbFrame;
            mFrameQueue.push(frame);

            retResult = DecodeResult::Success;

            sws_freeContext(ctx);
        }
    }

    // Free the packet that was allocated by av_read_frame
    av_free_packet(&packet);
}

// Check for end of file leftover frames
if(retResult != DecodeResult::Success)
{
    int result = av_read_frame(mFormatCtx, &packet);
    if(result < 0)
        isEoF = true;
    av_free_packet(&packet);
}

// Free the YUV frame
av_frame_free(&decodedRawFrame);
I'm attempting to build a queue of the decoded frames that I then use and free as needed. Is my separation of the frames causing the intermediate frames to be decoded incorrectly? I also break the decoding loop once I've successfully gathered a frame (DecodeResult::Success), while most examples I've seen tend to loop through the whole video.
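For context, ViewableFrame is roughly the following (a minimal sketch; the real definition is not shown here, and in the actual code the queue is a class member):
#include <cstdint>
#include <queue>
extern "C" {
#include <libavutil/frame.h>
}

// Pairs the av_malloc'd RGBA buffer with the avpicture_fill'd frame that uses
// it, so both can be freed together once the frame has been consumed.
struct ViewableFrame {
    uint8_t* buffer;
    AVFrame* frame;
};
std::queue<ViewableFrame> mFrameQueue;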
All codec contexts, video stream information, and format contexts are set up exactly as shown in the main function of https://github.com/chelyaev/ffmpeg-tutorial/blob/master/tutorial01.c
Any suggestions would be greatly appreciated.
For reference, in case someone finds themselves in a similar position: apparently with some older versions of FFmpeg there's an issue when using sws_scale to convert an image without changing the actual dimensions of the final frame. The fix is to build the flags for the SwsContext like this:
int SwsFlags = SWS_BILINEAR; //Whatever you want
SwsFlags |= SWS_ACCURATE_RND; // Under the hood forces ffmpeg to use the same logic as if scaled
SWS_ACCURATE_RND has a performance penalty but for regular video it's probably not that noticeable. This will remove the splash of green, or green bars along the edges of textures if present.
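The flags are then passed to the scaler context the same way the question's code already does, e.g.:
// Same-size conversion to RGBA; SwsFlags carries SWS_BILINEAR | SWS_ACCURATE_RND.
struct SwsContext *ctx = sws_getCachedContext(NULL,
    mCodecCtx->width, mCodecCtx->height, mCodecCtx->pix_fmt,  // source
    mCodecCtx->width, mCodecCtx->height, PIX_FMT_RGBA,        // destination (same size)
    SwsFlags, NULL, NULL, NULL);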
I also wanted to thank Multimedia Mike and George Y: they were right that the way I was decoding the frames wasn't preserving the packets correctly, and that was what caused the video artifacts to build up from previous frames.