I have a scientific application that captures a Video4Linux video stream. It's crucial that we capture every frame and that none get lost. Unfortunately, frames go missing here and there and I don't know why.
To detect dropped frames I compare the v4l2_buffer's sequence number with my own counter directly after reading a frame:
void detectDroppedFrame(v4l2_buffer* buffer) {
    _frameCounter++;
    auto isLastFrame = buffer->sequence == 0 && _frameCounter > 1;
    if (!isLastFrame && _frameCounter != buffer->sequence + 1)
    {
        std::cout << "\n####### WARNING! Missing frame detected!" << std::endl;
        _frameCounter = buffer->sequence + 1; // re-sync our counter with the correct frame number from the driver.
    }
}
My runnable single-file example gist (based on the official V4L2 capture example) can be found on GitHub: https://gist.github.com/SebastianMartens/7d63f8300a0bcf0c7072a674b3ea4817
Tested with a webcam on an Ubuntu 18.04 virtual machine on notebook hardware (uvcvideo driver) as well as with a CSI camera on our embedded hardware running Ubuntu 18.04 natively. Frames are not processed, and buffers seem to be grabbed fast enough (buffer status checked with VIDIOC_QUERYBUF, which shows that all buffers are in the driver's incoming queue and that the V4L2_BUF_FLAG_DONE flag is not set). I lose frames with the MMAP as well as the UserPtr method. It also seems to be independent of pixel format, image size and frame rate!
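A minimal sketch of that buffer-status check (assuming an MMAP setup, with fd being the open device and bufferCount buffers requested via VIDIOC_REQBUFS) looks roughly like this:
void printBufferStates(int fd, unsigned int bufferCount)
{
    for (unsigned int i = 0; i < bufferCount; ++i) {
        v4l2_buffer buf;
        std::memset(&buf, 0, sizeof(buf));          // <cstring>
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index  = i;
        if (ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) { // <sys/ioctl.h>, <linux/videodev2.h>
            std::cerr << "VIDIOC_QUERYBUF failed for buffer " << i << std::endl;
            continue;
        }
        bool queued = buf.flags & V4L2_BUF_FLAG_QUEUED; // still in the driver's incoming queue
        bool done   = buf.flags & V4L2_BUF_FLAG_DONE;   // filled and waiting in the outgoing queue
        std::cout << "buffer " << i << ": queued=" << queued << " done=" << done << std::endl;
    }
}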
To me it looks as if the camera/V4L2 driver is not able to fill the available buffers fast enough, but increasing the file descriptor priority with the VIDIOC_S_PRIORITY command does not help either (so is it still likely a thread-scheduling problem?).
=> What are possible reasons why V4L2 does not forward frames (does not put them into its outgoing queue)?
=> Is my method to detect lost frames correct? Are there other options or tools for that?
I had a similar problem when using the bttv driver. All attempts to capture at full resolution would result in dropped frames (usually around 10% of frames, often in bursts). Capturing at half resolution worked fine.
The solution that I found, though far from ideal, was to apply a load to the linux scheduler. Here is an example, using the "tvtime" program to do the capture:
#! /bin/bash
# apply a load in the background
while true; do /bin/true; done &> /dev/null &
# do the video capture
/usr/bin/tvtime "$@"
# kill the loop running in the background
pkill -P $$
This creates a loop that repeatedly runs /bin/true in the background. Almost any executable will do (I used /bin/date at first). The loop will create a heavy load, but with a multi-core system there is still plenty of headroom for other tasks.
This obviously isn't an ideal solution, but for what it's worth, it allows me to capture full-resolution video with no dropped frames. I have little desire to poke around in the driver/kernel code to find a better solution.
FYI, here are the details of my system:
OS: Ubuntu 20.04, kernel 5.4.0-42
MB: Gigabyte AB350M-D3H
CPU: AMD Ryzen 5 2400G
GPU: AMD Raven Ridge
Driver name : bttv
Card type : BT878 video (Hauppauge (bt878))
Bus info : PCI:0000:06:00.0
Driver version : 5.4.44
Capabilities : 0x85250015
I am looking for a way to take any video (that is supported on iOS) as input and save a new video on the device with a new frames-per-second rate. The motivation is to decrease the video size and make it as lightweight as possible.
Tried using the ffmpeg library from the command line (I need it to run directly from the application)
Tried working with SDAVAssetExportSessionDelegate, but only managed to change the bits per second (each frame's quality is lower)
Thought about working with OpenCV - but would prefer something lighter and built in, if possible
Objective C:
compressionEncoder.videoSettings = @
{
    AVVideoCodecKey: AVVideoCodecTypeH264,
    AVVideoWidthKey: [NSNumber numberWithInt:width],   // set your resolution width here
    AVVideoHeightKey: [NSNumber numberWithInt:height], // set your resolution height here
    AVVideoCompressionPropertiesKey: @
    {
        AVVideoAverageBitRateKey: [NSNumber numberWithInt:bitRateKey], // lower values give a smaller size
        AVVideoProfileLevelKey: AVVideoProfileLevelH264High40,
        // Does not change the playback frame rate - it is a quality setting only!
        //AVVideoMaxKeyFrameIntervalKey: @800,
    },
};
compressionEncoder.audioSettings = @
{
    AVFormatIDKey: @(kAudioFormatMPEG4AAC),
    AVNumberOfChannelsKey: @2,
    AVSampleRateKey: @44100,
    AVEncoderBitRateKey: @128000,
};
Expected a video with a lower frames-per-second rate, with each frame at the same quality. Similar to a brief thumbnail summary of the video.
The type of conversion you are doing will be time and power consuming on a mobile device, but I am guessing you are already aware of that.
Given that your end goal is to reduce size, while presumably maintaining reasonable quality, you may find you want to experiment with different encoding settings.
For this type of video manipulation, ffmpeg is a good choice, as you probably saw from your command-line usage. To use ffmpeg from an application, a common approach is to use a well-supported 'ffmpeg wrapper' - this effectively runs the ffmpeg command-line commands from within your application.
The advantage is that all the usual syntax should work and you can leverage the vast amount of info on ffmpeg command-line syntax on the web. The downside is that ffmpeg was not designed to be wrapped like this, so you may see some issues, although with a well-supported wrapper you should find either help or that others have already worked around the issues.
Some examples of popular iOS ffmpeg wrappers:
https://github.com/tanersener/mobile-ffmpeg
https://github.com/sunlubo/SwiftFFmpeg
Get MobileFFMpeg up and running:
https://stackoverflow.com/a/59325680/1466453
Once you can make MobileFFmpeg calls in your iOS code, changing the frame rate is pretty straightforward with this code (the input and output paths are placeholders):
[MobileFFmpeg execute: @"-i <input video path> -filter:v fps=fps=30 <output video path>"];
I'm having trouble with SDL_Mixer (my lack of experience). Chunks and Music play just fine (using Mix_PlayChannel and Mix_PlayMusic), and playing two different chunks simultaneously isn't an issue.
My problem is that I would like to play some chunk1, and then play second iteration of chunk1 overlapping the first. I am trying to play a single chunk in rapid succession, but it instead plays the sound repeatedly at a much longer interval (not as quickly as I want). I've tested console output and my method of playing/looping is not at fault, since I can see console messages printing, looped at the right speed.
I have an array of Chunks that I periodically load during initialization, using Mix_LoadWAV();
Mix_Chunk *sounds[32];
I also have a function reserved for playing these chunks:
void PlaySound(int snd_id)
{
    if(snd_id >= 0 && snd_id < 32)
    {
        if(Mix_PlayChannel(-1, sounds[snd_id], 0) == -1)
        {
            printf("Mix_PlayChannel: %s\n", Mix_GetError());
        }
    }
}
Attempting to play a single sound several times in rapid succession (say, a 100 ms delay / 10 plays per second), the sound instead plays at a set, slower interval (some 500 ms or so / 2 per second) despite the function being called at the faster rate.
I already used "Mix_AllocateChannels(16);" to ensure I have allocated channels (let me know if I'm using that incorrectly) and still, a single chunk from the array refuses to play at a certain rate.
Any ideas/help is appreciated, as well as critique on how I posted this question.
As said in the documentation of SDL_Mixer (https://www.libsdl.org/projects/SDL_mixer/docs/SDL_mixer_28.html) :
"... -1 for the first free unreserved channel."
So if your chunk is longer than 1.6 seconds (16 channels * 100 ms), you'll run out of channels after 1.6 seconds, and you won't be able to play new chunks until one of the channels finishes playing.
So there are basically 2 solutions :
Allocate more channels (more than : ChunkDuration (in sec) / Delay (in sec))
Stop a channel so that you can reuse it. (To do it properly, you should not pass -1 as the channel but a variable that you increment each time you play a chunk - and don't forget to set it back to 0 when it reaches your number of channels; see the sketch below.)
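A minimal sketch of that second approach (assuming the same sounds[32] array from the question and 16 channels allocated with Mix_AllocateChannels(16)): cycle through the channels and halt the oldest one before reusing it, so a new playback never has to wait for a free channel.
static const int NUM_CHANNELS = 16;   // must match Mix_AllocateChannels(NUM_CHANNELS)
static int nextChannel = 0;           // next channel to (re)use
void PlaySoundOverlapping(int snd_id)
{
    if (snd_id < 0 || snd_id >= 32 || sounds[snd_id] == NULL)
        return;
    Mix_HaltChannel(nextChannel);     // free the channel even if it is still playing
    if (Mix_PlayChannel(nextChannel, sounds[snd_id], 0) == -1)
    {
        printf("Mix_PlayChannel: %s\n", Mix_GetError());
    }
    nextChannel = (nextChannel + 1) % NUM_CHANNELS; // wrap back to channel 0
}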
I have searched all around and can not find any examples or tutorials on how to access a webcam using ffmpeg in C++. Any sample code or any help pointing me to some documentation, would greatly be appreciated.
Thanks in advance.
I have been working on this for months now. Your first "issue" is that ffmpeg (libavcodec and other ffmpeg libs) does NOT access web cams, or any other device.
For a basic USB webcam, or audio/video capture card, you first need driver software to access that device. For Linux, these drivers fall under the Video4Linux (V4L2, as it is known) category, which are modules that are part of most distros. If you are working with MS Windows, then you need an SDK that allows you to access the device (MS may have something for accessing generic devices, but from my experience they are not very capable, if they work at all). If you've made it this far, then you now have raw frames (video and/or audio).
THEN you get to the ffmpeg part - libavcodec - which takes the raw frames (audio and/or video) and encodes them into streams, which ffmpeg can then mux into your final container.
I have searched, but have found very few examples of all of these, and most are piece-meal.
If you don't actually need to code this yourself, the command-line ffmpeg, as well as vlc, can access these devices, capture, save to files, and even stream.
That's the best I can do for now.
ken
For windows use dshow
For Linux (like ubuntu) use Video4Linux (V4L2).
FFmpeg can take input from V4l2 and can do the process.
To find the USB video path type : ls /dev/video*
E.g : /dev/video(n) where n = 0 / 1 / 2 ….
AVInputFormat – struct which holds the information about the input device format / media device format.
av_find_input_format("v4l2")  [linux]
avformat_open_input(&AVFormatContext, "/dev/video(n)", AVInputFormat, NULL)
If the return value is != 0, there is an error.
Now you have accessed the camera using FFmpeg and can continue the operation.
sample code is below.
int CaptureCam()
{
    // needs <libavdevice/avdevice.h> and <libavformat/avformat.h> (see the note below)
    avdevice_register_all(); // register device input formats (v4l2, ...)
    avcodec_register_all();
    av_register_all();

    const char *dev_name = "/dev/video0"; // here mine is video0, it may vary.

    AVInputFormat *inputFormat = av_find_input_format("v4l2");

    AVDictionary *options = NULL;
    av_dict_set(&options, "framerate", "20", 0); // passed to avformat_open_input below

    AVFormatContext *pAVFormatContext = NULL;

    // check video source
    if(avformat_open_input(&pAVFormatContext, dev_name, inputFormat, &options) != 0)
    {
        std::cout << "\nOops, couldn't open video source\n\n";
        return -1;
    }
    else
    {
        std::cout << "\n Success !";
    }
    return 0;
} // end function
Note: the header file <libavdevice/avdevice.h> must be included.
This really doesn't answer the question, as I don't have a pure ffmpeg solution for you. However, I personally use Qt for webcam access. It is C++ and has a much better API for accomplishing this. It does add a very large dependency to your code, however.
It definitely depends on the webcam - for example, at work we use IP cameras that deliver a stream of jpeg data over the network. USB will be different.
You can look at the DirectShow samples, e.g. PlayCap (they also show the AmCap and DVCap samples). Once you have a DirectShow input device (chances are whatever device you have will provide this natively), you can hook it up to ffmpeg via the dshow input device.
And having spent 5 minutes browsing the ffmpeg site to get those links, I see this...
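If it helps, opening such a device through libavformat's dshow input follows the same pattern as the v4l2 example above; a rough sketch, where "video=Integrated Camera" is only a placeholder device name (list the real names with ffmpeg -list_devices true -f dshow -i dummy):
int OpenDshowCam()
{
    avdevice_register_all();   // registers the dshow input device (needs <libavdevice/avdevice.h>)
    AVInputFormat *inputFormat = av_find_input_format("dshow");
    AVFormatContext *ctx = NULL;
    // "video=Integrated Camera" is a placeholder; use your device's actual dshow name
    if (avformat_open_input(&ctx, "video=Integrated Camera", inputFormat, NULL) != 0)
    {
        std::cout << "\nCould not open DirectShow device\n";
        return -1;
    }
    std::cout << "\nDirectShow device opened\n";
    avformat_close_input(&ctx);
    return 0;
}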
I am trying to create a webcam chat program in C++, and while I have been able to get the images captured, sent and played, I am having trouble doing the same with the audio: the audio lags and very quickly goes out of sync with the video, even when I just play it back to myself.
I found this answer and sample code to be really useful.
Are there any modifications I can make to this code to get it to be nearly lag free, or is OpenAL not right for this? I am using Windows, but I plan on making a linux version later.
From the code linked:
ALCdevice* inputDevice = alcCaptureOpenDevice(NULL,FREQ,AL_FORMAT_MONO16,FREQ/2);
Try using a larger buffer:
ALCdevice* inputDevice = alcCaptureOpenDevice(NULL,FREQ,AL_FORMAT_MONO16,FREQ*4);
The polling is very aggressive. Try sleeping in the loop:
while (!done) {
...
}
To:
int sleepSeconds = 1;
while (!done) {
    ...
    Sleep(sleepSeconds * 1000); // Windows, milliseconds
    //sleep(sleepSeconds);      // Linux, seconds
}
Can someone explain how snd_pcm_writei
snd_pcm_sframes_t snd_pcm_writei(snd_pcm_t *pcm, const void *buffer,
snd_pcm_uframes_t size)
works?
I have used it like so:
for (int i = 0; i < 1; i++) {
f = snd_pcm_writei(handle, buffer, frames);
...
}
Full source code at http://pastebin.com/m2f28b578
Does this mean that I shouldn't give snd_pcm_writei() the number of all the frames in the buffer, but only
sample_rate * latency = frames
?
So if I e.g. have:
sample_rate = 44100
latency = 0.5 [s]
all_frames = 100000
The number of frames that I should give to snd_pcm_writei() would be
sample_rate * latency = frames
44100*0.5 = 22050
and the number of iterations the for-loop should be?:
(int) 100000/22050 = 4; with frames=22050
and one extra, but only with
100000 mod 22050 = 11800
frames?
Is that how it works?
Louise
http://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m.html#gf13067c0ebde29118ca05af76e5b17a9
frames should be the number of frames (samples) you want to write from the buffer. Your system's sound driver will start transferring those samples to the sound card right away, and they will be played at a constant rate.
The latency is introduced in several places. There's latency from the data buffered by the driver while waiting to be transferred to the card. There's at least one buffer full of data that's being transferred to the card at any given moment, and there's buffering on the application side, which is what you seem to be concerned about.
To reduce latency on the application side you need to write the smallest buffer that will work for you. If your application performs a DSP task, that's typically one window's worth of data.
There's no advantage in writing small buffers in a loop - just go ahead and write everything in one go - but there's an important point to understand: to minimize latency, your application should write to the driver no faster than the driver is writing data to the sound card, or you'll end up piling up more data and accumulating more and more latency.
For a design that makes producing data in lockstep with the sound driver relatively easy, look at jack (http://jackaudio.org/) which is based on registering a callback function with the sound playback engine. In fact, you're probably just better off using jack instead of trying to do it yourself if you're really concerned about latency.
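To make the buffering discussion concrete, here is a minimal sketch of a write helper that handles the return value of snd_pcm_writei() (short writes and -EPIPE underruns). It assumes an interleaved 16-bit buffer; handle, buffer and frame_count correspond to the handle, buffer and frames from the question:
int write_all(snd_pcm_t *handle, const short *buffer, snd_pcm_uframes_t frame_count, unsigned int channels)
{
    // needs <alsa/asoundlib.h>
    snd_pcm_uframes_t written = 0;
    while (written < frame_count) {
        snd_pcm_sframes_t rc = snd_pcm_writei(handle, buffer + written * channels, frame_count - written);
        if (rc == -EPIPE) {          // underrun: the card ran out of data
            snd_pcm_prepare(handle); // recover and try again
            continue;
        }
        if (rc < 0)
            return (int)rc;          // some other error
        written += rc;               // snd_pcm_writei may write fewer frames than requested
    }
    return 0;
}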
I think the reason for the "premature" device closure is that you need to call snd_pcm_drain(handle); prior to snd_pcm_close(handle); to ensure that all data is played before the device is closed.
I did some testing to determine why snd_pcm_writei() didn't seem to work for me, using several examples I found in the ALSA tutorials, and what I concluded was that the simple examples were doing a snd_pcm_close() before the sound device could play the complete stream sent to it.
I set the rate to 11025, used a 128-byte random buffer, and looped snd_pcm_writei() 11025/128 times for each second of sound. Two seconds required 86*2 calls to snd_pcm_writei() to get two seconds of sound.
In order to give the device sufficient time to convert the data to audio, I used a for loop after the snd_pcm_writei() loop to delay execution of the snd_pcm_close() function.
After testing, I had to conclude that the sample code didn't supply enough samples to overcome the device latency before the snd_pcm_close() function was called, which implies that the close function has less latency than the snd_pcm_writei() function.
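A minimal sketch of that ordering - drain first, then close - so the device has time to play everything that has been queued:
snd_pcm_drain(handle); // blocks until all pending frames have been played
snd_pcm_close(handle);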
If the ALSA driver's start threshold is not set properly (if in your case it is about 2s), then you will need to call snd_pcm_start() to start the data rendering immediately after snd_pcm_writei().
Or you may set an appropriate threshold in the SW params of the ALSA device.
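A hedged sketch of setting the start threshold through the software params (period_size here stands in for whatever period size your hardware-params setup produced):
snd_pcm_sw_params_t *sw_params;
snd_pcm_sw_params_alloca(&sw_params);                                   // allocate on the stack
snd_pcm_sw_params_current(handle, sw_params);                           // start from the current configuration
snd_pcm_sw_params_set_start_threshold(handle, sw_params, period_size);  // start playing after one period
snd_pcm_sw_params(handle, sw_params);                                   // apply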
ref:
http://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m.html
http://www.alsa-project.org/alsa-doc/alsa-lib/group___p_c_m___s_w___params.html