Error on Video Indexing process with .mp4 videos - video-indexer

I'm trying, without any success, to index an .mp4 video. I get an error at around 70% of the indexing process.
I received an email saying that the file format is not supported, which seems strange to me.
This is the ID:
Trial---7a3be47a47---e9dfcb48-8e35-4e04-b682-73cbc20310ee---job-7a3be47a47-input-251f3-SingleBitrate720pEncode-EncodingJob-251f3-r1
Can you please help me on this issue?

It seems that your video has an avg_frame_rate and r_frame_rate of 1000 fps, which causes the failure. We don't support such a high frame rate.
Can you re-encode the video at a lower frame rate, for example 30 fps, and try again?

Related

how to programmatically modify the FPS of a video

I'm using OpenCV's cv::VideoCapture class to read frames from videos.
My guess is I could drop every 2nd frame to go from 30 FPS to 15 FPS, or I could drop every 3rd frame to go from 30 FPS to 20 FPS, etc.
...but I suspect this is not the right way to do it. Isn't there some sort of interpolation or re-interpretation of frames that needs to happen to smoothly modify the FPS?
Assuming there is, what would this be called so I can search for it? I believe projects like VLC can re-encode videos to use a different FPS, but I'm more curious to know about how to programmatically do this work in C++ with OpenCV.
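If smoothness is secondary, plain decimation (dropping frames) does work, and the index math is simple: for each output frame time, keep the nearest source frame. A minimal sketch in pure C++ (the frames_to_keep helper is hypothetical, not part of OpenCV — in practice you would use the returned indices to decide which frames read from cv::VideoCapture get written out):

```cpp
#include <cmath>
#include <vector>

// Map each output frame at dst_fps to the nearest source frame at src_fps.
// Returns the indices of the source frames to keep.
std::vector<int> frames_to_keep(int src_frames, double src_fps, double dst_fps) {
    std::vector<int> keep;
    int dst_frames = static_cast<int>(std::floor(src_frames * dst_fps / src_fps));
    for (int j = 0; j < dst_frames; ++j) {
        // Output frame j plays at time j/dst_fps; pick the nearest source index.
        int i = static_cast<int>(std::llround(j * src_fps / dst_fps));
        if (i < src_frames) keep.push_back(i);
    }
    return keep;
}
```

For 30 → 15 FPS this keeps every 2nd frame; for 30 → 20 it keeps two out of every three. The smooth retiming you're asking about is a different technique: search for "frame interpolation" or "motion-compensated frame rate conversion", which synthesizes intermediate frames (typically from optical flow) instead of dropping or duplicating them.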

OpenCV Video Recording Timing Errors

In an application recording sensor data, I record video with OpenCV2 to have a video as a reference for the sensor data. Dropping a frame here and there is not a problem, but the loss seems to get worse the longer the measurement runs. After filming a stopwatch for 45 minutes, the video was only about 33 minutes long (but obviously showed the full 45 minutes on the watch face).
What are good ways to use the recorded footage in a synchronized way? For example, for every frame I add to the video I could push a timestamp into a vector, or I could record a timestamp each minute and stretch the video out if need be. Are there any known methods for this, or do I have to implement it myself? Is there a better way to achieve this?
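The per-frame timestamp idea is the standard fix: record a capture time for every frame you write, then look frames up by time instead of trusting the container's nominal FPS. A sketch of the lookup side, assuming timestamps are stored as seconds since recording start (the frame_at_time helper is hypothetical):

```cpp
#include <algorithm>
#include <vector>

// frame_times: one capture time per written frame (e.g. taken from
// std::chrono::steady_clock at write time), in seconds, ascending.
// Returns the index of the frame that was current at sensor time t.
int frame_at_time(const std::vector<double>& frame_times, double t) {
    // First frame captured strictly after t...
    auto it = std::upper_bound(frame_times.begin(), frame_times.end(), t);
    // ...minus one: the frame that was on screen at time t.
    if (it == frame_times.begin()) return 0;
    return static_cast<int>(it - frame_times.begin()) - 1;
}
```

Because the lookup uses real capture times, it stays correct no matter how unevenly frames were dropped during the 45-minute recording.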

Add buffering to real time input stream with c++ ffmpeg

I am writing a C++ program in which I handle a real-time UDP video stream with the ffmpeg library.
The video input is 25 frames per second on average. The gap between two frames can be 10, 20, or 40 milliseconds, but sometimes it is around 80 milliseconds.
At those times, when the gap is around 80 milliseconds, the video looks choppy or stuck.
When I open the same stream with the ffplay player (ffplay.exe), using a simple
"ffplay.exe udp://ip:port" command, the video has a small delay (around 50 milliseconds) relative to the original stream, but it runs much more smoothly.
Can I set a buffering time or any minimal delay so that the delay between the frames will be much more steady, in my c++ program?
If so, how can I do this?
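What ffplay does is essentially a small playout (jitter) buffer: frame n is presented at t0 + n/fps + delay rather than on arrival, so any arrival jitter up to `delay` is absorbed. A sketch of just the scheduling math, with hypothetical helpers (your real code would sleep until each slot before displaying the decoded frame):

```cpp
#include <cstddef>
#include <vector>

// Given frame arrival times (seconds), compute steady presentation slots:
// frame n is shown at t0 + n/fps + delay, where delay is the playout buffer.
std::vector<double> playout_times(const std::vector<double>& arrivals,
                                  double fps, double delay) {
    std::vector<double> out;
    if (arrivals.empty()) return out;
    double t0 = arrivals.front();
    for (std::size_t n = 0; n < arrivals.size(); ++n)
        out.push_back(t0 + static_cast<double>(n) / fps + delay);
    return out;
}

// A frame is "late" (would stall playback) if it arrives after its slot.
int count_late(const std::vector<double>& arrivals, double fps, double delay) {
    std::vector<double> slots = playout_times(arrivals, fps, delay);
    int late = 0;
    for (std::size_t n = 0; n < arrivals.size(); ++n)
        if (arrivals[n] > slots[n]) ++late;
    return late;
}
```

With 25 fps (40 ms slots), a frame arriving 80 ms after its predecessor misses its slot when delay = 0, but a ~50 ms delay — about what you observed with ffplay — absorbs it.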
Thanks,
Joel

Erratic fps from VideoCapture opencv

I get the FPS from video files using OpenCV.
It works well for all the videos I have, except those recorded by my phone (a Samsung Note). Calling VideoCapture::get(CV_CAP_PROP_FPS) returns fps = 90000. I checked the properties of the video files taken by the phone and they look fine (fps = 30), but when I query the FPS through OpenCV I get this erratic value.
Has anyone else hit this problem? Any suggestions?
EDIT:
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

VideoCapture input_video("20.mp4"); // a file recorded by the phone camera
double fps = input_video.get(CV_CAP_PROP_FPS);
cout << fps << endl; // prints 90000!
// continues without problems
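90000 is the standard 90 kHz MPEG timebase, and some demuxer/OpenCV combinations report the timebase instead of the real frame rate for phone recordings. One pragmatic workaround (sanitized_fps is a hypothetical helper, not an OpenCV API) is to sanity-check the reported value and fall back to frame count divided by duration; the count and duration could come from CAP_PROP_FRAME_COUNT and the container metadata, though backend support for those properties varies:

```cpp
// 90000 is the 90 kHz MPEG timebase; some demuxer/OpenCV combinations
// report it (or 0) instead of the true frame rate.
// Fall back to frame_count / duration when the reported value is implausible.
double sanitized_fps(double reported, double frame_count, double duration_sec) {
    const double kMinFps = 1.0, kMaxFps = 240.0;
    if (reported >= kMinFps && reported <= kMaxFps)
        return reported;
    if (duration_sec > 0.0)
        return frame_count / duration_sec;
    return 0.0; // unknown
}
```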

Limiting data transferred through a socket in c++

I am working on USB redirection software which redirects a USB device over the network by adding a virtual USB device on the client machine. Everything is working fine, but the client complains that when he connects a webcam at 640X480 resolution, the 100 Mbps network chokes. I have tested the webcam on a 1 Gbps adapter and it utilizes around 16% (160 Mbps) of the bandwidth. Should a webcam take this much bandwidth? In any case, he wants network usage to stay under 50 Mbps.
I have tried compressing the data I get from DeviceIoControl and then decompressing it on the client side before passing it to DeviceIoControl. That works fine for file transfer, but video stops working, even though bandwidth does drop to around 50 Mbps. I have tried adding short delays before sending data, but this also results in a black screen. Now I am thinking of somehow forcibly lowering the camera resolution to 320X240. I am not sure if there is any other way of reducing the data produced by DeviceIoControl.
I would really appreciate it if you could share your thoughts and point me in the right direction. Thanks in advance.
Edit:
It's a YUV2-format webcam.
Is there any open-source library I can use to decrease the frame rate or resolution of a webcam on the Windows platform?
If the data is uncompressed: 640 px/line × 480 lines/frame × 30 frames/sec × 24 bit/px ≈ 221 Mbps
You can check the webcam's documentation to see whether it supports some sort of compression or frame rate control.
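The raw-bandwidth arithmetic generalizes to any mode you might force the camera into (raw_mbps is a hypothetical helper; note that the YUY2 pixel format is 16 bits per pixel, which lands close to the ~160 Mbps you measured once USB and network overhead are added):

```cpp
// Raw (uncompressed) video bandwidth in megabits per second.
double raw_mbps(int width, int height, double fps, int bits_per_px) {
    return width * height * fps * bits_per_px / 1e6;
}
```

At 320×240, 30 fps, 16 bpp this gives about 36.9 Mbps — comfortably under the 50 Mbps target — which supports the idea of forcing the lower resolution.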