I need to call FFmpeg from a batch file and pipe its output to my own program. I have a set of images (img0.bmp, img1.bmp, img2.bmp) and I need FFmpeg to iterate through them and pass the raw data to my custom .exe.
The command looks like this:
ffmpeg -y -hide_banner -i img%01d.bmp -vf format=gray -f rawvideo pipe: | MY_CUSTOM_EXE
and the code of the custom exe is really simple, like this:
int main()
{
    return 0;
}
The trick of this story is that if I pass FFmpeg just one image, like ... -i img0.bmp ..., it works, but if there is a sequence, ... -i img%01d.bmp ..., I get this error right after the very first frame:
Input #0, image2, from 'img%01d.bmp':
Duration: 00:00:00.12, start: 0.000000, bitrate: N/A
Stream #0:0: Video: bmp, pal8, 4096x3000, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (bmp (native) -> rawvideo (native))
Press [q] to stop, [?] for help
Output #0, rawvideo, to 'pipe:':
Metadata:
encoder : Lavf58.29.100
Stream #0:0: Video: rawvideo (Y800 / 0x30303859), gray, 4096x3000, q=2-31, 2457600 kb/s, 25 fps, 25 tbn, 25 tbc
Metadata:
encoder : Lavc58.54.100 rawvideo
av_interleaved_write_frame(): Invalid argument
Error writing trailer of pipe:: Invalid argument
frame= 1 fps=0.0 q=-0.0 Lsize= 12000kB time=00:00:00.04 bitrate=2457600.0kbits/s speed=0.999x
video:12000kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
Conversion failed!
Press any key to continue . . .
In addition, if I run the command without the pipe, like this:
ffmpeg -y -hide_banner -i img%01d.bmp -vf format=gray -f rawvideo pipe:
or with another ffmpeg instance on the receiving end of the pipe:
ffmpeg -y -hide_banner -i %input% -vf format=gray -f rawvideo pipe: | ffmpeg -hide_banner -y -framerate 30 ...
it also works perfectly.
So the problem is in MY_CUSTOM_EXE, but what could it be if the program has only one line of code?
You must let your program consume the output from ffmpeg, otherwise you get the errors you describe: your exe returns immediately without reading its standard input, so the pipe is closed while ffmpeg is still trying to write frames into it.
I tested my hunch and came back with:
av_interleaved_write_frame(): Broken pipe
Error writing trailer of pipe:: Broken pipe
So a simple
#include <iostream>

int main() {
    char c;
    while (std::cin >> c) {} // consume everything the pipe offers
}   // return 0 is implied
will fix this particular error.
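As an aside: reading one char at a time with operator>> also skips whitespace bytes, which is fine for discarding data but not if MY_CUSTOM_EXE is ever meant to process the frames. A hedged sketch of a faster, binary-safe consumer (the buffer size is arbitrary; _setmode is the MSVC call that stops Windows from text-translating stdin):

#include <cstdio>
#include <iostream>
#ifdef _WIN32
#include <fcntl.h>
#include <io.h>
#endif

int main() {
#ifdef _WIN32
    // Put stdin into binary mode so a 0x1A byte is not treated as EOF
    // and no CRLF translation corrupts the raw frames.
    _setmode(_fileno(stdin), _O_BINARY);
#endif
    char buf[1 << 16];
    while (std::cin.read(buf, sizeof buf) || std::cin.gcount() > 0) {
        // process std::cin.gcount() bytes of raw grayscale frame data here
    }
    return 0;
}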
I am using CImg to generate images in a CImgList.
When I save that stack to video by calling the save_video method on the list, it ignores the fps argument and the output always seems to be 25 fps. I opened the file in different players (VLC, the Windows movie player, ...) and it is always 25 fps.
CImg uses ffmpeg to create the video. I'm not specifying any codec, so I assume the default MPEG-2 is used (based on what VLC tells me).
I also have no specific settings for CImg.
I have a fixed set of 500 images and it always produces around 20 seconds of video, which works out to 25 fps.
What do I need to do to output it at, for example, 60 fps?
I fixed it, but I'm not sure whether it's a bug in CImg or just me not knowing how to use it properly. From what I found, CImg doesn't pass the -framerate parameter to ffmpeg, only the -r parameter.
I updated the code to include the -framerate parameter and it does seem to work. This is the updated CImg code in the save_ffmpeg_external function:
cimg_snprintf(command,command._width,
              "\"%s\" -framerate %u -v -8 -y -i \"%s_%%6d.ppm\" -pix_fmt yuv420p -vcodec %s -b %uk -r %u \"%s\"",
              cimg::ffmpeg_path(),
              fps,
              CImg<charT>::string(filename_tmp)._system_strescape().data(),
              _codec,bitrate,fps,
              CImg<charT>::string(filename)._system_strescape().data());
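For reference, a minimal sketch of how this would be driven from user code; the frame contents are a placeholder animation and out.mp4 is an arbitrary file name (assuming the stock save_video signature, which takes the fps as its second argument):

#include "CImg.h"
using namespace cimg_library;

int main() {
    CImgList<unsigned char> frames;
    const unsigned char white[] = { 255, 255, 255 };
    for (unsigned int i = 0; i < 500; ++i) {
        CImg<unsigned char> img(320, 240, 1, 3, 0); // width, height, depth, channels, fill
        img.draw_rectangle(i % 280, 100, i % 280 + 40, 140, white); // moving block
        frames.push_back(img);
    }
    // With the patch above, the fps argument reaches ffmpeg's -framerate
    // input option, so 500 frames should now yield ~8.3 s at 60 fps.
    frames.save_video("out.mp4", 60);
    return 0;
}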
I'm trying to upload a video via the Graph API but, as the title states, I get an error stating that I use an unsupported video format.
I thought maybe the original video file doesn't match the video specifications defined at https://developers.facebook.com/docs/instagram-api/reference/ig-user/media#video-specifications, so I wrote a function to convert the file to match the specification using ffmpeg with the following command (I tried many different ones; this is the last one):
ffmpeg -i ${tmpInFile} -vf format=yuv420p -vf scale=-2:720 -pix_fmt yuv420p -r 30 -movflags +faststart -c:v libx264 -b:v 3M -c:a aac -b:a 128k -ac 2 -ar 44100 -shortest -f mp4 ${tmpOutFile}
Unfortunately, the error persists.
Here's the complete process:
First I use await fetch('/api/convert-mp4', { method: 'POST', body: file }); to send the uploaded video file to the backend.
Next I get the blob data from the request with const blob = await request.blob();.
Then I create a temporary file with await fs.writeFile(tmpInFile, await blob.stream()).
Then I call ffmpeg with the command mentioned above, and read the result with const buffer = await fs.readFile(tmpOutFile);.
Then I send the buffer as response body back to the client with return {status: 200,body: buffer}.
Then I get the blob data from the response with const blob = await convertVideoResponse.blob();.
Finally I convert it back into a file object with
const convertedFile = new File([blob], file.name, { type: 'video/mp4' });
This file I upload to Supabase Storage (https://supabase.com/storage), and I get a publicly accessible URL (which I confirmed by opening it in an incognito tab).
In the Supabase dashboard I can see that the video file has the correct media container (video/mp4) and the file size is small enough.
Does anyone have an idea what could be the issue?
Edit:
By changing the ffmpeg command to use H.265 instead of H.264 (ffmpeg -i ${tmpInFile} -vf format=yuv420p -vf scale=-2:1350 -pix_fmt yuv420p -r 30 -movflags +faststart -c:v libx265 -vtag hvc1 -an -x265-params crf=25 -b:v 3M -c:a copy -c:a aac -b:a 128k -ac 2 -ar 44100 -shortest -f mp4 ${tmpOutFile}) I managed to get it to work for some videos but not all, which confuses me, as I assumed the video attributes would be the same for all videos processed by the same command.
I had the very same issue, and GOT IT TO WORK!
TL;DR
Make sure your video is not too long
More details
In the end I came up with this as an example:
ffmpeg -i input.mp4 \
-c:v libx264 -aspect 16:9 -crf 18 \
-vf "scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:(1280-iw)/2:(720-ih)/2" \
-fpsmax 60 -preset ultrafast -c:a aac -b:a 128k -ac 1 -pix_fmt yuv420p -movflags +faststart -t 59 -y output.mp4
Some details on what the individual flags do and why:
-c:v libx264 → you need to compress with video codec HEVC or H.264, progressive scan, closed GOP, 4:2:0 chroma subsampling
-aspect and -vf "scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:(1280-iw)/2:(720-ih)/2" → as I want my videos to always fit within 1280x720, with padding added if needed
-fpsmax 60 → make sure you do not exceed a frame rate of 60 fps
-c:a aac -b:a 128k -ac 1 → for "AAC, 48khz sample rate maximum, 1 or 2 channels (mono or stereo)"
-t 59 → limit the video to under one minute, as that is the maximum length!
I think the most important thing here is the -t 59, as the API only supports files up to one minute!
After this, everything worked 🎉
I’m developing an app that needs to clone an MP4 video file with all its streams using the FFmpeg C++ API, and I have successfully made it work based on the FFmpeg remuxing example.
This works great for video and audio streams, but when the video includes a data stream (actually a QuickTime timecode, according to MediaInfo) I get this error:
Output #0, mp4, to 'C:\Users\user\Desktop\shortOut.mp4':
Stream #0:0: Video: hevc (Main 10) (hev1 / 0x31766568), yuv420p10le(tv,progressive), 3840x2160 [SAR 1:1 DAR 16:9], q=2-31, 1208 kb/s
Stream #0:1: Audio: mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 32s
Stream #0:2: Data: none (tmcd / 0x64636D74), 0 kb/s
[mp4 @ 0000000071edf600] Could not find tag for codec none in stream #2, codec not currently supported in container
I’ve found this happens in the call to avformat_write_header().
It makes sense that if FFmpeg doesn’t know the codec it can’t write it into the header, but I found out that using the ffmpeg command line I can make it work perfectly with stream copy, something like:
ffmpeg -i input.mp4 -c:v copy -c:a copy -c:d copy output.mp4
I have been analyzing the ffmpeg.c implementation to try to understand how it does a stream copy, but following the huge pipeline has been very painful.
What would be the proper way to remux a data stream of this type with the FFmpeg C++ API? Any tips or pointers?
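Not a full answer, but one concrete thing to check: the remuxing example unconditionally sets codecpar->codec_tag = 0 on every output stream so the muxer can pick its own tag; for a stream whose codec_id is AV_CODEC_ID_NONE the mp4 muxer then has no tag at all, which matches the error above. A hedged sketch of the stream-setup loop (ofmt_ctx and in_stream as in the example) that preserves the source tag in that case:

// Sketch only, not verified on every FFmpeg version: keep the original
// codec_tag for streams FFmpeg has no codec for, instead of always zeroing it.
AVStream *out_stream = avformat_new_stream(ofmt_ctx, nullptr);
avcodec_parameters_copy(out_stream->codecpar, in_stream->codecpar);
if (in_stream->codecpar->codec_id == AV_CODEC_ID_NONE)
    out_stream->codecpar->codec_tag = in_stream->codecpar->codec_tag; // keep 'tmcd'
else
    out_stream->codecpar->codec_tag = 0; // let the muxer choose its own tag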
I'm new here. I'm trying to stream some images processed with OpenCV over a LAN using ffmpeg.
I saw this:
Pipe raw OpenCV images to FFmpeg
but it doesn't work for me; it produces only noise. I think the data I'm sending is not in the right format.
I also had a look at this:
How to send opencv video's to ffmpeg
but (looking at the last answer) the option -f jpeg_pipe gives me an error.
What I do now:
I have an RGB Mat called "composedImage".
I send it to standard output with:
std::cout << composedImage;
The output is the pixel values separated by commas.
then I call:
./example | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 160x120 -framerate 20 -i - -f udp://192.168.1.79:1234
I tried to read the stream with VLC (it didn't work) and with ffplay:
ffplay -f rawvideo -pixel_format gray -video_size 160x120 -framerate 30 udp://192.168.1.79:1234
Here it seems so easy:
http://ffmpeg.org/ffmpeg-formats.html#rawvideo
I have also tried writing the image to a file and sending that, but I get errors. Probably it tries to send the file before the image is complete.
Thank you for your help.
I managed to stream, albeit with potato quality; maybe some ffmpeg guru can help out with the ffmpeg commands.
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    VideoCapture cap("Your video or stream goes here");
    Mat frame;
    std::ios::sync_with_stdio(false); // speed up stdout
    while (cap.read(frame)) {
        // Write the raw BGR bytes of the frame in one go; frames returned
        // by VideoCapture are continuous in memory.
        std::cout.write(reinterpret_cast<const char*>(frame.data),
                        frame.total() * frame.elemSize());
    }
    return 0;
}
And then pipe it to ffmpeg like this:
./test | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 1912x796 -re -framerate 20 -i - -f mpegts -preset ultrafast udp://127.0.0.1:1234
And play it with ffplay, mpv, or whatever:
ffplay udp://127.0.0.1:1234
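A guess at the potato quality: the command above never selects a video encoder, so the mpegts muxer falls back to its default MPEG-2 encoder at ffmpeg's very low default bitrate, and -preset ultrafast is an x264 option that this encoder ignores. Something along these lines (untested) should look considerably better:
./test | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 1912x796 -framerate 20 -i - -c:v libx264 -preset ultrafast -tune zerolatency -f mpegts udp://127.0.0.1:1234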
I use the ffmpeg libraries to decode a stream from a [TTQ HD Camera] and encode it to an RTMP stream,
but I receive a lot of warnings like the one below.
I tried to set qmin and qmax; it seems a little better, but it still does not completely resolve the problem.
encoder_context->qmin = 10;
encoder_context->qmax = 51;
Does anyone know why this is?
[dshow @ 04bfc640] real-time buffer [TTQ HD Camera] [video input] too full or near too full (101% of size: 3041280 [rtbufsize parameter])! frame dropped!
Have you tried increasing the -rtbufsize parameter to something larger than 3041280? If you have the RAM for it, try something like 2000M. It should be defined before the -i of the camera.
So something like:
ffmpeg -f dshow -video_size 1920x1080 -rtbufsize 2147.48M -framerate 30 -pixel_format bgr0 -i video=...
Note that the resolution and frame rate are just examples; you would have to fill in the values that you have already been using in your ffmpeg command.
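Since the question goes through the library API rather than the CLI, here is a hedged sketch of the equivalent, assuming the camera is opened via the dshow demuxer (the device name, size, and frame rate are placeholders): pass rtbufsize in the options dictionary of avformat_open_input.

extern "C" {
#include <libavformat/avformat.h>
#include <libavutil/dict.h>
}

// Sketch: the library-API equivalent of the CLI's -rtbufsize, handed to the
// dshow demuxer through the options dictionary of avformat_open_input.
// "video=TTQ HD Camera" stands in for the real DirectShow device name.
int open_camera(AVFormatContext **fmt_ctx) {
    const AVInputFormat *dshow = av_find_input_format("dshow");
    AVDictionary *opts = nullptr;
    av_dict_set(&opts, "rtbufsize", "2000M", 0);      // enlarge the real-time buffer
    av_dict_set(&opts, "video_size", "1920x1080", 0); // match your capture settings
    av_dict_set(&opts, "framerate", "30", 0);
    int ret = avformat_open_input(fmt_ctx, "video=TTQ HD Camera", dshow, &opts);
    av_dict_free(&opts); // options consumed by the demuxer are removed first
    return ret;
}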