I am trying to upload a video via the Graph API but, as the title states, I get an error saying that I use an unsupported video format.
I thought maybe the original video file doesn't match the video specifications defined at https://developers.facebook.com/docs/instagram-api/reference/ig-user/media#video-specifications, so I wrote a function to convert the file to match the specification, using ffmpeg with the following command (I tried many different ones; this is the last one):
ffmpeg -i ${tmpInFile} -vf "format=yuv420p,scale=-2:720" -pix_fmt yuv420p -r 30 -movflags +faststart -c:v libx264 -b:v 3M -c:a aac -b:a 128k -ac 2 -ar 44100 -shortest -f mp4 ${tmpOutFile}
Unfortunately, the error persists.
Here's the complete process:
First I use await fetch('/api/convert-mp4', { method: 'POST', body: file }); to send the uploaded video file to the backend.
Next I get the blob data from the request with const blob = await request.blob();.
Then I create a temporary file with await fs.writeFile(tmpInFile, blob.stream()).
Then I call ffmpeg with the command above and read the converted file with const buffer = await fs.readFile(tmpOutFile);.
Then I send the buffer back to the client as the response body with return { status: 200, body: buffer }.
Then I get the blob data from the response with const blob = await convertVideoResponse.blob();.
Finally I convert it back into a file object with
const convertedFile = new File([blob], file.name, { type: 'video/mp4' });
This file I upload to Supabase Storage (https://supabase.com/storage) and get a publicly accessible URL (which I confirmed by opening it in an incognito tab).
In the Supabase dashboard I can see the video file has the correct media container (video/mp4) and the file size is small enough.
Does anyone have an idea what could be the issue?
Edit:
By changing the ffmpeg command to use H.265 instead of H.264, ffmpeg -i ${tmpInFile} -vf "format=yuv420p,scale=-2:1350" -r 30 -movflags +faststart -c:v libx265 -vtag hvc1 -x265-params crf=25 -b:v 3M -c:a aac -b:a 128k -ac 2 -ar 44100 -shortest -f mp4 ${tmpOutFile}, I managed to get it to work for some videos but not all, which confuses me, as I assumed the video attributes should be the same for all videos processed by the same command.
I had the very same issue, and GOT IT TO WORK!
TL;DR
Make sure your video is not too long
More details
In the end I came up with this as an example:
ffmpeg -i input.mp4 \
-c:v libx264 -aspect 16:9 -crf 18 \
-vf "scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:(1280-iw)/2:(720-ih)/2" \
-fpsmax 60 -preset ultrafast -c:a aac -b:a 128k -ac 1 -pix_fmt yuv420p -movflags +faststart -t 59 -y output.mp4
Some details on what the flags do and why:
-c:v libx264 → You need to compress with video codec HEVC or H.264, progressive scan, closed GOP, 4:2:0 chroma subsampling
-aspect and -vf "scale=iw*min(1280/iw\,720/ih):ih*min(1280/iw\,720/ih),pad=1280:720:(1280-iw)/2:(720-ih)/2" → I want my videos always to be within 1280x720, with padding added if needed
-fpsmax → Make sure you do not exceed a frame rate of 60 fps
-c:a aac -b:a 128k -ac 1 → For "AAC, 48khz sample rate maximum, 1 or 2 channels (mono or stereo)"
-t 59 → Limit to under 1 minute, as that is the max for a video!
I think the most important thing here is the -t 59, as the API only supports files up to one minute!
After this, everything worked 🎉
I am using CImg to generate images in a CImgList.
When I save that stack to video by calling the save_video method on the stack, it ignores the fps and the output always seems to be 25 fps. I opened the file in different players (VLC, Windows Movie Player, ...) and it's always 25 fps.
CImg uses ffmpeg to create the video. I'm not specifying any codec, so I assume the default mpeg2 is used (based on what VLC tells me).
I also have no specific settings for CImg.
I have a fixed number of images (500) and the output is always around 20 seconds, which works out to 25 fps.
What do I need to do to output it to for example 60fps?
I fixed it, though I'm not sure if it's a bug in CImg or just me not knowing how to use it properly. From what I found, CImg doesn't pass the -framerate parameter to ffmpeg, only the -r parameter.
I updated the code to include the -framerate parameter and it does seem to work. This is the updated CImg code in the save_ffmpeg_external function:
cimg_snprintf(command, command._width,
              "\"%s\" -framerate %u -v -8 -y -i \"%s_%%6d.ppm\" -pix_fmt yuv420p -vcodec %s -b %uk -r %u \"%s\"",
              cimg::ffmpeg_path(),
              fps,
              CImg<charT>::string(filename_tmp)._system_strescape().data(),
              _codec, bitrate, fps,
              CImg<charT>::string(filename)._system_strescape().data());
I need to use a batch file with FFmpeg pipe query. I have a set of images (img0.bmp, img1.bmp, img2.bmp) and I need FFmpeg to iterate through them and pass raw data to my custom .exe.
So, the command looks like this:
ffmpeg -y -hide_banner -i img%01d.bmp -vf format=gray -f rawvideo pipe: | MY_CUSTOM_EXE
and code of the custom exe is really simple like this
int main()
{
    return 0;
}
The trick of this story is that if I pass FFmpeg just one image like this ... -i img0.bmp ... it works, but if there is a set ... -i img%01d.bmp ..., I get the following error after the very first iteration:
Input #0, image2, from 'img%01d.bmp':
Duration: 00:00:00.12, start: 0.000000, bitrate: N/A
Stream #0:0: Video: bmp, pal8, 4096x3000, 25 tbr, 25 tbn, 25 tbc
Stream mapping:
Stream #0:0 -> #0:0 (bmp (native) -> rawvideo (native))
Press [q] to stop, [?] for help
Output #0, rawvideo, to 'pipe:':
Metadata:
encoder : Lavf58.29.100
Stream #0:0: Video: rawvideo (Y800 / 0x30303859), gray, 4096x3000, q=2-31, 2457600 kb/s, 25 fps, 25 tbn, 25 tbc
Metadata:
encoder : Lavc58.54.100 rawvideo
av_interleaved_write_frame(): Invalid argument
Error writing trailer of pipe:: Invalid argument
frame= 1 fps=0.0 q=-0.0 Lsize= 12000kB time=00:00:00.04 bitrate=2457600.0kbits/s speed=0.999x
video:12000kB audio:0kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.000000%
Conversion failed!
Press any key to continue . . .
In addition, if I use the command like this
ffmpeg -y -hide_banner -i img%01d.bmp -vf format=gray -f rawvideo pipe:
or with other ffmpeg pipe commands
ffmpeg -y -hide_banner -i %input% -vf format=gray -f rawvideo pipe: | ffmpeg -hide_banner -y -framerate 30 ...
it also works perfectly.
So the problem is in MY_CUSTOM_EXE, but what could it be if it has only one line of code?
You must let your program consume the output from ffmpeg; otherwise you get the errors you describe.
I tested my hunch and came back with:
av_interleaved_write_frame(): Broken pipe
Error writing trailer of pipe:: Broken pipe
So a simple
#include <iostream>

int main() {
    char c;
    while (std::cin >> c) {}  // consume everything the pipe offers
}  // return 0 implied

will fix this particular error.
Hi stackoverflow community,
I have a tricky problem and I need your help to understand what is going on here.
My program captures frames from a video grabber card (Blackmagic), which works fine so far; at the same time I display the captured images with opencv (cv::imshow), which also works well (but is pretty CPU-intensive).
The captured images are also supposed to be stored on disk; for this I push the captured frames (cv::Mat) onto a stack and finally write them asynchronously with opencv:
cv::VideoWriter videoWriter(path, cv::CAP_FFMPEG, fourcc, fps, *size);
videoWriter.set(cv::VIDEOWRITER_PROP_QUALITY, 100);
int id = metaDataWriter.insertNow(path);
while (this->isRunning) {
while (!this->stackFrames.empty()) {
cv::Mat m = this->stackFrames.pop();
videoWriter << m;
}
}
videoWriter.release();
This code is running in an additional thread and will be stopped from outside.
The code is working so far, but it is sometimes pretty slow, which means my stack size increases, my system runs out of RAM, and the process gets killed by the OS.
Currently it is running on my developing system:
Ubuntu 18.04.05
OpenCV 4.4.0 compiled with Cuda
Intel i7 10th generation, 32 GB RAM, Nvidia P620 GPU, M.2 SSD
Depending on the codec (fourcc) this produces a high CPU load. So far I have mainly used "MJPG" and "x264". Sometimes even MJPG pushes one CPU core to 100% load, and my stack grows until the program runs out of RAM. After a restart this problem is sometimes fixed, and the load seems to be distributed over all cores.
According to the Intel documentation for my CPU, it has integrated hardware encoding/decoding for several codecs. But I guess opencv is not using them; opencv even uses its own ffmpeg rather than my system's. Here is my opencv build command:
cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D WITH_TBB=ON \
-D WITH_CUDA=ON \
-D BUILD_opencv_cudacodec=OFF \
-D ENABLE_FAST_MATH=1 \
-D CUDA_FAST_MATH=1 \
-D WITH_CUBLAS=1 \
-D WITH_V4L=ON \
-D WITH_QT=OFF \
-D WITH_OPENGL=ON \
-D WITH_GSTREAMER=ON \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D WITH_FFMPEG=1 \
-D OPENCV_EXTRA_MODULES_PATH=../../opencv_contrib/modules \
-D WITH_CUDNN=ON \
-D OPENCV_DNN_CUDA=ON \
-D CUDA_ARCH_BIN=6.1 ..
I just started development with Linux and C++ (before, I was working with Java/Maven), so the use of cmake is still a work in progress, pls go easy on me.
Basically my question is, how can I make the video encoding/writing faster, use the hardware acceleration at best?
Or if you think there is something else fishy, pls let me know.
BR Michael
-------- old - look up answer on bottom --------
Thanks #Micka for the many tips; they put me on the right track.
Using cudacodec::VideoWriter is not that easy: after compiling I was not able to use it because of this error, and even if I could make it run, the deployment PC does not have an Nvidia GPU.
Since I am going to use PCs with AMD CPUs as well, I can't use cv::CAP_INTEL_MFX for the apiPreference parameter of the cv::VideoWriter.
But there is also cv::CAP_OPENCV_MJPEG, which works fine for the MJPG codec (not all video containers are supported; I use .avi, sadly .mkv was not working with this configuration). If the user does not use MJPG as a codec, I use cv::CAP_ANY and opencv decides what to use.
So,
cv::VideoWriter videoWriter(path, cv::CAP_OPENCV_MJPEG, fourcc, fps, *size);
works pretty well, even on my old system.
Unfortunately I had never changed the apiPreference parameter before, only from ffmpeg to gstreamer. In the opencv docs I only read the last line, "cv::CAP_FFMPEG or cv::CAP_GSTREAMER", and did not see that there is an "e.g." before it...
Thank you #Micka for making me read again.
P.S. for my performance problem with cv::imshow I changed from
cv::namedWindow(WINDOW_NAME, cv::WINDOW_NORMAL);
to
cv::namedWindow(WINDOW_NAME, cv::WINDOW_OPENGL);
Which obviously uses OpenGL and does a better job. Also, changing from cv::Mat to cv::UMat can speed up performance, see here
-------------- EDIT better solution ----------------
Since I still had problems with the OpenCV VideoWriter on some systems, I was looking for another solution. Now I write the frames with FFMPEG.
With FFMPEG I can use the GPU or the CPU, depending on the codec I use.
If FFMPEG is installed via snapd (Ubuntu 18.04) it comes with cuda enabled by default:
sudo snap install ffmpeg --devmode
(--devmode is optional, but I had problems writing files to specific locations; this was the only way for me to fix it)
And here is my code:
//this string is automatically created in my program, depending on user input and the parameters of the input frames
string ffmpegCommand = "ffmpeg -y -f rawvideo -vcodec rawvideo -framerate 50 -pix_fmt bgr24 -s 1920x1080 -i - -c:v h264_nvenc -crf 14 -maxrate:v 10M -r 50 myVideoFile.mkv";
FILE *pipeout = popen(ffmpegCommand.data(), "w");
int id = metaDataWriter.insertNow(path);
//loop will be stopped from another thread
while (this->isRunning) {
//this->frames is a stack with cv::Mat elements in the right order
//it is filled by another thread
while (!this->frames.empty()) {
cv::Mat mat = frames.front();
frames.pop();
fwrite(mat.data, 1, s, pipeout); // s = size of one raw frame in bytes (1920*1080*3 for bgr24 here)
}
}
fflush(pipeout);
pclose(pipeout);
So the mat.data is written to the pipe (pipeout), and ffmpeg itself does the encoding and file writing. On to the parameters:
-y = Overwrite output files without asking
-f = format, in this case used for input rawvideo
-vcodec = codec for input which is rawvideo as well, because the used cv::Mat.data has no compression/codec
-framerate = the input framerate I receive from my grabber card/OpenCv
-pix_fmt = the format of my raw data, in this case bgr24, so 8 bit each channel, because I use a regular OpenCV bgr cv::Mat
-s = size of each frame, in my case 1920x1080
-i = input; the "-" means ffmpeg reads from standard input, so whatever is written to the pipe (pipeout) is captured by ffmpeg
-c:v = output codec, i.e. the codec used to encode the video; here h264_nvenc is used, which is a GPU encoder
-r = output frame rate, also 50 in this case
myVideoFile.mkv = the name of the output file produced by ffmpeg; you can change this name and path
Additional parameters for higher quality: -crf 14 -maxrate:v 10M
This works very well for me and uses the hardware acceleration of the GPU, or, with another codec in charge, the CPU.
I hope this helps other developers as well.
I'm developing an app that needs to clone an MP4 video file with all its streams using the FFmpeg C++ API, and I have successfully made it work based on the FFmpeg remuxing example.
This works great for video and audio streams, but when the video includes a data stream (actually a QuickTime timecode, according to MediaInfo) I get this error:
Output #0, mp4, to 'C:\Users\user\Desktop\shortOut.mp4':
Stream #0:0: Video: hevc (Main 10) (hev1 / 0x31766568), yuv420p10le(tv,progressive), 3840x2160 [SAR 1:1 DAR 16:9], q=2-31, 1208 kb/s
Stream #0:1: Audio: mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 32s
Stream #0:2: Data: none (tmcd / 0x64636D74), 0 kb/s
[mp4 # 0000000071edf600] Could not find tag for codec none in stream #2, codec not currently supported in container
I’ve found this happens in the call to avformat_write_header().
It makes sense that if FFmpeg doesn't know the codec it can't write it to the header, but I found out that with the ffmpeg command line I can make it work perfectly by stream-copying, something like:
ffmpeg -i input.mp4 -c:v copy -c:a copy -c:d copy output.mp4
I have been analyzing the ffmpeg.c implementation to try to understand how it does a stream copy, but it has been very painful following along the huge pipeline.
What would be a proper way to remux a data stream of this type with FFmpeg C++ API? Any tip or pointers?
I'm new here. I'm trying to stream some images processed with opencv on a LAN using ffmpeg.
I saw this:
Pipe raw OpenCV images to FFmpeg
but it doesn't work for me; it creates only noise. I think the data I'm sending is not in the right format.
I also had a look at this:
How to send opencv video's to ffmpeg
but (looking at the last answer) the option -f jpeg_pipe gives me an error.
What I do now:
I have an RGB Mat called "composedImage".
I send it to output with:
std::cout << composedImage;
The output is the pixel values separated by commas.
then I call:
./example | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 160x120 -framerate 20 -i - -f udp://192.168.1.79:1234
I try to read it using VLC (which didn't work) and with ffplay:
ffplay -f rawvideo -pixel_format gray -video_size 160x120 -framerate 30 udp://192.168.1.79:1234
Here it seems so easy:
http://ffmpeg.org/ffmpeg-formats.html#rawvideo
I have also tried writing the image to a file and sending that, but I get errors. Probably it tries to send before the image is complete.
Thank you for your help.
I managed to stream, albeit with potato quality; maybe some ffmpeg guru can help out with the ffmpeg commands.
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;

int main() {
    VideoCapture cap("Your video or stream goes here");
    Mat frame;
    std::ios::sync_with_stdio(false);
    while (cap.read(frame)) {
        // write the frame's raw BGR bytes to stdout in one call
        std::cout.write(reinterpret_cast<const char*>(frame.datastart),
                        frame.dataend - frame.datastart);
    }
    return 0;
}
And then pipe it to ffmpeg like
./test | ffmpeg -f rawvideo -pixel_format bgr24 -video_size 1912x796 -re -framerate 20 -i - -f mpegts -preset ultrafast udp://127.0.0.1:1234
And play it with ffplay, mpv, or whatever
ffplay udp://127.0.0.1:1234