I’m developing an app that needs to clone an MP4 video file with all of its streams using the FFmpeg C++ API, and I have successfully made it work based on the FFmpeg remuxing example.
This works great for video and audio streams, but when the video includes a data stream (actually a QuickTime timecode, according to MediaInfo) I get this error:
Output #0, mp4, to 'C:\Users\user\Desktop\shortOut.mp4':
Stream #0:0: Video: hevc (Main 10) (hev1 / 0x31766568), yuv420p10le(tv, progressive), 3840x2160 [SAR 1:1 DAR 16:9], q=2-31, 1208 kb/s
Stream #0:1: Audio: mp3 (mp4a / 0x6134706D), 48000 Hz, stereo, s16p, 32 kb/s
Stream #0:2: Data: none (tmcd / 0x64636D74), 0 kb/s
[mp4 @ 0000000071edf600] Could not find tag for codec none in stream #2, codec not currently supported in container
I’ve found this happens in the call to avformat_write_header().
It makes sense that if FFmpeg doesn’t know the codec it can’t write it to the header, but I found out that using the ffmpeg command line I can make it work perfectly by using the copy option for the stream, something like:
ffmpeg -i input.mp4 -c:v copy -c:a copy -c:d copy output.mp4
I have been analyzing the ffmpeg.c implementation to try to understand how it does a stream copy, but it’s been very painful following along the huge pipeline.
What would be a proper way to remux a data stream of this type with the FFmpeg C++ API? Any tips or pointers?
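For reference, a minimal sketch (assuming in_ctx and out_ctx are the opened input and allocated output contexts) of the stream-mapping loop from the remuxing example, with one speculative change for data streams: keep the input codec_tag instead of zeroing it. Whether that is enough for the mp4 muxer to accept the tmcd track depends on the FFmpeg version, so treat it as something to experiment with, not a confirmed fix.

extern "C" {
#include <libavformat/avformat.h>
}

// Sketch: map every input stream to the output, adapted from doc/examples/remux.c.
// in_ctx is the opened input AVFormatContext, out_ctx the allocated output context.
static int map_streams(AVFormatContext *in_ctx, AVFormatContext *out_ctx)
{
    for (unsigned i = 0; i < in_ctx->nb_streams; i++) {
        AVStream *in_st  = in_ctx->streams[i];
        AVStream *out_st = avformat_new_stream(out_ctx, nullptr);
        if (!out_st)
            return AVERROR(ENOMEM);

        int ret = avcodec_parameters_copy(out_st->codecpar, in_st->codecpar);
        if (ret < 0)
            return ret;

        if (in_st->codecpar->codec_type == AVMEDIA_TYPE_DATA) {
            // Assumption to test: keep the original tag ('tmcd') so the muxer knows
            // what the track is. If avformat_write_header() still rejects it, the
            // pragmatic fallback is simply not to map this stream into the output.
            out_st->codecpar->codec_tag = in_st->codecpar->codec_tag;
        } else {
            out_st->codecpar->codec_tag = 0;  // as in the stock remuxing example
        }
        out_st->time_base = in_st->time_base;
    }
    return 0;
}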
I'm using the ffmpeg library for live streaming via RTMP. I want to know how to specify my choice of audio and video codecs for a particular format in avformat_alloc_output_context2.
In Detail:
The following command works perfectly for me.
ffmpeg -re -stream_loop -1 -i ~/Downloads/Microsoft_Surface.mp4 -vcodec copy -c:a aac -b:a 160k -ar 44100 -strict -2 -f flv -flvflags no_duration_filesize rtmp://192.168.1.7/live/surface
In the output, I have set my audio codec to AAC and copied the video codec from the input, which is H264.
I want to emulate this with the library, but I don't know how.
avformat_alloc_output_context2(&_ctx, NULL, "flv", NULL);
The code above sets the oformat audio codec to ADPCM_SWF and the video codec to FLV1. How do I change them to AAC and H264?
So far, I have used av_guess_format to construct an AVOutputFormat. It accepts only the format as input, and I don't know where to specify the audio and video codecs.
AVOutputFormat* output_format = av_guess_format("flv", NULL, NULL);
I also tried giving a filename to avformat_alloc_output_context2 with the rest of the parameters NULL.
AVOutputFormat* output_format = av_guess_format(NULL, "flv_acc_sample.flv", NULL);
This file has AAC audio and H264 video, but ffmpeg still loads the oformat with the ADPCM_SWF audio and FLV1 video codecs.
I searched Stack Overflow for similar questions, but could not find the solution I was looking for.
Any hint/guidance is hugely appreciated. Thank you.
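Note that the audio_codec and video_codec fields of an AVOutputFormat are only the container's defaults, not a setting you change; the codecs that actually end up in the output are whatever you attach to the AVStreams you create. Below is a hedged sketch (error handling, the input demuxer, the audio resampling/encoding loop, and opening octx->pb for the RTMP URL are all omitted; setup_flv_output and in_video are illustrative names) of selecting copied H.264 video and encoder-produced AAC audio for an FLV output:

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
#include <libavutil/channel_layout.h>
}

// Sketch: build an FLV output with copied H.264 video and freshly encoded AAC audio,
// mirroring "-vcodec copy -c:a aac -b:a 160k -ar 44100". in_video is the demuxed
// input video stream; the returned audio encoder is used later on the audio frames.
static AVFormatContext *setup_flv_output(AVStream *in_video, const char *rtmp_url,
                                         AVCodecContext **audio_enc_out)
{
    AVFormatContext *octx = nullptr;
    avformat_alloc_output_context2(&octx, nullptr, "flv", rtmp_url);

    // Video stream: stream copy, so just carry over the H.264 codec parameters.
    AVStream *v = avformat_new_stream(octx, nullptr);
    avcodec_parameters_copy(v->codecpar, in_video->codecpar);
    v->codecpar->codec_tag = 0;

    // Audio stream: an AAC encoder configured by us; this is what overrides the
    // container default (ADPCM_SWF) reported by the FLV muxer.
    const AVCodec *aac = avcodec_find_encoder(AV_CODEC_ID_AAC);
    AVCodecContext *aenc = avcodec_alloc_context3(aac);
    aenc->sample_rate = 44100;
    aenc->bit_rate    = 160000;
    aenc->sample_fmt  = AV_SAMPLE_FMT_FLTP;          // input format of the native AAC encoder
    av_channel_layout_default(&aenc->ch_layout, 2);  // stereo (FFmpeg >= 5.1 channel API)
    aenc->time_base   = AVRational{1, aenc->sample_rate};
    if (octx->oformat->flags & AVFMT_GLOBALHEADER)
        aenc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    avcodec_open2(aenc, aac, nullptr);

    AVStream *a = avformat_new_stream(octx, nullptr);
    avcodec_parameters_from_context(a->codecpar, aenc);

    *audio_enc_out = aenc;
    return octx;
}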
I have been working hard for 4 days now to get the Google Cloud Speech-to-Text API to work, but I still see no light at the end of the tunnel. I have searched the net a lot and read the documentation a lot, but with no result.
Our site is bbsradio.com, and we are trying to automatically extract transcripts from our MP3 files using the Google Speech-to-Text API. The code is written in PHP and is an almost exact copy of this: https://github.com/GoogleCloudPlatform/php-docs-samples/blob/master/speech/src/transcribe_async.php
I see the process completes and reaches "$operation->pollUntilComplete();", but it does not report success at "if ($operation->operationSucceeded()) {" and it does not return any error either at $operation->getError().
I am converting the mp3 to raw file like this: ffmpeg -y -loglevel panic -i /public_html/sites/default/files/show-archives/audio-clips-9-23-2020/911freefall2020-05-24.mp3 -f s16le -acodec pcm_s16le -vn -ac 1 -ar 16000 -map_metadata -1 /home/mp3_to_raw/911freefall2020-05-24.raw
I tried the FLAC format as well, but that did not work either. I tested the converted FLAC file using Windows Media Player, and I can hear the conversation clearly. I checked the files: 16000 Hz, 1 channel, and 16 bit. I see the file is uploaded to Cloud Storage. I checked these:
https://cloud.google.com/speech-to-text/docs/troubleshooting and
https://cloud.google.com/speech-to-text/docs/best-practices
There is a lot of discussion and documentation, but nothing seems to help at this moment. If someone can really help me find the issue, it would be really, really great!
TL;DR: convert from MP3 to a 1-channel FLAC file with the same sample rate as your MP3 file.
Long explanation:
Since you're using MP3 files as your process input, MP3 compression artifacts are probably hurting you when you resample to 16 kHz (you cannot hear this, but the algorithm will).
To confirm this theory:
Execute ffprobe -hide_banner filename.mp3 and it will output something like this:
Metadata:
...
Duration: 00:02:12.21, start: 0.025057, bitrate: 320 kb/s
Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16p, 320 kb/s
Metadata:
encoder : LAME3.99r
In this case, the sample rate is OK for the Google Speech API. Just transcode the file without changing the sample rate (remove the -ar 16000 from your ffmpeg command).
You might get into trouble if the original MP3 bitrate is low. 320 kb/s seems safe (unless the recording has a lot of noise).
Take into account that voice encoded below 64 kb/s (ISDN line quality) with some noise in it can be understood only by humans, not by the algorithm.
At last I found the solution and the reason for the issue. Getting empty results is actually a bug in the PHP API code. What you need to do:
Replace this:
$operation->pollUntilComplete();
by this:
while (!$operation->isDone()) {
    $operation->pollUntilComplete();
}
So I have a DVD, part of a boxset, and I would like to extract the subtitles.
Information about the disk:
$ gst-discoverer-1.0 /the/mounted/disk/VIDEO_TS/VIDEO_TS.VOB
Analyzing file:///the/mounted/disk/VIDEO_TS/VIDEO_TS.VOB
Done discovering file:///the/mounted/disk/VIDEO_TS/VIDEO_TS.VOB
Topology:
  container: MPEG-2 System Stream
    audio: DVD AC-3 (ATSC A/52)
    audio: AC-3 (ATSC A/52)
    video: MPEG-2 Video (Main Profile)

Properties:
  Duration: 0:06:59.480000000
  Seekable: yes
  Live: no
  Tags:
    audio codec: DVD AC-3 (ATSC A/52)
    bitrate: 384000
    video codec: MPEG-2 Video
$ gst-discoverer-1.0 /the/mounted/disk/VIDEO_TS/VTS_01_0.VOB
Analyzing file:///the/mounted/disk/VIDEO_TS/VTS_01_0.VOB
Done discovering file:///the/mounted/disk/VIDEO_TS/VTS_01_0.VOB
Topology:
  container: MPEG-2 System Stream
    subtitles: DVD subpicture
    subtitles: DVD subpicture
    video: MPEG-2 Video (Main Profile)

Properties:
  Duration: 0:00:00.049444444
  Seekable: yes
  Live: no
  Tags:
    video codec: DVD subpicture
Ideally I'd end up with something like a VTT file at the end. I do not want the audio or video.
I've played around with gst-launch and have watched the disk with playbin, as well as doing various filesrc experiments, but looking at the docs and old mailing list posts hasn't got me very far.
I see webvttenc exists, but I'm really not sure how I get to the point where I can use it (how do I get from subpicture/x-dvd to text/x-raw?).
Really I've got no idea what I'm doing
I encode H.264 data with libavcodec.
For example:
while (1) {
    ...
    avcodec_encode_video(pEnc->pCtx, OutBuf, ENC_OUTSIZE, pEnc->pYUVFrame);
    ...
}
If I save OutBuf directly as a .264 file, it can't be played by a player. Now I want to save OutBuf
as an MP4 file. Does anyone know how to do this with the ffmpeg libraries? Thanks.
You use avformat_write_header, av_interleaved_write_frame, av_write_trailer and friends.
Their usage is shown in the muxing example of FFmpeg.
See a similar topic: Raw H264 frames in mpegts container using libavcodec with also writing to a file (different container, same API)
See also links from answer here: FFMpeg encoding RGB images to H264
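A hedged sketch of how those calls fit together, assuming enc_ctx is your opened H.264 encoder context (opened with AV_CODEC_FLAG_GLOBAL_HEADER when the target is MP4) and that you fetch packets with the newer avcodec_receive_packet API rather than avcodec_encode_video; error handling is omitted and mux_encoder_output_to_mp4 is just an illustrative name:

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

// Sketch: write already-encoded H.264 packets into an MP4 instead of a raw .264 dump.
static int mux_encoder_output_to_mp4(AVCodecContext *enc_ctx, const char *filename)
{
    AVFormatContext *octx = nullptr;
    avformat_alloc_output_context2(&octx, nullptr, nullptr, filename); // mp4 guessed from ".mp4"

    AVStream *st = avformat_new_stream(octx, nullptr);
    avcodec_parameters_from_context(st->codecpar, enc_ctx); // carries SPS/PPS if GLOBAL_HEADER was set
    st->time_base = enc_ctx->time_base;

    avio_open(&octx->pb, filename, AVIO_FLAG_WRITE);
    avformat_write_header(octx, nullptr);

    // Inside your encode loop, instead of writing OutBuf to disk, pull packets
    // from the encoder and hand them to the muxer:
    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(enc_ctx, pkt) == 0) {
        av_packet_rescale_ts(pkt, enc_ctx->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(octx, pkt); // interleaves and unrefs the packet
    }
    av_packet_free(&pkt);

    av_write_trailer(octx);   // finalizes the moov atom; without it the MP4 will not play
    avio_closep(&octx->pb);
    avformat_free_context(octx);
    return 0;
}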
My C++ application receives an H.264 RTP video stream.
Right now it decodes the stream, saves it into a YUV file, and later I use ffmpeg to re-encode the file into something suitable to watch on a Windows PC (e.g. MPEG-4 AVI).
Shouldn't it be possible to save the H.264 stream into an AVI (or similar) container without having to decode and re-encode it? That would require an H.264 decoder on the PC to watch it, but it should be much more efficient.
How could that be done? Are there any libraries supporting that?
Using ffmpeg is correct, but the answers posted so far don't look right to me.
The correct switch should be:
-vcodec copy
Your program could pipe the RTP through ffmpeg itself, even invoking it using popen3().
It seems that you need to use an intermediate SDP file. I speculate that you can specify a file you created as a named pipe or with tmpfile(), which your application writes to, using the file as an intermediary.
The command line would be something like:
int p[3];
const char* const out_fmt = "avi";
// "sdp" tells ffmpeg that the intermediate file describes the incoming RTP stream;
// popen3() is assumed to be your own fork/exec helper that wires up the three pipes.
const char* cmd[] = {"ffmpeg", "-f", "sdp", "-i", temp_sdp_filename,
                     "-vcodec", "copy", "-f", out_fmt, "-", NULL};
if (-1 == popen3(p, cmd)) ...
// write the RTP that you receive to p[STDIN_FILENO]
// read the AVI from p[STDOUT_FILENO]
// read any messages and error text from p[STDERR_FILENO]
I believe that in this circumstance ffmpeg is clever enough to repackage the container (RTP stream vs. AVI) without transcoding the video and audio (that is what the -vcodec copy switch does); therefore, you'd have no loss of quality and it'd be blazingly fast.
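If you would rather not spawn an external process, the same no-transcode repackaging can be done in-process with libavformat. Here is a hedged sketch of the copy loop, assuming the input (e.g. opened from the SDP) and the output muxer have already been set up with streams mapped 1:1 as in FFmpeg's remuxing example:

extern "C" {
#include <libavformat/avformat.h>
}

// Sketch: the in-process equivalent of "-vcodec copy" -- packets are read from the
// input and written to the output with rescaled timestamps; nothing is decoded.
static int copy_packets(AVFormatContext *in_ctx, AVFormatContext *out_ctx)
{
    AVPacket *pkt = av_packet_alloc();
    if (!pkt)
        return AVERROR(ENOMEM);

    while (av_read_frame(in_ctx, pkt) >= 0) {
        AVStream *ist = in_ctx->streams[pkt->stream_index];
        AVStream *ost = out_ctx->streams[pkt->stream_index];
        av_packet_rescale_ts(pkt, ist->time_base, ost->time_base);
        pkt->pos = -1;
        av_interleaved_write_frame(out_ctx, pkt);
    }
    av_packet_free(&pkt);
    return 0;
}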