I am muxing H.264-encoded video data and PCM G.711-encoded audio data into a .mov media container. I am trying to write metadata into the header, but the metadata does not show up when I go to file -> right click -> Properties -> Details on Windows, and likewise on Ubuntu. Here is my code:
// Instead of creating a new AVDictionary object, I also tried the approach
// described here: http://stackoverflow.com/questions/17024192/how-to-set-header-metadata-to-encoded-video
// but no luck
AVDictionary* pMetaData = m_pFormatCtx->metadata;
av_dict_set(&pMetaData, "title", "Cloud Recording", 0);
av_dict_set(&pMetaData, "artist", "Foobar", 0);
av_dict_set(&pMetaData, "copyright", "Foobar", 0);
av_dict_set(&pMetaData, "filename", m_sFilename.c_str(), 0);
time_t now = time(0);
struct tm tStruct = *localtime(&now);
char date[100];
strftime(date, sizeof(date), "%c", &tStruct); // e.g. Thu Aug 23 14:55:02 2001
av_dict_set(&pMetaData, "date", date, 0);
av_dict_set(&pMetaData, "creation_time", date, 0);
av_dict_set(&pMetaData, "comment", "This video has been created using Eyeball MSDK", 0);
// ....................
// .................
/* write the stream header, if any */
int ret = avformat_write_header(m_pFormatCtx, &pMetaData);
I also checked whether the file contains any metadata using mediainfo and exiftool on Linux. I also tried ffmpeg -i output.mov, but no metadata is shown.
What's the problem? Is a flags value of 0 in av_dict_set okay? Do I need to set different flags for different platforms (Windows/Linux)?
I saw this link and it stated that for Windows I have to use id3v2_version 3 and -write_id3v1 1 to make the metadata work. If so, how can I do this in C++?
I have something similar to your code, but I'm assigning the AVDictionary to my AVFormatContext's metadata field, and it works for me that way. Here's a snippet based on your code.
AVDictionary *pMetaData = NULL;
av_dict_set(&pMetaData, "title", "Cloud Recording", 0);
m_pFormatCtx->metadata = pMetaData;
avformat_write_header(m_pFormatCtx, NULL);
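Note that the second argument of avformat_write_header() carries (private) muxer options, not metadata, which is probably why passing &pMetaData there had no effect. As far as I can tell, the id3v2_version / write_id3v1 options you mention belong to ID3-based muxers such as MP3; the MOV muxer stores metadata in its own atoms and ignores them, so there is nothing platform-specific to set. If you do need to pass a muxer option, a sketch along these lines should work (movflags/faststart is only an example of a real MOV muxer option, not something required for metadata):
AVDictionary *metadata = NULL;
AVDictionary *muxer_opts = NULL;

// Metadata lives on the format context itself.
av_dict_set(&metadata, "title", "Cloud Recording", 0);
av_dict_set(&metadata, "comment", "This video has been created using Eyeball MSDK", 0);
m_pFormatCtx->metadata = metadata;

// Muxer options go through the second argument of avformat_write_header().
av_dict_set(&muxer_opts, "movflags", "faststart", 0);

int ret = avformat_write_header(m_pFormatCtx, &muxer_opts);
av_dict_free(&muxer_opts);   // any entries still left here were not recognised by the muxer
Afterwards, ffprobe output.mov (or ffmpeg -i output.mov) should list the title and comment under the file-level Metadata section.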
I need to make a GStreamer audio pipeline to redirect an audio stream.
GStreamer has been built from vcpkg (v1.19.2).
I also tried an install from the MSI file, with the same issue.
The project is made with Visual Studio 2019.
I succeed in getting some elements from the factories:
GstElement* queue2 = gst_element_factory_make("queue", "queue");
GstElement* audio_sink = gst_element_factory_make("autoaudiosink", "sink");
but still fail to get the appsrc element:
GstElement* app_source = gst_element_factory_make("appsrc", "source"); // null !!!
It appears that the relevant plugin exists (gst-inspect):
Factory Details:
  Rank                     none (0)
  Long-name                AppSrc
  Klass                    Generic/Source
  Description              Allow the application to feed buffers to a pipeline
  Author                   David Schleef <ds#schleef.org>, Wim Taymans <wim.taymans#gmail.com>

Plugin Details:
  Name                     app
  Description              Elements used to communicate with applications
  Filename                 C:\src\vcpkg\installed\x64-windows\bin\gstapp.dll
  Version                  1.19.2
  License                  LGPL
  Source module            gst-plugins-base
  Source release date      2021-09-23
  Binary package           GStreamer Base Plug-ins source release
  Origin URL               Unknown package origin

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseSrc
                         +----GstAppSrc
...
I tested:
GstPlugin* pl = gst_plugin_load_file("C:\\src\\vcpkg\\installed\\x64-windows\\bin\\gstapp.dll",&error);
// still NULL... after warning message "specified module not found"
However, the same code works with other plugins, e.g. gstcoreelements.dll.
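For what it's worth, the GError passed to gst_plugin_load_file() carries the loader's reason, so a small check like this (just a sketch of the call above) makes the message easier to see:
GError* error = NULL;
GstPlugin* pl = gst_plugin_load_file("C:\\src\\vcpkg\\installed\\x64-windows\\bin\\gstapp.dll", &error);
if (pl == NULL) {
    // On Windows, "specified module not found" often refers to a DLL that the
    // plugin itself depends on, not to the plugin file passed in here.
    g_printerr("plugin load failed: %s\n", error != NULL ? error->message : "unknown reason");
    g_clear_error(&error);
}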
Strangely, looping over the element factories and plugins shows me what I need:
GList* list, * walk;

list = gst_registry_feature_filter(registry, filter_vis_features, FALSE, NULL);
for (walk = list; walk != NULL; walk = g_list_next(walk)) {
    const gchar* name;
    GstElementFactory* factory;

    factory = GST_ELEMENT_FACTORY(walk->data);
    name = gst_element_factory_get_longname(factory);
    g_print(" %s\n", name);
    // returns notably: AppSink and AppSrc
}

GList *list2 = gst_registry_plugin_filter(registry, filter_vis_plugins, FALSE, NULL);
for (walk = list2; walk != NULL; walk = g_list_next(walk)) {
    const gchar* name;
    GstPlugin* plugin;

    plugin = GST_PLUGIN(walk->data);
    name = gst_plugin_get_filename(plugin);
    g_print(" ---- %s\n", name);
    name = gst_plugin_get_name(plugin);
    g_print(" ---- %s\n", name);
    // ---- C:\src\vcpkg\installed\x64-windows\bin\gstapp.dll
    // ---- app
}
Using breakpoints, it seems that the module loading fails in the glib g_module_open_full function.
I don't know why at the moment, because the debug info stopped with "gmodule-win32.c not found".
Any idea, tip, clue (or solution) will be highly appreciated.
I finally got it working by adding gstapp-1.0-0.dll to the .exe directory.
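That matches the "specified module not found" warning: on Windows it usually refers to a DLL that the plugin itself links against (gstapp.dll depends on gstapp-1.0-0.dll), so the dependency has to be resolvable by the loader, e.g. next to the executable or on PATH. A small defensive check (just a sketch) makes the failure obvious at startup:
GstElement* app_source = gst_element_factory_make("appsrc", "source");
if (app_source == NULL) {
    g_printerr("appsrc could not be created: make sure gstapp.dll and the "
               "libraries it depends on (e.g. gstapp-1.0-0.dll) can be found "
               "by the Windows loader\n");
}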
I am using the Google Cloud Platform to convert some audio into text files through the Google Speech-to-Text API. I keep getting the error: google.api_core.exceptions.InvalidArgument: 400 Must use single channel (mono) audio, but WAV header indicates 1 channels.
Here is my code:
config_wave_enhanced = speech.types.RecognitionConfig(
    # sample_rate_hertz=44100,
    encoding='LINEAR16',
    enable_automatic_punctuation=True,
    language_code='en-US',
    # use_enhanced=True,
    model='video',
    enable_separate_recognition_per_channel=True,
    audio_channel_count=2
)

operation = speech_client.long_running_recognize(
    config=config_wave_enhanced,
    audio=long_audi_wave
)

response = str(operation.result(timeout=90))
Can anyone help me solve this error? I'm going crazy here.
Setting audio_channel_count = 1 might help.
Convert your audio to a single channel (mono). You can do this from the command line with ffmpeg -i stereo.wav -ac 1 mono.wav. Also set audio_channel_count = 1, as Christian Adib mentioned.
I am using the QtFFMPEG wrapper (https://code.google.com/p/qtffmpegwrapper/) with Qt 5.4 and MSVC 2012. I want to encode an mp4 video from image files at 25 fps and High profile.
I used the createFile() and encodeImage() functions from here
I am using the below parameters:
pCodecCtx=pVideoStream->codec;
pCodecCtx->codec_id = pOutputFormat->video_codec;
pCodecCtx->codec_type = ffmpeg::AVMEDIA_TYPE_VIDEO;
pCodecCtx->profile=FF_PROFILE_H264_HIGH;
pCodecCtx->bit_rate = Bitrate;
pCodecCtx->width = getWidth();
pCodecCtx->height = getHeight();
pCodecCtx->time_base.den = fps;
pCodecCtx->time_base.num = 1;
pCodecCtx->gop_size = 10;
pCodecCtx->pix_fmt = ffmpeg::PIX_FMT_YUV420P;
pCodecCtx->qmin = 10;
pCodecCtx->qmax = 51;
The FFMPEG variables are:
License: GPL version 3 or later
AVCodec version: 3476480
AVFormat configuration: --disable-static --enable-shared --enable-gpl --enable-version3 --enable-runtime-cpudetect --enable-avisynth --enable-bzlib --enable-frei0r --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libfreetype --enable-libgsm --enable-libmp3lame --enable-libopenjpeg --enable-librtmp --enable-libschroedinger --enable-libspeex --enable-libtheora --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libx264 --enable-libxavs --enable-libxvid --enable-zlib
Currently I get a video with the following properties:
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : Main#L3.2
Format settings, CABAC : No
Format settings, ReFrames : 1 frame
Format settings, GOP : M=1, N=10
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 4s 320ms
I want the profile to be "High" and CABAC to be yes, with 3 ReFrames. How do I achieve that? I tried setting profile, coder_type and max_b_frames, but that did not help. At times the generated file did not even play. Can anyone help, please? Thanks.
I also tried the av_opt_set() way but could not find that function. The only function I have is av_opt_set_dict(); am I missing something - an outdated FFmpeg or a missing #include?
I tried this too, but it didn't help:
ffmpeg::AVDictionary *opt = NULL;
int iRes = av_dict_set(&opt, "profile", "high", 0);
av_opt_set_dict(pFormatCtx->priv_data, &opt);
av_opt_set_dict(pFormatCtx, &opt);
Please help.
EDIT:
I got a high-quality mp4 by changing the qmin and qmax values and then re-encoding the large output via the command line. I will try to upgrade FFmpeg as suggested by Ronald below. Please consider the question closed for now.
AVCodec version: 3476480
That version (libavcodec 53.12.0) is from October 2011; please update to something newer. As you can see from the H.264 encoding wiki docs, your settings will work with recent versions of ffmpeg. (Also, please share the rest of your code; you're only showing the code that sets your settings, so I can't reproduce anything.)
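For what it's worth, with a more recent FFmpeg the profile and reference frames can be requested roughly like this (a sketch only; pCodec is assumed to be the AVCodec returned by avcodec_find_encoder, and "profile"/"preset" are the usual libx264 options):
// Options not recognised by the encoder are left in the dictionary after avcodec_open2().
AVDictionary* opts = NULL;
av_dict_set(&opts, "profile", "high", 0);   // request High profile
av_dict_set(&opts, "preset", "medium", 0);  // libx264 speed/quality preset

pCodecCtx->refs         = 3;                // 3 reference frames
pCodecCtx->max_b_frames = 2;                // B-frames rule out Baseline profile
// CABAC is the default entropy coder for Main/High profile in libx264, so it
// should switch on automatically once the profile is no longer Baseline.

int err = avcodec_open2(pCodecCtx, pCodec, &opts);
av_dict_free(&opts);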
ALL,
I tried to search the suggested threads here, but to no avail.
I need to send a file to the web server using a libcurl POST request.
I am creating the file I'm sending, so I know it exists.
I also have Wireshark installed here, so I can see what is sent over.
During the debug session I can see that the file has been read and the buffer it was read into is being sent out. However, in Wireshark it is all displayed as an FF FF FF FF sequence.
And in the end the operation fails with an "Error writing the body" error message returned to cURL.
Any idea what might be the problem?
Here is what I tried so far:
1.
result = curl_formadd( &first, &last, CURLFORM_COPYNAME, "MyFile",
CURLFORM_FILE, (const char *) fileName.c_str(),
CURLFORM_FILENAME, (const char *) fileName.c_str(),
CURLFORM_CONTENTSLENGTH, (long) file.Length(),
CURLFORM_CONTENTTYPE, "image/bitmap", CURLFORM_END );
2.
result = curl_formadd( &first, &last, CURLFORM_COPYNAME, "MyFile",
CURLFORM_FILENAME, (const char *) fileName.c_str(),
CURLFORM_STREAM, &file, CURLFORM_CONTENTSLENGTH,
(long) file.Length(), CURLFORM_CONTENTTYPE,
"image/bitmap", CURLFORM_END );
Then in both cases:
curl_easy_setopt( handle, CURLOPT_HTTPPOST, first );
curl_easy_setopt( handle, CURLOPT_POSTFIELDSIZE, file.Length() );
The headers in both cases are formed correctly, but the data itself is not sent.
Does anybody have an idea? I feel like I'm missing something very simple...
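For reference, a stripped-down form POST would look roughly like this (a sketch; CURLOPT_POSTFIELDSIZE is left out because it belongs with CURLOPT_POSTFIELDS rather than with form posts, and image/bmp is used as the conventional MIME type):
struct curl_httppost* first = NULL;
struct curl_httppost* last  = NULL;

// Let libcurl open and read the file itself; it also works out the size,
// so CURLFORM_CONTENTSLENGTH is not needed here.
curl_formadd( &first, &last,
              CURLFORM_COPYNAME, "MyFile",
              CURLFORM_FILE, fileName.c_str(),
              CURLFORM_CONTENTTYPE, "image/bmp",
              CURLFORM_END );

curl_easy_setopt( handle, CURLOPT_HTTPPOST, first );
CURLcode res = curl_easy_perform( handle );

curl_formfree( first );   // free the form once the transfer is done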
ALL,
Turns out that the service I tried to connect to was down.
Thank you for listening.
And extract the media type, then save the captured image with the correct extension?
The browser usually sends an HTTP GET request to the server (that holds the video stream) when it wants to play the video, and that is what starts the file transfer, but this handshake may change depending on the server you are negotiating with.
Below is a short code snippet that shows how to set up curl and download a file when you have the complete URL of the file:
#include <curl/curl.h>
#include <stdio.h>

void get_page(const char* url, const char* file_name)
{
    CURL* easyhandle = curl_easy_init();
    FILE* file = fopen(file_name, "wb");                    // binary mode so Windows doesn't mangle the data

    curl_easy_setopt(easyhandle, CURLOPT_URL, url);
    curl_easy_setopt(easyhandle, CURLOPT_WRITEDATA, file);  // libcurl writes the body straight to the FILE*
    curl_easy_perform(easyhandle);

    fclose(file);                                           // flush and close the output file
    curl_easy_cleanup(easyhandle);
}

int main()
{
    get_page("http://blog.stackoverflow.com/wp-content/themes/zimpleza/style.css", "style.css");
    return 0;
}
Websites like YouTube don't give the URL of the video so easily, and may even redirect you to another HTML page that you have to parse to find the magic information needed to assemble the full URL of the video. I wrote a small bash script a long time ago to automate the process of finding a YouTube video's URL and downloading the video. I know it doesn't work anymore, so I'll paste it for education purposes only:
if [ -z "${1}" ]
then
    echo 'Error !!! Missing URL or video_id !!!'
    exit 1
fi

URL="http://www.youtube.com"

# Retrieve video_id from the url passed by the user
VAR_VIDEO_ID="${1/*=}"

# Retrieve the t variable located in var swfHTML (javascript)
VAR_T=$(wget -qO - $URL/watch?v=$VAR_VIDEO_ID 2>&1 | perl -e 'undef $/; <STDIN> =~ m/&t=([^&]*)&/g; print "$1\n"';)

# Assemble the magical string
FLV_URL="$URL/get_video?video_id="$VAR_VIDEO_ID"&t="$VAR_T"=&eurl=&el=detailpage&ps=default&gl=US&hl=en"

# Download the flv from Youtube.com. Add 2>&1 before the wget cmd to suppress logs
WGET_OUTPUT=$(wget $FLV_URL -O $VAR_VIDEO_ID.flv)

# Make sure the download went okay
if [ $? -ne 0 ]
then
    # wget failed
    echo -e 1>&2 $0: "!!! ERROR: wget failed !!!\n"
    rm $VAR_VIDEO_ID.flv
    exit 1
fi
And to answer your 2nd question, I believe that to identify the file/media type you will have to download the first bytes of the video stream, since they contain the file header, and then check them against known file signatures. For instance, the first bytes of an FLV file should be:
46 4C 56 01
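A check against that signature could look roughly like this (a sketch; the helper name is made up for illustration):
#include <string.h>

// Returns non-zero if the buffer starts with the FLV signature "FLV" 0x01.
int looks_like_flv(const unsigned char* buf, size_t len)
{
    static const unsigned char flv_sig[4] = { 0x46, 0x4C, 0x56, 0x01 };
    return len >= 4 && memcmp(buf, flv_sig, 4) == 0;
}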
EDIT:
Downloading a video stream is not as different as one may believe. You will need to tell curl that you have your own method for saving the stream data, and this can be specified by:
curl_easy_setopt(curl_handle, CURLOPT_WRITEFUNCTION, cb_writer);
curl_easy_setopt(curl_handle, CURLOPT_WRITEDATA, url_data);
Where cb_writer is your callback that will be called by curl when new data arrives. Check the documentation on Callback Options for more info about these functions.
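A rough sketch of such a callback (the url_data structure here is made up for illustration; only the callback signature is fixed by libcurl):
#include <stdlib.h>
#include <string.h>

struct url_data {
    char*  buf;
    size_t len;
};

// Called by libcurl each time a chunk of the body arrives.
static size_t cb_writer(char* ptr, size_t size, size_t nmemb, void* userdata)
{
    struct url_data* data = (struct url_data*)userdata;
    size_t chunk = size * nmemb;

    char* grown = (char*)realloc(data->buf, data->len + chunk);
    if (grown == NULL)
        return 0;                    // returning less than 'chunk' aborts the transfer

    memcpy(grown + data->len, ptr, chunk);
    data->buf  = grown;
    data->len += chunk;
    return chunk;                    // tell libcurl the whole chunk was consumed
}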
If you need a full example you can check this thread.
One more thing, if you are working with M-JPEG streams you should take a look at the cambozola implementation.