Can youtube-dl download a video and an audio extraction to separate folders in one call?

I like downloading YouTube videos and keeping mp3 versions in a subfolder to play on my phone during work.
Is it possible to call youtube-dl once, download the videos (from a playlist, with an archive file), and save the MP3 extraction to a subfolder titled "MP3"? This is my current call:
youtube-dl --download-archive "F:\Videos\Online Videos\Comics Explained\Marvel Major Storylines (Constantly Updated)\Marvel Major Storylines Archive.txt"
    -o "F:\Videos\Online Videos\Comics Explained\%(playlist_title)s\%(title)s.%(ext)s"
    -f "bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best"
    --ffmpeg-location "H:\Documents\FFMpeg\bin"
    "https://www.youtube.com/playlist?list=PL9sO35KZL50yZh5dXW-7l93VZp7Ct4vYA"
Then I add -x --audio-format mp3 --audio-quality 320k for the audio extraction.
But I want the audio to be output to:
"F:\Videos\Online Videos\Comics Explained\%(playlist_title)s\MP3\%(title)s.%(ext)s"

There are multiple ways to do that:
1. Do the audio conversion yourself, by invoking --exec with a script of yours that creates the necessary directories and then calls avconv, ffmpeg, VLC, or the like.
2. Run youtube-dl twice with different parameters: once for video, once for audio (see the sketch below). You'll download a little more data, but in many cases (especially when you download from YouTube rather than other sites) only the audio is downloaded twice; the video is downloaded only once.
3. Run youtube-dl twice with the same -o parameter: once to download the files (pass -k to keep the originals in your case), and once to convert to mp3. In most cases, no additional download is necessary the second time. Afterwards, write a small script that moves the mp3 files to the correct directory and cleans up.
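For example, the second approach could look like the following two calls (a sketch: the separate archive file names are made up, and if you put this in a .bat file every % in the output template must be doubled to %%):

youtube-dl --download-archive "video-archive.txt"
    -f "bestvideo[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best"
    --ffmpeg-location "H:\Documents\FFMpeg\bin"
    -o "F:\Videos\Online Videos\Comics Explained\%(playlist_title)s\%(title)s.%(ext)s"
    "https://www.youtube.com/playlist?list=PL9sO35KZL50yZh5dXW-7l93VZp7Ct4vYA"

youtube-dl --download-archive "audio-archive.txt"
    -f bestaudio -x --audio-format mp3 --audio-quality 320k
    --ffmpeg-location "H:\Documents\FFMpeg\bin"
    -o "F:\Videos\Online Videos\Comics Explained\%(playlist_title)s\MP3\%(title)s.%(ext)s"
    "https://www.youtube.com/playlist?list=PL9sO35KZL50yZh5dXW-7l93VZp7Ct4vYA"

The key point is that each pass gets its own archive file; reusing the video archive for the audio pass would make youtube-dl skip every entry as already downloaded.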

Related

Use ffmpeg as external tool to stream 2 or more different sources via pipeline

I have an application running on an embedded system. This application has 2 video sources (and, theoretically, 1 audio source). Concentrating on the video sources: I have 2 subprocesses that compute different frame sets (unrelated to each other), and I want to send these frames to 2 different streams.
I would like to avoid writing a lot of ffmpeg/libav code. I have ffmpeg compiled for the embedded system and I can use it as a tool. For example, I can write the first frame set to stdout and pass it to ffmpeg like this:
./my_app | ffmpeg -an -i - -vcodec copy -f rtp rtp://<remote_ip>
This basically works. But now I would like to send the other frame set as well. How do I do that? Theoretically I need another ffmpeg instance that reads from another source, which can't be the stdout of "my_app" because it is already busy.
I'm thinking of using 2 video files as intermediate storage. I could record the 2 frame sets into 2 video files and then run 2 ffmpeg instances from those sources. In that case I think I'd need a way to limit the video files' size (like a circular buffer), because the 2 streams can grow really large over time. Is that a possibility?
This sounds "weird" even to me: I'd need to record a video source in realtime and stream it via ffmpeg (also in realtime). I don't know if it is a good idea; there are surely realtime problems:
loop:
my_app --write_into--> video_stream1.mp4
ffmpeg <--read_from-- video_stream1.mp4
my_app --write_into--> video_stream2.mp4
ffmpeg <--read_from-- video_stream2.mp4
Do you have any suggestions for addressing this kind of situation?
Many thanks.
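One way to get a second pipe without unbounded intermediate files (a sketch, not a tested recipe: it assumes my_app can be told to write each frame set to a given path, and the --out1/--out2 options are hypothetical) is to use named pipes, which behave like the stdout pipe above but give each ffmpeg instance its own endpoint:

mkfifo /tmp/stream1 /tmp/stream2
ffmpeg -an -i /tmp/stream1 -vcodec copy -f rtp rtp://<remote_ip_1> &
ffmpeg -an -i /tmp/stream2 -vcodec copy -f rtp rtp://<remote_ip_2> &
./my_app --out1 /tmp/stream1 --out2 /tmp/stream2

Because a FIFO blocks the writer until the reader consumes the data, nothing accumulates on disk the way the mp4 files in the loop above would.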

Is there a way to download a video at the playback rate (a rate that would simulate user watching it)?

So I know that one can adjust the download rate in youtube-dl using the -r or --limit-rate flag; however, as part of a simulation test, I am trying to simulate a user watching a video, so I want to download at a rate such that the download takes as long as the video's duration: a 2-minute video should take 2 minutes to download, and so on.
I have meticulously reviewed the available options on the GitHub page, but there seems to be no option to do that natively. The next best thing I can think of is to get the video duration in seconds (let's call it t) and the video size in bytes (let's call it s) and then use s/t as the value for the --limit-rate flag.
However, there doesn't seem to be any option/flag to get the video file size in bytes!
Is there any way I can accomplish my goal here? I am open to using other tools/programs if this is outside the capabilities of youtube-dl.
To be more specific, I am working in a Linux server environment (no video card; it needs to run headlessly), and the videos I'm dealing with are MPEG-DASH videos described by an MPD file, so whatever tool I use needs to be able to parse and work with MPD files.
Thank you for your help.
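For what it's worth, youtube-dl's -j/--dump-json output often contains both duration and filesize for the selected format, which would make the s/t computation scriptable, roughly like this (a sketch, assuming jq is installed and that the URL is a hypothetical example; DASH formats sometimes report filesize as null, in which case you'd need a fallback estimate):

#!/bin/sh
url="https://example.com/manifest.mpd"            # hypothetical source
meta=$(youtube-dl -j "$url")                      # per-video JSON metadata
size=$(printf '%s' "$meta" | jq -r '.filesize')   # bytes; may be null for DASH
dur=$(printf '%s' "$meta" | jq -r '.duration')    # seconds
# --limit-rate accepts a plain bytes-per-second value
youtube-dl --limit-rate "$(( size / dur ))" "$url"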

Convert frames to video on demand

I'm working on a C++ project that generates frames to be converted to a video later.
The project currently dumps all frames as jpg or png files in a folder, and then I run ffmpeg manually to generate an mp4 video file.
This project runs on a web server, and an iOS/Android app (under development) will call this web server to have the video generated and downloaded.
The web service is pretty much done and working fine.
I don't like this approach for obvious reasons: the server dependency, cost, etc.
I successfully created a POC that exposes the frame-generator lib to Android, and I got it to save the frames in a folder; my next step is to convert them to video. I considered using one of the ffmpeg-for-Android/iOS libs and just calling it when the frames are done.
Although that seems to fix half of the problem, I found a new one: depending on the configuration, each frame can end up being 200 kB+ in size, so depending on the number of frames, it will take a lot of space on the user's device.
I'm sure this will become a huge problem very easily.
So I believe the ideal solution would be to generate the mp4 file on demand as each frame is created; in the end no storage space would be taken, as I wouldn't need to save a file for each frame.
The problem is that I don't know how to do that. I don't know much about ffmpeg; I know it's open source, but I have no idea how to reference it from the frame generator and generate the video "on demand".
I heard about libav as well, but again, same problem...
I would really appreciate any suggestion on how to do this. What I need is basically a way to generate an mp4 video file given a list of frames.
Thanks for any help!
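One common pattern for this (a sketch, not a drop-in solution: it assumes a POSIX system with an ffmpeg binary on the PATH, and the 640x480 RGB frame size is picked purely for illustration) is to pipe raw frames straight into an ffmpeg child process, so no intermediate image files ever touch the disk:

#include <cstdio>
#include <vector>

int main() {
    const int W = 640, H = 480, NFRAMES = 90;
    // Spawn ffmpeg reading raw RGB frames from stdin; popen is POSIX
    // (Windows has _popen; on mobile you'd go through an ffmpeg wrapper lib).
    FILE* ff = popen(
        "ffmpeg -y -f rawvideo -pix_fmt rgb24 -s 640x480 -r 30 -i - "
        "-c:v libx264 -pix_fmt yuv420p output.mp4", "w");
    if (!ff) return 1;

    std::vector<unsigned char> frame(W * H * 3);
    for (int i = 0; i < NFRAMES; ++i) {
        // ... fill `frame` with this frame's generated pixels ...
        fwrite(frame.data(), 1, frame.size(), ff);
    }
    pclose(ff);  // ffmpeg finalizes the mp4 when its stdin closes
    return 0;
}

Linking libavformat/libavcodec directly would avoid the external process entirely (that is what the mobile ffmpeg wrappers do under the hood), but the pipe version is a much smaller first step.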

FFmpeg - Raw compressed data to video

I'm trying to use FFmpeg to create a video. So far I've been playing with the muxing example
(http://ffmpeg.org/doxygen/trunk/muxing_8c-source.html), and I'm able to create a compressed video from an already existing video.
Because my program is going to run on an embedded platform, I would like to use some custom code (written by a colleague) to compress the video data and place it into the video file.
So I'm looking for a way to create a video file in C/C++ using ffmpeg in which I have full control over the compression part (basically to stop ffmpeg from doing the compression for me and to insert my own code instead).
To clarify, I'm planning to use this to save footage from an intelligent camera into a compressed H.264 MPEG-4 file.
You could pipe the output with -vcodec rawvideo to your custom program, or write your compressor as an ffmpeg codec and have ffmpeg handle it.
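The pipe variant might look roughly like this (a sketch; my_encoder stands in for the colleague's custom compressor and is hypothetical, as is the input source name):

ffmpeg -i camera_input -vcodec rawvideo -f rawvideo - | ./my_encoder > output.h264

Here ffmpeg only decodes the camera input to raw frames; the custom program does all of the compression.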
By the way, ffmpeg was superseded by avconv. ffmpeg only exists for backwards compatibility now.
Edit: apparently avconv is a newer fork of ffmpeg, and seems to have more support. Either way, the options are almost the same.

Writing uncompressed AVI video

Is there any simple and small library, or code sample, to write uncompressed AVI video files in C++?
Every resource I find uses Video for Windows, and I need this to be portable.
I tried looking at the AVI container documentation, but it looks really cumbersome.
I haven't done it myself but I suggest using libavformat, part of ffmpeg (LGPLv2.1/GPLv2+).
If you want something that is super easy, then write the BMPs to disk as a numbered sequence, then invoke ffmpeg via system() or something similar like this:
ffmpeg -r 15 -i picture_%04d.bmp -vcodec rawvideo -y output.avi
Note: the -r option, placed before -i, tells ffmpeg to read the BMP sequence at 15 frames per second, which becomes the frame rate of the final movie.
You could achieve the same result by calling into the libavformat/libavcodec libs that ffmpeg is based on, but that may not fit the "small" and "simple" requirement you have.
There is always OpenCV, which is multiplatform and relatively easy to push data into.
Or you can always pipe the video data to another executable.
For instance, in unix you can do:
In your program:
#include <stdio.h>

char myimage[640*480*4];
// ... fill myimage with one 640x480 RGBA frame ...
fwrite(myimage, 1, 640*480*4, stdout);
And in a script that runs your program:
./myprogram | \
mencoder /dev/stdin -demuxer rawvideo -rawvideo w=640:h=480:fps=30:format=rgba \
-ovc lavc -lavcopts vcodec=mpeg4:vbitrate=9000000 \
-oac copy -o output.avi
(Note that this does compress the video - uncompressed is just a matter of changing the mencoder command-line)
I believe you can also use ffmpeg this way, or x264. You can also start the encoder from within your program and write to a pipe (making the whole process as simple as if you were using a library) - and this should work on Windows, Linux, or Mac (with minor modifications to the mencoder command line).
Obviously there are no licensing issues either (apart from distribution of the mencoder executable), as you are not linking to any code or compiling anything in.
While not quite what you want, it does have the advantage that a second processor will automatically be used for the encoding.
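Following up on the ffmpeg suggestion above, the uncompressed variant of the same pipe might look like this (a sketch, reusing the hypothetical myprogram that emits 640x480 RGBA frames):

./myprogram | ffmpeg -f rawvideo -pix_fmt rgba -s 640x480 -r 30 -i - -vcodec rawvideo output.avi

With -vcodec rawvideo on the output side, the frames are stored uncompressed in the AVI container, which is what the question asks for.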