How to tell which video failed to download from a YouTube playlist using youtube-dl

I've been using youtube-dl for a while now to batch-download playlists.
Sometimes youtube-dl starts up and prints a message along the lines of "getting 200 video id's, downloading 199 of them".
One video is missing (199 of 200 successful). Is there any way to find out which one(s) failed?

The numbers in the output are a result of --playlist-start and --playlist-end. If you're passing in neither option, the output should always be 200 of 200. If you are passing them in, check the values: pass 1 for playliststart and None (or -1 in older versions) for playlistend to get the whole list.
If that does not solve your problem, create an issue in the youtube-dl issue tracker. Include the entire output of a problematic download, and pass the --verbose option to youtube-dl so the developers can see where the problem lies.
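On the command line, the equivalent checks look something like this (<playlist URL> is a placeholder for your playlist link):

# Omitting --playlist-start/--playlist-end downloads the whole playlist
# (the default range runs from the first entry to the last):
youtube-dl <playlist URL>

# When filing an issue, rerun with --verbose and attach the full output:
youtube-dl --verbose <playlist URL>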

Related

How to download and get video information at the same time in youtube-dl?

Using the youtube-dl CLI, how can I get information about the video (as JSON output) while the video is being downloaded by the app?
When I use this command:
youtube-dl https://www.youtube.com/watch?v=jNQXAC9IVRw
it only shows me the output filename. But what about the duration, resolution, etc.?
I currently do this with two requests, but is it possible in one go?
It would be great if it dumped the video metadata into a JSON file as well as the output filename, because I'm also struggling to programmatically get the path of the downloaded file (I have to use a regex).
Just add --print-json to your command line:
youtube-dl --print-json https://www.youtube.com/watch?v=jNQXAC9IVRw
This outputs a big JSON object while the video is still downloading.
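The JSON includes a _filename field along with duration, width, height and so on, so one way to avoid the regex (assuming the jq tool is installed; the field names are those youtube-dl emits for YouTube videos) is something like:

# Save the metadata while downloading, then read the fields you need:
youtube-dl --print-json https://www.youtube.com/watch?v=jNQXAC9IVRw > info.json
jq -r '._filename, .duration, .width, .height' info.json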

Is there any specific command to record part of a livestream that is still being broadcast?

The livestream doesn't end (for now; if it does end, I think it will be removed from YouTube, which is why I want to download it). The video is still being broadcast, like NASA streams or news channels. The thing is that only about 10-11 hours of the transmission can still be watched back, while the broadcast itself has lasted about 3 days, so it was only a matter of time before the first concerts were no longer available on the broadcast.
This is the video: https://www.youtube.com/watch?v=rE6QI0ywr0c
I want to download some of the concerts, but the parts I want are disappearing as time passes. Right now I'm only interested in the Disclosure concert; it starts at approximately -3:38:12. I mention it in case someone wants to help me.
I was trying this command, but it only prints text that I don't understand (I'll post it in the comments, with all the screenshots and their details). The command is: yt-dlp.exe -f (bestvideo+bestaudio/best) "link" --postprocessor-args "ffmpeg:-ss 00:00:00 -to 00:00:00" -o "%(title)s_method1.%(ext)s"
The idea for that command came from these pages:
https://www.reddit.com/r/youtubedl/wiki/howdoidownloadpartsofavideo/
https://github.com/yt-dlp/yt-dlp/issues/686
I also tried to follow "How do you use youtube-dl to download live streams (that are live)?", but I can't find the HLS m3u8 URL in Chrome or Chrome Dev (yes, I open F12 (Chrome Developer Tools), go to Network, and filter for m3u8, but nothing shows up).
I should mention that I don't have extensive knowledge of the command line or yt-dlp. I only learned what is necessary to download videos, you know: yt-dlp.exe -F (link) and then yt-dlp.exe -f (numbers for the resolution and audio) (link).
So if you can recommend any programs or commands, please let me know as precisely as possible.
I'll post any new info in the comments.
PS: sorry for my English.
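As an aside, yt-dlp can print the stream's direct media URL itself, which may be easier than hunting for it in Chrome's Network tab (the -g/--get-url flag exists in both youtube-dl and yt-dlp; for a YouTube live stream the printed URL is typically an HLS m3u8 manifest):

# Print the direct stream URL instead of downloading:
yt-dlp -g "https://www.youtube.com/watch?v=rE6QI0ywr0c"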

pagedown::chrome_print() extremely slow for some xaringan html slides

I'm creating pdf versions of html slides generated from R markdown files. I've come across a puzzling behaviour. When I run pagedown::chrome_print on the html with output specified as xaringan::moon_reader, the operation fails with the timeout message:
Error in force(expr) : Failed to generate output in 30 seconds (timeout)
Here is an example of a call to convert such a xaringan html file which produces this timeout error on my machine:
pagedown::chrome_print("https://stat540-ubc.github.io/lectures/lectures_2020/lect09_MultipleTesting.html")
The Rmd source for this html is located here. If I increase the timeout argument of chrome_print to something very large (several thousand seconds), the operation appears to take a lot of resources (the computer fans turn on and the machine becomes hot), but the pdf output is eventually produced. However, if I change the output format in the Rmd from xaringan::moon_reader to slidy_presentation, chrome_print runs successfully on the resulting html and produces a pdf in just a few seconds (with no change to the default timeout argument).
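For reference, raising the timeout looks something like this (the value is in seconds; 3000 is just an illustrative choice):

# Give chrome_print far longer than the 30-second default before it gives up
pagedown::chrome_print(
  "https://stat540-ubc.github.io/lectures/lectures_2020/lect09_MultipleTesting.html",
  timeout = 3000
)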
I have the same issue with other slide decks that I have created from a similar template to the one linked above, but this doesn't happen with every xaringan html file. For example, I am able to use chrome_print to successfully convert this xaringan html file to pdf (with no change to the default timeout argument):
pagedown::chrome_print("https://annakrystalli.me/talks/xaringan/xaringan.html")
Other things I tried:
I installed decktape and ran xaringan::decktape() on the xaringan html file, which also produced a timeout error. I'm not sure how to increase the time limit with this method, so I don't know whether it would eventually succeed if given enough time.
I tried using the latest versions of Google Chrome and Chromium with the chrome_print function and got the same results as described above. I'm using Mac OSX 10.15.5.
I would like to stick with xaringan html slides as they have some features I prefer. However, the current method of conversion to pdf is not sustainable for me since I will need to convert many similar htmls, as well as update them periodically. If anyone has come across this or can suggest what might be causing this extreme slowdown when converting my xaringan htmls to pdfs, I'd appreciate your input.

youtube-dl "best" option doesn't do anything

I'm trying to download a 4k video from youtube. For this, I used the command
youtube-dl -f best https://youtu.be/VcR5RCzWfeY
However, this command only downloads the video in 720p. Manually specifying the format does seem to work:
youtube-dl https://youtu.be/VcR5RCzWfeY -f 313+bestaudio
The documentation states that specifying nothing should download the best quality possible, but I always get the default 720p. This tends to be an issue when I am downloading playlists containing files of different qualities. So what gives? Is there some other option I should be using?
youtube-dl downloads the best quality by default. (This may not be the highest resolution for every supported site, but for YouTube it usually is.)
-f best is not the default. It advises youtube-dl to download the best single file format. For many supported sites, the best single format will be the best overall, but that does not apply to YouTube.
To get the highest quality, simply run youtube-dl without any -f:
youtube-dl https://youtu.be/VcR5RCzWfeY
For your example video, this will produce a 7680x4320 video file weighing 957MB.
Note that this requires ffmpeg to be installed on your machine and available in your PATH (or specified with --ffmpeg-location). To find out which version of ffmpeg you have, type ffmpeg.
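If you prefer to be explicit rather than relying on the default, the same selection can be spelled out; with a reasonably recent youtube-dl and ffmpeg available, this is equivalent to the default behaviour:

# Merge the best video and best audio streams, falling back to the best
# single file when merging is not possible:
youtube-dl -f "bestvideo+bestaudio/best" https://youtu.be/VcR5RCzWfeY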

Resume youtube-dl download for MP3s

I am trying to download an entire playlist using youtube-dl, this way:
youtube-dl -citwx --audio-format mp3 --audio-quality 320K <playlist>
I believe it extracts the audio without having to download the actual video.
The problem is that I want to be able to stop and resume this download, which is impossible using only these arguments. However, if I add the -k option, the program will download the original videos (which takes a lot longer), convert them, and keep the original files (which takes a lot more space).
Is there any way for me to resume such a transfer without having to download the actual video files?
Sounds to me like there is no way. If it extracts just the audio, it seems like it needs to be done in one go. Maybe try writing a script that takes the file path and URL as arguments, passes them to youtube-dl, and then deletes the video file when it's done. It takes more time that way, but the space issue is gone.
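A minimal sketch of that wrapper idea, assuming youtube-dl and ffmpeg are on the PATH (the filenames and the 320 kbps target are illustrative, and the name predicted by --get-filename can differ from the real one when formats are merged):

#!/bin/sh
# Download one video, convert it to MP3, then delete the video to save space.
url="$1"
file="$(youtube-dl --get-filename -o '%(id)s.%(ext)s' "$url")"
youtube-dl -o '%(id)s.%(ext)s' "$url"
ffmpeg -i "$file" -vn -b:a 320k "${file%.*}.mp3"
rm -- "$file"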
I found the answer while browsing the man page:
--download-archive FILE    Download only videos not listed in the archive file. Record the IDs of all downloaded videos in it.
youtube-dl -citwx --download-archive progress.txt --audio-format mp3 --audio-quality 320K <playlist> is the correct command.
A note: --title (the t in -citwx) is deprecated. The command should now be youtube-dl -ciwx -o "%(title)s.%(ext)s" --download-archive progress.txt --audio-format mp3 --audio-quality 320K <playlist>