I am using youtube-dl to download videos and want to number them and put them in relevant folders. It does work, except that the numbering starts at 000001 and counts upwards. I want three-digit numbers instead, for example 001 and upwards. I've looked everywhere for a solution. The documentation says the syntax is autonumber-size Number, so I've used the following output template:
youtube-dl -o "%(chapter_number)s - %(chapter)s/%(autonumber-size 3)s %(title)s.%(ext)s" -u USERNAME -p PASSWORD URL
This creates files prefixed with NA instead of 001 and upwards.
Can anyone please help me? Thanks.
Use %(autonumber)03d in the output template.
This is explained in youtube-dl's FAQ.
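For example, dropping that into the command from the question (a sketch I haven't run; USERNAME, PASSWORD and URL are placeholders):
youtube-dl -o "%(chapter_number)s - %(chapter)s/%(autonumber)03d %(title)s.%(ext)s" -u USERNAME -p PASSWORD URL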
I am new to ImageNet and would like to download full-sized images from one of its subsets/synsets; however, I have found it incredibly difficult to find out which subsets are available and where to find the ID code I need in order to download one.
All previous answers (from only 7 months ago) contain links which are now invalid. Some seem to imply there is some sort of algorithm for constructing an ID, since it is linked to WordNet?
Essentially I would like a dataset of plastic or plastic waste, or ideally marine debris. Any help on how to get the relevant ImageNet ID, or suggestions of other datasets, would be much appreciated!
I used this repo to achieve what you're looking for. Follow these steps:
Create an account on the ImageNet website
Once you get permission, download the list of WordNet IDs for your task
Once you have the .txt file containing the WordNet IDs, you are all set to run main.py
You can adjust the number of images per class to suit your needs
By default, the ImageNet images are automatically resized to 224x224. To remove that resizing, or to implement other preprocessing, simply modify the code at line #40
Source: refer to this Medium article for more details.
You can find all 1000 classes of ImageNet here.
EDIT:
The above method doesn't work after March 2021. As per this update:
The new website is simpler; we removed tangential or outdated functions to focus on the core use case—enabling users to download the data, including the full ImageNet dataset and the ImageNet Large Scale Visual Recognition Challenge (ILSVRC).
So with this change, to parse and search ImageNet you may now have to use nltk.
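For instance, here is a minimal sketch of my own (not part of the original answer) that uses nltk's WordNet corpus to look up candidate synsets; it assumes the usual convention that an ImageNet wnid is the letter "n" followed by the synset's zero-padded WordNet noun offset:
# Look up WordNet noun synsets for a query term and print ImageNet-style wnids.
import nltk
nltk.download('wordnet')  # one-time download of the WordNet corpus
from nltk.corpus import wordnet as wn

for synset in wn.synsets('plastic', pos=wn.NOUN):
    wnid = 'n{:08d}'.format(synset.offset())  # "n" + 8-digit WordNet offset
    print(wnid, synset.name(), '-', synset.definition())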
More recently, the organizers hosted a Kaggle challenge based on the original dataset with additional labels for object detection. To download the dataset you need to register a Kaggle account and join this challenge. Please note that by doing so, you agree to abide by the competition rules.
Please be aware that this file is very large (168 GB) and the download will take anywhere from minutes to days depending on your network connection.
Install the Kaggle CLI and set up credentials as per this guideline.
pip install kaggle
Then run these:
kaggle competitions download -c imagenet-object-localization-challenge
unzip imagenet-object-localization-challenge.zip -d <YOUR_FOLDER>
Additionally, to understand the ImageNet hierarchy, refer to this.
The livestream doesn't end (for now; and if it ends, I think it will be erased from YouTube, which is why I want to download it). The video is actually still being broadcast, you know, like NASA streams or news channel streams. The catch is that only about 10-11 hours of the transmission can be watched back, while the broadcast has been going for about 3 days, so it was only a matter of time before the first concerts were no longer available to watch in the broadcast.
This is the video: https://www.youtube.com/watch?v=rE6QI0ywr0c
I want to download some of the concerts, but the parts I wanted are disappearing as time passes. Right now I'm only interested in the Disclosure concert; it starts at approximately -3:38:12 (I mention it in case someone wants to help).
I was trying the following command, but it only outputs text that I don't understand (I'll post the images with the relevant info in the comments). The command is this: yt-dlp.exe -f (bestvideo+bestaudio/best) "link" --postprocessor-args "ffmpeg:-ss 00:00:00 -to 00:00:00" -o "%(title)s_method1.%(ext)s"
The idea for that command came from these:
https://www.reddit.com/r/youtubedl/wiki/howdoidownloadpartsofavideo/
https://github.com/yt-dlp/yt-dlp/issues/686
Also, I tried to follow "How do you use youtube-dl to download live streams (that are live)?", but I can't get the HLS m3u8 URL in Chrome or Chrome Dev (yes, I open F12 (Chrome Developer Tools), go to Network and filter for m3u8, but I don't find anything).
I should mention that I don't have extensive knowledge of coding or of yt-dlp. I only learned what's necessary to download videos, you know: yt-dlp.exe -F (link) and then yt-dlp.exe -f (the numbers for the resolution and audio) (link).
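For reference, that workflow looks like this (the 137+140 format IDs are just a hypothetical example; I use whatever -F actually lists):
yt-dlp.exe -F "https://www.youtube.com/watch?v=rE6QI0ywr0c"
yt-dlp.exe -f 137+140 "https://www.youtube.com/watch?v=rE6QI0ywr0c"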
So if you recommend any programs or commands, please let me know as precisely as possible.
I'll post any new info in the comments.
PS: sorry for my English.
I'm using "Safe Search and Replace on Database with Serialized Data v3.1.0" to do a search and replace of my database. I've been trying to write it myself but haven't had any luck and Google seems to not have the answer I'm looking for.
Basically I need a string that will target upload folders from 2010-2015 with jpg|jpeg|png|gif file types that I can replace the string with a simple placeholder.png file I created.
Here's what I've managed to make that doesn't seem to work lol sorry if it's just terrible:
(uploads)(.*?).(jpg|png|gif|jpeg)
or
^\/wp-content\/uploads\/((2014|2013|2012|2011)|(2015\/(01|02|03|04|05|06|07|08)))\/(.+)(jpg|png|gif|jpeg)
I've tried other variations I've made but when conducting a "dry run" it states that 0 cells would have been changed.
The image urls are full and not relative.
Can anyone assist?
Please try:
(uploads\/201[0-5]\/(?:0[1-9]|1[0-2])\/.*?\.(?:jpg|png|gif|jpeg))
REGEX 101 DEMO
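Just to illustrate what that pattern matches and what replacing the match with placeholder.png could look like, here is a rough Python sketch (my own, not the search-and-replace tool itself; the URLs are made up):
import re

pattern = re.compile(r'(uploads\/201[0-5]\/(?:0[1-9]|1[0-2])\/.*?\.(?:jpg|png|gif|jpeg))')

urls = [
    'http://example.com/wp-content/uploads/2014/05/photo.jpg',  # inside 2010-2015, should match
    'http://example.com/wp-content/uploads/2016/01/photo.png',  # outside the range, left untouched
]
for url in urls:
    # Replace the whole matched uploads path with the placeholder image
    print(pattern.sub('uploads/placeholder.png', url))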
Hope this works for you:
^\/wp-content\/uploads\/(201[0-4]\/\d{2}|2015\/0[1-8])\/(.+\.(jpg|png|gif|jpeg))$
I tested it here: regex101.
You will find a more in-depth explanation there.
Be aware that the uploads folder format can be changed in the Dashboard.
What I am trying to accomplish is to optimize one parameter at a time for one learning algorithm. Take Ridor for example, and let's say I want to optimize the number-of-folds (-F) parameter and run it from 2-10 or whatever. I then want output in a format that is easy to parse, so I can choose a final value myself. I think this should be possible with CVParameterSelection; even if not, I would like help getting it to work on at least a basic level.
I have selected CVParameterSelection as my classifier, and as a parameter to CVParameterSelection I have chosen Ridor as the classifier to optimize. What I have trouble doing is telling CVParameterSelection that it is the -F parameter I want to optimize, going from 2 to 10 in increments of 1, in the format 2 10 9 as per the instructions here: http://weka.wikispaces.com/Optimizing+parameters. The choice of Ridor and of this parameter is completely arbitrary; I want to be able to run any algorithm with any parameter and have it vary the parameter over a range.
I cannot find the ArrayEditor that this tutorial speaks of; I have clicked literally everything everywhere, and nothing looks like an array editor or is named ArrayEditor. The default full command line is weka.classifiers.meta.CVParameterSelection -X 10 -S 1 -W weka.classifiers.rules.Ridor -- -F 3 -S 1 -N 2.0.
I have tried sending -F 2 10 9 on the command line to both CVParameterSelection and Ridor. I have also tried reading section 11.5 on optimizing performance in the Weka book, but I do not understand the instructions there either.
This feels like it should be really simple and obvious. Can someone point out what I am doing wrong and post a detailed description of exactly how to do this? Please assume I am a total idiot, because it really should not take many, many hours to do this.
While configuring CVParameterSelection, you will find a field named "CVParameters". Clicking it opens a new window named "weka.gui.GenericArrayEditor". Write your parameter and its range inside it, as shown in the Weka tutorial (for your case, F 2 10 9), then close the window.
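If you prefer the command line, the same parameter specification can, as far as I know, be passed with CVParameterSelection's -P option; a rough sketch (data.arff stands in for your dataset, and I dropped -F 3 from the Ridor options since -F is the parameter being swept):
java weka.classifiers.meta.CVParameterSelection -P "F 2 10 9" -X 10 -S 1 -t data.arff -W weka.classifiers.rules.Ridor -- -S 1 -N 2.0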
I've been using youtube-dl for a while now to batch download playlists.
Sometimes, youtube-dl begins to run, and prints a message like "getting 200 video id's, downloading 199 of them" - or something like that.
1 video is missing (199 of 200 successful). Is there any way to find out which one(s) failed?
The numbers in the output are a result of --playlist-start and --playlist-end. If you're passing in neither option, then the output should always be 200 of 200. If you're passing them in, check the values. Pass in 1 for playliststart and None (or -1 in older versions) for playlistend to get the whole list.
If that does not solve your problem, create an issue in the youtube-dl issue tracker. Please include the entire output of a problematic download, run with the --verbose option passed to youtube-dl; this allows the developers to find where your problem is.
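If you are driving youtube-dl from Python rather than the command line, these are roughly the options the answer refers to (a sketch; PLAYLIST_URL is a placeholder):
import youtube_dl

ydl_opts = {
    'playliststart': 1,    # start at the first entry
    'playlistend': None,   # None (or -1 in older versions) means "to the end"
    'verbose': True,       # verbose output is what the issue tracker asks for
}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['PLAYLIST_URL'])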