Issue while downloading a large playlist (200+ videos) using youtube-dl

[youtube] h6863vjJ9Ds: Downloading webpage
ERROR: unable to download video data: HTTP Error 403: Forbidden
I am facing this issue while downloading and extracting audio from a music playlist of 250+ videos.
The process runs for 20+ videos, but then it stops and displays the error above.
I tried clearing the cache, but it didn't help.

Try it again, but add this flag to the command:
youtube-dl --ignore-errors
In my experience, if a playlist has an entry that is restricted by age or location or has been deleted, then it will bork and stop right away. The above will skip the entry that is causing difficulties and continue.
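If you are scripting this instead of calling the CLI, the same behaviour is available through youtube-dl's embedded Python API; a rough sketch (the playlist URL and output template are placeholders, and extracting audio requires ffmpeg to be installed):
import youtube_dl

options = {
    'ignoreerrors': True,  # equivalent of --ignore-errors: skip broken entries and keep going
    'format': 'bestaudio/best',
    'outtmpl': '%(title)s.%(ext)s',
    'postprocessors': [{'key': 'FFmpegExtractAudio', 'preferredcodec': 'mp3'}],
}

with youtube_dl.YoutubeDL(options) as ydl:
    ydl.download(['https://www.youtube.com/playlist?list=PLxxxxxxxxxxxx'])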
Note, though, that due to the response youtube-dl gets from YouTube, playlist downloads are annoyingly limited to the first 100 entries.
youtube-dl is hosted on GitHub, and given the enormous number of pending pull requests (900+) and reported issues (3.9k), I don't think anyone is actively putting much work into maintaining it anymore.

Related

Problem with AR.js / Access to Custom NFT

At the start, please be gentle: I'm absolutely new to HTML and JavaScript.
I want to use Web AR with tracking of my own custom images. I set up the image-tracking example on Glitch and generated NFT descriptors for the image, but got stuck accessing them.
I found this discussion on Gitter, which says:
So I connected github to glitch, pulled the glitch repo, pushed the files to a branch, pushed the branch, opened the glitch terminal, merged the new branch, refreshed, and voila.
but it's way too fast for me. I understand what githack is used for in this code snippet:
url="https://arjs-cors-proxy.herokuapp.com/https://raw.githack.com/AR-js-org/AR.js/master/aframe/examples/image-tracking/nft/trex/trex-image/trex"
But how can I upload the NFT files to GitHub? When I try to upload them, it says these file types are not supported.
And how can I set up a folder structure like the one proposed in:
Image Tracking using AR.js - Problem with Custom Image Descriptors?
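For what it's worth, the NFT generator normally produces three descriptor files per image (*.fset, *.fset3 and *.iset), and AR.js expects the marker URL to point at their common basename with no extension, just as the trex example above ends in .../trex-image/trex. A hypothetical repository layout (all names here are placeholders) could look like:
my-ar-project/
  nft/
    my-image.fset
    my-image.fset3
    my-image.iset
with the tracking entity then referencing .../my-ar-project/nft/my-image through a raw.githack-style URL so the browser can fetch the files.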

Amazon transcoded video is slow when loading for the first time

We are working on a video site. We add videos to an S3 bucket and transcode them, and that part works fine for us. The issue is that when I play a transcoded video for the first time it is slow; if I play it a second time it works fine. What could be causing this? I searched a lot on Google but didn't find any proper help. Can anyone please tell me how I can resolve this issue?
For your application, AWS CloudFront will be an important piece. Configure CloudFront with the S3 bucket as the origin, try calling these videos through the CloudFront URL, and see the difference.
CloudFront is a content delivery network and is well known for delivering static content.
As for the video running slowly on the first request and smoothly on the second or third, that may be down to the browser cache on the user's machine; this is normal behaviour for static content on any web application or website.
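Coming back to the CloudFront suggestion: in practice it just means handing the player a CloudFront URL instead of the raw S3 URL once the distribution is set up; a minimal sketch (the bucket and distribution domain names below are placeholders):
# Placeholders: replace with your own bucket and the CloudFront distribution
# whose origin is that bucket.
S3_BASE = "https://my-video-bucket.s3.amazonaws.com"
CDN_BASE = "https://d1234abcd.cloudfront.net"

def playback_url(key):
    # Serve through CloudFront so edge caches absorb the first-request latency
    # instead of every cold request going all the way back to S3.
    return "%s/%s" % (CDN_BASE, key)

print(playback_url("transcoded/video-123/index.m3u8"))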

Use AWS Elastic Transcoder and S3 to stream HLSv4 without making everything public?

I am trying to stream a video with HLSv4. I am using AWS Elastic Transcoder and S3 to convert the original file (e.g. *.avi or *.mp4) to HLSv4.
Transcoding is successful, producing several *.ts and *.aac media files (each with an accompanying *.m3u8 playlist) and a master *.m3u8 playlist linking to the media-specific playlists. I feel fairly comfortable everything is in order here.
Now the trouble: this is a membership site and I would like to avoid making every video file public. The typical way to do this with S3 is to generate temporary keys server-side which you append to the URL. Trouble is, that changes the URLs to the media files and their playlists, so the existing *.m3u8 playlists (which reference the other playlists and media) do not contain these keys.
One option that occurred to me would be to generate these playlists on the fly, as they are just text files. The obvious trouble is the overhead; it seems hacky, and these posts were discouraging: https://forums.aws.amazon.com/message.jspa?messageID=529189, https://forums.aws.amazon.com/message.jspa?messageID=508365
After spending some time on this, I feel like I'm going around in circles and there doesn't seem to be a super clear explanation anywhere for how to do this.
So as of September 2015, what is the best way to use AWS Elastic Transcoder and S3 to stream HLSv4 without making your content public? Any help is greatly appreciated!
EDIT: Reposting my comment below with formatting...
Thank you for your reply, it's very helpful.
The plan forming in my head is to keep the converted *.ts and *.aac files on S3, but to generate the 6-8 *.m3u8 files plus the master playlist and serve them directly from the app server. The user hits the "Play" page and jwplayer requests the master playlist from the app server (e.g. "/play/12/"). Server-side, this loads the *.m3u8 files from S3 into memory and rewrites the media-specific links so they point at S3 with a freshly generated URL token.
So: user --> jwplayer --> local master m3u8 (auth verified server-side) --> local media m3u8s (auth verified server-side) --> S3 media files (accessed with signed URLs and temporary tokens).
Do you see any issues with this approach? Such as "you can't reference external media from a playlist" or something similarly catch-22-ish?
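For what it's worth, a rough sketch of that rewriting step using boto3 presigned URLs (the bucket name, key layout and expiry are assumptions, and it assumes the transcoder wrote relative segment names into the media playlists):
import boto3

s3 = boto3.client('s3')
BUCKET = 'my-hls-bucket'  # placeholder

def signed_media_playlist(playlist_key, expires=300):
    # Fetch the transcoder-generated media playlist from S3.
    body = s3.get_object(Bucket=BUCKET, Key=playlist_key)['Body'].read().decode('utf-8')
    prefix = playlist_key.rsplit('/', 1)[0]
    lines = []
    for line in body.splitlines():
        if line and not line.startswith('#'):
            # Segment reference (e.g. a .ts file): swap in a short-lived signed S3 URL.
            line = s3.generate_presigned_url(
                'get_object',
                Params={'Bucket': BUCKET, 'Key': '%s/%s' % (prefix, line)},
                ExpiresIn=expires)
        lines.append(line)
    return '\n'.join(lines)
The master playlist would be rewritten the same way, except its entries point back at the app server's /play/... routes rather than at S3.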
Dynamically generated playlists are one way to go. I actually implemented something like this as an Nginx module and it works very fast, though it's written in C and compiled, not PHP.
The person in your first link is more likely to be having issues because of their 1-second chunk duration. That adds a lot of requests and overhead; the value recommended by Apple is 10 seconds.
There are solutions like HLS encrypted with AES-128 (supported by Elastic Transcoder), which also adds overhead if you do it on the fly, and HLS with DRM such as PHLS/Primetime, which will most likely get you into a lot of trouble on the client side.
There seems to be a way to do it with Amazon CloudFront. Please note that I haven't tried it personally and you need to check if it works on Android/iOS.
The idea is to use Signed Cookies instead of Signed URLs. They were apparently introduced in March 2015. The linked blog entry even uses HLS as an example.
Instead of dynamic URLs, you send a Set-Cookie header after you authenticate the user. The cookie (hopefully) gets passed along with every request (playlists and segments), and CloudFront decides whether to allow access to your S3 bucket or not.
You can find the documentation here:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
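Not tested here either, but a minimal sketch of what issuing those signed cookies could look like in Python with a custom policy (the distribution domain, path pattern, key pair ID and expiry are all placeholders; the private key is the one matching your CloudFront key pair):
import base64
import json
import time

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = 'APKAEXAMPLE'                                   # placeholder key pair ID
RESOURCE = 'https://d1234abcd.cloudfront.net/hls/video-12/*'  # placeholder path pattern

def _cf_b64(data):
    # CloudFront's URL-safe base64 variant: + becomes -, = becomes _, / becomes ~
    return base64.b64encode(data).decode('ascii').replace('+', '-').replace('=', '_').replace('/', '~')

def signed_cookies(private_key_pem, expires_in=3600):
    policy = json.dumps({
        'Statement': [{
            'Resource': RESOURCE,
            'Condition': {'DateLessThan': {'AWS:EpochTime': int(time.time()) + expires_in}},
        }],
    }, separators=(',', ':'))
    key = serialization.load_pem_private_key(private_key_pem, password=None, backend=default_backend())
    signature = key.sign(policy.encode('utf-8'), padding.PKCS1v15(), hashes.SHA1())
    return {
        'CloudFront-Policy': _cf_b64(policy.encode('utf-8')),
        'CloudFront-Signature': _cf_b64(signature),
        'CloudFront-Key-Pair-Id': KEY_PAIR_ID,
    }

# Return these as Set-Cookie headers after authenticating the user; the browser then
# sends them with every playlist and segment request to the CloudFront domain.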

Download speed in Google Cloud Storage

I am using MediaIoBaseDownload to implement downloads from GCS.
But I found that there is always a gap of about 5 seconds between responses, and if I download two files at the same time, the gap between responses grows to around 10 seconds.
Upload speed is fine; this only happens while downloading.
Is there some limitation on the download API? I could not find one documented.
After adding some logging I found that most of the time is spent in response.read() in httplib2.
Could this be a limit enforced by the GCS servers, or is there some bucket setting (e.g. DRA) that would affect download speed?
I am using Python 2.7.8.
Thanks!
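For reference, a stripped-down version of the loop being described, with an explicit chunksize so each next_chunk() call covers more data per round trip (the bucket, object name and chunk size are placeholders; authentication setup is assumed to be in place):
import io

from googleapiclient.discovery import build
from googleapiclient.http import MediaIoBaseDownload

# Assumes application default credentials (or an otherwise authorized http object).
service = build('storage', 'v1')

request = service.objects().get_media(bucket='my-bucket', object='big-file.bin')

with io.FileIO('big-file.bin', 'wb') as fh:
    downloader = MediaIoBaseDownload(fh, request, chunksize=8 * 1024 * 1024)
    done = False
    while not done:
        status, done = downloader.next_chunk()
        print('Downloaded %d%%' % int(status.progress() * 100))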

Setting Request Time Limit in Drupal

Does anyone know how to increase the request time limit in Drupal? I'm trying to download large files via the web services module, but my token keeps expiring because the request takes so long. I know there is a setting in Drupal for this, but I just can't find it.
UPDATE
So I found out how to increase the request time (/admin/build/services/settings), but that didn't work. I'm still getting "The request timed out" on files around 10 MB. Does anyone have any ideas? Also, I'm using ASIHTTPRequest and drupal-ios-sdk and downloading the files to an iPad.
It turns out the default timeOutSeconds property on ASIHTTPRequest was too small (10 seconds). When I increased it, my large files downloaded fine.