Download an Adobe Connect recording with youtube-dl

I'm trying to download this Adobe Connect recorded class with youtube-dl, but it shows me an error. Can anyone help me?
http://webinar2.um.ac.ir/p235bgxwomm/?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf
error:
youtube-dl.exe http://webinar2.um.ac.ir/p235bgxwomm/?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf
[generic] ?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf: Requesting header
WARNING: Falling back on generic information extractor.
[generic] ?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf: Downloading webpage
[generic] ?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf: Extracting information
ERROR: Unsupported URL: http://webinar2.um.ac.ir/p235bgxwomm/?OWASP_CSRFTOKEN=02580f34ad3e972c95f8a52eee4700b15783db104789f9c87909d1dedcb8f6bf
I'm sure that it is an Adobe Connect recording!
Thanks!

I wanted to use youtube-dl for an Adobe Connect recorded class too, but I was unsuccessful. There is a simple solution for downloading your recorded class; your Adobe Connect URL should look like this:
http://webinar2.um.ac.ir/abcdefghijk/
Add this to the end:
output/filename.zip?download=zip
So your URL should look like this:
http://webinar2.um.ac.ir/abcdefghijk/output/filename.zip?download=zip
Then, it will give you a zip file containing the resources of the recorded class. One of the disadvantages of this method is that you may need to add the audio to the video yourself. Sometimes there is more than one file for a whole class; for example, in my case, there were 3 files, each containing about 20 minutes of a 60-minute class.
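As a small aside (not part of the original answer), here is a minimal sketch in Node 18+/TypeScript of fetching that export zip once the URL is built; the recording ID and output filename are placeholders:
import { writeFile } from "node:fs/promises";

// Placeholder recording URL; replace "abcdefghijk" with your recording ID.
const recordingUrl = "http://webinar2.um.ac.ir/abcdefghijk";
const zipUrl = `${recordingUrl}/output/filename.zip?download=zip`;

const res = await fetch(zipUrl);
if (!res.ok) throw new Error(`Download failed: ${res.status} ${res.statusText}`);

// Note: this buffers the whole archive in memory; fine for a sketch,
// but a streaming pipeline would be better for very large recordings.
await writeFile("recording.zip", Buffer.from(await res.arrayBuffer()));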

Unable to PUT a big file (2 GB) to an AWS S3 bucket (Node.js) | RangeError: data is too long

I've scoured the entire internet and everybody gives different advice, but none of it helped me.
I'm currently trying to simply send the file.buffer that gets sent to my endpoint directly to the AWS bucket.
I'm using PutObjectCommand and have entered all the details correctly, but there's apparently a problem with using a simple await s3.send(command), because my 2.2 GB video is way too big.
I get this error when attempting to upload the file to the cloud:
RangeError: data is too long
    at Hash.update (node:internal/crypto/hash:113:22)
    at Hash.update (C:\Users\misop\Desktop\sebi\sebi-auth\node_modules\@aws-sdk\hash-node\dist-cjs\index.js:12:19)
    at getPayloadHash (C:\Users\misop\Desktop\sebi\sebi-auth\node_modules\@aws-sdk\signature-v4\dist-cjs\getPayloadHash.js:18:18)
    at SignatureV4.signRequest (C:\Users\misop\Desktop\sebi\sebi-auth\node_modules\@aws-sdk\signature-v4\dist-cjs\SignatureV4.js:96:71)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  code: 'ERR_OUT_OF_RANGE',
  '$metadata': { attempts: 1, totalRetryDelay: 0 }
}
I browsed quite a lot; lots of people say that I should be using a presigned URL. I did try: if I do await getSignedUrl(s3, putCommand, { expires: 3600 }); then I do get a generated URL, but no PUT is sent to the cloud. When I read a little more into it, getSignedUrl is just for generating the signed URL, so there's no way for me to use the Put command there, and I'm not sure how to approach this situation.
I'm currently working with:
"#aws-sdk/client-s3": "^3.238.0",
"#aws-sdk/s3-request-presigner": "^3.238.0",
Honestly, I've been testing lots of different approaches I saw online, but I wasn't successful even following Amazon's official documentation where they mention these things, and I truly don't want to implement multipart upload for videos smaller than 4-5 GB.
I'd be honored to hear any advice on this topic, thank you.
I'd like advice on how to implement a simple video upload to AWS S3, because I've made many failed attempts at doing so; there's lots of information out there and the vast majority of it doesn't work.
The solution to my problem was essentially using multer's S3 "addon" (multer-s3), which takes an s3 property and provides a ready-made solution.
The "multer-s3": "^3.0.1" version worked even with files of 5 GB and such. Solutions such as using the PutObject command inside the presigned-URL method, or presigned-POST methods, were unable to work with the file.buffer that the Node server receives after the file is submitted.
If you experienced the same problem and want a quick and easy solution, use the multer-s3 npm package.
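A minimal sketch of how that can look with Express and multer-s3; the route, region, and bucket name are placeholders, not from the original answer:
import express from "express";
import multer from "multer";
import multerS3 from "multer-s3";
import { S3Client } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "eu-central-1" }); // placeholder region

const upload = multer({
  storage: multerS3({
    s3,
    bucket: "my-video-bucket", // placeholder bucket name
    contentType: multerS3.AUTO_CONTENT_TYPE,
    key: (_req, file, cb) => cb(null, `videos/${Date.now()}-${file.originalname}`),
  }),
});

const app = express();

// multer-s3 streams the incoming file to S3 (multipart under the hood), so
// the whole multi-GB payload is never buffered and hashed in memory the way
// it is with a single PutObjectCommand.
app.post("/upload", upload.single("video"), (req, res) => {
  res.json({ location: (req.file as any).location });
});

app.listen(3000);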

How to facilitate downloading both CSV and PDF from API Gateway connected to S3

In the app I'm working on, we have a process whereby a user can download a CSV or PDF version of their data. The generation works great, but I'm trying to get it to download the file and am running into all sorts of problems. We're using API Gateway for all the requests, and the generation happens inside a Lambda on a POST request. The GET endpoint takes in a file_name parameter and then constructs the path in S3 and then makes the request directly there. The problem I'm having is when I'm trying to transform the response. I get a 500 error and when I look at the logs, it says Execution failed due to configuration error: Unable to transform response. So, clearly that's where I've spent most of my time. I've tried at least 50 different iterations of templates and combinations with little success. The closest I've gotten is the following code, where the CSV downloads fine, but the PDF is not a valid PDF anymore:
CSV:
#set($contentDisposition = "attachment;filename=${method.request.querystring.file_name}")
$input.body
#set($context.responseOverride.header.Content-Disposition = $contentDisposition)
PDF:
#set($contentDisposition = "attachment;filename=${method.request.querystring.file_name}")
$util.base64Encode($input.body)
#set($context.responseOverride.header.Content-Disposition = $contentDisposition)
where contentHandling = CONVERT_TO_TEXT. My binaryMediaTypes just has application/pdf and that's it. My goal is to get this working without having to offload the problem into a Lambda so we don't have that overhead at the download step. Any ideas how to do this right?
Just as another comment, I've tried CONVERT_TO_BINARY and just leaving it as Passthrough. I've tried it with text/csv as another binary media type and I've tried different combinations of encoding and decoding base64 and stuff. I know the data is coming back right from S3, but the transformation is where it's breaking. I am happy to post more logs if need be. Also, I'm pretty sure this makes sense on StackOverflow, but if it would fit in another StackExchange site better, please let me know.
Resources I've looked at:
https://docs.aws.amazon.com/apigateway/latest/developerguide/request-response-data-mappings.html
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html#util-template-reference
https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-payload-encodings-workflow.html
https://docs.amazonaws.cn/en_us/apigateway/latest/developerguide/api-gateway-payload-encodings-configure-with-control-service-api.html
(But they're all so confusing...)
EDIT: One idea I've had is to do CONVERT_TO_BINARY and somehow base64-encode the CSVs in the transformation, but I can't figure out how to do it right. I keep feeling like I'm misunderstanding the order of things, specifically when the "CONVERT" part happens. If that makes any sense.
EDIT 2: So, I got rid of the $util.base64Encode in the PDF one, and now I have a PDF that's empty. The actual file in S3 definitely has things in it, but for some reason CONVERT_TO_TEXT is not handling it right, or I'm still not understanding how this all works.
I had similar issues. One major thing is the Accept header. I was testing in Chrome, which sends the Accept header as text/html,application/xhtml.... API Gateway ignores everything except the first one (text/html). It will then convert any response from S3 to base64 to try to conform to text/html.
At last, after trying everything else, I tried via Postman, which defaults the Accept header to */*. Also set your content handling on the integration response to Passthrough. And everything was working!
One other thing is to pass the Content-Type and Content-Length headers through (add them in the method response first and then in the integration response):
Content-Length integration.response.header.Content-Length
Content-Type integration.response.header.Content-Type
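If it helps, here is a minimal sketch (TypeScript, AWS SDK v3) of wiring up those two header mappings programmatically; the API ID, resource ID, and region are placeholders, and it assumes the method/integration responses don't already exist. Leaving contentHandling unset on the integration response gives the Passthrough behaviour mentioned above.
import {
  APIGatewayClient,
  PutMethodResponseCommand,
  PutIntegrationResponseCommand,
} from "@aws-sdk/client-api-gateway";

const client = new APIGatewayClient({ region: "us-east-1" }); // placeholder region

const restApiId = "abc123";  // placeholder REST API ID
const resourceId = "def456"; // placeholder resource ID for the GET download route

// 1. Declare the headers on the method response.
await client.send(new PutMethodResponseCommand({
  restApiId,
  resourceId,
  httpMethod: "GET",
  statusCode: "200",
  responseParameters: {
    "method.response.header.Content-Type": false,
    "method.response.header.Content-Length": false,
  },
}));

// 2. Map them from the S3 integration response; contentHandling is left
//    unset so the payload passes through untouched.
await client.send(new PutIntegrationResponseCommand({
  restApiId,
  resourceId,
  httpMethod: "GET",
  statusCode: "200",
  responseParameters: {
    "method.response.header.Content-Type": "integration.response.header.Content-Type",
    "method.response.header.Content-Length": "integration.response.header.Content-Length",
  },
}));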

Get metadata info without downloading the complete file

As I read the different posts here and the libtorrent documentation, I know (as documented) that I have to download the torrent file in order to get the metadata. But consider how the uTorrent app works: when I just start downloading, I get the metadata within a second, and after getting the metadata, I can pause the download. So it doesn't force me to download the complete file in order to return the metadata.
So, is there a way to get the metadata without downloading the complete file?
libtorrent's metadata_received_alert is what you want. This will be sent once the metadata finishes downloading. Do make sure that you're receiving status notifications, though.

WireMock returns an image that's corrupt

I've recorded a mock through WireMock that contains an image in the body. When I try to get the stub using Postman the response back is an image that won't load and the size of the content is roughly 20-50% larger than when I get the same image from the production server. In Google Chrome it says Resource interpreted as Document but transferred with MIME type image/jpeg.
I can't tell if this is an underlying issue with Jetty or WireMock. I read some related chatter on the user group about images being returned incorrectly, but I've tried the suggestion of removing the mapping stub and just keeping the __file - no luck. This seems like an encoding issue, but I don't know how to debug it further.
If you can hang in there until next week, we're putting the finishing touches on a brand new recorder, and I've been specifically working through the encoding issues the current recorder suffers from.
Meanwhile, you might want to try turning off gzip in your client code.

Display ".doc" ".docx" in browser

My users can upload their CV, and this CV should be viewable by any employer.
My problem is that my client wants the CV to appear in the web browser without any download.
PDFs work fine, but .doc and .docx don't.
I've tried both gems ("docx" and "doc_ripper"), but each one can only handle basic things (tables won't work...).
The CV is attached to a user and stored on Amazon with Dragonfly.
I've tried the Google viewer: http://googlesystem.blogspot.be/2009/09/embeddable-google-document-viewer.html
But when I do: user.cv_file.remote_url(expires: 5.minutes.from_now)
the URL doesn't work anymore (this solution only works if the document is public).
I thought about adding a second field that holds the cv_file converted to a PDF if it isn't one already.
Is there any way to give public permission to an AWS file for 2-3 minutes (time to render it with the Google viewer tool)?
Thanks.
I assume you are talking about a file stored on S3. To make a file on S3 temporarily public, you can generate a pre-signed URL with an expiration date/time: http://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html
I've used the gem htmltoword a few times now and it's done a good job from that end of the translations.
I did a quick search and there are a few promising gems that might help you out here - converting the resumes from Word (.doc, .docx) into a format that you can get to HTML for your views (perhaps storing this converted content in a DB table/column?).
Word docx_converter
Google Groups discussion of the issue
ydocx
docx
Thanks for answering, but after much research, I finally found:
https://view.officeapps.live.com/op/view.aspx?src=
which works as well as the browser's PDF reader.
Be sure to have a public URL for the file you want to display.
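Putting the two answers together, here is a minimal sketch (TypeScript with the AWS SDK v3, rather than the question's Rails/Dragonfly stack; the bucket, key, region, and expiry are placeholders) of generating a short-lived public URL and handing it to the Office viewer:
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "eu-west-1" }); // placeholder region

// Placeholder bucket/key for the uploaded CV.
const command = new GetObjectCommand({ Bucket: "my-cv-bucket", Key: "cvs/candidate.docx" });

// Pre-signed URL that stays valid for about 3 minutes, as suggested above.
const signedUrl = await getSignedUrl(s3, command, { expiresIn: 180 });

// Hand the temporarily-public URL to the Office viewer.
const viewerUrl =
  "https://view.officeapps.live.com/op/view.aspx?src=" + encodeURIComponent(signedUrl);
The same idea should work from Rails: generate the expiring URL with Dragonfly's remote_url (as in the question) and append it, URL-encoded, to the viewer address.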