I am trying to understand how to store video files. I know I can store .mp4 files on Google Cloud Storage, but I have had a hard time getting my application to stream them.
I have found video URLs like:
http://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4
versus the URL for the file on Cloud Storage, which presumably refers to the .mp4 I uploaded (right?):
https://firebasestorage.googleapis.com/v0/b/packfeed-e027b.appspot.com/o/Stories%2F0%2FM41WiOceiQTs3ELETIT5evcfsJm1_1520646187885.mp4?alt=media&token=201a831b-c239-4563-8178-cec3c4567212
Is there a difference between these two URLs? One points directly to the .mp4, while the other is a "download link".
Are there any options in Google Cloud Platform to store files like this?
Your first link points to a file stored on the clips.vorwaerts-gmbh.de server. The second link points to a file stored on Google Cloud Storage.
You can upload your files to Google Cloud Storage, then share a file publicly by checking its "Share publicly" box. The "Public link" that appears will be the publicly available link, similar to the second link you posted.
https://cloud.google.com/storage/docs/access-control/making-data-public#objects
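If you'd rather do this in code than in the console, here's a minimal sketch using the google-cloud-storage Python client (the bucket and object names are hypothetical):

    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("my-bucket")        # hypothetical bucket
    blob = bucket.blob("videos/my_video.mp4")  # hypothetical object

    # Equivalent to checking "Share publicly" in the console
    blob.make_public()

    # e.g. https://storage.googleapis.com/my-bucket/videos/my_video.mp4
    print(blob.public_url)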
Related
I am receiving CSV files from different users (from the same organisation) over Microsoft Teams. I have to download each file and import it into a bucket on Google Cloud Storage.
What would be the most efficient way to store those files directly into Google Cloud Storage every time I receive a file from a given user over Teams? The files must come in through Microsoft Teams.
I was thinking of triggering a Cloud Run service from Pub/Sub, but I am a bit confused about how to connect this with Teams.
I imagine you should be able to do this fine using Power Automate, though it might depend on how you're receiving the files (for instance, are users sending them to you directly one-to-one, or uploading them into a Files tab in a specific Team/Channel).
Here's an example template for moving files from OneDrive for Business to Google Drive, which sounds like it should help: https://flow.microsoft.com/en-us/galleries/public/templates/02057296acac46e9923e8a842ab9911d/sync-onedrive-for-business-files-to-google-drive-files/
I'm trying to get the file that was uploaded to Google Cloud Storage, do some work with its content, and move it to a different bucket using Google Cloud Functions with Python 3.7. Following their documentation, I was only able to get the file name. I tried import cloudstorage, but it fails with module 'cloudstorage' has no attribute 'NotFoundError', and googling did not get me anywhere.
Does anyone have sample code that does what I need?
The cloudstorage library is specific to the Standard environment of App Engine.
A library compatible with Cloud Storage would be google-cloud-storage. You must declare it in your requirements.txt file for your function.
This example of how to copy from one bucket to another should suffice. After copying, you can simply call source_blob.delete() to get rid of the original.
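Putting it together, a minimal sketch assuming the google-cloud-storage package (bucket and object names are hypothetical; on older library versions download_as_bytes() is called download_as_string()):

    from google.cloud import storage

    client = storage.Client()
    source_bucket = client.bucket("source-bucket")
    destination_bucket = client.bucket("destination-bucket")
    source_blob = source_bucket.blob("uploads/input.csv")

    # Read the object's content to do some work with it
    content = source_blob.download_as_bytes()
    # ... process content ...

    # Copy the object to the destination bucket, then delete the original
    source_bucket.copy_blob(source_blob, destination_bucket, "processed/input.csv")
    source_blob.delete()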
I want to view files such as Excel or ZIP files (or any other format) in the browser without having them downloaded.
I am able to display image and PDF files in the browser, but unable to view other formats such as ZIP or XLS.
I am storing my files in S3.
What should I do?
Web browsers are not able to natively display most file types. They can render HTML and can display certain types of images (eg JPG, PNG), but only after these files are actually downloaded to your computer.
The same goes for PDFs -- they are downloaded, then a browser plug-in renders the content.
When viewing files (eg Excel spreadsheets and PDF files) within services like Gmail and Google Drive, the files are typically converted into images on the server end, and those images are sent to your computer. Amazon S3 is purely a storage service and does not offer a conversion service like this.
Zip files are a method of compressing files and also storing multiple files within a single archive file. Some web services might offer the ability to list files within a Zip, but again Amazon S3 is purely a storage service and does not offer this capability.
To answer your "What should I do?" question, some options are:
Download the files to your computer to view them, or
Use a storage service that offers these capabilities (many of which store the actual files in Amazon S3, but add additional services to convert the files for viewing online)
I might be a bit too late but did you try Filestash? (I made it)
That's what it looks like when you open an XLS document on S3.
I aim to support all the common formats, and the list of already-supported ones is rather big.
So far, to get data from the bucket, I use download_to_file() to download it onto the instance the job is running on and access the files/folders locally. What I want to achieve is being able to just read from the cloud. How can I go about doing that? There doesn't seem to be a way for me to create a relative path from the ML job instance to the Google Cloud bucket.
You can use TensorFlow's file_io.FileIO class to create file-like objects to read/write files on GCS, local disk, or any other supported file system.
See this post for some examples.
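As a minimal sketch (hypothetical bucket and object names; this is the TF 1.x import path, while in TF 2.x the same functionality lives under tf.io.gfile):

    from tensorflow.python.lib.io import file_io

    # Read an object on GCS as if it were a local file
    with file_io.FileIO("gs://my-bucket/data/train.csv", mode="r") as f:
        contents = f.read()

    # The same helpers work across gs://, local paths, etc.
    for name in file_io.get_matching_files("gs://my-bucket/data/*.csv"):
        print(name)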
I am trying to stream a video with HLSv4. I am using AWS Elastic Transcoder and S3 to convert the original file (eg. *.avi or *.mp4) to HLSv4.
Transcoding is successful, with several *.ts and *.aac (with accompanying *.m3u8 playlist files for each media file) and a master *.m3u8 playlist file linking to the media-file specific playlist files. I feel fairly comfortable everything is in order here.
Now the trouble: this is a membership site and I would like to avoid making every video file public. The typical way to do this with S3 is to generate temporary keys server-side, which you append to the URL. Trouble is, that changes the URLs to the media files and their playlists, so the existing *.m3u8 playlists (which reference the other playlists and media) do not contain these keys.
One option which occurred to me would be to generate these playlists on the fly as they are just text files. The obvious trouble is overhead, it seems hacky, and these posts were discouraging: https://forums.aws.amazon.com/message.jspa?messageID=529189, https://forums.aws.amazon.com/message.jspa?messageID=508365
After spending some time on this, I feel like I'm going around in circles and there doesn't seem to be a super clear explanation anywhere for how to do this.
So as of September 2015, what is the best way to use AWS Elastic Transcoder and S3 to stream HLSv4 without making your content public? Any help is greatly appreciated!
EDIT: Reposting my comment below with formatting...
Thank you for your reply, it's very helpful.
The plan that's forming in my head is to keep the converted ts and aac files on S3, but generate the 6-8 m3u8 files plus the master playlist and serve them directly from the app server. So the user hits the "Play" page and jwplayer gets the master playlist from the app server (eg "/play/12/"). Server side, this loads the m3u8 files from S3 into memory and search-and-replaces the media-specific m3u8 links so they point to S3 with a freshly generated URL token.
So user-->jwplayer-->local master m3u8 (verify auth server side)-->local media m3u8s (verify auth server side)-->s3 media files (accessed with signed URLs and temporary tokens)
Do you see any issues with this approach? Such as "you can't reference external media from a playlist" or something similarly Catch-22-ish?
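To make that concrete, here's a rough sketch of the rewrite step for a media playlist (boto3, with hypothetical bucket/key names; the master playlist would be rewritten the same way, except its entries point back at the app server's own /play/... endpoints):

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-video-bucket"  # hypothetical

    def signed_media_playlist(key: str, expires_in: int = 3600) -> str:
        # Fetch an m3u8 from S3 and presign each segment URI
        body = s3.get_object(Bucket=BUCKET, Key=key)["Body"].read().decode("utf-8")
        prefix = key.rsplit("/", 1)[0]
        out = []
        for line in body.splitlines():
            # Non-comment, non-blank lines in a media playlist are segment URIs
            if line and not line.startswith("#"):
                line = s3.generate_presigned_url(
                    "get_object",
                    Params={"Bucket": BUCKET, "Key": f"{prefix}/{line}"},
                    ExpiresIn=expires_in,
                )
            out.append(line)
        return "\n".join(out)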
Dynamically generated playlists are one way to go. I actually implemented something like this as an Nginx module and it works very fast, though it's written in C and compiled, not PHP.
The person in your first link is more likely having issues because of their 1-second chunk duration, which adds a lot of requests and overhead; the value recommended by Apple is 10 seconds.
There are solutions like HLS encrypted with AES-128 (supported by Elastic Transcoder), which also adds overhead if you do it on the fly, and HLS with DRM such as PHLS/Primetime, which will most likely get you into a lot of trouble on the client side.
There seems to be a way to do it with Amazon CloudFront. Please note that I haven't tried it personally and you need to check if it works on Android/iOS.
The idea is to use Signed Cookies instead of Signed URLs. They were apparently introduced in March 2015. The linked blog entry even uses HLS as an example.
Instead of dynamic URLs, you send a Set-Cookie header after you authenticate the user. The cookie (hopefully) gets passed along with every request (playlist and segments), and CloudFront decides whether to allow access to your S3 bucket.
You can find the documentation here:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html
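For generating the cookies themselves, here's a rough, untested sketch in Python using the rsa package (the key pair ID, key path, and resource URL are all placeholders):

    import base64
    import json
    import time

    import rsa  # pip install rsa

    KEY_PAIR_ID = "APKAEXAMPLE"  # placeholder CloudFront key pair ID
    with open("cloudfront_private_key.pem", "rb") as f:  # placeholder path
        PRIVATE_KEY = rsa.PrivateKey.load_pkcs1(f.read())

    def _cf_b64(data: bytes) -> str:
        # CloudFront uses its own URL-safe base64 variant
        return (base64.b64encode(data).decode("ascii")
                .replace("+", "-").replace("=", "_").replace("/", "~"))

    def signed_cookies(resource: str, expires_in: int = 3600) -> dict:
        # Build the three cookies for a CloudFront custom policy
        policy = json.dumps({
            "Statement": [{
                "Resource": resource,
                "Condition": {
                    "DateLessThan": {"AWS:EpochTime": int(time.time()) + expires_in}
                },
            }]
        }, separators=(",", ":"))
        signature = rsa.sign(policy.encode("utf-8"), PRIVATE_KEY, "SHA-1")
        return {
            "CloudFront-Policy": _cf_b64(policy.encode("utf-8")),
            "CloudFront-Signature": _cf_b64(signature),
            "CloudFront-Key-Pair-Id": KEY_PAIR_ID,
        }

    # After authenticating the user, set these as Set-Cookie headers on your
    # domain; the player then sends them with every playlist and segment request.
    cookies = signed_cookies("https://d111111abcdef8.cloudfront.net/videos/12/*")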