I have a video file stored in my S3 bucket. I want to take that file as the input, do some processing with FFMPEG and directly upload it to S3. How can I do this?
You can either do it yourself, or use Amazon Elastic Transcoder.
If you wish to use FFMPEG yourself, you will need somewhere to run it. This could be on an Amazon EC2 instance, or on your own computer. Your program or script would need to download the video from the S3 bucket, process it with FFMPEG and then upload the resulting file to S3.
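As a minimal sketch of that flow (bucket, key, and file names below are placeholders, not from the question), the download/process/upload steps could look like this with boto3 and a local FFmpeg install:

```python
import subprocess

def build_ffmpeg_cmd(src, dst):
    """A basic FFmpeg transcode command: re-encode video to H.264, copy audio."""
    return ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-c:a", "copy", dst]

def transcode_s3_object(bucket, key, out_key,
                        local_in="/tmp/input.mp4", local_out="/tmp/output.mp4"):
    import boto3  # AWS SDK for Python
    s3 = boto3.client("s3")
    s3.download_file(bucket, key, local_in)                             # 1. download from S3
    subprocess.run(build_ffmpeg_cmd(local_in, local_out), check=True)   # 2. process with FFmpeg
    s3.upload_file(local_out, bucket, out_key)                          # 3. upload result to S3

# Example call (placeholder names):
# transcode_s3_object("my-bucket", "videos/in.mp4", "videos/out.mp4")
```

The same script works on EC2 or your own machine; on EC2 the S3 transfers are much faster.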
Or, you could use Amazon Elastic Transcoder to transcode the file. It is not FFMPEG, but it has many of the same capabilities. It can read the video directly from S3 and output the result to S3. Pricing is based on the length of the input video file (e.g. 3¢ per minute).
Actually, a newer product called AWS Elemental MediaConvert is also an option. It is a professional video system used by the broadcasting industry, so it has a lot more options.
Related
I want to extract one frame or screenshot from a video stored in S3 at a specific time. What can I use to do this?
Lambda functions
Using the SDK
Amazon Elastic Transcoder has the ability to create videos from source files. For example, it can stitch together multiple videos, or extract a portion of video(s).
Elastic Transcoder also has the ability to generate thumbnails of videos that it is processing.
Thus, you should be able to:
Create a job in Elastic Transcoder to create a very short-duration video from the desired time in the source video
Configure it to output a thumbnail of the new video to Amazon S3
You can then dispose of the video (configure S3 to delete it after a day) and just use the thumbnail.
Please note that Elastic Transcoder works asynchronously, so you would create a Job to trigger the above activities, then come back later to retrieve the results.
The benefit of the above method is that there is no need to download or process the video file on your own Amazon EC2 instance. It is all done within Elastic Transcoder.
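A rough sketch of such a job with boto3 (the pipeline ID, output names, and preset ID below are placeholders; `1351620000001-000010` is assumed to be a generic system preset, so check your own console for the real IDs):

```python
def build_clip_job(pipeline_id, input_key, start_time, output_key):
    """Job spec for a 1-second clip at start_time, plus a thumbnail of that clip."""
    return {
        "PipelineId": pipeline_id,
        "Input": {
            "Key": input_key,
            # TimeSpan clips the input to the moment you care about
            "TimeSpan": {"StartTime": start_time, "Duration": "00:00:01.000"},
        },
        "Outputs": [{
            "Key": output_key,
            "PresetId": "1351620000001-000010",  # assumed generic system preset
            # {count} is expanded by Elastic Transcoder for each thumbnail
            "ThumbnailPattern": output_key + "-{count}",
        }],
    }

def submit_clip_job(pipeline_id, input_key, start_time, output_key):
    import boto3  # AWS SDK for Python
    et = boto3.client("elastictranscoder")
    return et.create_job(**build_clip_job(pipeline_id, input_key, start_time, output_key))
```

Since the job runs asynchronously, you would poll the job status (or use a notification) before fetching the thumbnail from S3.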
The AWS SDK does not have an API that extracts pictures from a video. You can use AWS to analyze videos, such as with the Amazon Rekognition service. For example:
Creating AWS video analyzer applications using the AWS SDK for Java
You can use Amazon Rekognition to detect faces, objects, and text in videos. For example, this example detects text in a video:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/VideoDetectText.java
The Amazon S3 API has many operations, but extracting a pic from a video is not one of them. You can get an input stream of an object located in a bucket.
To extract a pic from a video, you would need to use a 3rd party API.
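One possible approach, sketched below under assumptions (boto3 plus a local FFmpeg install; bucket and key names are placeholders), is to generate a presigned URL for the S3 object and let FFmpeg read the frame directly over HTTPS, with no full download:

```python
import subprocess

def frame_extract_cmd(url, timestamp, out_png):
    # -ss before -i seeks to the timestamp; -frames:v 1 grabs a single frame
    return ["ffmpeg", "-ss", timestamp, "-i", url, "-frames:v", "1", out_png]

def extract_frame_from_s3(bucket, key, timestamp, out_png):
    import boto3  # AWS SDK for Python
    s3 = boto3.client("s3")
    # Presigned URL lets FFmpeg fetch the object over HTTPS without credentials
    url = s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600)
    subprocess.run(frame_extract_cmd(url, timestamp, out_png), check=True)
```

This would also fit inside a Lambda function, provided an FFmpeg binary is bundled (e.g. via a Lambda layer).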
I am trying to create an Amazon Transcribe job with an input file located on S3. The size of the file is 4.3 GB, and when I try to create the job it shows me this error.
What can I do with my video? Should I convert the video, extract the audio, or is there any way to do it with some AWS service?
From Amazon Transcribe FAQs – Amazon Web Services (AWS):
What kind of inputs does Amazon Transcribe support?
Amazon Transcribe supports both 16 kHz and 8 kHz audio streams, and multiple audio encodings, including WAV, MP3, MP4 and FLAC.
Since Amazon Transcribe is a service for converting speech into text, it would be best to provide it with an audio file rather than a video.
If you convert that 4.3 GB video into an audio file, it will probably be small enough to use with Amazon Transcribe. If you require a service to perform that conversion, you could use Amazon Elastic Transcoder.
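If you would rather do the conversion yourself, a rough sketch with FFmpeg (invoked from Python here; the filenames and bitrate are placeholders) would be to drop the video stream and keep only the audio:

```python
import subprocess

def audio_extract_cmd(video_in, audio_out):
    # -vn drops the video stream; MP3 is on Transcribe's supported-format list
    return ["ffmpeg", "-i", video_in, "-vn", "-c:a", "libmp3lame", "-b:a", "128k", audio_out]

def extract_audio(video_in, audio_out):
    subprocess.run(audio_extract_cmd(video_in, audio_out), check=True)

# Example call (placeholder names):
# extract_audio("big_video.mp4", "audio.mp3")
```

A 128 kbps MP3 of a multi-hour video is typically a few hundred MB at most, well under the original 4.3 GB.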
What I want: To add watermarks to all video files that are uploaded to the S3 bucket (mov, mp4, etc.), then overwrite each file, keeping its same name, with the newly transcoded file that has the watermark on it.
So, I was able to do this manually by creating a pipeline and job with Elastic Transcoder, but that is a manual process. I want this done the moment a file is uploaded to the bucket: overwrite the file with the new file, and boom.
One, this should be a built-in feature already, and I'm not sure why it isn't.
And two, how can I have this done automatically? Any advice? I know it's possible, I'm just not sure exactly where to start.
You need an S3 bucket and a Lambda function along with your transcoder pipeline.
Elastic Transcoder is the backbone of the process.
To automate transcoding, create a Lambda function that is triggered by an S3 event.
A more detailed explanation is here.
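A rough sketch of such a Lambda handler (the pipeline ID, preset ID, and watermark ID below are placeholders that would have to match a pipeline and preset you actually created):

```python
import urllib.parse

PIPELINE_ID = "1111111111111-abcde1"  # placeholder pipeline ID

def job_from_event(event, pipeline_id):
    """Map the first record of an S3 PUT event to an Elastic Transcoder job spec."""
    key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"])
    return {
        "PipelineId": pipeline_id,
        "Input": {"Key": key},
        "Outputs": [{
            # Same key, but the pipeline's output bucket must differ from the
            # input bucket, or the new upload would re-trigger this function
            "Key": key,
            "PresetId": "1351620000001-000010",  # assumed preset ID
            # PresetWatermarkId must match a watermark defined in the preset
            "Watermarks": [{"PresetWatermarkId": "TopRight", "InputKey": "logo.png"}],
        }],
    }

def lambda_handler(event, context):
    import boto3  # AWS SDK for Python, available in the Lambda runtime
    et = boto3.client("elastictranscoder")
    return et.create_job(**job_from_event(event, PIPELINE_ID))
```

Note the output-bucket caveat in the comment: writing the result back to the same bucket and key that triggers the function creates an infinite loop, which is one reason "overwrite in place" is not offered as a one-click feature.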
I'm using AWS S3 to store my video and audio files. Before I upload any video file, I convert it into different resolutions on my laptop using ffmpeg, and then I upload those files to my AWS S3 account. I want to know if it's possible to convert a video file already stored on AWS S3 to different resolutions. That is, for the conversion to happen on AWS.
I'm using ffmpeg command in CMD: ffmpeg -i video.mp4 -s 256x144 -c:a copy video_144p.mp4
You can set up a transcoding pipeline with AWS Elastic Transcoder. It allows you to take objects from one S3 bucket, transcode the objects (change frame rate, resolution, etc.), and put the altered versions in a different S3 bucket. You could set it up to transcode to multiple different resolutions outputted to different S3 buckets if you wanted.
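As an illustrative sketch with boto3 (the pipeline and preset IDs are placeholders; the real IDs come from your Elastic Transcoder console, where each resolution has its own preset):

```python
def multi_resolution_job(pipeline_id, input_key, presets):
    """One input, one output per (preset_id, suffix) pair, e.g. 360p, 480p."""
    base = input_key.rsplit(".", 1)[0]
    return {
        "PipelineId": pipeline_id,
        "Input": {"Key": input_key},
        "Outputs": [
            {"Key": f"{base}_{suffix}.mp4", "PresetId": preset_id}
            for preset_id, suffix in presets
        ],
    }

def submit(pipeline_id, input_key, presets):
    import boto3  # AWS SDK for Python
    return boto3.client("elastictranscoder").create_job(
        **multi_resolution_job(pipeline_id, input_key, presets))
```

The `video_144p`-style suffix mirrors the naming you already use with ffmpeg, so the S3 layout stays familiar.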
Is there any way to upload 50,000 image files to an Amazon S3 bucket? The 50,000 image file URLs are saved in a .txt file. Can someone please tell me a good way to do this?
It sounds like your requirement is: For each image URL listed in a text file, copy the images to an Amazon S3 bucket.
There is no in-built capability with Amazon S3 to do this. Instead, you would need to write an app that:
Reads the text file and, for each URL
Downloads the image
Uploads the image to Amazon S3
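A rough sketch of such an app in Python with boto3 (deriving the S3 key from the URL path is an assumption; any naming scheme would do):

```python
import urllib.parse
import urllib.request

def key_from_url(url):
    """Derive an S3 key from the URL's path, e.g. .../img/a.jpg -> img/a.jpg."""
    return urllib.parse.urlparse(url).path.lstrip("/")

def copy_urls_to_s3(list_file, bucket):
    import boto3  # AWS SDK for Python
    s3 = boto3.client("s3")
    with open(list_file) as f:
        for url in (line.strip() for line in f if line.strip()):
            with urllib.request.urlopen(url) as resp:              # download the image
                s3.upload_fileobj(resp, bucket, key_from_url(url))  # stream it to S3
```

Note that `upload_fileobj` streams the HTTP response straight to S3, so the images never need to touch the local disk; for 50,000 files you would also want to parallelize this loop with threads.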
Doing this on an Amazon EC2 instance would be fast, due to the low latency between EC2 and S3.
You could also get fancy and do it via Amazon EMR. That would be the fastest option due to parallel processing, but it would require knowledge of how to use Hadoop.
If you have a local copy of the images, you could order an AWS Snowball and use it to transfer the files to Amazon S3. However, it would probably be faster just to copy the files over the Internet (rough guess... at 1MB per file, total volume is 50GB).