How to upload real-time video to an AWS S3 bucket

I am trying to upload a real-time video stream from a video camera to an AWS S3 bucket. Assume that the camera settings, such as the target IP or domain, are fully controllable. However, I don't know how to implement this with AWS services. Any advice would be appreciated. Thanks in advance.

In AWS you can achieve this by using Amazon Kinesis Video Streams.
You can use the GStreamer Docker image to stream video from the camera to Kinesis Video Streams. Have a look at the docs here.
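If it helps, here is a minimal boto3 sketch of creating the Kinesis video stream that the camera-side GStreamer plugin would publish to; the stream name, region, and retention period are example values, not anything prescribed by the docs.

```python
import boto3

# Create a Kinesis video stream that the camera (via the GStreamer plugin /
# Docker image) can publish to. Name and retention period are placeholders.
kvs = boto3.client("kinesisvideo", region_name="us-east-1")

response = kvs.create_stream(
    StreamName="my-camera-stream",   # hypothetical stream name
    DataRetentionInHours=24,         # keep fragments for one day
)
print("Stream ARN:", response["StreamARN"])
```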

Related

FFMPEG upload output to S3

I have a video file stored in my S3 bucket. I want to take that file as the input, do some processing with FFmpeg, and upload the result directly to S3. How can I do this?
You can either do it yourself, or use Amazon Elastic Transcoder.
If you wish to use FFMPEG yourself, you will need somewhere to run it. This could be on an Amazon EC2 instance, or on your own computer. Your program or script would need to download the video from the S3 bucket, process it with FFMPEG and then upload the resulting file to S3.
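As a rough sketch of the do-it-yourself route (bucket names, object keys, and the FFmpeg arguments are placeholders; it assumes ffmpeg is installed and AWS credentials are configured for boto3):

```python
import subprocess
import boto3

s3 = boto3.client("s3")
bucket = "my-video-bucket"  # hypothetical bucket

# 1. Download the source video from S3
s3.download_file(bucket, "input/source.mp4", "/tmp/source.mp4")

# 2. Process it with FFmpeg (re-encoding to H.264 here, as an example)
subprocess.run(
    ["ffmpeg", "-y", "-i", "/tmp/source.mp4", "-c:v", "libx264", "/tmp/output.mp4"],
    check=True,
)

# 3. Upload the result back to S3
s3.upload_file("/tmp/output.mp4", bucket, "output/output.mp4")
```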
Or, you could use Amazon Elastic Transcoder to transcode the file. It is not FFmpeg, but it has many of the same capabilities. It can read the video directly from S3 and output the result to S3. Pricing is based on the length of the input video file (e.g. 3¢ per minute).
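A minimal boto3 sketch of submitting such a job, assuming you have already created an Elastic Transcoder pipeline that points at your input and output buckets (the pipeline ID and preset ID below are placeholders):

```python
import boto3

et = boto3.client("elastictranscoder", region_name="us-east-1")

job = et.create_job(
    PipelineId="1111111111111-abcde1",       # hypothetical pipeline ID
    Input={"Key": "input/source.mp4"},
    Output={
        "Key": "output/transcoded.mp4",
        "PresetId": "1351620000001-000001",  # example: an AWS system preset
    },
)
print("Job ID:", job["Job"]["Id"])
```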
Actually, a newer product called AWS Elemental MediaConvert is also an option. It is a professional video system used by the broadcasting industry, so it has a lot more options.

Extract picture from video stored on S3

I want to extract one frame or screenshot from a video stored in S3 at a specific time. What can I use to do this?
Suggested approaches: Lambda functions, or using the SDK.
Amazon Elastic Transcoder has the ability to create videos from source files. For example, it can stitch together multiple videos, or extract a portion of a video.
Elastic Transcoder also has the ability to generate thumbnails of videos that it is processing.
Thus, you should be able to:
Create a job in Elastic Transcoder to create a very short-duration video from the desired time in the source video
Configure it to output a thumbnail of the new video to Amazon S3
You can then dispose of the video (configure S3 to delete it after a day) and just use the thumbnail.
Please note that Elastic Transcoder works asynchronously, so you would create a Job to trigger the above activities, then come back later to retrieve the results.
The benefit of the above method is that there is no need to download or process the video file on your own Amazon EC2 instance. It is all done within Elastic Transcoder.
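A rough boto3 sketch of that approach; the pipeline ID, preset ID, object keys, and timestamps are placeholders, and the thumbnail size and format come from the preset's thumbnail settings:

```python
import boto3

et = boto3.client("elastictranscoder", region_name="us-east-1")

# Create a very short clip starting at the desired timestamp and ask
# Elastic Transcoder to emit a thumbnail of it alongside the output.
job = et.create_job(
    PipelineId="1111111111111-abcde1",       # hypothetical pipeline ID
    Input={
        "Key": "videos/source.mp4",
        # Clip ~1 second of video starting at 00:01:30
        "TimeSpan": {"StartTime": "00:01:30.000", "Duration": "1.000"},
    },
    Output={
        "Key": "clips/frame-clip.mp4",       # short clip you can delete later
        "PresetId": "1351620000001-000001",  # example system preset
        # {count} is required in the pattern; thumbnails land in the output bucket
        "ThumbnailPattern": "thumbnails/frame-{count}",
    },
)
print("Job ID:", job["Job"]["Id"])
```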
The AWS SDK does not have an API that extracts pictures from a video. You can, however, use AWS services such as Amazon Rekognition to analyze videos. For example:
Creating AWS video analyzer applications using the AWS SDK for Java
You can use Amazon Rekognition to detect faces, objects, and text in videos. For example, this sample detects text in a video:
https://github.com/awsdocs/aws-doc-sdk-examples/blob/master/javav2/example_code/rekognition/src/main/java/com/example/rekognition/VideoDetectText.java
The Amazon S3 API has many operations, but extracting a picture from a video is not one of them. You can get an input stream of an object located in a bucket.
To extract a picture from a video, you would need to use a third-party API.
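For a stored (rather than streaming) video, a hedged boto3 sketch of the asynchronous text-detection flow mentioned above might look like this; the bucket and key are placeholders, and a production application would normally use an SNS notification channel instead of polling:

```python
import time
import boto3

rek = boto3.client("rekognition", region_name="us-east-1")

# Start an asynchronous text-detection job on a video stored in S3
start = rek.start_text_detection(
    Video={"S3Object": {"Bucket": "my-video-bucket", "Name": "videos/source.mp4"}}
)
job_id = start["JobId"]

# Poll until the job finishes, then read the detected text
while True:
    result = rek.get_text_detection(JobId=job_id)
    if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(5)

for detection in result.get("TextDetections", []):
    print(detection["Timestamp"], detection["TextDetection"]["DetectedText"])
```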

How to live stream an existing video stored in S3?

I upload a video to S3, use AWS MediaConvert to transcode it, and then deliver it to end users with MediaPackage (VOD). Now I need to live stream a video that is already available in S3.
I know about MediaLive, but the documentation says that MediaLive's input is a live stream source such as a camera or broadcast software. I'm not sure whether MediaLive accepts a video file in S3 as a source.
Please let me know how to solve this.

How to perform face recognition on a streaming video using Amazon Rekognition?

I am streaming video to Amazon Kinesis from a Raspberry Pi (this is done). Now I want to perform face detection/recognition on that video using Amazon Rekognition. How do I do this? Please explain in detail, with links. Thanks.
From Working with Streaming Videos - Amazon Rekognition:
You can use Amazon Rekognition Video to detect and recognize faces in streaming video. A typical use case is when you want to detect a known face in a video stream. Amazon Rekognition Video uses Amazon Kinesis Video Streams to receive and process a video stream. The analysis results are output from Amazon Rekognition Video to a Kinesis data stream and then read by your client application. Amazon Rekognition Video provides a stream processor (CreateStreamProcessor) that you can use to start and manage the analysis of streaming video.
In simple terms, your application sends video to Amazon Kinesis Video Streams. Amazon Rekognition Video then processes the stream on your behalf, and detected faces are delivered via an Amazon Kinesis data stream. You can write an application that consumes this stream and reacts to the detected faces. There is a delay of several seconds before the video is processed.
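A minimal boto3 sketch of wiring this up; all ARNs, the processor name, and the face collection ID are placeholders, and the IAM role must allow Rekognition to read the video stream and write to the data stream:

```python
import boto3

rek = boto3.client("rekognition", region_name="us-east-1")

# Connect a Kinesis video stream (input) to a Kinesis data stream (output)
# via a Rekognition stream processor that searches for known faces.
rek.create_stream_processor(
    Name="camera-face-search",
    Input={
        "KinesisVideoStream": {
            "Arn": "arn:aws:kinesisvideo:us-east-1:123456789012:stream/my-camera-stream/1234567890123"
        }
    },
    Output={
        "KinesisDataStream": {
            "Arn": "arn:aws:kinesis:us-east-1:123456789012:stream/face-results"
        }
    },
    RoleArn="arn:aws:iam::123456789012:role/RekognitionStreamProcessorRole",
    Settings={
        "FaceSearch": {"CollectionId": "my-face-collection", "FaceMatchThreshold": 80.0}
    },
)

# Start processing the live stream
rek.start_stream_processor(Name="camera-face-search")
```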

How to transfer video from a Kinesis video stream to AWS Rekognition and perform data analytics on that video?

I am streaming video from a Raspberry Pi to an Amazon Kinesis video stream (this part is done). Now I want to send the video to AWS Rekognition and perform face detection on the live video. Kindly answer in detail, with links. Thank you!
First, you need to set up a stream processor in Rekognition.
https://docs.aws.amazon.com/rekognition/latest/dg/streaming-video-starting-analysis.html
Once you have set this up, you can use these examples to draw bounding boxes around detected faces.
https://github.com/aws/amazon-kinesis-video-streams-parser-library#kinesisvideo---rekognition-examples
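As a rough illustration of consuming those results in Python (the stream name is a placeholder; a real consumer would typically use the Kinesis Client Library or the parser library above rather than this simple polling loop):

```python
import json
import time
import boto3

# Read the face-search results that the Rekognition stream processor writes
# to its output Kinesis data stream.
kinesis = boto3.client("kinesis", region_name="us-east-1")

shard_id = kinesis.describe_stream(StreamName="face-results")[
    "StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="face-results", ShardId=shard_id, ShardIteratorType="LATEST"
)["ShardIterator"]

while True:
    batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in batch["Records"]:
        result = json.loads(record["Data"])
        # Each record carries FaceSearchResponse entries with bounding boxes
        for face in result.get("FaceSearchResponse", []):
            box = face["DetectedFace"]["BoundingBox"]
            print("Face at", box, "matched faces:", len(face.get("MatchedFaces", [])))
    iterator = batch["NextShardIterator"]
    time.sleep(1)
```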