I'm trying to build an Android application that sends images taken by the camera on an Android tablet to AWS Rekognition. The intention is that pictures are sent directly to the AWS Rekognition service without needing an S3 bucket. The picture itself doesn't need to be saved in the cloud; only the face metadata needs to be stored on AWS in a collection. The ultimate goal is for a person to be able to capture his face again, and for AWS to report a match with a previous face in the collection.
There is a lot of information on the internet, but most of the time AWS suggests the Amplify framework, and I don't really know whether that is necessary in such a simple case.
I have already done all the steps in the AWS CLI (and they work), but I haven't managed to reproduce those steps in Android Studio. Below I describe the steps I have done in the AWS CLI. I would like to do the same steps in Android Studio, but I'm not a pro at programming in this language.
(I have already made a collection within the AWS CLI.)
First I index a face that AWS can find in a picture. In my AWS CLI code I use S3 as an example, but the intention is to send the picture directly to AWS Rekognition. This action only needs to happen when someone explicitly pushes a button: take a picture and send it to AWS Rekognition to index the face in a specific collection.
aws rekognition index-faces --image '{"S3Object":{"Bucket":"bucketName","Name":"picture1.jpg"}}' --collection-id "collectionName" --max-faces 1 --quality-filter "AUTO" --detection-attributes "DEFAULT" --external-image-id "picture1.jpg"
Then, when a user pushes another button, the app needs to take a picture again and send it to AWS Rekognition to search the collection with that image. I have already done this with the following AWS CLI command. Here too, the intention is to send the picture directly to AWS without needing S3. AWS returns a match with a face that is already in the collection.
aws rekognition search-faces-by-image --image '{"S3Object":{"Bucket":"bucketName","Name":"picture.jpg"}}' --collection-id "collectionName"
Again, I'm not a professional in Android Studio, so it would be very nice if someone has a fairly easy solution. It would also be very nice if someone could tell me whether the Amplify framework is really necessary.
Thanks in advance!
You don't have to use Amplify; you can use Rekognition through the AWS Java SDK.
To achieve the same functionality you're getting with the CLI, you can first index the face(s) in the collection using an IndexFacesRequest, or you can forgo this and populate the collection manually over the CLI if this is a one-time operation.
To search the collection's faces by image, you would simply need to modify the following code snippet to pass the image bytes directly (Image.withBytes()) instead of the S3 object reference. Full documentation for the searchFacesByImage() method is here.
AmazonRekognition client = AmazonRekognitionClientBuilder.standard().build();

SearchFacesByImageRequest request = new SearchFacesByImageRequest()
        .withCollectionId("myphotos")
        .withImage(new Image().withS3Object(new S3Object().withBucket("mybucket").withName("myphoto")))
        .withMaxFaces(5)
        .withFaceMatchThreshold(95f);

SearchFacesByImageResult response = client.searchFacesByImage(request);
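To show the "no S3 at all" variant the question asks for, here is a minimal sketch of the same two calls with raw image bytes. It uses Python and boto3 for brevity (the request shape is the same across SDKs; in the Java SDK you would use Image().withBytes(ByteBuffer)). The collection name and file path are placeholders, not values from the question:

```python
def build_index_request(collection_id, image_bytes, external_id):
    # IndexFaces request with raw bytes in place of an S3 object
    return {
        "CollectionId": collection_id,
        "Image": {"Bytes": image_bytes},
        "ExternalImageId": external_id,
        "MaxFaces": 1,
        "QualityFilter": "AUTO",
        "DetectionAttributes": ["DEFAULT"],
    }

def build_search_request(collection_id, image_bytes, max_faces=5, threshold=95.0):
    # SearchFacesByImage request with raw bytes
    return {
        "CollectionId": collection_id,
        "Image": {"Bytes": image_bytes},
        "MaxFaces": max_faces,
        "FaceMatchThreshold": threshold,
    }

def search_collection(collection_id, image_path):
    import boto3  # needs AWS credentials configured

    with open(image_path, "rb") as f:
        request = build_search_request(collection_id, f.read())
    return boto3.client("rekognition").search_faces_by_image(**request)
```

On Android you would take the bytes from the camera capture instead of a file; the SDK handles the Base64 encoding of the bytes for you.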
Related
I need to create an AWS container with ECS. However, I don't have programmatic access to push the image to the ECR repository, and I haven't found another way to create a container.
My question is: is there another way to create a container without programmatic access?
I found a way to upload the image to Amazon S3 (compressing the image into a .zip), but I don't know how to use the image after the upload.
I found a way to upload the image to Amazon S3 (compressing the image into a .zip)
That isn't going to work. You need to upload the image to ECR, not S3. ECS does not support loading images from S3.
Is there another way to create a container without programatic access?
You can't upload images to ECR without programmatic access; it isn't possible with only console (username/password) access.
Formal (Correct) way:
Probably a CodeBuild job that builds the image and pushes it to ECR, possibly wrapped up in CodePipeline
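For the CodeBuild route, the buildspec could look roughly like the following sketch (the $REPO_NAME and $ACCOUNT_ID environment variables are placeholders you would define on the CodeBuild project; AWS_REGION is provided by CodeBuild):

```yaml
version: 0.2
phases:
  pre_build:
    commands:
      # Authenticate Docker against your ECR registry
      - aws ecr get-login-password --region $AWS_REGION | docker login --username AWS --password-stdin $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
  build:
    commands:
      - docker build -t $REPO_NAME:latest .
      - docker tag $REPO_NAME:latest $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:latest
  post_build:
    commands:
      - docker push $ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com/$REPO_NAME:latest
```

The CodeBuild service role does the pushing, so no personal programmatic credentials are needed.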
Hacky way:
Maybe a Lambda that pulls the zip, unpacks the image, and pushes it to ECR? Definitely not a pattern you want to keep around long term, but it might get the job done.
I am new to AWS. Most examples I have seen need an input file name from an S3 bucket for MediaConvert. I want to automate this process. What is the best way to do it? I want to achieve the following:
An API to upload a video (mp4) to an S3 bucket.
Trigger a MediaConvert job to process the newly uploaded video and convert it to HLS.
I know how to create an API as well as a MediaConvert job. What I need help with is automating this workflow. How can I pass a recently uploaded video to a MediaConvert job dynamically?
I think this should actually cover what you're looking for, and is straight from the source:
https://aws.amazon.com/blogs/media/vod-automation-part-1-create-a-serverless-watchfolder-workflow-using-aws-elemental-mediaconvert/
Essentially, you'll be making use of AWS Lambda, a serverless code-execution product. Lambda works by hooking directly into "triggers" or events from within the AWS ecosystem (like uploading a file to S3).
The Lambda can then execute code in a number of supported languages, like JavaScript or Python, to run a MediaConvert job on the triggering object (the file uploaded to S3).
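The core of that trigger boils down to a small Lambda like the Python/boto3 sketch below. The IAM role ARN and job-template name are placeholders you would replace with your own; the template would hold your HLS output settings:

```python
from urllib.parse import unquote_plus

def input_uri_from_event(event):
    # Build the s3:// input URI from the triggering S3 ObjectCreated event
    record = event["Records"][0]["s3"]
    key = unquote_plus(record["object"]["key"])  # S3 event keys are URL-encoded
    return "s3://{}/{}".format(record["bucket"]["name"], key)

def handler(event, context):
    import boto3

    # MediaConvert requires an account-specific endpoint, discovered once here
    endpoint = boto3.client("mediaconvert").describe_endpoints()["Endpoints"][0]["Url"]
    mediaconvert = boto3.client("mediaconvert", endpoint_url=endpoint)
    return mediaconvert.create_job(
        Role="arn:aws:iam::123456789012:role/MediaConvertRole",  # placeholder
        JobTemplate="my-hls-template",                           # placeholder
        Settings={"Inputs": [{"FileInput": input_uri_from_event(event)}]},
    )
```

Wire the S3 bucket's ObjectCreated event to this function and every upload starts its own transcode.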
I'm using the Agora SDK and REST API to record live streams and upload them to AWS S3. The SDK comes with a snapshot feature that I'm planning to use for stream thumbnails.
Both cloud recording and snapshot recording are integrated with my app and work perfectly.
The remaining problem is that the snapshots are named down to the microsecond.
Agora snapshot services image file naming
From my overview, the services work as follows: the mobile app sends data to my server, and my server makes requests to the Agora API so it joins the live-stream channel, starts snapshotting, and saves the images to AWS. So I suppose it's impossible to have time synchronization between AWS, the Agora REST API, my server, and my mobile app.
I've gone through their docs and I can't find anything about retrieving the file names.
I was thinking maybe I could have a Lambda function that retrieves the last added file in a given bucket/folder, but due to my lack of knowledge of AWS and Lambda functions I don't know how to do that or whether it's even possible.
Any suggestions would be appreciated.
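For what it's worth, the "retrieve the last added file" idea doesn't strictly need a Lambda: the server that started the snapshotting can list the snapshot prefix and pick the newest object by LastModified. A rough Python/boto3 sketch, with made-up bucket and prefix names:

```python
def latest_key(objects):
    # Pick the Key of the object summary with the newest LastModified
    if not objects:
        return None
    return max(objects, key=lambda o: o["LastModified"])["Key"]

def latest_snapshot(bucket, prefix):
    import boto3

    s3 = boto3.client("s3")
    objects = []
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
        objects.extend(page.get("Contents", []))
    return latest_key(objects)
```

Note this is racy if two snapshots land close together; an S3 event notification that records the new key as it arrives would be more robust.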
I'm currently trying to process a number of images simultaneously using Custom Labels via Postman. I'm a business client with AWS and have been on hold for over 30 minutes to speak with an engineer, but because AWS customer support sucks, I'm asking the community if they can help. Rather than analyzing images one at a time, is there a way to analyze images all at once? Any help would be great; I really need it at this time.
Nick
I don't think there is a direct API or SDK from AWS for asynchronous image processing with Custom Labels.
But the right workaround here is to introduce an event-based architecture yourself.
You can upload images in batches to S3 and configure S3 event notifications to be sent to an SNS topic.
You can have your API subscribed to this SNS topic, receiving the object name and bucket name. Within the API, you then have the logic to use Custom Labels and store the results in a database like DynamoDB. This way, you can process images asynchronously.
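The worker behind that subscription could look like this Python/boto3 sketch (the project-version ARN and DynamoDB table name are placeholders for your own resources):

```python
import json

def s3_object_from_sns(event):
    # SNS delivers the S3 notification as a JSON string in Message
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    record = message["Records"][0]["s3"]
    return record["bucket"]["name"], record["object"]["key"]

def handler(event, context):
    import boto3

    bucket, key = s3_object_from_sns(event)
    result = boto3.client("rekognition").detect_custom_labels(
        ProjectVersionArn="arn:aws:rekognition:us-east-1:123456789012:project/my-project/version/v1/1",  # placeholder
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
    )
    # Persist results per image so they can be queried later
    boto3.resource("dynamodb").Table("image-labels").put_item(  # placeholder table
        Item={"image": key, "labels": json.dumps(result["CustomLabels"])}
    )
```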
Just make sure you have the right inference hours configured so you don't flood your systems and make them unavailable.
Hope this solves your problem.
You can achieve this by using a batch-processing solution published by AWS.
Please refer to this blog for the solution: https://aws.amazon.com/blogs/machine-learning/batch-image-processing-with-amazon-rekognition-custom-labels/
The solution can also be deployed from GitHub, where it is published as an AWS sample: https://github.com/aws-samples/amazon-rekognition-custom-labels-batch-processing. If you are in a region for which the deployment button is not provided, please raise an issue.
Alternatively, you can deploy this solution using SAM. The solution is developed as an AWS Serverless Application Model application, so it can be deployed with the SAM CLI using the following steps:
Install the SAM CLI - https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install.html
Download the code repository to your local machine.
From within the folder, execute the following steps (the folder name is referenced as sam-app in the example below):
a. # Step 1 - Build your application
   cd sam-app
   sam build
b. # Step 2 - Deploy your application
   sam deploy --guided
Some images are already uploaded to an AWS S3 bucket, and of course there are a lot of them. I want to edit and replace those images, and I want to do it on the AWS side; here I want to use AWS Lambda.
I can already do this job from my local PC, but it takes a very long time. So I want to do it on a server.
Is it possible?
Unfortunately, directly editing a file in S3 is not supported; check out the thread. To work around this, you need to download the file to a server/local machine, edit it, and re-upload it to the S3 bucket. You can also enable versioning.
For Node.js you can use Jimp
For java: ImageIO
For python: Pillow
Or you can use any technology to edit the image and then upload it using the AWS SDK.
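As an illustration of that download-edit-reupload loop, here is a Pillow sketch in Python (assumes boto3 and Pillow are installed; the resize operation stands in for whatever edit you need):

```python
import io

def scaled_size(size, target_width):
    # Proportional (width, height) for a target width
    width, height = size
    return target_width, max(1, round(height * target_width / width))

def resize_in_place(bucket, key, target_width):
    import boto3
    from PIL import Image  # Pillow

    s3 = boto3.client("s3")
    original = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    image = Image.open(io.BytesIO(original))
    image = image.resize(scaled_size(image.size, target_width))
    buffer = io.BytesIO()
    image.save(buffer, format="JPEG")
    # Overwrite the object; enable bucket versioning to keep the original
    s3.put_object(Bucket=bucket, Key=key, Body=buffer.getvalue())
```

The same function body works unchanged inside a Lambda handler, since Lambda only needs the bytes in memory, not a local file.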
For lambda function you can use serverless framework - https://serverless.com/
I made YouTube videos on this a while back, covering how to get started with AWS Lambda and Serverless:
https://www.youtube.com/watch?v=uXZCNnzSMkI
You can trigger a Lambda using the AWS SDK.
Write a Lambda to process a single image and deploy it.
Then, locally, use the AWS SDK to list the images in the bucket and invoke the Lambda asynchronously for each file using invoke. I would also record somewhere which files have been processed, so you can continue if something fails.
Note that the default limit for Lambda is 1000 concurrent executions, so to avoid reaching the limit you can send messages to an SQS queue (which then triggers the Lambda) or just retry when invoke throws an error.
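The fan-out described above could be sketched like this in Python with boto3 (the worker Lambda's name and the payload shape are assumptions, not a fixed AWS format):

```python
import json

def invocation_payload(bucket, key):
    # Payload shape the worker Lambda expects (an assumption for this sketch)
    return json.dumps({"bucket": bucket, "key": key}).encode()

def process_bucket(bucket, function_name, processed):
    import boto3

    s3 = boto3.client("s3")
    lam = boto3.client("lambda")
    for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            if obj["Key"] in processed:
                continue  # already done; lets you resume after a failure
            lam.invoke(
                FunctionName=function_name,
                InvocationType="Event",  # asynchronous invocation
                Payload=invocation_payload(bucket, obj["Key"]),
            )
            processed.add(obj["Key"])
```

Passing the `processed` set in (persisted to a file or table between runs) is what makes the job restartable.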