In Azure Cognitive Services image processing, the returned JSON has a "caption" field that summarizes the content of the image. However, I haven't found anything similar in AWS.
In Amazon Rekognition image processing, how do I get a caption for an image?
You would use the DetectLabels - Amazon Rekognition command:
Detects instances of real-world entities within an image (JPEG or PNG) provided as input. This includes objects like flower, tree, and table; events like wedding, graduation, and birthday party; and concepts like landscape, evening, and nature.
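Rekognition has no caption field, but a common workaround is to stitch the top DetectLabels results into a rough caption. The helper below is a sketch of that idea: it only processes the "Labels" list shape that detect_labels returns, so the sample data stands in for a real API response.

```python
def caption_from_labels(labels, max_terms=3):
    """Join the highest-confidence label names into a rough caption."""
    top = sorted(labels, key=lambda l: l["Confidence"], reverse=True)[:max_terms]
    return ", ".join(l["Name"] for l in top) if top else "unlabeled image"

# Shape of the "Labels" list that rekognition.detect_labels(...) returns:
sample_labels = [
    {"Name": "Wedding", "Confidence": 99.1},
    {"Name": "Person", "Confidence": 98.4},
    {"Name": "Flower", "Confidence": 91.7},
    {"Name": "Table", "Confidence": 77.2},
]
print(caption_from_labels(sample_labels))  # → Wedding, Person, Flower
```

This gives a keyword-style summary rather than a natural-language sentence, which is usually good enough for alt text or search indexing.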
I'm using AWS Rekognition to perform single-class object detection. I'm assigning image-level labels with only one label in my entire dataset.
This is based on a new feature released by AWS,
https://aws.amazon.com/about-aws/whats-new/2020/06/amazon-rekognition-custom-labels-now-supports-single-object-training/
I have created my dataset with the following configuration.
When I train my model, it fails with the following status message:
The manifest file has too few usable labels.
Any ideas on what I might be missing?
The minimum unique label count for the object location (bounding box / detection) use case is 1 label, but the minimum label count for "Objects, Scenes, and Concepts (classification)" i.e. image-level data is 2. If you were auto-assigning image level labels and there was only 1 label assigned, this is likely why you were getting the "manifest file has too few usable labels" error.
Source: https://docs.aws.amazon.com/rekognition/latest/customlabels-dg/limits.html
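To make the two-label minimum concrete, here is a sketch of building a minimal classification manifest in Python. The attribute name my-label, the bucket, and the file names are made up, and the metadata shown is abridged (a real SageMaker Ground Truth-style manifest line carries additional fields such as creation-date; check the Custom Labels docs for the exact schema):

```python
import json

def manifest_line(image_uri, class_name, attr="my-label", class_id=0):
    """One JSON Lines entry assigning an image-level (classification) label."""
    return json.dumps({
        "source-ref": image_uri,
        attr: class_id,
        f"{attr}-metadata": {
            "confidence": 1,
            "class-name": class_name,
            "human-annotated": "yes",
            "type": "groundtruth/image-classification",
        },
    })

# Classification training needs at least two distinct class names;
# a single class triggers "The manifest file has too few usable labels."
lines = [
    manifest_line("s3://my-bucket/cat1.jpg", "cat", class_id=0),
    manifest_line("s3://my-bucket/dog1.jpg", "dog", class_id=1),
]
print("\n".join(lines))
```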
We would also like to invite users of Amazon Rekognition Custom Labels to consider asking questions about the service on AWS re:Post: https://repost.aws/
Thank you for using Amazon Rekognition Custom Labels.
Christian Dunn
I'm developing a prototype of a video analysis service on AWS.
The question is: am I thinking in the right direction, or will I fail trying to implement this architecture?
Architecture:
Flask on EC2.
The user (authenticated) uploads a file via the web view; I save it to S3.
Lambda triggers SageMaker.
SageMaker takes the file from S3, performs preparation and analysis, then: 1) saves the results to a PostgreSQL DB; 2) triggers a Lambda that sends a notification to Flask that the analysis is done.
The user receives a notification from Flask that the analysis is done.
Flask web page visualizes data from the analysis for the user.
It is only for prototyping purposes, so I'm trying to keep it as simple as possible.
I will appreciate any comments and recommendations.
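A minimal sketch of step 3 above, the Lambda that reacts to the S3 upload. The event parsing is pure Python and matches the S3 notification shape Lambda receives; the SageMaker call itself is only indicated in a comment, since how you start the job (processing job, endpoint invocation, etc.) depends on how the model is packaged, and the bucket/key names here are illustrative:

```python
def parse_s3_event(event):
    """Extract (bucket, key) pairs from an S3 Put event delivered to Lambda."""
    return [
        (r["s3"]["bucket"]["name"], r["s3"]["object"]["key"])
        for r in event.get("Records", [])
    ]

def lambda_handler(event, context):
    for bucket, key in parse_s3_event(event):
        # Here you would kick off the analysis, e.g. with boto3:
        #   sm = boto3.client("sagemaker")
        #   sm.create_processing_job(...)  # or invoke a SageMaker endpoint
        print(f"would start analysis for s3://{bucket}/{key}")
    return {"statusCode": 200}

# Shape of the event an S3 trigger delivers (abridged):
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "uploads"}, "object": {"key": "video.mp4"}}}
    ]
}
print(parse_s3_event(sample_event))  # → [('uploads', 'video.mp4')]
```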
Rekognition can find labels, text, faces, and expressions in images and video. Below I demonstrate how to find labels in an image that you have stored in an S3 bucket; pass the key of the image object in the bucket for Rekognition to label.
import boto3

# AWS_KEY_ID / AWS_SECRET are assumed to be defined elsewhere; in practice,
# prefer an IAM role or the default credential chain over hard-coded keys.
def detect_labels(bucket, key, max_labels=10, min_confidence=95, region="us-east-1"):
    rekognition = boto3.client(
        "rekognition",
        region_name=region,
        aws_access_key_id=AWS_KEY_ID,
        aws_secret_access_key=AWS_SECRET,
    )
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MaxLabels=max_labels,
        MinConfidence=min_confidence,
    )
    return response["Labels"]
I'm trying to do a quick PoC on the AWS Rekognition custom labels feature. I'd like to try using it for object detection.
I've had a couple of attempts at setting it up using only tools in the AWS Console. I'm using images imported from the Rekognition bucket in S3, then I added bounding boxes using the tools in the Rekognition console.
All of my images are marked up with bounding boxes; no whole-image labels have been used. I have 9 labels, all of which appear in at least one image.
I've ensured my images are less than 4096x4096 in size (which is mentioned on this AWS forums thread as a possible cause of this issue).
When I attempt to train my model I get the "The manifest file contains too many invalid data objects" error.
What could be wrong here? An error message complaining about the format of a file I didn't create manually, and that I can't see or edit, isn't exactly intuitive.
I have a collection of profile images from customers. I need to be able to pass a selfie of a person, scan it across the collection of images, and pull up the customer's information.
I need to do the following using AWS Rekognition:
Create a collection - Done
Add images to the collection - what's the REST API syntax for this?
While adding the images to the collection, also tag them with the customer name.
Take a selfie portrait, search across the collection, and return the matching tag information.
I'm using Flutter as my platform, and there is no AWS SDK support for it, so I will need to make REST API calls.
However, the AWS docs don't provide much information about REST support.
The APIs are documented. For example to detect faces in an image and add them to a collection, see IndexFaces.
I'd personally recommend getting comfortable with Rekognition via the awscli (or Python/boto3) briefly before you move to the REST API.
On the name tagging front, you assign an 'external ID' to faces when adding them to a collection. That external ID is the correlator that you supply and that Rekognition stores. Later, when you ask Rekognition if a given face matches one already in a collection, Rekognition will return you the external ID. That can then be used as a lookup into some database that you have to identify the person's name, date of birth, or whatever.
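To illustrate the REST shape: Rekognition uses the AWS JSON 1.1 protocol, i.e. a POST to the regional endpoint with an X-Amz-Target header naming the operation, plus SigV4 signing. Below is a sketch of the unsigned pieces of an IndexFaces call that a Flutter app would have to sign and send; the collection, bucket, and ID values are made up, and the SigV4 signature step is deliberately omitted:

```python
import json

def index_faces_request(region, collection_id, bucket, key, external_id):
    """Build the unsigned URL, headers, and body of a Rekognition IndexFaces REST call."""
    url = f"https://rekognition.{region}.amazonaws.com/"
    headers = {
        "Content-Type": "application/x-amz-json-1.1",
        "X-Amz-Target": "RekognitionService.IndexFaces",
        # A real request also needs SigV4 auth headers (Authorization, X-Amz-Date).
    }
    body = json.dumps({
        "CollectionId": collection_id,
        "Image": {"S3Object": {"Bucket": bucket, "Name": key}},
        "ExternalImageId": external_id,
    })
    return url, headers, body

url, headers, body = index_faces_request(
    "us-east-1", "customers", "profile-pics", "jane.jpg", "customer-42"
)
print(url)
```

SearchFacesByImage follows the same pattern with a different X-Amz-Target value and request body, which is why working out one signed call gets you most of the way to all of them.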
I'm trying to find the right way to use the AWS Rekognition service.
My problem is how to verify a person's image against multiple collections. I'm reading "Build Your Own Face Recognition Service Using Amazon Rekognition" on the AWS Machine Learning Blog, but I cannot find an implementation document for it. My point of interest is the "Face verification" section.
Update 1:
My target is: Using AWS Rekognition to get person's info by their face.
My problem is: how to make AWS Rekognition improve its accuracy when recognizing a face.
What I tried:
Uploading multiple captured portraits of a person with the same ExternalImageID, but I'm not sure whether it works.
Finding a way to create a Collection for each person, then uploading the person's portraits to their Collection, but I don't know how to search for a face across multiple Collections.
Using S3 to store people's images, then using a Lambda function to do something that I haven't figured out yet.
Update 2:
What is your input material? Input materials are some people's portrait photos, with ExternalImageID set to their name (e.g. my portrait photo will have the ExternalImageID "Long").
What are you trying to do? I'm trying to get the ExternalImageID back when I send a portrait photo of a registered person (e.g. with another portrait photo of me, AWS has to respond with the ExternalImageID "Long").
Do you have it working, but it is not recognizing some people? Yes, it works, but sometimes it cannot recognize people accurately.
Please tell us your use-case / scenario and what you are trying to accomplish:
Create an AWS Rekognition collection with sample name (eg facetest).
Register some people with their names as the ExternalImageID.
Submit an image to AWS Rekognition API to get ExternalImageID - his name.
Okay, so basically you have it working but it doesn't always recognise the person. I'll assume it does not even list the person in the response, even with a low percentage.
I would recommend adding multiple images of the same person to the Face Collection, specifying the same ExternalImageId for each image. (Use one Face Collection with all people in it, including multiple images of the same person.)
Please note that "If you provide the same image, specify the same collection, and use the same external ID in the IndexFaces operation, Amazon Rekognition doesn't save duplicate face metadata." However, adding different images with the same ExternalImageId should be fine.
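As a sketch of that recommendation (the collection name, bucket, and image keys are hypothetical), indexing several portraits under one ExternalImageId and then identifying a new photo looks roughly like this. The functions take the boto3 Rekognition client as a parameter, so the calling code creates it once:

```python
def index_portraits(rekognition, collection_id, bucket, keys, external_id):
    """Index several different portraits of one person under the same ExternalImageId."""
    for key in keys:
        rekognition.index_faces(
            CollectionId=collection_id,
            Image={"S3Object": {"Bucket": bucket, "Name": key}},
            ExternalImageId=external_id,  # same ID for every image of this person
        )

def identify(rekognition, collection_id, bucket, key, threshold=80):
    """Return (external_id, similarity) for the best match, or None if no match."""
    resp = rekognition.search_faces_by_image(
        CollectionId=collection_id,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        FaceMatchThreshold=threshold,
        MaxFaces=1,
    )
    matches = resp.get("FaceMatches", [])
    if not matches:
        return None
    best = matches[0]
    return best["Face"]["ExternalImageId"], best["Similarity"]

# Usage with a real client:
#   rekognition = boto3.client("rekognition", region_name="us-east-1")
#   index_portraits(rekognition, "facetest", "my-bucket",
#                   ["long/p1.jpg", "long/p2.jpg"], external_id="Long")
#   print(identify(rekognition, "facetest", "my-bucket", "selfie.jpg"))
```

Adding more varied portraits (different angles, lighting, expressions) under the same ExternalImageId is what tends to raise recognition accuracy.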