How can I compare two AWS Rekognition collections?

I have two images with 40+ faces of people in it. I want to detect which faces are repeated in both images using AWS Rekognition service.
The original approach was to use the IndexFaces function of Rekognition, store all the faces of one image in one collection and the faces of the other image in another collection, and then compare them by their FaceId. I thought that IndexFaces would provide a fingerprint for each face, but it turns out the FaceId is just a random identifier, not a fingerprint of the face.
I found this answer, How to compare faces in a Collection to faces in a Stored Video using AWS Rekognition?, but that compares all the faces in a collection with faces appearing in a video, so I would be forced to convert one of the images to a one-second video containing the image as its only frame, which I think defeats the purpose of easy usage.
There must be a way to compare two Rekognition collections to check for repeated faces that I'm failing to find.

There are two ways you could go about this:
Option 1: Use ExternalImageID
This is similar to your method.
The important part is that when a face is added to a collection, you can provide an ExternalImageID. Later, when this face is matched with an image, Amazon Rekognition will return the ExternalImageID for the face.
For example, you could store a person's name or unique identifier in the ExternalImageID.
So, your process could look like this (a minimal Python sketch follows the steps):
Call DetectFaces() on image 1
It will return a list of FaceDetails with a bounding box for each face
Loop through each returned face, crop the image to its bounding box, and call IndexFaces() on each individual face crop, providing an ExternalImageID each time (it could just be an incrementing number)
Then, call IndexFaces() on image 2 and run SearchFaces() with each newly returned FaceId
If a face from the collection generated from image 1 matches, the result will include its ExternalImageID
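For illustration, here is a rough boto3 sketch of that flow (my own sketch, not code from this answer; the file name, collection ID, and ExternalImageID scheme are placeholders, and Pillow is assumed to be available for cropping):
import io

import boto3
from PIL import Image  # Pillow, assumed installed, used only for cropping

rek = boto3.client("rekognition")
COLLECTION = "MainCollection"  # placeholder collection ID

def index_faces_individually(path, prefix):
    """Detect every face in one image, crop it, and index each crop separately."""
    with open(path, "rb") as f:
        img_bytes = f.read()
    img = Image.open(io.BytesIO(img_bytes))
    width, height = img.size

    faces = rek.detect_faces(Image={"Bytes": img_bytes})["FaceDetails"]
    for i, face in enumerate(faces):
        box = face["BoundingBox"]  # coordinates are fractions of the image size
        left = int(box["Left"] * width)
        top = int(box["Top"] * height)
        right = left + int(box["Width"] * width)
        bottom = top + int(box["Height"] * height)

        crop_buf = io.BytesIO()
        img.crop((left, top, right, bottom)).save(crop_buf, format="JPEG")

        # The ExternalImageId comes back in later matches, so make it meaningful
        rek.index_faces(
            CollectionId=COLLECTION,
            Image={"Bytes": crop_buf.getvalue()},
            ExternalImageId=f"{prefix}-face-{i}",
            MaxFaces=1,
        )

index_faces_individually("image1.jpg", "image1")  # hypothetical local file
After indexing image 2 the same way, SearchFaces() with each of its FaceIds returns FaceMatches whose ExternalImageId tells you which face from image 1 repeats.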
Option 2: Use CompareFaces()
Compares a face in the source input image with each of the 100 largest faces detected in the target input image.
This takes one input face (the largest in the source image) and compares it to all faces in the target image. Therefore, you would follow a similar process to the one above (a sketch follows these steps):
Call DetectFaces() on image 1
It will return a list of FaceDetails with a bounding box for each face
Loop through each returned face, crop it using the provided bounding box, and call CompareFaces() with that crop as the source image, comparing it to image 2
You will be provided with a confidence level of each potentially matching face
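As a rough sketch of one iteration (again my own illustration; the byte handling mirrors the cropping idea above and the threshold is just an example):
import boto3

rek = boto3.client("rekognition")

def compare_crop_to_image(source_crop_bytes, target_bytes):
    """Compare one cropped face against every face in the target image."""
    resp = rek.compare_faces(
        SourceImage={"Bytes": source_crop_bytes},
        TargetImage={"Bytes": target_bytes},
        SimilarityThreshold=80,  # example threshold, tune as needed
    )
    # Each match carries the bounding box of the face in the target image
    # plus a similarity score
    return [(m["Face"]["BoundingBox"], m["Similarity"])
            for m in resp["FaceMatches"]]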
See: Comparing Faces in Images - Amazon Rekognition
So, the second method is easier if you are just comparing two images. The first method is better if you have already stored individual faces that you wish to use again in future calls.

Thanks to @John Rotenstein I was able to make a quick prototype using the AWS CLI:
Assuming that we have all the permissions set up, the AWS CLI installed on the system, and an S3 bucket called 'TestBucket' where all the images are stored, I did the following:
1.- Created a collection called "MainCollection":
> aws rekognition create-collection --collection-id "MainCollection"
2.- Added one of the people I want to detect: I extracted the face from an individual photo and ran IndexFaces:
> aws rekognition index-faces --image '{"S3Object":{"Bucket":"TestBucket","Name":"cristian.jpg"}}' --collection-id "MainCollection" --max-faces 100 --quality-filter "AUTO" --detection-attributes "ALL" --external-image-id "cristian.jpg"
The resulting FaceID is 'a54ef57e-7003-4721-b7e1-703d9f039da9'
3.- I added the second image to the collection:
> aws rekognition index-faces --image '{"S3Object":{"Bucket":"TestBucket","Name":"ImageContaining40plusfaces.jpg"}}' --collection-id "MainCollection" --max-faces 100 --quality-filter "AUTO" --detection-attributes "ALL" --external-image-id "ImageContaining40plusfaces.jpg"
This resulted in 40+ entries like the following (showing only one for brevity):
{
"FaceRecords": [
{
"FaceDetail": {
"Confidence": 99.99859619140625,
"Eyeglasses": {
"Confidence": 54.99907684326172,
"Value": false
},
"Sunglasses": {
"Confidence": 54.99971389770508,
"Value": false
},
"Gender": {
"Confidence": 54.747318267822266,
"Value": "Male"
},
"Landmarks": [
{
"Y": 0.311367392539978,
"X": 0.1916557103395462,
"Type": "eyeLeft"
},
{
"Y": 0.3120582699775696,
"X": 0.20143891870975494,
"Type": "eyeRight"
},
{
"Y": 0.3355730175971985,
"X": 0.19253292679786682,
"Type": "mouthLeft"
},
{
"Y": 0.3361922800540924,
"X": 0.2005564421415329,
"Type": "mouthRight"
},
{
"Y": 0.32276451587677,
"X": 0.19691102206707,
"Type": "nose"
},
{
"Y": 0.30642834305763245,
"X": 0.1876278519630432,
"Type": "leftEyeBrowLeft"
},
{
"Y": 0.3037400245666504,
"X": 0.19379760324954987,
"Type": "leftEyeBrowRight"
},
{
"Y": 0.3029193580150604,
"X": 0.19078010320663452,
"Type": "leftEyeBrowUp"
},
{
"Y": 0.3041592836380005,
"X": 0.1995924860239029,
"Type": "rightEyeBrowLeft"
},
{
"Y": 0.3074571192264557,
"X": 0.20519918203353882,
"Type": "rightEyeBrowRight"
},
{
"Y": 0.30346789956092834,
"X": 0.2024637758731842,
"Type": "rightEyeBrowUp"
},
{
"Y": 0.3115418553352356,
"X": 0.1898096352815628,
"Type": "leftEyeLeft"
},
{
"Y": 0.3118479251861572,
"X": 0.1935078650712967,
"Type": "leftEyeRight"
},
{
"Y": 0.31028062105178833,
"X": 0.19159308075904846,
"Type": "leftEyeUp"
},
{
"Y": 0.31250447034835815,
"X": 0.19164365530014038,
"Type": "leftEyeDown"
},
{
"Y": 0.31221893429756165,
"X": 0.19937492907047272,
"Type": "rightEyeLeft"
},
{
"Y": 0.3123391270637512,
"X": 0.20295380055904388,
"Type": "rightEyeRight"
},
{
"Y": 0.31087613105773926,
"X": 0.2013435810804367,
"Type": "rightEyeUp"
},
{
"Y": 0.31308478116989136,
"X": 0.20125225186347961,
"Type": "rightEyeDown"
},
{
"Y": 0.3264555335044861,
"X": 0.19483911991119385,
"Type": "noseLeft"
},
{
"Y": 0.3265785574913025,
"X": 0.19839303195476532,
"Type": "noseRight"
},
{
"Y": 0.3319154679775238,
"X": 0.196599081158638,
"Type": "mouthUp"
},
{
"Y": 0.3392537832260132,
"X": 0.19649912416934967,
"Type": "mouthDown"
},
{
"Y": 0.311367392539978,
"X": 0.1916557103395462,
"Type": "leftPupil"
},
{
"Y": 0.3120582699775696,
"X": 0.20143891870975494,
"Type": "rightPupil"
},
{
"Y": 0.31476160883903503,
"X": 0.18458032608032227,
"Type": "upperJawlineLeft"
},
{
"Y": 0.3398161828517914,
"X": 0.18679481744766235,
"Type": "midJawlineLeft"
},
{
"Y": 0.35216856002807617,
"X": 0.19623762369155884,
"Type": "chinBottom"
},
{
"Y": 0.34082692861557007,
"X": 0.2045571506023407,
"Type": "midJawlineRight"
},
{
"Y": 0.3160339295864105,
"X": 0.20668834447860718,
"Type": "upperJawlineRight"
}
],
"Pose": {
"Yaw": 4.778820514678955,
"Roll": 1.7387386560440063,
"Pitch": 11.82911205291748
},
"Emotions": [
{
"Confidence": 47.9405403137207,
"Type": "CALM"
},
{
"Confidence": 45.432857513427734,
"Type": "ANGRY"
},
{
"Confidence": 45.953487396240234,
"Type": "HAPPY"
},
{
"Confidence": 45.215728759765625,
"Type": "SURPRISED"
},
{
"Confidence": 50.013206481933594,
"Type": "SAD"
},
{
"Confidence": 45.30225372314453,
"Type": "CONFUSED"
},
{
"Confidence": 45.14192199707031,
"Type": "DISGUSTED"
}
],
"AgeRange": {
"High": 43,
"Low": 26
},
"EyesOpen": {
"Confidence": 54.95812225341797,
"Value": true
},
"BoundingBox": {
"Width": 0.02271346002817154,
"Top": 0.28692546486854553,
"Left": 0.1841897815465927,
"Height": 0.06893482059240341
},
"Smile": {
"Confidence": 53.493797302246094,
"Value": false
},
"MouthOpen": {
"Confidence": 53.51670837402344,
"Value": false
},
"Quality": {
"Sharpness": 53.330047607421875,
"Brightness": 81.31917572021484
},
"Mustache": {
"Confidence": 54.971839904785156,
"Value": false
},
"Beard": {
"Confidence": 54.136474609375,
"Value": false
}
},
"Face": {
"BoundingBox": {
"Width": 0.02271346002817154,
"Top": 0.28692546486854553,
"Left": 0.1841897815465927,
"Height": 0.06893482059240341
},
"FaceId": "570eb8a6-72b8-4381-a1a2-9112aa2b348e",
"ExternalImageId": "ImageContaining40plusfaces.jpg",
"Confidence": 99.99859619140625,
"ImageId": "7f09400e-2de8-3d11-af05-223f13f9ef76"
}
}
]
}
4.- Then I issued SearchFaces using the FaceId detected previously:
> aws rekognition search-faces --face-id "a54ef57e-7003-4721-b7e1-703d9f039da9" --collection-id "MainCollection"
And voilà! I got the face detected in the second source image, as needed:
{
    "SearchedFaceId": "a54ef57e-7003-4721-b7e1-703d9f039da9",
    "FaceModelVersion": "4.0",
    "FaceMatches": [
        {
            "Face": {
                "BoundingBox": {
                    "Width": 0.022825799882411957,
                    "Top": 0.31017398834228516,
                    "Left": 0.4018920063972473,
                    "Height": 0.06067270040512085
                },
                "FaceId": "bfd58e70-2bcf-403a-87da-6137c28ccbdd",
                "ExternalImageId": "ImageContaining40plusfaces.jpg",
                "Confidence": 100.0,
                "ImageId": "7f09400e-2de8-3d11-af05-223f13f9ef76"
            },
            "Similarity": 92.36637115478516
        }
    ]
}
So now I have to do the same thing for all the other faces detected in source image 1, and then compare them to the ones detected in source image 2, using the same set of commands (see the sketch below).
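If you would rather script that last step than run the CLI by hand, a sketch along these lines should work (my own sketch; it assumes the faces from source image 1 were indexed with a recognizable ExternalImageId prefix such as "image1"):
import boto3

rek = boto3.client("rekognition")
COLLECTION = "MainCollection"

# Gather every face in the collection (ListFaces is paginated)
faces, token = [], None
while True:
    kwargs = {"CollectionId": COLLECTION, "MaxResults": 4096}
    if token:
        kwargs["NextToken"] = token
    resp = rek.list_faces(**kwargs)
    faces.extend(resp["Faces"])
    token = resp.get("NextToken")
    if not token:
        break

# Search the collection with each face that came from source image 1
for face in faces:
    if not face.get("ExternalImageId", "").startswith("image1"):
        continue  # assumed naming scheme for image 1's faces
    matches = rek.search_faces(
        CollectionId=COLLECTION,
        FaceId=face["FaceId"],
        FaceMatchThreshold=90,
    )["FaceMatches"]
    for m in matches:
        print(face["FaceId"], "->", m["Face"]["FaceId"], m["Similarity"])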

Related

Deneb plot (Vega-Lite) for Power BI: How to use a free-scale y-axis with facet

I am using the Deneb custom visual to repeat a visual for different tasks. Is it possible to only show the relevant Y-axis values? The following data is used:
The following Vega-lite JSON is used:
{
"data": {"name": "dataset"},
"mark": {
"type": "bar",
"opacity": 1,
"tooltip": true,
"cornerRadius": 15
},
"encoding": {
"x": {
"field": "Earliest StartDate",
"type": "temporal"
},
"y": {
"field": "MachGrpCode",
"type": "nominal",
"axis": {
"title": null,
"grid": true,
"tickBand": "extent"
}
},
"row": {
"field": "ProdHeaderOrdNr",
"header": {"labelAngle": 0}
}
},
"resolve": {
"axis": {
"x": "independent",
"y": "independent"
}
}
}
Which results in:
Is it possible to only use the relevant task values (for 022 --> erase the 6700 row)?
From what I've seen, this is a Vega bug. The recommended workaround is to use vconcat with a filter transform. If you only have a few ProdHeaderOrdNr values, this is doable.
Open the Chart in the Vega Editor

ChartJS with dates on the X axis not displaying any graph

I have a Vue.JS project getting data from a REST API (mine, so I can modify it if needed). This data is formatted for Chart.JS.
I am supposed to display a graph with 3 datasets: 2 of type line with the same X values, and 1 of type bar with different X values (that's why I don't want to specify labels). In any case, all the X values are dates, so I would like only 1 X axis for all the curves.
I am using datasets with (x,y) data format:
x is an ISO8601 date
y is a float
My problem is that NOTHING is displayed at all... could anyone help me, please? I don't understand why; I feel like I've done things right. I saw somewhere that I needed to include Moment.js, but the official documentation says that is not the case. I suspect the problem comes from the dates, because when I tried changing the y values, the two y-axis bounds changed (so the y values are understood). I also tried adding the "xAxisID" option; nothing changed.
Here is a sample of my data (normally hundreds of values):
{
"type": "line",
"data": {
"datasets": [
{
"label": "Température",
"borderColor": "red",
"backgroundColor": "red",
"fill": false,
"data": [
{
"x": "2020-07-05T15:38:47.933711",
"y": 2.8224692
},
{
"x": "2020-07-05T15:48:47.490669",
"y": 33.63129
},
{
"x": "2020-07-05T15:58:48.182698",
"y": 40.540405
},
{
"x": "2020-07-05T16:08:47.829882",
"y": 3.0312533
},
{
"x": "2020-07-05T16:18:47.489026",
"y": 49.145626
}
],
"yAxisID": "yAxeTemperature"
},
{
"label": "Humidité",
"borderColor": "blue",
"backgroundColor": "blue",
"fill": false,
"data": [
{
"x": "2020-07-05T15:38:47.933711",
"y": 33.980587
},
{
"x": "2020-07-05T15:48:47.490669",
"y": 2.0313625
},
{
"x": "2020-07-05T15:58:48.182698",
"y": 24.249685
},
{
"x": "2020-07-05T16:08:47.829882",
"y": 7.4426904
},
{
"x": "2020-07-05T16:18:47.489026",
"y": 2.6335742
},
{
"x": "2020-07-05T16:28:48.175547",
"y": 25.92827
}
],
"yAxisID": "yAxeHumidite"
}
]
},
"options": {
"responsive": true,
"hoverMode": "index",
"stacked": false,
"title": null,
"scales": {
"xAxes": [
{
"type": "time",
"display": true,
"position": "bottom",
"id": "xAxeTime",
"scaleLabel": {
"display": true,
"labelString": "Temps",
"fontColor": "black"
},
"time": {
"unit": "minute",
"parser": "moment.ISO_8601",
"tooltipFormat": "ll"
}
}
],
"yAxes": [
{
"type": "linear",
"display": true,
"position": "left",
"id": "yAxeTemperature",
"scaleLabel": {
"display": true,
"labelString": "Température",
"fontColor": "red"
}
},
{
"type": "linear",
"display": true,
"position": "left",
"id": "yAxeHumidite",
"scaleLabel": {
"display": true,
"labelString": "Humidité",
"fontColor": "blue"
}
}
]
}
}
}
Here is how my chart is created (using vue-chartjs and chart.js):
createChart(chartId: string, chartData: GrapheBean) {
  const ctx = document.getElementById(chartId);
  // @ts-ignore
  const myChart = new Chart(ctx, {
    type: chartData.type,
    data: chartData.data,
    options: chartData.options,
  });
}
Here is the result:
I am stuck now, even though I keep trying things with little hope. Thanks a lot in advance to anyone who can help.
First you should remove the option xAxes.time.parser. It is not needed when the dates are in ISO8601 format.
Further, Chart.js internally uses Moment.js for the functionality of the time axis. Therefore you should use the bundled version of Chart.js that includes Moment.js in a single file.
Please have a look at your amended code in a pure JavaScript version.
const chartData = {
"type": "line",
"data": {
"datasets": [{
"label": "Température",
"borderColor": "red",
"backgroundColor": "red",
"fill": false,
"data": [{
"x": "2020-07-05T15:38:47.933711",
"y": 2.8224692
},
{
"x": "2020-07-05T15:48:47.490669",
"y": 33.63129
},
{
"x": "2020-07-05T15:58:48.182698",
"y": 40.540405
},
{
"x": "2020-07-05T16:08:47.829882",
"y": 3.0312533
},
{
"x": "2020-07-05T16:18:47.489026",
"y": 49.145626
}
],
"yAxisID": "yAxeTemperature"
},
{
"label": "Humidité",
"borderColor": "blue",
"backgroundColor": "blue",
"fill": false,
"data": [{
"x": "2020-07-05T15:38:47.933711",
"y": 33.980587
},
{
"x": "2020-07-05T15:48:47.490669",
"y": 2.0313625
},
{
"x": "2020-07-05T15:58:48.182698",
"y": 24.249685
},
{
"x": "2020-07-05T16:08:47.829882",
"y": 7.4426904
},
{
"x": "2020-07-05T16:18:47.489026",
"y": 2.6335742
},
{
"x": "2020-07-05T16:28:48.175547",
"y": 25.92827
}
],
"yAxisID": "yAxeHumidite"
}
]
},
"options": {
"responsive": true,
"hoverMode": "index",
"stacked": false,
"title": null,
"scales": {
"xAxes": [{
"type": "time",
"display": true,
"position": "bottom",
"id": "xAxeTime",
"scaleLabel": {
"display": true,
"labelString": "Temps",
"fontColor": "black"
},
"time": {
"unit": "minute",
// "parser": "moment.ISO_8601", -> remove this line
"tooltipFormat": "ll"
}
}],
"yAxes": [{
"type": "linear",
"display": true,
"position": "left",
"id": "yAxeTemperature",
"scaleLabel": {
"display": true,
"labelString": "Température",
"fontColor": "red"
}
},
{
"type": "linear",
"display": true,
"position": "left",
"id": "yAxeHumidite",
"scaleLabel": {
"display": true,
"labelString": "Humidité",
"fontColor": "blue"
}
}
]
}
}
}
const ctx = document.getElementById('myChart');
const myChart = new Chart(ctx, {
type: chartData.type,
data: chartData.data,
options: chartData.options,
});
<script src="https://cdnjs.cloudflare.com/ajax/libs/Chart.js/2.9.3/Chart.bundle.min.js"></script>
<canvas id="myChart" height="120"></canvas>
In your question you also wrote "I am supposed to display a graph with 3 datasets, 2 of type line with the same X values, but 1 of type bar with different X values". Your code however only defines 2 datasets. In case you're also facing problems with this point, please post a new question for this separate issue.

Amazon Rekognition - Detect Faces - not returning all attributes

I am calling the Amazon Rekognition DetectFaces API from Postman. I pass the image as base64 and pass the "Attributes" as ["ALL"].
{
    "Image": {
        "Attributes": ["ALL"],
        "Bytes": <base64 image>
    }
}
The response I get is below. You can see that it has only a few attributes and does not include age range, beard, glasses, etc. The same set of attributes is returned irrespective of whether I pass the value as "ALL", "DEFAULT", or "ALL","DEFAULT" in the Attributes parameter.
Am I missing something here? Any pointers would be much appreciated.
{
"FaceDetails": [
{
"BoundingBox": {
"Height": 0.49429410696029663,
"Left": 0.35876789689064026,
"Top": 0.15820752084255219,
"Width": 0.3210359811782837
},
"Confidence": 100.0,
"Landmarks": [
{
"Type": "eyeLeft",
"X": 0.4103875756263733,
"Y": 0.35949569940567017
},
{
"Type": "eyeRight",
"X": 0.5616039037704468,
"Y": 0.3441786468029022
},
{
"Type": "mouthLeft",
"X": 0.4385274350643158,
"Y": 0.5330458879470825
},
{
"Type": "mouthRight",
"X": 0.5625125169754028,
"Y": 0.5202205181121826
},
{
"Type": "nose",
"X": 0.48630291223526,
"Y": 0.436920166015625
}
],
"Pose": {
"Pitch": 8.636483192443848,
"Roll": -5.8078813552856445,
"Yaw": -3.338975429534912
},
"Quality": {
"Brightness": 82.37736511230469,
"Sharpness": 83.14741516113281
}
}
]
}
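One thing worth double-checking here: in the DetectFaces request, Attributes is a top-level parameter alongside Image, not a field inside it. In the request shown above it is nested inside Image, where it would be ignored, which would explain why only the default attributes come back. A minimal boto3 sketch of the documented shape (the file name is a placeholder):
import boto3

rek = boto3.client("rekognition")

with open("face.jpg", "rb") as f:  # hypothetical local image
    img_bytes = f.read()

# Attributes sits alongside Image, not inside it
resp = rek.detect_faces(Image={"Bytes": img_bytes}, Attributes=["ALL"])
for detail in resp["FaceDetails"]:
    print(detail["AgeRange"], detail["Beard"], detail["Eyeglasses"])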

Amazon Rekognition for Video - getFaceSearch: index number

I'm new to using Amazon Rekognition to analyze faces in a video.
I’m using startFaceSearch to start my analysis. After the job is completed successfully, I’m using the JobId generated to call getFaceSearch.
On the first video I analyzed, the results were as expected. But when I analyzed a second example, some strange behavior occurred that I can't understand.
Viewing the JSON generated as the results for my second video, completely different faces are identified with the same index number.
Please see the results below.
{
"Timestamp": 35960,
"Person": {
"Index": 11,
"BoundingBox": {
"Width": 0.09375,
"Height": 0.24583333730698,
"Left": 0.1875,
"Top": 0.375
},
"Face": {
"BoundingBox": {
"Width": 0.06993006914854,
"Height": 0.10256410390139,
"Left": 0.24475525319576,
"Top": 0.375
},
"Landmarks": [
{
"Type": "eyeLeft",
"X": 0.26899611949921,
"Y": 0.40649232268333
},
{
"Type": "eyeRight",
"X": 0.28330621123314,
"Y": 0.41610333323479
},
{
"Type": "nose",
"X": 0.27063181996346,
"Y": 0.43293061852455
},
{
"Type": "mouthLeft",
"X": 0.25983560085297,
"Y": 0.44362303614616
},
{
"Type": "mouthRight",
"X": 0.27296212315559,
"Y": 0.44758656620979
}
],
"Pose": {
"Roll": 22.106262207031,
"Yaw": 6.3516845703125,
"Pitch": -6.2676968574524
},
"Quality": {
"Brightness": 41.875026702881,
"Sharpness": 65.948883056641
},
"Confidence": 90.114051818848
}
}
}
{
"Timestamp": 46520,
"Person": {
"Index": 11,
"BoundingBox": {
"Width": 0.19034090638161,
"Height": 0.42083331942558,
"Left": 0.30681818723679,
"Top": 0.17916665971279
},
"Face": {
"BoundingBox": {
"Width": 0.076486013829708,
"Height": 0.11217948794365,
"Left": 0.38680067658424,
"Top": 0.26923078298569
},
"Landmarks": [
{
"Type": "eyeLeft",
"X": 0.40642243623734,
"Y": 0.32347011566162
},
{
"Type": "eyeRight",
"X": 0.43237379193306,
"Y": 0.32369664311409
},
{
"Type": "nose",
"X": 0.42121160030365,
"Y": 0.34618207812309
},
{
"Type": "mouthLeft",
"X": 0.41044121980667,
"Y": 0.36520344018936
},
{
"Type": "mouthRight",
"X": 0.43202903866768,
"Y": 0.36483728885651
}
],
"Pose": {
"Roll": 0.3165397644043,
"Yaw": 2.038902759552,
"Pitch": -1.9931464195251
},
"Quality": {
"Brightness": 54.697460174561,
"Sharpness": 53.806159973145
},
"Confidence": 95.216400146484
}
}
}
In fact, in this video, all faces have the same index number, regardless of whether they are different. Any suggestions?
The PersonDetail object is the result of the API. "Index" is the identifier for the person detected in the video, so the index doesn't span across videos; it is just an internal reference.
The link below details Index:
https://docs.aws.amazon.com/rekognition/latest/dg/API_PersonDetail.html
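For reference, a small boto3 sketch of that flow, reading Person.Index from the paginated GetFaceSearch results (my own sketch; bucket, video name, and collection ID are placeholders):
import boto3

rek = boto3.client("rekognition")

job = rek.start_face_search(
    Video={"S3Object": {"Bucket": "my-bucket", "Name": "video.mp4"}},
    CollectionId="MainCollection",
)

# ... wait until the job completes, e.g. via the SNS notification ...

persons, token = [], None
while True:
    kwargs = {"JobId": job["JobId"], "SortBy": "INDEX"}
    if token:
        kwargs["NextToken"] = token
    resp = rek.get_face_search(**kwargs)
    persons.extend(resp["Persons"])
    token = resp.get("NextToken")
    if not token:
        break

# Index only distinguishes people within this one video
for p in persons:
    print(p["Timestamp"], p["Person"]["Index"])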

Error !!! CloudFormation template validation

I'm designing a Redis cluster using a CloudFormation template, and during validation of the template I'm facing this error: "Template contains errors.: Template format error: JSON not well-formed. (line 151, column 2)"
Below is the CloudFormation script:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Metadata": {
"AWS::CloudFormation::Designer": {
"f60e2d2e-b46b-48b1-88c8-eecce45d2166": {
"size": {
"width": 60,
"height": 60
},
"position": {
"x": 320,
"y": 70
},
"z": 2,
"parent": "71508a33-8207-4580-8721-c3688c4a0353",
"embeds": [],
"ismemberof": [
"a63aacbd-1c6e-4118-8bbe-08a5bc63052a",
"55eb37aa-e764-49ac-b8fe-3eddb2ea77ad"
]
},
"a63aacbd-1c6e-4118-8bbe-08a5bc63052a": {
"size": {
"width": 60,
"height": 60
},
"position": {
"x": 320,
"y": 160
},
"z": 2,
"parent": "71508a33-8207-4580-8721-c3688c4a0353",
"embeds": []
},
"0291abc8-9c50-491b-8400-e1f7f8b22118": {
"source": {
"id": "f60e2d2e-b46b-48b1-88c8-eecce45d2166"
},
"target": {
"id": "a63aacbd-1c6e-4118-8bbe-08a5bc63052a"
},
"z": 1
},
"55eb37aa-e764-49ac-b8fe-3eddb2ea77ad": {
"size": {
"width": 60,
"height": 60
},
"position": {
"x": 440,
"y": 70
},
"z": 2,
"parent": "71508a33-8207-4580-8721-c3688c4a0353",
"embeds": []
},
"7aa270dd-1131-4dc4-8913-dfaf44a3815d": {
"source": {
"id": "f60e2d2e-b46b-48b1-88c8-eecce45d2166"
},
"target": {
"id": "55eb37aa-e764-49ac-b8fe-3eddb2ea77ad"
},
"z": 2
},
"71508a33-8207-4580-8721-c3688c4a0353": {
"size": {
"width": 610,
"height": 600
},
"position": {
"x": 20,
"y": 10
},
"z": 1,
"embeds": [
"55eb37aa-e764-49ac-b8fe-3eddb2ea77ad",
"a63aacbd-1c6e-4118-8bbe-08a5bc63052a",
"f60e2d2e-b46b-48b1-88c8-eecce45d2166"
]
}
}
},
"Parameters" : {
"CacheNodeType" : {
"Description" : "The compute and memory capacity of the nodes in the Cache Cluster",
"Type" : "String",
"Default" : "cache.m3.medium",
"AllowedValues" : ["cache.t2.micro", "cache.t2.small", "cache.t2.medium",
"cache.m3.medium", "cache.m3.large", "cache.m3.xlarge", "cache.m3.2xlarge",
"cache.t1.micro", "cache.m1.small", "cache.m1.medium", "cache.m1.large",
"cache.m1.xlarge", "cache.c1.xlarge", "cache.r3.large", "cache.r3.xlarge",
"cache.r3.2xlarge", "cache.r3.4xlarge","cache.r3.8xlarge", "cache.m2.xlarge",
"cache.m2.2xlarge", "cache.m2.4xlarge"],
"ConstraintDescription" : "must select a valid Cache Node type."
}
},
"Resources": {
"RedisClusterReplicationGroup": {
"Type": "AWS::ElastiCache::ReplicationGroup",
"Properties": {
"CacheParameterGroupName": {
"Ref": "RedisClusterParameterGroup"
},
"CacheSubnetGroupName": {
"Ref": "RedisClusterSubnetGroup"
},
"CacheNodeType" : { "Ref" : "CacheNodeType" },
"Engine" : "redis",
"EngineVersion" : "2.8.24",
"NumCacheClusters" : 4,
"Port" : 6879,
"PreferredCacheClusterAZs" : ["us-east-1c","us-east-1d","us-east-1e"],
"ReplicationGroupDescription" : "RedisClusterReplicationGroup",
"SecurityGroupIds" : "sg-7ea72e07",
"SnapshotRetentionLimit" : 0,
"AutomaticFailoverEnabled" : true,
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "f60e2d2e-b46b-48b1-88c8-eecce45d2166"
}
}
},
"RedisClusterParameterGroup": {
"Type": "AWS::ElastiCache::ParameterGroup",
"Properties": {
"CacheParameterGroupFamily" : "redis2.8",
"CacheParameterGroupName" : "RedisClusterParameterGroup",
"Description" :"RedisClusterParameterGroup"
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "a63aacbd-1c6e-4118-8bbe-08a5bc63052a"
}
}
},
"RedisClusterSubnetGroup": {
"Type": "AWS::ElastiCache::SubnetGroup",
"Properties": {
"Description" : "RedisClusterSubnetGroups",
"SubnetIds" : ["subnet-7854ab20", "subnet-eaa7039c", "subnet-988a00a5"]
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "71508a33-8207-4580-8721-c3688c4a0353"
}
}
}
},
}
One way to avoid this whole set of JSON errors is to switch to YAML syntax, which is supported by CloudFormation. You can convert your JSON document to YAML at
https://www.json2yaml.com/
and then just use that. I find YAML much easier to maintain without the quotes, braces, and commas.
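If you prefer converting locally instead of pasting the template into a website, a quick sketch with PyYAML (assumed installed) does the same thing; note the JSON has to parse first, so the syntax error must be fixed before converting:
import json

import yaml  # PyYAML: pip install pyyaml

with open("template.json") as f:   # placeholder file name
    template = json.load(f)        # fails here if the JSON is still broken

with open("template.yaml", "w") as f:
    yaml.safe_dump(template, f, default_flow_style=False, sort_keys=False)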
{
"AWSTemplateFormatVersion": "2010-09-09",
"Metadata": {
"AWS::CloudFormation::Designer": {
"f60e2d2e-b46b-48b1-88c8-eecce45d2166": {
"size": {
"width": 60,
"height": 60
},
"position": {
"x": 320,
"y": 70
},
"z": 2,
"parent": "71508a33-8207-4580-8721-c3688c4a0353",
"embeds": [],
"ismemberof": [
"a63aacbd-1c6e-4118-8bbe-08a5bc63052a",
"55eb37aa-e764-49ac-b8fe-3eddb2ea77ad"
]
},
"a63aacbd-1c6e-4118-8bbe-08a5bc63052a": {
"size": {
"width": 60,
"height": 60
},
"position": {
"x": 320,
"y": 160
},
"z": 2,
"parent": "71508a33-8207-4580-8721-c3688c4a0353",
"embeds": []
},
"0291abc8-9c50-491b-8400-e1f7f8b22118": {
"source": {
"id": "f60e2d2e-b46b-48b1-88c8-eecce45d2166"
},
"target": {
"id": "a63aacbd-1c6e-4118-8bbe-08a5bc63052a"
},
"z": 1
},
"55eb37aa-e764-49ac-b8fe-3eddb2ea77ad": {
"size": {
"width": 60,
"height": 60
},
"position": {
"x": 440,
"y": 70
},
"z": 2,
"parent": "71508a33-8207-4580-8721-c3688c4a0353",
"embeds": []
},
"7aa270dd-1131-4dc4-8913-dfaf44a3815d": {
"source": {
"id": "f60e2d2e-b46b-48b1-88c8-eecce45d2166"
},
"target": {
"id": "55eb37aa-e764-49ac-b8fe-3eddb2ea77ad"
},
"z": 2
},
"71508a33-8207-4580-8721-c3688c4a0353": {
"size": {
"width": 610,
"height": 600
},
"position": {
"x": 20,
"y": 10
},
"z": 1,
"embeds": [
"55eb37aa-e764-49ac-b8fe-3eddb2ea77ad",
"a63aacbd-1c6e-4118-8bbe-08a5bc63052a",
"f60e2d2e-b46b-48b1-88c8-eecce45d2166"
]
}
}
},
"Parameters": {
"CacheNodeType": {
"Description": "The compute and memory capacity of the nodes in the Cache Cluster",
"Type": "String",
"Default": "cache.m3.medium",
"AllowedValues": [
"cache.t2.micro",
"cache.t2.small",
"cache.t2.medium",
"cache.m3.medium",
"cache.m3.large",
"cache.m3.xlarge",
"cache.m3.2xlarge",
"cache.t1.micro",
"cache.m1.small",
"cache.m1.medium",
"cache.m1.large",
"cache.m1.xlarge",
"cache.c1.xlarge",
"cache.r3.large",
"cache.r3.xlarge",
"cache.r3.2xlarge",
"cache.r3.4xlarge",
"cache.r3.8xlarge",
"cache.m2.xlarge",
"cache.m2.2xlarge",
"cache.m2.4xlarge"
],
"ConstraintDescription": "must select a valid Cache Node type."
}
},
"Resources": {
"RedisClusterReplicationGroup": {
"Type": "AWS::ElastiCache::ReplicationGroup",
"Properties": {
"CacheParameterGroupName": {
"Ref": "RedisClusterParameterGroup"
},
"CacheSubnetGroupName": {
"Ref": "RedisClusterSubnetGroup"
},
"CacheNodeType": {
"Ref": "CacheNodeType"
},
"Engine": "redis",
"EngineVersion": "2.8.24",
"NumCacheClusters": 4,
"Port": 6879,
"PreferredCacheClusterAZs": [
"us-east-1c",
"us-east-1d",
"us-east-1e"
],
"ReplicationGroupDescription": "RedisClusterReplicationGroup",
"SecurityGroupIds": "sg-7ea72e07",
"SnapshotRetentionLimit": 0,
"AutomaticFailoverEnabled": true,
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "f60e2d2e-b46b-48b1-88c8-eecce45d2166"
}
}
}
},
"RedisClusterParameterGroup": {
"Type": "AWS::ElastiCache::ParameterGroup",
"Properties": {
"CacheParameterGroupFamily": "redis2.8",
"CacheParameterGroupName": "RedisClusterParameterGroup",
"Description": "RedisClusterParameterGroup"
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "a63aacbd-1c6e-4118-8bbe-08a5bc63052a"
}
}
},
"RedisClusterSubnetGroup": {
"Type": "AWS::ElastiCache::SubnetGroup",
"Properties": {
"Description": "RedisClusterSubnetGroups",
"SubnetIds": [
"subnet-7854ab20",
"subnet-eaa7039c",
"subnet-988a00a5"
]
},
"Metadata": {
"AWS::CloudFormation::Designer": {
"id": "71508a33-8207-4580-8721-c3688c4a0353"
}
}
}
}
}
Any JSON parser will tell you what the issue is: the last element must not have a trailing ',', and the JSON needed one more '}' to validate properly. I haven't checked whether the script passes CloudFormation validation, but it passes JSON parsing.
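For example, Python's built-in json module reports the same kind of line/column position that CloudFormation complains about (the template file name is a placeholder):
import json

try:
    with open("template.json") as f:
        json.load(f)
    print("JSON is well-formed")
except json.JSONDecodeError as err:
    # Points at the offending position, much like the CloudFormation error
    print(f"Parse error at line {err.lineno}, column {err.colno}: {err.msg}")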