Custom geolocation search resolver using aws-amplify api - amazon-web-services

I'm trying to add a custom geolocation search resolver that targets an Elasticsearch domain using the aws-amplify API (based on the documentation).
My custom stack JSON is:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "An auto-generated nested stack.",
  "Metadata": {},
  "Parameters": {
    "AppSyncApiId": {
      "Type": "String",
      "Description": "The id of the AppSync API associated with this project."
    },
    "AppSyncApiName": {
      "Type": "String",
      "Description": "The name of the AppSync API",
      "Default": "AppSyncSimpleTransform"
    },
    "env": {
      "Type": "String",
      "Description": "The environment name. e.g. Dev, Test, or Production",
      "Default": "NONE"
    },
    "S3DeploymentBucket": {
      "Type": "String",
      "Description": "The S3 bucket containing all deployment assets for the project."
    },
    "S3DeploymentRootKey": {
      "Type": "String",
      "Description": "An S3 key relative to the S3DeploymentBucket that points to the root\nof the deployment directory."
    }
  },
  "Resources": {
    "QueryNearbyUsers": {
      "Type": "AWS::AppSync::Resolver",
      "Properties": {
        "ApiId": {
          "Ref": "AppSyncApiId"
        },
        "DataSourceName": "ElasticsearchDomain",
        "TypeName": "Query",
        "FieldName": "nearbyUsers",
        "RequestMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.nearbyUsers.req.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        },
        "ResponseMappingTemplateS3Location": {
          "Fn::Sub": [
            "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/Query.nearbyUsers.res.vtl",
            {
              "S3DeploymentBucket": {
                "Ref": "S3DeploymentBucket"
              },
              "S3DeploymentRootKey": {
                "Ref": "S3DeploymentRootKey"
              }
            }
          ]
        }
      }
    }
  },
  "Conditions": {},
  "Outputs": {}
}
But it gives me this error:
Resource Name: QueryNearbyUsers (AWS::AppSync::Resolver)
Event Type: create
Reason: No data source found named ElasticsearchDomain (Service: AWSAppSync; Status Code: 404; Error Code: NotFoundException; Request ID: 920993d8-46ef-11e9-82c8-e977f5face03)
I tried many different things for DataSourceName, including the domain name from the AWS console and copy-pasting the code from other auto-generated stacks; unfortunately none of them work.
How can I find the DataSourceName value?

It seems there is a typo in their documentation and it should be:
"DataSourceName": "ElasticSearchDomain",
not :
"DataSourceName": "ElasticsearchDomain",
Now it's working fine. So many hours wasted on such a simple typo.
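More generally, instead of guessing, you can ask AppSync for the exact data source names attached to the API; a minimal sketch using the AWS CLI (the API id placeholder is yours to fill in):
# List the data sources attached to an AppSync API.
# The "name" field in the output is the exact, case-sensitive
# value that the resolver's DataSourceName property expects.
aws appsync list-data-sources --api-id <your-appsync-api-id>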

Related

Invalid template resource property "Tags"

For the below CloudFormation template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Some Stack",
  "Parameters": {
    "VpcId": {
      "Type": "AWS::EC2::VPC::Id",
      "Description": "The target VPC Id"
    },
    "SubnetId": {
      "Type": "AWS::EC2::Subnet::Id",
      "Description": "The target subnet Id"
    },
    "KeyName": {
      "Type": "String",
      "Description": "The key pair that is allowed SSH access"
    }
  },
  "Resources": {
    "EcsCluster": {
      "Type": "AWS::ECS::Cluster",
      "Tags": [
        {
          "Key": "Name",
          "Value": { "Fn::Join": ["", [ { "Ref": "AWS::StackName" }, "-ecs-cluster" ] ] }
        },
        {
          "Key": "workenv",
          "Value": "dev"
        },
        {
          "Key": "abc",
          "Value": "some_value"
        },
        {
          "Key": "function",
          "Value": "someapp"
        },
        {
          "Key": "owner",
          "Value": "email#abc.com"
        }
      ]
    }
  },
  "Outputs": {
    "ElbDomainName": {
      "Description": "Public DNS name of Elastic Load Balancer",
      "Value": {
        "Fn::GetAtt": [
          "ElasticLoadBalancer",
          "DNSName"
        ]
      }
    }
  }
}
Below is the error:
Invalid template resource property 'Tags'
I am following the documentation below to add tags:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-resource-tags.html
Why does the CloudFormation service not accept Tags defined on every resource? Is it something to do with indentation?
It is because the Tags are not defined correctly for the EcsCluster resource. The Tags property should be defined inside the Properties section, just as you defined Tags for your other resources:
"EcsCluster": {
"Type": "AWS::ECS::Cluster",
"Properties": {
"Tags": []
}
}
Hope this helps.
Note that not all resource types support tags; check the documentation for each resource type.

Import s3 bucket from one stack to other stack

I have created an S3 bucket with DeletionPolicy Retain using CloudFormation, and I have exported the created bucket using Export in the template's Outputs.
Now I want to use the same S3 bucket in another stack using import.
CloudFormation template for S3:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Creates an S3 bucket to be used for static content/website hosting.",
  "Parameters": {
    "AssetInsightId": {
      "Description": "Asset Insight ID",
      "Type": "String",
      "Default": "206153"
    },
    "ResourceOwner": {
      "Description": "tr:resource-owner",
      "Type": "String",
      "Default": "####"
    },
    "EnvironmentType": {
      "Description": "tr:environment-type",
      "Default": "preprod",
      "Type": "String",
      "AllowedValues": ["preprod", "prod"],
      "ConstraintDescription": "must specify preprod, prod."
    }
  },
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Retain",
      "Properties": {
        "BucketName": {
          "Fn::Sub": "a${AssetInsightId}-s3bucket-${EnvironmentType}"
        },
        "Tags": [{
          "Key": "tr:application-asset-insight-id",
          "Value": {
            "Fn::Sub": "${AssetInsightId}"
          }
        }, {
          "Key": "tr:environment-type",
          "Value": {
            "Fn::Sub": "${EnvironmentType}"
          }
        }]
      }
    }
  },
  "Outputs": {
    "S3Bucket": {
      "Description": "Information about the value",
      "Description": "Name of the S3 Resource Bucket",
      "Value": "!Ref S3Bucket",
      "Export": {
        "Name": "ExportS3Bucket"
      }
    }
  }
}
Here is the CloudFormation template that uses the created S3 bucket from the first template via import.
Second template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Creates an S3 apigateway to be used for static content/website hosting.",
  "Parameters": {
    "AssetInsightId": {
      "Description": "Asset Insight ID",
      "Type": "String",
      "Default": "206153"
    },
    "ResourceOwner": {
      "Description": "tr:resource-owner",
      "Type": "String",
      "Default": "swathi.koochi#thomsonreuters.com"
    },
    "EnvironmentType": {
      "Description": "tr:environment-type",
      "Default": "preprod",
      "Type": "String",
      "AllowedValues": ["preprod", "prod"],
      "ConstraintDescription": "must specify preprod, prod."
    },
    "endpointConfiguration": {
      "Description": "tr:endpoint-configuration",
      "Default": "REGIONAL",
      "Type": "String",
      "AllowedValues": ["REGIONAL", "EDGE"],
      "ConstraintDescription": "must specify REGIONAL, EDGE."
    }
  },
  "Resources": {
    "S3BucketImport": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": { "Fn::ImportValue": "ExportS3Bucket" }
      }
    },
    "APIGateWayRestResourceRestApi": {
      "Type": "AWS::ApiGateway::RestApi",
      "Properties": {
        "Name": "MyAPI",
        "Description": "API Gateway rest api with cloud formation",
        "EndpointConfiguration": {
          "Types": [{
            "Ref": "endpointConfiguration"
          }]
        }
      }
    },
    "APIGateWayResource": {
      "Type": "AWS::ApiGateway::Resource",
      "Properties": {
        "RestApiId": {
          "Ref": "APIGateWayRestResourceRestApi"
        },
        "ParentId": {
          "Fn::GetAtt": ["APIGateWayRestResourceRestApi", "RootResourceId"]
        },
        "PathPart": "test"
      }
    },
    "APIGatewayPostMethod": {
      "Type": "AWS::ApiGateway::Method",
      "Properties": {
        "AuthorizationType": "NONE",
        "HttpMethod": "POST",
        "Integration": {
          "Type": "AWS_PROXY",
          "IntegrationHttpMethod": "POST",
          "Uri": {
            "Fn::Sub": "arn:aws:apigateway:us-east-2:lambda:path/2015-03-31/functions/arn:aws:lambda:us-east-2:861756181523:function:GreetingLambda/invocations"
          }
        },
        "MethodResponses": [{
          "ResponseModels": {
            "application/json": {
              "Ref": "PostMethodResponse"
            }
          },
          "StatusCode": 200
        }],
        "ResourceId": {
          "Ref": "APIGateWayResource"
        },
        "RestApiId": {
          "Ref": "APIGateWayRestResourceRestApi"
        }
      }
    },
    "PostMethodResponse": {
      "Type": "AWS::ApiGateway::Model",
      "Properties": {
        "ContentType": "application/json",
        "Name": "PostMethodResponse",
        "RestApiId": {
          "Ref": "APIGateWayRestResourceRestApi"
        },
        "Schema": {
          "$schema": "http://json-schema.org/draft-04/schema#",
          "title": "PostMethodResponse",
          "type": "object",
          "properties": {
            "Email": {
              "type": "string"
            }
          }
        }
      }
    },
    "RestApiDeployment": {
      "DependsOn": "APIGatewayPostMethod",
      "Type": "AWS::ApiGateway::Deployment",
      "Properties": {
        "RestApiId": {
          "Ref": "APIGateWayRestResourceRestApi"
        }
      }
    },
    "RestAPIStage": {
      "Type": "AWS::ApiGateway::Stage",
      "Properties": {
        "DeploymentId": {
          "Ref": "RestApiDeployment"
        },
        "MethodSettings": [{
          "DataTraceEnabled": true,
          "HttpMethod": "*",
          "ResourcePath": "/*"
        }],
        "RestApiId": {
          "Ref": "APIGateWayRestResourceRestApi"
        },
        "StageName": "Latest"
      }
    },
    "APIGateWayDomainName": {
      "Type": "AWS::ApiGateway::DomainName",
      "Properties": {
        "CertificateArn": {
          "Ref": "myCertificate"
        },
        "DomainName": {
          "Fn::Join": [".", [{
            "Ref": "AssetInsightId"
          }, {
            "Ref": "EnvironmentType"
          }, "api"]]
        },
        "EndpointConfiguration": {
          "Types": [{
            "Ref": "endpointConfiguration"
          }]
        }
      }
    },
    "myCertificate": {
      "Type": "AWS::CertificateManager::Certificate",
      "Properties": {
        "DomainName": {
          "Fn::Join": [".", [{
            "Ref": "AssetInsightId"
          }, {
            "Ref": "EnvironmentType"
          }, "api"]]
        }
      }
    }
  }
}
When I'm trying to import using Fn::ImportValue, I'm getting an error saying:
S3BucketImport
CREATE_FAILED Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 9387EBE0E472E559; S3 Extended Request ID: o8EbE20IOoUgEMwXc7xVjuoyQT03L/nnQ7AsC94Ff1S/PkE100Imeyclf1BxYeM0avuYjDWILxA=)
As @Jarmod correctly pointed out:
In your first template, export the S3 bucket name using { "Ref": "S3Bucket" }.
In your second template, you don't have to create the bucket again. You can use the exported value from the first template if you want to refer to the bucket name from resources. But I don't see any of the resources in the second template refer to the S3 bucket name.
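For clarity, a sketch of the corrected Outputs section for the first template. The short form !Ref exists only in YAML, so in a JSON template the literal string "!Ref S3Bucket" was exported and then imported as the bucket name, which would explain the 400 Bad Request:
"Outputs": {
  "S3Bucket": {
    "Description": "Name of the S3 Resource Bucket",
    "Value": { "Ref": "S3Bucket" },
    "Export": {
      "Name": "ExportS3Bucket"
    }
  }
}
You can confirm what a stack actually exported with aws cloudformation list-exports.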

Name the resources with the supplied parameters to cloud formation stack

I'm new to CloudFormation and intrinsic functions.
I am trying to run a stack with a template that accepts the environment type as a parameter.
Using this parameter I would like to name my S3 bucket.
But I'm getting a 400 Bad Request.
My template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Creates an S3 bucket using parameters supplied.",
  "Parameters": {
    "AssetInsightId": {
      "Description": "Asset Insight ID",
      "Type": "String",
      "Default": "###"
    },
    "ResourceOwner": {
      "Description": "tr:resource-owner",
      "Type": "String",
      "Default": "###"
    },
    "EnvironmentType": {
      "Description": "tr:environment-type",
      "Default": "PREPROD",
      "Type": "String",
      "AllowedValues": ["PREPROD", "PROD"],
      "ConstraintDescription": "must specify PREPROD, PROD."
    }
  },
  "Resources": {
    "ResourceBucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "BucketName": "!Sub a${AssetInsightId}-S3Bucket-${EnvironmentType}",
        "VersioningConfiguration": {
          "Status": "Enabled"
        },
        "Tags": [
          {
            "Key": "tr:application-asset-insight-id",
            "Value": "${AssetInsightId}"
          },
          {
            "Key": "tr:resource-owner",
            "Value": "${ResourceOwner}"
          },
          {
            "Key": "tr:environment-type",
            "Value": "${EnvironmentType}"
          }
        ]
      }
    }
  },
  "Outputs": {
    "BucketName": {
      "Description": "Name of the S3 Resource Bucket",
      "Value": "!Ref ResourceBucket"
    }
  }
}
Using the AWS CLI to pass dynamic parameters:
aws cloudformation create-stack --stack-name my-stack \
  --template-body file://template.json \
  --parameters ParameterKey=EnvironmentType,ParameterValue=Development
(From the AWS CLI create-stack documentation.)
Also, !Sub is the short-form syntax for Fn::Sub and is only available in YAML templates. In a JSON template you must use the full form:
{ "Fn::Sub" : String }
See the Fn::Sub documentation.
Change
"BucketName": "!Sub a${AssetInsightId}-S3Bucket-${EnvironmentType}",
To
"BucketName": { "Fn::Sub": "a${AssetInsightId}-S3Bucket-${EnvironmentType}"},
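The Tag values in the template have the same problem: a bare "${AssetInsightId}" string is never substituted. A sketch of the corrected resource (note the lowercased name parts, since S3 bucket names may not contain uppercase letters; the PREPROD/PROD parameter values would also need to be lowercase for bucket creation to succeed):
"ResourceBucket": {
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "BucketName": { "Fn::Sub": "a${AssetInsightId}-s3bucket-${EnvironmentType}" },
    "VersioningConfiguration": {
      "Status": "Enabled"
    },
    "Tags": [
      {
        "Key": "tr:application-asset-insight-id",
        "Value": { "Fn::Sub": "${AssetInsightId}" }
      },
      {
        "Key": "tr:resource-owner",
        "Value": { "Fn::Sub": "${ResourceOwner}" }
      },
      {
        "Key": "tr:environment-type",
        "Value": { "Fn::Sub": "${EnvironmentType}" }
      }
    ]
  }
}
The same short-form problem appears in the Outputs section: "Value": "!Ref ResourceBucket" should be "Value": { "Ref": "ResourceBucket" }.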

Cloudformation Property validation failure: Encountered unsupported properties

I'm trying to create a nested stack where the root stack looks like this:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "DynamoDBTable": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "Parameters": {
          "TableName": {
            "Fn::Sub": "${AWS::StackName}"
          }
        },
        "TemplateURL": "https://s3.amazonaws.com/my-templates-bucket/dynamodb.json"
      }
    },
    "S3WebsiteReact": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "Parameters": {
          "BucketName": {
            "Fn::Sub": "${AWS::StackName}-website"
          }
        },
        "TemplateURL": "https://s3.amazonaws.com/my-templates-bucket/s3-static-website-react.json"
      }
    },
    "S3UploadBucket": {
      "Type": "AWS::CloudFormation::Stack",
      "Properties": {
        "Parameters": {
          "BucketName": {
            "Fn::Sub": "${AWS::StackName}-upload"
          }
        },
        "TemplateURL": "https://s3.amazonaws.com/my-templates-bucket/s3-with-cors.json"
      }
    },
    "Cognito": {
      "Type": "AWS::CloudFormation::Stack",
      "DependsOn": "DynamoDBTable",
      "Properties": {
        "Parameters": {
          "CognitoUserPoolName": {
            "Fn::Join": ["",
              {
                "Fn::Split": ["-", {
                  "Ref": "AWS::StackName"
                }]
              }
            ]
          }
        },
        "TemplateURL": "https://s3.amazonaws.com/my-templates-bucket/cognito.json"
      }
    },
    "ApiGateway": {
      "Type": "AWS::CloudFormation::Stack",
      "DependsOn": ["DynamoDBTable", "Cognito"],
      "Properties": {
        "Parameters": {
          "ApiGatewayName": {
            "Fn::Sub": "${AWS::StackName}-api"
          },
          "CognitoUserPoolArn": {
            "Fn::GetAtt": [ "Cognito", "Outputs.UserPoolArn" ]
          },
          "DynamoDBStack": {
            "Fn::GetAtt": [ "DynamoDBTable", "Outputs.DDBStackName" ]
          }
        },
        "TemplateURL": "https://s3.amazonaws.com/my-templates-bucket/api-gateway.json"
      }
    },
    "IdentityPool": {
      "Description": "Cognito Identity Pool. Must be created after User Pool and API Gateway.",
      "Type": "AWS::Cognito::IdentityPool",
      "DependsOn": ["Cognito", "ApiGateway", "S3UploadBucket"],
      "Properties": {
        "Parameters": {
          "AppClientId": {
            "Fn::GetAtt": [ "Cognito", "Outputs.AppClientId" ]
          },
          "UserPoolProviderName": {
            "Fn::GetAtt": [ "Cognito", "Outputs.ProviderName" ]
          },
          "UserPoolName": {
            "Fn::GetAtt": [ "Cognito", "Outputs.UserPoolName" ]
          },
          "UploadBucketName": {
            "Fn::GetAtt": [ "S3UploadBucket", "Outputs.UploadBucketName" ]
          },
          "ApiGatewayId": {
            "Fn::GetAtt": [ "ApiGateway", "Outputs.ApiGatewayId" ]
          }
        },
        "TemplateURL": "https://s3.amazonaws.com/my-templates-bucket/identity-pool.json"
      }
    }
  },
  "Outputs": {}
}
And I get this error:
2019-06-19 14:45:14 UTC-0400 IdentityPool CREATE_FAILED Property validation failure: [Encountered unsupported properties in {/}: [TemplateURL, Parameters]]
It looks like my identity pool stack has some issues with the parameters. But the identity pool stack's parameters look like this:
"Parameters" : {
"AppClientId": {
"Description": "ID of the App Client of the Cognito User Pool passed into this stack.",
"Type": "String"
},
"UserPoolProviderName": {
"Description": "Cognito User Pool Provider name passed into this stack.",
"Type": "String"
},
"UserPoolName": {
"Description": "Cognito User Pool Name passed into this stack.",
"Type": "String"
},
"UploadBucketName": {
"Description": "Name of the bucket that is used to upload files to.",
"Type": "String"
},
"ApiGatewayId": {
"Description": "ID of the API Gateway created for the stack.",
"Type": "String"
}
},
The funny thing is: I tried creating each stack on its own, then passed the outputs from them as parameters to the stacks that needed them, and every single stack was created successfully without any problems.
I've tried to look up what is unsupported but was unable to find any answers.
The error:
[Encountered unsupported properties in {/}: [TemplateURL, Parameters]]
says that those two properties are unsupported. Unlike all the other resources declared in your template, which also use those two properties, this resource is an AWS::Cognito::IdentityPool, while the rest are all of type AWS::CloudFormation::Stack.
Those two properties are only valid on the AWS::CloudFormation::Stack type, hence the validation error.
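A sketch of the fix, assuming identity-pool.json is itself a template that declares the AWS::Cognito::IdentityPool and accepts the parameters listed above: declare the root-stack resource as a nested stack like the others, and let the child template own the identity pool.
"IdentityPool": {
  "Type": "AWS::CloudFormation::Stack",
  "DependsOn": ["Cognito", "ApiGateway", "S3UploadBucket"],
  "Properties": {
    "Parameters": {
      "AppClientId": { "Fn::GetAtt": [ "Cognito", "Outputs.AppClientId" ] },
      "UserPoolProviderName": { "Fn::GetAtt": [ "Cognito", "Outputs.ProviderName" ] },
      "UserPoolName": { "Fn::GetAtt": [ "Cognito", "Outputs.UserPoolName" ] },
      "UploadBucketName": { "Fn::GetAtt": [ "S3UploadBucket", "Outputs.UploadBucketName" ] },
      "ApiGatewayId": { "Fn::GetAtt": [ "ApiGateway", "Outputs.ApiGatewayId" ] }
    },
    "TemplateURL": "https://s3.amazonaws.com/my-templates-bucket/identity-pool.json"
  }
}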

Ansible Cloudformation, how to not break if Resource already exists?

I have the following AWS CloudFormation config, which sets up an S3 bucket and an ECR repository.
When I run it via an Ansible playbook, the second time the playbook runs this happens:
AWS::ECR::Repository Repository CREATE_FAILED: production-app-name already exists
etc
How can I make it so that when this is run multiple times, it keeps the existing S3 bucket and repository instead of just blowing up? (I had assumed the "DeletionPolicy": "Retain" attribute would do this.)
What I'd like to achieve:
If I run this 100 times, I want the same resource state as after run #1. I do not want any resources deleted or wiped of any data.
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Pre-reqs for Elastic Beanstalk application",
  "Parameters": {
    "BucketName": {
      "Type": "String",
      "Description": "S3 Bucket name"
    },
    "RepositoryName": {
      "Type": "String",
      "Description": "ECR Repository name"
    }
  },
  "Resources": {
    "Bucket": {
      "Type": "AWS::S3::Bucket",
      "DeletionPolicy": "Retain",
      "Properties": {
        "BucketName": { "Fn::Join": [ "-", [
          { "Ref": "BucketName" },
          { "Ref": "AWS::Region" }
        ]]}
      }
    },
    "Repository": {
      "Type": "AWS::ECR::Repository",
      "DeletionPolicy": "Retain",
      "Properties": {
        "RepositoryName": { "Ref": "RepositoryName" }
      }
    }
  },
  "Outputs": {
    "S3Bucket": {
      "Description": "Full S3 Bucket name",
      "Value": { "Ref": "Bucket" }
    },
    "Repository": {
      "Description": "ECR Repo",
      "Value": { "Fn::Join": [ "/", [
        {
          "Fn::Join": [ ".", [
            { "Ref": "AWS::AccountId" },
            "dkr",
            "ecr",
            { "Ref": "AWS::Region" },
            "amazonaws.com"
          ]]
        },
        { "Ref": "Repository" }
      ]]}
    }
  }
}
Edit:
A DB template with a similar issue when run twice:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Parameters": {
    "DBPassword": {
      "MinLength": "8",
      "NoEcho": true,
      "Type": "String"
    },
    "Environment": {
      "MinLength": "1",
      "Type": "String"
    },
    "DBName": {
      "Type": "String",
      "Description": "DBName"
    },
    "DBInstanceIdentifier": {
      "Type": "String",
      "Description": "DBInstanceIdentifier"
    },
    "DBPort": {
      "Type": "String",
      "Description": "DBPort"
    },
    "DBUsername": {
      "Type": "String",
      "Description": "DBName"
    }
  },
  "Outputs": {
    "Url": {
      "Value": {
        "Fn::Sub": "postgres://${DBUsername}:${DBPassword}@${Instance.Endpoint.Address}:${Instance.Endpoint.Port}/${DBName}"
      }
    }
  },
  "Resources": {
    "Instance": {
      "Type": "AWS::RDS::DBInstance",
      "DeletionPolicy": "Retain",
      "Properties": {
        "AllocatedStorage": "10",
        "DBInstanceClass": "db.t2.micro",
        "DBInstanceIdentifier": { "Ref": "DBInstanceIdentifier" },
        "DBName": {
          "Ref": "DBName"
        },
        "Engine": "postgres",
        "EngineVersion": "9.6.6",
        "MasterUsername": {
          "Ref": "DBUsername"
        },
        "MasterUserPassword": {
          "Ref": "DBPassword"
        },
        "MultiAZ": "false",
        "Port": {
          "Ref": "DBPort"
        },
        "PubliclyAccessible": "false",
        "StorageType": "gp2"
      }
    }
  }
}
The RepositoryName field in AWS::ECR::Repository is actually not required, and I would advise against specifying one. By letting CloudFormation dynamically assign a unique name to the repository you'll avoid collisions.
If you later want to use the repository name, for example in a task definition, you can use the "Ref" function, like so: { "Ref": "Repository" }, to extract the unique name generated by CloudFormation.
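A sketch of the repository resource with the hardcoded name removed; the template's Outputs can stay as they are, since they already build the registry URL from { "Ref": "Repository" }:
"Repository": {
  "Type": "AWS::ECR::Repository",
  "DeletionPolicy": "Retain"
}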
As for the issue with the RDS instance, it comes down to the same problem of hardcoding resource names.
Using Retain will keep the resource alive, but it will no longer be managed by CloudFormation, which is a big problem.
Just make sure when doing updates never to modify a property that requires a resource "replacement". The documentation always states what kind of update behaviour a property change will incur.
If you really need to change a property that requires a replacement, create a new resource with the adapted properties, migrate whatever data you had in the database or ECR repository, then remove the old resource from the template. If you don't need to migrate anything, make sure you don't have hardcoded names and let CloudFormation perform the replacement.
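Concretely, a sketch of the RDS resource with the hardcoded identifier removed, assuming nothing outside the stack depends on the instance name; CloudFormation will generate a unique identifier, and { "Ref": "Instance" } returns it if you need it elsewhere:
"Instance": {
  "Type": "AWS::RDS::DBInstance",
  "DeletionPolicy": "Retain",
  "Properties": {
    "AllocatedStorage": "10",
    "DBInstanceClass": "db.t2.micro",
    "DBName": { "Ref": "DBName" },
    "Engine": "postgres",
    "EngineVersion": "9.6.6",
    "MasterUsername": { "Ref": "DBUsername" },
    "MasterUserPassword": { "Ref": "DBPassword" },
    "MultiAZ": "false",
    "Port": { "Ref": "DBPort" },
    "PubliclyAccessible": "false",
    "StorageType": "gp2"
  }
}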