I am trying to connect AppSync to an Aurora Serverless data source, but the AWS console shows an error when I try to create the data source:
My AppSync API is in ap-southeast-1 (Singapore) and my Aurora Serverless database is in the same region. According to the AWS docs, the Data API is available in that region. Here is my CloudFormation template to deploy the DB cluster:
DbCluster:
  Type: AWS::RDS::DBCluster
  DependsOn: DbSecret
  Properties:
    DatabaseName: !Ref DatabaseName
    DBClusterIdentifier: !Ref DbClusterId
    DeletionProtection: false
    EnableHttpEndpoint: true
    Engine: aurora
    EngineMode: serverless
    EngineVersion: 5.6.10a
    MasterUsername: !Join ['', ['{{resolve:secretsmanager:', !Ref DbSecret, ':SecretString:username}}']]
    MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref DbSecret, ':SecretString:password}}']]
    ScalingConfiguration:
      AutoPause: true
      MinCapacity: 1
      MaxCapacity: 2
      SecondsUntilAutoPause: 300
    StorageEncrypted: true
The CloudFormation template deploys fine and, as you can see, EnableHttpEndpoint is set to true, which means the Data API is enabled. I have also confirmed it is enabled by going into the AWS console and attempting to modify the database:
I have searched the internet for clues but could not find anything. I'm not sure if this is a bug or if I am doing something wrong. How do I get past this error and create my data source?
After creating a support case, I found that the Data API is available in the region; it is AppSync that is not integrated with it there. In other words, the Data API is available, but AppSync can't use it in that region.
As an alternative, I am planning to use AppSync Lambda resolvers that call the Data API, since I need the database to be in ap-southeast-1.
If you do not require your database to be in an unsupported region, you can keep AppSync in ap-southeast-1 while placing your database in a supported region (us-east-1 will most likely work).
I'm trying to deploy an AWS API Gateway using GitLab CI/CD pipelines. Since I have different environments, I want the StageName property of the API to be set dynamically, i.e., StageName should be "dev" when deployed to the dev stack and "prod" when deployed to the prod stack. Below is the YAML I used.
Mappings:
  StackEnv:
    dev-stack:
      envm: "dev"
    qa-stack:
      envm: "qa"
Resources:
  TestingAPI:
    Type: AWS::Serverless::Api
    Properties:
      StageName: [!FindInMap [StackEnv, !Ref AWS::StackName, envm]]
It looks fine, but when I try to deploy the API I get the error below.
[InvalidResourceException('TestingAPI', "Type of property 'StageName' is invalid.")]
I can't see why it should fail. If this approach won't work, how can I pass the respective environment values to the StageName property without breaking the pipeline?
You are wrapping the result of the !FindInMap function in [], making it a list. Remove the brackets:
StageName: !FindInMap [StackEnv, !Ref AWS::StackName, envm]
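For reference, a minimal corrected template (stack names and stage values are illustrative) looks like:

```yaml
Mappings:
  StackEnv:
    dev-stack:
      envm: "dev"
    qa-stack:
      envm: "qa"
Resources:
  TestingAPI:
    Type: AWS::Serverless::Api
    Properties:
      # FindInMap returns a scalar, which is what StageName expects
      StageName: !FindInMap [StackEnv, !Ref AWS::StackName, envm]
```

Note that the map is keyed by the literal stack name, so every stack you deploy must have a matching key under StackEnv.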
So far I have a Glue Crawler defined in my CloudFormation template as:
Type: AWS::Glue::Crawler
Properties:
  Name: CrawlerName
  DatabaseName: DBName
  Targets:
    DynamoDBTargets:
      - Path: DynamoDBTableName
How can I turn on the "enable sampling" option that is available in the console UI? I do not see it in the AWS CloudFormation documentation.
I haven't tried this myself, but this might work, based on how the AWS Glue API is structured:
Type: AWS::Glue::Crawler
Properties:
  Name: CrawlerName
  DatabaseName: DBName
  Targets:
    DynamoDBTargets:
      - Path: DynamoDBTableName
      - ScanAll: False
Given that the default is to enable sampling, you shouldn't have to add anything to the CFN template. I did try ScanAll: True to disable sampling, but the property doesn't seem to be supported:
Property validation failure: [Encountered unsupported properties in {/Targets/DynamoDBTargets/1}: [ScanAll]]
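For what it's worth, the error points at a second list item (/DynamoDBTargets/1) because the stray dash before ScanAll starts a new target instead of adding a property to the existing one. Newer CloudFormation schemas do document ScanAll on the DynamoDB target, so (untested here) the properly nested form would be:

```yaml
Type: AWS::Glue::Crawler
Properties:
  Name: CrawlerName
  DatabaseName: DBName
  Targets:
    DynamoDBTargets:
      - Path: DynamoDBTableName
        ScanAll: false   # same list item as Path, not a new one
```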
I want to connect my RDS database table to my Lambda function. For this, I created a Lambda function using knex.js and a Postgres database in RDS. I can get the knex object, but no query works.
To give some more information about the services:
The RDS database server's security group allows access from anywhere.
I have specified the vpc in the serverless.yml file for the function.
The regions of the Lambda and the RDS are different; I am not sure whether that is the problem.
My serverless function
Note: this knex code works when I run it separately.
module.exports.storeTransaction = async (event) => {
  ...
  knex('Transactions')
    .select('*')
    .then(response => {
      console.log('response is ');
      console.log(response);
    })
  ...
};
Serverless.yml file
service: <service-name>
provider:
  name: aws
  runtime: nodejs8.10
  stage: dev
  region: us-east-1
package:
  exclude:
    - node_modules/**
plugins:
  - serverless-plugin-include-dependencies
functions:
  storeEmail:
    handler: handler.storeTransaction
    vpc:
      securityGroupIds:
        - <security-group-id-of-rds>
      subnetIds:
        - <subnet-id-of-rds>
        - <subnet-id-of-rds>
        ...
      region:
        - us-east-1a
    events:
      - http:
          path: email/store
          method: post
          cors: true
Can you identify why I can't connect my RDS DB to the Lambda function, and let me know what I did wrong or what is missing?
I think the problem is that RDS and Lambda are in different regions, which means they are also in different VPCs, as a VPC cannot span multiple regions. You could, however, enable inter-VPC peering (https://aws.amazon.com/vpc/faqs/#Peering_Connections).
Consider that when you deploy a Lambda function into a VPC, it won't have internet access unless you attach a NAT gateway to that VPC/subnet.
If the RDS instance is open to the world (and does it really need to be??), you can try deploying in the same region (without a VPC) and verify whether that works.
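One more thing worth checking, independent of the regions: the vpc block in the question has a region key containing an availability zone, which is not a valid Serverless Framework function property. A cleaned-up sketch of the function section (IDs are placeholders) would be:

```yaml
functions:
  storeEmail:
    handler: handler.storeTransaction
    vpc:
      securityGroupIds:
        - sg-0123456789abcdef0   # group the RDS security group allows
      subnetIds:
        - subnet-aaaa1111        # subnets in the same VPC as RDS
        - subnet-bbbb2222
    events:
      - http:
          path: email/store
          method: post
          cors: true
```

The region belongs under provider.region (e.g. us-east-1; us-east-1a is an availability zone, not a region).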
I created my RDS database and Lambda function using CloudFormation/AWS SAM. I currently pass in my DB connection info via environment variables, but I am unsure if that's the recommended way, since I can see the password in clear text in the AWS dashboard.
TestApiFunction:
  Type: AWS::Serverless::Function
  DependsOn: DB
  Properties:
    Handler: src/test.handler
    FunctionName: Test
    VpcConfig:
      SecurityGroupIds:
        - !Ref DataTierSecurityGroup
      SubnetIds:
        - !Ref PrivateSubnet1
        - !Ref PrivateSubnet2
        - !Ref PrivateSubnet3
    Environment:
      Variables:
        'DB_HOST': !GetAtt DB.Endpoint.Address
        'DB_USER': !Ref DBUser
        'DB_PASSWORD': !Ref DBPassword
You can use IAM database authentication to use an IAM role instead of a username and password to connect to your database, if you're using MySQL or MySQL-compatible Aurora.
You would just need to turn on IAM database authentication on the RDS instance, create the role with rds-db:connect permission, and attach the role to the Lambda function. This article goes into more detailed instructions for setting this up.
Unfortunately, it doesn't look like you can enable IAM database authentication from CloudFormation, so if that is a no-go or if you're not using a compatible database engine, you can also look into AWS Secrets Manager. You would need to create an IAM role that can access your Secrets Manager secrets and attach that role to your Lambda function. One benefit of this approach is that AWS provides secrets rotation out-of-the-box for you for RDS usernames/passwords.
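As a sketch of the Secrets Manager route (logical names are illustrative, untested): pass only the secret's ARN to the function and let the code call GetSecretValue at runtime, so no plaintext ever lands in the Lambda configuration:

```yaml
TestApiFunction:
  Type: AWS::Serverless::Function
  Properties:
    Handler: src/test.handler
    Policies:
      - Statement:
          - Effect: Allow
            Action: secretsmanager:GetSecretValue
            Resource: !Ref DBSecret    # assumes an AWS::SecretsManager::Secret in the template
    Environment:
      Variables:
        DB_SECRET_ARN: !Ref DBSecret   # ARN only; fetch the value at runtime
```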
So don't pass the credentials directly via CloudFormation. Make your Lambda fetch the credentials at runtime by reading them from DynamoDB, Parameter Store (SSM), or AWS Secrets Manager.
Parameter Store shows **** for SecureString passwords.
With DynamoDB you can restrict reads to certain users.
Secrets Manager is another service that lets you store secrets as key:value pairs.
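A sketch of the Parameter Store variant (the parameter name is a placeholder, untested): grant the function ssm:GetParameter and pass just the parameter name, reading the value at runtime:

```yaml
Policies:
  - Statement:
      - Effect: Allow
        Action: ssm:GetParameter
        Resource: !Sub arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/myapp/db-password
Environment:
  Variables:
    DB_PASSWORD_PARAM: /myapp/db-password   # read with GetParameter (WithDecryption) at runtime
```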
I'm trying to figure out how to automate the creation of several cloud resources in AWS, using CloudFormation.
Now I need to include the creation of an SES (Simple Email Service) domain, but I couldn't find any documentation, even though I've already checked:
Simple Email Service Documentation
CloudFormation Resource Types Documentation
Does AWS support SES in CloudFormation?
CloudFormation provides several built-in Amazon SES resource types, but as of 2022 it is still missing the ones many people need: domain and email verification.
Fortunately, CloudFormation has the ability to define your own custom resource types. I've built Custom::SES_Domain and Custom::SES_EmailIdentity resources that are designed to play well with other CloudFormation resources. Get them here: https://github.com/medmunds/aws-cfn-ses-domain.
Once you've pulled the custom CfnSESResources into your template, you can verify an SES domain like this:
Resources:
  # Provision a domain with Amazon SES:
  MySESDomain:
    Type: Custom::SES_Domain
    Properties:
      ServiceToken: !GetAtt CfnSESResources.Outputs.CustomDomainIdentityArn
      Domain: "example.com"
      EnableSend: true
      EnableReceive: false
  # Then add all required DNS records for SES verification and usage:
  MyRoute53RecordsForSES:
    Type: AWS::Route53::RecordSetGroup
    Properties:
      HostedZoneName: "example.com."
      RecordSets: !GetAtt MySESDomain.Route53RecordSets
Full instructions are in the repository. Custom::SES_Domain has properties for controlling several common SES domain options, and exposes attributes that feed into your CloudFormation DNS resources: either a standard AWS::Route53::RecordSetGroup resource as shown above, or other (external) DNS providers via zone file entries.
Unfortunately this is currently not supported, but who knows, re:Invent 2017 is around the corner…
Question asked on AWS Developer Forum
It is possible by creating a custom resource backed by a Lambda function; there are blog posts about SES and CloudFormation.
Though SES is not currently supported in AWS CloudFormation, you can use the AWS SDKs (e.g., the Node SDK) to provision the SES resources required.
It's common practice to use custom code with the AWS SDKs and AWS CLI commands in combination with CloudFormation to provision AWS resources, since each approach has its advantages depending on the parameters, the number of resources, repetition, etc.
Here is the current list of SES Resource Types supported by CloudFormation:
AWS::SES::ConfigurationSet
AWS::SES::ConfigurationSetEventDestination
AWS::SES::ReceiptFilter
AWS::SES::ReceiptRule
AWS::SES::ReceiptRuleSet
AWS::SES::Template
Update October 2022
CloudFormation now supports the AWS::SES::EmailIdentity resource, which allows us to define both domains and email addresses through infrastructure as code.
According to the CloudFormation release history this resource was added on June 30, 2022.
CloudFormation now provides a native AWS::SES::EmailIdentity resource (since June 30, 2022).
Here is an example with automated Route 53 DKIM setup/verification:
EmailIdentity:
  Type: AWS::SES::EmailIdentity
  Properties:
    EmailIdentity: {your.domain.com}
Route53DKIM:
  Type: AWS::Route53::RecordSetGroup
  Properties:
    HostedZoneId: {ZoneId}
    RecordSets:
      - Name: !GetAtt EmailIdentity.DkimDNSTokenName1
        Type: CNAME
        TTL: '3600'
        ResourceRecords:
          - !GetAtt EmailIdentity.DkimDNSTokenValue1
      - Name: !GetAtt EmailIdentity.DkimDNSTokenName2
        Type: CNAME
        TTL: '3600'
        ResourceRecords:
          - !GetAtt EmailIdentity.DkimDNSTokenValue2
      - Name: !GetAtt EmailIdentity.DkimDNSTokenName3
        Type: CNAME
        TTL: '3600'
        ResourceRecords:
          - !GetAtt EmailIdentity.DkimDNSTokenValue3
{your.domain.com} and {ZoneId} must be replaced with your own values.
Not supported. But you can have it handled by a Lambda function.
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: >-
  A simple email example
Resources:
  FunctionEmailHandler:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: email.handler
      Runtime: nodejs6.10
      CodeUri: ..
      Description: >-
        ...
      Tags:
        App: your app
      MemorySize: 128
      Timeout: 10
      Policies:
        - Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - 's3:GetObject'
              Resource: '*'
  LambdaInvokePermission:
    Type: "AWS::Lambda::Permission"
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !GetAtt FunctionEmailHandler.Arn
      Principal: ses.amazonaws.com
  SESEmailReceivedRule:
    Type: "AWS::SES::ReceiptRule"
    Properties:
      RuleSetName: your default rule set name
      After: store-email-to-s3
      Rule:
        Name: email-received-rule
        Enabled: true
        Actions:
          - LambdaAction:
              FunctionArn: !GetAtt FunctionEmailHandler.Arn
              InvocationType: Event