API HTTP Gateway lambda integration 'null' in Resource Path - amazon-web-services

I am setting up an HTTP API Gateway (V2) with Lambda integrations via CloudFormation, and everything has been working so far. I have two working integrations, but my third integration is not working: everything looks fine from the API Gateway side (it lists the correct route with a link to the Lambda), but the API endpoint shown on the Lambda is "https://c59boisn2k.execute-api.eu-central-1.amazonaws.com/productionnull". When I try to call the route, it says "Not Found". The odd thing is that I am using the same template for all three integrations.
I was thinking it could be a DependsOn issue, but I think I have all the correct dependencies. I tried re-creating the stack from scratch, and now two of the three functions show "null" in their URL while the API Gateway still lists the correct routes. Can this be a DependsOn problem?
Here's my template for a single integration:
{
"Resources": {
"api": {
"Type": "AWS::ApiGatewayV2::Api",
"Properties": {
"Name": { "Ref": "AWS::StackName" },
"ProtocolType": "HTTP",
"CorsConfiguration": {
"AllowMethods": ["*"],
"AllowOrigins": ["*"]
}
}
},
"stage": {
"Type": "AWS::ApiGatewayV2::Stage",
"Properties": {
"Description": { "Ref": "AWS::StackName" },
"StageName": "production",
"AutoDeploy": true,
"ApiId": { "Ref": "api" },
"AccessLogSettings": {
"DestinationArn": {
"Fn::GetAtt": ["stageLogGroup", "Arn"]
}
}
}
},
"getSignedS3LambdaRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": {
"Fn::Sub": "${AWS::StackName}-getSignedS3"
},
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": ["lambda.amazonaws.com"]
},
"Action": ["sts:AssumeRole"]
}
]
},
"Policies": [
{
"PolicyName": "root",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": "arn:aws:logs:*:*:*",
"Action": "logs:*"
},
{
"Effect": "Allow",
"Action": ["s3:*"],
"Resource": ["arn:aws:s3:::euromomo.eu/uploads/*"]
}
]
}
}
]
}
},
"getSignedS3Lambda": {
"Type": "AWS::Lambda::Function",
"DependsOn": ["getSignedS3LambdaRole"],
"Properties": {
"FunctionName": {
"Fn::Sub": "${AWS::StackName}-getSignedS3"
},
"Code": {
"S3Bucket": { "Ref": "operationsS3Bucket" },
"S3Key": { "Ref": "getSignedS3S3Key" }
},
"Runtime": "nodejs10.x",
"Handler": "index.handler",
"Role": { "Fn::GetAtt": ["getSignedS3LambdaRole", "Arn"] }
}
},
"getSignedS3Permission": {
"Type": "AWS::Lambda::Permission",
"DependsOn": ["api", "getSignedS3Lambda"],
"Properties": {
"Action": "lambda:InvokeFunction",
"FunctionName": { "Ref": "getSignedS3Lambda" },
"Principal": "apigateway.amazonaws.com",
"SourceArn": {
"Fn::Sub": "arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/*"
}
}
},
"getSignedS3Integration": {
"Type": "AWS::ApiGatewayV2::Integration",
"DependsOn": ["getSignedS3Permission"],
"Properties": {
"ApiId": { "Ref": "api" },
"IntegrationType": "AWS_PROXY",
"IntegrationUri": {
"Fn::Sub": "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${getSignedS3Lambda.Arn}/invocations"
},
"PayloadFormatVersion": "2.0"
}
},
"getSignedS3Route": {
"Type": "AWS::ApiGatewayV2::Route",
"DependsOn": ["getSignedS3Integration"],
"Properties": {
"ApiId": { "Ref": "api" },
"RouteKey": "POST /getSignedS3",
"AuthorizationType": "NONE",
"Target": { "Fn::Sub": "integrations/${getSignedS3Integration}" }
}
}
}
}

After spending hours debugging this, I found that the problem was in my Lambda permission: I needed to use the correct path in the permission.
This does not work:
arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/*
This does work:
arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/*/getSignedS3
I believe I could scope it even more to this:
arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/POST/getSignedS3
This fixed all my problems, and the correct path now shows in the Lambda web console.
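For reference, this is roughly how the corrected permission resource looks in the template above (a minimal sketch using the working SourceArn; only the last path segment changed):

"getSignedS3Permission": {
  "Type": "AWS::Lambda::Permission",
  "DependsOn": ["api", "getSignedS3Lambda"],
  "Properties": {
    "Action": "lambda:InvokeFunction",
    "FunctionName": { "Ref": "getSignedS3Lambda" },
    "Principal": "apigateway.amazonaws.com",
    "SourceArn": {
      "Fn::Sub": "arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/*/getSignedS3"
    }
  }
}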

Related

AWS Firehose DynamicPartitioningConfiguration/Prefix simply won't work

I've been slamming my head against the wall for the past two days trying to get dynamic partitioning to work in the delivery stream I'm creating with a Serverless Application Model template. I've tried multiple configurations, but it seems that as soon as I fill in a prefix, enable dynamic partitioning, and add a metadata extraction processor to get a field I can use as the prefix, my stack stops working. I can generate alerts in AWS and see the logs from the Lambda processor I also added, but no files reach the bucket. I also can't seem to configure CloudWatch logs for the stream: I've tried creating them through the script and manually, and neither shows logs pertaining to any error.
I'll leave my script here; hopefully I'm just doing something wrong:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Transform": "AWS::Serverless-2016-10-31",
"Description": "An AWS Serverless Application.",
"Parameters": {
"BucketArn": {
"Type": "String"
}
},
"Resources": {
"DeliveryPolicy": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "K-POLICY",
"Roles": [
{
"Ref": "DeliveryRole"
}
],
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
{
"Ref": "BucketArn"
}
]
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
"lambda:GetFunctionConfiguration"
],
"Resource": [
{
"Fn::GetAtt": [
"Processor",
"Arn"
]
}
]
}
]
}
}
},
"DeliveryRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"firehose.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
}
}
},
"DeliveryStream": {
"Type": "AWS::KinesisFirehose::DeliveryStream",
"Properties": {
"DeliveryStreamName": "K-STREAM",
"DeliveryStreamType": "DirectPut",
"ExtendedS3DestinationConfiguration": {
"BucketARN": {
"Ref": "BucketArn"
},
"Prefix": "!{partitionKeyFromQuery:symb}",
"ErrorOutputPrefix": "errors/!{firehose:random-string}/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}",
"RoleARN": {
"Fn::GetAtt": [
"DeliveryRole",
"Arn"
]
},
"DynamicPartitioningConfiguration": {
"Enabled": true
},
"ProcessingConfiguration": {
"Enabled": true,
"Processors": [
{
"Type": "MetadataExtraction",
"Parameters": [
{
"ParameterName": "MetadataExtractionQuery",
"ParameterValue": "{symb: .TICKER_SYMBOL}"
},
{
"ParameterName": "JsonParsingEngine",
"ParameterValue": "JQ-1.6"
}
]
},
{
"Type": "Lambda",
"Parameters": [
{
"ParameterName": "LambdaArn",
"ParameterValue": {
"Fn::GetAtt": [
"Processor",
"Arn"
]
}
}
]
}
]
}
}
}
},
"Processor": {
"Type": "AWS::Serverless::Function",
"Properties": {
"Handler": "K-PROCESSOR::KRATOS_PROCESSOR.Functions::FunctionHandler",
"FunctionName": "K-PROCESSOR",
"Runtime": "dotnetcore3.1",
"CodeUri": "./staging/app/3cs-int-k",
"MemorySize": 256,
"Timeout": 60,
"Role": null,
"Policies": [
"AWSLambdaBasicExecutionRole"
]
}
}
}
}

Need help on CloudFormation template and AWS lambda for pulling events from SQS to S3 via lambda

I am new to AWS CloudFormation, and I am trying to capture events from an SQS queue and place them in an S3 bucket via AWS Lambda. The flow of events is
SNS --> SQS <-- Lambda --> S3 bucket.
I am trying to achieve the above flow using a CloudFormation template. I am getting the below error message after deploying my CloudFormation template. Any help you can provide would be greatly appreciated. Thank you.
11:51:56 2022-01-13 17:51:47,930 - INFO - ...
11:52:53 2022-01-13 17:52:48,511 - ERROR - Stack myDemoApp shows a rollback status ROLLBACK_IN_PROGRESS.
11:52:53 2022-01-13 17:52:48,674 - INFO - The following root cause failure event was found in the myDemoApp stack for resource EventStreamLambda:
11:52:53 2022-01-13 17:52:48,674 - INFO - Resource handler returned message: "Error occurred while GetObject. S3 Error Code: NoSuchKey.
S3 Error Message: The specified key does not exist. (Service: Lambda, Status Code: 400, Request
ID: 5f2f9882-a863-4a58-90bd-7e0d0dfdf4d5, Extended Request ID: null)" (RequestToken: 0a95acb4-a677-0a2d-d0bc-8b7487a858ad, HandlerErrorCode: InvalidRequest)
11:52:53 2022-01-13 17:52:48,674 - INFO - ..
My Lambda function is:
import json
import logging
import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s: %(levelname)s: %(message)s')


def lambda_handler(event, context):
    logger.info(f"lambda_handler -- event: {json.dumps(event)}")
    # write the SQS message body to the deployment bucket as data.json
    s3 = boto3.client("s3")
    event_message = json.loads(event["Records"][0]["body"])
    s3.put_object(Bucket="S3DeployBucket", Key="data.json",
                  Body=json.dumps(event_message))
My complete CloudFormation template is:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "myDemoApp Resource Stack",
"Mappings": {
},
"Parameters": {
"S3DeployBucket": {
"Default": "myDemoApp-deploy-bucket",
"Description": "Bucket for deployment configs and artifacts for myDemoApp",
"Type": "String"
},
"EnvName": {
"Description": "Platform environment name for myDemoApp",
"Type": "String"
},
"AuditRecordKeyArn": {
"Description": "ARN for audit record key encryption for myDemoApp",
"Type": "String"
},
"ParentVPCStack": {
"Description": "The name of the stack containing the parent VPC for myDemoApp",
"Type": "String"
},
"StackVersion": {
"Description": "The version of this stack of myDemoApp",
"Type": "String"
},
"EventLogFolderName": {
"Type": "String",
"Description": "folder name for the logs for the event stream of myDemoApp",
"Default": "event_log_stream"
},
"EventLogPartitionKeys": {
"Type": "String",
"Description": "The partition keys that audit logs will write to S3. Use Hive-style naming conventions for automatic Athena/Glue comprehension.",
"Default": "year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}"
},
"AppEventSNSTopicArn": {
"Description": "Events SNS Topic of myDemoApp",
"Type": "String"
},
"ReportingEventsRetentionDays": {
"Default": "2192",
"Description": "The number of days to retain a record used for reporting.",
"Type": "String"
}
},
"Resources": {
"AppEventSQSQueue": {
"Type": "AWS::SQS::Queue"
},
"AppEventSnsSubscription": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"TopicArn": {
"Ref": "AppEventSNSTopicArn"
},
"Endpoint": {
"Fn::GetAtt": [
"AppEventSQSQueue",
"Arn"
]
},
"Protocol": "sqs"
}
},
"S3DeployBucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"UpdateReplacePolicy": "Retain",
"Properties": {
"BucketEncryption": {
"ServerSideEncryptionConfiguration": [
{
"ServerSideEncryptionByDefault": {
"KMSMasterKeyID": {
"Ref": "AuditRecordKeyArn"
},
"SSEAlgorithm": "aws:kms"
}
}
]
},
"VersioningConfiguration": {
"Status": "Enabled"
},
"LifecycleConfiguration": {
"Rules": [
{
"ExpirationInDays": {
"Ref": "ReportingEventsRetentionDays"
},
"Status": "Enabled"
}
]
}
}
},
"EventStreamLogGroup": {
"Type": "AWS::Logs::LogGroup"
},
"EventLogStream": {
"Type": "AWS::Logs::LogStream",
"Properties": {
"LogGroupName": {
"Ref": "EventStreamLogGroup"
}
}
},
"EventStreamSubscriptionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "sns.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
},
"Policies": [
{
"PolicyName": "SNSSQSAccessPolicy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": {
"Action": [
"sqs:*"
],
"Effect": "Allow",
"Resource": "*"
}
}
}
]
}
},
"EventDeliveryRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "sqs.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": {
"Ref": "AWS::AccountId"
}
}
}
}
]
}
}
},
"EventSqsQueuePolicy": {
"Type": "AWS::SQS::QueuePolicy",
"Properties": {
"PolicyDocument": {
"Version": "2012-10-17",
"Id": "SqsQueuePolicy",
"Statement": [
{
"Sid": "Allow-SNS-SendMessage",
"Effect": "Allow",
"Principal": "*",
"Action": [
"sqs:SendMessage",
"sqs:ReceiveMessage"
],
"Resource": {
"Fn::GetAtt": [
"EventStreamLambda",
"Arn"
]
},
"Condition": {
"ArnEquals": {
"aws:SourceArn": {
"Ref": "EventSNSTopicArn"
}
}
}
}
]
},
"Queues": [
{
"Ref": "EventSNSTopicArn"
}
]
}
},
"EventDeliveryPolicy": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "sqs_delivery_policy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
{
"Fn::GetAtt": [
"S3DeployBucket",
"Arn"
]
},
{
"Fn::Join": [
"",
[
{
"Fn::GetAtt": [
"S3DeployBucket",
"Arn"
]
},
"/*"
]
]
}
]
},
{
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": {
"Fn::Sub": "arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:${EventStreamLogGroup}:log-stream:${EventLogStreamLogStream}"
}
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
{
"Ref": "AuditRecordKeyArn"
}
],
"Condition": {
"StringEquals": {
"kms:ViaService": {
"Fn::Join": [
"",
[
"s3.",
{
"Ref": "AWS::Region"
},
".amazonaws.com"
]
]
}
},
"StringLike": {
"kms:EncryptionContext:aws:s3:arn": {
"Fn::Join": [
"",
[
{
"Fn::GetAtt": [
"S3DeployBucket",
"Arn"
]
},
"/*"
]
]
}
}
}
}
]
},
"Roles": [
{
"Ref": "EventDeliveryRole"
}
]
}
},
"EventStreamLambda": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Handler": "lambda_function.lambda_handler",
"MemorySize": 128,
"Runtime": "python3.8",
"Timeout": 30,
"FunctionName": "sqs_s3_pipeline_job",
"Role": {
"Fn::GetAtt": [
"SQSLambdaExecutionRole",
"Arn"
]
},
"Code": {
"S3Bucket": {
"Ref": "S3DeployBucket"
},
"S3Key": {
"Ref": "S3DeployBucket"
}
},
"TracingConfig": {
"Mode": "Active"
}
}
},
"SQSLambdaExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Policies": [
{
"PolicyName": "StreamLambdaLogs",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
}
]
}
},
{
"PolicyName": "SQSLambdaPolicy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sqs:ReceiveMessage",
"sqs:DeleteMessage",
"sqs:GetQueueAttributes",
"sqs:ChangeMessageVisibility"
],
"Resource":"*"
}
]
}
}
]
}
}
},
"Outputs": {
"VpcSubnet3ExportKey": {
"Value": {
"Fn::Sub": "${ParentVPCStack}-privateSubnet3"
}
}
}
}
SubscriptionRoleArn is only for Kinesis; per the documentation, "This property applies only to Amazon Kinesis Data Firehose delivery stream subscriptions."
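For an SQS endpoint the subscription can stay minimal, with no SubscriptionRoleArn at all. A sketch, reusing the names from the template above (RawMessageDelivery is optional and only an assumption here):

"AppEventSnsSubscription": {
  "Type": "AWS::SNS::Subscription",
  "Properties": {
    "TopicArn": { "Ref": "AppEventSNSTopicArn" },
    "Endpoint": { "Fn::GetAtt": ["AppEventSQSQueue", "Arn"] },
    "Protocol": "sqs",
    "RawMessageDelivery": true
  }
}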

Lambda RDS Proxy connection from different VPC

I have two AWS accounts with VPCs connected with peer connections. I have RDS Proxy on account 1 and Lambda in the private, isolated subnet on account 2.
I cannot figure out how to connect to RDS Proxy. I was trying all possible VPC endpoints and interfaces and whatnot.
The only way I managed to connect was through NAT Gateway, but it's expensive. And to be honest, weird, if I want to keep a private network.
Is it possible to have a PrivateLink or something?
How should I connect to RDS Proxy from Lambda?
I have attached the CF template (I hope I cleaned it from all sensitive data 😑):
{
"Resources": {
"vpcA2121C38": {
"Type": "AWS::EC2::VPC",
"Properties": {
"CidrBlock": "10.0.0.0/16",
"EnableDnsHostnames": true,
"EnableDnsSupport": true,
"InstanceTenancy": "default"
},
"Metadata": {
"aws:cdk:path": "Mws/vpc/Resource"
}
},
"vpcPrivateSubnet1Subnet934893E8": {
"Type": "AWS::EC2::Subnet",
"Properties": {
"CidrBlock": "10.0.0.0/26",
"VpcId": {
"Ref": "vpcA2121C38"
},
"AvailabilityZone": {
"Fn::Select": [
0,
{
"Fn::GetAZs": ""
}
]
},
"MapPublicIpOnLaunch": false
},
"Metadata": {
"aws:cdk:path": "Mws/vpc/PrivateSubnet1/Subnet"
}
},
"vpcPrivateSubnet1RouteTableB41A48CC": {
"Type": "AWS::EC2::RouteTable",
"Properties": {
"VpcId": {
"Ref": "vpcA2121C38"
}
},
"Metadata": {
"aws:cdk:path": "Mws/vpc/PrivateSubnet1/RouteTable"
}
},
"vpcPrivateSubnet1RouteTableAssociation67945127": {
"Type": "AWS::EC2::SubnetRouteTableAssociation",
"Properties": {
"RouteTableId": {
"Ref": "vpcPrivateSubnet1RouteTableB41A48CC"
},
"SubnetId": {
"Ref": "vpcPrivateSubnet1Subnet934893E8"
}
},
"Metadata": {
"aws:cdk:path": "Mws/vpc/PrivateSubnet1/RouteTableAssociation"
}
},
"vpcS3CB758969": {
"Type": "AWS::EC2::VPCEndpoint",
"Properties": {
"ServiceName": {
"Fn::Join": [
"",
[
"com.amazonaws.",
{
"Ref": "AWS::Region"
},
".s3"
]
]
},
"VpcId": {
"Ref": "vpcA2121C38"
},
"RouteTableIds": [
{
"Ref": "vpcPrivateSubnet1RouteTableB41A48CC"
}
],
"VpcEndpointType": "Gateway"
},
"Metadata": {
"aws:cdk:path": "Mws/vpc/S3/Resource"
}
},
"vpcPeertomainaccount833D3E2C": {
"Type": "AWS::EC2::VPCPeeringConnection",
"Properties": {
"PeerVpcId": "vpc-f04b939b",
"VpcId": {
"Ref": "vpcA2121C38"
},
"PeerOwnerId": "XXXX__ACCOINT_1__XXXXXX",
"PeerRoleArn": "arn:aws:iam::XXXX__ACCOINT_1__XXXXXX:role/VPCPeerConnection"
},
"Metadata": {
"aws:cdk:path": "Mws/vpc/Peer to main account"
}
},
"producerServiceRoleEBCB54D0": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
}
}
],
"Version": "2012-10-17"
},
"ManagedPolicyArns": [
{
"Fn::Join": [
"",
[
"arn:",
{
"Ref": "AWS::Partition"
},
":iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
]
]
},
{
"Fn::Join": [
"",
[
"arn:",
{
"Ref": "AWS::Partition"
},
":iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
]
]
}
]
},
"Metadata": {
"aws:cdk:path": "Mws/producer/producer/ServiceRole/Resource"
}
},
"producerServiceRoleDefaultPolicyEA5B80A1": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyDocument": {
"Statement": [
{
"Action": [
"xray:PutTraceSegments",
"xray:PutTelemetryRecords"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Action": [
"s3:GetObject*",
"s3:GetBucket*",
"s3:List*"
],
"Effect": "Allow",
"Resource": [
{
"Fn::Join": [
"",
[
"arn:",
{
"Ref": "AWS::Partition"
},
":s3:::",
{
"Fn::Sub": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}"
}
]
]
},
{
"Fn::Join": [
"",
[
"arn:",
{
"Ref": "AWS::Partition"
},
":s3:::",
{
"Fn::Sub": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}"
},
"/*"
]
]
}
]
}
],
"Version": "2012-10-17"
},
"PolicyName": "producerServiceRoleDefaultPolicyEA5B80A1",
"Roles": [
{
"Ref": "producerServiceRoleEBCB54D0"
}
]
},
"Metadata": {
"aws:cdk:path": "Mws/producer/producer/ServiceRole/DefaultPolicy/Resource"
}
},
"producerSecurityGroup9AA1BE28": {
"Type": "AWS::EC2::SecurityGroup",
"Properties": {
"GroupDescription": "Automatic security group for Lambda Function Mwsproducer416E938A",
"SecurityGroupEgress": [
{
"CidrIp": "0.0.0.0/0",
"Description": "Allow all outbound traffic by default",
"IpProtocol": "-1"
}
],
"VpcId": {
"Ref": "vpcA2121C38"
}
},
"Metadata": {
"aws:cdk:path": "Mws/producer/producer/SecurityGroup/Resource"
}
},
"producerAD962441": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": {
"Fn::Sub": "cdk-hnb659fds-assets-${AWS::AccountId}-${AWS::Region}"
},
"S3Key": "dc05df325f7034421a587e7ee47aed301c7472d001152e686053dd9d5c45c164.zip"
},
"Role": {
"Fn::GetAtt": [
"producerServiceRoleEBCB54D0",
"Arn"
]
},
"Description": "Produces MWS customer orders messages",
"Environment": {
"Variables": {
"DB_HOST": "mws-rds-proxy.proxy-XXXXXXXXX.eu-central-1.rds.amazonaws.com",
"DB_NAME": "dev"
}
},
"Handler": "lambda/mws_producer.php",
"Layers": [
"arn:aws:lambda:eu-central-1:209497400698:layer:php-80:18"
],
"MemorySize": 1024,
"Runtime": "provided.al2",
"Timeout": 900,
"TracingConfig": {
"Mode": "Active"
},
"VpcConfig": {
"SecurityGroupIds": [
{
"Fn::GetAtt": [
"producerSecurityGroup9AA1BE28",
"GroupId"
]
}
],
"SubnetIds": [
{
"Ref": "vpcPrivateSubnet1Subnet934893E8"
}
]
}
},
"DependsOn": [
"producerServiceRoleDefaultPolicyEA5B80A1",
"producerServiceRoleEBCB54D0"
]
}
}
}
The devil, as we know, is hidden in the details. In this case, it's the route tables.
When we create a peering connection between VPCs, we also need to tell our subnets how to use it. Basically, just add the peering connection ID (pcx-XXX) as the target of a route for the peered network in each subnet's route table.
Took me several days to realize it. Happy happy happy!
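In CloudFormation terms that's a route like the following in each private subnet's route table (a minimal sketch; the logical names come from the template above, and the peer VPC CIDR 172.31.0.0/16 is an assumption):

"routeToPeerVpc": {
  "Type": "AWS::EC2::Route",
  "Properties": {
    "RouteTableId": { "Ref": "vpcPrivateSubnet1RouteTableB41A48CC" },
    "DestinationCidrBlock": "172.31.0.0/16",
    "VpcPeeringConnectionId": { "Ref": "vpcPeertomainaccount833D3E2C" }
  }
}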

AWS Deploying environment and create environments for dev and prod

Greetings all,
I'm looking for a way to deploy my application which contains:
API Gateway
DynamoDB
Lambda Functions
An S3 bucket
I looked at CloudFormation and CodeDeploy, but I'm unsure how to proceed without EC2.
All the information I find is for EC2; I haven't found anything about deploying the app above.
The goal is to have a deployment script that deploys the app to an environment automatically with AWS technology (basically duplicating my environment).
Any help would greatly be appreciated.
EDIT: I need to be able to export from one AWS account and then import into another AWS account.
Cheers!
In order to deploy your CloudFormation stack into a "different" environment, you have to parameterize your CloudFormation stack name and resource names. (You don't have to parameterize the AWS::Serverless::Function in this example, because CloudFormation automatically generates a function name if none is specified, but for most other resources it's necessary.)
Example CloudFormation template cfn.yml using the Serverless Application Model (SAM):
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Deploys a simple AWS Lambda using different environments.
Parameters:
Env:
Type: String
Description: The environment you're deploying to.
Resources:
ServerlessFunction:
Type: AWS::Serverless::Function
Properties:
Handler: index.handler
Runtime: nodejs12.x
CodeUri: ./
Policies:
- AWSLambdaBasicExecutionRole
MyBucket:
Type: AWS::S3::Bucket
Properties:
BucketName: !Sub 'my-bucket-name-${Env}'
You can add further resources like a DynamoDB table. The API Gateway is automatically created if you're using SAM and provide an Events section in your AWS::Serverless::Function resource. See also this SAM example code from the serverless-app-examples repository.
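A minimal sketch of such an Events section (in JSON, like the later examples on this page; the path and method here are assumptions):

"ServerlessFunction": {
  "Type": "AWS::Serverless::Function",
  "Properties": {
    "Handler": "index.handler",
    "Runtime": "nodejs12.x",
    "CodeUri": "./",
    "Policies": ["AWSLambdaBasicExecutionRole"],
    "Events": {
      "ApiGet": {
        "Type": "Api",
        "Properties": {
          "Path": "/items",
          "Method": "get"
        }
      }
    }
  }
}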
Example deploy.sh script:
#!/usr/bin/env bash
LAMBDA_BUCKET="Your-S3-Bucket-Name"
# change this ENV variable depending on the environment you want to deploy
ENV="prd"
STACK_NAME="aws-lambda-cf-environments-${ENV}"
# now package the CloudFormation template which automatically uploads the Lambda function artifacts to S3 -> generated a "packaged" CloudFormation template cfn.packaged.yml
aws cloudformation package --template-file cfn.yml --s3-bucket ${LAMBDA_BUCKET} --output-template-file cfn.packaged.yml
# ... and deploy the packaged CloudFormation template
aws cloudformation deploy --template-file cfn.packaged.yml --stack-name ${STACK_NAME} --capabilities CAPABILITY_IAM --parameter-overrides Env=${ENV}
See the full example code here. Just deploy the script using ./deploy.sh and change the ENV variable.
JSON-based examples follow.
Lambda function AWS::Lambda::Function
This example creates a Lambda function and an IAM role attached to it.
Language: Node.js.
"LambdaRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Policies": [
{
"PolicyName": "LambdaSnsNotification",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowSnsActions",
"Effect": "Allow",
"Action": [
"sns:Publish",
"sns:Subscribe",
"sns:Unsubscribe",
"sns:DeleteTopic",
"sns:CreateTopic"
],
"Resource": "*"
}
]
}
}
]
}
},
"LambdaFunctionMessageSNSTopic": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Description": "Send message to a specific topic that will deliver MSG to a receiver.",
"Handler": "index.handler",
"MemorySize": 128,
"Role": {
"Fn::GetAtt": [
"LambdaRole",
"Arn"
]
},
"Runtime": "nodejs6.10",
"Timeout": 60,
"Environment": {
"Variables": {
"sns_topic_arn": ""
}
},
"Code": {
"ZipFile": {
"Fn::Join": [
"\n",
[
"var AWS = require('aws-sdk');",
"};"
]
]
}
}
}
}
API Gateway AWS::ApiGateway::RestApi
This example creates a Role, RestApi, UsagePlan, API keys, and the permission to invoke the Lambda from a request method.
"MSGGatewayRestApi": {
"Type": "AWS::ApiGateway::RestApi",
"Properties": {
"Name": "MSG RestApi",
"Description": "API used for sending MSG",
"FailOnWarnings": true
}
},
"MSGGatewayRestApiUsagePlan": {
"Type": "AWS::ApiGateway::UsagePlan",
"Properties": {
"ApiStages": [
{
"ApiId": {
"Ref": "MSGGatewayRestApi"
},
"Stage": {
"Ref": "MSGGatewayRestApiStage"
}
}
],
"Description": "Usage plan for stage v1",
"Quota": {
"Limit": 5000,
"Period": "MONTH"
},
"Throttle": {
"BurstLimit": 200,
"RateLimit": 100
},
"UsagePlanName": "Usage_plan_for_stage_v1"
}
},
"RestApiUsagePlanKey": {
"Type": "AWS::ApiGateway::UsagePlanKey",
"Properties": {
"KeyId": {
"Ref": "MSGApiKey"
},
"KeyType": "API_KEY",
"UsagePlanId": {
"Ref": "MSGGatewayRestApiUsagePlan"
}
}
},
"MSGApiKey": {
"Type": "AWS::ApiGateway::ApiKey",
"Properties": {
"Name": "MSGApiKey",
"Description": "CloudFormation API Key v1",
"Enabled": "true",
"StageKeys": [
{
"RestApiId": {
"Ref": "MSGGatewayRestApi"
},
"StageName": {
"Ref": "MSGGatewayRestApiStage"
}
}
]
}
},
"MSGGatewayRestApiStage": {
"DependsOn": [
"ApiGatewayAccount"
],
"Type": "AWS::ApiGateway::Stage",
"Properties": {
"DeploymentId": {
"Ref": "RestAPIDeployment"
},
"MethodSettings": [
{
"DataTraceEnabled": true,
"HttpMethod": "*",
"LoggingLevel": "INFO",
"ResourcePath": "/*"
}
],
"RestApiId": {
"Ref": "MSGGatewayRestApi"
},
"StageName": "v1"
}
},
"ApiGatewayCloudWatchLogsRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"apigateway.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Policies": [
{
"PolicyName": "ApiGatewayLogsPolicy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:GetLogEvents",
"logs:FilterLogEvents"
],
"Resource": "*"
}
]
}
}
]
}
},
"ApiGatewayAccount": {
"Type": "AWS::ApiGateway::Account",
"Properties": {
"CloudWatchRoleArn": {
"Fn::GetAtt": [
"ApiGatewayCloudWatchLogsRole",
"Arn"
]
}
}
},
"RestAPIDeployment": {
"Type": "AWS::ApiGateway::Deployment",
"DependsOn": [
"MSGGatewayRequest"
],
"Properties": {
"RestApiId": {
"Ref": "MSGGatewayRestApi"
},
"StageName": "DummyStage"
}
},
"ApiGatewayMSGResource": {
"Type": "AWS::ApiGateway::Resource",
"Properties": {
"RestApiId": {
"Ref": "MSGGatewayRestApi"
},
"ParentId": {
"Fn::GetAtt": [
"MSGGatewayRestApi",
"RootResourceId"
]
},
"PathPart": "delivermessage"
}
},
"MSGGatewayRequest": {
"DependsOn": "LambdaPermission",
"Type": "AWS::ApiGateway::Method",
"Properties": {
"ApiKeyRequired": true,
"AuthorizationType": "NONE",
"HttpMethod": "POST",
"Integration": {
"Type": "AWS",
"IntegrationHttpMethod": "POST",
"Uri": {
"Fn::Join": [
"",
[
"arn:aws:apigateway:",
{
"Ref": "AWS::Region"
},
":lambda:path/2015-03-31/functions/",
{
"Fn::GetAtt": [
"LambdaFunctionMessageSNSTopic",
"Arn"
]
},
"/invocations"
]
]
},
"IntegrationResponses": [
{
"StatusCode": 200
},
{
"SelectionPattern": "500.*",
"StatusCode": 500
},
{
"SelectionPattern": "412.*",
"StatusCode": 412
}
],
"RequestTemplates": {
"application/json": ""
}
},
"RequestParameters": {
},
"ResourceId": {
"Ref": "ApiGatewayMSGResource"
},
"RestApiId": {
"Ref": "MSGGatewayRestApi"
},
"MethodResponses": [
{
"StatusCode": 200
},
{
"StatusCode": 500
},
{
"StatusCode": 412
}
]
}
},
"LambdaPermission": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"Action": "lambda:invokeFunction",
"FunctionName": {
"Fn::GetAtt": [
"LambdaFunctionMessageSNSTopic",
"Arn"
]
},
"Principal": "apigateway.amazonaws.com",
"SourceArn": {
"Fn::Join": [
"",
[
"arn:aws:execute-api:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":",
{
"Ref": "MSGGatewayRestApi"
},
"/*"
]
]
}
}
}
DynamoDB AWS::DynamoDB::Table
This example creates a DynamoDB table MyCrossConfig and an alarm for it.
"TableMyCrossConfig": {
"Type": "AWS::DynamoDB::Table",
"Properties": {
"TableName": "MyCrossConfig",
"AttributeDefinitions": [
{
"AttributeName": "id",
"AttributeType": "S"
}
],
"KeySchema": [
{
"AttributeName": "id",
"KeyType": "HASH"
}
],
"ProvisionedThroughput": {
"ReadCapacityUnits": "5",
"WriteCapacityUnits": "5"
}
}
},
"alarmTargetTrackingtableMyCrossConfigProvisionedCapacityLowdfcae8d90ee2487a8e59c7bc0f9f6bd9": {
"Type": "AWS::CloudWatch::Alarm",
"Properties": {
"ActionsEnabled": "true",
"AlarmDescription": {
"Fn::Join": [
"",
[
"DO NOT EDIT OR DELETE. For TargetTrackingScaling policy arn:aws:autoscaling:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":scalingPolicy:7558858e-b58c-455c-be34-6de387a0c6d1:resource/dynamodb/table/MyCrossConfig:policyName/DynamoDBReadCapacityUtilization:table/MyCrossConfig."
]
]
},
"ComparisonOperator": "LessThanThreshold",
"EvaluationPeriods": "3",
"MetricName": "ProvisionedReadCapacityUnits",
"Namespace": "AWS/DynamoDB",
"Period": "300",
"Statistic": "Average",
"Threshold": "5.0",
"AlarmActions": [
{
"Fn::Join": [
"",
[
"arn:aws:autoscaling:",
{
"Ref": "AWS::Region"
},
":",
{
"Ref": "AWS::AccountId"
},
":scalingPolicy:7558858e-b58c-455c-be34-6de387a0c6d1:resource/dynamodb/table/MyCrossConfig:policyName/DynamoDBReadCapacityUtilization:table/MyCrossConfig"
]
]
}
],
"Dimensions": [
{
"Name": "TableName",
"Value": "MyCrossConfig"
}
]
}
}
s3 bucket AWS::S3::Bucket
This example creates a bucket with the name configbucket- plus the AWS::AccountId.
"ConfigBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": {
"Fn::Join": [
"",
[
"configbucket-",
{
"Ref": "AWS::AccountId"
}
]
]
}
},
"DeletionPolicy": "Delete"
}
Now you need to put it all together, make the references in the template, etc.
Hope it helps!
My guess would be that you could use CloudFormation for such an app, but I'm not very familiar with it.
What I have had success with is writing small scripts that leverage the awscli utility to accomplish this. Additionally, you'll need a strategy for how you set up a new environment.
Typically, what I have done is to use a different suffix on DynamoDB tables and S3 buckets to represent different environments. Lambda + API Gateway have the idea of different versions baked in, so you can support different environments there as well.
For really small projects, I have even set up my Dynamo schema to support many environments within a single table. This is nice for pet projects or small projects because it's cheaper.
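The same suffix idea expressed in CloudFormation looks roughly like this (a sketch assuming an Env parameter as in the earlier template; the table name and key are made up):

"ItemsTable": {
  "Type": "AWS::DynamoDB::Table",
  "Properties": {
    "TableName": { "Fn::Sub": "items-${Env}" },
    "AttributeDefinitions": [
      { "AttributeName": "id", "AttributeType": "S" }
    ],
    "KeySchema": [
      { "AttributeName": "id", "KeyType": "HASH" }
    ],
    "BillingMode": "PAY_PER_REQUEST"
  }
}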
I built my own SDK for deployments; it's in the making...
https://github.com/LucLaverdure/aws-sdk
You will need to use the following shell scripts within the containers:
export.sh
import.sh
Requirements:
AWS CLI
Python
pip
npm
jq

Cloud Formation: S3 linked to Lambda gives The ARN is not well formed

I'm trying to use CloudFormation to deploy an S3 bucket that invokes a Lambda function on ObjectCreated events.
Here are my resources:
"ExampleFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Handler": "index.lambda_handler",
"Code": {
"S3Bucket": "bucketname",
"S3Key": "something.zip"
},
"Runtime": "python3.6",
"Role": {
"Fn::GetAtt": [
"LambdaExecutionRole",
"Arn"
]
}
}
},
"InputDataBucket": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": "input-data",
"NotificationConfiguration": {
"LambdaConfigurations": [
{
"Function": {
"Ref": "ExampleFunction"
},
"Event": "s3:ObjectCreated:*",
"Filter": {
"S3Key": {
"Rules": [
{
"Name": "suffix",
"Value": "zip"
}
]
}
}
}
]
}
}
},
"LambdaInvokePermission": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"Action": "lambda:InvokeFunction",
"FunctionName": {
"Fn::GetAtt": [
"ExampleFunction",
"Arn"
]
},
"Principal": "s3.amazonaws.com",
"SourceAccount": {
"Ref": "AWS::AccountId"
},
"SourceArn": {
"Fn::Join": [
":",
[
"arn",
"aws",
"s3",
"",
"",
{
"Ref": "InputDataBucket"
}
]
]
}
}
}
I've tried to follow the documentation for the NotificationConfiguration, which says that there can be a circular dependency. However, if I follow its instructions I get the same error. Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket-notificationconfig.html
When I try to create the stack, the S3 bucket always breaks it with the error "The ARN is not well formed".
I've tried many things, but I always receive this same error.
I can get this to work as long as I know the S3 bucket name in advance (mybucketname below). If you don't know the bucket name in advance, then you can enhance this to request the bucket name as a stack parameter and it should still work. If you need the bucket name to be auto-generated (so you can't predict the name in advance) then this will not work and you'll have to go the create/update route.
Key thing here is to manually create the S3 bucket ARN from the known bucket name, rather than relying on "Ref": "InputDataBucket" to get the bucket name for you.
Also worth reading this support article.
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "stackoverflow-48037497",
"Resources" : {
"ExampleFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Handler": "index.lambda_handler",
"Code": {
"S3Bucket": "bucketname",
"S3Key": "something.zip"
},
"Runtime": "python3.6",
"Role": {
"Fn::GetAtt": [
"LambdaExecutionRole",
"Arn"
]
}
}
},
"LambdaInvokePermission": {
"Type": "AWS::Lambda::Permission",
"DependsOn": [ "ExampleFunction" ],
"Properties": {
"Action": "lambda:InvokeFunction",
"FunctionName": {
"Fn::GetAtt": [
"ExampleFunction",
"Arn"
]
},
"Principal": "s3.amazonaws.com",
"SourceAccount": {
"Ref": "AWS::AccountId"
},
"SourceArn": "arn:aws:s3:::mybucketname"
}
},
"InputDataBucket": {
"Type": "AWS::S3::Bucket",
"DependsOn": [ "ExampleFunction", "LambdaInvokePermission" ],
"Properties": {
"BucketName": "mybucketname",
"NotificationConfiguration": {
"LambdaConfigurations": [
{
"Function": { "Fn::GetAtt" : [ "ExampleFunction", "Arn" ] },
"Event": "s3:ObjectCreated:*"
}
]
}
}
}
}
}