How to migrate SNS and SQS using CloudFormation? - amazon-web-services

In my AWS account there are 250+ SNS topics and SQS queues that I need to migrate from one region to another using CloudFormation. Can anyone help with writing a script using YAML?
Resources:
  T1:
    Type: 'AWS::SNS::Topic'
    Properties: {}
  Q1:
    Type: 'AWS::SQS::Queue'
    Properties: {}
  Q1P:
    Type: 'AWS::SQS::QueuePolicy'
    Properties:
      Queues:
        - !Ref Q1
      PolicyDocument:
        Id: AllowIncomingAccess
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Ref AWS::AccountId
            Action:
              - sqs:SendMessage
              - sqs:ReceiveMessage
            Resource:
              - !GetAtt Q1.Arn
          - Effect: Allow
            Principal: '*'
            Action:
              - sqs:SendMessage
            Resource:
              - !GetAtt Q1.Arn
  T1SUB:
    Type: 'AWS::SNS::Subscription'
    Properties:
      Protocol: sqs
      Endpoint: !GetAtt Q1.Arn
      TopicArn: !Ref T1

You can try using Former2, which is an open-source tool:
Former2 allows you to generate Infrastructure-as-Code outputs from your existing resources within your AWS account. By making the relevant calls using the AWS JavaScript SDK, Former2 will scan across your infrastructure and present you with the list of resources for you to choose which to generate outputs for.

Related

How to catch logs of an external endpoint getting triggered in CloudWatch AWS

I'm new to AWS and was trying to use EventBridge and an API destination for pinging an endpoint. I have successfully created the API destination and API connection
using a CloudFormation template, and also successfully created the IAM role and rule for the same, but I want to have logs in CloudWatch of that endpoint being triggered after a schedule expression. (There is no event triggering that endpoint; I want to trigger it every 5 minutes.)
Below is my CloudFormation template:
RecordsRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - events.amazonaws.com
          Action:
            - sts:AssumeRole
    Path: '/jobs-role/'
    Policies:
      - PolicyName: eventbridge-api-destinations
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action:
                - events:InvokeApiDestination
              Resource: !GetAtt Destination.Arn
ApiConnection:
  Type: AWS::Events::Connection
  Properties:
    AuthorizationType: API_KEY
    Description: Connection to API
    AuthParameters:
      ApiKeyAuthParameters:
        ApiKeyName: authorization
        ApiKeyValue: MyAPIkey
Destination:
  Type: AWS::Events::ApiDestination
  Properties:
    ConnectionArn: !GetAtt ApiConnection.Arn
    Description: API Destination to send events
    HttpMethod: POST
    InvocationEndpoint: !Ref ApiDestinationInvocationEndpoint
    InvocationRateLimitPerSecond: 10
RecordsRule:
  Type: AWS::Events::Rule
  Properties:
    Description: !Sub 'Trigger ${RecordsRole} according to the specified schedule'
    State: ENABLED
    ScheduleExpression: "rate(5 minutes)"
    Targets:
      - Id: !Sub '${RecordsRole}'
        Arn: !GetAtt Destination.Arn
        RoleArn: !GetAtt RecordsRole.Arn
        HttpParameters:
          HeaderParameters:
            Content-type: application/json;charset=utf-8
Can anyone tell me what I am lacking?
Below is a link to my screenshot of the invocations graph in Metrics:
[1]: https://i.stack.imgur.com/Zms44.png

What is SourceArn in AWS Lambda permission?

This is my CloudFormation template for Lambda. I upload the jar file to S3 and then load it into Lambda from S3. I completed the LambdaRole and LambdaFunction sections; in the permission section, what should the SourceArn be? I went through the official Lambda docs but didn't find any example.
Also, can anyone take a look to see if this template is correct or not? Thanks!
ConfigurationLambdaRole:
  Type: "AWS::IAM::Role"
  Properties:
    RoleName: 'configuration-sqs-lambda'
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - lambda.amazonaws.com
              - events.amazonaws.com
              - s3.amazonaws.com
          Action:
            - sts:AssumeRole
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonSQSFullAccess
      - arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
ConfigurationLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Description: 'configuration service with lambda'
    FunctionName: 'configuration-lambda'
    Handler: com.lambda.handler.EventHandler::handleRequest
    Runtime: java11
    MemorySize: 128
    Timeout: 120
    Code:
      S3Bucket: configurationlambda
      S3Key: lambda-service-1.0.0-SNAPSHOT.jar
    Role: !GetAtt ConfigurationLambdaRole.Arn
ConfigurationLambdaInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName:
      Fn::GetAtt:
        - ConfigurationLambdaFunction
        - Arn
    Action: 'lambda:InvokeFunction'
    Principal: s3.amazonaws.com
    SourceArn: ''
You are creating a Role to run your lambda, a lambda function, and permissions for something to invoke that lambda. The SourceArn is the thing that will invoke the lambda. So in your case it sounds like you want an S3 bucket to invoke the lambda, so the SourceArn would be the ARN of the S3 bucket.
This tutorial relates to what you are doing--specifically step 8 under the "Create the Lambda function" section.
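For example, if the bucket that should trigger the function is named my-upload-bucket (a made-up name, substitute your own), the permission could look like this sketch:
ConfigurationLambdaInvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !GetAtt ConfigurationLambdaFunction.Arn
    Action: 'lambda:InvokeFunction'
    Principal: s3.amazonaws.com
    # ARN of the bucket that is allowed to invoke the function (hypothetical bucket name)
    SourceArn: 'arn:aws:s3:::my-upload-bucket'
    # Recommended for S3, since bucket ARNs do not contain the account id
    SourceAccount: !Ref AWS::AccountId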
Your CF template generally looks correct. The only thing I see that will be assuming this role is lambda.amazonaws.com, so the role may not need to list the following in the AssumeRolePolicyDocument section:
- events.amazonaws.com
- s3.amazonaws.com
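A minimal sketch of the trimmed trust policy, assuming only Lambda needs to assume the role:
AssumeRolePolicyDocument:
  Version: '2012-10-17'
  Statement:
    - Effect: Allow
      Principal:
        Service:
          # Only the Lambda service assumes this execution role
          - lambda.amazonaws.com
      Action:
        - sts:AssumeRole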
Also see https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-permission.html

Usage of AWS Lake Formation with CloudFormation

I want to set up an additional security layer on top of my S3 / Glue Data Lake
using Lake Formation. I want to do as much as possible via Infrastructure as Code, so naturally I looked into the documentation of the CloudFormation implementation of Lake Formation, which is currently, frankly speaking, not very useful.
I have a simple use case: Granting admin permission to one IAM-User on one bucket.
Can someone help me out with an example or anything similar?
This is what I found out:
Setting a data lake location and granting data permissions to your databases is currently possible. Unfortunately, it seems like CloudFormation doesn't support Data locations yet. You will have to grant your IAM Role access to the S3 Bucket by hand in the AWS Console under Lake Formation -> Data locations. I will update the answer as soon as CloudFormation supports more.
This is the template that we are using at the moment:
DataBucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain
  UpdateReplacePolicy: Retain
  Properties:
    AccessControl: Private
    BucketEncryption:
      ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: AES256
    VersioningConfiguration:
      Status: Enabled
    LifecycleConfiguration:
      Rules:
        - Id: InfrequentAccessRule
          Status: Enabled
          Transitions:
            - TransitionInDays: 30
              StorageClass: INTELLIGENT_TIERING
GlueDatabase:
  Type: AWS::Glue::Database
  Properties:
    CatalogId: !Ref AWS::AccountId
    DatabaseInput:
      Name: !FindInMap [Environment, !Ref Environment, GlueDatabaseName]
      Description: !Sub Glue Database ${Environment}
GlueDataAccessRole:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Sid: ''
          Effect: Allow
          Principal:
            Service: glue.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: AccessDataBucketPolicy
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - glue:*
                - lakeformation:*
              Resource: '*'
            - Effect: Allow
              Action:
                - s3:GetObject
                - s3:PutObject
                - s3:ListBucket
                - s3:DeleteObject
              Resource:
                - !Sub ${DataBucket.Arn}
                - !Sub ${DataBucket.Arn}/*
DataBucketLakeFormation:
  Type: AWS::LakeFormation::Resource
  Properties:
    ResourceArn: !GetAtt DataBucket.Arn
    UseServiceLinkedRole: true
DataLakeFormationPermission:
  Type: AWS::LakeFormation::Permissions
  Properties:
    DataLakePrincipal:
      DataLakePrincipalIdentifier: !GetAtt GlueDataAccessRole.Arn
    Permissions:
      - ALL
    Resource:
      DatabaseResource:
        Name: !Ref GlueDatabase
      DataLocationResource:
        S3Resource: !Ref DataBucket
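For the original use case (granting admin permission to a single IAM user), an additional database-level grant might look like the sketch below; the user name is hypothetical:
UserLakeFormationPermission:
  Type: AWS::LakeFormation::Permissions
  Properties:
    DataLakePrincipal:
      # Hypothetical IAM user, replace with your own principal ARN
      DataLakePrincipalIdentifier: !Sub 'arn:aws:iam::${AWS::AccountId}:user/data-lake-admin'
    Permissions:
      - ALL
    Resource:
      DatabaseResource:
        Name: !Ref GlueDatabase
The grant on the S3 data location itself would still have to be done by hand in the console, as described above.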

Stream logs to Elasticsearch using a CloudFormation template

CloudTrail default logs can be streamed to an Elasticsearch domain as shown in this image. How do I achieve this using a CloudFormation template?
Update:
If you are using aws-cli, take a look at my answer here.
Well, after a few hours of exploring and reading a lot of documentation, I finally succeeded in creating this template.
Designer overview:
In order to enable streaming logs to Elasticsearch, we need to create the following resources:
Lambda function - forwards the logs from the CloudWatch log group to Elasticsearch.
Relevant IAM Role - to get logs from CloudWatch and insert them into Elasticsearch.
Lambda permission - the AWS::Lambda::Permission resource grants an AWS service or another account permission to use a function; here it allows the CloudWatch log group to trigger the Lambda.
Subscription Filter - the AWS::Logs::SubscriptionFilter resource specifies a subscription filter and associates it with the specified log group. Subscription filters allow you to subscribe to a real-time stream of log events and have them delivered to a specific destination.
Template usage:
Download LogsToElasticsearch.zip from my Github page.
Update var endpoint = '${Elasticsearch_Endpoint}'; in index.js with your Elasticsearch URL, e.g. 'search-xxx-yyyy.eu-west-1.es.amazonaws.com'.
Copy the zip file to s3 bucket which will be used in the template (LambdaArtifactBucketName).
Fill in the relevant Parameters - you can find a description for each one.
Template YAML:
AWSTemplateFormatVersion: 2010-09-09
Description: Enable logs to elasticsearch
Parameters:
  ElasticsearchDomainName:
    Description: Name of the Elasticsearch domain that you want to insert logs to
    Type: String
    Default: amitb-elastic-domain
  CloudwatchLogGroup:
    Description: Name of the log group you want to subscribe
    Type: String
    Default: /aws/eks/amitb-project/cluster
  LambdaName:
    Description: Name of the lambda function
    Type: String
    Default: amitb-cloudwatch-logs
  LambdaRole:
    Description: Name of the role used by the lambda function
    Type: String
    Default: amit-cloudwatch-logs-role
  LambdaArtifactBucketName:
    Description: The bucket where the lambda function located
    Type: String
    Default: amit-bucket
  LambdaArtifactName:
    Description: The name of the lambda zipped file
    Type: String
    Default: LogsToElasticsearch.zip
  VPC:
    Description: Choose which VPC the Lambda-functions should be deployed to
    Type: 'AWS::EC2::VPC::Id'
    Default: vpc-1111111
  Subnets:
    Description: Choose which subnets the Lambda-functions should be deployed to
    Type: 'List<AWS::EC2::Subnet::Id>'
    Default: 'subnet-123456789,subnet-123456456,subnet-123456741'
  SecurityGroup:
    Description: Select the Security Group to use for the Lambda-functions
    Type: 'List<AWS::EC2::SecurityGroup::Id>'
    Default: 'sg-2222222,sg-12345678'
Resources:
  ExampleInvokePermission:
    Type: 'AWS::Lambda::Permission'
    DependsOn: ExampleLambdaFunction
    Properties:
      FunctionName:
        'Fn::GetAtt':
          - ExampleLambdaFunction
          - Arn
      Action: 'lambda:InvokeFunction'
      Principal: !Sub 'logs.${AWS::Region}.amazonaws.com'
      SourceArn: !Sub >-
        arn:aws:logs:${AWS::Region}:${AWS::AccountId}:log-group:${CloudwatchLogGroup}:*
      SourceAccount: !Ref 'AWS::AccountId'
  LambdaExecutionRole:
    Type: 'AWS::IAM::Role'
    Properties:
      RoleName: !Ref LambdaRole
      ManagedPolicyArns:
        - 'arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - 'sts:AssumeRole'
      Path: /
      Policies:
        - PolicyName: lambda-to-es-via-vpc-policy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 'es:*'
                Resource:
                  - !Sub >-
                    arn:aws:es:${AWS::Region}:${AWS::AccountId}:domain/${ElasticsearchDomainName}
        - PolicyName: logs-and-ec2-permissions
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - 'ec2:CreateNetworkInterface'
                  - 'ec2:DescribeNetworkInterfaces'
                  - 'ec2:DeleteNetworkInterface'
                  - 'logs:CreateLogGroup'
                  - 'logs:CreateLogStream'
                  - 'logs:PutLogEvents'
                Resource: '*'
  ExampleLambdaFunction:
    Type: 'AWS::Lambda::Function'
    DependsOn: LambdaExecutionRole
    Properties:
      Code:
        S3Bucket: !Ref LambdaArtifactBucketName
        S3Key: !Ref LambdaArtifactName
      FunctionName: !Ref LambdaName
      Handler: !Sub '${LambdaName}.handler'
      Role:
        'Fn::GetAtt':
          - LambdaExecutionRole
          - Arn
      Runtime: nodejs8.10
      Timeout: '300'
      VpcConfig:
        SecurityGroupIds: !Ref SecurityGroup
        SubnetIds: !Ref Subnets
      MemorySize: 512
  SubscriptionFilter:
    Type: 'AWS::Logs::SubscriptionFilter'
    DependsOn: ExampleInvokePermission
    Properties:
      LogGroupName: !Ref CloudwatchLogGroup
      FilterPattern: '[host, ident, authuser, date, request, status, bytes]'
      DestinationArn:
        'Fn::GetAtt':
          - ExampleLambdaFunction
          - Arn
Results:
CloudWatch log:
Hope you find it helpful.
Update 02/09/2020:
Node.js 8.10 is now deprecated; you should use Node.js 10 or 12.
https://docs.aws.amazon.com/lambda/latest/dg/runtime-support-policy.html
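If you deploy this template on a newer runtime, only the Runtime property of the function should need to change, for example (a sketch, the other properties stay as above):
ExampleLambdaFunction:
  Type: 'AWS::Lambda::Function'
  Properties:
    # ...other properties unchanged from the template above...
    Runtime: nodejs12.x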

Add retention policy to API Gateway logs published to CloudWatch

I have to add a retention policy to the API Gateway CloudWatch logs, hence I cannot use the AWS-provided policy to do so, i.e. arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs.
So instead I created my own role with a custom policy:
ApiGatewayCloudWatchLogsRole:
  Type: 'AWS::IAM::Role'
  DependsOn: APIGFunctionLogGroup
  Properties:
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - apigateway.amazonaws.com
          Action: 'sts:AssumeRole'
    Path: /
    Policies:
      - PolicyName: APIGatewayPushLogsPolicy
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - 'logs:CreateLogStream'
                - 'logs:PutLogEvents'
                - 'logs:DescribeLogGroups'
                - 'logs:DescribeLogStreams'
                - 'logs:GetLogEvents'
                - 'logs:FilterLogEvents'
              Resource: '*'
And then created the LogGroup with retention as:
APIGFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  Properties:
    RetentionInDays: 30
    LogGroupName: !Join
      - ''
      - - API-Gateway-Execution-Logs_
        - !Ref MyRestApi
And passed the above-created role to AWS::ApiGateway::Account:
ApiGatewayAccount:
  Type: 'AWS::ApiGateway::Account'
  DependsOn: APIGFunctionLogGroup
  Properties:
    CloudWatchRoleArn: !GetAtt
      - ApiGatewayCloudWatchLogsRole
      - Arn
But while deploying my API Gateway I am getting an error.
I have the trust policy as well, but the API Gateway Account is not getting created.
If you create the log group yourself, before API Gateway does, you should be able to use the existing policy/service role.
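A minimal sketch of that approach, reusing the resource names from the question (the prod stage name in the log group name is an assumption):
APIGFunctionLogGroup:
  Type: 'AWS::Logs::LogGroup'
  Properties:
    # Execution log groups are named API-Gateway-Execution-Logs_{rest-api-id}/{stage-name}
    LogGroupName: !Sub 'API-Gateway-Execution-Logs_${MyRestApi}/prod'
    RetentionInDays: 30
ApiGatewayCloudWatchLogsRole:
  Type: 'AWS::IAM::Role'
  Properties:
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - apigateway.amazonaws.com
          Action: 'sts:AssumeRole'
    # With the log group pre-created, the AWS-managed policy is sufficient
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AmazonAPIGatewayPushToCloudWatchLogs
ApiGatewayAccount:
  Type: 'AWS::ApiGateway::Account'
  DependsOn: APIGFunctionLogGroup
  Properties:
    CloudWatchRoleArn: !GetAtt ApiGatewayCloudWatchLogsRole.Arn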