Can I enable Redshift cross-region snapshot copy through CloudFormation?

Redshift CFN resource AWS::Redshift::Cluster has two properties for configuring cross region snapshot copy: DestinationRegion and SnapshotCopyGrantName. But after creating the stack with those parameters I see the cross region snapshot copy is still disabled. Am I missing something?

This is not yet supported, but it's already on the roadmap as AWS::Redshift::SnapshotConfiguration.
For now you would have to use a custom resource in the form of a Lambda function that calls the AWS SDK's enable_snapshot_copy to enable snapshot copying. You could also use AWSUtility::CloudFormation::CommandRunner to enable it with the AWS CLI.
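A minimal sketch of the SDK call such a Lambda-backed custom resource could make (the custom-resource handler wiring and the response signalling back to CloudFormation are omitted; the cluster name, destination region, and retention period below are placeholders):

import boto3

def enable_cross_region_copy(cluster_id, destination_region, grant_name=None):
    redshift = boto3.client("redshift")
    kwargs = {
        "ClusterIdentifier": cluster_id,           # e.g. "my-redshift-cluster"
        "DestinationRegion": destination_region,   # e.g. "us-west-2"
        "RetentionPeriod": 7,                      # days to keep the copied snapshots
    }
    if grant_name:
        # SnapshotCopyGrantName is only needed for KMS-encrypted clusters
        kwargs["SnapshotCopyGrantName"] = grant_name
    redshift.enable_snapshot_copy(**kwargs)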

Related

Could not find an option to pass parameter `CallerReference` in Terraform resource `aws_cloudfront_origin_access_identity`

We are in the process of migrating from API calls to Terraform to spin up resources/accesses/policies in AWS. I got stuck at a point where I could not find an option to pass CallerReference to the Terraform resource aws_cloudfront_origin_access_identity.
We have this option using api: https://docs.aws.amazon.com/cloudfront/latest/APIReference/API_CreateCloudFrontOriginAccessIdentity.html
Is there any other way to pass it?
If it's not directly supported by Terraform, you can always use local-exec with the AWS CLI to create your origin access identity.
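If you go that route, this is roughly the call local-exec would need to wrap, sketched here with boto3 instead of the AWS CLI purely for illustration (the CallerReference and Comment values are placeholders):

import boto3

cloudfront = boto3.client("cloudfront")
response = cloudfront.create_cloud_front_origin_access_identity(
    CloudFrontOriginAccessIdentityConfig={
        "CallerReference": "my-unique-caller-reference",  # placeholder, must be unique
        "Comment": "OAI created outside Terraform to control CallerReference",
    }
)
print(response["CloudFrontOriginAccessIdentity"]["Id"])

The resulting identity could then be imported into your Terraform state if you want Terraform to manage it afterwards.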

aws-cdk KMS multi-region keys. Which constructs should be used to set up regions?

Using the AWS CDK we can create multi-region KMS keys by:
Creating the primary key (pk) with the level 1 construct CfnKey
Creating a replica of the primary key using the level 1 construct CfnReplicaKey, which takes the primary key's ARN (pk_arn) as one of its parameters
Those constructs, however, do not specify the regions where I want to make those keys available.
My question is:
What aws-cdk construct or pattern should I use to make the replicas available in certain regions?
Thanks in advance
CfnReplicaKey will be created in the parent stack's region (see a CloudFormation example in the docs).
For the CDK (and CloudFormation), the unit of deployment is the Stack, which is tied to one environment:
Each Stack instance in your AWS CDK app is explicitly or implicitly associated with an environment (env). An environment is the target AWS account and region into which the stack is intended to be deployed.
This logic applies generally to all CDK resources - the account/region is defined at the stack level, not the construct level. Stacks can be replicated across regions and accounts in several ways, including directly in a CDK app:
# replicate the stack in several regions using the CDK
from aws_cdk import core

app = core.App()
for region in ["us-east-1", "us-west-1", "eu-central-1", "eu-west-1"]:
    MyStack(app, "MyStack_" + region, env=core.Environment(
        region=region,
        account="555599931100",
    ))

app.synth()
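Tying this back to the question, a hedged sketch (the stack class, key policy, and ARNs below are placeholders, not from the question): wrap CfnReplicaKey in a small stack and instantiate that stack once per target region, so each replica is created in the region of its stack's env.

from aws_cdk import core
from aws_cdk import aws_kms as kms

class ReplicaKeyStack(core.Stack):
    def __init__(self, scope, construct_id, primary_key_arn, **kwargs):
        super().__init__(scope, construct_id, **kwargs)
        kms.CfnReplicaKey(
            self, "ReplicaKey",
            primary_key_arn=primary_key_arn,   # ARN of the multi-region primary key
            key_policy={                       # minimal placeholder policy
                "Version": "2012-10-17",
                "Statement": [{
                    "Effect": "Allow",
                    "Principal": {"AWS": f"arn:aws:iam::{self.account}:root"},
                    "Action": "kms:*",
                    "Resource": "*",
                }],
            },
        )

# one replica per region, each in its own stack/environment
for region in ["us-west-1", "eu-west-1"]:
    ReplicaKeyStack(app, "ReplicaKeys-" + region,
                    primary_key_arn="arn:aws:kms:us-east-1:555599931100:key/mrk-EXAMPLE",
                    env=core.Environment(region=region, account="555599931100"))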

CloudFormation is not propagating stack-level tags for EMR

As per the AWS CloudFormation documentation, CloudFormation automatically applies the following stack-level tags to resources:
aws:cloudformation:logical-id
aws:cloudformation:stack-id
aws:cloudformation:stack-name
I could see that for resources like EC2, S3, etc.
But when it comes to EMR, I couldn't see those tags. I need the aws:cloudformation:stack-id tag value so that I can later identify the stack ID without any hassle.
Isn’t it supported for EMR?
If not, what could be a workaround? I need to add the CloudFormation stack ID so that I can easily identify the stack later.
Note: aws cloudformation describe-stack-resources --physical-resource-id j-XXXXXXXXXXX is not an option for getting the stack ID because I don't have sufficient IAM permissions.
How I'm creating the EMR cluster: I have one Lambda which invokes CloudFormation using boto3, which then creates the cluster.
I checked that on my EMR cluster and CloudFormation. You are correct. The tags are nowhere to be seen.
It could be an oversight on AWS's part, as they explicitly state in the docs that only EBS volumes don't get such tags:
All stack-level tags, including automatically created tags, are propagated to resources that AWS CloudFormation supports. Currently, tags are not propagated to Amazon EBS volumes that are created from block device mappings.
The only workaround I can think of is to "manually" create such tags, e.g. using custom resources. Or, as you are already using Lambda, do it in your Lambda after the EMR cluster is created.
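For the Lambda route, a minimal sketch (the function, tag key, and values are placeholders): once the stack has created the cluster, tag it with the stack ID yourself. Note that user-defined tags cannot start with the reserved aws: prefix, so a custom key has to be used.

import boto3

def tag_emr_cluster(cluster_id, stack_id, region="us-east-1"):
    emr = boto3.client("emr", region_name=region)
    emr.add_tags(
        ResourceId=cluster_id,  # e.g. "j-XXXXXXXXXXX"
        Tags=[
            # "aws:cloudformation:stack-id" is reserved, so mirror it under a custom key
            {"Key": "cloudformation-stack-id", "Value": stack_id},
        ],
    )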

'm3.xlarge' is not supported in AWS Data Pipeline

I am new to AWS and trying to run an AWS Data Pipeline that loads data from DynamoDB to S3, but I am getting the error below. Please help.
Unable to create resource for #EmrClusterForBackup_2020-05-01T14:18:47 due to: Instance type 'm3.xlarge' is not supported. (Service: AmazonElasticMapReduce; Status Code: 400; Error Code: ValidationException; Request ID: 3bd57023-95e4-4d0a-a810-e7ba9cdc3712)
I was facing the same problem when I had the DynamoDB table and S3 bucket created in the us-east-2 region and the pipeline in us-east-1, as I was not allowed to create a pipeline in us-east-2.
But once I created the DynamoDB table and S3 bucket in us-east-1 and then the pipeline in the same region, it worked well even with the m3.xlarge instance type.
It is always good to use the latest-generation instances. They are technologically more advanced and sometimes even cheaper.
So there is no reason to start on older generations; they are there only to provide backward compatibility for people who already have infrastructure on those machines.
I think this should help you. AWS will force you to use m3 if you use a DynamoDBDataNode or resizeClusterBeforeRunning:
https://aws.amazon.com/premiumsupport/knowledge-center/datapipeline-override-instance-type/?nc1=h_ls
I faced the same error but just changing from m3.xlarge to m4.xlarge didn't solve the problem. The DynamoDB table I was trying to export was in eu-west-2 but at the time of writing Data Pipeline is not available in eu-west-2. I found I had to edit the pipeline to change the following:
Instance type from m3.xlarge to m4.xlarge
Release Label from emr-5.23.0 to emr-5.24.0 (not strictly necessary for export, but required for import [1])
Hardcode the region to eu-west-2
So the end result was a pipeline with the instance type, release label, and region changed as above.
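The same overrides could also be applied programmatically; a hedged sketch (the pipeline ID, the pipeline's home region, and the object name EmrClusterForBackup are assumptions based on the error message above):

import boto3

# Data Pipeline itself is not available in eu-west-2, so talk to it in the
# region where the pipeline lives (assumed eu-west-1 here).
dp = boto3.client("datapipeline", region_name="eu-west-1")
pipeline_id = "df-EXAMPLE"  # placeholder

definition = dp.get_pipeline_definition(pipelineId=pipeline_id)
overrides = {
    "coreInstanceType": "m4.xlarge",
    "masterInstanceType": "m4.xlarge",
    "releaseLabel": "emr-5.24.0",
    "region": "eu-west-2",  # region of the DynamoDB table / EMR cluster
}

# Rewrite only the fields that already exist on the EMR cluster object.
for obj in definition["pipelineObjects"]:
    if obj["name"] == "EmrClusterForBackup":
        for field in obj["fields"]:
            if field["key"] in overrides:
                field["stringValue"] = overrides[field["key"]]

dp.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=definition["pipelineObjects"],
    parameterObjects=definition.get("parameterObjects", []),
    parameterValues=definition.get("parameterValues", []),
)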
[1] From: https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-importexport-ddb-prereq.html
On-Demand Capacity works only with EMR 5.24.0 or later
DynamoDB tables configured for On-Demand Capacity are supported only when using Amazon EMR release version 5.24.0 or later. When you use a template to create a pipeline for DynamoDB, choose Edit in Architect and then choose Resources to configure the Amazon EMR cluster that AWS Data Pipeline provisions. For Release label, choose emr-5.24.0 or later.

How to mention the region for a lambda function in AWS using cli

I am trying to create a Lambda function in a particular region using the AWS CLI, but I am not sure how to do it. I looked at this doc and couldn't find any parameter related to region: http://docs.aws.amazon.com/cli/latest/reference/lambda/create-function.html
Thank you.
The region is a common option to all AWS CLI commands. If you want to explicitly include the region in your command, simply include --region us-east-1, for example, to run your command in the us-east-1 region.
If this parameter is not specified explicitly, it will be implicitly derived from your configuration. This could come from environment variables, your CLI's config file, or, when running on an EC2 instance, the instance metadata.
A safe command to verify this is aws lambda list-functions. This is a read-only command that lists your functions; it will only list functions in the region that was implicitly supplied via your configuration. You can explicitly supply a region to this command and observe that the results change if you have functions in one region but not the other.
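The same applies when using the SDK; for illustration, a boto3 equivalent of running list-functions against two regions (the region names are just examples):

import boto3

# The region is chosen per client, just as --region chooses it per CLI call.
for region in ("us-east-1", "eu-west-1"):
    lam = boto3.client("lambda", region_name=region)
    names = [f["FunctionName"] for f in lam.list_functions()["Functions"]]
    print(region, names)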
Further Reading
AWS Documentation - Configuring the AWS Command Line Interface
AWS Documentation - Configuration and Credential Files
AWS Documentation - AWS CLI Options