Ansible Cloudwatch rule reports failed invocations

I have created an AWS Lambda that works well when I test it and when I create a cron job manually through a CloudWatch rule.
It reports its metrics as invocations (not failed invocations) and also logs details about each execution.
Then I decided to remove that manually created CloudWatch rule in order to create one with Ansible:
- name: Create lambda service.
  lambda:
    name: "{{ item.name }}"
    state: present
    zip_file: "{{ item.zip_file }}"
    runtime: 'python2.7'
    role: 'arn:aws:iam::12345678901:role/lambda_ecr_delete'
    handler: 'main.handler'
    region: 'eu-west-2'
    environment_variables: "{{ item.env_vars }}"
  with_items:
    - name: lamda_ecr_cleaner
      zip_file: assets/scripts/ecr-cleaner.zip
      env_vars:
        'DRYRUN': '0'
        'IMAGES_TO_KEEP': '20'
        'REGION': 'eu-west-2'
  register: new_lambda

- name: Schedule a cloudwatch event.
  cloudwatchevent_rule:
    name: ecr_delete
    schedule_expression: "rate(1 day)"
    description: Delete old images in ecr repo.
    targets:
      - id: ecr_delete
        arn: "{{ item.configuration.function_arn }}"
  with_items: "{{ new_lambda.results }}"
That creates almost exactly the same CloudWatch rule. The only difference I can see from the manually created one is in the targets: the Lambda version/alias is set to Default when created manually, while it is set to Version, with a corresponding version number, when created with Ansible.
The CloudWatch rule created with Ansible has only failed invocations.
Any idea why this is? I can't see any logs. Is there a way I can set the version to Default as well with the cloudwatchevent_rule module in Ansible?
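A possible workaround (an untested sketch): if the Ansible lambda module returns a version-qualified ARN such as arn:aws:lambda:<region>:<account-id>:function:<name>:1, you can strip the trailing qualifier so the target points at the unqualified function, which the console shows as Default:
- name: Schedule a cloudwatch event.
  cloudwatchevent_rule:
    name: ecr_delete
    schedule_expression: "rate(1 day)"
    description: Delete old images in ecr repo.
    targets:
      - id: ecr_delete
        # Drop a trailing ":<version>" qualifier, if present.
        arn: "{{ item.configuration.function_arn | regex_replace(':[0-9]+$', '') }}"
  with_items: "{{ new_lambda.results }}"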

I've lost hours on this too, with the same error and the same confusion (why is there no log for the failed invocations?). I'm going to share my "solution"; it may solve the problem for someone, and it will help others debug and find the ultimate solution.
Note: Be careful, this could allow any AWS account to execute your Lambda functions.
Since you managed to invoke the function when you created the rule target manually, I assume you added the invoke permission on the Lambda for CloudWatch. However, it looks like the source account ID is different when the event is created via the CLI/API than when it is created via the AWS dashboard/console.
If you are adding the SourceAccount condition to the Lambda invoke permission for principal "events.amazonaws.com" (to prevent arbitrary AWS accounts from executing your Lambdas), just remove it (at your own risk!).
So, if your Lambda policy looks like this:
{
  "Sid": "<sid>",
  "Effect": "Allow",
  "Principal": {
    "Service": "events.amazonaws.com"
  },
  "Action": "lambda:InvokeFunction",
  "Condition": {
    "StringEquals": {
      "AWS:SourceAccount": "<account-id>"
    }
  },
  "Resource": "arn:aws:lambda:<region>:<account-id>:function:<lambda-function>"
}
Remove the "Condition" field
{
  "Sid": "<sid>",
  "Effect": "Allow",
  "Principal": {
    "Service": "events.amazonaws.com"
  },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:<region>:<account-id>:function:<lambda-function>"
}
And "maybe" it will work for you.
I think something weird is happening with the CloudWatch event owner/creator data when the event is created via the CLI/API... maybe a bug? Not sure. I will keep working on it.
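For what it's worth, you can check whether the Condition block is actually attached to your function's resource policy with get-policy (it returns the policy as an escaped JSON string):
aws lambda get-policy --function-name <lambda-function>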

To extend the answer given here: https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/CWE_Troubleshooting.html#LAMfunctionNotInvoked. Since you are creating the rule via the API, you should add the permission to Lambda as mentioned before. Without compromising security, you can do the following:
Add the rule with a PutRule API call; it will return:
{
  "RuleArn": "string"
}
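For example, with the CLI (using the rule from the question; the name and schedule are just placeholders):
aws events put-rule \
  --name ecr_delete \
  --schedule-expression 'rate(1 day)'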
Use the RuleArn in a Lambda AddPermission call:
aws lambda add-permission \
  --function-name MyFunction \
  --statement-id MyId \
  --action 'lambda:InvokeFunction' \
  --principal events.amazonaws.com \
  --source-arn arn-from-PutRule-request

If you are looking for the reason your invocations are failing, see the other answers, UNLESS you're trying to implement AWS::Events::Rule and you're seeing failed invocations. The following answer may resolve the issue and negate the need to find these non-existent logs.
Cloudwatch failedinvocation error no logs available


API Gateway cares about my Authorization header when it shouldn't

I created a private REST API in API Gateway (with Lambda proxy integration), which needs to be accessible from a VPC. I've setup a VPC Endpoint for API Gateway in the VPC. The API is accessible from within the VPC, as expected.
The VPC endpoint (and indeed the entire VPC environment) is created via CloudFormation.
The API needs to consume an Authorization header, which is not something I can change. The content of that header is specific to our company; it's not something standard. The problem is that when I add an Authorization header to the request, API Gateway rejects it with the following error (from the API Gateway logs in CloudWatch):
IncompleteSignatureException
Authorization header requires 'Credential' parameter.
Authorization header requires 'Signature' parameter.
Authorization header requires 'SignedHeaders' parameter.
Authorization header requires existence of either a 'X-Amz-Date' or a 'Date' header.
Authorization=[the header content here]
If I remove the Authorization header, the request is accepted and I get the expected response from my lambda. The method I'm calling has Auth set to NONE.
The strange thing is that if I delete the VPC endpoint and create it manually via the console, it works correctly - the Authorization header is passed through to my lambda, instead of API Gateway inspecting and rejecting it.
I've torn the endpoint down and recreated it multiple times manually and with CloudFormation and the results are consistent. But I've compared them to each other and they look exactly the same: same settings, same subnets, same security groups, same policy. Since I can see no difference between them, I'm at a bit of a loss as to why it doesn't work with the CloudFormation version.
The only difference I've been able to find is in the aws headers for each version (with Authorization header removed, otherwise it doesn't get as far as logging the headers with the CF endpoint). With the CF endpoint, the headers include x-amzn-vpce-config=0 and x-amzn-vpce-policy-url=MQ==. With the manual endpoint I get x-amzn-vpce-config=1, and the policy-url header isn't included.
I've also tried changing the API to both set and remove the VPC endpoint (it can be set on the API in the Settings section), and redeployed it, but in either case it has no effect - requests continue to work/get rejected as before.
Does anyone have any ideas? I've posted this on the AWS forum as well, but just in case anyone here has come across this before...
If it's of any interest, the endpoint is created like so ([] = redacted):
ApiGatewayVPCEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    PrivateDnsEnabled: true
    PolicyDocument:
      Statement:
        - Action: '*'
          Effect: Allow
          Resource: '*'
          Principal: '*'
    ServiceName: !Sub com.amazonaws.${AWS::Region}.execute-api
    SecurityGroupIds:
      - !Ref [my sec group]
    SubnetIds:
      - !Ref [subnet a]
      - !Ref [subnet b]
      - !Ref [subnet c]
    VpcEndpointType: Interface
    VpcId: !Ref [my vpc]
I've managed to get it working, and it's the most ridiculous thing.
This is the endpoint policy in CF (including property name to show it in context):
PolicyDocument:
  Statement:
    - Action: '*'
      Effect: Allow
      Resource: '*'
      Principal: '*'
This is how that policy appears in the console:
{
  "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*",
      "Principal": "*"
    }
  ]
}
This is how the policy appears in describe-vpc-endpoints:
"PolicyDocument": "{\"Statement\":[{\"Action\":\"*\",\"Resource\":\"*\",\"Effect\":\"Allow\",\"Principal\":\"*\"}]}"
Now let's look at the policy of a manually created endpoint.
Console:
{
  "Statement": [
    {
      "Action": "*",
      "Effect": "Allow",
      "Resource": "*",
      "Principal": "*"
    }
  ]
}
describe-vpc-endpoints:
"PolicyDocument": "{\n \"Statement\": [\n {\n \"Action\": \"*\", \n \"Effect\": \"Allow\", \n \"Principal\": \"*\", \n \"Resource\": \"*\"\n }\n ]\n}"
The console shows them exactly the same, and the JSON returned by describe-vpc-endpoints is the same except for some "prettifying" newlines and whitespace. Surely that could have no effect whatsoever? Wrong! It's those newlines that make the policy actually work!
Anyway, the solution is to supply the policy as JSON, for example:
ApiGatewayVPCEndpoint:
  Type: AWS::EC2::VPCEndpoint
  Properties:
    PrivateDnsEnabled: true
    PolicyDocument: '
      {
        "Statement": [
          {
            "Action": "*",
            "Effect": "Allow",
            "Resource": "*",
            "Principal": "*"
          }
        ]
      }'
    ServiceName: !Sub com.amazonaws.${AWS::Region}.execute-api
    SecurityGroupIds:
      - !Ref [my sec group]
    SubnetIds:
      - !Ref [subnet a]
      - !Ref [subnet b]
      - !Ref [subnet c]
    VpcEndpointType: Interface
    VpcId: !Ref [my vpc]
You can even put all the JSON on a single line; AWS will insert the newline characters at some point. It's only YAML that gets transformed to JSON without newlines that causes this issue.
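For instance, this single-line form of the same policy should work equally well:
PolicyDocument: '{"Statement":[{"Action":"*","Effect":"Allow","Resource":"*","Principal":"*"}]}'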
With the CF resource like that, API Gateway accepts my Authorization header and passes it through to the Lambda without any issues.

Setting a QueuePolicy from Cloudformation does not take effect at Stack creation - but setting an identical policy after creation takes effect

I'm trying to create a queue and a subscription to it from an (existing) SNS Topic. All resources in the same account. I know that, in order to do so, the queue needs to have a QueuePolicy that allows SNS to SendMessage to the queue.
However, I've found that the QueuePolicy I've created via Cloudformation does not appear to be respected - messages are not delivered to the queue, and Cloudwatch logs from the Topic report that delivery failed because permission was denied. If I re-apply that same policy after creation, however, it takes effect and messages are delivered.
Here's what I tried first:
$ cat template.yaml
---
AWSTemplateFormatVersion: "2010-09-09"
Description:
  ...
Parameters:
  TopicParameter:
    Type: String
Resources:
  Queue:
    Type: AWS::SQS::Queue
  Subscription:
    Type: AWS::SNS::Subscription
    DependsOn: QueuePolicy
    Properties:
      Endpoint:
        Fn::GetAtt:
          - "Queue"
          - "Arn"
      Protocol: "sqs"
      RawMessageDelivery: "true"
      TopicArn: !Ref TopicParameter
  QueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Version: 2012-10-17
        Statement:
          - Sid: '1'
            Effect: Allow
            Principal: "*"
            Action: "SQS:SendMessage"
            Resource: !Ref Queue
            Condition:
              ArnEquals:
                aws:SourceArn: !Ref TopicParameter
      Queues:
        - !Ref Queue
Outputs:
  QueueArn:
    Value:
      Fn::GetAtt:
        - "Queue"
        - "Arn"
    Export:
      Name: "QueueArn"
$ aws cloudformation create-stack --stack-name my-test-stack --template-body file://template.yaml --parameters ParameterKey=TopicParameter,ParameterValue=<topicArn>
{
    "StackId": "<stackId>"
}
# ...wait...
$ aws cloudformation describe-stacks --stack-name my-test-stack --query "Stacks[0] | Outputs[0] | OutputValue"
"<queueArn>"
# Do some trivial substitution to get the QueueUrl - it's *probably* possible via the CLI, but I don't think you need me to prove that I can do it
$ aws sqs get-queue-attributes --queue-url <queueUrl> --attribute-names ApproximateNumberOfMessages --query "Attributes.ApproximateNumberOfMessages"
"0"
# The above is consistently true, even if I wait and retry after several minutes. I've confirmed that messages *are* being published from the topic via other subscriptions
$ aws sqs get-queue-attributes --queue-url <queueUrl> --attribute-names Policy --query "Attributes.Policy"
"{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"1\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"SQS:SendMessage\",\"Resource\":\"<queueUrl>\",\"Condition\":{\"ArnEquals\":{\"aws:SourceArn\":\"<topicArn>\"}}}]}"
$ aws sqs get-queue-attributes --queue-url <queueUrl> --attribute-names Policy --query "Attributes.Policy" | perl -pe 's/^.(.*?).$/$1/' | perl -pe 's/\\"/"/g' | python -m json.tool
{
    "Statement": [
        {
            "Action": "SQS:SendMessage",
            "Condition": {
                "ArnEquals": {
                    "aws:SourceArn": "<topicArn>"
                }
            },
            "Effect": "Allow",
            "Principal": "*",
            "Resource": "<queueUrl>",
            "Sid": "1"
        }
    ],
    "Version": "2012-10-17"
}
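(Incidentally, a less fragile way to get the same pretty-printed output is to have the CLI emit the raw string with --output text and pipe that straight into json.tool:)
$ aws sqs get-queue-attributes --queue-url <queueUrl> --attribute-names Policy \
    --query "Attributes.Policy" --output text | python -m json.tool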
At this point, everything looks correct. If I go to the AWS Console, I see a QueuePolicy on the queue that is exactly what I expect - but no messages.
If I re-apply the QueuePolicy, though...
$ aws sqs get-queue-attributes --queue-url <queueUrl> --attribute-names Policy --query "Attributes" > policyInFile
$ cat policyInFile
{
    "Policy": "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Sid\":\"1\",\"Effect\":\"Allow\",\"Principal\":\"*\",\"Action\":\"SQS:SendMessage\",\"Resource\":\"<queueUrl>\",\"Condition\":{\"ArnEquals\":{\"aws:SourceArn\":\"<topicArn>\"}}}]}"
}
$ aws sqs set-queue-attributes --queue-url <queueUrl> --attributes file://policyInFile
Then, a few seconds later, the queue starts receiving messages.
Even weirder, I can reproduce this same behaviour by doing the following:
set up the Stack
go to the queue in the console
confirm that the queue is not receiving messages
hit "Edit" on the queue's Policy
hit "Save" (that is, without changing anything in the policy)
observe the queue receiving messages
How can I make the QueuePolicy in the Cloudformation Stack take effect at the time of Queue Creation?
The issue was that I should have used the queue's ARN for the Resource, not the URL. I guess that, when setting a QueuePolicy on a queue (via the console or CLI, but not via CloudFormation), the resource field is overwritten with the ARN of the queue in question.
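In other words, a sketch of the fix against the template above:
QueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Sid: '1'
          Effect: Allow
          Principal: "*"
          Action: "SQS:SendMessage"
          # !Ref on an AWS::SQS::Queue returns the URL; the policy needs the ARN.
          Resource: !GetAtt Queue.Arn
          Condition:
            ArnEquals:
              aws:SourceArn: !Ref TopicParameter
    Queues:
      - !Ref Queue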

Enable cloudwatch logs for kinesis firehose cloudformation

I am trying to capture CloudWatch logs for my Firehose to find any errors when sending data to the S3 destination. I created a CloudFormation template with these logging details:
"CloudWatchLoggingOptions" : {
"Enabled" : "true",
"LogGroupName": "/aws/firehose/firehose-dev", -->firehose-dev is my firehosedeliverystream name
"LogStreamName" : "s3logs"
},
I have given the necessary IAM permissions to Firehose for creating the log group and log stream:
{
  "Sid": "",
  "Effect": "Allow",
  "Action": [
    "logs:CreateLogGroup",
    "logs:CreateLogStream",
    "logs:PutLogEvents"
  ],
  "Resource": [
    "arn:aws:logs:*:*:*"
  ]
}
When I triggered the template, I found that neither the log group nor the log stream had been created in CloudWatch Logs.
But when we give the same IAM permissions to an AWS::Lambda resource, it automatically creates a log group (i.e. /aws/lambda/mylambdaname) and sends its logs to that group. Why doesn't this scenario work for Firehose?
As a workaround, I am manually creating an AWS::Logs::LogGroup resource named /aws/firehose/firehose-dev and an AWS::Logs::LogStream resource named s3logs.
Firehose will also create the log group and log stream automatically if we configure the delivery stream through the console.
Can't Firehose create the log group and log stream automatically when configured through CloudFormation, like AWS Lambda does?
Thanks
Any help is appreciated
It's resource dependent. Some resources will create the log group for you, some will not. Sometimes the console creates them in the background. When you use CloudFormation, you usually have to do everything yourself.
In the case of Firehose, you can create the AWS::Logs::LogGroup and AWS::Logs::LogStream resources in CloudFormation. For example (YAML):
MyFirehoseLogGroup:
  Type: AWS::Logs::LogGroup
  Properties:
    RetentionInDays: 1
MyFirehoseLogStream:
  Type: AWS::Logs::LogStream
  Properties:
    LogGroupName: !Ref MyFirehoseLogGroup
Then when you define your AWS::KinesisFirehose::DeliveryStream, you could reference them:
CloudWatchLoggingOptions:
  Enabled: true
  LogGroupName: !Ref MyFirehoseLogGroup
  LogStreamName: !Ref MyFirehoseLogStream
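With explicit log resources like these, you could also (optionally) scope the earlier IAM statement down to just that log group instead of arn:aws:logs:*:*:*. A sketch, assuming the MyFirehoseLogGroup resource above:
- Effect: Allow
  Action:
    - logs:PutLogEvents
  # !GetAtt on AWS::Logs::LogGroup returns the log group's ARN.
  Resource: !GetAtt MyFirehoseLogGroup.Arn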

AWS CloudFormation Script Fails - Cognito is not allowed to use your email identity

I am trying to build a CloudFormation script that sets up a Cognito User Pool and configures it to use a custom email address for sending users their validation code in the signup process (i.e. FROM: noreply@mydomain.com).
I am getting this error when executing my AWS CloudFormation script:
"ResourceStatusReason": "Cognito is not allowed to use your email identity (Service: AWSCognitoIdentityProvider; Status Code: 400; Error Code: InvalidEmailRoleAccessPolicyException;
I have attached a policy for Cognito to use my SES email identity (e.g. noreply@mydomain.com). I manually set up and validated this email identity in SES prior to running the CloudFormation script.
Here is my CloudFormation configuration for the policy that allows Cognito to send emails on my behalf (e.g. from noreply@mydomain.com):
CognitoSESPolicy:
  Type: AWS::IAM::ManagedPolicy
  Description: "Allow Cognito to send email on behalf of email identity (e.g. noreply@example.org)"
  Properties:
    PolicyDocument:
      Version: 2012-10-17
      Statement:
        - Sid: "ucstmnt0001"
          Effect: "Allow"
          Action:
            - "ses:SendEmail"
            - "ses:SendRawEmail"
          Resource: !FindInMap [ environment, !Ref "Environment", emailARN ]

SESRole:
  Type: AWS::IAM::Role
  Description: "An IAM Role to allow Cognito to send email on behalf of email identity"
  Properties:
    RoleName: uc-cognito-ses-role
    ManagedPolicyArns:
      - Ref: CognitoSESPolicy
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Effect: Allow
          Action:
            - sts:AssumeRole
          Principal:
            Service:
              - cognito-idp.amazonaws.com
  DependsOn: CognitoSESPolicy
I am not sure what I am doing wrong here...
Answering my own question for others' benefit: AWS SES has its own managed identity for email addresses, requiring a user to verify ownership of the address before it can be used by other AWS services. My solution was to manually set up the SES email identity using the AWS portal, verify the address, and then reference the ARN of that SES identity in my CloudFormation script. Maybe AWS will have a way in the future to create the SES identity via CloudFormation, but at this time it seems a manual process is required for the initial setup.
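(For reference, the verification step can also be kicked off from the CLI; verify-email-identity sends the confirmation mail to the address:)
aws ses verify-email-identity --email-address noreply@example.org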
I recently ran into this issue and still could not find a way to add it via CloudFormation. I was able to use aws ses put-identity-policy instead:
ses_policy=$(cat << EOM
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "cognito-idp.amazonaws.com"
      },
      "Action": [
        "ses:SendEmail",
        "ses:SendRawEmail"
      ],
      "Resource": "${email_arn}"
    }
  ]
}
EOM
)

aws ses put-identity-policy \
  --identity "${email_arn}" \
  --policy-name "${policy_name}" \
  --policy "${ses_policy}"
Instead of cat you could use read, but my script was already using set -o errexit, and it wasn't worth changing just to be purist for no particular reason.
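(To confirm the policy was attached, you can read it back; these calls use the same shell variables as above:)
aws ses list-identity-policies --identity "${email_arn}"
aws ses get-identity-policies --identity "${email_arn}" --policy-names "${policy_name}"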

Status on AWS S3 cross region replication delete operations behaviour

I've been surprised to find out that file deletion is not replicated in an S3 bucket Cross-Region Replication situation, after running this simple test:
set up the simplest CRR configuration
upload a new file
check it is replicated
delete the file (not a version of the file)
So I checked the documentation and found this statement:
If you delete an object from the source bucket, the following occurs:
If you make a DELETE request without specifying an object version ID, Amazon S3 adds a delete marker. Amazon S3 deals with the delete marker as follows:
If you are using the latest version of the replication configuration, that is, you specify the Filter element in a replication configuration rule, Amazon S3 does not replicate the delete marker.
If you don't specify the Filter element, Amazon S3 assumes the replication configuration is a prior version, V1. In the earlier version, Amazon S3 handled replication of delete markers differently. For more information, see Backward Compatibility.
The latter link about backward compatibility tells me that:
When you delete an object from your source bucket without specifying an object version ID, Amazon S3 adds a delete marker. If you use V1 of the replication configuration XML, Amazon S3 replicates delete markers that resulted from user actions.[...]
In V2, Amazon S3 doesn't replicate delete markers and therefore you must set the DeleteMarkerReplication element to Disabled.
So if I sum this up:
a CRR configuration is considered V1 if there is no Filter
with CRR configuration V1, file deletion is replicated; with V2 it is not
Well, this is my configuration:
{
  "ReplicationConfiguration": {
    "Role": "arn:aws:iam::271226720751:role/service-role/s3crr_role_for_mybucket_to_myreplica",
    "Rules": [
      {
        "ID": "first replication rule",
        "Status": "Enabled",
        "Destination": {
          "Bucket": "arn:aws:s3:::myreplica"
        }
      }
    ]
  }
}
And deletion is not replicated. So it makes me think that my configuration is still considered V2 (even if I have no filter).
So, can someone confirm this presumption?
And could someone tell me what
In V2, Amazon S3 doesn't replicate delete markers and therefore you must set the DeleteMarkerReplication element to Disabled
really means?
There are two different configurations for replicating delete markers: V1 and V2.
Currently, when you enable S3 Replication (CRR or SRR) from the console, the V2 configuration is enabled by default. However, if your use case requires you to delete replicated objects whenever they are deleted from the source bucket, you need the V1 configuration.
Here is the difference between V1 and V2:
V1 configuration
The delete marker is replicated (V1 configuration). A subsequent GET request to the deleted object in both the source and the destination bucket does not return the object.
V2 configuration
The delete marker is not replicated (V2 configuration). A subsequent GET request to the deleted object returns the object only in the destination bucket.
To enable the V1 configuration (to replicate delete markers), use the policy below. Keep in mind that certain replication features, such as tag-based filtering and Replication Time Control (RTC), are only available in V2 configurations.
{
  "Role": " IAM-role-ARN ",
  "Rules": [
    {
      "ID": "Replication V1 Rule",
      "Prefix": "",
      "Status": "Enabled",
      "Destination": {
        "Bucket": "arn:aws:s3:::<destination-bucket>"
      }
    }
  ]
}
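You would then apply a configuration like this with put-bucket-replication (the file name here is just illustrative):
aws s3api put-bucket-replication \
  --bucket <source-bucket> \
  --replication-configuration file://v1-config.json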
Here is the blog that describes this behavior in detail:
https://aws.amazon.com/blogs/storage/managing-delete-marker-replication-in-amazon-s3/
I have seen exactly the same behaviour. I was unable to create a v1 situation to get DeleteMarker replication to occur.
The issue comes from AWS documentation that is still not clear.
To use DeleteMarkerReplication, you need V1 of the configuration. To let AWS know that you want V1, you need to specify a Prefix element in your configuration and no DeleteMarkerReplication element, so your first try was almost correct:
{
  "ReplicationConfiguration": {
    "Role": "arn:aws:iam::271226720751:role/service-role/s3crr_role_for_mybucket_to_myreplica",
    "Rules": [
      {
        "ID": "first replication rule",
        "Prefix": "",
        "Status": "Enabled",
        "Destination": {
          "Bucket": "arn:aws:s3:::myreplica"
        }
      }
    ]
  }
}
And of course you need the s3:ReplicateDelete permission in your policy.
I believe I've figured this out. It looks like whether the Delete Markers are replicated or not depends on the permissions in the Replication Role.
If your replication role has the permission s3:ReplicateDelete on the destination, then delete markers will be replicated. If it does not have that permission, they are not.
Below is the CloudFormation YAML for my replication role, with the ReplicateDelete permission commented out as an example. With this setup it does not replicate delete markers; uncomment the permission and it will. Note the permissions are based on what AWS actually creates if you set up the replication via the console (and they differ slightly from those in the documentation).
ReplicaRole:
  Type: AWS::IAM::Role
  Properties:
    #Path: "/service-role/"
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service:
              - s3.amazonaws.com
          Action:
            - sts:AssumeRole
    Policies:
      - PolicyName: "replication-policy"
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Resource:
                - !Sub "arn:aws:s3:::${LiveBucketName}"
                - !Sub "arn:aws:s3:::${LiveBucketName}/*"
              Action:
                - s3:Get*
                - s3:ListBucket
            - Effect: Allow
              Resource: !Sub "arn:aws:s3:::${LiveBucketName}-replica/*"
              Action:
                - s3:ReplicateObject
                - s3:ReplicateTags
                - s3:GetObjectVersionTagging
                #- s3:ReplicateDelete
Adding a comment as an answer because I cannot comment on @john-eikenberry's answer. I have tested the answer suggested by John (the s3:ReplicateDelete action), but it is not working.
Edit: A failed attempt:
I have also tried putting a bucket replication configuration with delete marker replication enabled, but it failed. The error message is:
An error occurred (MalformedXML) when calling the PutBucketReplication operation: The XML you provided was not well-formed or did not validate against our published schema
Experiment details:
Existing replication configuration:
aws s3api get-bucket-replication --bucket my-source-bucket > my-source-bucket.json
{
    "Role": "arn:aws:iam::account-number:role/s3-cross-region-replication-role",
    "Rules": [
        {
            "ID": " s3-cross-region-replication-role",
            "Priority": 1,
            "Filter": {},
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::my-destination-bucket"
            },
            "DeleteMarkerReplication": {
                "Status": "Disabled"
            }
        }
    ]
}
I then edited the configuration (saved as my-source-bucket-updated.json) to enable delete marker replication and tried to apply it:
aws s3api put-bucket-replication --bucket my-source-bucket --replication-configuration file://my-source-bucket-updated.json
{
    "Role": "arn:aws:iam::account-number:role/s3-cross-region-replication-role",
    "Rules": [
        {
            "ID": " s3-cross-region-replication-role",
            "Priority": 1,
            "Filter": {},
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::my-destination-bucket"
            },
            "DeleteMarkerReplication": {
                "Status": "Enabled"
            }
        }
    ]
}