Status of AWS S3 cross-region replication delete operations behaviour

I was surprised to find that file deletion was not replicated in an S3 bucket Cross-Region Replication setup, after running this simple test:
set up the simplest possible CRR configuration
upload a new file
check that it is replicated
delete the file (not a version of the file)
So I checked the documentation and found this statement:
If you delete an object from the source bucket, the following occurs:
If you make a DELETE request without specifying an object version ID, Amazon S3 adds a delete marker. Amazon S3 deals with the delete marker as follows:
If you use the latest version of the replication configuration, that is, you specify the Filter element in a replication configuration rule, Amazon S3 does not replicate the delete marker.
If you don't specify the Filter element, Amazon S3 assumes the replication configuration is the prior version, V1. In the earlier version, Amazon S3 handled replication of delete markers differently.
For more information, see Backward Compatibility.
The latter link, to backward compatibility, tells me:
When you delete an object from your source bucket without specifying an object version ID, Amazon S3 adds a delete marker. If you use V1 of the replication configuration XML, Amazon S3 replicates delete markers that resulted from user actions.[...]
In V2, Amazon S3 doesn't replicate delete markers and therefore you must set the DeleteMarkerReplication element to Disabled.
So if I sum this up:
the CRR configuration is considered V1 if there is no Filter element
with CRR configuration V1, file deletion is replicated; with V2, it is not
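If I read that correctly, the two rule shapes would look something like this (illustrative sketches, not my actual configuration):
A V1-style rule (Prefix present, no Filter), where delete markers are replicated:
{
    "ID": "rule",
    "Prefix": "",
    "Status": "Enabled",
    "Destination": { "Bucket": "arn:aws:s3:::myreplica" }
}
A V2-style rule (Filter present), where delete markers are not replicated and DeleteMarkerReplication must be set to Disabled:
{
    "ID": "rule",
    "Filter": {},
    "Priority": 1,
    "DeleteMarkerReplication": { "Status": "Disabled" },
    "Status": "Enabled",
    "Destination": { "Bucket": "arn:aws:s3:::myreplica" }
}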
Well, this is my configuration:
{
    "ReplicationConfiguration": {
        "Role": "arn:aws:iam::271226720751:role/service-role/s3crr_role_for_mybucket_to_myreplica",
        "Rules": [
            {
                "ID": "first replication rule",
                "Status": "Enabled",
                "Destination": {
                    "Bucket": "arn:aws:s3:::myreplica"
                }
            }
        ]
    }
}
And deletion is not replicated. So this makes me think that my configuration is treated as V2, even though I have no Filter.
So, can someone confirm this presumption?
And could someone tell me what the following statement actually means?
In V2, Amazon S3 doesn't replicate delete markers and therefore you must set the DeleteMarkerReplication element to Disabled

There are two different configurations for replicating delete markers: V1 and V2.
Currently, when you enable S3 Replication (CRR or SRR) from the console, the V2 configuration is enabled by default. However, if your use case requires replicated objects to be deleted whenever they are deleted from the source bucket, you need the V1 configuration.
Here is the difference between V1 and V2:
V1 configuration
The delete marker is replicated (V1 configuration). A subsequent GET request to the deleted object in both the source and the destination bucket does not return the object.
V2 configuration
The delete marker is not replicated (V2 configuration). A subsequent GET request to the deleted object returns the object only in the destination bucket.
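If you want to verify which behavior you are getting, a quick check (bucket and key names are placeholders) is to HEAD the object in both buckets after the delete:
aws s3api head-object --bucket <source-bucket> --key test.txt
# 404 either way: the delete marker hides the object in the source

aws s3api head-object --bucket <destination-bucket> --key test.txt
# V1: 404, because the delete marker was replicated
# V2: 200, the object is still readable in the destination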
To enable the V1 configuration (replicating delete markers), use the policy below. Keep in mind that certain replication features, such as tag-based filtering and Replication Time Control (RTC), are only available in V2 configurations.
{
    "Role": " IAM-role-ARN ",
    "Rules": [
        {
            "ID": "Replication V1 Rule",
            "Prefix": "",
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::<destination-bucket>"
            }
        }
    ]
}
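To apply this configuration, something like the following should work (note that put-bucket-replication takes the Role/Rules object directly, without a ReplicationConfiguration wrapper):
aws s3api put-bucket-replication \
    --bucket <source-bucket> \
    --replication-configuration file://v1-replication.json
where v1-replication.json contains the policy above.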
Here is the blog post that describes this behavior in detail:
https://aws.amazon.com/blogs/storage/managing-delete-marker-replication-in-amazon-s3/

I have seen exactly the same behaviour. I was unable to create a V1 situation to get delete marker replication to occur.

The issue comes from AWS documentation that is still not clear.
To use DeleteMarkerReplication, you need V1 of the configuration. To let AWS know that you want V1, you need to specify a Prefix element in your configuration and no DeleteMarkerReplication element, so your first try was almost correct.
{
    "ReplicationConfiguration": {
        "Role": "arn:aws:iam::271226720751:role/service-role/s3crr_role_for_mybucket_to_myreplica",
        "Rules": [
            {
                "ID": "first replication rule",
                "Prefix": "",
                "Status": "Enabled",
                "Destination": {
                    "Bucket": "arn:aws:s3:::myreplica"
                }
            }
        ]
    }
}
And of course you need the s3:ReplicateDelete permission in your policy.
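For reference, a minimal destination-side statement including that permission could look like this (a sketch only; the bucket name is taken from the example above):
{
    "Effect": "Allow",
    "Action": [
        "s3:ReplicateObject",
        "s3:ReplicateTags",
        "s3:ReplicateDelete"
    ],
    "Resource": "arn:aws:s3:::myreplica/*"
}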

I believe I've figured this out. It looks like whether the Delete Markers are replicated or not depends on the permissions in the Replication Role.
If your replication role has the permission s3:ReplicateDelete on the destination, then delete markers will be replicated. If it does not have that permission, they are not.
Below is the CloudFormation YAML for my replication role, with the ReplicateDelete permission commented out as an example. With this setup it does not replicate delete markers; uncomment the permission and it will. Note that the permissions are based on what AWS actually creates if you set up the replication via the console (and they differ slightly from those in the documentation).
ReplicaRole:
    Type: AWS::IAM::Role
    Properties:
        #Path: "/service-role/"
        AssumeRolePolicyDocument:
            Version: '2012-10-17'
            Statement:
                - Effect: Allow
                  Principal:
                      Service:
                          - s3.amazonaws.com
                  Action:
                      - sts:AssumeRole
        Policies:
            - PolicyName: "replication-policy"
              PolicyDocument:
                  Version: "2012-10-17"
                  Statement:
                      - Effect: Allow
                        Resource:
                            - !Sub "arn:aws:s3:::${LiveBucketName}"
                            - !Sub "arn:aws:s3:::${LiveBucketName}/*"
                        Action:
                            - s3:Get*
                            - s3:ListBucket
                      - Effect: Allow
                        Resource: !Sub "arn:aws:s3:::${LiveBucketName}-replica/*"
                        Action:
                            - s3:ReplicateObject
                            - s3:ReplicateTags
                            - s3:GetObjectVersionTagging
                            #- s3:ReplicateDelete

Adding a comment as an answer because I cannot comment on #john-eikenberry's answer. I have tested the answer suggested by John (Action "s3:ReplicateDelete") but it is not working.
Edit: a failed attempt:
I have also tried to put a bucket replication configuration with delete marker replication enabled, but it failed. The error message is:
An error occurred (MalformedXML) when calling the PutBucketReplication operation: The XML you provided was not well-formed or did not validate against our published schema
Experiment details:
Existing replication configuration:
aws s3api get-bucket-replication --bucket my-source-bucket > my-source-bucket.json
{
    "Role": "arn:aws:iam::account-number:role/s3-cross-region-replication-role",
    "Rules": [
        {
            "ID": "s3-cross-region-replication-role",
            "Priority": 1,
            "Filter": {},
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::my-destination-bucket"
            },
            "DeleteMarkerReplication": {
                "Status": "Disabled"
            }
        }
    ]
}
aws s3api put-bucket-replication --bucket my-source-bucket --replication-configuration file://my-source-bucket-updated.json
where my-source-bucket-updated.json contains:
{
    "Role": "arn:aws:iam::account-number:role/s3-cross-region-replication-role",
    "Rules": [
        {
            "ID": "s3-cross-region-replication-role",
            "Priority": 1,
            "Filter": {},
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::my-destination-bucket"
            },
            "DeleteMarkerReplication": {
                "Status": "Enabled"
            }
        }
    ]
}
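Based on the answer above, the configuration that should validate as V1 drops Filter, Priority and DeleteMarkerReplication and uses Prefix instead (an untested sketch, same names as my experiment):
{
    "Role": "arn:aws:iam::account-number:role/s3-cross-region-replication-role",
    "Rules": [
        {
            "ID": "s3-cross-region-replication-role",
            "Prefix": "",
            "Status": "Enabled",
            "Destination": {
                "Bucket": "arn:aws:s3:::my-destination-bucket"
            }
        }
    ]
}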

Related

Enabling Logs for AWS WAF WebAcl does not work in CDK

My goal is to enable logging for a regional WebAcl via AWS CDK. This seems to be possible via CloudFormation, and the appropriate constructs exist in CDK. But when using the following code to create a Log Group and link it in a LoggingConfiguration ...
const webAclLogGroup = new LogGroup(scope, "awsWafLogs", {
    logGroupName: `aws-waf-logs`
});

// Create logging configuration with log group as destination
new CfnLoggingConfiguration(scope, "webAclLoggingConfiguration", {
    logDestinationConfigs: webAclLogGroup.logGroupArn, // Arn of LogGroup
    resourceArn: aclArn // Arn of Acl
});
... I get an exception during cdk deploy, stating that the string in the LogDestinationConfig is not a correct ARN (some parts of the ARN in the log messages have been removed):
Resource handler returned message: "Error reason: The ARN isn't valid. A valid ARN begins with arn: and includes other information separated by colons or slashes., field: LOG_DESTINATION, parameter: arn:aws:logs:xxx:xxx:xxx-awswaflogsF99ED1BA-PAeH9Lt2Y3fi:* (Service: Wafv2, Status Code: 400, Request ID: xxx, Extended Request ID: null)"
I cannot see an error in the generated CloudFormation code after cdk synth:
"webAclLoggingConfiguration": {
"id": "webAclLoggingConfiguration",
"path": "xxx/xxx/webAclLoggingConfiguration",
"attributes": {
"aws:cdk:cloudformation:type": "AWS::WAFv2::LoggingConfiguration",
"aws:cdk:cloudformation:props": {
"logDestinationConfigs": [
{
"Fn::GetAtt": [
{
"Ref": "awsWafLogs58D3FD01"
},
"Arn"
]
}
],
"resourceArn": {
"Fn::GetAtt": [
"webACL",
"Arn"
]
}
}
},
"constructInfo": {
"fqn": "aws-cdk-lib.aws_wafv2.CfnLoggingConfiguration",
"version": "2.37.1"
}
},
I'm using CDK with TypeScript, and the CDK version is currently set to 2.37.1, but it also did not work with 2.16.0.
WAF has particular requirements for the naming and format of logging destination configs, as described and shown in their docs.
Specifically, the ARN of the Log Group cannot end in :*, which unfortunately is exactly what CloudFormation returns for a Log Group ARN.
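For illustration (made-up account and region), the two ARN forms differ only in the trailing :*:
arn:aws:logs:eu-west-1:111122223333:log-group:aws-waf-logs:*    (what Fn::GetAtt [LogGroup, Arn] returns)
arn:aws:logs:eu-west-1:111122223333:log-group:aws-waf-logs      (what WAF expects in logDestinationConfigs)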
A workaround is to construct the required ARN format manually, as shown below, which omits the :* suffix. Also note that logDestinationConfigs takes a list of strings, though it must contain exactly one element.
const webAclLogGroup = new LogGroup(scope, "awsWafLogs", {
    logGroupName: `aws-waf-logs`
});

// Create logging configuration with log group as destination
new CfnLoggingConfiguration(scope, "webAclLoggingConfiguration", {
    logDestinationConfigs: [
        // Construct the different ARN format from the logGroupName
        Stack.of(this).formatArn({
            arnFormat: ArnFormat.COLON_RESOURCE_NAME,
            service: "logs",
            resource: "log-group",
            resourceName: webAclLogGroup.logGroupName,
        })
    ],
    resourceArn: aclArn // Arn of Acl
});
PS: I work for AWS on the CDK team.

Giving access to everything within S3 bucket

Does anyone know if I can use a wildcard and give access to everything within an S3 bucket?
Instead of adding every location explicitly like I am currently doing?
const policyDoc = new PolicyDocument({
    statements: [
        new PolicyStatement({
            sid: 'Grant role to read/write to S3 bucket',
            resources: [
                `${this.attrArn}`,
                `${this.attrArn}/*`,
                `${this.attrArn}/emailstore`,
                `${this.attrArn}/emailstore/*`,
                `${this.attrArn}/attachments`,
                `${this.attrArn}/attachments/*`
            ],
            actions: ['s3:*'],
            effect: Effect.ALLOW,
            principals: props.allowedArnPrincipals
        })
    ]
});
You should be able to use:
resources: [
    `${this.attrArn}`,
    `${this.attrArn}/*`
],
The first one gives permission for actions on the bucket itself (eg ListBucket), while /* gives permission for actions on objects inside the bucket (eg GetObject).
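Applied to the snippet from the question, the whole statement collapses to this (same constructs as above, just fewer resources):
const policyDoc = new PolicyDocument({
    statements: [
        new PolicyStatement({
            sid: 'Grant role to read/write to S3 bucket',
            resources: [
                `${this.attrArn}`,    // the bucket itself (ListBucket, etc.)
                `${this.attrArn}/*`   // every object in it (GetObject, etc.)
            ],
            actions: ['s3:*'],
            effect: Effect.ALLOW,
            principals: props.allowedArnPrincipals
        })
    ]
});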

aws cloudformation WAF geo location condition

Trying to create a CloudFormation template to configure WAF with a geo location condition. Couldn't find the right template yet. Any pointers would be appreciated.
http://docs.aws.amazon.com/waf/latest/developerguide/web-acl-geo-conditions.html
Unfortunately, the actual answer (as of this writing, July 2018) is that you cannot create geo match sets directly in CloudFormation. You can create them via the CLI or SDK, then reference them in the DataId field of a WAFRule's Predicates property.
Creating a GeoMatchSet with one constraint via CLI:
aws waf-regional get-change-token
aws waf-regional create-geo-match-set --name my-geo-set --change-token <token>
aws waf-regional get-change-token
aws waf-regional update-geo-match-set --change-token <new_token> --geo-match-set-id <id> --updates '[ { "Action": "INSERT", "GeoMatchConstraint": { "Type": "Country", "Value": "US" } } ]'
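Each mutating waf-regional call consumes a change token, which is why get-change-token is run again before update-geo-match-set: tokens are single-use.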
Now reference that GeoMatchSet id in the CloudFormation:
"WebAclGeoRule": {
"Type": "AWS::WAFRegional::Rule",
"Properties": {
...
"Predicates": [
{
"DataId": "00000000-1111-2222-3333-123412341234" // id from create-geo-match-set
"Negated": false,
"Type": "GeoMatch"
}
]
}
}
There is no documentation for it, but it is possible to create the GeoMatchSet in serverless/CloudFormation.
I used the following in serverless:
Resources:
    Geos:
        Type: "AWS::WAFRegional::GeoMatchSet"
        Properties:
            Name: geo
            GeoMatchConstraints:
                - Type: "Country"
                  Value: "IE"
This translates to the following in CloudFormation:
"Geos": {
"Type": "AWS::WAFRegional::GeoMatchSet",
"Properties": {
"Name": "geo",
"GeoMatchConstraints": [
{
"Type": "Country",
"Value": "IE"
}
]
}
}
That can then be referenced when creating a rule:
(serverless):
Resources:
    MyRule:
        Type: "AWS::WAFRegional::Rule"
        Properties:
            Name: waf
            Predicates:
                - DataId:
                      Ref: "Geos"
                  Negated: false
                  Type: "GeoMatch"
(cloudformation):
"MyRule": {
    "Type": "AWS::WAFRegional::Rule",
    "Properties": {
        "Name": "waf",
        "Predicates": [
            {
                "DataId": {
                    "Ref": "Geos"
                },
                "Negated": false,
                "Type": "GeoMatch"
            }
        ]
    }
}
I'm afraid your question is too vague to solicit a helpful response. The CloudFormation User Guide (PDF) defines many different WAF / CloudFront / R53 resources that perform various forms of geo match / geo blocking. The link you provided seems to cover a subset of Web Access Control Lists (Web ACLs); see AWS::WAF::WebACL on page 2540.
I suggest you have a look and if you are still stuck, actually describe what it is you are trying to achieve.
Note that the term you used: "geo location condition" doesn't directly relate to an AWS capability that I'm aware of.
Finally, if you are referring to https://aws.amazon.com/about-aws/whats-new/2017/10/aws-waf-now-supports-geographic-match/, then the latest Cloudformation User Guide doesn't seem to have been updated yet to reflect this.

ValidationException: Before you can proceed, you must enable a service-linked role to give Amazon ES permissions to access your VPC

I am trying to create a VPC-controlled Elasticsearch Service domain on AWS. The problem is I keep getting the following error when I run the code below: 'ValidationException: Before you can proceed, you must enable a service-linked role to give Amazon ES permissions to access your VPC'.
const AWS = require('aws-sdk');
AWS.config.update({region: '<aws-datacenter>'});

const accessPolicies = {
    Statement: [{
        Effect: "Allow",
        Principal: {
            AWS: "*"
        },
        Action: "es:*",
        Resource: "arn:aws:es:<dc>:<accountid>:domain/<domain-name>/*"
    }]
};

const params = {
    DomainName: '<domain>', /* required */
    AccessPolicies: JSON.stringify(accessPolicies),
    AdvancedOptions: {
        EBSEnabled: "true",
        VolumeType: "io1",
        VolumeSize: "100",
        Iops: "1000"
    },
    EBSOptions: {
        EBSEnabled: true,
        Iops: 1000,
        VolumeSize: 100,
        VolumeType: "io1"
    },
    ElasticsearchClusterConfig: {
        DedicatedMasterCount: 3,
        DedicatedMasterEnabled: true,
        DedicatedMasterType: "m4.large.elasticsearch",
        InstanceCount: 2,
        InstanceType: 'm4.xlarge.elasticsearch',
        ZoneAwarenessEnabled: true
    },
    ElasticsearchVersion: '5.5',
    SnapshotOptions: {
        AutomatedSnapshotStartHour: 3
    },
    VPCOptions: {
        SubnetIds: [
            '<redacted>',
            '<redacted>'
        ],
        SecurityGroupIds: [
            '<redacted>'
        ]
    }
};

const es = new AWS.ES();
es.createElasticsearchDomain(params, function (err, data) {
    if (err) {
        console.log(err, err.stack); // an error occurred
    } else {
        console.log(JSON.stringify(data, null, 4)); // successful response
    }
});
The problem is I get this error: ValidationException: Before you can proceed, you must enable a service-linked role to give Amazon ES permissions to access your VPC. I cannot seem to figure out how to create this service linked role for the elastic search service. In the aws.amazon.com IAM console I cannot select that service for a role. I believe it is supposed to be created automatically.
Has anybody ran into this or know the way to fix it?
The service-linked role can be created using the AWS CLI.
aws iam create-service-linked-role --aws-service-name opensearchservice.amazonaws.com
Previous answer: before the service was renamed, you would do the following:
aws iam create-service-linked-role --aws-service-name es.amazonaws.com
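To confirm the role exists afterwards, you can run the following (the role name is the one the service creates, also mentioned in the workaround answer below):
aws iam get-role --role-name AWSServiceRoleForAmazonElasticsearchService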
You can now create a service-linked role in a CloudFormation template, similar to the Terraform answer by #htaccess. See the documentation for the CloudFormation syntax for Service-Linked Roles for more details
YourRoleNameHere:
    Type: 'AWS::IAM::ServiceLinkedRole'
    Properties:
        AWSServiceName: es.amazonaws.com
        Description: 'Role for ES to access resources in my VPC'
For Terraform users who hit this error, you can use the aws_iam_service_linked_role resource to create a service-linked role for the ES service:
resource "aws_iam_service_linked_role" "es" {
aws_service_name = "es.amazonaws.com"
description = "Allows Amazon ES to manage AWS resources for a domain on your behalf."
}
This resource was added in Release 1.15.0 (April 18, 2018) of the AWS Provider.
Creating an Elasticsearch domain with VPC access using the aws-sdk/CloudFormation is currently not supported. The Elasticsearch service requires a special service-linked role to create the network interfaces in the specified VPC. This is currently possible using the console or CLI (see #Oscar Barrett's answer).
However, there is a workaround to get this working and it is described as follows:
Create a test Elasticsearch domain with VPC access using the console.
This will create a service-linked role named AWSServiceRoleForAmazonElasticsearchService. [Note: you cannot create a role with this name manually or through the console.]
Once this role is created, use the aws-sdk or CloudFormation to create the Elasticsearch domain with VPC.
You can delete the test Elasticsearch domain later.
Update: The more correct way to create the service role is described in #Oscar Barrett's answer. I considered deleting my answer, but the other facts about the actual issue are still relevant, so I'm keeping it here.
Do it yourself in CDK:
const serviceLinkedRole = new cdk.CfnResource(this, "es-service-linked-role", {
type: "AWS::IAM::ServiceLinkedRole",
properties: {
AWSServiceName: "es.amazonaws.com",
Description: "Role for ES to access resources in my VPC"
}
});
const esDomain = new es.CfnDomain(this, "es", { ... });
esDomain.node.addDependency(serviceLinkedRole);
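The explicit addDependency call matters here: nothing in the domain references the service-linked role, so without it CloudFormation may create both resources in parallel and the domain can still fail with the same ValidationException.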

How to create and assign API Key to a created stage-API using serverless?

I want to create a secure APIG using serverless. In my current "s-function.json" I already have:
"apiKeyRequired": true,
And in my "s-resources-cf.json" I already have:
"AWSApiKey": {
"Type": "AWS::ApiGateway::ApiKey",
"Properties" : {
"Description" : "ApiKey for secure the connections to the xxx API",
"Enabled" : true
}
}
It correctly creates everything: a Lambda, an APIG for that Lambda (including CORS), and the API Key. But I need to manually "assign" the key to the generated APIG stage. Do you have any ideas on how I could do this automatically using serverless?
I've read the AWS documentation about the feature I want (and it seems to be possible) here: AWS CloudFormation API Key
The documentation shows that it can be done like this:
"ApiKey": {
"Type": "AWS::ApiGateway::ApiKey",
"DependsOn": ["TestAPIDeployment", "Test"],
"Properties": {
"Name": "TestApiKey",
"Description": "CloudFormation API Key V1",
"Enabled": "true",
"StageKeys": [{
"RestApiId": { "Ref": "RestApi" },
"StageName": "Test"
}]
}
}
But I don't know how to add a reference to the APIG automatically created by serverless, nor how to wait for that APIG to be created.
You can specify a list of API keys to be used by your service Rest API by adding an apiKeys array property to the provider object in serverless.yml. You'll also need to explicitly specify which endpoints are private and require one of the API keys to be included in the request, by adding a private boolean property to the http event object you want to set as private. API keys are created globally, so if you want to deploy your service to different stages, make sure your API key contains a stage variable, as defined below. When using API keys, you can optionally define usage plan quota and throttle using the usagePlan object.
Here's an example configuration for setting API keys for your service Rest API:
service: my-service
provider:
    name: aws
    apiKeys:
        - myFirstKey
        - ${opt:stage}-myFirstKey
        - ${env:MY_API_KEY} # you can hide it in a serverless variable
    usagePlan:
        quota:
            limit: 5000
            offset: 2
            period: MONTH
        throttle:
            burstLimit: 200
            rateLimit: 100
functions:
    hello:
        events:
            - http:
                  path: user/create
                  method: get
                  private: true
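Once deployed, callers of the private endpoint must send the key in the x-api-key header, along these lines (placeholder URL and key value):
curl -H "x-api-key: <api-key-value>" \
    https://<api-id>.execute-api.<region>.amazonaws.com/<stage>/user/create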
For more info read the following doc:
https://serverless.com/framework/docs/providers/aws/events/apigateway