AWS S3 POST Policy with SSE-C Algorithm, Key and MD5

I am trying to add the SSE-C algorithm, key and key MD5 to an already working policy:
{
  "expiration": "2022-11-22T18:00:16.383Z",
  "conditions": [
    {"bucket": "<bucket>"},
    {"key": "<file path1>"},
    {"x-amz-algorithm": "AWS4-HMAC-SHA256"},
    {"x-amz-credential": "AKIAIOSFODNN7EXAMPLE/20151229/us-east-1/s3/aws4_request"},
    {"x-amz-date": "20221121T180016Z"},
    ["content-length-range", 10, 20000]
  ]
}
to create another policy that applies SSE-C encryption to the file being uploaded:
"expiration" : "2022-11-22T18:00:16.383Z",
"conditions" :[
{"bucket" : "<bucket>"},
{"key" : "<file path2>"},
{"x-amz-algorithm" : "AWS4-HMAC-SHA256"},
{"x-amz-credential" : "AAKIAIOSFODNN7EXAMPLE/20151229/us-east-1/s3/aws4_request"},
{"x-amz-date" : "20221121T180016Z"},
{"x-amz-server-side-encryption-customer-algorithm" : "AES256"},
{"x-amz-server-side-encryption-customer-key" : "In3vRc+WpFCvISbI8CPbNW7OSwxlS2bcq0XY0YcpYP0="},
{"x-amz-server-side-encryption-customer-key-MD5" : "2Z32DEb90ZF370xDkf6ing=="},
["content-length-range", 10, 20000]
]
}
When I add the SSE-C-related information to the policy, the upload fails with the error below:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Invalid according to Policy: Policy Condition failed: ["eq", "$x-amz-server-side-encryption-customer-algorithm", "AES256"]</Message>
<RequestId>TWZYMWT37G1TDG7D</RequestId>
<HostId>xh+2GQv90MnMGJGNt2tvjydoKCE8AIGUMrq7SniuNb15e86Hgt+jkS0X9KExR6bgoXgaMevvvf0=</HostId>
</Error>
Not sure what is wrong with the policy. In both cases these are the headers I am including in the POST request.
Can someone please help me identify what I am doing wrong here?
Thanks in advance.
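One thing worth checking: with a POST policy, each condition has to be satisfied by a matching field in the multipart form itself, so the three SSE-C values also need to be sent as form fields, not only as HTTP headers. A minimal sketch of such an upload in Python with the requests library; the URL, file name, policy and signature are placeholders, and the other values are copied from the policy above:

import requests

url = "https://<bucket>.s3.amazonaws.com/"
fields = {
    "key": "<file path2>",
    "x-amz-algorithm": "AWS4-HMAC-SHA256",
    "x-amz-credential": "AKIAIOSFODNN7EXAMPLE/20151229/us-east-1/s3/aws4_request",
    "x-amz-date": "20221121T180016Z",
    # Each policy condition must be matched by a form field, so the SSE-C
    # values have to appear here and must match the policy byte for byte.
    "x-amz-server-side-encryption-customer-algorithm": "AES256",
    "x-amz-server-side-encryption-customer-key": "In3vRc+WpFCvISbI8CPbNW7OSwxlS2bcq0XY0YcpYP0=",
    "x-amz-server-side-encryption-customer-key-MD5": "2Z32DEb90ZF370xDkf6ing==",
    "policy": "<base64-encoded policy document>",
    "x-amz-signature": "<signature>",
}

with open("upload.bin", "rb") as f:
    # The file part must be the last field in the multipart body.
    response = requests.post(url, data=fields, files={"file": ("upload.bin", f)})
print(response.status_code, response.text)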

Related

How to change expiration time of a token in AWS

I am trying to change the expiration time of a token, i.e. the last line in this output:
[admin@dev:~]$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-instance-role
{
  "Code" : "Success",
  "LastUpdated" : "2023-02-06T07:00:00Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "myaccesskey",
  "SecretAccessKey" : "mytokenxxx",
  "Token" : "xxx",
  "Expiration" : "2023-02-06T13:00:00Z"
}
Strangely, I cannot find an easy way to do that. Is it possible to change that expiration time without creating a new IAM role?
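For reference, a minimal Python sketch of reading the same metadata document programmatically (assuming the same role name and the same IMDSv1-style access as the curl call above):

import json
import urllib.request

# Same instance-metadata endpoint as the curl call above.
role = "my-instance-role"
url = ("http://169.254.169.254/latest/meta-data/iam/"
       "security-credentials/" + role)
with urllib.request.urlopen(url) as resp:
    creds = json.load(resp)

# Expiration is set by the metadata service, which rotates these
# credentials automatically before they expire.
print(creds["Expiration"])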

How to correctly escape JSON in an AWS API Gateway VTL mapping template?

I'm trying to publish an event to AWS EventBridge via an API Gateway while transforming the event body using API Gateway mapping templates written in the Velocity Template Language (VTL), following this guide.
The event body looks like this:
{
  "ordersDelivered": [
    {
      "orderId": "a0874e2c-4ad3-4fda-8145-18cc51616ecd",
      "address": {
        "line2": "10 Broad Road",
        "city": "Altrincham",
        "zipCode": "WA15 7PC",
        "state": "Cheshire",
        "country": "United Kingdom"
      }
    }
  ]
}
and the VTL template looks like this:
#set($context.requestOverride.header.X-Amz-Target = "AWSEvents.PutEvents")
#set($context.requestOverride.header.Content-Type = "application/x-amz-json-1.1")
#set($inputRoot = $input.path('$'))
{
"Entries": [
#foreach($elem in $inputRoot.ordersDelivered)
{
"Resources" : ["$context.authorizer.clientId"],
"Detail" : "$util.escapeJavaScript($elem)",
"DetailType" : "OrderDelivered",
"EventBusName" : "hk-playground-more-sole",
"Source" : "delivery"
}#if($foreach.hasNext),#end
#end
]
}
However, on making a test call to the REST endpoint method via the API Gateway 'Test' option in the AWS console, I get a malformed request error from the EventBridge integration, as shown below:
Endpoint request body after transformations:
{
"Entries": [
{
"Resources" : [""],
"Detail" : "{orderId=a0874e2c-4ad3-4fda-8145-18cc51616ecd, address={line2=10 Broad Road, city=Altrincham, zipCode=WA15 7PC, state=Cheshire, country=United Kingdom}}",
"DetailType" : "OrderDelivered",
"EventBusName" : "hk-playground-more-sole",
"Source" : "delivery"
} ]
}
Sending request to https://events.{aws-region}.amazonaws.com/?Action=PutEvents
Received response. Status: 200, Integration latency: 32 ms
Endpoint response headers: {x-amzn-RequestId=6cd086bf-5147-4418-9498-b467ed2b6b58, Content-Type=application/x-amz-json-1.1, Content-Length=104, Date=Thu, 15 Sep 2022 10:17:44 GMT}
Endpoint response body before transformations: {"Entries":[{"ErrorCode":"MalformedDetail","ErrorMessage":"Detail is malformed."}],"FailedEntryCount":1}
Method response body after transformations: {"Entries":[{"ErrorCode":"MalformedDetail","ErrorMessage":"Detail is malformed."}],"FailedEntryCount":1}
The logs above suggest that the $elem object is not being converted to JSON, so instead of $util.escapeJavaScript($elem) I tried $util.toJson($elem), but that assigns an empty string to the Detail element and I get a 400 error. I have also tried changing the VTL template to read ordersDelivered directly using a JSONPath expression string:
#set($inputRoot = $input.path('$.ordersDelivered'))
{
"Entries": [
#foreach($elem in $inputRoot)
{
"Resources" : ["$context.authorizer.clientId"],
"Detail" : "$util.escapeJavaScript($elem)",
"DetailType" : "OrderDelivered",
"EventBusName" : "hk-playground-more-sole",
"Source" : "delivery"
}#if($foreach.hasNext),#end
#end
]
}
but I still get the same MalformedDetail error as above when testing this. Am I missing the correct way of converting the JSON for the Detail element?
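For comparison, the MalformedDetail error is consistent with the Detail value not being valid JSON: EventBridge expects Detail to be a string containing a JSON object, while the logged value above is the map's toString() form. A quick Python illustration, using a hypothetical order dict mirroring the event body, of the two representations:

import json

# Hypothetical order mirroring the request body above.
order = {
    "orderId": "a0874e2c-4ad3-4fda-8145-18cc51616ecd",
    "address": {"line2": "10 Broad Road", "city": "Altrincham"},
}

# Not valid JSON; analogous to the "{orderId=..., address={...}}" seen in the logs.
print(str(order))

# A proper JSON string; the form EventBridge accepts inside "Detail".
print(json.dumps(order))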

S3 key not in S3 event notification

Hello, I currently have an event notification set up on my S3 bucket. The notification is sent to an SNS topic, then an SQS queue, and finally a Lambda.
My ultimate goal is for my Lambda to read the event notification JSON and parse out the bucket and key.
The problem is that I see the bucket name in the JSON but not the key when I print out the 'event' object using Python. How/where should I debug to figure out what is going on? I do remember seeing the key in the JSON in previous implementations.
The JSON looks like:
{
'Records': [{
'messageId': '15d42178-c59c-4f3a-8efa-cce8a20acd5b',
'receiptHandle': 'AQEBnPN7q4+jLFQfExOytZYH69w4kvI4ohjJGFqUqOAvCjRHMfbFFvgEeLVjonZ5q4GAYyzLzDSRQmZv3+YTvE3VYqKmU+Nt0rgX824LoMMkKKMuSWBT6c1a0X5dXRJRzFaOKjpniONRg5Gdm1V9I/7mW0x+Zfi0PXr5cQZXVA1NNUdJ4tIkwtpuC+Rh/dbGFQmAo6fQDuCnpzRW1NKGGda440t3ivtUQMvrniwY8ILKVoX9pnS1rAVgVPGBUo8mXyH9ec9p/Er9O9N5Kxc3xQE44MhHUygD1iJbRROBHG9m0Mj6qbKx4uI7S4KQVWRK8hHkxYFUtP4NzhzcGP1LfY91+zG4mNweGzQkfDbvn0LG9+6guxv9dW+uGz1c3f9My7272s+ABfksvfbNRgPSgwJecg==',
'body': '{\n "Type" : "Notification",\n "MessageId" : "78cadcbb-f349-5bae-b39b-85504866b186",\n "TopicArn" : "<topic arn>",\n "Subject" : "Amazon S3 Notification",\n "Message" : "{\\"Service\\":\\"Amazon S3\\",\\"Event\\":\\"s3:TestEvent\\",\\"Time\\":\\"2021-10-21T19:01:03.083Z\\",\\"Bucket\\":\\"<s3 bucket>\\",\\"RequestId\\":\\"TCJP8AZ6S75XXXPN\\",\\"HostId\\":\\"VYNq+Jh5Hkg+Vykp2RcIy9lSca7uJyhzLPfE8tcgnt3Je9kH0I+H3zvzvJkd6IvfZKZm2jYqu4Q=\\"}",\n "Timestamp" : "2021-10-21T19:01:03.285Z",\n "SignatureVersion" : "1",\n "Signature" : "EE9xsZx8hezxh8Yhyj8DLc+VSGYowl641kHgqr8tWq2msNwOBv4KEZoTtHJ/hdnfNYLEBsR7imsfv5ZrX7nKRKL2kR8xax57tcih7GRbifIuFyrs9wAhtcuclf2NJQG4eY9OrOHHxPN3fSvNI9xduPeBrxB2TAfbTcWq4AeN0C4KriV18J2dU28ecMJGtmqK0JM+2KLEuwQe/dyYiEnEnWu5EfGweDhYCRvmB1aUPRcW4s3yOHIckklmHhBLkbmufl1me/hdO7GEGa1ju8wJDF33hmmCCSE6M7ITl9niWICBtvWlFz1Md5OiswyriRyN4LZjmvEjzRZtNwy/qMkDYA==",\n "SigningCertURL" : "<cert url>",\n "UnsubscribeURL" : "<unsubscribe url>"\n}',
'attributes': {
'ApproximateReceiveCount': '34',
'SentTimestamp': <timestamp>,
'SenderId': <senderid>,
'ApproximateFirstReceiveTimestamp': <timestamp>
},
'messageAttributes': {},
'md5OfBody': <md5>,
'eventSource': 'aws:sqs',
'eventSourceARN': <queue-arn>,
'awsRegion': 'us-east-1'
}]
}
In your Amazon SNS subscription, activate Amazon SNS raw message delivery.
This will pass the S3 event through in a cleaner form, with the body containing a string version of the JSON from S3. You'll need to use JSON.parse() (json.loads() in Python) to convert it to an object.
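A minimal sketch of the Lambda side under that setup (the handler shape is illustrative; with raw message delivery off, the S3 event sits one level deeper under Message):

import json

def handler(event, context):
    for record in event["Records"]:
        body = json.loads(record["body"])  # the SQS message body
        # With raw message delivery enabled, the body is the S3 event itself;
        # otherwise it is the SNS envelope and the S3 event is in "Message".
        s3_event = body if "Records" in body else json.loads(body["Message"])
        # Note: an s3:TestEvent (as in the body shown above) contains no
        # Records entry and therefore no object key at all.
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            print(bucket, key)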

AWS S3 Generate a presigned url with a md5 check

I'm looking to generate a presigned URL with AWS S3.
It works fine with some conditions (MIME type, for example), but I'm unable to use 'Content-MD5'.
I use the Node.js SDK and put the MD5 in the fields object.
const options = {
Bucket: bucket,
Expires: expires,
ContentType: 'multipart/form-data',
Conditions: [{ key }],
Fields: {
'Content-MD5': params.md5,
},
} as PresignedPost.Params;
if (acl) {
options.Conditions.push({ acl });
}
if (params.mimeType) {
options.Conditions.push({ contentType: params.mimeType });
}
But when I then upload the file, I would like AWS to check the uploaded file against the MD5 given in the presigned request; however, I always get this error:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>AccessDenied</Code>
<Message>Invalid according to Policy: Policy Condition failed: ["eq", "$Content-MD5", "<md5>"]</Message>
<RequestId>497312AFEEF83235</RequestId>
<HostId>KY9RxpGZzRog7hjlDk3whjAbItG/mwhpItYDL7rUNNH4BCXMfmLZsbZIPKivmSZZ3VkWxlgstOk=</HostId>
</Error>
My MD5 is generated like this in the browser (just after recording a video):
const reader = new FileReader();
reader.readAsBinaryString(blob);
reader.onloadend = () => {
const mdsum = CryptoJS.MD5(reader.result.toString());
resolve(CryptoJS.enc.Base64.stringify(mdsum));
};
Maybe that's not the way it works?
Edit:
If I add this to the upload form data (the MD5 hash is the same as the one set in the presigned request):
formData.append('Content-MD5', encodeURI(fields['Content-MD5']));
the error becomes
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>BadDigest</Code>
<Message>The Content-MD5 you specified did not match what we received.</Message>
<ExpectedDigest>2b36c76525c8d3a6dada59a6ad2867a7</ExpectedDigest>
<CalculatedDigest>+RifURVLd61O6QCT+SzhBg==</CalculatedDigest>
<RequestId>B4FF38D0FCC2E8F2</RequestId>
<HostId>yS7q200rJpBu48RNcGzsb1oGbDUrN8UK9+gkg6jGMl+EJSGeyQaSCfwfcMRUeNlJYapfmF304Oc=</HostId>
</Error>
Answer:
const reader = new FileReader();
reader.readAsBinaryString(blob);
reader.onloadend = () => {
  // Parse the binary string as Latin1 so CryptoJS hashes the blob's raw bytes
  // instead of a UTF-8 re-encoding of the string, then base64-encode the digest.
  resolve(CryptoJS.enc.Base64.stringify(CryptoJS.MD5(CryptoJS.enc.Latin1.parse(reader.result.toString()))));
};
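For comparison, the value S3 expects in Content-MD5 is the base64-encoded 16-byte MD5 digest of the file's raw bytes; a small Python equivalent (hypothetical file name) that produces that kind of value:

import base64
import hashlib

# Hypothetical file standing in for the recorded video blob.
with open("recording.webm", "rb") as f:
    digest = hashlib.md5(f.read()).digest()  # 16 raw bytes, not a hex string

# Base64 of the raw digest, e.g. something like "+RifURVLd61O6QCT+SzhBg==".
print(base64.b64encode(digest).decode())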

Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 412 Precondition Failed while removing bucket IAM Member

In order to remove identities from a Google Cloud bucket, I use the example provided in the GCP examples repo (here). I am wondering if there is something I am missing; I have the correct root credentials for the cloud account, as well as the project ownership credentials.
Here is the original policy:
Policy{
  bindings={
    roles/storage.legacyBucketOwner=[projectOwner:myaccount],
    roles/storage.objectAdmin=[serviceAccount:company-kiehn-log#myaccount.iam.gserviceaccount.com, serviceAccount:company-hammes-file#myaccount.iam.gserviceaccount.com, serviceAccount:company-howe-log#myaccount.iam.gserviceaccount.com, serviceAccount:company-doyle-log#myaccount.iam.gserviceaccount.com, serviceAccount:customer-6a53ee71-95eb-49b2-8a#myaccount.iam.gserviceaccount.com, serviceAccount:company-kiehn-file#myaccount.iam.gserviceaccount.com, serviceAccount:company-howe-file#myaccount.iam.gserviceaccount.com, serviceAccount:company-satterfield-log#myaccount.iam.gserviceaccount.com, serviceAccount:customer-0c1e8536-8bf5-46f4-8e#myaccount.iam.gserviceaccount.com, serviceAccount:company-deckow-log#myaccount.iam.gserviceaccount.com],
    roles/storage.legacyBucketReader=[projectViewer:myaccount],
    roles/storage.objectViewer=[serviceAccount:company-block-log#myaccount.iam.gserviceaccount.com]
  },
  etag=CGg=,
  version=0
}
Here is my code snippet:
Read the bucket policy and extract the unwanted identities:
Set<Identity> wrongIdentities = new HashSet<Identity>();
Role roler = null;
Policy p = Cache.GCSStorage.getIamPolicy("bucketxyz");
Map<Role, Set<Identity>> policyBindings = p.getBindings();
for (Map.Entry<Role, Set<Identity>> entry : policyBindings.entrySet()) {
Set<Identity> setidentities = entry.getValue();
for (Identity set : setidentities) {
if (!(entry.getKey().getValue()
.equals("serviceAccount:attss#myaccount.iam.gserviceaccount.com"))) {
wrongIdentities.add(set);
}
}
for (Identity identity : wrongIdentities) {
System.out.println("identity: " + identity);
System.out.println(removeBucketIamMember("bucektxyz",
roler, identity, p));
}
}
Remove the unwanted identities from the policy:
public static Policy removeBucketIamMember(String bucketName, Role role,
    Identity identity, Policy policy) {
  Policy updatedPolicy = Cache.GCSStorage.setIamPolicy(bucketName,
      policy.toBuilder().removeIdentity(role, identity).build());
  return updatedPolicy;
}
However, I am seeing the error:
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 412
Precondition Failed
{
"code" : 412,
"errors" : [ {
"domain" : "global",
"location" : "If-Match",
"locationType" : "header",
"message" : "Precondition Failed",
"reason" : "conditionNotMet"
} ],
"message" : "Precondition Failed"
}
at com.google.api.client.googleapis.json.GoogleJsonResponseException.from(GoogleJsonResponseException.java:146)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:113)
at com.google.api.client.googleapis.services.json.AbstractGoogleJsonClientRequest.newExceptionOnError(AbstractGoogleJsonClientRequest.java:40)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest$1.interceptResponse(AbstractGoogleClientRequest.java:321)
at com.google.api.client.http.HttpRequest.execute(HttpRequest.java:1065)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:419)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.executeUnparsed(AbstractGoogleClientRequest.java:352)
at com.google.api.client.googleapis.services.AbstractGoogleClientRequest.execute(AbstractGoogleClientRequest.java:469)
at com.google.cloud.storage.spi.v1.HttpStorageRpc.setIamPolicy(HttpStorageRpc.java:886)
... 9 more
When modifying a Cloud Storage bucket or object IAM policy, it is important to first read the current policy. Part of the policy content is an etag, and the updated policy must include the same etag. The etag looks like: etag=CGg=.
In this question the policy update is failing with HTTP error 412 Precondition Failed. This error is caused by the policy etag being incorrect. Since a policy update replaces the existing policy, the etag helps prevent two updates from overwriting each other.
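The same read-modify-write pattern, sketched here with the Cloud Storage Python client for brevity (bucket and member names are illustrative): writing back the policy object that was just read means its etag accompanies the update, so the precondition check passes.

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("bucketxyz")

# Read the current policy; the returned object carries the current etag.
policy = bucket.get_iam_policy()

# Remove an unwanted member (illustrative name) from a role binding.
policy["roles/storage.objectViewer"].discard(
    "serviceAccount:company-block-log@myaccount.iam.gserviceaccount.com")

# Write back the SAME policy object so its etag accompanies the update;
# a stale or missing etag is what produces 412 Precondition Failed.
bucket.set_iam_policy(policy)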