SQS Encryption using CMK - amazon-web-services

I am trying to read messages from an encrypted SQS queue. Objects land in an S3 bucket -> S3 event triggered -> message sent to SQS -> SQS triggers a Lambda function to process it.
I have got this working using a customer managed CMK. However, I can't get it working using the AWS managed CMK, e.g. alias/aws/sqs.
The messages just show up as in flight and never invoke the Lambda function.
As per the AWS documentation here https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-server-side-encryption.html#sqs-encryption-what-does-sse-encrypt, "If you don't specify a custom CMK, Amazon SQS uses the AWS managed CMK for Amazon SQS." But we can't attach any policies to the AWS managed CMK, e.g.:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Service": "<<service>>.amazonaws.com"
    },
    "Action": [
      "kms:GenerateDataKey*",
      "kms:Decrypt"
    ],
    "Resource": "*"
  }]
}
My question is: is it possible to use the AWS managed CMK (alias/aws/sqs) on an SQS queue and have Lambda functions be able to read from that queue?
There is a section in the above URL called Enable Compatibility between AWS Services Such as Amazon CloudWatch Events, Amazon S3, and Amazon SNS and Encrypted Queues.
It mentions attaching a policy to the CMK, yet there is still an option to select alias/aws/sqs, so I was wondering if I was missing something here.

I spoke with AWS, and a KMS AWS managed key will not work in this scenario. We can't change the key policy for AWS managed KMS keys, so it isn't possible for the scenario: S3 Bucket -> Trigger S3 Event -> Message sent to SQS -> SQS triggers Lambda to Process.
I used a KMS customer managed key instead and it worked fine.
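For reference, a rough CLI sketch of the working setup (the queue URL, region and key ID are placeholders, not the actual values): create a customer managed key whose key policy contains a statement like the one above, with s3.amazonaws.com as the service principal, and point the queue at it:
# The customer managed key's policy must allow s3.amazonaws.com to call
# kms:GenerateDataKey* and kms:Decrypt (see the policy statement in the question).
aws sqs set-queue-attributes \
    --queue-url https://sqs.REGION.amazonaws.com/ACCOUNT_ID/my-encrypted-queue \
    --attributes KmsMasterKeyId=arn:aws:kms:REGION:ACCOUNT_ID:key/KEY_ID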

Related

Lambda trigger doesn't replicate to SQS source across accounts

I'm trying to add an SQS as a source/trigger to a lambda. I can do this just fine if both components reside within the same account. When I add the trigger to the lambda, the lambda trigger configuration replicates over to the SQS queue to pair the two.
When I try the same thing with an SQS queue in a different (remote) account, the Lambda trigger is established, but the remote SQS queue doesn't show a trigger configured. This seems to result in the trigger not working when a message is added to the queue. The SQS policy on the remote queue also explicitly grants permissions to the other account.
Any thoughts?
Scenario:
Amazon SQS queue in Account-A
AWS Lambda function in Account-B
Goal: SQS triggers Lambda function
Since this involves cross-account access, you will need to grant permissions for the IAM Role used by the Lambda function to access the SQS queue. (Lambda pulls from the queue, rather than SQS pushing to Lambda.)
The steps are:
In the SQS queue, edit the Access Policy to include permission for the IAM Role used by the Lambda function:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT-1:root"
      },
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:ap-southeast-2:ACCOUNT-1:queue-name"
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT-2:role/lambda-role-name"
      },
      "Action": [
        "SQS:ChangeMessageVisibility",
        "SQS:DeleteMessage",
        "SQS:ReceiveMessage",
        "SQS:GetQueueAttributes"
      ],
      "Resource": "arn:aws:sqs:ap-southeast-2:ACCOUNT-1:queue-name"
    }
  ]
}
The first statement of this policy is created automatically by SQS and allows the owning account (ACCOUNT-1, i.e. Account-A) to use the queue. The second statement allows the IAM Role from Account-B (ACCOUNT-2) to access the queue in Account-A; SQS generated it when I provided the ARN of the IAM Role, but I had to add SQS:GetQueueAttributes because the Lambda function calls that too.
In the AWS Lambda function in Account-B, click + Trigger, select SQS and enter the ARN of the SQS queue from Account-A
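The same step can also be done from the CLI; a rough sketch with placeholder values (the function name and batch size are assumptions), run in Account-B:
# The event source ARN is the queue in Account-A (ACCOUNT-1 in the policy above).
aws lambda create-event-source-mapping \
    --function-name my-function \
    --event-source-arn arn:aws:sqs:ap-southeast-2:ACCOUNT-1:queue-name \
    --batch-size 10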
I tried all this and was successfully able to put a message in the SQS queue in Account-A, and then saw the Lambda function in Account-B process it.
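One further note: because this is cross-account access, the IAM Role used by the Lambda function must also allow these SQS actions in its own identity-based policy, not just in the queue policy. If those permissions are missing, a rough sketch of adding them, with placeholder names:
# Run in Account-2; the resource is the Account-1 queue.
aws iam put-role-policy \
    --role-name lambda-role-name \
    --policy-name allow-cross-account-sqs \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": [
          "sqs:ReceiveMessage",
          "sqs:DeleteMessage",
          "sqs:GetQueueAttributes",
          "sqs:ChangeMessageVisibility"
        ],
        "Resource": "arn:aws:sqs:ap-southeast-2:ACCOUNT-1:queue-name"
      }]
    }'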

AWS Firehose delivery to Cross Account Elasticsearch in VPC

I have an Elasticsearch domain inside a VPC running in Account A.
I want to deliver logs from Firehose in Account B to the Elasticsearch domain in Account A.
Is it possible?
When I try to create the delivery stream from the AWS CLI, I get the exception below:
$: /usr/local/bin/aws firehose create-delivery-stream --cli-input-json file://input.json --profile devops
An error occurred (InvalidArgumentException) when calling the CreateDeliveryStream operation: Verify that the IAM role has access to the ElasticSearch domain.
The same IAM role and the same input.json work when pointed at the Elasticsearch domain in Account B. I have Transit Gateway connectivity enabled between the AWS accounts, and I can telnet to the Elasticsearch domain in Account A from an EC2 instance in Account B.
Here is my complete Terraform code (I got the same exception from the AWS CLI and from Terraform):
https://gist.github.com/karthikeayan/a67e93b4937a7958716dfecaa6ff7767
It looks like you haven't granted sufficient permissions to the role that is used when creating the stream (from the CLI example provided, I'm guessing it's a role named 'devops'). At minimum you will need firehose:CreateDeliveryStream.
I suggest adding the below permissions to your role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "firehose:PutRecord",
        "firehose:CreateDeliveryStream",
        "firehose:UpdateDestination"
      ],
      "Resource": "*"
    }
  ]
}
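One way to attach these permissions, as a sketch only (it assumes the policy above has been saved to firehose-permissions.json and that the target is an IAM role named devops, which is just a guess based on the profile name):
aws iam put-role-policy \
    --role-name devops \
    --policy-name firehose-access \
    --policy-document file://firehose-permissions.json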
https://forums.aws.amazon.com/message.jspa?messageID=943731
I have been informed on the AWS forum that this feature is currently not supported.
You can set up Kinesis Data Firehose and its dependencies, such as Amazon Simple Storage Service (Amazon S3) and Amazon CloudWatch, to stream across different accounts. Streaming data delivery works for publicly accessible OpenSearch Service clusters whether or not fine-grained access control (FGAC) is enabled.
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-firehose-cross-account-streaming/

Why does S3 file upload not trigger event to SNS topic?

I want a certain HTTPS service to be called every time a file has been uploaded to an S3 bucket.
I have created the S3 bucket and a SNS topic with a verified subscription with the HTTPS service as an endpoint.
I can publish a message on the SNS topic via the AWS UI, and see that the HTTPS service is called as expected.
On the S3 bucket I created an Event, which should link the bucket and the topic. On my first attempt I got an error because the bucket was not allowed to write to the topic, so, per the documentation, I changed the topic access policy to:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "sns:Publish",
      "Resource": "arn:aws:sns:eu-central-1:TOPIC_ID:OrderUpdates",
      "Condition": {
        "StringEquals": {
          "AWS:SourceAccount": "ACCOUNT_ID"
        },
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:s3:*:*:*"
        }
      }
    }
  ]
}
where TOPIC_ID is the topic owner ID, which can be seen when the topic is shown in the AWS console, and ACCOUNT_ID is the account ID shown under account settings in the AWS console.
This change in the topic access policy allowed me to create the event on the bucket.
When I call the API method getBucketNotificationConfiguration I get:
{
  "TopicConfigurations": [
    {
      "Id": "OrderFulfilled",
      "TopicArn": "arn:aws:sns:eu-central-1:TOPIC_ID:OrderUpdates",
      "Events": [
        "s3:ObjectCreated:*"
      ]
    }
  ],
  "QueueConfigurations": [],
  "LambdaFunctionConfigurations": []
}
But the HTTPS service is not called. What am I missing in this setup that will trigger the HTTPS service to be called by the SNS topic subscription every time a file is uploaded to the S3 bucket?
Thanks,
-Louise
I had the same issue: the S3 upload event did not trigger an SNS message even though our SNS access policy was correctly set. It turns out we can NOT use the Enable encryption option, since S3 events are triggered via CloudWatch Alarms, which do not work with encrypted SNS topics as of now.
Switch back to the Disable encryption option and everything works again.
To reproduce this situation, I did the following:
Created an Amazon SNS topic and subscribed my phone via SMS (a good way to debug subscriptions!)
Created an Amazon S3 bucket with an Event pointing to the Amazon SNS topic
I received this error message:
Unable to validate the following destination configurations. Permissions on the destination topic do not allow S3 to publish notifications from this bucket.
I then added the policy you show above (adjusted for my account and SNS ARN)
This allowed the Event to successfully save
Testing
I then tested the event by uploading a file to the S3 bucket.
I received an SMS very quickly
So, it would appear that your configuration should successfully enable a message to be sent via Amazon SNS. This suggests that the problem lies with the HTTPS subscription, either in sending it from SNS or in receiving it in the application.
I recommend that you add an Email or SMS subscription to verify whether Amazon SNS is receiving the notification and forwarding it to subscribers. If this works successfully, then you will need to debug the receipt of the message in the HTTPS application.
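For example, a test subscription can be added from the CLI (the topic ARN below is the one from the question; the email address is a placeholder):
# Confirm the subscription from the email before testing an upload.
aws sns subscribe \
    --topic-arn arn:aws:sns:eu-central-1:TOPIC_ID:OrderUpdates \
    --protocol email \
    --notification-endpoint you@example.com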
You must add a TopicConfiguration to the bucket's notification configuration.
Read more about enabling event notifications in the Amazon S3 documentation.
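A rough sketch of adding that notification configuration from the CLI (the bucket name is a placeholder; the topic ARN is the one from the question):
# Placeholder bucket name; adjust the events as needed.
cat > notification.json <<'EOF'
{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:eu-central-1:TOPIC_ID:OrderUpdates",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
EOF
aws s3api put-bucket-notification-configuration \
    --bucket my-bucket \
    --notification-configuration file://notification.json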

AWS Backup - How to get notification from failed backups

I'm using AWS Backup to back up my resources. I would like to get notifications from failed backups, but the only way to check the status of backups is from the AWS Backup service page - there is nothing AWS Backup related in CloudWatch metrics. I was thinking of creating an SNS topic from a CloudWatch metric, but that doesn't seem to be possible right now?
Another question - would there be any way to get a weekly report from AWS Backup, like "There are 25 resources currently being backed up, and from the last 7 days there are 175 restore points available"?
First of all, you should create an SNS topic and add AWS Backup as a trusted entity in the resource-based policy of the SNS topic:
{
  "Sid": "__console_pub_0",
  "Effect": "Allow",
  "Principal": {
    "Service": "backup.amazonaws.com"
  },
  "Action": "SNS:Publish",
  "Resource": "arn:aws:sns:us-west-2:{accountId}:test"
}
Then turn on notifications for that topic and add the BACKUP_JOB_COMPLETED event, following the AWS documentation:
Using Amazon SNS to Track AWS Backup Events.
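A rough CLI sketch of that step (the vault name is a placeholder; the topic ARN matches the policy statement above):
aws backup put-backup-vault-notifications \
    --backup-vault-name Default \
    --sns-topic-arn arn:aws:sns:us-west-2:{accountId}:test \
    --backup-vault-events BACKUP_JOB_COMPLETED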
Each time an AWS Backup job completes or fails, the email addresses subscribed to the SNS topic will be notified.
However, I can't find a way to customize the notification.

Condition in a bucket policy to only allow specific service

I'm looking for a bucket policy where a specific principal for a complete account ('arn:aws:iam::000000000000:root') is allowed to write to my bucket.
I now want to implement a condition that only gives Firehose, as a service, the ability to write to my bucket.
My current ideas were:
{
  "Sid": "AllowWriteViaFirehose",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::000000000000:root"
  },
  "Action": "s3:Put*",
  "Resource": "arn:aws:s3:::my-bucket/*",
  "Condition": {
    #*#
  }
}
where #*# should be the specific condition.
I already tried some things like:
{"IpAddress": {"aws:SourceIp": "firehose.amazonaws.com"}}
I thought the requests would come from a firehose endpoint of AWS. But it seems not :-/
"Condition": {"StringLike": {"aws:PrincipalArn": "*Firehose*"}}
I thought this would work, since the role that Firehose uses to write files should contain a session name with something like 'firehose' in it. But it didn't work.
Any idea how to get this working?
Thanks
Ben
Do not create a bucket policy.
Instead, assign the desired permission to an IAM Role and assign the role to your Kinesis Firehose.
See: Controlling Access with Amazon Kinesis Data Firehose - Amazon Kinesis Data Firehose
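As a sketch, the S3 permissions typically granted to such a Firehose delivery role look roughly like this (the role and bucket names are placeholders; see the linked documentation for the full set of permissions):
aws iam put-role-policy \
    --role-name firehose-delivery-role \
    --policy-name firehose-s3-access \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": [
          "s3:AbortMultipartUpload",
          "s3:GetBucketLocation",
          "s3:GetObject",
          "s3:ListBucket",
          "s3:ListBucketMultipartUploads",
          "s3:PutObject"
        ],
        "Resource": [
          "arn:aws:s3:::my-bucket",
          "arn:aws:s3:::my-bucket/*"
        ]
      }]
    }'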
This answer is for the situation where the destination S3 bucket is in a different account.
From AWS Developer Forums: Kinesis Firehose cross account write to S3, the method is:
Create cross-account roles in Account B and enable trust relationships for Account A to assume Account B's role.
Enable a bucket policy in Account B to allow Account A to write records into Account B (a rough sketch of such a policy appears at the end of this answer).
Map Account B's S3 bucket to the Firehose: I had to create the Firehose pointing to a temporary bucket and then use AWS CLI commands to update the delivery stream in Account A to point to Account B's S3 bucket.
CLI Command:
aws firehose update-destination --delivery-stream-name MyDeliveryStreamName --current-delivery-stream-version-id 1 --destination-id destinationId-000000000001 --extended-s3-destination-update file://MyFileName.json
MyFileName.json looks like the one below:
{
  "BucketARN": "arn:aws:s3:::MyBucketname",
  "Prefix": ""
}
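For completeness, the bucket policy from the second step above might look roughly like this sketch (the role name is hypothetical and MyBucketname is the bucket in Account B; the exact principal depends on which role Firehose uses to write):
# Run against Account B's bucket; the principal is Account A's Firehose delivery role.
aws s3api put-bucket-policy --bucket MyBucketname --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "AWS": "arn:aws:iam::ACCOUNT-A-ID:role/firehose-delivery-role"
    },
    "Action": [
      "s3:AbortMultipartUpload",
      "s3:GetBucketLocation",
      "s3:GetObject",
      "s3:ListBucket",
      "s3:ListBucketMultipartUploads",
      "s3:PutObject"
    ],
    "Resource": [
      "arn:aws:s3:::MyBucketname",
      "arn:aws:s3:::MyBucketname/*"
    ]
  }]
}'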