Hi, I added an SNS topic using CDK and attached a custom policy statement like this:
const snsTopic = new Topic(this, 'SnsTopic');
const snsTopicPolicyStatement = new PolicyStatement({
effect: Effect.ALLOW,
actions: ['SNS:Publish'],
principals: [
new ArnPrincipal('arn:xxx'),
new ArnPrincipal('arn:xxx'),
],
resources: ['SNS_TOPIC_ARN'],
});
snsTopicPolicyStatement.sid = 'publishStatementId';
snsTopic.addToResourcePolicy(snsTopicPolicyStatement);
But somehow this is the only access policy the topic has, whereas if I just create a new Topic and don't attach any custom policy, its access policy looks something like this:
{
"Version": "2008-10-17",
"Id": "__default_policy_ID",
"Statement": [
{
"Sid": "__default_statement_ID",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"SNS:GetTopicAttributes",
"SNS:SetTopicAttributes",
"SNS:AddPermission",
"SNS:RemovePermission",
"SNS:DeleteTopic",
"SNS:Subscribe",
"SNS:ListSubscriptionsByTopic",
"SNS:Publish"
],
"Resource": "arn:xxx",
"Condition": {
"StringEquals": {
"AWS:SourceOwner": "xxx"
}
}
}
]
}
So I was wondering: how can I have this default access policy and the custom one at the same time?
Rather than creating your own PolicyStatement, can you use the built-in grantPublish method?
const topic = new Topic(this, 'MyTopic')
topic.grantPublish(new ArnPrincipal('arn:xxx'))
topic.grantPublish(new ArnPrincipal('arn:yyy'))
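Either way, it is worth dumping the topic's access policy after deployment to confirm which statements actually ended up on it. Here is a minimal boto3 sketch for that check (the topic ARN below is a placeholder):
import json
import boto3

sns = boto3.client('sns')

# Fetch the topic attributes and pretty-print the access policy document
attrs = sns.get_topic_attributes(TopicArn='arn:aws:sns:us-east-1:123456789012:SnsTopic')
policy = json.loads(attrs['Attributes']['Policy'])
print(json.dumps(policy, indent=2))

# List the statement Sids to see whether the default statement survived
print([s.get('Sid') for s in policy.get('Statement', [])])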
I am trying to create a simple MediaConvert job with Python.
My pipeline is simple: an S3 PUT triggers a Python Lambda, which should create the job.
I created a job using the AWS Console, and the JSON job definition is this:
{
"Queue": "arn:aws:mediaconvert:ap-south-1:----:queues/Default",
"UserMetadata": {},
"Role": "arn:aws:iam::----:role/mediaConverterRole",
"Settings": {
"TimecodeConfig": {
"Source": "ZEROBASED"
},
"OutputGroups": [
{
"Name": "File Group",
"Outputs": [
{
"Preset": "System-Generic_Hd_Mp4_Av1_Aac_16x9_640x360p_24Hz_250Kbps_Qvbr_Vq6",
"Extension": ".mp4",
"NameModifier": "converted"
}
],
"OutputGroupSettings": {
"Type": "FILE_GROUP_SETTINGS",
"FileGroupSettings": {
"Destination": "s3://----/"
}
}
}
],
"Inputs": [
{
"AudioSelectors": {
"Audio Selector 1": {
"DefaultSelection": "DEFAULT"
}
},
"VideoSelector": {},
"TimecodeSource": "ZEROBASED",
"FileInput": "s3://----/videos/sample786.mp4"
}
]
},
"AccelerationSettings": {
"Mode": "DISABLED"
},
"StatusUpdateInterval": "SECONDS_60",
"Priority": 0
}
Please note that the Role worked fine when used from the AWS Console. So far this is OK.
Now, coming to my pipeline (S3 PUT -> Python Lambda -> MediaConvert), the infrastructure is written in Terraform. My iam.tf file:
# create a role
# resource_type - resource_name
resource "aws_iam_role" "lambda_role" {
name = "${local.resource_component}-lambda-role"
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
},
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "mediaconvert.amazonaws.com"
},
"Sid": "",
"Effect": "Allow",
}
]
})
}
# create policy
resource "aws_iam_policy" "policy" {
name = "${local.resource_component}-lambda-policy"
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::*"
}
]
})
}
# attach policy to the role
resource "aws_iam_role_policy_attachment" "policy_attachment" {
role       = aws_iam_role.lambda_role.name
policy_arn = aws_iam_policy.policy.arn
}
The Lambda gets triggered by the S3 PUT successfully, but it throws this error:
(AccessDeniedException) when calling the CreateJob operation: User: arn:aws:sts::---:assumed-role/vidstream-inputVideoProcessor-lambda-role/vidstream-inputVideoProcessor is not authorized to perform: iam:PassRole on resource: arn:aws:iam::---:role/mediaConverterRole
I have tried to find simple boto3 examples, but nothing simpler turned up online.
The Lambda Python code is here:
import json
import logging
import boto3
# initialize logger
logger = logging.getLogger()
logger.setLevel(logging.INFO)
def handler(event, context):
    # get input bucket
    input_bucket_name = event['Records'][0]['s3']['bucket']['name']
    # get file/object name
    media_object = event['Records'][0]['s3']['object']['key']
    # open json mediaconvert template
    with open("job.json", "r") as jsonfile:
        job_object = json.load(jsonfile)
    # prepare data for mediaconvert job
    input_file = f's3://{input_bucket_name}/{media_object}'
    # edit job object
    job_object['Settings']['Inputs'][0]['FileInput'] = input_file
    # updated job object
    logger.info("updated job object")
    # Create MediaConvert client
    mediaconvert_client = boto3.client('mediaconvert')
    try:
        # try to create a job
        mediaconvert_client.create_job(**job_object)
    except Exception as e:
        logger.error(e)
    return {
        'statusCode': 200,
        'body': json.dumps(event)
    }
The boto3 MediaConvert documentation is provided by AWS.
I am at a loss and have no idea what to do. Is there a simpler example anyone can help me with?
I just need to create a simple job with Lambda that works, no complexity.
Any kind of help will be highly appreciated.
Okay, I solved this issue by adding iam:PassRole to the Lambda policy.
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": "*"
}
So the updated iam.tf file is:
# create a role
# resource_type - resource_name
resource "aws_iam_role" "lambda_role" {
name = "${local.resource_component}-lambda-role"
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
},
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "mediaconvert.amazonaws.com"
},
"Sid": "",
"Effect": "Allow",
}
]
})
}
# create policy
resource "aws_iam_policy" "policy" {
name = "${local.resource_component}-lambda-policy"
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
},
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": "arn:aws:s3:::*"
},
{
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": "*"
}
]
})
}
# attach policy to the role
resource "aws_iam_role_policy_attachment" "policy_attachment" {
role       = aws_iam_role.lambda_role.name
policy_arn = aws_iam_policy.policy.arn
}
I first added this to the Lambda policy from the AWS Console. After that worked, I added it to my .tf file. Be careful when editing something in the Console while the main infrastructure is written in IaC such as Terraform; this can cause drift if you forget what you have done.
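For the "simpler example" part of the question, here is a minimal boto3 MediaConvert sketch along the same lines as the Lambda above. The role ARN, bucket URIs and job.json template are placeholders/assumptions; older SDK versions also required resolving an account-specific endpoint first, which is what describe_endpoints does here:
import json
import boto3

def create_simple_job(input_uri, output_uri, role_arn):
    # Older SDKs needed an account-specific endpoint; newer ones can usually skip this step
    endpoint = boto3.client('mediaconvert').describe_endpoints()['Endpoints'][0]['Url']
    mediaconvert = boto3.client('mediaconvert', endpoint_url=endpoint)

    # Load the job definition exported from the console and patch the input/output paths
    with open('job.json', 'r') as jsonfile:
        job = json.load(jsonfile)
    job['Role'] = role_arn
    job['Settings']['Inputs'][0]['FileInput'] = input_uri
    job['Settings']['OutputGroups'][0]['OutputGroupSettings']['FileGroupSettings']['Destination'] = output_uri

    # Submit the job; the caller needs mediaconvert:CreateJob plus iam:PassRole on role_arn
    return mediaconvert.create_job(**job)

# Hypothetical usage:
# create_simple_job('s3://my-input-bucket/videos/sample786.mp4',
#                   's3://my-output-bucket/',
#                   'arn:aws:iam::123456789012:role/mediaConverterRole')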
I am trying to refresh an external table using SNS in Snowflake.
I have followed this tutorial to do the refresh:
https://www.youtube.com/watch?v=PCNa3d6rMO0
It is working as expected, but now I want to use the same topic to trigger a refresh of another table from another S3 bucket. Can't I use the same topic and create event notifications in Bucket2?
Here is my access policy:
{
"Version": "2008-10-17",
"Id": "__default_policy_ID",
"Statement": [
{
"Sid": "__default_statement_ID",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"SNS:Publish",
"SNS:RemovePermission",
"SNS:SetTopicAttributes",
"SNS:DeleteTopic",
"SNS:ListSubscriptionsByTopic",
"SNS:GetTopicAttributes",
"SNS:AddPermission",
"SNS:Subscribe"
],
"Resource": "arn:aws:sns:us-west-1:58:snowflake-dev-SNS",
"Condition": {
"StringEquals": {
"AWS:SourceOwner": "55"
}
}
},
{
"Sid": "__console_pub_0",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "SNS:Publish",
"Resource": "arn:aws:sns:us-west-1:55:snowflake-dev-SNS"
},
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::2:user/b6m8-s-p2s9"
},
"Action": "sns:Subscribe",
"Resource": "arn:aws:sns:us-west-1:55:snowflake-dev-SNS"
}
]
}
Thanks,
Xi
I'm not a Snowflake person, but it appears that the linked demonstration is Configuring Amazon SNS to Automate Snowpipe Using SQS Notifications.
According to that documentation, the following code is used to create the pipe:
create pipe snowpipe_db.public.mypipe
auto_ingest=true
aws_sns_topic='<sns_topic_arn>'
as
copy into snowpipe_db.public.mytable
from @snowpipe_db.public.mystage
file_format = (type = 'JSON');
The copy into snowpipe_db.public.mytable seems to be hard-coded for a destination table. It seems that each Snowpipe can only be used to load data into a single table.
Therefore, you would likely need to use a different Snowpipe, and therefore a different SNS Topic and SQS queue, if you wish to load data into a different table.
I am trying to create a CMK for my SQS queue to allow encrypted SNS messages to be sent to my encrypted queue. After I create the CMK, I will set it as the kms_master_key_id on my queue.
resource "aws_kms_key" "mycmk" {
description = "KMS Key"
deletion_window_in_days = 10
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "sns.amazonaws.com"
},
"Action": [
"kms:GenerateDataKey",
"kms:Decrypt"
],
"Resource": "*"
}]
}
POLICY
}
This is throwing an error:
my_role_arn is not authorized to perform: kms:CreateKey on resource: *
I've double checked to make sure that action is allowed and it is.
Do I need to update the 'resource' in the policy? If so to what?
The role I am using to run this has these permissions:
Effect = "Allow"
Action = [
"kms:CreateAlias",
"kms:CreateGrant",
"kms:CreateKey",
"kms:DeleteAlias",
"kms:DisableKey",
"kms:EnableKey",
"kms:PutKeyPolicy",
"kms:RevokeGrant",
"kms:ScheduleKeyDeletion",
"kms:TagResource",
"kms:UntagResource",
"kms:UpdateAlias",
"kms:UpdateKeyDescription"
]
Resource = [
"arn:aws:kms:${local.aws_region}:${var.aws_account_id}:key/*",
"arn:aws:kms:${local.aws_region}:${var.aws_account_id}:alias/*"
]
As someone else suggested, it looks like the credentials you use to run Terraform don't have the right permissions.
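A quick way to confirm which principal Terraform is actually running as (it may not be the role you expect) is aws sts get-caller-identity, or the equivalent boto3 one-liner below; the ARN it prints should match the one in the AccessDenied error:
import boto3

# Print the account and ARN of the credentials currently in use
identity = boto3.client('sts').get_caller_identity()
print(identity['Account'], identity['Arn'])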
CreateKey explicitly only works with the "*" resource, so change the policy to this:
data "aws_iam_policy_document" "key_Access" {
statement {
actions = [
"kms:CreateAlias",
"kms:CreateGrant",
"kms:DeleteAlias",
"kms:DisableKey",
"kms:EnableKey",
"kms:PutKeyPolicy",
"kms:RevokeGrant",
"kms:ScheduleKeyDeletion",
"kms:TagResource",
"kms:UntagResource",
"kms:UpdateAlias",
"kms:UpdateKeyDescription"
]
resources = [
"arn:aws:kms:${local.aws_region}:${var.aws_account_id}:key/*",
"arn:aws:kms:${local.aws_region}:${var.aws_account_id}:alias/*"
]
}
statement {
actions = ["kms:CreateKey"]
resources = ["*"]
}
}
With that being said, maybe don't make your own policy. Just assign the existing policy arn:aws:iam::aws:policy/AWSKeyManagementServicePowerUser to the role. That gives the following permissions:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"kms:CreateAlias",
"kms:CreateKey",
"kms:DeleteAlias",
"kms:Describe*",
"kms:GenerateRandom",
"kms:Get*",
"kms:List*",
"kms:TagResource",
"kms:UntagResource",
"iam:ListGroups",
"iam:ListRoles",
"iam:ListUsers"
],
"Resource": "*"
}
]
}
I want to add a CloudWatch Logs subscription to an AWS Lambda's logs, thereby making my AWS Lambda triggered by CloudWatch Logs. What permissions should I add to the role the Lambda is using to enable this?
Your Lambda will by default have access to CloudWatch Logs for writing its own logs (via the default AWSLambdaBasicExecutionRole); however, if you want to add it manually, this is the policy with the required permissions:
{
"document": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "*"
}
]
},
"name": "AWSLambdaBasicExecutionRole",
"id": "xxxxx",
"type": "managed",
"arn": "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
Lambda function policy (resource-based) for a CloudWatch Events trigger on the Lambda:
{
"Version": "2012-10-17",
"Id": "default",
"Statement": [
{
"Sid": "uuid",
"Effect": "Allow",
"Principal": {
"Service": "events.amazonaws.com"
},
"Action": "lambda:invokeFunction",
"Resource": "arn:aws:lambda:us-east-x:xxxxxxxxxxxx:function:LambdaFunction",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:events:us-east-x:xxxxxxxxxxxx:rule/CloudWatchRule"
}
}
}
]
}
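If you prefer to add that resource-based permission with code rather than editing the policy JSON, the AddPermission API produces an equivalent statement. Here is a rough boto3 sketch with placeholder names and ARNs:
import boto3

# Allow the CloudWatch Events/EventBridge rule to invoke the function,
# producing a statement like the function policy shown above
boto3.client('lambda').add_permission(
    FunctionName='LambdaFunction',
    StatementId='cloudwatch-rule-invoke',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn='arn:aws:events:us-east-1:123456789012:rule/CloudWatchRule',
)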
I've created a set of AWS Lambdas using the Serverless framework, and a React app which calls them. A user pool and an identity pool have been set up in AWS Cognito, and a table in DynamoDB. (I've followed the tutorial on serverless-stack.com.) It's a simple notes app.
The client app is deployed to: https://dev.cakebook.co
The API is deployed: https://api.cakebook.co/dev/orders
However, after I log in using this Cognito user:
admin@example.com
Passw0rd!
I get a 403 response for the GET of the orders:
message: “User: arn:aws:sts::********8766:assumed-role/cakebook-api-dev-CognitoAuthRole-1DTRT5XGEGRXW/CognitoIdentityCredentials is not authorized to perform: execute-api:Invoke on resource: arn:aws:execute-api:us-east-2:********8766:sss6l7svxc/dev/GET/orders”
I'm new to all this, but it looks like my Cognito user does not have permission to call the Lambda (or API gateway?). Is that the issue? If so, how do I give the users permission to call the Lambdas?
UPDATE: requested JSON
Execution Role:
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"logs:CreateLogStream"
],
"Resource": [
"arn:aws:logs:us-east-2:********8766:log-group:/aws/lambda/cakebook-api-dev-create:*",
"arn:aws:logs:us-east-2:********8766:log-group:/aws/lambda/cakebook-api-dev-get:*",
"arn:aws:logs:us-east-2:********8766:log-group:/aws/lambda/cakebook-api-dev-list:*",
"arn:aws:logs:us-east-2:********8766:log-group:/aws/lambda/cakebook-api-dev-update:*",
"arn:aws:logs:us-east-2:********8766:log-group:/aws/lambda/cakebook-api-dev-delete:*"
],
"Effect": "Allow"
},
{
"Action": [
"logs:PutLogEvents"
],
"Resource": [
"arn:aws:logs:us-east-2:********8766:log-group:/aws/lambda/cakebook-api-dev-create:*:*",
"arn:aws:logs:us-east-2:********8766:log-group:/aws/lambda/cakebook-api-dev-get:*:*",
"arn:aws:logs:us-east-2:********8766:log-group:/aws/lambda/cakebook-api-dev-list:*:*",
"arn:aws:logs:us-east-2:********8766:log-group:/aws/lambda/cakebook-api-dev-update:*:*",
"arn:aws:logs:us-east-2:********8766:log-group:/aws/lambda/cakebook-api-dev-delete:*:*"
],
"Effect": "Allow"
},
{
"Action": [
"dynamodb:DescribeTable",
"dynamodb:Query",
"dynamodb:Scan",
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:DeleteItem"
],
"Resource": [
"arn:aws:dynamodb:us-east-2:********8766:table/orders"
],
"Effect": "Allow"
},
{
"Sid": "1",
"Effect": "Allow",
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:us-east-2:********8766:function:cakebook-api-dev-list",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:cognito-identity:us-east-2:********8766:identitypool/us-east-2:d9e4e505-c64a-4836-8e56-3af843dbe453"
}
}
}
]
}
Function Policy:
{
"Version": "2012-10-17",
"Id": "default",
"Statement": [
{
"Sid": "cakebook-api-dev-ListLambdaPermissionApiGateway-U7OCBI3JM44G",
"Effect": "Allow",
"Principal": {
"Service": "apigateway.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:us-east-2:********8766:function:cakebook-api-dev-list",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:execute-api:us-east-2:********8766:w5o4vxx4f0/*/*"
}
}
},
{
"Sid": "lambda-da48f6d0-6d3c-4bbf-a761-ca3510f79624",
"Effect": "Allow",
"Principal": {
"Service": "cognito-sync.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:us-east-2:********8766:function:cakebook-api-dev-list",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:cognito-identity:us-east-2:********8766:identitypool/us-east-2:d9e4e505-c64a-4836-8e56-3af843dbe453"
}
}
}
]
}
You need to update the Lambda's permissions to allow it to be invoked by the Cognito user pool.
Option A - update permission in JSON format
{
"Version": "2012-10-17",
"Id": "default",
"Statement": [
{
"Sid": "lambda-something",
"Effect": "Allow",
"Principal": {
"Service": "cognito-sync.amazonaws.com"
},
"Action": "lambda:InvokeFunction",
"Resource": "arn:aws:lambda:eu-west-1:__accountId__:__function_name__",
"Condition": {
"ArnLike": {
"AWS:SourceArn": "arn:aws:cognito-identity:eu-west-1:__accountId__:identitypool/eu-west-1:....."
}
}
}
]
}
Option B - in console
Go to the Lambda Configuration page
Add a Cognito Sync Trigger
During saving, it will offer to configure the Lambda permission automatically - agree
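Option C - scripted (a sketch, reusing the placeholders from Option A)
The same permission can be added with a single AddPermission call, for example with boto3:
import boto3

# Grant cognito-sync permission to invoke the function, mirroring the Option A statement
boto3.client('lambda').add_permission(
    FunctionName='__function_name__',
    StatementId='lambda-cognito-invoke',
    Action='lambda:InvokeFunction',
    Principal='cognito-sync.amazonaws.com',
    SourceArn='arn:aws:cognito-identity:eu-west-1:__accountId__:identitypool/eu-west-1:.....',
)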