I am trying to create a CMK for my SQS queue so that encrypted SNS messages can be sent to my encrypted queue. After I create the CMK, I will set it as the "kms_master_key_id" on my queue.
resource "aws_kms_key" "mycmk" {
description = "KMS Key"
deletion_window_in_days = 10
policy = <<POLICY
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Principal": {
"Service": "sns.amazonaws.com"
},
"Action": [
"kms:GenerateDataKey",
"kms:Decrypt"
],
"Resource": "*"
}]
}
POLICY
}
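For context, this is roughly how I plan to wire the key into the queue afterwards (a minimal sketch; the queue name is a placeholder):

resource "aws_sqs_queue" "my_queue" {
  # placeholder name; kms_master_key_id accepts a key ID, ARN, or alias
  name              = "my-encrypted-queue"
  kms_master_key_id = aws_kms_key.mycmk.arn
}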
Applying the aws_kms_key resource throws an error:
my_role_arn is not authorized to perform: kms:CreateKey on resource: *
I've double-checked to make sure that action is allowed, and it is.
Do I need to update the 'resource' in the policy? If so, to what?
The role I am using to run this has these permissions:

Effect = "Allow"
Action = [
  "kms:CreateAlias",
  "kms:CreateGrant",
  "kms:CreateKey",
  "kms:DeleteAlias",
  "kms:DisableKey",
  "kms:EnableKey",
  "kms:PutKeyPolicy",
  "kms:RevokeGrant",
  "kms:ScheduleKeyDeletion",
  "kms:TagResource",
  "kms:UntagResource",
  "kms:UpdateAlias",
  "kms:UpdateKeyDescription"
]
Resource = [
  "arn:aws:kms:${local.aws_region}:${var.aws_account_id}:key/*",
  "arn:aws:kms:${local.aws_region}:${var.aws_account_id}:alias/*"
]
As someone else suggested, it looks like the credentials you use to run Terraform don't have the right permissions. kms:CreateKey only works with the "*" resource (the key doesn't exist yet, so there is no key ARN to scope it to), which means the key/alias ARNs in your role's Resource list can never match it. Split kms:CreateKey into its own statement:
data "aws_iam_policy_document" "key_Access" {
statement {
actions = [
"kms:CreateAlias",
"kms:CreateGrant",
"kms:DeleteAlias",
"kms:DisableKey",
"kms:EnableKey",
"kms:PutKeyPolicy",
"kms:RevokeGrant",
"kms:ScheduleKeyDeletion",
"kms:TagResource",
"kms:UntagResource",
"kms:UpdateAlias",
"kms:UpdateKeyDescription"
]
resources = [
"arn:aws:kms:${local.aws_region}:${var.aws_account_id}:key/*",
"arn:aws:kms:${local.aws_region}:${var.aws_account_id}:alias/*"
]
}
statement {
actions = ["kms:CreateKey"]
resources = ["*"]
}
}
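To use it, render the document into a policy and attach it to whatever principal runs Terraform. A minimal sketch, assuming a role named my-terraform-role (both names here are placeholders):

resource "aws_iam_policy" "terraform_kms" {
  name   = "terraform-kms-access" # placeholder name
  policy = data.aws_iam_policy_document.key_Access.json
}

resource "aws_iam_role_policy_attachment" "terraform_kms" {
  role       = "my-terraform-role" # placeholder: the role Terraform runs as
  policy_arn = aws_iam_policy.terraform_kms.arn
}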
With that being said, maybe don't make your own policy. Just attach the AWS-managed policy arn:aws:iam::aws:policy/AWSKeyManagementServicePowerUser to the role. That grants the following permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "kms:CreateAlias",
        "kms:CreateKey",
        "kms:DeleteAlias",
        "kms:Describe*",
        "kms:GenerateRandom",
        "kms:Get*",
        "kms:List*",
        "kms:TagResource",
        "kms:UntagResource",
        "iam:ListGroups",
        "iam:ListRoles",
        "iam:ListUsers"
      ],
      "Resource": "*"
    }
  ]
}
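If you manage that role in Terraform too, the attachment is a one-liner (role name is a placeholder):

resource "aws_iam_role_policy_attachment" "kms_power_user" {
  role       = "my-terraform-role" # placeholder
  policy_arn = "arn:aws:iam::aws:policy/AWSKeyManagementServicePowerUser"
}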
I have Terraform code (below) that is supposed to create an IAM policy. However, on terraform apply, I get the error:
Error: creating IAM Policy autoscale-policy: MalformedPolicyDocument: The policy failed legacy parsing
Terraform code:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.52.0"
    }
  }
}

provider "aws" {
  region = "us-west-2"
}

resource "aws_iam_policy" "autoscale_policy" {
  name        = "autoscale-policy"
  description = "EBS Autoscaling Policy"

  policy = <<EOT
{
  "Version": "2012-10-17",
  "Statement": {
    "Action": [
      "ec2:AttachVolume",
      "ec2:DescribeVolumeStatus",
      "ec2:DescribeVolumes",
      "ec2:ModifyInstanceAttribute",
      "ec2:DescribeVolumeAttribute",
      "ec2:CreateVolume",
      "ec2:DeleteVolume",
      "ec2:CreateTags",
      "kms:Decrypt",
      "kms:CreateGrant",
      "kms:Encrypt",
      "kms:ReEncrypt*",
      "kms:GenerateDataKey*",
      "kms:DescribeKey"
    ],
    "Resource": "*",
    "Effect": "Allow"
  }
}
EOT
}
However, when I use the AWS CLI with the exact same policy, the policy is created in AWS with no issue:

aws iam create-policy \
  --policy-name TestPolicy \
  --policy-document \
  '{
    "Version": "2012-10-17",
    "Statement": {
      "Action": [
        "ec2:AttachVolume",
        "ec2:DescribeVolumeStatus",
        "ec2:DescribeVolumes",
        "ec2:ModifyInstanceAttribute",
        "ec2:DescribeVolumeAttribute",
        "ec2:CreateVolume",
        "ec2:DeleteVolume",
        "ec2:CreateTags",
        "kms:Decrypt",
        "kms:CreateGrant",
        "kms:Encrypt",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:DescribeKey"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  }'
Does anyone see a difference between the Terraform code and the CLI command? Why does my Terraform code return a MalformedPolicyDocument error when the same policy works fine from the CLI?
Statement should be an array.
resource "aws_iam_policy" "autoscale_policy" {
name = "autoscale-policy"
description = "EBS Autoscaling Policy"
policy = <<EOT
{
"Version": "2012-10-17",
"Statement": [{
"Action": [
"ec2:AttachVolume",
"ec2:DescribeVolumeStatus",
"ec2:DescribeVolumes",
"ec2:ModifyInstanceAttribute",
"ec2:DescribeVolumeAttribute",
"ec2:CreateVolume",
"ec2:DeleteVolume",
"ec2:CreateTags",
"kms:Decrypt",
"kms:CreateGrant",
"kms:Encrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
],
"Resource": "*",
"Effect": "Allow"
}]
}
EOT
}
Tested; it works.
Or you can use a data source to define your policies. The aws_iam_policy_document data source builds the JSON for you, which avoids this class of malformed-document mistake:
resource "aws_iam_policy" "autoscale_policy" {
name = "autoscale-policy"
description = "EBS Autoscaling Policy"
policy = data.aws_iam_policy_document.example.json
}
data "aws_iam_policy_document" "example" {
statement {
actions = [
"ec2:AttachVolume",
"ec2:DescribeVolumeStatus",
"ec2:DescribeVolumes",
"ec2:ModifyInstanceAttribute",
"ec2:DescribeVolumeAttribute",
"ec2:CreateVolume",
"ec2:DeleteVolume",
"ec2:CreateTags",
"kms:Decrypt",
"kms:CreateGrant",
"kms:Encrypt",
"kms:ReEncrypt*",
"kms:GenerateDataKey*",
"kms:DescribeKey"
]
resources = ["*"]
effect = "Allow"
}
}
I am trying to create a simple MediaConvert job with Python.
My pipeline is simple: an S3 put triggers a Python lambda, which should create the job.
I created a simple job using the AWS console, and the job JSON is this:
{
  "Queue": "arn:aws:mediaconvert:ap-south-1:----:queues/Default",
  "UserMetadata": {},
  "Role": "arn:aws:iam::----:role/mediaConverterRole",
  "Settings": {
    "TimecodeConfig": {
      "Source": "ZEROBASED"
    },
    "OutputGroups": [
      {
        "Name": "File Group",
        "Outputs": [
          {
            "Preset": "System-Generic_Hd_Mp4_Av1_Aac_16x9_640x360p_24Hz_250Kbps_Qvbr_Vq6",
            "Extension": ".mp4",
            "NameModifier": "converted"
          }
        ],
        "OutputGroupSettings": {
          "Type": "FILE_GROUP_SETTINGS",
          "FileGroupSettings": {
            "Destination": "s3://----/"
          }
        }
      }
    ],
    "Inputs": [
      {
        "AudioSelectors": {
          "Audio Selector 1": {
            "DefaultSelection": "DEFAULT"
          }
        },
        "VideoSelector": {},
        "TimecodeSource": "ZEROBASED",
        "FileInput": "s3://----/videos/sample786.mp4"
      }
    ]
  },
  "AccelerationSettings": {
    "Mode": "DISABLED"
  },
  "StatusUpdateInterval": "SECONDS_60",
  "Priority": 0
}
Please note that the role worked fine when used from the AWS console. So far, so good.
Now, coming to my pipeline (S3 put -> Python Lambda -> MediaConvert), the infrastructure is written in Terraform. My iam.tf file:
# create a role
# resource_type - resource_name
resource "aws_iam_role" "lambda_role" {
  name = "${local.resource_component}-lambda-role"
  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Action" : "sts:AssumeRole",
        "Principal" : {
          "Service" : "lambda.amazonaws.com"
        },
        "Effect" : "Allow",
        "Sid" : ""
      },
      {
        "Action" : "sts:AssumeRole",
        "Principal" : {
          "Service" : "mediaconvert.amazonaws.com"
        },
        "Sid" : "",
        "Effect" : "Allow"
      }
    ]
  })
}
# create policy
resource "aws_iam_policy" "policy" {
  name = "${local.resource_component}-lambda-policy"
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Action" : [
          "logs:*"
        ],
        "Resource" : "arn:aws:logs:*:*:*"
      },
      {
        "Effect" : "Allow",
        "Action" : [
          "s3:*"
        ],
        "Resource" : "arn:aws:s3:::*"
      }
    ]
  })
}

# attach policy to the role
resource "aws_iam_role_policy_attachment" "policy_attachment" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.policy.arn
}
The lambda gets triggered by the S3 put successfully, but then throws this error:
(AccessDeniedException) when calling the CreateJob operation: User: arn:aws:sts::---:assumed-role/vidstream-inputVideoProcessor-lambda-role/vidstream-inputVideoProcessor is not authorized to perform: iam:PassRole on resource: arn:aws:iam::---:role/mediaConverterRole
I have tried to find simple boto3 examples, but nothing simpler turned up online.
The lambda Python code is here:
import json
import logging

import boto3

# initialize logger
logger = logging.getLogger()
logger.setLevel(logging.INFO)


def handler(event, context):
    # get input bucket
    input_bucket_name = event['Records'][0]['s3']['bucket']['name']
    # get file/object name
    media_object = event['Records'][0]['s3']['object']['key']
    # open json mediaconvert template
    with open("job.json", "r") as jsonfile:
        job_object = json.load(jsonfile)
    # prepare data for mediaconvert job
    input_file = f's3://{input_bucket_name}/{media_object}'
    # edit job object
    job_object['Settings']['Inputs'][0]['FileInput'] = input_file
    # updated job object
    logger.info("updated job object")
    # Create MediaConvert client
    mediaconvert_client = boto3.client('mediaconvert')
    try:
        # try to create a job
        mediaconvert_client.create_job(**job_object)
    except Exception as e:
        logger.error(e)
    return {
        'statusCode': 200,
        'body': json.dumps(event)
    }
The boto3 MediaConvert documentation is provided by AWS
I am at a loss, no idea what to do. Is there any simpler example anyone can help me with?
I just need to create a simple job with Lambda that works, no complexity.
Any kind of help will be highly appreciated.
Okay, I solved this issue by adding iam:PassRole to the lambda policy:
{
  "Effect": "Allow",
  "Action": [
    "iam:PassRole"
  ],
  "Resource": "*"
}
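This works, but iam:PassRole on "*" lets the lambda pass any role in the account to any service. A tighter variant would scope the statement to the MediaConvert role and the service it may be passed to; a sketch, with the account ID as a placeholder and the role name taken from the error message:

{
  "Effect": "Allow",
  "Action": [
    "iam:PassRole"
  ],
  "Resource": "arn:aws:iam::<account-id>:role/mediaConverterRole",
  "Condition": {
    "StringEquals": {
      "iam:PassedToService": "mediaconvert.amazonaws.com"
    }
  }
}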
So the updated iam.tf file is:

# create a role
# resource_type - resource_name
resource "aws_iam_role" "lambda_role" {
  name = "${local.resource_component}-lambda-role"
  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Action" : "sts:AssumeRole",
        "Principal" : {
          "Service" : "lambda.amazonaws.com"
        },
        "Effect" : "Allow",
        "Sid" : ""
      },
      {
        "Action" : "sts:AssumeRole",
        "Principal" : {
          "Service" : "mediaconvert.amazonaws.com"
        },
        "Sid" : "",
        "Effect" : "Allow"
      }
    ]
  })
}

# create policy
resource "aws_iam_policy" "policy" {
  name = "${local.resource_component}-lambda-policy"
  policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Action" : [
          "logs:*"
        ],
        "Resource" : "arn:aws:logs:*:*:*"
      },
      {
        "Effect" : "Allow",
        "Action" : [
          "s3:*"
        ],
        "Resource" : "arn:aws:s3:::*"
      },
      {
        "Effect" : "Allow",
        "Action" : [
          "iam:PassRole"
        ],
        "Resource" : "*"
      }
    ]
  })
}

# attach policy to the role
resource "aws_iam_role_policy_attachment" "policy_attachment" {
  role       = aws_iam_role.lambda_role.name
  policy_arn = aws_iam_policy.policy.arn
}
I first added this to the lambda policy from the AWS console; after that worked, I added it to my tf file. Be careful when editing something in the console while the main infrastructure is managed with IaC such as Terraform: this can cause drift if you forget what you changed.
I've created a role with an attached Policy "AmazonSSMManagedInstanceCore":
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:DescribeAssociation",
        "ssm:GetDeployablePatchSnapshotForInstance",
        "ssm:GetDocument",
        "ssm:DescribeDocument",
        "ssm:GetManifest",
        "ssm:GetParameter",
        "ssm:GetParameters",
        "ssm:ListAssociations",
        "ssm:ListInstanceAssociations",
        "ssm:PutInventory",
        "ssm:PutComplianceItems",
        "ssm:PutConfigurePackageResult",
        "ssm:UpdateAssociationStatus",
        "ssm:UpdateInstanceAssociationStatus",
        "ssm:UpdateInstanceInformation"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ssmmessages:CreateControlChannel",
        "ssmmessages:CreateDataChannel",
        "ssmmessages:OpenControlChannel",
        "ssmmessages:OpenDataChannel"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2messages:AcknowledgeMessage",
        "ec2messages:DeleteMessage",
        "ec2messages:FailMessage",
        "ec2messages:GetEndpoint",
        "ec2messages:GetMessages",
        "ec2messages:SendReply"
      ],
      "Resource": "*"
    }
  ]
}
And Trust relationships:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "ec2.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
I've then attached the IAM role to the instance. When I start the SSM agent in the instance I get the following error:
2022-03-16 23:14:49 ERROR [HandleAwsError # awserr.go.49] [ssm-agent-worker] [MessageService] [MDSInteractor] error when calling AWS APIs. error details - GetMessages Error: AccessDeniedException: User: arn:aws:sts::XXXX:assumed-role/SSMandCloudWatch/i-YYYYY is not authorized to perform: ec2messages:GetMessages on resource: arn:aws:ssm:eu-central-1:XXXX:* with an explicit deny in a service control policy
status code: 400, request id: zzzz
The call it's complaining about is explicitly allowed in the policy. I've tried restarting the agent but didn't make any difference.
AWS permission evaluation can be complex; the policy evaluation flow diagram in the AWS documentation is a good one to follow when tracking down permission issues. In this case, though, the error message already names the culprit: the request was rejected "with an explicit deny in a service control policy". A service control policy (SCP) attached to your account, or to an OU above it, denies ec2messages:GetMessages, and an explicit deny in an SCP overrides anything the instance role's IAM policy allows, so no change to the role (or agent restart) will help. Review the SCPs in AWS Organizations for this account.
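Purely as an illustration (not your actual SCP), a deny of this shape somewhere in the organization would produce exactly this error, regardless of what the instance role allows:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": "ec2messages:*",
      "Resource": "*"
    }
  ]
}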
I need AWS accounts A, B to have read + write access to bucket Q in account C. However, account C should have full ownership over all objects in bucket Q. In line with these docs, I added the following permissions policy.
{
  "Version": "2012-10-17",
  "Statement": [
    // WRITE PERMISSIONS
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          // Account A
          "arn:aws:iam::<Account A ID>:root",
          // Account B
          "arn:aws:iam::<Account B ID>:root"
        ]
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      // Bucket Q
      "Resource": "arn:aws:s3:::<Bucket Q name>/*",
      // Basically, all writes must supply bucket-owner-full-control as the ACL or they will be rejected
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    },
    // READ PERMISSION for account A
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account A ID>:root"
      },
      "Action": [
        "s3:GetObject*",
        "s3:GetBucket*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::<Bucket Q name>",
        "arn:aws:s3:::<Bucket Q name>/*"
      ]
    },
    // READ PERMISSION for account B
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account B ID>:root"
      },
      "Action": [
        "s3:GetObject*",
        "s3:GetBucket*",
        "s3:List*"
      ],
      "Resource": [
        "arn:aws:s3:::<Bucket Q name>",
        "arn:aws:s3:::<Bucket Q name>/*"
      ]
    }
  ]
}
However, writes (e.g. s3.meta.client.put_object(Bucket=bucket, Key=key, Body=data, ACL='bucket-owner-full-control')) fail with ClientError: An error occurred (AccessDenied) when calling the PutObject operation: Access Denied. What am I missing? If it matters at all, writes are being issued from a SageMaker notebook in account B.
Solution:
Attach a permissions policy to the IAM role that the SageMaker notebook uses. This policy should give the role s3:PutObject and s3:PutObjectAcl permissions (i.e. the same permissions the bucket policy lists). You can find the role that the SageMaker notebook uses by viewing your notebook instance in the SageMaker UI.
Make sure the role's trust policy allows the account root to assume the role.
In my case, the permissions policy was:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::<Bucket Q name>/*"
      ]
    }
  ]
}
and the trust policy statement was:
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::<Account B ID>:root"
  },
  "Action": "sts:AssumeRole"
}
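Note that this is just the statement; a complete trust policy document needs the standard Version/Statement wrapper. As a minimal Terraform sketch (the role name is a placeholder):

resource "aws_iam_role" "notebook_role" {
  name = "sagemaker-notebook-role" # placeholder
  assume_role_policy = jsonencode({
    "Version" : "2012-10-17",
    "Statement" : [
      {
        "Effect" : "Allow",
        "Principal" : {
          "AWS" : "arn:aws:iam::<Account B ID>:root"
        },
        "Action" : "sts:AssumeRole"
      }
    ]
  })
}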
Referring to this Doc, I have created an IAM policy which allows access to only one EC2 instance, and I have created an IAM user with that policy. But when I logged in with that user into my AWS account, I got the error "An error occurred fetching instance data: You are not authorized to perform this operation."
Policy document:
Policy document:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/test": "test"
        }
      },
      "Resource": [
        "arn:aws:ec2:us-east-1:AccountNumber:instance/*"
      ],
      "Effect": "Allow"
    }
  ]
}
You must add the ec2:Describe* actions so the console can list EC2 resources; the Describe APIs do not support resource-level permissions, so they need "Resource": "*", while the tag condition in the other statement restricts the actions that actually modify instances. The trade-off is that the IAM user can still see (describe) every EC2 instance in the account, even the ones it has no permission to act on.
Here is what you need:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1507182214000",
      "Effect": "Allow",
      "Action": [
        "ec2:*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/TAG_NAME": "TAG_VALUE"
        }
      },
      "Resource": [
        "arn:aws:ec2:AWS_REGION:AWS_ACCOUNT:instance/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeTags"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Deny",
      "Action": [
        "ec2:CreateTags",
        "ec2:DeleteTags"
      ],
      "Resource": "*"
    }
  ]
}
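If you happen to manage IAM with Terraform like the earlier examples, attaching this policy to the user is straightforward. A sketch (user name, policy name, and file path are placeholders):

resource "aws_iam_policy" "single_instance" {
  name   = "single-instance-ec2"                      # placeholder
  policy = file("${path.module}/ec2-tag-policy.json") # the document above, saved to a file
}

resource "aws_iam_user_policy_attachment" "single_instance" {
  user       = "test-user" # placeholder
  policy_arn = aws_iam_policy.single_instance.arn
}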