I just created an AWS ECS cluster and task definition and ran it all just fine. I was able to connect to the server. The task is running on Fargate and runs on demand. I am now attempting to create a Lambda that will run the RunTask command to start the server. Here is my Lambda definition in Terraform.
data "aws_iam_policy_document" "startup_lambda_assume_role" {
statement {
effect = "Allow"
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
}
}
resource "aws_iam_role" "startup_lambda" {
name = "report_lambda_role"
assume_role_policy = data.aws_iam_policy_document.startup_lambda_assume_role.json
}
resource "aws_cloudwatch_log_group" "startup_lambda" {
name = "/aws/lambda/${aws_lambda_function.startup.function_name}"
retention_in_days = 14
}
data "aws_iam_policy_document" "startup_lambda" {
statement {
effect = "Allow"
actions = [
"logs:CreateLogStream",
"logs:CreateLogGroup",
]
resources = [aws_cloudwatch_log_group.startup_lambda.arn]
}
statement {
effect = "Allow"
actions = ["logs:PutLogEvents"]
resources = ["${aws_cloudwatch_log_group.startup_lambda.arn}:*"]
}
statement {
effect = "Allow"
actions = [
"ecs:RunTask",
]
resources = [
aws_ecs_task_definition.game.arn
]
}
statement {
effect = "Allow"
actions = [
"iam:PassRole",
]
resources = [
aws_iam_role.ecs_task_execution.arn,
aws_iam_role.game_task.arn
]
}
}
resource "aws_iam_role_policy" "startup_lambda" {
name = "startup_lambda_policy"
policy = data.aws_iam_policy_document.startup_lambda.json
role = aws_iam_role.startup_lambda.id
}
data "archive_file" "startup_lambda" {
type = "zip"
source_file = "${path.module}/startup/lambda_handler.py"
output_path = "${path.module}/startup/lambda_handler.zip"
}
resource "aws_lambda_function" "startup" {
function_name = "startup_lambda"
filename = data.archive_file.startup_lambda.output_path
handler = "lambda_handler.handler"
source_code_hash = data.archive_file.startup_lambda.output_base64sha256
runtime = "python3.8"
role = aws_iam_role.startup_lambda.arn
environment {
variables = {
CLUSTER_ARN = aws_ecs_cluster.game.arn,
TASK_ARN = aws_ecs_cluster.game.arn,
SUBNET_IDS = "${aws_subnet.subnet_a.id},${aws_subnet.subnet_b.id},${aws_subnet.subnet_c.id}"
}
}
}
This is my Python code located in startup/lambda_handler.py; it shows up correctly as the function's code when I check in the AWS console.
import os
import boto3
def handler(event, context):
    client = boto3.client("ecs")
    response = client.run_task(
        cluster=os.getenv("CLUSTER_ARN"),
        taskDefinition=os.getenv("TASK_ARN"),
        launchType="FARGATE",
        count=1,
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": os.getenv("SUBNET_IDS", "").split(","),
                "assignPublicIp": "ENABLED",
            },
        },
    )
    return response
When I run a test of the Lambda function in the console using an empty JSON object as an argument, I expect to see my ECS task spin up, but instead I get the following error.
Response
{
"errorMessage": "An error occurred (AccessDeniedException) when calling the RunTask operation: User: arn:aws:sts::703606424838:assumed-role/report_lambda_role/startup_lambda is not authorized to perform: ecs:RunTask on resource: * because no identity-based policy allows the ecs:RunTask action",
"errorType": "AccessDeniedException",
"stackTrace": [
" File \"/var/task/lambda_handler.py\", line 6, in handler\n response = client.run_task(\n",
" File \"/var/runtime/botocore/client.py\", line 386, in _api_call\n return self._make_api_call(operation_name, kwargs)\n",
" File \"/var/runtime/botocore/client.py\", line 705, in _make_api_call\n raise error_class(parsed_response, operation_name)\n"
]
}
Notice that I do have a statement allowing ecs:RunTask on my task definition in the IAM policy document attached to my Lambda. I am not sure why this doesn't give the Lambda permission to run the task.
The TASK_ARN you pass to your Lambda environment is wrong: it should probably be aws_ecs_task_definition.game.arn instead of a duplicate aws_ecs_cluster.game.arn. Your IAM policy only allows ecs:RunTask on the task definition ARN, so calling RunTask with the cluster ARN as the taskDefinition never matches an allowed resource and gets denied.
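With that change, the environment block would look something like this (everything else staying as in the question):

environment {
  variables = {
    CLUSTER_ARN = aws_ecs_cluster.game.arn,
    # pass the task definition ARN here, not the cluster ARN a second time
    TASK_ARN    = aws_ecs_task_definition.game.arn,
    SUBNET_IDS  = "${aws_subnet.subnet_a.id},${aws_subnet.subnet_b.id},${aws_subnet.subnet_c.id}"
  }
}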
Related
I'm trying to deploy some event rules using Terraform. From what I've seen in the docs, my (JSON) format is fine. I can't figure out why it's throwing that error.
resource "aws_kinesis_firehose_delivery_stream" "kinesis_stream" {
name = var.delivery_stream_name
destination = "s3"
s3_configuration {
role_arn = aws_iam_role.kinesis_data_firehose_role.arn
bucket_arn = aws_s3_bucket.s3_bucket.arn
}
}
resource "aws_cloudwatch_event_rule" "successful_sign_in_rule" {
description = "Auth0 User Successfully signed in"
event_bus_name = aws_cloudwatch_event_bus.event_bridge_event_bus.arn
event_pattern = <<EOF
{
"detail-type": [
"s"
]
}
EOF
}
resource "aws_cloudwatch_event_target" "successful_sign_in_rule_target" {
rule = aws_cloudwatch_event_rule.successful_sign_in_rule.name
arn = aws_kinesis_firehose_delivery_stream.kinesis_stream.arn
}
I want to create an IAM policy document and attach two values taken from the Snowflake error integration as a trusted relationship in the policy, following Step 5 of this tutorial.
The idea is to add the SF_AWS_IAM_USER_ARN and SF_AWS_EXTERNAL_ID values produced by the Snowflake NOTIFICATION INTEGRATION to the policy.
The integration is successfully created.
This is part of my code:
resource "random_id" "random" {
byte_length = 8
}
resource "aws_sns_topic" "my_sns_topic" {
name = "${var.bucket_name}-errors-${random_id.random.id}"
}
data "aws_iam_policy_document" "snowflake_notification_error" {
version = "2008-10-17"
statement {
sid = "__default_statement_ID"
actions = [
"SNS:GetTopicAttributes",
"SNS:SetTopicAttributes",
"SNS:AddPermission",
"SNS:RemovePermission",
"SNS:DeleteTopic",
"SNS:Subscribe",
"SNS:ListSubscriptionsByTopic",
"SNS:Publish",
"SNS:Receive",
]
principals {
type = "AWS"
identifiers = ["*"]
}
resources = [aws_sns_topic.my_sns_topic.arn]
condition {
test = "StringEquals"
variable = "AWS:SourceOwner"
values = [data.aws_caller_identity.current.account_id]
}
}
statement {
sid = "allow_s3_notification"
principals {
type = "Service"
identifiers = ["s3.amazonaws.com"]
}
actions = ["SNS:Publish"]
resources = [aws_sns_topic.my_sns_topic.arn]
condition {
test = "ArnLike"
variable = "aws:SourceArn"
values = [data.aws_s3_bucket.bucket.arn]
}
}
statement {
sid = "allow_snowflake_subscription"
principals {
type = "AWS"
identifiers = [snowflake_storage_integration.integration.storage_aws_iam_user_arn]
}
actions = ["SNS:Subscribe"]
resources = [aws_sns_topic.my_sns_topic.arn]
}
# Error starts in this block I believe
# The json file looks like in the tutorial shown.
statement {
sid = "allow_error_integration"
principals {
type = "AWS"
identifiers = [snowflake_notification_integration.error_integration.aws_sns_iam_user_arn]
}
actions = ["sts:AssumeRole"]
condition {
test = "StringEquals"
variable = "sts:ExternalId"
values = [snowflake_notification_integration.error_integration.aws_sns_external_id]
}
resources = [aws_sns_topic.my_sns_topic.arn]
}
}
# ERROR HERE
resource "aws_sns_topic_policy" "snowflake_s3_pipe_notification_error" {
arn = aws_sns_topic.my_sns_topic.arn
policy = data.aws_iam_policy_document.snowflake_notification_error.json
}
The error is:
Error: InvalidParameter: Invalid parameter: Policy statement action out of service scope!
	status code: 400, request id: 5c75a285-294b-56b7-ad4d-f915d5e0b01b
with module.datalake_dev["my-snowpipe"].module.s3_integration.aws_sns_topic_policy.snowflake_notification_error, on ../snowflake/s3_integration/s3_integration/error_integration.tf line 79, in resource "aws_sns_topic_policy" "snowflake_notification_error":
79: resource "aws_sns_topic_policy" "snowflake_notification_error" {
The action "SNS:Receive" is not allowed to be in the policy statement. All of the allowed actions are listed in the AWS documentation at https://docs.aws.amazon.com/sns/latest/dg/sns-access-policy-language-api-permissions-reference.html#sns-valid-policy-actions
I am trying to create a dynamodb table and lambda trigger using Terraform. This is how I define the table, role policy and lambda trigger:
resource "aws_dynamodb_table" "filenames" {
name = local.dynamodb_table_filenames
billing_mode = "PROVISIONED"
read_capacity = 1000
write_capacity = 1000
hash_key = "filename"
stream_enabled = true
stream_view_type = "NEW_IMAGE"
#range_key = ""
attribute {
name = "filename"
type = "S"
}
tags = var.tags
}
resource "aws_iam_role_policy" "dynamodb_policy" {
policy = jsonencode(
{
Version: "2012-10-17",
Statement: [
{
Action: [
"dynamodb:GetItem",
"dynamodb:PutItem",
"dynamodb:UpdateItem",
"dynamodb:Query",
"dynamodb:GetRecords",
"dynamodb:GetShardIterator",
"dynamodb:DescribeStream",
"dynamodb:ListShards",
"dynamodb:ListStreams",
],
Effect: "Allow",
Resource: aws_dynamodb_table.filenames.arn
}
]
}
)
role = aws_iam_role.processing_lambda_role.id
}
resource "aws_lambda_event_source_mapping" "allow_dynamodb_table_to_trigger_lambda" {
event_source_arn = aws_dynamodb_table.filenames.stream_arn
function_name = aws_lambda_function.trigger_stepfunction_lambda.arn
starting_position = "LATEST"
}
I am getting this error even though I have already added the relevant policies to the role:
error creating Lambda Event Source Mapping (arn:aws:dynamodb:eu-central-12:table/tablename/stream): InvalidParameterValueException: Cannot access stream arn:aws:dynamodb:eu-central-1:299093934558:table/4tablename/stream. Please ensure the role can perform the GetRecords, GetShardIterator, DescribeStream, ListShards, and ListStreams Actions on your stream in IAM.
How can I fix this?
The stream actions apply to streams, not to tables. The ARN of a stream has the form:
arn:${Partition}:dynamodb:${Region}:${Account}:table/${TableName}/stream/${StreamLabel}
Thus, you should use (or something equivalent):
Resource: "${aws_dynamodb_table.filenames.arn}/stream/*"
or, more generally:
Resource: "${aws_dynamodb_table.filenames.arn}/*"
I created two SQS queues and their dead-letter queues with the code in my main.tf, which calls the SQS/main.tf module. I would like to destroy and create them again, but this time I also want to call IAM/iam_role.tf to create one IAM role together with the policy documents. I don't know how to specify this in my main.tf so that the resources section of the policy document covers both SQS queues ("CloudTrail_SQS_Data_Event" and "cloudTrail_SQS_Management_Event") and the S3 resource ARNs give the role access to the two buckets used for the queues ("cloudtrail-management-event-logs" and "aws-cloudtrail143-sqs-logs").
SQS/main.tf
resource "aws_sqs_queue" "CloudTrail_SQS"{
name = var.sqs_queue_name
redrive_policy = jsonencode({
deadLetterTargetArn = aws_sqs_queue.CloudTrail_SQS_DLQ.arn
maxReceiveCount = 4
})
}
resource "aws_sqs_queue" "CloudTrail_SQS_DLQ"{
name = var.dead_queue_name
IAM/iam_role.tf
resource "aws_iam_role" "access_role" {
name = var.role_name
description = var.description
assume_role_policy = data.aws_iam_policy_document.trust_relationship.json
}
trust policy
data "aws_iam_policy_document" "trust_relationship" {
statement {
sid = "AllowAssumeRole"
actions = ["sts:AssumeRole"]
principals {
type = "AWS"
identifiers = [var.account_id]
}
condition {
test = "StringEquals"
variable = "sts:ExternalId"
values = [var.external_id]
}
}
}
data "aws_iam_policy_document" "policy_document"{
statement{
actions = [
"sqs:GetQueueUrl",
"sqs:ReceiveMessage",
"sqs:SendMessage"
]
effect = "Allow"
resources = aws_sqs_queue.CloudTrail_SQS.arn
}
statement {
actions = ["sqs:ListQueues"]
effect = "Allow"
resources = ["*"]
}
statement {
actions = ["s3:GetObject", "s3:GetBucketLocation"]
resources = [
"arn:aws:s3:::${var.cloudtrail_event_log_bucket_name}/*"
]
effect = "Allow"
}
statement {
actions = ["s3:ListBucket"]
resources = [
"arn:aws:s3:::${var.cloudtrail_event_log_bucket_name}"
]
effect = "Allow"
}
statement {
actions = ["kms:Decrypt", "kms:GenerateDataKey","kms:DescribeKey" ]
effect = "Allow"
resources = [var.kms_key_arn]
}
}
main.tf
module "data_events"{
source = "../SQS"
cloudtrail_event_log_bucket_name = "aws-cloudtrail143-sqs-logs"
sqs_queue_name = "CloudTrail_SQS_Data_Event"
dead_queue_name = "CloudTrail_DLQ_Data_Event"
}
module "management_events"{
source = "../SQS"
cloudtrail_event_log_bucket_name = "cloudtrail-management-event-logs"
sqs_queue_name = "cloudTrail_SQS_Management_Event"
dead_queue_name = "cloudTrail_DLQ_Management_Event"
}
The role would be created as shown below. But your question has so many mistakes and so much missing information that it's impossible to provide full, working code, so treat the code below as a template that you need to adjust for your use case.
resource "aws_iam_role" "access_role" {
name = var.role_name
description = var.description
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Sid = ""
Principal = {
Service = "ec2.amazonaws.com"
}
},
]
})
inline_policy {
name = "allow-access-to-s3-sqs"
policy = data.aws_iam_policy_document.policy_document.json
}
}
data "aws_iam_policy_document" "policy_document"{
statement{
actions = [
"sqs:GetQueueUrl",
"sqs:ReceiveMessage",
"sqs:SendMessage"
]
effect = "Allow"
resources = [
module.data_events.sqs.arn,
module.management_events.sqs.arn,
]
}
statement {
actions = ["sqs:ListQueues"]
effect = "Allow"
resources = ["*"]
}
statement {
actions = ["s3:GetObject", "s3:GetBucketLocation"]
resources = [
"arn:aws:s3:::aws-cloudtrail143-sqs-logs/*"
"arn:aws:s3:::cloudtrail-management-event-logs/*"
]
effect = "Allow"
}
statement {
actions = ["s3:ListBucket"]
resources = [
"arn:aws:s3:::aws-cloudtrail143-sqs-logs",
"arn:aws:s3:::cloudtrail-management-event-logs"
]
effect = "Allow"
}
statement {
actions = ["kms:Decrypt", "kms:GenerateDataKey","kms:DescribeKey" ]
effect = "Allow"
resources = [var.kms_key_arn]
}
}
You can use Terraform outputs and data sources. In this case, declare outputs for the queues in the SQS folder, read them in the IAM folder, and use them there.
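A minimal sketch of that wiring using module outputs, assuming the SQS module exposes its queue through an output named sqs (which is what the module.data_events.sqs.arn references above expect); the IAM-module variable name is made up for illustration:

# SQS/outputs.tf - expose the queue so callers can read its attributes
output "sqs" {
  value = aws_sqs_queue.CloudTrail_SQS
}

# main.tf - hand both queue ARNs to the IAM module
module "iam" {
  source    = "../IAM"
  role_name = "cloudtrail-sqs-access" # illustrative value
  sqs_queue_arns = [
    module.data_events.sqs.arn,
    module.management_events.sqs.arn,
  ]
  # ...plus the other variables the IAM module expects
  # (description, account_id, external_id, kms_key_arn, bucket names)
}

# IAM/variables.tf
variable "sqs_queue_arns" {
  type = list(string)
}

# IAM/iam_role.tf - reference the variable instead of the queue resources directly
# (only the SQS statement shown; keep the S3/KMS statements as before)
data "aws_iam_policy_document" "policy_document" {
  statement {
    actions   = ["sqs:GetQueueUrl", "sqs:ReceiveMessage", "sqs:SendMessage"]
    effect    = "Allow"
    resources = var.sqs_queue_arns
  }
}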
I want to create a policy so that a specific AWS role (not in the same account), let's say arn:aws:iam::123123123123:role/sns-read-role, can subscribe to and receive messages from my SNS topic.
Going by the aws_sns_topic_policy example in the official Terraform docs, it would be:
resource "aws_sns_topic" "test" {
name = "my-topic-with-policy"
}
resource "aws_sns_topic_policy" "default" {
arn = aws_sns_topic.test.arn
policy = data.aws_iam_policy_document.sns_topic_policy.json
}
data "aws_iam_policy_document" "sns_topic_policy" {
statement {
actions = [
"SNS:Subscribe",
"SNS:Receive"
]
condition {
test = "StringEquals"
variable = "AWS:SourceOwner"
values = [
123123123123
]
}
effect = "Allow"
principals {
type = "AWS"
identifiers = ["*"]
}
resources = [
aws_sns_topic.test.arn
]
}
}
But this would translate to arn:aws:iam::123123123123:root and filter only on account-id.
From the AWS documentation on JSON policy elements: Principal, I understand the syntax is
"Principal": { "AWS": "arn:aws:iam::AWS-account-ID:role/role-name" }
Adding the role in the condition like this
condition {
  test     = "StringEquals"
  variable = "AWS:SourceOwner"
  values = [
    "arn:aws:iam::123123123123:role/sns-read-role"
  ]
}
does not work.
It would make sense to add the role to the principal like this
principals {
  type        = "AWS"
  identifiers = ["arn:aws:iam::123123123123:role/sns-read-role"]
}
When I try to subscribe, I get an AuthorizationError: "Couldn't subscribe to topic..."
Do I need the condition together with the principal? Why even bother with the condition if you can use the principal in the first place?
After some experimenting, I found that I don't need the condition. This works for me:
resource "aws_sns_topic" "test" {
name = "my-topic-with-policy"
}
resource "aws_sns_topic_policy" "default" {
arn = aws_sns_topic.test.arn
policy = data.aws_iam_policy_document.sns_topic_policy.json
}
data "aws_iam_policy_document" "sns_topic_policy" {
statement {
actions = [
"SNS:Subscribe",
"SNS:Receive"
]
effect = "Allow"
principals {
type = "AWS"
identifiers = [
"arn:aws:iam::123123123123:role/sns-read-role"
]
}
resources = [
aws_sns_topic.test.arn
]
}
}
In case you want to use parameters for your module:
principals {
  type = "AWS"
  identifiers = [
    "${var.account_arn}:role/${var.role}"
  ]
}
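For completeness, that parameterised snippet assumes variables along these lines (the names come from the snippet; the types and example values are mine):

variable "account_arn" {
  type        = string
  description = "IAM ARN prefix of the subscribing account, e.g. arn:aws:iam::123123123123"
}

variable "role" {
  type        = string
  description = "Name of the role allowed to subscribe to the topic, e.g. sns-read-role"
}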