Error in CloudWatch Events to Kinesis with Terraform

I am writing a Terraform script that takes Security Hub events from CloudWatch Events and sends them to a Kinesis Data Stream. Here's my code:
resource "aws_cloudwatch_event_rule" "sechub" {
name = "SecHub"
description = "Security Hub findings to CW log group"
event_bus_name = "default"
event_pattern = <<EOF
{
"source": ["aws.securityhub"],
"detail-type": ["Security Hub Findings - Imported"]
}
EOF
tags = {
Name = "Sec-Hub-findings"
Application = "splunk-integartion"
}
}
resource "aws_cloudwatch_event_target" "cw_target" {
rule = aws_cloudwatch_event_rule.sechub.name
target_id = "SendToKinesis"
arn = aws_kinesis_stream.sechub_stream.arn
}
This is the error I am getting:
Error: creating EventBridge Target (SecHub-SendToKinesis): ValidationException: Rule SecHub does not have RoleArn assigned to invoke target arn:aws:kinesis:eu-west-1:959718193161:stream/sechub-kinesis-stream.
│ status code: 400, request id: 4f8304c6-cc61-4b53-864b-584d68060667
│
│ with aws_cloudwatch_event_target.cw_target,
│ on main.tf line 20, in resource "aws_cloudwatch_event_target" "cw_target":
│ 20: resource "aws_cloudwatch_event_target" "cw_target" {
How do I provide a role to the event target so it can successfully invoke the Kinesis stream? I assumed this happened internally. I also don't see any example in the Terraform docs that shows an explicit role creation.
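As the error says, EventBridge doesn't assign this role implicitly; the target needs an explicit role_arn that the events service can assume to write to the stream. A minimal sketch of what that could look like (the role and policy names are made up, and it assumes the aws_kinesis_stream.sechub_stream resource from the snippet above):

data "aws_iam_policy_document" "events_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["events.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "sechub_to_kinesis" {
  name               = "sechub-to-kinesis" # hypothetical name
  assume_role_policy = data.aws_iam_policy_document.events_assume_role.json
}

data "aws_iam_policy_document" "put_to_kinesis" {
  statement {
    actions   = ["kinesis:PutRecord", "kinesis:PutRecords"]
    resources = [aws_kinesis_stream.sechub_stream.arn]
  }
}

resource "aws_iam_role_policy" "sechub_to_kinesis" {
  name   = "sechub-to-kinesis" # hypothetical name
  role   = aws_iam_role.sechub_to_kinesis.id
  policy = data.aws_iam_policy_document.put_to_kinesis.json
}

resource "aws_cloudwatch_event_target" "cw_target" {
  rule      = aws_cloudwatch_event_rule.sechub.name
  target_id = "SendToKinesis"
  arn       = aws_kinesis_stream.sechub_stream.arn
  role_arn  = aws_iam_role.sechub_to_kinesis.arn # the missing piece
}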

Related

Terraform - Error when Creating Lambda Versions

I'm trying to build an AWS-Terraform-GitHub pipeline for a serverless app. In Terraform I define a Lambda function, and on push I want to update the function code and create a new function version (to be used with an alias at a later date).
This is my code:
data "archive_file" "zip" {
type = "zip"
source_file = "${path.module}/lambda/hello.js"
output_path = "${path.module}/lambda/hello.zip"
}
resource "aws_lambda_function" "hello_terraform" {
filename = data.archive_file.zip.output_path
source_code_hash = filebase64sha256(data.archive_file.zip.output_path)
function_name = var.project_name
role = aws_iam_role.lambda_role.arn
handler = "hello.handler"
runtime = "nodejs12.x"
timeout = 10
publish = true
}
data "aws_iam_policy_document" "lambda_assume_role_policy" {
statement {
actions = ["sts:AssumeRole"]
principals {
type = "Service"
identifiers = ["lambda.amazonaws.com"]
}
}
}
resource "aws_iam_role" "lambda_role" {
name = "${var.project_name}-lambda-role"
assume_role_policy = data.aws_iam_policy_document.lambda_assume_role_policy.json
}
When I do the initial push, or a change that does not involve the Lambda function code, everything works. However, when I make a code modification I get this error in the GitHub workflow (on terraform apply):
│ Error: Error publishing Lambda Function (lambda-terraform-github-actions) version: ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-1:961736190498:function:lambda-terraform-github-actions
│ {
│ RespMetadata: {
│ StatusCode: 409,
│ RequestID: "d8c86252-a471-46be-9662-751fc935083c"
│ },
│ Message_: "The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-1:961736190498:function:lambda-terraform-github-actions",
│ Type: "User"
│ }
│
│ with aws_lambda_function.hello_terraform,
│ on lambda.tf line 9, in resource "aws_lambda_function" "hello_terraform":
│ 9: resource "aws_lambda_function" "hello_terraform" {
│
╵
Operation failed: failed running terraform apply (exit 1)
I tried adding depends_on, but I still have the same problem.
I also tried the same thing in a local environment, running terraform apply on the same code without the pipeline, but the same thing happens.
If I remove publish, terraform apply works and the function gets updated, but of course there is no new function version.

How to create S3 bucket and Lambda in Terraform when they are codependent on each other?

I am creating a Lambda function that has its handler code stored in an S3 bucket. I need to create these resources and I am using Terraform.
It appears the S3 bucket is dependent on the Lambda's ARN output so that I can set the correct Principal config for the bucket.
The Lambda is also dependent on the S3 bucket existing so I can configure the bucket that stores the handler code.
I have two modules creating the required resources:
# S3 Bucket module
resource "aws_s3_bucket" "s3-lambda" {
  bucket = var.bucket_name
  acl    = "private"
  policy = data.aws_iam_policy_document.s3_lambda_permissions.json

  tags = {
    Name        = var.tag_name
    Environment = var.env_name
  }
}
# Lambda module
resource "aws_lambda_function" "redirect_lambda" {
  s3_bucket     = var.bucket_name
  s3_key        = var.key
  handler       = var.handler
  runtime       = var.runtime
  role          = aws_iam_role.redirect_lambda.arn
  function_name = "redirect_lambda-${var.env_name}"
  publish       = true
}
I am then calling these modules in my main.tf:
module "qr_redirect_lambda" {
source = "./modules/qr-redirect"
env_name = var.env_name
bucket_name = var.qr_redirect_lambda_bucket_name
key = var.lambda_key
runtime = var.lambda_runtime_16
handler = var.lambda_handler
tag_name = "tag name
}
How can I create these 2 resources that are codependent on each other?
Error output:
Error: error creating Lambda Function (1): InvalidParameterValueException: Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist
│ {
│ RespMetadata: {
│ StatusCode: 400,
│ RequestID: "xxx-xxx"
│ },
│ Message_: "Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist",
│ Type: "User"
│ }
│
│ with module.qr_redirect_lambda.aws_lambda_function.qr_redirect_lambda,
│ on modules/qr-lambda/main.tf line 1, in resource "aws_lambda_function" "qr_redirect_lambda":
│ 1: resource "aws_lambda_function" "qr_redirect_lambda" {
I think you can do this in three stages, instead of two (a sketch follows the list):
1. Create the bucket without a bucket policy.
2. Create the Lambda. You can use depends_on to create the Lambda only after the bucket.
3. Use aws_s3_bucket_policy to create the bucket policy.
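A minimal sketch of that three-stage layout, reusing the names from the question (it assumes the data.aws_iam_policy_document.s3_lambda_permissions document still references the Lambda's ARN, as the original bucket policy did):

# Stage 1: bucket without an inline policy, so it no longer depends on the Lambda.
resource "aws_s3_bucket" "s3_lambda" {
  bucket = var.bucket_name
}

# Stage 2: the Lambda. Referencing the bucket resource (or an explicit
# depends_on) orders it after the bucket.
resource "aws_lambda_function" "redirect_lambda" {
  s3_bucket     = aws_s3_bucket.s3_lambda.id
  s3_key        = var.key
  handler       = var.handler
  runtime       = var.runtime
  role          = aws_iam_role.redirect_lambda.arn
  function_name = "redirect_lambda-${var.env_name}"
  publish       = true
}

# Stage 3: attach the policy as its own resource; by now the Lambda's ARN exists.
resource "aws_s3_bucket_policy" "s3_lambda" {
  bucket = aws_s3_bucket.s3_lambda.id
  policy = data.aws_iam_policy_document.s3_lambda_permissions.json
}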

How to fix 403 error when applying Terraform?

I created a new EC2 instance on AWS.
I am trying to run Terraform on the AWS server and I'm getting an error.
I didn't create the AMI beforehand, so I'm not sure if that is the issue.
I checked my key pair and ensured it is correct.
I also checked the API details and they are correct too. I'm using a college AWS App account where the API details are the same for all users; not sure if that would be an issue.
This is the error I'm getting after running terraform apply:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
│ status code: 403, request id: be2bf9ee-3aa4-401a-bc8b-f15c8a1e63d0
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 10, in provider "aws":
│ 10: provider "aws" {
My main.tf file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }

  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-04505e74c0741db8d"
  instance_type = "t2.micro"
  key_name      = "<JOEY'S_KEYPAIR>"

  tags = {
    Name = "joey_terraform"
  }
}
Credentials:
AWS Access Key ID [****************LRMC]:
AWS Secret Access Key [****************6OO3]:
Default region name [eu-west-1]:
Default output format [None]:
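Given the shared college account, one thing worth checking: InvalidClientTokenId frequently means the credentials are temporary (access key IDs starting with "ASIA") and must be accompanied by a session token, which aws configure never prompts for. A sketch of one way to pass it, assuming that is the cause here (the variable is hypothetical; the token could equally go in ~/.aws/credentials as aws_session_token or in the AWS_SESSION_TOKEN environment variable):

variable "aws_session_token" {
  type      = string
  sensitive = true
}

provider "aws" {
  profile = "default"
  region  = "eu-west-1"
  # Hypothetical: only needed if the account hands out temporary credentials.
  token = var.aws_session_token
}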

Using Terraform to create an AWS EC2 bastion

I am trying to spin up an AWS bastion host on AWS EC2, using the Terraform module provided by Guimove. I am getting stuck on the bastion_host_key_pair field. I need to provide a key pair that can be used to launch the EC2 template, but the bucket (aws_s3_bucket.bucket) that needs to contain the public key of the key pair gets created inside the module, so the key isn't there when it tries to launch the instance, and it fails. It feels like a chicken-and-egg scenario, so I am obviously doing something wrong. What am I doing wrong?
Error:
╷
│ Error: Error creating Auto Scaling Group: AccessDenied: You are not authorized to use launch template: lt-004b0af2895c684b3
│ status code: 403, request id: c6096e0d-dc83-4384-a036-f35b8ca292f8
│
│ with module.bastion.aws_autoscaling_group.bastion_auto_scaling_group,
│ on .terraform\modules\bastion\main.tf line 300, in resource "aws_autoscaling_group" "bastion_auto_scaling_group":
│ 300: resource "aws_autoscaling_group" "bastion_auto_scaling_group" {
│
╵
Terraform:
resource "tls_private_key" "bastion_host" {
algorithm = "RSA"
rsa_bits = 4096
}
resource "aws_key_pair" "bastion_host" {
key_name = "bastion_user"
public_key = tls_private_key.bastion_host.public_key_openssh
}
resource "aws_s3_bucket_object" "bucket_public_key" {
bucket = aws_s3_bucket.bucket.id
key = "public-keys/${aws_key_pair.bastion_host.key_name}.pub"
content = aws_key_pair.bastion_host.public_key
kms_key_id = aws_kms_key.key.arn
}
module "bastion" {
source = "Guimove/bastion/aws"
bucket_name = "${var.identifier}-ssh-bastion-bucket-${var.env}"
region = var.aws_region
vpc_id = var.vpc_id
is_lb_private = "false"
bastion_host_key_pair = aws_key_pair.bastion_host.key_name
create_dns_record = "false"
elb_subnets = var.public_subnet_ids
auto_scaling_group_subnets = var.public_subnet_ids
instance_type = "t2.micro"
tags = {
Name = "SSH Bastion Host - ${var.identifier}-${var.env}",
}
}
I had the same issue. The fix was to go into AWS Marketplace, accept the EULA, and subscribe to the AMI I was trying to use.

Unable to find subscription for given ARN

I am testing my AWS Terraform configuration with LocalStack. The end goal is to make a queue listen to my topic.
I am running Localstack with the following command:
docker run --rm -it -p 4566:4566 localstack/localstack
After running the command terraform destroy I get the error message:
aws_sns_topic_subscription.subscription: Destroying... [id=arn:aws:sns:us-east-1:000000000000:topic:a0d47652-3ae4-46df-9b63-3cb6e154cfcd]
╷
│ Error: error waiting for SNS topic subscription (arn:aws:sns:us-east-1:000000000000:topic:a0d47652-3ae4-46df-9b63-3cb6e154cfcd) deletion: InvalidParameter: Unable to find subscription for given ARN
│ status code: 400, request id: 2168e636
│
│
╵
I have run the code against real AWS without a problem.
Here is the Terraform file:
terraform {
  required_version = ">= 0.12.26"
}

provider "aws" {
  region                      = "us-east-1"
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    sns = "http://localhost:4566"
    sqs = "http://localhost:4566"
  }
}
resource "aws_sqs_queue" "queue" {
name = "queue"
}
resource "aws_sns_topic" "topic" {
name = "topic"
}
resource "aws_sns_topic_subscription" "subscription" {
endpoint = aws_sqs_queue.queue.arn
protocol = "sqs"
topic_arn = aws_sns_topic.topic.arn
}
Sadly this is an issue with AWS itself; you have to create a ticket. See https://stackoverflow.com/a/64568018/6085193:
"When you delete a topic, subscriptions to the topic will not be "deleted" immediately, but become orphans. SNS will periodically clean these orphans, usually every 10 hours, but not guaranteed. If you create a new topic with the same topic name before these orphans are cleared up, the new topic will not inherit these orphans. So, no worry about them"
This has since been fixed in LocalStack; see https://github.com/localstack/localstack/issues/4022.