Unable to find subscription for given ARN

I am testing my AWS terraform configuration with LocalStack. The final goal is to make a queue listen to my topic.
I am running Localstack with the following command:
docker run --rm -it -p 4566:4566 localstack/localstack
After running the command terraform destroy I get the error message:
aws_sns_topic_subscription.subscription: Destroying... [id=arn:aws:sns:us-east-1:000000000000:topic:a0d47652-3ae4-46df-9b63-3cb6e154cfcd]
╷
│ Error: error waiting for SNS topic subscription (arn:aws:sns:us-east-1:000000000000:topic:a0d47652-3ae4-46df-9b63-3cb6e154cfcd) deletion: InvalidParameter: Unable to find subscription for given ARN
│ status code: 400, request id: 2168e636
│
│
╵
I have run the code against the real AWS without a problem.
Here is the code for the Terraform file:
terraform {
  required_version = ">= 0.12.26"
}

provider "aws" {
  region                      = "us-east-1"
  s3_force_path_style         = true
  skip_credentials_validation = true
  skip_metadata_api_check     = true
  skip_requesting_account_id  = true

  endpoints {
    sns = "http://localhost:4566"
    sqs = "http://localhost:4566"
  }
}

resource "aws_sqs_queue" "queue" {
  name = "queue"
}

resource "aws_sns_topic" "topic" {
  name = "topic"
}

resource "aws_sns_topic_subscription" "subscription" {
  endpoint  = aws_sqs_queue.queue.arn
  protocol  = "sqs"
  topic_arn = aws_sns_topic.topic.arn
}

Sadly this is an issue on the AWS side rather than in your configuration; there is little to do besides raising a ticket with AWS. See https://stackoverflow.com/a/64568018/6085193:
"When you delete a topic, subscriptions to the topic will not be "deleted" immediately, but become orphans. SNS will periodically clean these orphans, usually every 10 hours, but not guaranteed. If you create a new topic with the same topic name before these orphans are cleared up, the new topic will not inherit these orphans. So, no worry about them"

This has since been fixed in LocalStack; see https://github.com/localstack/localstack/issues/4022
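Since the fix has landed upstream, pulling a newer LocalStack image before re-running the destroy should be enough; for example:

# Fetch the latest LocalStack image, then start it the same way as before.
docker pull localstack/localstack
docker run --rm -it -p 4566:4566 localstack/localstack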

Related

Error in Cloudwatch events to Kinesis with Terraform

I am writing a Terraform script that takes Security Hub events from CloudWatch Events and sends them to a Kinesis Data Stream. Here's my code:
resource "aws_cloudwatch_event_rule" "sechub" {
name = "SecHub"
description = "Security Hub findings to CW log group"
event_bus_name = "default"
event_pattern = <<EOF
{
"source": ["aws.securityhub"],
"detail-type": ["Security Hub Findings - Imported"]
}
EOF
tags = {
Name = "Sec-Hub-findings"
Application = "splunk-integartion"
}
}
resource "aws_cloudwatch_event_target" "cw_target" {
rule = aws_cloudwatch_event_rule.sechub.name
target_id = "SendToKinesis"
arn = aws_kinesis_stream.sechub_stream.arn
}
This is the error I am getting:
Error: creating EventBridge Target (SecHub-SendToKinesis): ValidationException: Rule SecHub does not have RoleArn assigned to invoke target arn:aws:kinesis:eu-west-1:959718193161:stream/sechub-kinesis-stream.
│ status code: 400, request id: 4f8304c6-cc61-4b53-864b-584d68060667
│
│ with aws_cloudwatch_event_target.cw_target,
│ on main.tf line 20, in resource "aws_cloudwatch_event_target" "cw_target":
│ 20: resource "aws_cloudwatch_event_target" "cw_target" {
How do I provide a role to the event target so it can invoke Kinesis? I assumed this happened internally, and I don't see any Terraform example with an explicit role creation either.
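For what it's worth, the role is not created internally; you have to create one that EventBridge can assume and pass it via role_arn on the target. A minimal sketch, reusing the stream and rule from the question; the role name and policy name are illustrative, not from the original:

# Role that EventBridge (events.amazonaws.com) can assume to invoke the target.
resource "aws_iam_role" "events_to_kinesis" {
  name = "sechub-events-to-kinesis" # hypothetical name
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "events.amazonaws.com" }
    }]
  })
}

# Allow that role to write records to the target stream.
resource "aws_iam_role_policy" "events_to_kinesis" {
  name = "put-records" # hypothetical name
  role = aws_iam_role.events_to_kinesis.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = ["kinesis:PutRecord", "kinesis:PutRecords"]
      Resource = aws_kinesis_stream.sechub_stream.arn
    }]
  })
}

resource "aws_cloudwatch_event_target" "cw_target" {
  rule      = aws_cloudwatch_event_rule.sechub.name
  target_id = "SendToKinesis"
  arn       = aws_kinesis_stream.sechub_stream.arn
  role_arn  = aws_iam_role.events_to_kinesis.arn # this is what the error asks for
}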

How to fix 403 error when applying Terraform?

I created a new EC2 instance on AWS.
I am trying to apply a Terraform configuration against AWS and I'm getting an error.
I did not create the AMI beforehand, so I'm not sure if that is the issue.
I checked my keypair and ensured it is correct.
I also checked the API details and they are correct too. I'm using a college AWS App account where the API details are the same for all users. Not sure if that would be an issue.
This is the error I'm getting after running terraform apply:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
│ status code: 403, request id: be2bf9ee-3aa4-401a-bc8b-f15c8a1e63d0
│
│ with provider["registry.terraform.io/hashicorp/aws"],
│ on main.tf line 10, in provider "aws":
│ 10: provider "aws" {
My main.tf file:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
  }
  required_version = ">= 0.14.9"
}

provider "aws" {
  profile = "default"
  region  = "eu-west-1"
}

resource "aws_instance" "app_server" {
  ami           = "ami-04505e74c0741db8d"
  instance_type = "t2.micro"
  key_name      = "<JOEY'S_KEYPAIR>"
  tags = {
    Name = "joey_terraform"
  }
}
Credentials:
AWS Access Key ID [****************LRMC]:
AWS Secret Access Key [****************6OO3]:
Default region name [eu-west-1]:
Default output format [None]:
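A 403 InvalidClientTokenId means AWS is rejecting the keys themselves, so it helps to test the credentials outside Terraform first; for example:

# Asks STS who the configured keys belong to; if this fails with
# InvalidClientTokenId, the keys (not Terraform) are the problem.
aws sts get-caller-identity --profile default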

Terraform AWS not accessing localstack

I'm having trouble getting a terraform AWS provider to talk to localstack. Whatever I try I just get the same error:
Error: error configuring Terraform AWS Provider: error validating provider credentials: error calling sts:GetCallerIdentity: InvalidClientTokenId: The security token included in the request is invalid.
status code: 403, request id: dc96c65d-84a7-4e64-947d-833195464538
This error suggests that the provider is making contact with an HTTP server but the credentials are being rejected (as per any 403). You might imagine the problem is that I'm feeding in the wrong credentials (through environment variables).
However, the hostname local-aws exists in my /etc/hosts file, while blahblahblah does not, and if I swap the endpoint to point to http://blahblahblah:4566 I still get the same 403. So I think the problem is that the provider isn't using my local endpoint at all. I can't work out why.
resource "aws_secretsmanager_secret_version" "foo" {
secret_id = aws_secretsmanager_secret.foo.id
secret_string = "bar"
}
resource "aws_secretsmanager_secret" "foo" {
name = "rabbitmq_battery_emulator"
}
provider "aws" {
region = "eu-west-2"
endpoints {
secretsmanager = "http://local-aws:4566"
}
}
First, check that LocalStack is configured to run sts. In docker-compose this is just the SERVICES environment variable:
services:
  local-aws:
    image: localstack/localstack
    environment:
      EDGE_PORT: 4566
      SERVICES: secretsmanager,sts
Then make sure that you set the sts endpoint as well as the service you require:
provider "aws" {
region = "eu-west-2"
endpoints {
sts = "http://local-aws:4566"
secretsmanager = "http://local-aws:4566"
}
}
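You can also sanity-check that the sts endpoint is reachable before running Terraform; a quick check, assuming the same local-aws hostname:

# LocalStack's sts returns a dummy identity; a 403 or connection error here
# means the endpoint or SERVICES config is still wrong.
aws --endpoint-url=http://local-aws:4566 sts get-caller-identity --region eu-west-2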
In addition to the SERVICES and sts endpoint config mentioned by @philip-couling, I also had to remove a terraform block from my main.tf:
#terraform {
#  backend "s3" {
#    bucket = "valid-bucket"
#    key    = "terraform/state/account/terraform.tfstate"
#    region = "eu-west-1"
#  }
#  required_providers {
#    local = {
#      version = "~> 2.1"
#    }
#  }
#}
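Alternatively, rather than removing the backend, it can be pointed at LocalStack as well. A sketch using the pre-1.6 s3 backend arguments, assuming s3 has been added to SERVICES and the bucket has been created in LocalStack first:

terraform {
  backend "s3" {
    bucket                      = "valid-bucket"
    key                         = "terraform/state/account/terraform.tfstate"
    region                      = "eu-west-1"
    endpoint                    = "http://local-aws:4566" # LocalStack edge port
    force_path_style            = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
  }
}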

How to create EC2 instance on LocalStack with terraform?

I am trying to run an EC2 instance on LocalStack using Terraform.
After 50 minutes of trying to create the instance,
I got this response from terraform apply:
Error: error getting EC2 Instance (i-cf4da152ddf3500e1) Credit Specifications: SerializationError: failed to unmarshal error message
	status code: 500, request id:
caused by: UnmarshalError: failed to unmarshal error message
caused by: expected element type <Response> but have <title>

  on main.tf line 34, in resource "aws_instance" "example":
  34: resource "aws_instance" "example" {
For LocalStack and Terraform v0.12.18 I use this configuration:
provider "aws" {
access_key = "mock_access_key"
region = "us-east-1"
s3_force_path_style = true
secret_key = "mock_secret_key"
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4567"
cloudformation = "http://localhost:4581"
cloudwatch = "http://localhost:4582"
dynamodb = "http://localhost:4569"
es = "http://localhost:4578"
firehose = "http://localhost:4573"
iam = "http://localhost:4593"
kinesis = "http://localhost:4568"
lambda = "http://localhost:4574"
route53 = "http://localhost:4580"
redshift = "http://localhost:4577"
s3 = "http://localhost:4572"
secretsmanager = "http://localhost:4584"
ses = "http://localhost:4579"
sns = "http://localhost:4575"
sqs = "http://localhost:4576"
ssm = "http://localhost:4583"
stepfunctions = "http://localhost:4585"
sts = "http://localhost:4592"
ec2 = "http://localhost:4597"
}
}
resource "aws_instance" "example" {
ami = "ami-0c55b159cbfafe1f0"
instance_type = "t2.micro"
}
I run LocalStack with docker-compose up, using the latest setup straight from GitHub (https://github.com/localstack/localstack).
From the logs I have seen that the EC2-related endpoint was set up.
I would appreciate any advice that would help me run EC2 on LocalStack.
This works fine with the below docker image of LocalStack:
docker run -it -p 4500-4600:4500-4600 -p 8080:8080 --expose 4572 localstack/localstack:0.11.1
resource "aws_instance" "web" {
ami = "ami-0d57c0143330e1fa7"
instance_type = "t2.micro"
tags = {
Name = "HelloWorld"
}
}
provider "aws" {
region = "us-east-1"
s3_force_path_style = true
skip_credentials_validation = true
skip_metadata_api_check = true
skip_requesting_account_id = true
endpoints {
apigateway = "http://localhost:4567"
cloudformation = "http://localhost:4581"
cloudwatch = "http://localhost:4582"
dynamodb = "http://localhost:4569"
es = "http://localhost:4578"
firehose = "http://localhost:4573"
iam = "http://localhost:4593"
kinesis = "http://localhost:4568"
lambda = "http://localhost:4574"
route53 = "http://localhost:4580"
redshift = "http://LOCALHOST:4577"
s3 = "http://localhost:4572"
secretsmanager = "http://localhost:4584"
ses = "http://localhost:4579"
sns = "http://localhost:4575"
sqs = "http://localhost:4576"
ssm = "http://localhost:4583"
stepfunctions = "http://localhost:4585"
sts = "http://localhost:4592"
ec2 = "http://localhost:4597"
}
}
terraform apply
aws_instance.web: Destroying... [id=i-099392def6b574255]
aws_instance.web: Still destroying... [id=i-099392def6b574255, 10s elapsed]
aws_instance.web: Destruction complete after 10s
aws_instance.web: Creating...
aws_instance.web: Still creating... [10s elapsed]
aws_instance.web: Creation complete after 12s [id=i-9c942d138970d44a4]
Apply complete! Resources: 1 added, 0 changed, 1 destroyed.
Note: it's a dummy instance, so it won't be available for SSH and the like; however, it is suitable for testing the terraform apply/destroy use case on EC2.
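To confirm the dummy instance actually exists, you can query the EC2 endpoint directly; assuming the per-service port layout from the provider block above:

# Lists instances known to LocalStack's EC2 mock on its dedicated port.
aws --endpoint-url=http://localhost:4597 ec2 describe-instances --region us-east-1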

SNS topic subscription to AmazonIpSpaceChanged using terraform

I am trying to subscribe to the AWS AmazonIpSpaceChanged SNS topic using Terraform. However, I keep getting the error below. Here is the SNS topic subscription:
resource "aws_sns_topic_subscription" "aws_ip_change_sns_subscription" {
topic_arn = "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged"
protocol = "lambda"
endpoint = "${aws_lambda_function.test_sg_lambda_function.arn}"
}
Error:
* module.test-lambda.aws_sns_topic_subscription.aws_ip_change_sns_subscription: 1 error(s) occurred:
* aws_sns_topic_subscription.aws_ip_change_sns_subscription: Error creating SNS topic: InvalidParameter: Invalid parameter: TopicArn
status code: 400, request id: 3daa2940-8d4b-5fd8-86e7-7b074a16ada9
I tried the same thing using the AWS CLI; it failed the first time, when I didn't include the option --region us-east-1, but once that was included it was able to subscribe just fine.
Any ideas?
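For reference, a subscribe call with the region pinned looks like this; the Lambda ARN below is a placeholder, not from the question:

aws sns subscribe \
  --region us-east-1 \
  --topic-arn arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:us-east-1:123456789012:function:my-function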
I know it's an old question with no accepted answer, but maybe this will help someone.
The SNS topic arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged is only available in the region us-east-1, so you need to use a provider within Terraform that is configured for that region.
You also need to give permissions to the SNS topic to invoke the Lambda function (not sure if you'd just left this off the question).
This also works if your lambda function is defined in a different region.
provider "aws" {
region = "{your target region}"
}
provider "aws" {
alias = "us_east_1"
region = "us-east-1"
}
resource "aws_lambda_function" "my_function" {
# This uses your default target region
:
:
}
resource "aws_lambda_permission" "lambda_permission" {
# This uses your default target region
statement_id = "AllowExecutionFromSNS"
action = "lambda:InvokeFunction"
function_name = aws_lambda_function.my_function.function_name
principal = "sns.amazonaws.com"
source_arn = "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged"
}
resource "aws_sns_topic_subscription" "aws_ip_change_sns_subscription" {
# This needs to use the same region as the SNS topic
provider = aws.us_east_1
topic_arn = "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged"
protocol = "lambda"
endpoint = aws_lambda_function.my_function.arn
}
Your topic_arn is hardcoded to region us-east-1:
arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged
So when AWS_DEFAULT_REGION or a similar configuration points at another region, the call fails. That's why the subscription works once you nominate the region explicitly.
To avoid hardcoding values such as the region and account id, you can do this:
data "aws_caller_identity" "current" {}
variable "region" {
type = "string"
default = "us-east-1"
}
resource "aws_sns_topic_subscription" "aws_ip_change_sns_subscription" {
topic_arn = "arn:aws:sns:${var.region}:${data.aws_caller_identity.current.account_id}:AmazonIpSpaceChanged"
protocol = "lambda"
endpoint = "${aws_lambda_function.test_sg_lambda_function.arn}"
}
With that, you are more flexible to run the code in another region or another AWS account as well. (One caveat: the AmazonIpSpaceChanged topic lives in Amazon's own account, 806199016981, so the aws_caller_identity interpolation only fits topics in your own account; for this particular topic the account id has to stay hardcoded.)