Error: Error creating CloudTrail: InvalidCloudWatchLogsLogGroupArnException

I am trying to create a CloudTrail trail for an organization in AWS. When I run a plan for a targeted apply of the following resource:
resource "aws_cloudtrail" "nfcisbenchmark" {
name = "nf-cisbenchmark-${terraform.workspace}"
s3_bucket_name = aws_s3_bucket.nfcisbenchmark_cloudtrail.id
enable_logging = var.enable_logging
# 3.2 Ensure CloudTrail log file validation is enabled (Automated)
enable_log_file_validation = var.enable_log_file_validation
# 3.1 Ensure CloudTrail is enabled in all regions (Automated)
is_multi_region_trail = var.is_multi_region_trail
include_global_service_events = var.include_global_service_events
is_organization_trail = "${local.environments[terraform.workspace] == "origin"? true : var.is_organization_trail}"
# 3.7 Ensure CloudTrail logs are encrypted at rest using KMS CMKs (Automated)
kms_key_id = aws_kms_key.nfcisbenchmark.arn
depends_on = [aws_s3_bucket.nfcisbenchmark_cloudtrail]
cloud_watch_logs_role_arn = aws_iam_role.cloudwatch.arn
cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.nfcisbenchmark.arn}:*"
event_selector {
# 3.11 Ensure that Object-level logging for read events is enabled for S3 bucket (Automated)
read_write_type = "All"
include_management_events = true
}
}
I get:
Error: Error creating CloudTrail: InvalidCloudWatchLogsLogGroupArnException: Access denied. Check the permissions for your role.
Any help with this issue would be greatly appreciated.

Version 3.0.0 of the AWS provider included a breaking change to the aws_cloudwatch_log_group resource's ARN output: the :* suffix that earlier versions returned is now stripped. You now have to add it explicitly wherever the AWS API expects the :* suffix. All of the documentation was updated to follow this pattern as well, which is why you see this in the aws_cloudtrail resource documentation:
resource "aws_cloudwatch_log_group" "example" {
name = "Example"
}
resource "aws_cloudtrail" "example" {
# ... other configuration ...
cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.example.arn}:*" # CloudTrail requires the Log Stream wildcard
}
For you, though, on v2.6.0, the ARN already includes this :*, so you don't need to add it a second time: with the extra interpolation your configuration sends an ARN ending in :*:*, which CloudTrail rejects. On v2.x you instead need to remember to strip the :* suffix on resources where the AWS API doesn't want it (by the looks of this issue, the aws_datasync_task resource is one of those).
Alternatively, you could upgrade your AWS provider to v3.0.0 or later and keep the suffix in place, which will also help you avoid a lot of other potential issues in the future.
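Concretely, with the resource names from your question, the argument would look like this for each provider line (a sketch; only one of the two belongs in the resource at a time):
# AWS provider v2.x: the log group ARN already ends in :*, so pass it through unchanged
cloud_watch_logs_group_arn = aws_cloudwatch_log_group.nfcisbenchmark.arn

# AWS provider >= v3.0.0: the suffix must be appended explicitly
cloud_watch_logs_group_arn = "${aws_cloudwatch_log_group.nfcisbenchmark.arn}:*"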


Terraform Provider issue: registry.terraform.io/hashicorp/s3

I currently have code that I have been using for quite some time that calls a custom S3 module. Today I tried to run the same code and started getting an error regarding the provider.
Error: Failed to query available provider packages

Could not retrieve the list of available versions for provider
hashicorp/s3: provider registry registry.terraform.io does not have a
provider named registry.terraform.io/hashicorp/s3

All modules should specify their required_providers so that external
consumers will get the correct providers when using a module. To see
which modules are currently depending on hashicorp/s3, run the following
command:
    terraform providers
Doing some digging, it seems that Terraform is looking for a provider named registry.terraform.io/hashicorp/s3, which doesn't exist.
So far, I have tried the following things:
Validated that the S3 resource code meets the standards of the 4.x upgrade HashiCorp did this year; besides, I have been using it for a couple of months with no issues.
Deleted the .terraform directory and reran terraform init (no success, same error).
Deleted the .terraform directory and the .terraform.lock.hcl file and ran terraform init -upgrade (no success).
Tried to update my providers file to force an upgrade (no success).
Tried to change the provider constraint to >= the current version to pull the latest version, with no success.
Reading further, this appears to point to a caching problem with the Terraform providers. I tried to run terraform providers lock and received this error:
Error: Could not retrieve providers for locking

Terraform failed to fetch the requested providers for darwin_amd64 in
order to calculate their checksums: some providers could not be installed:
- registry.terraform.io/hashicorp/s3: provider registry
  registry.terraform.io does not have a provider named
  registry.terraform.io/hashicorp/s3.
I'm at my wits' end as to what could be wrong. Below is a copy of my version.tf, which I renamed from providers.tf based on another post I was following:
version.tf
# Configure the AWS Provider
provider "aws" {
  region            = "us-east-1"
  use_fips_endpoint = true
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
    local = {
      source  = "hashicorp/local"
      version = "~> 2.2.1"
    }
  }
  required_version = ">= 1.2.0" # required Terraform version
}
S3 Module
I did not include locals, outputs, or variables, but let me know if you need to see them. As I said before, the module was running correctly until today. Hopefully this is all you need for the provider issue; let me know if other files are needed.
resource "aws_s3_bucket" "buckets" {
count = length(var.bucket_names)
bucket = lower(replace(replace("${var.bucket_names[count.index]}-s3", " ", "-"), "_", "-"))
force_destroy = var.bucket_destroy
tags = local.all_tags
}
# Set Public Access Block for each bucket
resource "aws_s3_bucket_public_access_block" "bucket_public_access_block" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
block_public_acls = var.bucket_block_public_acls
ignore_public_acls = var.bucket_ignore_public_acls
block_public_policy = var.bucket_block_public_policy
restrict_public_buckets = var.bucket_restrict_public_buckets
}
resource "aws_s3_bucket_acl" "bucket_acl" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
acl = var.bucket_acl
}
resource "aws_s3_bucket_versioning" "bucket_versioning" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
versioning_configuration {
status = "Enabled"
}
}
resource "aws_s3_bucket_lifecycle_configuration" "bucket_lifecycle_rule" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
rule {
id = "${var.bucket_names[count.index]}-lifecycle-${count.index}"
status = "Enabled"
expiration {
days = var.bucket_backup_expiration_days
}
transition {
days = var.bucket_backup_days
storage_class = "GLACIER"
}
}
}
# AWS KMS Key Server Encryption
resource "aws_s3_bucket_server_side_encryption_configuration" "bucket_encryption" {
count = length(var.bucket_names)
bucket = aws_s3_bucket.buckets[count.index].id
rule {
apply_server_side_encryption_by_default {
kms_master_key_id = aws_kms_key.bucket_key[count.index].arn
sse_algorithm = var.bucket_sse
}
}
}
Looking for any other ideas I can use to fix this issue. Thank you!
Although you haven't included it in your question, I'm guessing that somewhere else in this Terraform module you have a block like this:
resource "s3_bucket" "example" {
}
For backward compatibility with modules written for older versions of Terraform, terraform init has some heuristics to guess what provider was intended whenever it encounters a resource that doesn't belong to one of the providers in the module's required_providers block. By default, a resource "belongs to" a provider by matching the prefix of its resource type name -- s3 in this case -- to the local names chosen in the required_providers block.
Given a resource block like the above, terraform init would notice that required_providers doesn't have an entry s3 = { ... } and so will guess that this is an older module trying to use a hypothetical legacy official provider called "s3" (which would now be called hashicorp/s3, because official providers always belong to the hashicorp/ namespace).
The correct name for this resource type is aws_s3_bucket, and so it's important to include the aws_ prefix when you declare a resource of this type:
resource "aws_s3_bucket" "example" {
}
This resource is now by default associated with the provider local name "aws", which does match one of the entries in your required_providers block and so terraform init will see that you intend to use hashicorp/aws to handle this resource.
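As the error message also hints, a shared module can declare its own required_providers so the inference heuristic never comes into play for its consumers. A minimal sketch of such a declaration inside the S3 module (the version constraint is illustrative):
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.9.0"
    }
  }
}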
My colleague and I finally found the problem. It turns out we had a data call to the S3 bucket. Nothing was wrong with the module, but the place where I was calling the module had a data source in local.tf that referenced S3 in a legacy format; see the change below:
WAS
data "s3_bucket" "MyResource" {}
TO
data "aws_s3_bucket" "MyResource" {}
Appreciate the responses from everyone. A resource type was the root of the problem, but I forgot that data sources use the same type-name prefix rule and also needed to be checked.

Is there a way to configure date-partitioned folders for AWS DMS endpoint target S3?

I'm using Terraform to configure a DMS migration task that migrates (full-load + CDC) data from a MySQL instance to an S3 bucket.
The problem is that the configuration does not seem to take effect and no partition folder is created; all the migrated files land in the same directory within the bucket.
According to the documentation, the S3 endpoint setting DatePartitionEnabled, introduced in version 3.4.2, is supported both for CDC and for full-load + CDC.
My terraform configuration spec:
resource "aws_dms_endpoint" "example" {
endpoint_id = "example"
endpoint_type = "target"
engine_name = "s3"
s3_settings {
bucket_name = "example"
bucket_folder = "example-folder"
compression_type = "GZIP"
data_format = "parquet"
parquet_version = "parquet-2-0"
service_access_role_arn = var.service_access_role_arn
date_partition_enabled = true
}
tags = {
Name = "example"
}
}
But in the target S3 bucket I get no folders, only sequential files, as if this option weren't there:
LOAD00000001.parquet
LOAD00000002.parquet
...
I'm using terraform 1.0.7, aws provider 3.66.0 and a DMS Replication Instance 3.4.6.
Does anyone know what the issue could be?

Terraform throws Error setting IAM policy for service account ... Permission iam.serviceAccounts.setIamPolicy is required

I am trying to create a very simple structure on GCP using Terraform: a compute instance plus a storage bucket. I did some research across the GCP documentation, the Terraform documentation, and SO questions, and still can't understand what the trick is here. There is one suggestion to use google_project_iam_binding, but reading through some articles it seems to be dangerous (read: an insecure solution). There's also a general answer with only GCP descriptions, not using Terraform terms, which is still a bit confusing. And, concluding from the similar question here, I confirm that the domain name ownership was verified via the Google Console.
So, I ended up with the following:
data "google_iam_policy" "admin" {
binding {
role = "roles/iam.serviceAccountUser"
members = [
"user:myemail#domain.name",
"serviceAccount:${google_service_account.serviceaccount.email}",
]
}
}
resource "google_service_account" "serviceaccount" {
account_id = "sa-1"
}
resource "google_service_account_iam_policy" "admin-acc-iam" {
service_account_id = google_service_account.serviceaccount.name
policy_data = data.google_iam_policy.admin.policy_data
}
resource "google_storage_bucket_iam_policy" "policy" {
bucket = google_storage_bucket.storage_bucket.name
policy_data = data.google_iam_policy.admin.policy_data
}
resource "google_compute_network" "vpc_network" {
name = "vpc-network"
auto_create_subnetworks = "true"
}
resource "google_compute_instance" "instance_1" {
name = "instance-1"
machine_type = "f1-micro"
boot_disk {
initialize_params {
image = "cos-cloud/cos-stable"
}
}
network_interface {
network = google_compute_network.vpc_network.self_link
access_config {
}
}
}
resource "google_storage_bucket" "storage_bucket" {
name = "bucket-1"
location = "US"
force_destroy = true
website {
main_page_suffix = "index.html"
not_found_page = "404.html"
}
cors {
origin = ["http://the.domain.name"]
method = ["GET", "HEAD", "PUT", "POST", "DELETE"]
response_header = ["*"]
max_age_seconds = 3600
}
}
but when I run terraform apply, the logs show me an error like this:
Error: Error setting IAM policy for service account 'trololo': googleapi: Error 403: Permission iam.serviceAccounts.setIamPolicy is required to perform this operation on service account trololo., forbidden

  on main.tf line 35, in resource "google_service_account_iam_policy" "admin-acc-iam":
  35: resource "google_service_account_iam_policy" "admin-acc-iam" {

Error: googleapi: Error 403: The bucket you tried to create is a domain name owned by another user., forbidden

  on main.tf line 82, in resource "google_storage_bucket" "storage_bucket":
and some useless debug info. What's wrong? Which account is missing which permissions, and how do I assign them securely?
I found the problem. As always, in 90% of cases, the issue is sitting in front of the computer.
Here are the steps that helped me to understand and to resolve the problem:
I read a few more articles, and especially this and this answer were very helpful for understanding the relations between users, service accounts, and permissions
I understood that running terraform destroy is also very important, since there is no rollback of an unsuccessful deploy of new infrastructure changes (unlike DB migrations, for example) - you have to clean up either with destroy or manually
completely removed the "user:${var.admin_email}" member from the IAM policy, since it is useless; everything has to be managed by the newly created service account
left the main service account with most permissions untouched (the one which was created manually and whose access key I downloaded), since Terraform uses its credentials
and changed the IAM policy for the new service account to roles/iam.serviceAccountAdmin instead of roles/iam.serviceAccountUser - thanks @Wojtek_B for the hint
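For illustration, after those steps the policy data from the question might look roughly like this (a sketch based on the steps above, not necessarily the exact final configuration):
data "google_iam_policy" "admin" {
  binding {
    role = "roles/iam.serviceAccountAdmin"
    members = [
      # the user:... member was removed; the service account manages everything
      "serviceAccount:${google_service_account.serviceaccount.email}",
    ]
  }
}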
After this everything works smoothly!

Terraform - AWS IAM user with Programmatic access

I'm working with AWS via Terraform.
I'm trying to create an IAM user with an access type of "Programmatic access".
With the AWS Management Console this is quite simple: you just tick the "Programmatic access" checkbox when creating the user.
When trying with Terraform (reference to docs) it seems that only the following arguments are supported:
name
path
permissions_boundary
force_destroy
tags
Maybe this should be configured via a policy?
Any help will be appreciated.
(*) Related question with different scenario.
You can use the aws_iam_access_key Terraform resource (https://www.terraform.io/docs/providers/aws/r/iam_access_key.html) to create access keys for the user; having an access key is what gives a user programmatic access.
Hope this helps.
The aws_iam_user resource needs to also have an aws_iam_access_key resource created for it.
The iam-user module has a comprehensive example of using it.
You could also use that module straight from the registry and let that do everything for you.
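For example, a call to the registry module could look roughly like this (a sketch; the input names here are assumptions to verify against the module's documentation, since they can differ between module versions):
module "iam_user" {
  source = "terraform-aws-modules/iam/aws//modules/iam-user"

  name = "example-user"

  # Assumed inputs: emit an access key pair, skip the console login profile
  create_iam_access_key         = true
  create_iam_user_login_profile = false
}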
If you don't want to encrypt the secret and just want the access key and secret key in plain text, you can use this:
main.tf
resource "aws_iam_access_key" "sagemaker" {
user = aws_iam_user.user.name
}
resource "aws_iam_user" "user" {
name = "user-name"
path = "/"
}
data "aws_iam_policy" "sagemaker_policy" {
arn = "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess"
}
resource "aws_iam_policy_attachment" "attach-policy" {
name = "sagemaker-policy-attachment"
users = [aws_iam_user.user.name]
policy_arn = data.aws_iam_policy.sagemaker_policy.arn
}
output.tf
output "secret_key" {
value = aws_iam_access_key.user.secret
}
output "access_key" {
value = aws_iam_access_key.user.id
}
You will get the access key and secret key in plain text and can use them directly.

Terraform - aws_config_config_rule - setting event_source to specific ResourceType

I am using Terraform to configure AWS Config custom rules. In the custom rule config I want to limit the event 'Resource' to 'CloudTrail:Trail', but the only valid value I can find is the default value of 'aws.config'.
Is this the only valid 'Resource' you can specify in a Terraform-built AWS Config custom rule?
resource "aws_config_config_rule" "custom_rule_01" {
name = "CUSTOM_CloudTrail_EnableLogFileValidation"
description = "Some Description"
source {
owner = "CUSTOM_LAMBDA"
source_identifier = "${aws_lambda_function.lambda_01.arn}"
source_detail {
event_source = "**aws.config**"
message_type = "ConfigurationItemChangeNotification"
}
}
}
Appreciate any guidance.
https://www.terraform.io/docs/providers/aws/r/config_config_rule.html#source-1
event_source - (Optional) The source of the event, such as an AWS service, that triggers AWS Config to evaluate your AWS resources. This defaults to aws.config and is the only valid value.
What you are looking for is resourceType:
http://docs.aws.amazon.com/config/latest/APIReference/API_ResourceIdentifier.html#config-Type-ResourceIdentifier-resourceType
which has the type AWS::CloudTrail::Trail.
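In Terraform that corresponds to the scope block on aws_config_config_rule rather than source_detail; a minimal sketch reusing the names from the question:
resource "aws_config_config_rule" "custom_rule_01" {
  name        = "CUSTOM_CloudTrail_EnableLogFileValidation"
  description = "Some Description"

  # Limit evaluation to CloudTrail trails instead of every resource type
  scope {
    compliance_resource_types = ["AWS::CloudTrail::Trail"]
  }

  source {
    owner             = "CUSTOM_LAMBDA"
    source_identifier = aws_lambda_function.lambda_01.arn

    source_detail {
      event_source = "aws.config"
      message_type = "ConfigurationItemChangeNotification"
    }
  }
}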