I'm creating a flow log for VPC that sends the logs to a cloudwatch group. I'm using the exact same code from CloudWatch Logging section of this link: https://www.terraform.io/docs/providers/aws/r/flow_log.html and just changing the vpc_id with my VPC's id.
The flow log gets created, but after around 15 minutes its status changes from "Active" to "Access error: The log destination is not accessible."
1) It isn't a policy issue: when I do the same thing from the console, using the same IAM role that Terraform created, it works perfectly fine.
2) I also tried entering the ARN of an already existing CloudWatch log group rather than creating one from the Terraform code, but that doesn't work either.
Please let me know where I'm going wrong.
To fix this, look at my example:
resource "aws_flow_log" "management-vpc-flow-log-reject" {
  log_destination = "arn:aws:logs:ap-southeast-2:XXXXXXXXXXX:log-group:REJECT-TRAFFIC-VPC-SHARED-SERVICES"
  iam_role_arn    = "${aws_iam_role.management-flow-log-role.arn}"
  vpc_id          = "${aws_vpc.management.id}"
  traffic_type    = "REJECT"
}
The error is in the log_destination: Terraform adds a ":*" to the end of the ARN. I verified this by manually creating the log group in the AWS console, importing it into Terraform, and then running terraform state show to compare the two.
My log groups and streams are now working.
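One way to sketch that workaround in configuration, assuming the log group is managed in the same module (the resource names and the replace() call are illustrative, not necessarily the exact fix used above):

```hcl
resource "aws_cloudwatch_log_group" "flow_log" {
  name = "REJECT-TRAFFIC-VPC-SHARED-SERVICES"
}

resource "aws_flow_log" "reject" {
  # strip the ":*" suffix the provider appends to the log group ARN
  log_destination = "${replace(aws_cloudwatch_log_group.flow_log.arn, ":*", "")}"
  iam_role_arn    = "${aws_iam_role.management-flow-log-role.arn}"
  vpc_id          = "${aws_vpc.management.id}"
  traffic_type    = "REJECT"
}
```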
So it turned out to be a bug in the Terraform AWS provider. It seems issue https://github.com/terraform-providers/terraform-provider-aws/issues/6373 will be resolved in the next AWS provider release, 1.43.0.
I am receiving the following errors in the EC2 CloudWatch Agent logs, /var/logs/awslogs.log:
I verified the EC2 has a role:
And the role has the correct policies:
I have set the correct region in /etc/awslogs/awscli.conf:
I noticed that running aws configure list in the EC2 gives this:
Is this incorrect? Should it list the profile (EC2_Cloudwatch_Profile) there?
I was using terraform and reprovisioning by doing:
terraform destroy && terraform apply
It looks like, because IAM is a global service, it is "eventually consistent" rather than "immediately consistent". When the instance profile was destroyed, the terraform apply began too quickly: even though the destroy had completed, the ARN of the previous instance profile was still present and was re-used, while the ID changed to a new one.
Replacing the EC2 instance would bring it up to speed with the correct ID, but my solution is simply to wait longer between terraform destroy and terraform apply.
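One way to sketch that wait inside the configuration itself, assuming the null provider is available (aws_iam_instance_profile.example is a hypothetical resource name standing in for your instance profile):

```hcl
# Illustrative workaround: give IAM time to propagate the new
# instance profile before dependent resources start using it.
resource "null_resource" "wait_for_iam" {
  provisioner "local-exec" {
    command = "sleep 30"
  }

  depends_on = ["aws_iam_instance_profile.example"]
}
```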
I am trying to create a new AWS account within our AWS organization, but terraform plan keeps reporting no changes:
"No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed."
Am I missing something? This is the code:
resource "aws_organizations_account" "new_aws_member_account" {
  name                       = "XXX"
  email                      = "XXX@XXX"
  iam_user_access_to_billing = "ALLOW"
}
I already tried deploying a new IAM policy (within the AWS org account) and there was no problem, but I just can't create a new account using this code. I have probably missed something, but I don't know what.
Our AWS organization was created manually in the AWS console, not via Terraform, but that shouldn't be a problem, should it?
Can you help please?
We have an AWS SecretsManager Secret that was created once. That secret will be updated by an external job every hour.
I have the problem that sometimes the terraform plan/apply fails with the following message:
AWS Provider 2.48
Error: Error refreshing state: 1 error occurred:
* module.xxx.xxx: 1 error occurred:
* module.xxx.aws_secretsmanager_secret_version.xxx:
aws_secretsmanager_secret_version.xxx: error reading Secrets Manager Secret Version: InvalidRequestException: You can't perform this operation on secret version 68AEABC3-34BE-4723-8BF5-469A44F9B1D9 because it was deleted.
We've tried two solutions:
1) Force-delete the whole secret via the AWS CLI, but this has the side effect that one of our dependent resources is recreated as well (an ECS task definition depends on that secret). This works, but we do not want the ECS resource to be recreated.
2) Manually edit the backend .tfstate file to set the current AWS secret version, then run the plan again.
Both solutions seem hacky in a way. What is the best way to solve this issue?
You can use terraform import to reconcile the state difference before you run a plan or apply.
In your case, this would look like:
terraform import module.xxx.aws_secretsmanager_secret_version.xxx arn:aws:secretsmanager:some_region:some_account_id:secret:example-123456|xxxxx-xxxxxxx-xxxxxxx-xxxxx
I think the problem you are having is that, by default, AWS tries to "help you" by not letting you delete secrets immediately: it enforces a 7-day recovery window so you have time to update any "code" that may rely on the secret. This makes automation more difficult.
I have worked around this by setting the recovery window to "0 days", effectively eliminating the grace period that AWS provides.
Then you can rename or delete your secret at will, either manually (via the AWS CLI) or via Terraform.
To update an existing secret, apply recovery_window_in_days = 0 first. Once that has been applied, you can change the secret's name (if you wish to), or delete the resource, and run Terraform again.
Here is an example:
resource "aws_secretsmanager_secret" "mySecret" {
  name                    = "your secret name"
  recovery_window_in_days = "0"

  // optional: create the replacement before destroying the old secret
  lifecycle {
    create_before_destroy = true
  }
}
*Note: there is also a "create before destroy" option you can set on the lifecycle block.
https://www.terraform.io/docs/configuration/resources.html
Also, you can use the terraform resource to update the secret values like this:
This example sets the secret values once and then tells Terraform to ignore any changes made to the values (username and password in this example) after the initial creation.
If you remove the lifecycle section, Terraform will track whether the secret values themselves have changed; if they have, it will revert them to the values in the Terraform state.
Storing your tfstate files in a protected S3 bucket is safer than not doing so: the secret values are plaintext in the state file, so anyone with access to your Terraform state file can see them.
I would suggest 1) figuring out what is deleting your secrets unexpectedly, and 2) having your "external job" update the values via Terraform, using a resource as in the example below.
Hope this gives you some ideas.
resource "aws_secretsmanager_secret_version" "your-secret-data" {
  secret_id     = aws_secretsmanager_secret.your-secret.id
  secret_string = <<-EOF
    {
      "username": "usernameValue",
      "password": "passwordValue"
    }
  EOF

  // ignore any updates to the initial values above done after creation
  lifecycle {
    ignore_changes = [
      secret_string
    ]
  }
}
I have the AWS CLI installed on my Windows computer, and running this command "works" exactly like I want it to.
aws ec2 describe-images
I get the following output, which is exactly what I want to see, because although I have access to AWS through my corporation (e.g. to check code into CodeCommit), I can see in the AWS web console for EC2 that I don't have permission to list running instances:
An error occurred (UnauthorizedOperation) when calling the DescribeImages operation: You are not authorized to perform this operation.
I've put terraform.exe onto my computer as well, and I've created a file "example.tf" that contains the following:
provider "aws" {
  region = "us-east-1"
}
I'd like to issue some sort of Terraform command that would yell at me, explaining that my AWS account is not allowed to list Amazon instances.
Most Hello World examples involve using terraform plan against a resource to do an "almost-write" against AWS.
Personally, however, I always feel more comfortable knowing that things are behaving as expected with something a bit more "truly read-only." That way, I really know the round-trip to AWS worked but I didn't modify any of my corporation's state.
There's a bunch of material on the internet about "data sources" and their "aws_ami" or "aws_instances" flavors, but I can't find anything that explains how to actually use one with a Terraform command for a simple print()-style interaction (the way it's obvious that, say, "resources" go with the terraform plan and terraform apply commands).
Is there something I can do with Terraform commands to "hello world" an attempt at listing all my organization's EC2 servers and, accordingly, watching AWS tell me to buzz off because I'm not authorized?
You can use the data source for AWS instances. You create a data source similar to the below:
data "aws_instances" "test" {
  instance_tags = {
    Role = "HardWorker"
  }

  filter {
    name   = "instance.group-id"
    values = ["sg-12345678"]
  }

  instance_state_names = ["running", "stopped"]
}
This performs a read action, listing the EC2 instances matched by the filters in the config, using the IAM credentials of the user running terraform plan. If that user lacks permission, you will get exactly the authorization error you described, which is your stated goal. Adjust the filters to target your organization's EC2 instances.
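If you later run with credentials that are allowed, you can surface the result with an output block (ids is the documented attribute of the aws_instances data source):

```hcl
output "instance_ids" {
  value = "${data.aws_instances.test.ids}"
}
```

Note that terraform plan alone is enough to trigger the data source read, so with the unauthorized account the plan itself fails with the UnauthorizedOperation error.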
I'm very new to Terraform and am trying use it to replicate what I've successfully created via the AWS console.
I'm trying to specify an "SSM Run Command" as a target for a CloudWatch rule, and I can define everything using the aws_cloudwatch_event_target resource except the "Document" field. The rule target and all other associated bits and pieces are created successfully, but when I edit the rule in the console, the Document section is not filled out (screenshot below). Consequently the rule fails to fire.
[Screenshot: target as shown in console]
Looking at the Terraform documentation for aws_cloudwatch_event_target, I can't see any parameters to specify for the Document so I'm wondering if this is even possible? Which would be odd given every other parameter seems to be covered.
Below is the code I'm using to create the target - there is hard coded stuff in there but I'm just trying to get it to work at this point.
resource "aws_cloudwatch_event_target" "autogrow" {
  rule     = "autogrow"
  arn      = "arn:aws:ssm:eu-west-1:999999999999:document/AWS-RunShellScript"
  role_arn = "arn:aws:iam::999999999999:role/ec2-cloudwatch"

  run_command_targets {
    key    = "tag:InstanceIds"
    values = ["i-99999999999"]
  }

  input = <<INPUT
{
  "commands": "/data/ssmscript.sh",
  "workingDirectory" : "/data",
  "executionTimeout" : "300"
}
INPUT
}
Is it possible to do what I'm trying to do via Terraform? It does work via the console but I'm wondering if the functionality just isn't in Terraform yet? I'd expect a "Document" parameter to be able to be specified but all you can specify is "arn" for the target.
Any help would be greatly appreciated!
I had the same problem of the document not being selected correctly when the target was created via CloudFormation.
What was I doing wrong?
The ARN I had for the AWS-managed document was wrong.
When I fixed the ARN for AWS-RunShellScript, the document started showing up in the console after CloudFormation created the resource:
arn:aws:ssm:ap-southeast-2::document/AWS-RunShellScript
Most documentation I went through included an account ID in the ARN; removing the account ID solved the problem (AWS-managed documents have an empty account field in their ARN).
I think what you need to do is create one of these:
https://www.terraform.io/docs/providers/aws/r/ssm_document.html
This will create an SSM document in AWS for you; once you have that, you need to associate the document with your instances using an aws_ssm_association resource.
https://www.terraform.io/docs/providers/aws/r/ssm_association.html
Once you have the document associated with your instances the event should be able to be triggered via cloudwatch.
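A minimal sketch of wiring those two resources together; the document name, its content, and the instance ID are illustrative:

```hcl
resource "aws_ssm_document" "autogrow_script" {
  name          = "autogrow-run-script"
  document_type = "Command"

  // SSM Command document (schema 2.2) that runs a shell script
  content = <<DOC
{
  "schemaVersion": "2.2",
  "description": "Run a shell script on the target instances",
  "mainSteps": [
    {
      "action": "aws:runShellScript",
      "name": "runScript",
      "inputs": {
        "runCommand": ["/data/ssmscript.sh"]
      }
    }
  ]
}
DOC
}

resource "aws_ssm_association" "autogrow" {
  name = "${aws_ssm_document.autogrow_script.name}"

  targets {
    key    = "InstanceIds"
    values = ["i-99999999999"]
  }
}
```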
You can create your own SSM Document or you can use AWS made Documents.
To get the contents of the document owned by AWS:
data "aws_ssm_document" "aws_doc" {
  name            = "AWS-RunShellScript"
  document_format = "JSON"
}

output "content" {
  value = "${data.aws_ssm_document.aws_doc.content}"
}
Reference: https://www.terraform.io/docs/providers/aws/d/ssm_document.html