I am new to Terraform.
I created a security group in AWS with some ingress rules using Terraform. Someone then added a new ingress rule for port 5429 using the console. I wanted to bring this change into Terraform, so I used the command below:
terraform apply -refresh-only
Now I can see port 5429 open in the Terraform state file. But when I ran terraform apply, the change was gone from both the console and the state file.
I want the change to persist in the console as well as in Terraform. Please suggest.
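A refresh-only apply only updates the state to match what exists in AWS; the configuration remains the source of truth, so the next plain apply removes anything the configuration does not declare. To keep the rule, it also has to be added to the configuration. A minimal sketch, assuming a security group resource named "example" (the resource name, description, and CIDR below are illustrative, not from the question):

resource "aws_security_group" "example" {
  # ... existing attributes and ingress rules ...

  ingress {
    description = "Rule added via the console" # illustrative
    from_port   = 5429
    to_port     = 5429
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"] # illustrative; match the rule created in the console
  }
}

Once the block matches the rule in AWS, terraform plan should report no changes for it.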
I'd like to enable the AWS EC2 instance setting "Access to tags in instance metadata" using one of my Terraform resources, aws_launch_configuration or aws_autoscaling_group.
I have tried to use the metadata_options argument of the aws_launch_configuration resource, but it did not work.
In addition, I found this GitHub issue: aws_launch_configuration add support for Instance Metadata Options #14621
How can I solve this issue?
Maybe someone could help.
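One possible route, an assumption on my part rather than something confirmed in the thread: the instance_metadata_tags setting is exposed through the metadata_options block of aws_launch_template, so migrating from aws_launch_configuration to a launch template may solve this. A sketch with illustrative names and values, assuming an AWS provider version recent enough to support instance_metadata_tags:

resource "aws_launch_template" "example" {
  name_prefix   = "example-"
  image_id      = "ami-12345678" # illustrative AMI ID
  instance_type = "t3.micro"     # illustrative

  metadata_options {
    http_endpoint          = "enabled"
    instance_metadata_tags = "enabled" # "Access to tags in instance metadata"
  }
}

resource "aws_autoscaling_group" "example" {
  availability_zones = ["us-east-1a"] # illustrative
  min_size           = 1
  max_size           = 1

  launch_template {
    id      = aws_launch_template.example.id
    version = "$Latest"
  }
}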
We are creating an AWS EKS cluster on our project using Terraform.
I'm working on security groups. I created two security groups, and one more is created by EKS itself.
The problem is that this EKS-created security group violates the company's security policy. I need to change the inbound and outbound rules for it. All of this needs to be done using Terraform (or maybe there is another workaround), but everything needs to happen automatically.
I was able to get this security group's ID as an output, but had no luck when I tried to use that ID to create a rule, and I currently have no idea how to delete the existing rules.
Sorry if I have asked something stupid; I'm new to this. I hope you can give some advice.
I'm doing the same, and my workaround is to import the existing SG rule and then modify it. It's not nice, because your configuration drifts, but maybe somebody has an idea for using/updating the original state file. So:
Deploy the EKS cluster (I'm not pasting the code here, but I'm using the default AWS module).
Prepare another module with a security group rule as I wish (the aws_eks_cluster data source it references is sketched after these steps):
resource "aws_security_group_rule" "egress" {
type= "egress"
protocol = -1
from_port = 0
to_port = 0
source_security_group_id = data.aws_eks_cluster.delta-cluster.vpc_config[0].cluster_security_group_id
security_group_id = data.aws_eks_cluster.delta-cluster.vpc_config[0].cluster_security_group_id
}
Do a terraform import into that other module (note: you have to change the SG ID to the appropriate one, but just that; the rest is AWS's magic):
terraform import aws_security_group_rule.egress sg-004582110c1572053_egress_all_0_65535_0.0.0.0/0
terraform apply
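The rule in step 2 references an aws_eks_cluster data source that the answer doesn't show. A minimal sketch of what it might look like, assuming the cluster is called delta-cluster; the output is just a convenience for finding the sg-... ID needed to build the import address:

data "aws_eks_cluster" "delta-cluster" {
  name = "delta-cluster" # illustrative; use your cluster's name
}

output "cluster_security_group_id" {
  value = data.aws_eks_cluster.delta-cluster.vpc_config[0].cluster_security_group_id
}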
I am looking for a way to provision an instance with a configuration file that contains the endpoints to connect to a database cluster, in an automatic way, using Terraform. I am using an aws_rds_cluster resource, from which I can get the endpoint using the expression aws_rds_cluster.my-cluster.endpoint. Then, I would like to provision machines instantiated with an aws_instance resource so that the value of that expression is stored in the file /DBConfig.sh.
The content of the DBConfig.sh file would look like this:
#!/bin/bash
ENDPOINT=<$aws_rds_cluster.my-cluster.endpoint$>
READER_ENDPOINT=<$aws_rds_cluster.my-cluster.reader_endpoint$>
Truth be told, once I successfully reach that point, I'd like to be able to do the same thing for machines created by an aws_launch_configuration resource.
Is this something that can be done with Terraform? If not, what other tools can I use to achieve this kind of automation? Thanks for your help!
There are a few ways to achieve that. I think all of them would involve user_data.
For example, you could have an aws_instance with user_data as follows:
resource "aws_instance" "web" {
# other atrributes
user_data = <<-EOL
#!/bin/bash
cat >./DBConfig.sh <<-EOL2
#!/bin/bash
ENDPOINT=${aws_rds_cluster.my-cluster.endpoint}
READER_ENDPOINT=${aws_rds_cluster.my-cluster.reader_endpoint}
EOL2
chmod +x ./DBConfig.sh
EOL
}
The above will launch an instance which will have DBConfig.sh, with the resolved values of the endpoints, in its root (/) directory.
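Another option, not from the original answer and only available on Terraform 0.12+, is to keep the script in a separate template file and render it with the built-in templatefile() function. A sketch; the template file name and its variable names are illustrative:

resource "aws_instance" "web" {
  # ... other attributes ...

  # dbconfig.sh.tpl would contain the script with ${endpoint} and
  # ${reader_endpoint} placeholders instead of the literal values
  user_data = templatefile("${path.module}/dbconfig.sh.tpl", {
    endpoint        = aws_rds_cluster.my-cluster.endpoint
    reader_endpoint = aws_rds_cluster.my-cluster.reader_endpoint
  })
}

The same user_data approach carries over to aws_launch_configuration, which also accepts a user_data argument.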
I'm creating a flow log for a VPC that sends the logs to a CloudWatch log group. I'm using the exact same code as the CloudWatch Logging section of this link: https://www.terraform.io/docs/providers/aws/r/flow_log.html, just changing the vpc_id to my VPC's ID.
The flow log gets created, but after around 15 minutes the status changes from "Active" to "Access error: The log destination is not accessible."
1) It isn't a policy issue: when I do the same from the console using the same IAM role that Terraform created, it works perfectly fine.
2) I tried entering the ARN of an already existing CloudWatch log group rather than creating one from the Terraform code, but that isn't working either.
Please let me know where I'm going wrong.
To fix this, look at my example:
resource "aws_flow_log" "management-vpc-flow-log-reject" {
log_destination = "arn:aws:logs:ap-southeast-2:XXXXXXXXXXX:log-group:REJECT-TRAFFIC-VPC-SHARED-SERVICES"
iam_role_arn = "${aws_iam_role.management-flow-log-role.arn}"
vpc_id = "${aws_vpc.management.id}"
traffic_type = "REJECT"
}
The error is in the log_destination: Terraform adds a ":*" to the end of the ARN. I tested this by manually creating the log group in the AWS console, importing it into Terraform, and then running terraform state show to compare the two.
My log groups and streams are now working.
So it turned out to be a bug in the Terraform AWS provider. It seems the issue https://github.com/terraform-providers/terraform-provider-aws/issues/6373 will be resolved in the next version of the AWS provider, 1.43.0.
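Until the fixed provider version lands, one workaround (my suggestion, not from the original answer) is to strip the ":*" suffix from the log group ARN with the replace() interpolation function; the log group resource name here is illustrative:

resource "aws_cloudwatch_log_group" "flow_log" {
  name = "REJECT-TRAFFIC-VPC-SHARED-SERVICES"
}

resource "aws_flow_log" "management-vpc-flow-log-reject" {
  # strip the ":*" suffix the provider appends to the log group ARN
  log_destination = "${replace(aws_cloudwatch_log_group.flow_log.arn, ":*", "")}"
  iam_role_arn    = "${aws_iam_role.management-flow-log-role.arn}"
  vpc_id          = "${aws_vpc.management.id}"
  traffic_type    = "REJECT"
}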
Is there a way to export the Google Cloud configuration for an object, such as a load balancer, in the same form one would use to set it up via the API?
I can quickly configure what I need in the console, but I am spending tons of time trying to replicate that with Terraform. It would be great if I could generate Terraform files, or at least the Google API output, from the system I have already configured.
If you have something already created outside of Terraform and want to have Terraform manage it or want to work out how to best configure it with Terraform you could use Terraform's import command for any resource that supports it.
So if you have created a forwarding rule called terraform-test via the Google Cloud console and want to know how that maps to Terraform's google_compute_forwarding_rule resource then you could run terraform import google_compute_forwarding_rule.default terraform-test to import this into Terraform's state file.
If you then run a plan, Terraform will tell you that it has google_compute_forwarding_rule.default in its state but that the resource is not defined in your code, and as such it will want to remove it.
If you add the minimal config needed to make the plan work:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
}
If you run the plan again, Terraform will then tell you what it needs to change to make your imported forwarding rule look like the config you have defined. Assuming you've done something like set the description on the forwarding rule, Terraform's plan will show something like this:
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
~ update in-place
-/+ destroy and then create replacement
Terraform will perform the following actions:
~ google_compute_forwarding_rule.default
    description: "This is the description I added via the console" => ""

Plan: 0 to add, 1 to change, 0 to destroy.
This tells you that Terraform wants to remove the description on the forwarding rule to make it match the config.
If you then update your resource definition to something like:
resource "google_compute_forwarding_rule" "default" {
name = "terraform-test"
description = "This is the description I added via the console"
}
Terraform's plan will then show an empty change set:
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, no
actions need to be performed.
At this point you have now aligned your Terraform code with the reality of the resource in Google Cloud and should be able to easily see what needs to be set on the Terraform side to make things happen as expected in the Google Cloud console.
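As a practical aid (my suggestion, not part of the original answer), terraform state show prints every attribute of the imported resource, so you can copy the values straight into your configuration instead of iterating on the plan:

terraform state show google_compute_forwarding_rule.default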