I have an AWS Terraform repo containing the architecture for an AWS solution.
Over time, people have gone into the management console and made changes to the architecture without changing the Terraform code, causing drift between the repo and the actual architecture on AWS.
Is there a way I can detect the drift and update my main.tf file to match the new architecture? I know you can use terraform apply -refresh to update the state file, but does this affect the main.tf file as well? Does anyone have a solution for a problem like this so that all my files are updated correctly? Thanks!
does this affect the main.tf file as well
Sadly no. main.tf is not affected.
Does anyone have a solution for a problem like this so that all my files are updated correctly?
Such a solution does not exist unless you develop your own. You have to manually update your main.tf to match the state of your resources.
However, a bit of help can come from former2, which can scan your existing resources and produce Terraform code.
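For resources that were created in the console and don't exist in your code at all, terraform import can at least attach them to your state file; you still have to write the resource block by hand. A rough sketch, where the resource address and ID are made up:
# 1. Add a matching resource block to main.tf first, e.g.
#      resource "aws_security_group" "console_sg" { ... }
# 2. Attach the real object to that address in the state:
terraform import aws_security_group.console_sg sg-0123456789abcdef0
# 3. A follow-up plan shows which arguments still need to be filled in by hand:
terraform plan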
Terraform's work of evaluating the given configuration to determine the desired state is inherently lossy. The desired state used to produce a plan, and the updated state obtained by applying that plan, include only the final values resulting from evaluating any expressions, and it isn't possible in general to reverse updated values back to updated expressions that would produce those values.
For example, imagine that you have an argument like this:
foo = sha1("hello")
This produces a SHA-1 checksum of the string "hello". If someone changes the checksum in the remote system, Terraform can see that the checksum no longer matches but it cannot feasibly determine what new string must be provided to sha1 to produce that new checksum. This is an extreme example using an inherently irreversible function, but this general problem applies to any argument whose definition is more than just a literal value.
Instead, terraform plan -refresh-only will show you the difference between the previous run result and the refreshed state, so you can see how the final results for each argument have changed. You'll need to manually update your configuration so that it will somehow produce a value that matches that result, which is sometimes as simple as just copying the value literally into your configuration but is often more complicated because arguments in a resource block can be derived from data elsewhere in your module and transformed arbitrarily using Terraform functions.
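A rough sketch of that workflow with the CLI, assuming Terraform v0.15.4 or later where the -refresh-only planning mode is available:
# 1. See what changed outside of Terraform, without proposing any remediation.
terraform plan -refresh-only
# 2. If those changes are intentional, accept them into the state file.
terraform apply -refresh-only
# 3. A normal plan now shows the gap between your configuration and reality;
#    edit main.tf (copying values or adjusting expressions) until it reports "No changes".
terraform plan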
Related
I have the following architecture
I followed this link to a T: https://github.com/aws/amazon-sagemaker-examples/blob/main/step-functions-data-science-sdk/automate_model_retraining_workflow/automate_model_retraining_workflow.ipynb. I am not sure how to debug this to see what is going wrong. Any suggestions would be appreciated.
To provide more context, this is a machine learning deployment project. What I am doing in the picture is chaining processes together. The "Query Training Results" part is a Lambda function that pulls the training metrics data from an S3 location. For some reason this part gets cancelled.
From what I found online (Why would a step function cancels itself when there are no errors), “this happens in step functions when you have a Choice state, and the Variable you are referencing is not actually in the state input.” There are also some answers in that post suggesting that the dictionary metrics need to be of string type, and I made sure to cast them as such.
The problem I am having is that when you click on that grey box it provides no information other than the fact that it was cancelled, so I have no clue what is going wrong.
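Not an answer to the root cause, but one way to get more detail than the console's grey box is to pull the raw execution history with the AWS CLI and look at the events just before the cancellation (the execution ARN below is a placeholder):
# List the most recent events first; the *StateEntered / ExecutionAborted events
# show the exact input the Choice state received, so you can check whether the
# variable it references is actually present (and is a string).
aws stepfunctions get-execution-history \
  --execution-arn arn:aws:states:us-east-1:123456789012:execution:MyStateMachine:example-run \
  --reverse-order \
  --max-items 25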
I am stuck on old code written by another developer two years ago that I can't get to pass terraform validate. I tried validating with older Terraform versions as well: 0.11, 0.13, 0.15, and 1.0.
variable "aws_name" {
default = "${aws:name}"
}
I am confused by the syntax; it looks like the author is trying to reference a variable from another variable in Terraform, but I don't think Terraform has ever supported this feature.
I mean it has not been supported this way in any version, from old releases such as Terraform 0.6 through the current 1.0.x.
If the code uses ${var.xxxx}, I think it was created before Terraform 0.12, because after that we don't need "${ }" to reference a variable; we can reference it directly as var.aws_name.
Second, we can't reference a variable as "aws:name" without "var." in front of it, and a colon is not valid in a Terraform identifier either.
Has anyone seen this syntax in Terraform? Is it valid in some Terraform version?
Update
As @Matt Schuchard mentioned, the Azure pipeline task replacetokens@4 does support other styles for the replacement (the fourth one).
it looks like the author is trying to reference a variable from another variable in Terraform, but I don't think Terraform supports this feature
That's correct, you can't do this. The only explanation I can think of is that this Terraform code was part of some CI/CD pipeline, so maybe before the actual TF scripts were run they were pre-processed by an external tool that did a simple find-and-replace of the ${aws:name} string with valid values.
One possibility could be Before Hooks in terragrunt.
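For illustration only, a pre-processing step along these lines (the injected value and the file name are assumptions) would turn that placeholder into valid Terraform before terraform validate or plan ever parse the file:
# Value injected by the CI system (pipeline variable, secret, etc.).
AWS_NAME="some-account-name"
# Plain find-and-replace of the ${aws:name} placeholder, then validate as usual.
sed -i.bak "s/\${aws:name}/${AWS_NAME}/g" variables.tf
terraform init -backend=false -input=false
terraform validate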
efx/
  ...
  aws_account/
    nonprod/
      account-variables.tf
      dev/
        account-variables.tf
        common.tf
        app1.tf
        app2.tf
        app3.tf
        ...
  modules/
    tf_efxstack_app1
    tf_efxstack_app2
    tf_efxstack_app3
    ...
In a given environment (dev in the example above), we have multiple modules (app1, app2, app3, etc.) which are based on individual applications we are running in the infrastructure.
I am trying to update the state of one module at a time (e.g. app1.tf). I am not sure how I can do this.
Use Case: I would like only one module's launch configuration (LC) to be updated to use the latest AMI or security group.
I tried the -target option in Terraform, but this does not seem to work because it does not check the Terraform remote state file.
terraform plan -target=app1.tf
terraform apply -target=app1.tf
Therefore, no changes take place. I believe this is a bug in Terraform.
Any ideas how I can accomplish this?
Terraform's -target should be for exceptional use cases only, and you should really know what you're doing when you use it. If you genuinely need to regularly target different parts at a time, then you should separate your applications into different directories so you can easily apply a whole directory at a time.
This might mean you need to use data sources or rethink the structure of things a bit more, but it also means you limit the blast radius of any single Terraform action, which is always useful.
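One extra note, hedged because I can't see the actual resource names: -target takes a resource or module address, not a file name, which is why targeting app1.tf doesn't do what you expect. Assuming app1.tf instantiates a module named app1, the addresses would look more like this:
# Target everything declared by the module instance named "app1"...
terraform plan  -target=module.app1
terraform apply -target=module.app1
# ...or just one resource inside it, e.g. a launch configuration
# (the resource type and name here are placeholders):
terraform apply -target=module.app1.aws_launch_configuration.this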
My DynamoDB table is quite large and I don't particularly want to dump the whole thing. There is one column that I want to test on, so I would like a dump of all of its values that I could have locally to code/test with. However, I am not finding anything that lets me do this.
I found RazorSQL and it semi-worked (in the sense that it let me pull down just one column of information from the table, but it clearly didn't pull down all the data).
I also found a Data Pipeline Template on AWS but from what I can tell this will dump the entire table. I am relatively new to AWS so it's possible I'm not understanding something about pipelines properly.
I'm okay with writing to S3, because I can pull down all the data from there, but anything that gets it to my local machine is fine by me.
Thanks for the help!
UPDATE: This tutorial looks promising, but I want to achieve this effect in a non-interactive way.
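In case it's useful, a non-interactive sketch with the AWS CLI; the table and attribute names are placeholders, and note that a scan still reads every item server-side even though only one attribute comes back:
# Dump just the "my_column" attribute of every item to a local JSON file.
# The #c alias guards against the attribute name being a DynamoDB reserved word.
aws dynamodb scan \
  --table-name MyTable \
  --projection-expression "#c" \
  --expression-attribute-names '{"#c": "my_column"}' \
  --output json > my_column_dump.json
# For very large tables you may need to page explicitly with --starting-token / --max-items.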
What I'm trying to achieve is the following:
I have multiple dependent configurations for a single, logical build. The very first configuration runs a script that does a bit of work and returns a value. You can think of this configuration as the setup step. I need to be able to store this value and use it in subsequent steps. All dependent configurations for a single build should receive the same value.
Setup() computes a value x. I then have configurations B(x) and A(x) that run after Setup() and need to be fed the calculated value x.
Previously, I've managed to do something similar for things that are calculated as part of the TeamCity configuration. E.g. I generated a unique build id for the entire build chain and was able to access it via %dep.{team_city_configuration_id}.system.build.number%.
This time, the value I need to propagate is calculated in the guts of a build script and not as part of the TeamCity plumbing. I've managed to wrap the setup script in question and grep out the value I need, but I don't know how to propagate it between configurations.
Is this even possible, or am I barking up the wrong tree? If I cannot do this in a non-insane way, is there a better alternative I'm missing?
Thanks
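For anyone who finds this later, one approach that seems to fit (sketched below; the script and parameter names are made up) is to have the setup build publish the value as a parameter via a TeamCity service message, then read it in the dependent configurations through a dep. reference, the same way as the build-number trick above:
# In the Setup configuration's build step: compute x and publish it as a
# build parameter by writing a TeamCity service message to stdout.
x=$(./compute_setup_value.sh)     # hypothetical script that prints the value
echo "##teamcity[setParameter name='env.SETUP_VALUE' value='${x}']"
# (values containing |, ', [ or ] need TeamCity's escaping rules)
# Configurations A and B take a snapshot dependency on Setup and can then
# reference the value as %dep.Setup_ConfigId.env.SETUP_VALUE%.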
Can a mod close this, please? It's a dupe. My colleague found this, which does exactly what we wanted.