I'm stuck on old code written by another developer two years ago, and I can't get it to pass terraform validate. I tried validating with older Terraform versions as well: 0.11, 0.13, 0.15, and 1.0.
variable "aws_name" {
default = "${aws:name}"
}
I am confused by the syntax. It looks like the author was trying to reference one variable from another variable in Terraform, but I don't think Terraform has ever supported this feature.
I mean there has been no support for this in old versions, such as Terraform 0.6, all the way through the current 1.0.x.
If the code used ${var.xxxx}, I would assume it was written before Terraform 0.12, because after that we don't need "${ }" to reference a variable; we can reference it directly as var.aws_name.
Second, we can't reference a variable as "aws:name" without "var." in front of it, and the colon is not valid in a Terraform reference either.
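For reference, here is a minimal sketch of what I understand Terraform does support (the names and values below are just examples I made up): variable defaults must be literals, and deriving one value from another is done with locals, not inside a variable block.
variable "aws_name" {
  default = "my-app" # literals only; "${aws:name}" is never valid here
}

locals {
  # pre-0.12 interpolation style; in 0.12+ you can also just write var.aws_name
  full_name = "${var.aws_name}-prod"
}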
Has anyone seen this syntax before? Is it valid in some version of Terraform?
Update
As @Matt Schuchard mentioned, the Azure Pipelines task replacetokens@4 does support other token styles for the replacement (the fourth option).
trying to reference one variable from another variable in Terraform, but I don't think Terraform supports this feature
That's correct, you can't do this. The only explanation I can think of is that this Terraform code was part of some CI/CD pipeline, so before the actual TF scripts were run, they were pre-processed by an external tool which did a simple find-and-replace of the ${aws:name} string with valid values.
One possibility could be Before Hooks in terragrunt.
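For example, a minimal sketch of such a pre-processing step as a Terragrunt before hook; the hook name, target file, and replacement value here are placeholders, not something taken from your code:
# terragrunt.hcl (sketch)
terraform {
  before_hook "replace_tokens" {
    commands = ["validate", "plan", "apply"]
    # Replace the ${aws:name} placeholder with a real value before Terraform runs.
    # "$${" is how a literal "${" is written inside an HCL string.
    execute = ["bash", "-c", "sed -i 's/$${aws:name}/my-real-name/g' variables.tf"]
  }
}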
I have an AWS Terraform repo containing the architecture for an AWS solution.
Over time people have gone into the management console and made changes to the architecture without changing the Terraform code, causing drift between the repo and the actual architecture on AWS.
Is there a way I can detect the drift and update my main.tf file to match the new architecture? I know you can use terraform apply -refresh to update the state file, but does this affect the main.tf file as well? Does anyone have a solution for a problem like this so that all my files are updated correctly? Thanks!
does this affect the main.tf file as well
Sadly no. main.tf is not affected.
Does anyone have a solution for a problem like this so that all my files are updated correctly?
Such a solution does not exist unless you develop your own. You have to manually update your main.tf to match the state of your resources.
However, a bit of help can come from former2, which can scan your resources and produce Terraform code.
Terraform's work of evaluating the given configuration to determine the desired state is inherently lossy. The desired state used to produce a plan, and the updated state obtained by applying that plan, include only the final values resulting from evaluating any expressions, and it isn't possible in general to reverse updated values back to updated expressions that would produce those values.
For example, imagine that you have an argument like this:
foo = sha1("hello")
This produces a SHA-1 checksum of the string "hello". If someone changes the checksum in the remote system, Terraform can see that the checksum no longer matches but it cannot feasibly determine what new string must be provided to sha1 to produce that new checksum. This is an extreme example using an inherently irreversible function, but this general problem applies to any argument whose definition is more than just a literal value.
Instead, terraform plan -refresh-only will show you the difference between the previous run result and the refreshed state, so you can see how the final results for each argument have changed. You'll need to manually update your configuration so that it will somehow produce a value that matches that result, which is sometimes as simple as just copying the value literally into your configuration but is often more complicated because arguments in a resource block can be derived from data elsewhere in your module and transformed arbitrarily using Terraform functions.
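In practice, the workflow looks roughly like this (these are the standard commands in Terraform 0.15.4 and later; what you edit afterwards depends entirely on your configuration):
terraform plan -refresh-only    # review how the remote objects have drifted since the last run
terraform apply -refresh-only   # accept the refreshed values into the state without changing infrastructure
# Then edit main.tf by hand so its expressions produce values matching the refreshed state,
# and confirm with a normal "terraform plan" that no further changes are proposed.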
I'm updating a currently running Google Cloud Dataflow job from the v1.8 Java Dataflow SDK to the v2.4 Java Dataflow SDK. As part of that process, per the release notes for the 1.x -> 2.x move (https://cloud.google.com/dataflow/release-notes/release-notes-java-2#changed_pubsubio_api), I'm changing the function PubsubIO.Read as used below:
PCollection<String> streamData =
    pipeline
        .apply(PubsubIO.Read
            .timestampLabel(PUBSUB_TIMESTAMP_LABEL_KEY)
            .topic(options.getPubsubTopic()));
to instead be PubsubIO.readStrings() as below:
PCollection<String> streamData =
    pipeline
        .apply(PubsubIO.readStrings()
            .withTimestampAttribute(PUBSUB_TIMESTAMP_LABEL_KEY)
            .fromTopic(options.getPubsubTopic()));
This then leads me to need the transform mapping command-line argument, like so:
'--transformNameMapping={\"PubsubIO.Read\": \"PubsubIO.Read/PubsubUnboundedSource\"}'
But I get a compatibility check failure:
Workflow failed. Causes: The new job is not compatible with 2016-12-13_15_23_40-..... The original job has not been aborted., The Coder or type for step PubsubIO.Read/PubsubUnboundedSource has changed.
This confuses me a bit, as it seems like the old code was working with strings and the new code is still using strings. Can anyone help me understand what this error message is telling me? Is there perhaps a way for me to add a logging statement that will tell me which Coder I am using, so that I can run my tests with the old code and the new code and see what the difference is?
I think that the problem is that you are trying to update an existing job. As the 2.x release introduced breaking changes, streaming jobs cannot be updated. There is a warning for users upgrading from 1.x at the top of that documentation page that reads:
Update Incompatibility: The Dataflow SDK 2.x for Java is update-incompatible with Dataflow 1.x. Streaming jobs using a Dataflow 1.x SDK cannot be updated to use a Dataflow 2.x SDK. Dataflow 2.x pipelines may only be updated across versions starting with SDK version 2.0.0.
Regarding the Coder changes there is some explanation on BEAM-1415:
There's no longer a way to read/write a generic type T. Instead, there's PubsubIO.{read,write}{Strings,Protos,PubsubMessages}. Strings and protos are a very common case so they have shorthands. For everything else, use PubsubMessage and parse it yourself. In case of read, you can read them with or without attributes. This gets rid of the ugly use of Coder for decoding a message's payload (forbidden by the style guide), and since PubsubMessage is easily encodable, again the style guide also dictates to use that explicitly as the input/return type of the transforms.
In your tests you can use CoderRegistry.getCoder() to check which coder is being resolved for your element type.
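A minimal sketch of such a check with the Beam 2.x API (the class name and the String element type are only illustrations; adapt them to the types your pipeline actually uses):
import org.apache.beam.sdk.coders.CannotProvideCoderException;
import org.apache.beam.sdk.coders.Coder;
import org.apache.beam.sdk.coders.CoderRegistry;

public class CoderCheck {
  public static void main(String[] args) throws CannotProvideCoderException {
    // Ask the default registry which coder it would infer for String elements,
    // the element type used by both the old and the new pipeline.
    CoderRegistry registry = CoderRegistry.createDefault();
    Coder<String> coder = registry.getCoder(String.class);
    System.out.println("Coder for String: " + coder); // e.g. StringUtf8Coder
  }
}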
I'm upgrading a Google Cloud Dataflow job from Dataflow Java SDK 1.8 to version 2.4 and then trying to update the existing Dataflow job on Google Cloud using the --update and --transformNameMapping arguments, but I can't figure out how to properly write the transformNameMapping such that the upgrade succeeds and passes the compatibility check.
My code fails at the compatibility check with the error:
Workflow failed. Causes: The new job is not compatible with 2018-04-06_13_48_04-12999941762965935736. The original job has not been aborted., The new job is missing steps BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey, PubsubIO.readStrings. If these steps have been renamed or deleted, please specify them with the update command.
The dataflow transform names for the existing, currently running job are:
PubsubIO.Read
ParDo(ExtractJsonPath) - A custom function we wrote
ParDo(AddMetadata) - Another custom function we wrote
BigQueryIO.Write
In my new code that uses the 2.4 SDK, I've changed the 1st and 4th transforms/functions because some libraries were renamed and some of the old SDK's functions were deprecated in the new version.
You can see the specific transform code below:
The 1.8 SDK version:
PCollection<String> streamData =
    pipeline
        .apply(PubsubIO.Read
            .timestampLabel(PUBSUB_TIMESTAMP_LABEL_KEY)
            //.subscription(options.getPubsubSubscription())
            .topic(options.getPubsubTopic()));

streamData
    .apply(ParDo.of(new ExtractJsonPathFn(pathInfos)))
    .apply(ParDo.of(new AddMetadataFn()))
    .apply(BigQueryIO.Write
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
        .to(tableRef));
The 2.4 SDK version I rewrote:
PCollection<String> streamData =
    pipeline
        .apply("PubsubIO.readStrings", PubsubIO.readStrings()
            .withTimestampAttribute(PUBSUB_TIMESTAMP_LABEL_KEY)
            //.subscription(options.getPubsubSubscription())
            .fromTopic(options.getPubsubTopic()));

streamData
    .apply(ParDo.of(new ExtractJsonPathFn(pathInfos)))
    .apply(ParDo.of(new AddMetadataFn()))
    .apply("BigQueryIO.writeTableRows", BigQueryIO.writeTableRows()
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND)
        .to(tableRef));
So it seems to me like PubsubIO.Read should map to PubsubIO.readStrings and BigQueryIO.Write should map to BigQueryIO.writeTableRows. But I could be misunderstanding how this works.
I've been trying a wide variety of things. I tried to give the two transforms that I'm failing to remap explicit names, since they formerly were not explicitly named, so I updated my apply calls to .apply("PubsubIO.readStrings" and .apply("BigQueryIO.writeTableRows" and then set my transformNameMapping argument to:
--transformNameMapping={\"BigQueryIO.Write\":\"BigQueryIO.writeTableRows\",\"PubsubIO.Read\":\"PubsubIO.readStrings\"}
or
--transformNameMapping={\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\",\"PubsubIO.Read\":\"PubsubIO.readStrings\"}
or even trying to remap all the internal transforms inside the composite transform
--transformNameMapping={\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey\",\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle\",\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup\":\"BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup\",\"BigQueryIO.Write\":\"BigQueryIO.writeTableRows\",\"PubsubIO.Read\":\"PubsubIO.readStrings\"}
but I seem to get the same exact error no matter what:
The new job is missing steps BigQueryIO.writeTableRows/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey, PubsubIO.readStrings.
Am I doing something seriously wrong? Would anybody who has written a transform mapping before be willing to share the format they used? I can't find any examples online at all besides the main Google documentation on updating Dataflow jobs, which doesn't really cover anything but the simplest case, --transformNameMapping={"oldTransform1":"newTransform1","oldTransform2":"newTransform2",...}, and doesn't make the example very concrete.
It turns out there was additional information in the logs on the Google Cloud web console's Dataflow job details page that I was missing. I needed to adjust the log level from Info to show any log level, and then I found several step-fusion messages like the following (although there were far more):
2018-04-16 (13:56:28) Mapping original step BigQueryIO.Write/BigQueryIO.StreamWithDeDup/Reshuffle/GroupByKey to write/StreamingInserts/StreamingWriteTables/Reshuffle/GroupByKey in the new graph.
2018-04-16 (13:56:28) Mapping original step PubsubIO.Read to PubsubIO.Read/PubsubUnboundedSource in the new graph.
Instead of trying to map PubsubIO.Read to PubsubIO.readStrings, I needed to map to the steps mentioned in that additional logging. In this case I got past my errors by mapping PubsubIO.Read to PubsubIO.Read/PubsubUnboundedSource and BigQueryIO.Write/BigQueryIO.StreamWithDeDup to BigQueryIO.Write/StreamingInserts/StreamingWriteTables. So try mapping your old steps to those mentioned in the full logs before the job failure message.
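Written out in the same flag format as the attempts above (shell escaping included), that mapping looks roughly like this:
--transformNameMapping={\"PubsubIO.Read\":\"PubsubIO.Read/PubsubUnboundedSource\",\"BigQueryIO.Write/BigQueryIO.StreamWithDeDup\":\"BigQueryIO.Write/StreamingInserts/StreamingWriteTables\"}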
Unfortunately I'm now working through a failure of the compatibility check due to a change in the coder used between the old code and the new code, but my missing-step errors are solved.
efx/
  ...
  aws_account/
    nonprod/
      account-variables.tf
      dev/
        account-variables.tf
        common.tf
        app1.tf
        app2.tf
        app3.tf
        ...
  modules/
    tf_efxstack_app1
    tf_efxstack_app2
    tf_efxstack_app3
    ...
In a given environment (dev in the example above), we have multiple modules (app1, app2, app3, etc.) which are based on individual applications we are running in the infrastructure.
I am trying to update the state of one module at a time (e.g. app1.tf). I am not sure how I can do this.
Use case: I would like only one module's launch configuration (LC) to be updated to use the latest AMI or security group.
I tried the -target option in Terraform, but this does not seem to work because it does not check the Terraform remote state file.
terraform plan -target=app1.tf
terraform apply -target=app1.tf
Therefore, no changes take place. I believe this is a bug in Terraform.
Any ideas how I can accomplish this?
Terraform's -target should be for exceptional use cases only, and you should really know what you're doing when you use it. If you genuinely need to regularly target different parts at a time, then you should separate your applications into different directories so you can easily apply a whole directory at a time.
This might mean you need to use data sources or rethink the structure of things a bit more, but it also means you limit the blast radius of any single Terraform action, which is always useful.
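As a rough sketch of the data-source approach (the backend settings, state key, and output name below are placeholders, and the syntax assumes Terraform 0.12 or later), a stack split out into its own directory can expose values that other stacks read through terraform_remote_state instead of reaching for -target:
data "terraform_remote_state" "app1" {
  backend = "s3"
  config = {
    bucket = "example-tf-state" # placeholder bucket name
    key    = "aws_account/nonprod/dev/app1/terraform.tfstate"
    region = "us-east-1"
  }
}

# Another stack can then consume app1's outputs, e.g. a security group ID:
output "app1_security_group_id" {
  value = data.terraform_remote_state.app1.outputs.security_group_id
}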
Is it possible to create an AWS SDK in Perl? I need to use the AWS transcoder service from my Perl script, but it seems an AWS SDK is not available for Perl (http://aws.amazon.com/code). Or is there some other way, such as using the PHP SDK from a Perl script?
The API is just "sending specific things over HTTP". You don't need a language-specific library for that, although it does make things easier. Anyone can write such a wrapper, and some people have already done that for Perl.
Years later, there is now Paws, a Perl AWS interface. It's on CPAN.
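A minimal sketch with Paws, assuming the Paws::ElasticTranscoder service class is installed from CPAN and AWS credentials are available in the environment; the method and attribute names follow the underlying Elastic Transcoder API:
use strict;
use warnings;
use Paws;

# Build a client for the Elastic Transcoder service in one region.
my $et = Paws->service('ElasticTranscoder', region => 'us-east-1');

# List the transcoding pipelines and print their names.
my $res = $et->ListPipelines;
print $_->Name, "\n" for @{ $res->Pipelines };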
It's fairly easy to write your own Perl modules to work with the AWS API. As remarked above, if you can make HTTP calls and create an HMAC signature, any language can do it.
However, there are already a lot of Perl modules on CPAN that address specific AWS functions, such as S3 or EC2. Go to http://www.cpan.org/src/ to search for what you need (e.g., SNS). You'll generally find something that will meet your need.
http://www.timkay.com/aws/
I have found Tim Kay's "aws" and "s3" tools quite useful. They are written in Perl.
It has the added advantage of --exec, so you can append commands directly to the output, in its original state from AWS. It has been a terror for me to have international characters and other junk floating about as a sad excuse for file names. With Tim's toolset, I was able to work around the problem by using --exec to fetch the prefix of the filename (also unique) and then act on it directly, instead of mucking about with metacharacters and other nonsense.
For example:
/123/456/789/You can't be serious that this is really a filename.txt
/123/456/901/Oh!Yes I can! *LOL* Honest!.txt
To nuke the first one:
aws ls --no-vhost mybucketname/123/456/789/ --exec='system "aws", "rm", "--no-vhost", "$bucket/$key"'
Simply put, the tool performs an equivalent "ls" on the S3 bucket, for that prefix, and returns ALL file names in that prefix, which are passed into the exec function. From there, you can see I am blindly deleting whatever files are held within.
(note: --no-vhost helps resolve bucketnames with periods in them and you don't need to use long URLs to get from point a to point b.)