I am attempting to import an existing EC2 instance into Terraform. I have taken the EC2 instance's user data and added it to my TF config file, e.g.:
user_data = <<EOF
<powershell>
& $env:SystemRoot\System32\control.exe "intl.cpl,,/f:`"UKRegion.xml`""
& tzutil /s "GMT Standard Time"
Set-Culture en-GB
</powershell>
EOF
The resource imports OK, but when I run terraform plan I get TF wanting to destroy and recreate the instance, as a 'change' in user_data 'forces new resource'.
user_data: "946f756af0df239b19f86a72653e58dcc04c4b27" => "811599030dc713b18c3e35437a82b35095190a81" (forces new resource)
I have tried copying and pasting the user data from the EC2 console into the TF file, but this is not working. Is this at all possible?
You can't copy the user_data value from the state, as it is a hash of the original value; if you copy it into the resource configuration, it'll get hashed again before it is compared to the current state, and it won't match.
But you can copy the current user data value from the Instance settings in the EC2 Console, and paste it into the user_data attribute of the resource in the .tf file.
If there are multiple lines of data, you will need to replace each newline with a \n in the file, as you can't use a multi-line string here.
For example:
resource "aws_instance" "instance1" {
ami = "ami-0123456789abcdef0"
instance_type = "t4.medium"
user_data = "setting1=value\nsetting2=value\nsetting3=value"
}
Note: there is a 16KB size limit on the user data in EC2, but if you're copying from EC2 to your Terraform configuration, this won't be an issue. (But it will make it difficult to read and manage, so you may want to consider storing the config in a custom image instead.)
Or, as you suggested already, if your setup supports this and you can afford to shut the instance down temporarily, remove the user data from the instance settings altogether.
Per this GitHub issue, it looks like this is an issue with how Terraform interprets the user_data as a "computed" value. There appears to be a workaround.
First run a plan/apply cycle with your plan command including the extra argument -target=template_file.userdata-consul on your command line. This will tell Terraform to do the minimal work it needs to update the template file, which should leave your launch configuration untouched.
Now run plan again, and since the template_file has now already been recreated, it should interpolate the resolved template as expected into the user_data, and there should then be no diff, because the "new" template rendering should be the same as the "old" one.
I have a secret stored in AWS Secrets Manager and am trying to integrate it within Terraform at runtime. We are using Terraform version 0.11.13, and updating to the latest Terraform is on the roadmap.
We would all like to use the jsondecode() function available as part of the latest Terraform, but need to get a few things integrated before we upgrade our Terraform.
We tried to use the helper external data program below, suggested as part of https://github.com/terraform-providers/terraform-provider-aws/issues/4789:
data "external" "helper" {
program = ["echo", "${replace(data.aws_secretsmanager_secret_version.map_example.secret_string, "\\\"", "\"")}"]
}
But we ended up getting this error:
data.external.helper: can't find external program "echo"
Google search didn't help much.
Any help will be much appreciated.
OS: Windows 10
It sounds like you want to use a data source for the aws_secretsmanager_secret.
Resources in Terraform create and manage new infrastructure. Data sources in Terraform reference the values of resources that already exist.
data "aws_secretsmanager_secret" "example" {
arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = data.aws_secretsmanager_secret.example.id
version_stage = "example"
}
Note: you can also use the secret name
Docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret
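For example, a lookup by name instead of ARN would look like this (the label and name here are placeholders):
data "aws_secretsmanager_secret" "by_name" {
  name = "example"
}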
Then you can use the value from this like so:
output "MySecretJsonAsString" {
  value = data.aws_secretsmanager_secret_version.example.secret_string
}
Per the docs, the secret_string property of this resource is:
The decrypted part of the protected secret information that was originally provided as a string.
You should also be able to pass that value into jsondecode and then access the properties of the json body individually.
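As a hedged illustration (Terraform 0.12+ syntax, and assuming the secret body is a JSON object with a "password" key; the key and output name are my placeholders, not from the question):
locals {
  secret_map = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)
}

output "db_password" {
  value     = local.secret_map["password"]
  sensitive = true
}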
But you asked for a Terraform 0.11.13 solution. If the secret value is defined by Terraform, you can use the terraform_remote_state data source to get the value. This does trust that nothing other than Terraform is updating the secret. The best answer is still to upgrade your Terraform, but this could be a useful stopgap until then.
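A rough 0.11-style sketch of that stopgap, assuming the configuration that creates the secret exposes it as an output named db_password and stores its state at the S3 location shown (bucket, key, and output name are all illustrative):
data "terraform_remote_state" "secrets" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"
    key    = "secrets/terraform.tfstate"
    region = "us-east-1"
  }
}

# In 0.11, remote state outputs are read as attributes, e.g.
# "${data.terraform_remote_state.secrets.db_password}"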
As a recommendation, you can make the version of Terraform specific to a module rather than your whole organization. I do this through Docker containers that run specific versions of the Terraform binary. There is a script in the root of every module that wraps the Terraform commands so they run under the version of Terraform meant for that project. Just a tip.
Whenever I run a terraform plan using the following:
resource "google_sql_user" "users" {
name = "me"
instance = "${google_sql_database_instance.master.name}"
host = "me.com"
password = "changeme"
}
Terraform always outputs that the plan will create the user, even though it has already been created. This only happens when I am running against a Postgres instance on Google Cloud SQL. I'm using Terraform version 0.11.1. Is there something I'm missing? I tried setting the id value, but it still recreates itself.
It turns out that the user was not being added to the Terraform state. The cause seemed to be that, when using Postgres, the host = "me.com" value is not required and causes problems if it's left there.
Once that was removed, the Terraform state was correct.
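For reference, the working configuration with the host argument removed looks like this (values are the same placeholders used in the question):
resource "google_sql_user" "users" {
  name     = "me"
  instance = "${google_sql_database_instance.master.name}"
  password = "changeme"
}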
I am writing a small script that takes a small file from my local machine and puts it into an AWS S3 bucket.
My terraform.tf:
provider "aws" {
region = "us-east-1"
version = "~> 1.6"
}
terraform {
backend "s3" {
bucket = "${var.bucket_testing}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
key = "testexport/exportFile.tfstate"
region = "us-east-1"
encrypt = true
}
}
data "aws_s3_bucket" "pr-ip" {
bucket = "${var.bucket_testing}"
}
resource "aws_s3_bucket_object" "put_file" {
bucket = "${data.aws_s3_bucket.pr-ip.id}"
key = "${var.file_path}/${var.file_name}"
source = "src/Datafile.txt"
etag = "${md5(file("src/Datafile.txt"))}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
server_side_encryption = "aws:kms"
}
However, when I init:
terraform init
#=>
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working with Terraform immediately by creating Terraform configuration files.
and then try to apply:
terraform apply
#=>
Error: No configuration files found!
Apply requires configuration to be present. Applying without a configuration would mark everything for destruction, which is normally not what is desired. If you would like to destroy everything, please run 'terraform destroy' instead which does not require any configuration files.
I get the error above. Also, I have set up my default AWS Access Key ID and value.
What can I do?
This error means that you have run the command in the wrong place. You have to be in the directory that contains your configuration files, so before running init or apply you have to cd to your Terraform project folder.
Error: No configuration files found!
The above error arises when you are not in the folder that contains your configuration files.
To remediate the situation, create a .tf file in the project folder you will be working in.
Note: an empty .tf file will also eliminate the error, but it will be of limited use as it does not contain provider info.
See the example below:
provider "aws" {
region = "us-east" #Below value will be asked when the terraform apply command is executed if not provided here
}
So, for the terraform apply command to execute successfully, make sure of the following points:
You need to be in your Terraform project folder (it can be any directory).
It must contain a .tf file, which should preferably contain the Terraform provider info.
Execute terraform init to initialize the backend & provider plugins.
You are now good to execute terraform apply (without any "no config" error).
In case anyone comes across this now: I ran into an issue where my TF_WORKSPACE env var was set to a different workspace than the directory I was in. Double-check your workspace with:
terraform workspace show
To show your available workspaces:
terraform workspace list
To use one of the listed workspaces:
terraform workspace select <workspace name>
If the TF_WORKSPACE env var is set when you try to use terraform workspace select, TF will print a message telling you of the potential issue:
The selected workspace is currently overridden using the TF_WORKSPACE
environment variable.
To select a new workspace, either update this environment variable or unset
it and then run this command again.
I had the same error as you. In my case it was not a VPN error but incorrect file naming; I was in the project folder. To remedy the situation, I created a .tf file with the vim editor using the command vi aws.tf, then populated the file with the defined variables. Mine is working.
I too had the same issue; remember that the Terraform filename should end with the .tf extension.
Another possible reason could be that you are using modules and the source URL is incorrect.
When I had:
source = "git::ssh://git#git.companyname.com/observability.git//modules/ec2?ref=v2.0.0"
instead of:
source = "git::ssh://git#git.companyname.com/observability.git//terraform/modules/ec2?ref=v2.0.0"
I was seeing the same error message as you.
I got this error this morning when deploying to production, on a project which has been around for years and where nothing had changed. We finally traced it down to the fact that the person who created the production deploy ticket had pasted this command into an email using Outlook:
terraform init --reconfigure
Microsoft, in its infinite wisdom, combined the two hyphens into one and the one hyphen wasn't even the standard ASCII hyphen character (I think it's called an "en-dash"):
terraform init –reconfigure
This caused Terraform 0.12.31 to give the helpful error message:
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
It took us half an hour and another pair of eyes to notice that the hyphens were incorrect and needed to be re-typed! (I think terraform thought "reconfigure" was the name of the directory we wanted to run the init in, which of course didn't exist. Perhaps terraform could be improved to name the directory it's looking in when it reports this error?)
Thanks Microsoft for always being helpful (not)!
New to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket as part of an app deploy. I'm going to be changing the package for each deploy and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But after running this for a future deploy I will want to upload version02 and then version03 etc. Terraform replaces the old zip with the new one- expected behavior.
But is there a way to have terraform not destroy the old version? Is this a supported use case here or is this not how I'm supposed to use terraform? I wouldn't want to force this with an ugly hack if terraform doesn't have official support for doing something like what I'm trying to do here.
I could of course just call the S3 api via script, but it would be great to have this defined with the rest of the terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
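Purely as an illustration of "whatever resource needs it" (the Lambda function, IAM role reference, runtime, and bucket name below are my own assumptions, not part of the question):
resource "aws_lambda_function" "app" {
  function_name = "my-app"
  role          = "${aws_iam_role.lambda.arn}" # assumed to be defined elsewhere
  handler       = "app.handler"
  runtime       = "python3.9"

  s3_bucket = "mybucket-app-versions"
  s3_key    = "${var.archive_name}"
}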
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
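A hedged sketch of that Consul-based variant might look like this (the key path and names are illustrative):
data "consul_keys" "app" {
  key {
    name = "archive_name"
    path = "apps/my-app/current-archive"
  }
}

# The artifact name is then available as "${data.consul_keys.app.var.archive_name}"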
Currently, you tell terraform to manage one aws_s3_bucket_object and terraform takes care of its whole life-cycle, meaning terraform will also replace the file if it sees any changes to it.
What you are maybe looking for is the null_resource. You can use it to run a local-exec provisioner to upload the file you need with a script. That way, the old file won't be deleted, as it is not directly managed by terraform. You'd still be calling the API via a script then, but the whole process of uploading to s3 would still be included in your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
depends_on = ["<any resource that should already be created before upload>"]
...
triggers = ["<A resource change that must have happened so terraform starts the upload>"]
provisioner "local-exec" {
command = "<command to upload local package to s3>"
}
}
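A slightly more concrete, still hypothetical, version of that outline, assuming the AWS CLI is installed locally and reusing the bucket and file names from the question:
resource "null_resource" "upload_to_s3" {
  # Re-run the upload whenever the local package changes
  triggers = {
    package_hash = "${md5(file("version01.zip"))}"
  }

  provisioner "local-exec" {
    command = "aws s3 cp version01.zip s3://mybucket-app-versions/version01.zip"
  }
}
Because the uploaded object itself is never tracked in state, Terraform will not delete previously uploaded versions.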
I have shared a bunch of AMIs from an AWS account to another.
I used this EC2conn1.modify_image_attribute(AMI_id, operation='add', attribute='launchPermission', user_ids=[second_aws_account_id]) to do it.
But, by only adding launch permission for the 2nd account, I can launch an instance but I cannot copy the shared AMI to another region [in the 2nd account].
When I tick the checkbox to "create volume" from the UI of the 1st account, I can copy the shared AMI from the 2nd account.
I can modify the launch permissions using the modify_image_attribute function from boto.
The documentation says attribute (string) – The attribute you wish to change, but I understand that it can only change the launch permissions and add an account.
Yet, get_image_attribute has 3 options. Valid choices are: * launchPermission * productCodes * blockDeviceMapping.
So, is there a way to programmatically change it from the API along with the launch permissions, or has it not been implemented yet?
The console uses the API, so there's almost nothing you can do in the console that you can't do using the API.
Remember that an AMI is just a configuration entity -- basic launch configuration, linked to (not containing) one or more backing snapshots, which are technically separate entities.
The console is almost certainly making an additional API request to ModifySnapshotAttribute when it offers to optionally "add Create Volume permissions to the following associated snapshot."
See also http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modifying-snapshot-permissions.html
Presumably, copying a snapshot to another region relies on the same "Create Volume" permission (indeed, you'll see that a copied snapshot has a fake source volume ID, presumably an artifact of the copying process).
Based on the accepted answer, this is the code I wrote for anyone interested.
# Add copy permission to the image's snapshot

# Find the snapshot of the specific AMI
image_object = EC2conn.get_image(AMI_id)

# Grab the block device mapping dynamically
ami_devices = []
for key in image_object.block_device_mapping.iterkeys():
    # print key  # debug
    ami_devices.append(key)
# print ami_devices  # debug

for ami_device in ami_devices:
    snap_id = image_object.block_device_mapping[ami_device].snapshot_id
    # Add permission
    EC2conn.modify_snapshot_attribute(snap_id, attribute='createVolumePermission', operation='add', user_ids=second_aws_account_id)
    print "{0} [{1}] Permission added to snapshot".format(AMI_name, snap_id)