How to convert the AWS Secrets Manager string to a map in Terraform (0.11.13)

I have a secret stored in AWS Secrets Manager and am trying to integrate it with Terraform at runtime. We are using Terraform 0.11.13, and updating to the latest Terraform is on the roadmap.
We would love to use the jsondecode() function available in the latest Terraform, but we need to get a few things integrated before we upgrade.
We tried to use the below helper external data program suggested as part of https://github.com/terraform-providers/terraform-provider-aws/issues/4789.
data "external" "helper" {
program = ["echo", "${replace(data.aws_secretsmanager_secret_version.map_example.secret_string, "\\\"", "\"")}"]
}
But we ended up getting this error:
data.external.helper: can't find external program "echo"
Google search didn't help much.
Any help will be much appreciated.
OS: Windows 10

It sounds like you want to use a data source for aws_secretsmanager_secret.
Resources in Terraform create and manage new infrastructure objects, while data sources reference the values of existing ones.
data "aws_secretsmanager_secret" "example" {
arn = "arn:aws:secretsmanager:us-east-1:123456789012:secret:example-123456"
}
data "aws_secretsmanager_secret_version" "example" {
secret_id = data.aws_secretsmanager_secret.example.id
version_stage = "example"
}
Note: you can also use the secret name
Docs: https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/secretsmanager_secret
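For example, a lookup by name rather than by ARN might look like this minimal sketch (the secret name is hypothetical):

data "aws_secretsmanager_secret" "by_name" {
  name = "example"
}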
Then you can use the value from this like so:
output "MySecretJsonAsString" {
  value = data.aws_secretsmanager_secret_version.example.secret_string
}
Per the docs, the secret_string property of this resource is:
The decrypted part of the protected secret information that was originally provided as a string.
You should also be able to pass that value into jsondecode and then access the properties of the json body individually.
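For example, on Terraform 0.12+ a sketch might look like this (the key name below is hypothetical and depends on what JSON the secret actually contains):

locals {
  # Assumes secret_string is a JSON object such as {"username": "...", "password": "..."}
  creds = jsondecode(data.aws_secretsmanager_secret_version.example.secret_string)
}

output "db_username" {
  value = local.creds["username"]
}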
But you asked for a Terraform 0.11.13 solution. If the secret value is defined by Terraform, you can use the terraform_remote_state data source to get the value. This does assume that nothing other than Terraform is updating the secret. The best answer is to upgrade your Terraform, but this could be a useful stopgap until then.
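A rough 0.11-style sketch of that stopgap, assuming the configuration that creates the secret exposes it as an output named db_password and stores its state in S3 (bucket, key, and output name are all hypothetical):

data "terraform_remote_state" "secrets" {
  backend = "s3"

  config {
    bucket = "my-terraform-state"
    key    = "secrets/terraform.tfstate"
    region = "us-east-1"
  }
}

# In 0.11, the output is then referenced as a top-level attribute:
#   "${data.terraform_remote_state.secrets.db_password}"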
As a recommendation, you can pin the Terraform version per module rather than for your whole organization. I do this with Docker containers that run specific versions of the Terraform binary; a script in the root of every module wraps the Terraform commands so they run under the version meant for that project. Just a tip.

Related

How can I ensure that my retrieval of secrets is secure?

Currently I am using Terraform and AWS Secrets Manager to store and retrieve secrets, and I would like some insight into whether my implementation is secure, and if not, how I can make it more secure. Let me illustrate what I have tried.
In secrets.tf I create a secret like this (this needs to be implemented with targeting):
resource "aws_secretsmanager_secret" "secrets_of_life" {
name = "top-secret"
}
I then go to the console and manually set the secret value in AWS Secrets Manager.
I then retrieve the secrets in secrets.tf like this:
data "aws_secretsmanager_secret_version" "secrets_of_life_version" {
secret_id = aws_secretsmanager_secret.secrets_of_life.id
}
locals {
creds = jsondecode(data.aws_secretsmanager_secret_version.secrets_of_life.secret_string)
}
And then I proceed to use the secret (exporting it as a K8s secret, for example) like this:
resource "kubernetes_secret" "secret_credentials" {
metadata {
name = "kubernetes_secret"
namespace = kubernetes_namespace.some_namespace.id
}
data = {
top_secret = local.creds["SECRET_OF_LIFE"]
}
type = "kubernetes.io/generic"
}
It's worth mentioning that I store tf state remotely. Is my implementation secure? If not, how can I make it more secure?
Yes, I can confirm it is secure, since you accomplished the following:
Plain-text secrets are kept out of your code.
Your secrets are stored in a dedicated secret store that enforces encryption and strict access control.
Everything is defined in the code itself. There are no extra manual steps or wrapper scripts required.
Secrets Manager supports rotating secrets, which is useful in case a secret gets compromised.
The only thing I would add is to use a Terraform backend that supports encryption, like S3, and to avoid committing the state file to your source control.
Looks good; as @asri suggests, it's a good, secure implementation.
The risk of exposure will be in the remote state: it is possible that the secret will be stored there in plain text. Assuming you are using S3, make sure that the bucket is encrypted. If you share Terraform state access with other developers, they may have access to those values in the remote state file.
From https://blog.gruntwork.io/a-comprehensive-guide-to-managing-secrets-in-your-terraform-code-1d586955ace1
These secrets will still end up in terraform.tfstate in plain text! This has been an open issue for more than 6 years now, with no clear plans for a first-class solution. There are some workarounds out there that can scrub secrets from your state files, but these are brittle and likely to break with each new Terraform release, so I don’t recommend them.
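For illustration, a minimal sketch of an encrypted S3 backend (bucket name, key, and lock table are hypothetical):

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"
    key            = "app/terraform.tfstate"
    region         = "us-east-1"
    encrypt        = true              # server-side encryption of the state object at rest
    dynamodb_table = "terraform-locks" # optional: state locking
  }
}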
Hi, I'm working on similar things; here are some thoughts:
When running Terraform for the second time, the secret will be in plain text in the state files stored in S3. Is S3 safe enough to store those sensitive strings?
My work uses a similar approach: run Terraform to create an empty secret / dummy string as a placeholder -> manually update it to the real credentials -> run Terraform again so the resource uses the updated credentials. The thing is that when we deploy to production we want the process to be as automated as possible; this approach is not ideal, but I haven't figured out a better way.
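For illustration only, one way to sketch that placeholder workflow is to seed a dummy version and tell Terraform to ignore later changes to it, so a subsequent apply does not revert the value entered by hand (the resource label and key are an illustrative assumption, reusing the secret from the question above):

resource "aws_secretsmanager_secret_version" "placeholder" {
  secret_id     = aws_secretsmanager_secret.secrets_of_life.id
  secret_string = jsonencode({ SECRET_OF_LIFE = "placeholder" })

  lifecycle {
    # Once the real value is set manually in the console, don't overwrite it.
    ignore_changes = [secret_string]
  }
}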
If anyone has better ideas please feel free to leave a comment below.

Create instance using Terraform from GCP marketplace

I'm trying to create a Terraform script to launch the fastai instance from the marketplace.
I'm adding the image name as:
boot_disk {
  initialize_params {
    image = "<image name>"
  }
}
When I add
click-to-deploy-images/deeplearning
from the URL
https://console.cloud.google.com/marketplace/details/click-to-deploy-images/deeplearning
it gives this error:
Error: Error resolving image name 'click-to-deploy-images/deeplearning': Could not find image or family click-to-deploy-images/deeplearning
on fastai.tf line 13, in resource "google_compute_instance" "default":
13: resource "google_compute_instance" "default" {
If I use
debian-cloud/debian-9
from the URL
https://console.cloud.google.com/marketplace/details/debian-cloud/debian-stretch?project=<>
it works.
Can we deploy the fastai image through Terraform?
I made a deployment from the deep learning marketplace VM instance you shared and reviewed the source image[1]; you should be able to use the URL I provide below to deploy with Terraform. I also noticed a warning stating that the image is deprecated and there is this newer version[2].
Hope this helps!
[1]sourceImage: https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf2-2-1-cu101-20200109
[2]https://www.googleapis.com/compute/v1/projects/click-to-deploy-images/global/images/tf2-2-1-cu101-20200124
In this particular case, the name was "deeplearning-platform-release/pytorch-latest-gpu",
boot_disk {
  initialize_params {
    image = "deeplearning-platform-release/pytorch-latest-gpu"
    ...
  }
}
Now I'm able to create the instance.
To other newbies like me:
Apparently GCP Marketplace uses Deployment Manager, which is Google's own declarative tool to manage infrastructure. (I think modules are the closest abstraction to it in Terraform.)
Hence, there is no simple/single answer to the question in the title.
In my opinion, if you start from scratch and/or can afford the effort and the time, the best option is to use Terraform modules instead of GCP Marketplace solutions, if such modules exist.
However, chances are good that you are importing existing infra and cannot just replace it immediately (or there is no such module).
In this case, I think the best you can do is go to Deployment Manager in the Google console and open the particular deployment you need to import.
At this point you can see what resources make up the deployment. There will probably be VM template(s), VM(s), firewall rule(s), etc.
Clicking on the VM instance and the template will show you a lot of useful details.
Most importantly, you can deduce what image was used.
E.g.:
In my case it showed:
sourceImage https://www.googleapis.com/compute/v1/projects/openvpn-access-server-200800/global/images/aspub275
From this I could define (based on an answer on issue #7319)
data "google_compute_image" "openvpn_server" {
name = "aspub275"
project = "openvpn-access-server-200800"
}
Which I could in turn use in the google_compute_instance resource.
This will force a recreation of the VM though.
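For completeness, a minimal sketch of wiring that data source into the instance (instance name, machine type, zone, and network are hypothetical):

resource "google_compute_instance" "openvpn_server" {
  name         = "openvpn-server"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = data.google_compute_image.openvpn_server.self_link
    }
  }

  network_interface {
    network = "default"
  }
}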

Terraform and AWS: No Configuration Files Found Error

I am writing a small script that takes a small file from my local machine and puts it into an AWS S3 bucket.
My terraform.tf:
provider "aws" {
region = "us-east-1"
version = "~> 1.6"
}
terraform {
backend "s3" {
bucket = "${var.bucket_testing}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
key = "testexport/exportFile.tfstate"
region = "us-east-1"
encrypt = true
}
}
data "aws_s3_bucket" "pr-ip" {
bucket = "${var.bucket_testing}"
}
resource "aws_s3_bucket_object" "put_file" {
bucket = "${data.aws_s3_bucket.pr-ip.id}"
key = "${var.file_path}/${var.file_name}"
source = "src/Datafile.txt"
etag = "${md5(file("src/Datafile.txt"))}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
server_side_encryption = "aws:kms"
}
However, when I init:
terraform init
#=>
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working with Terraform immediately by creating Terraform configuration files.
and then try to apply:
terraform apply
#=>
Error: No configuration files found!
Apply requires configuration to be present. Applying without a configuration would mark everything for destruction, which is normally not what is desired. If you would like to destroy everything, please run 'terraform destroy' instead which does not require any configuration files.
I get the error above. Also, I have set up my default AWS Access Key ID and value.
What can I do?
This error means that you have run the command in the wrong place. You have to be in the directory that contains your configuration files, so before running init or apply you have to cd to your Terraform project folder.
Error: No configuration files found!
The above error arises when you are not in the folder that contains your configuration file.
To remediate the situation, create a .tf file in the project folder you will be working in.
Note - an empty .tf file will also eliminate the error, but it will be of limited use as it does not contain any provider info.
See the example below:
provider "aws" {
  region = "us-east-1" # if not provided here, this value will be asked for when terraform apply is executed
}
So, in order for the terraform apply command to execute successfully, you need to make sure of the points below:
You need to be in your Terraform project folder (it can be any directory).
It must contain a .tf file, which should preferably contain the Terraform provider info.
Execute terraform init to initialize the backend & provider plugin.
You are now good to execute terraform apply (without the no-configuration error).
In case anyone comes across this now: I ran into an issue where my TF_WORKSPACE env var was set to a different workspace than the directory I was in. Double-check your workspace with
terraform workspace show
To show your available workspaces:
terraform workspace list
To use one of the listed workspaces:
terraform workspace select <workspace name>
If the TF_WORKSPACE env var is set when you try to use terraform workspace select, Terraform will print a message telling you of the potential issue:
The selected workspace is currently overridden using the TF_WORKSPACE
environment variable.
To select a new workspace, either update this environment variable or unset
it and then run this command again.
I had the same error as you. In my case it was not a VPN error but incorrect file naming, and I was in the project folder. To remedy the situation, I created a .tf file with the vim editor using the command vi aws.tf, then populated the file with the defined variables. Mine is working now.
I too had the same issue; remember that the Terraform filename should end with the .tf extension.
Another possible reason could be if you are using modules where the URL is incorrect.
When I had:
source = "git::ssh://git#git.companyname.com/observability.git//modules/ec2?ref=v2.0.0"
instead of:
source = "git::ssh://git#git.companyname.com/observability.git//terraform/modules/ec2?ref=v2.0.0"
I was seeing the same error message as you.
I got this error this morning when deploying to production, on a project which has been around for years and in which nothing had changed. We finally traced it down to the fact that the person who created the production deploy ticket had pasted this command into an email using Outlook:
terraform init --reconfigure
Microsoft, in its infinite wisdom, combined the two hyphens into one, and that one hyphen wasn't even the standard ASCII hyphen character (I think it's called an "en-dash"):
terraform init –reconfigure
This caused Terraform 0.12.31 to give the helpful error message:
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
It took us half an hour and another pair of eyes to notice that the hyphens were incorrect and needed to be re-typed! (I think terraform thought "reconfigure" was the name of the directory we wanted to run the init in, which of course didn't exist. Perhaps terraform could be improved to name the directory it's looking in when it reports this error?)
Thanks Microsoft for always being helpful (not)!

Can I have terraform keep the old versions of objects?

New to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket as part of an app deploy. I'm going to be changing the package for each deploy and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But after running this, for a future deploy I will want to upload version02, then version03, etc. Terraform replaces the old zip with the new one, which is expected behavior.
But is there a way to have Terraform not destroy the old version? Is this a supported use case, or is this not how I'm supposed to use Terraform? I wouldn't want to force this with an ugly hack if Terraform doesn't have official support for doing something like this.
I could of course just call the S3 API via a script, but it would be great to have this defined with the rest of the Terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
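As a rough sketch of that Consul-based pattern using the consul_keys data source (the key path and names are made up for illustration):

data "consul_keys" "app" {
  key {
    name = "archive_name"
    path = "apps/example/current_archive"
  }
}

# The deploy step can then read "${data.consul_keys.app.var.archive_name}"
# instead of requiring -var on the command line.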
Currently, you tell terraform to manage one aws_s3_bucket_object and terraform takes care of its whole life-cycle, meaning terraform will also replace the file if it sees any changes to it.
What you are maybe looking for is the null_resource. You can use it to run a local-exec provisioner to upload the file you need with a script. That way, the old file won't be deleted, as it is not directly managed by terraform. You'd still be calling the API via a script then, but the whole process of uploading to s3 would still be included in your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
depends_on = ["<any resource that should already be created before upload>"]
...
triggers = ["<A resource change that must have happened so terraform starts the upload>"]
provisioner "local-exec" {
command = "<command to upload local package to s3>"
}
}
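For instance, a rough concrete version of that outline using the AWS CLI (reusing the bucket name from the question and the archive_name variable from the previous answer; adjust to your setup):

resource "null_resource" "upload_to_s3" {
  # Re-run the upload whenever the archive name changes.
  triggers = {
    archive_name = "${var.archive_name}"
  }

  provisioner "local-exec" {
    command = "aws s3 cp ${var.archive_name} s3://mybucket-app-versions/${var.archive_name}"
  }
}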

Exporting AWS Data Pipeline as CloudFormation template to use it in Terraform

I'm trying to export an existing AWS Data Pipeline task to Terraform infrastructure somehow.
According to this issue, there is no direct support for Data Pipelines, but it still seems achievable using CloudFormation templates (terraform resource).
The problem is that I cannot find a way to export existing pipeline into CloudFormation template.
Exporting the pipeline with its specific definition syntax won't work as I've not found a way to include this definition into CloudFormation. CloudFormer does not support exporting pipelines either.
Does anybody know how to export a pipeline to CloudFormation or any other way to get AWS Data Pipeline automated with Terraform?
Thank you for your help!
UPD [Jul. 2019]: Some progress has been made in the terraform repository. The aws_datapipeline_pipeline resource has been implemented, but it is not yet clear how to use it. Merged pull request
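For reference, a minimal sketch of that resource as merged; it appears to create only the pipeline container itself (name, description, tags are the values shown below and are hypothetical), so the pipeline definition still has to be supplied separately:

resource "aws_datapipeline_pipeline" "example" {
  name        = "example-pipeline"
  description = "Managed by Terraform"

  tags = {
    environment = "dev"
  }
}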
Original answer:
As a solution to this problem, I've come up with a node.js script, which covers my use case. In addition, I've created a Terraform module to be used in Terraform configuration.
Here is the link to the gist with the code
Will copy usage examples here.
Command Line:
node converter-cli.js ./template.json "Data Pipeline Cool Name" "Data Pipeline Cool Description" "true" >> cloudformation.json
Terraform:
module "some_cool_pipeline" {
source = "./pipeline"
name = "cool-pipeline"
description = "The best pipeline!"
activate = true
template = "${file("./cool-pipeline-template.json")}"
values = {
myDatabase = "some_database",
myUsername = "${var.db_user}",
myPassword = "${var.db_password}",
myTableName = "some_table",
}
}