I have just started working on a project that is hosted on an AWS EC2 Windows instance running IIS. I want to move this setup to a more reliable place, and one of the first things I wanted to do was to move away from snowflake servers that are set up and configured by hand.
So I started looking at Terraform from HashiCorp. My thought was that I could define the entire setup, including the network etc., in Terraform and that way make sure it was configured correctly.
I thought I would start by defining a server: a simple Windows Server instance with IIS installed. But this is where I ran into my first problems. I thought I could configure IIS from Terraform. I guess you can't. So my next thought was to combine Terraform with PowerShell Desired State Configuration (DSC).
I can set up an IIS server on a box using DSC, but I am stuck on invoking DSC from Terraform. I can provision a vanilla server easily. I have tried looking for a good blog post on how to use DSC in combination with Terraform, but I can't find one that explains how to do it.
Can anyone point me towards a good place to read up on this? Alternatively, if the reason I can't find this is that it is just bad practice and I should do it another way, then please educate me.
Thanks
How can I provision IIS on an EC2 Windows instance with a Terraform resource?
You can run arbitrary PowerShell scripts on startup as follows:
resource "aws_instance" "windows_2016_server" {
//...
user_data = <<-EOF
<powershell>
$file = $env:SystemRoot + "\Temp\${var.some_variable}" + (Get-Date).ToString("MM-dd-yy-hh-mm")
New-Item $file -ItemType file
</powershell>
EOF
//...
}
You'll need a variable like this defined to use that (I'm providing a slightly more complex example so there's a more useful starting point):
variable "some_variable" {
type = string
default = "UserDataTestFile"
}
Instead of creating a timestamp file like the example above, you can invoke DSC to set up IIS as you normally would interactively from PowerShell on a server.
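For example, here is a minimal sketch of a user_data script that uses DSC's built-in WindowsFeature resource to install IIS (the configuration name and output path are illustrative assumptions, not something Terraform requires):
<powershell>
# Minimal illustrative DSC configuration; the name InstallIIS and the C:\DSC path are assumptions.
Configuration InstallIIS {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node "localhost" {
        WindowsFeature WebServer {
            Name   = "Web-Server"
            Ensure = "Present"
        }
    }
}
# Compile the configuration to a MOF file and apply it.
InstallIIS -OutputPath "C:\DSC\InstallIIS"
Start-DscConfiguration -Path "C:\DSC\InstallIIS" -Wait -Verbose
</powershell>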
You can read more about user_data on Windows here:
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/ec2-windows-user-data.html
user_data will include your PowerShell directly.
You can use templatefile("${path.module}/user-data.ps1", { some_variable = var.some_variable }) instead of the inline script shown above.
Have user-data.ps1 in the same directory as the TF file that references it:
<powershell>
$file = $env:SystemRoot + "\Temp\${some_variable}" + (Get-Date).ToString("MM-dd-yy-hh-mm")
New-Item $file -ItemType file
</powershell>
You still need the <powershell></powershell> tags around your script source code. That's a requirement of how Windows on EC2 expects PowerShell user-data scripts.
And then update your TF file as follows:
resource "aws_instance" "windows_2016_server" {
//...
user_data = templatefile("${module.path}/user-data.ps1, {
some_variable = var.some_variable
})
//...
}
Note that the file read by templatefile references variables as some_variable and NOT var.some_variable.
Read more about templatefile here:
https://www.terraform.io/docs/configuration/functions/templatefile.html
Related
I wish to deploy an infrastructure that is written as a Terraform module. This module is as follows:
module "my-module" {
count = var.env == "prod" ? 1 : 0
source = "s3::https://s3-us-east-1.amazonaws.com/my-bucket/my-directory/"
env = var.env
deployment = var.deployment
}
Right now this is in a my-module.tf file, and I am deploying it by running the usual terraform init, plan and apply commands (and passing in the relevant variables).
However, for my specific requirements, I wish to be able to deploy this only by running terraform init, plan and apply commands (and passing in the relevant variables), and not having to store the module in a file on my own machine. I would rather have the module file be stored remotely (e.g. s3 bucket) so other teams/users do not need to have the file on their own machine. Is there any way this terraform could be deployed in such a way that the module file can be stored remotely, and could for example be passed as an option when running terraform plan and apply commands?
could for example be passed as an option when running terraform plan and apply commands?
It's not possible. As explained in the Terraform docs, source must be a literal string, which means it can't come from a variable or any other dynamic expression.
You would have to develop your own wrapper around Terraform that does a simple find-and-replace of source placeholders with the actual values before you run terraform.
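A minimal sketch of such a wrapper, assuming a template file my-module.tf.tpl containing a __MODULE_SOURCE__ placeholder (both names are hypothetical):
#!/bin/sh
# Replace the placeholder with the real module source, then run Terraform as usual.
MODULE_SOURCE="s3::https://s3-us-east-1.amazonaws.com/my-bucket/my-directory/"
sed "s|__MODULE_SOURCE__|$MODULE_SOURCE|g" my-module.tf.tpl > my-module.tf
terraform init
terraform plan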
The aws command is
aws s3 ls --endpoint-url http://s3.amazonaws.com
Can I load the endpoint-url from a config file instead of passing it as a parameter?
This is an open issue in the AWS CLI; the issue thread links to a CLI plugin which might do what you need.
It's worth pointing out that if you're just connecting to standard Amazon cloud services (like S3) you don't need to specify --endpoint-url at all. But I assume you're trying to connect to some other private service and that url in your example was just, well, an example...
alias aws='aws --endpoint-url http://website'
Updated Answer
Here is an alternative alias to address the OP's specific need and comments above
alias aws='aws $([ -r "$SOME_CONFIG_FILE" ] && sed "s,^,--endpoint-url ," $SOME_CONFIG_FILE) '
The SOME_CONFIG_FILE environment variable could point to an aws-endpoint-override file containing
http://localhost:4566
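Hypothetical usage, with the alias above in place:
export SOME_CONFIG_FILE=~/aws-endpoint-override
aws s3 ls    # expands to: aws --endpoint-url http://localhost:4566 s3 ls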
Original Answer
Thought I'd share an alternative version of the alias
alias aws='aws ${AWS_ENDPOINT_OVERRIDE:+--endpoint-url $AWS_ENDPOINT_OVERRIDE} '
This idea I replicated from another alias I use for Terraform
alias terraform='terraform ${TF_DIR:+-chdir=$TF_DIR} '
I happen to use direnv with a /Users/darren/Workspaces/current-client/.envrc containing
source_up
PATH_add bin
export AWS_PROFILE=saml
export AWS_REGION=eu-west-1
export TF_DIR=/Users/darren/Workspaces/current-client/infrastructure-project
...
A possible workflow for AWS-endpoint overriding could entail cd'ing into a docker-env directory, where /Users/darren/Workspaces/current-client/app-project/docker-env/.envrc contains
source_up
...
export AWS_ENDPOINT_OVERRIDE=http://localhost:4566
where LocalStack is running in Docker, exposed on port 4566.
You may not be using Docker or LocalStack, etc., so ultimately you will have to provide the AWS_ENDPOINT_OVERRIDE environment variable via whatever mechanism, and with whatever value, suits your use case.
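Without direnv, the same effect can be had by exporting the variable directly in your shell, for example:
export AWS_ENDPOINT_OVERRIDE=http://localhost:4566
aws s3 ls    # with the alias above, runs: aws --endpoint-url http://localhost:4566 s3 ls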
I am writing a small script that takes a small file from my local machine and puts it into an AWS S3 bucket.
My terraform.tf:
provider "aws" {
region = "us-east-1"
version = "~> 1.6"
}
terraform {
backend "s3" {
bucket = "${var.bucket_testing}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
key = "testexport/exportFile.tfstate"
region = "us-east-1"
encrypt = true
}
}
data "aws_s3_bucket" "pr-ip" {
bucket = "${var.bucket_testing}"
}
resource "aws_s3_bucket_object" "put_file" {
bucket = "${data.aws_s3_bucket.pr-ip.id}"
key = "${var.file_path}/${var.file_name}"
source = "src/Datafile.txt"
etag = "${md5(file("src/Datafile.txt"))}"
kms_key_id = "arn:aws:kms:us-east-1:12345678900:key/12312313ed-34sd-6sfa-90cvs-1234asdfasd"
server_side_encryption = "aws:kms"
}
However, when I init:
terraform init
#=>
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working with Terraform immediately by creating Terraform configuration files.
and then try to apply:
terraform apply
#=>
Error: No configuration files found!
Apply requires configuration to be present. Applying without a configuration would mark everything for destruction, which is normally not what is desired. If you would like to destroy everything, please run 'terraform destroy' instead which does not require any configuration files.
I get the error above. Also, I have set up my default AWS Access Key ID and value.
What can I do?
This error means that you have run the command in the wrong place. You have to be in the directory that contains your configuration files, so before running init or apply you have to cd to your Terraform project folder.
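For example (the project path here is hypothetical):
cd ~/projects/my-terraform-project    # the directory containing your *.tf files
terraform init
terraform apply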
Error: No configuration files found!
The above error arises when you are not in the folder that contains your configuration files.
To remediate the situation, create a .tf file in the project folder you will be working in.
Note: an empty .tf file will also eliminate the error, but it will be of limited use as it does not contain any provider info.
See the example below:
provider "aws" {
  region = "us-east-1" # if this value is not provided here, it will be asked for when terraform apply is executed
}
So, in order for the terraform apply command to execute successfully, you need to make sure of the below points:
You need to be in your Terraform project folder (it can be any directory).
It must contain a .tf file, which should preferably include the provider info.
Execute terraform init to initialize the backend & provider plugin.
You are now good to execute terraform apply (without the no-configuration error).
In case anyone comes across this now: I ran into an issue where my TF_WORKSPACE env var was set to a different workspace than the directory I was in. Double-check your workspace with
terraform workspace show
to show your available workspaces
terraform workspace list
to use one of the listed workspaces:
terraform workspace select <workspace name>
If the TF_WORKSPACE env var is set when you try to use terraform workspace select, Terraform will print a message telling you about the potential issue:
The selected workspace is currently overridden using the TF_WORKSPACE
environment variable.
To select a new workspace, either update this environment variable or unset
it and then run this command again.
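In that case, assuming a POSIX shell, you can unset the override and then select a workspace (the workspace name is hypothetical):
unset TF_WORKSPACE
terraform workspace select dev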
I had the same error as you. In my case it was not a VPN error but an incorrectly named file, and I was in the project folder. To remedy the situation, I created a .tf file in the vim editor with the command vi aws.tf, then populated the file with the defined variables. Mine is working now.
I too had the same issue; remember that Terraform filenames should end with the .tf extension.
Another possible reason could be that you are using modules and the module source URL is incorrect.
When I had:
source = "git::ssh://git#git.companyname.com/observability.git//modules/ec2?ref=v2.0.0"
instead of:
source = "git::ssh://git#git.companyname.com/observability.git//terraform/modules/ec2?ref=v2.0.0"
I was seeing the same error message as you.
I got this error this morning when deploying to production, on a project which has been around for years and where nothing had changed. We finally traced it down to the fact that the person who created the production deploy ticket had pasted this command into an email using Outlook:
terraform init --reconfigure
Microsoft, in its infinite wisdom, combined the two hyphens into one and the one hyphen wasn't even the standard ASCII hyphen character (I think it's called an "en-dash"):
terraform init –reconfigure
This caused Terraform 0.12.31 to give the helpful error message:
Terraform initialized in an empty directory!
The directory has no Terraform configuration files. You may begin working
with Terraform immediately by creating Terraform configuration files.
It took us half an hour and another pair of eyes to notice that the hyphens were incorrect and needed to be re-typed! (I think terraform thought "reconfigure" was the name of the directory we wanted to run the init in, which of course didn't exist. Perhaps terraform could be improved to name the directory it's looking in when it reports this error?)
Thanks Microsoft for always being helpful (not)!
I have a moderately complex terraform setup with
a module directory containing a main.tf, variables.tf and input.tf
and an environments directory containing foo.tf, variables.tf and vars.tf
I can successfully run terraform apply and everything succeeds.
But, if I immediately run terraform apply again it makes changes.
The changes it keeps making are to resources in the module, specifically resources that get attributes from variables in the environments tf files. I'm creating an MQ broker and a dashboard to monitor it.
In the environments directory
top.tf
module "broker" {
source = "modules/broker"
dashboard = "...."
}
In the modules directory
input.tf
variable "dashboard" {
}
amazonmq.tf
resource "aws_cloudwatch_dashboard" "mydash" {
dashboard_name = "foo"
dashboard_body = "${dashboard}"
}
Every time I run terraform apply it says it needs to change the dashboard. Any hints on what I'm doing wrong? (I've tried running with TF_LOG=DEBUG but I can't see anything that says why a change is needed). Thanks in advance.
This seems to be an issue with the Terraform provider code itself. The dashboard_body property should have the computed flag attached to it, which would allow you to provide it but ignore any incoming changes from AWS.
I've opened up an issue on the github page. You'll find it here: https://github.com/terraform-providers/terraform-provider-aws/issues/5729
I'm new to Terraform, so perhaps it's just not supposed to work this way. I want to use aws_s3_bucket_object to upload a package to a bucket as part of an app deploy. I'm going to be changing the package for each deploy and I want to keep the old versions.
resource "aws_s3_bucket_object" "object" {
bucket = "mybucket-app-versions"
key = "version01.zip"
source = "version01.zip"
}
But after running this for a future deploy, I will want to upload version02, then version03, and so on. Terraform replaces the old zip with the new one, which is expected behavior.
But is there a way to have terraform not destroy the old version? Is this a supported use case here or is this not how I'm supposed to use terraform? I wouldn't want to force this with an ugly hack if terraform doesn't have official support for doing something like what I'm trying to do here.
I could of course just call the S3 api via script, but it would be great to have this defined with the rest of the terraform definition for this app.
When using Terraform for application deployment, the recommended approach is to separate the build step from the deploy step and use Terraform only for the latter.
The responsibility of the build step -- which is implemented using a separate tool, depending on the method of deployment -- is to produce some artifact (an archive, a docker container, a virtual machine image, etc), publish it somewhere, and then pass its location or identifier to Terraform for deployment.
This separation between build and deploy allows for more complex situations, such as rolling back to an older artifact (without rebuilding it) if the new version has problems.
In simple scenarios it is possible to pass the artifact location to Terraform using Input Variables. For example, in your situation where the build process would write a zip file to S3, you might define a variable like this:
variable "archive_name" {
}
This can then be passed to whatever resource needs it using ${var.archive_name} interpolation syntax. To deploy a particular artifact, pass its name on the command line using -var:
$ terraform apply -var="archive_name=version01.zip"
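For example, here is a minimal sketch of wiring that variable into the resource from the question (the bucket name is taken from the question; the key layout is an assumption):
resource "aws_s3_bucket_object" "object" {
  bucket = "mybucket-app-versions"
  key    = "${var.archive_name}"
  source = "${var.archive_name}"
}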
Some organizations prefer to keep a record of the "current" version of each application in some kind of data store, such as HashiCorp Consul, and read it using a data source. This approach can be easier to orchestrate in an automated build pipeline, since it allows this separate data store to be used to indirectly pass the archive name between the build and deploy steps, without needing to pass any unusual arguments to Terraform itself.
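A hedged sketch of that pattern using the Consul provider's consul_keys data source (the key path and names are assumptions):
data "consul_keys" "app" {
  key {
    name = "archive_name"
    path = "apps/myapp/current_archive"
  }
}

# Reference it wherever the archive name is needed:
#   ${data.consul_keys.app.var.archive_name}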
Currently, you tell terraform to manage one aws_s3_bucket_object and terraform takes care of its whole life-cycle, meaning terraform will also replace the file if it sees any changes to it.
What you are maybe looking for is the null_resource. You can use it to run a local-exec provisioner to upload the file you need with a script. That way, the old file won't be deleted, as it is not directly managed by terraform. You'd still be calling the API via a script then, but the whole process of uploading to s3 would still be included in your terraform apply step.
Here is an outline of the null_resource:
resource "null_resource" "upload_to_s3" {
depends_on = ["<any resource that should already be created before upload>"]
...
triggers = ["<A resource change that must have happened so terraform starts the upload>"]
provisioner "local-exec" {
command = "<command to upload local package to s3>"
}
}
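For instance, a hedged sketch of what this could look like for the bucket in the question, assuming Terraform 0.12+ (for filemd5) and the AWS CLI available on the machine running Terraform:
resource "null_resource" "upload_to_s3" {
  # Re-run the upload whenever the package content changes.
  triggers = {
    package_hash = filemd5("version01.zip")
  }

  provisioner "local-exec" {
    command = "aws s3 cp version01.zip s3://mybucket-app-versions/version01.zip"
  }
}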