I am new to Terraform. How do I stop GCP VM instances using Terraform?
I have tried changing the status of the VM instance; that option is available for AWS, but I couldn't find a way to do it for GCP.
Edit
Since version v3.11.0 of the Google provider (released 2020/03/02), it is possible to shut down and start a Compute instance with the desired_status field:
compute: added the ability to manage the status of google_compute_instance resources with the desired_status field
Just declare it in your Terraform resource:
resource "google_compute_instance" "default" {
name = "test"
machine_type = "n1-standard-1"
zone = "us-central1-a"
[...]
desired_status = "TERMINATED"
}
Then apply your changes. If your instance was running before, it should be shut down. This PR shows the modifications that were added, if you are interested in taking a look. The desired_status field accepts either RUNNING or TERMINATED.
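As a minimal sketch (the instance_status variable is my own naming, not from the provider docs), the field can also be driven by a variable so the same configuration can start or stop the instance:
variable "instance_status" {
  type    = string
  default = "RUNNING" # set to "TERMINATED" to stop the instance

  validation {
    condition     = contains(["RUNNING", "TERMINATED"], var.instance_status)
    error_message = "The desired_status value must be RUNNING or TERMINATED."
  }
}

resource "google_compute_instance" "default" {
  name         = "test"
  machine_type = "n1-standard-1"
  zone         = "us-central1-a"
  [...]
  desired_status = var.instance_status
}
Applying with terraform apply -var="instance_status=TERMINATED" should then stop the instance, and setting the value back to RUNNING starts it again.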
Previous answer (as of 2019/10/26)
As of the time of the question (2019/09/18), with the latest Google provider available then (version v2.15.0), it was not possible to update the status of a Google Compute instance.
The following issue is open on the Google Terraform provider on GitHub:
google_compute_instance should allow to specify instance state #1719
There is also a Pull Request to add this feature :
ability to change instance_state #2956
But unfortunately, this PR seems to be stale (not updated since 2019/03/13).
Related
I have a Terraform script which provisions a Kubernetes deployment and a few ClusterRoles and ClusterRoleBindings via Helm.
But now I need to edit one of the provisioned ClusterRoles via Terraform and add another block of permissions. Is there a way to do this, or would I need to recreate a similar resource from scratch?
This is my block to create the deployment for the efs-csi-driver.
resource "helm_release" "aws-efs-csi-driver" {
name = "aws-efs-csi-driver"
chart = "aws-efs-csi-driver"
repository = "https://kubernetes-sigs.github.io/aws-efs-csi-driver/"
version = "2.x.x"
namespace = "kube-system"
timeout = 3600
values = [
file("${path.module}/config/values.yaml"),
]
}
Somehow I need to modify https://github.com/kubernetes-sigs/aws-efs-csi-driver/blob/45c5e752d2256558170100138de835b82d54b8af/deploy/kubernetes/base/controller-serviceaccount.yaml#L11 by adding a couple more permission blocks. Is there a way that I can patch it (or completely overlay it)?
I'm new to Terraform, so I'm sure this is an easy question.
I'm trying to deploy into GCP using Terraform.
I have 2 different environments, both in the same GCP project:
nonlive
live
I have alerts for each environment, so this is what I intend to achieve:
If I deploy into an environment, Terraform must create/update the resources for that environment but not update the resources of the other environments.
I'm trying to use modules and conditions, similar to this:
module "enviroment_live" {
source = "./live"
module_create = (var.environment=="live")
}
resource "google_monitoring_alert_policy" "alert_policy_live" {
count = var.module_create ? 1 : 0
display_name = "Alert CPU LPProxy Live"
Problem:
When I deploy to the live environment, Terraform deletes the alerts for the nonlive environment, and vice versa.
Is it possible to update the resources of one environment without deleting those of the other?
Regards
As Marko E suggested, the solution was to use workspaces:
Terraform workspaces
The steps are:
Create a workspace for each environment.
On deploy (CI/CD), select the workspace before plan/apply:
terraform workspace select $ENVIROMENT
Use conditions (as I explained before) to create/configure the resources, as in the sketch below.
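A minimal sketch of that condition; the alert details (combiner, filter, threshold) are placeholders I assumed since they were not shown in the original. The key point is that count keys off the built-in terraform.workspace value, and because each workspace has its own state, applying in one workspace never touches the other's alerts:
resource "google_monitoring_alert_policy" "alert_policy_live" {
  # Only created when the "live" workspace is selected;
  # the "nonlive" workspace tracks its own copy in a separate state.
  count = terraform.workspace == "live" ? 1 : 0

  display_name = "Alert CPU LPProxy Live"
  combiner     = "OR"

  conditions {
    display_name = "CPU usage" # placeholder condition
    condition_threshold {
      filter          = "metric.type=\"compute.googleapis.com/instance/cpu/utilization\" AND resource.type=\"gce_instance\""
      comparison      = "COMPARISON_GT"
      threshold_value = 0.8
      duration        = "60s"
    }
  }
}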
This is my template:
resource "aws_ecs_cluster" "doesntmatter" {
name = var.doesntmatter_name
capacity_providers = ["FARGATE", "FARGATE_SPOT"]
setting {
name = "containerInsights"
value = "enabled"
}
tags = var.tags
}
When I run it, it properly creates the cluster and sets containerInsights to enabled.
But when I run Terraform again, it wants to change this property as if it weren't set before.
It doesn't matter how many times I run it; it still thinks it needs to change this setting on every deployment.
Additionally, running terraform state show on the resource does show that this setting is saved in the state file.
It's a bug that is resolved with v3.57.0 of the Terraform AWS Provider (released yesterday).
Amazon ECS is making a change to the ECS Describe-Clusters API. Previously, the response to a successful ECS Describe-Clusters API request included the cluster settings by default. This behavior was incorrect since, as documented here (https://docs.aws.amazon.com/cli/latest/reference/ecs/describe-clusters.html), cluster settings is an optional field that should only be included when explicitly requested by the customer. With the change, ECS will no longer surface the cluster settings field in response to the Describe-Clusters API by default. Customers can continue to use the --include SETTINGS flag with the Describe-Clusters API to receive the cluster settings.
Tracking bug: https://github.com/hashicorp/terraform-provider-aws/issues/20684
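A minimal sketch of pinning the provider to at least that release (the surrounding terraform block is assumed, not taken from the question):
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # 3.57.0 contains the fix for the perpetual containerInsights diff
      version = ">= 3.57.0"
    }
  }
}
After upgrading the provider, terraform plan should no longer report a change to the containerInsights setting on every run.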
I am trying to create an RDS Aurora MySQL cluster in AWS using Terraform. However, I notice that any time I alter the cluster in a way that requires it to be replaced, all data is lost. I have configured it to take a final snapshot and would like to restore from that snapshot, or restore the original data through an alternative measure.
Example: Change Cluster -> TF Destroys the original cluster -> TF Replaces with new cluster -> Restore Data from original
I have attempted to use the same snapshot identifier for both aws_rds_cluster.snapshot_identifier and aws_rds_cluster.final_snapshot_identifier, but Terraform bombs because the final snapshot of the destroyed cluster doesn't yet exist.
I've also attempted to use the rds-finalsnapshot module, but it turns out it is primarily used for spinning environments up and down while preserving the data, i.e. destroying an entire cluster and then recreating it as part of a separate deployment. (Module: https://registry.terraform.io/modules/connect-group/rds-finalsnapshot/aws/latest)
module "snapshot_maintenance" {
source="connect-group/rds-finalsnapshot/aws//modules/rds_snapshot_maintenance"
identifier = local.cluster_identifier
is_cluster = true
database_endpoint = element(aws_rds_cluster_instance.cluster_instance.*.endpoint, 0)
number_of_snapshots_to_retain = 3
}
resource "aws_rds_cluster" "provisioned_cluster" {
cluster_identifier = module.snapshot_maintenance.identifier
engine = "aurora-mysql"
engine_version = "5.7.mysql_aurora.2.10.0"
port = 1234
database_name = "example"
master_username = "example"
master_password = "example"
iam_database_authentication_enabled = true
storage_encrypted = true
backup_retention_period = 2
db_subnet_group_name = "example"
skip_final_snapshot = false
final_snapshot_identifier = module.snapshot_maintenance.final_snapshot_identifier
snapshot_identifier = module.snapshot_maintenance.snapshot_to_restore
vpc_security_group_ids = ["example"]
}
What I find is that if a change requires destroy and recreation, I don't have a great way to restore the data as part of the same deployment.
I'll add that I don't think this is an issue with my code. It's more of a lifecycle limitation of TF. I believe I can't be the only person who wants to preserve the data in their cluster in the event TF determines the cluster must be recreated.
If I wanted to prevent loss of data due to a change to the cluster that results in a destroy, do I need to destroy the cluster outside of Terraform or through the CLI, sync up Terraform's state, and then apply?
The solution ended up being rather simple, albeit obscure. I tried over 50 different approaches using combinations of existing resource properties, provisioners, null resources (with triggers) and external data blocks with AWS CLI commands and Powershell scripts.
The challenge here was that I needed to ensure the provisioning happened in this order to ensure no data loss:
Stop DMS replication tasks from replicating more data into the database.
Take a new snapshot of the cluster, once incoming data had been stopped.
Destroy and recreate the cluster, using the snapshot_identifier to specify the snapshot taken in the previous step.
Destroy and recreate the DMS tasks.
Of course these steps were based on how Terraform decided it needed to apply updates. It may determine it only needed to perform an in-place update; this wasn't my concern. I needed to handle scenarios where the resources were destroyed.
The final solution was to eliminate the use of external data blocks and go exclusively with local-exec provisioners, because external data blocks would execute even when only running terraform plan. I used the local-exec provisioners to tap into lifecycle events like "create" and "destroy" to ensure my PowerShell scripts would only execute during terraform apply.
On my cluster, I set both final_snapshot_identifier and snapshot_identifier to the same value.
final_snapshot_identifier = local.snapshot_identifier
snapshot_identifier = data.external.check_for_first_run.result.isFirstRun == "true" ? null : local.snapshot_identifier
snapshot_identifier is only set after the first deployment; an external data block lets me check whether the resource already exists in order to drive that condition. The condition is necessary because, on a first deployment, the snapshot won't exist and Terraform would fail during the planning step. A sketch of that data block follows.
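This is a minimal sketch of what the check could look like; the script path and the -File invocation are my assumptions (only the data source name and the isFirstRun key come from the snippet above), and the external data source only requires that the program print a JSON object to stdout:
data "external" "check_for_first_run" {
  # Hypothetical helper script: it should check (e.g. via the AWS CLI) whether the
  # cluster or its snapshot already exists and print JSON like {"isFirstRun": "true"}.
  program = ["PowerShell", "-File", "${path.module}/powershell_scripts/check_for_first_run.ps1"]
}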
Then I execute a PowerShell script in a local-exec provisioner on destroy to stop any DMS tasks and then delete the snapshot named local.snapshot_identifier.
provisioner "local-exec" {
when = destroy
# First, stop the inflow of data to the cluster by stopping the dms tasks.
# Next, we've tricked TF into thinking the snapshot we want to use is there by using the same name for old and new snapshots, but before we destroy the cluster, we need to delete the original.
# Then TF will create the final snapshot immediately following the execution of the below script and it will be used to restore the cluster since we've set it as snapshot_identifier.
command = "/powershell_scripts/stop_dms_tasks.ps1; aws rds delete-db-cluster-snapshot --db-cluster-snapshot-identifier benefitsystem-cluster"
interpreter = ["PowerShell"]
}
This clears out the last snapshot and allows Terraform to create a new final snapshot by the same name as the original, just in time to be used to restore from.
Now, I can run Terraform the first time and get a brand-new cluster. All subsequent deployments will use the final snapshot to restore from and data is preserved.
So I've been using AWS AMIs in my CloudFormation template.
It seems they create new images every month and deprecate the old ones two weeks or so after the new one is released. This creates many problems:
Old template stacks become broken.
Templates need to be updated.
Am I missing something?
E.g.
I'm staring at
API: ec2:RunInstances Not authorized for images: [ami-1523bd2f]
error in my CloudFormation events.
Looking it up, that's the 2014-02-12 image ID:
http://thecloudmarket.com/image/ami-1523bd2f--windows-server-2012-rtm-english-64bit-sql-2012-sp1-web-2014-02-12
Whereas now there's a new image ID:
http://thecloudmarket.com/image/ami-e976efd3--windows-server-2012-rtm-english-64bit-sql-2012-sp1-web-2014-03-12
You are correct indeed. Windows AMIs are deprecated when a new version is released (see http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/Basics_WinAMI.html).
There is no "point and click" solution as of today; the documentation says: "AWS updates the AWS Windows AMIs several times a year. Updating involves deprecating the previous AMI and replacing it with a new AMI and AMI ID. To find an AMI after it's been updated, use the name instead of the ID. The basic structure of the AMI name is usually the same, with a new date added to the end. You can use a query or script to search for an AMI by name, confirm that you've found the correct AMI, and then launch your instance."
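For the Terraform-based setups elsewhere in this thread, the same "search by name" idea can be sketched with the aws_ami data source; the name pattern below is borrowed from the PowerShell script further down and is only an example:
data "aws_ami" "windows_2012_base" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["Windows_Server-2012-RTM-English-64Bit-Base*"]
  }
}

# The resolved ID is then available as data.aws_ami.windows_2012_base.id.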
One possible solution might be to develop a CloudFormation Custom Resource that would check for AMI availability before launching an EC2 instance.
See this documentation about CFN Custom Resources: http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/crpg-walkthrough.html
And this talk from re:Invent: https://www.youtube.com/watch?v=ZhGMaw67Yu0#t=945 (and this sample code for AMI lookup)
You also have the option to create your own custom AMI based on an Amazon-provided one, even if you do not modify anything. Your custom AMI will be an exact copy of the one provided by Amazon but will remain available after the Amazon AMI's deprecation.
Netflix has open-sourced tools to help manage AMIs; have a look at Aminator.
Linux AMIs are deprecated years after release (2003.11 is still available today!), but Windows AMIs are deprecated as soon as a patched version is available. This is for security reasons.
This PowerShell script works for my purposes; we use the Windows 2012 base image:
$imageId = "xxxxxxx"
if ( (Get-EC2Image -ImageIds $imageId) -eq $null ) {
  $f1 = New-Object Amazon.EC2.Model.Filter ; $f1.Name = "owner-alias" ; $f1.Value = "amazon"
  $f2 = New-Object Amazon.EC2.Model.Filter ; $f2.Name = "platform" ; $f2.Value = "windows"
  $img = Get-EC2Image -Filters $f1,$f2 | ? { $_.Name.StartsWith("Windows_Server-2012-RTM-English-64Bit-Base") } | Select-Object -First 1
  $imageId = $img.ImageId
}
I recently ran into the same error. I had built a custom AMI in one account and was trying to run an EC2 instance from another account.
The issue for me was that the AMI did not have the correct permissions to allow my user from the other account to run it.
To fix it, I logged in to the other account and added the required permissions to the AMI:
aws ec2 modify-image-attribute --image-id youramiid --launch-permission "Add=[{UserId=youruserid}]"
More information at this documentation page.
If you are using training material and copied the code, make sure to replace the AMI ID with a correct value from the list of AMIs visible in your account (and similarly for the other values). If you just cut and paste values from the training code, they may no longer be available.