AWS won't let you delete a VPC if there are instances in it.
If I create a non-Terraform-managed instance in a VPC that I did create with Terraform, and then run terraform destroy, Terraform hangs waiting.
I can then go to the AWS console, try to delete the VPC manually, and get a useful response from AWS explaining why it can't be deleted, along with a list of the offending resources I can delete by hand.
Is there a verbose switch where Terraform would spit out these messages from the AWS API? I assume the AWS API returns this info, but perhaps it only does that when deleting via the console?
I haven't found any info on how to make terraform destroy return this information, so I'm assuming it's probably not possible, but I wanted to confirm.
You can get more information out of Terraform by setting the TF_LOG environment variable before executing Terraform. There are a few levels of logging, which should look familiar if you know syslog severity levels (e.g. INFO, WARN, ERROR, etc.). Setting this variable is a very useful debugging strategy.
Setting TF_LOG=DEBUG should at least let you determine which AWS API calls are being made. In my experience with Terraform, it's not uncommon for an API call to fail, and Terraform sometimes won't report an error, will hang, or will report an error whose message is cryptic at best. This is something the Terraform community is working on, and there are open GitHub issues describing similar behaviour.
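For example, something like this (a minimal sketch; the exact log contents vary by Terraform and provider version):

```sh
# Enable verbose provider/API logging for a single destroy run.
# TF_LOG accepts TRACE, DEBUG, INFO, WARN, ERROR; TRACE is the most verbose.
export TF_LOG=DEBUG
export TF_LOG_PATH=./terraform-destroy.log   # optional: write logs to a file
terraform destroy
# Then search the log for the underlying AWS API error, e.g.:
grep -i "DependencyViolation" terraform-destroy.log
```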
If, after setting the TF_LOG environment variable, the API call is indeed failing, I suggest opening a GitHub issue with Terraform, formatted according to the issue contributing guidelines.
Related
My AWS free tier is going to expire in 8 days. I removed every EC2 resource and Elastic IP associated with it, because that is what I recall initializing and experimenting with. I also deleted all the roles I created because, as I understand it, roles permit AWS to perform actions on behalf of AWS services. And yet, when I go to the billing page, it shows three services as being in current usage.
(Screenshot of the billing page: https://i.stack.imgur.com/RvKZc.png)
I used the script recommended by the AWS documentation to check for all instances, and it shows "no resources found".
Link for script: https://docs.aws.amazon.com/systems-manager-automation-runbooks/latest/userguide/automation-awssupport-listec2resources.html
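(For reference, a rough per-region CLI equivalent of that check might look something like this; the region and the set of services queried are only examples, and resources have to be checked in every region you used:)

```sh
# Check a single region for leftover instances, Elastic IPs, and EBS volumes.
REGION=us-east-1
aws ec2 describe-instances --region "$REGION" \
  --query 'Reservations[].Instances[].InstanceId' --output text
aws ec2 describe-addresses --region "$REGION" \
  --query 'Addresses[].PublicIp' --output text
aws ec2 describe-volumes --region "$REGION" \
  --query 'Volumes[].VolumeId' --output text
# List any tagged resource of any service in this region:
aws resourcegroupstaggingapi get-resources --region "$REGION" \
  --query 'ResourceTagMappingList[].ResourceARN' --output text
```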
I tried searching for each service using the dashboard and didn't get anywhere. I found an S3 bucket that I don't remember creating, but I deleted it anyway, and I still get the same output.
Any help is much appreciated.
OK, I was able to get in touch with AWS support via live chat, and they informed me that those services in my billing were usage generated before the services were terminated. AWS support was much faster than I expected.
Terraform can keep remote state via well-documented backends, e.g. terraform.backend.s3:
https://www.terraform.io/docs/language/settings/backends/s3.html
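(For context, a minimal S3 backend configuration looks roughly like this; the bucket, key, and region values are placeholders:)

```sh
# Write a minimal S3 backend block and re-initialize.
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "my-terraform-state-bucket"   # placeholder
    key    = "envs/dev/terraform.tfstate"  # placeholder
    region = "eu-west-1"                   # placeholder
  }
}
EOF
terraform init
```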
Can AWS CDK provide remote state for its stacks?
I can't find it in the documentation.
https://docs.aws.amazon.com/cdk/latest/guide/awscdk.pdfstack
I ask about AWS CDK because I only found poor documentation about AWS CDKTF.
I also found that CloudFormation generates a lot of JSON files as well as uses them. Do those contain the state?
The CDK uses CloudFormation under the hood, which manages the remote state of the infrastructure in a similar way to a Terraform state file.
You get the benefit of AWS taking care of state management for you (for free) without the risks of doing it yourself and messing up your state file.
The drawback is that if there is drift between the state CloudFormation thinks resources are in and their actual state, things get tricky.
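If you suspect drift, CloudFormation can detect it for you; a sketch with the AWS CLI (the stack name is a placeholder for whatever name the CDK gave your stack):

```sh
# Ask CloudFormation to compare the recorded state of a stack with reality.
DETECTION_ID=$(aws cloudformation detect-stack-drift \
  --stack-name MyCdkStack --query StackDriftDetectionId --output text)
# Check whether detection has finished (re-run until the status is complete).
aws cloudformation describe-stack-drift-detection-status \
  --stack-drift-detection-id "$DETECTION_ID"
# List the resources that have drifted.
aws cloudformation describe-stack-resource-drifts \
  --stack-name MyCdkStack \
  --stack-resource-drift-status-filters MODIFIED DELETED
```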
I created an AWS Elasticsearch cluster (the VPC version) via Terraform.
That gave me a Kibana instance which I can access through a URL.
I access it via a proxy, as it is in a VPC and thus not publicly accessible.
All good. But recently I ran out of disk space. The infamous Write Status went red, and nothing was being written into the cluster anymore.
As this is a dev environment, I googled and found the easiest possible way to fix this:
curl -XDELETE <URL>/*
So far so good, logs are being written again.
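(In hindsight, a less drastic first step would have been to check which indices were actually using the space and delete only those; for example, against the same proxied endpoint, and assuming a version where the cat APIs support sorting; the index pattern in the last line is just an illustration:)

```sh
# List indices sorted by store size, and show disk usage per data node.
curl -s "<URL>/_cat/indices?v&s=store.size:desc" | head -20
curl -s "<URL>/_cat/allocation?v"
# Then delete only the oldest/largest indices instead of everything, e.g.:
# curl -XDELETE "<URL>/logstash-2021.01.*"
```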
But I then thought I should fix this properly, so I did some more reading and wanted to create an Index State Management policy. I took the default one and only changed the notification destination.
But when hitting "Create Policy" I get:
Sorry, there was an error
Authorization Exception
Which is quite odd, as AWS created the Kibana instance with no user management whatsoever, so I would assume I have all rights.
Any idea?
Indeed, we had to ask support, and the reason it was failing was that - as this is a dev environment and not production - we had no dedicated master nodes and no UltraWarm storage. The sample policy I was trying to install moves indices from hot to warm, which apparently actually means UltraWarm, and thus needs UltraWarm storage to be enabled.
A rather misleading error message, though.
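For completeness, dedicated masters and UltraWarm can be enabled on an existing domain from the CLI; a rough sketch, assuming the older `es` command namespace and with the domain name, instance types, and counts as placeholders:

```sh
# Enable dedicated master nodes and UltraWarm on an existing domain.
# All names, types, and counts below are placeholders.
aws es update-elasticsearch-domain-config \
  --domain-name my-dev-domain \
  --elasticsearch-cluster-config \
    DedicatedMasterEnabled=true,DedicatedMasterType=m5.large.elasticsearch,DedicatedMasterCount=3,WarmEnabled=true,WarmType=ultrawarm1.medium.elasticsearch,WarmCount=2
```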
I'm trying to monitor memory on an EC2 Amazon Linux 2 instance. I'm using these instructions for reference and I'm seeing the error ERROR: Cannot obtain instance id from EC2 meta-data. I disabled IMDSv1 on my instance, which I'm guessing is how the CloudWatch agent is trying to get my instance id. Does anyone know if there are updated docs for this, or a way to fix it? I looked at the AWS script here and I think I could figure out how to have it get the instance ID with IMDSv2, but I'd be surprised if they didn't already have a way to do this. I think I'm missing something, though.
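(For context, with IMDSv2 the instance id has to be fetched with a session token, roughly like this:)

```sh
# IMDSv2: request a session token first, then use it to read metadata.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/instance-id"
```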
I figured it out...
TL;DR: Don't use the Amazon script to monitor memory; use the CloudWatch agent.
I clicked the first link that showed up when looking at how to monitor memory on EC2, and unfortunately that link points to an old way of monitoring memory using a script.
The documentation has since been updated: the CloudWatch agent can be configured to monitor memory, starting here. There's an automated way to set it up from that documentation and a manual way. Either one will create a custom namespace in the CloudWatch metrics view.
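For anyone landing here with the same question, a minimal agent configuration that publishes only memory usage can look roughly like this (the paths are the agent's defaults on Amazon Linux 2; adjust to your install):

```sh
# Minimal CloudWatch agent config that publishes memory usage only.
sudo tee /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json <<'EOF'
{
  "metrics": {
    "namespace": "CWAgent",
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"]
      }
    }
  }
}
EOF
# Load the config and (re)start the agent.
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
  -a fetch-config -m ec2 \
  -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json -s
```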
I've been using GCP and Terraform for a few months - just creating some basic VMs and firewall resources for testing.
Increasingly, about 50% of the time when applying and 100% of the time when trying to destroy an environment using Terraform, I get the following error:
Error creating Firewall: Post https://www.googleapis.com/compute/beta/projects/mkdemos-219107/global/firewalls?alt=json: oauth2: cannot fetch token: Post https://accounts.google.com/o/oauth2/token: net/http: TLS handshake timeout
The only way to destroy is to log into the console, manually delete the resources, and rm my local Terraform state files.
It's the intermittent nature of this that is driving me crazy. I've tried creating a new project and re-creating a new JSON service-account key, and I still get the same behaviour.
If it consistently failed or had been doing this all the time, I'd assume there was something wrong in my Terraform template or the way I've set up the GCP service account. But sometimes it works and sometimes it doesn't; it makes no sense and is making GCP unworkable for testing.
If anyone has any similar experience of this I'd welcome some thoughts. Surely it can't just be me?? ;-)
FYI:
Terraform: v0.11.7
provider.google: v1.19.0
Mac OSX: 10.13.1
Cheers.
There might be a strange solution: please check whether another user on your OS is able to run Terraform commands. If they can, it means the problem is located in your user profile.
If that works, then back up and delete all the certificates in your login keychain and retry the Terraform commands.
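A quick way to test both of these suggestions (the username and path below are placeholders):

```sh
# 1. Check whether TLS to Google's token endpoint works at all outside Terraform.
curl -v --max-time 15 -o /dev/null https://accounts.google.com/o/oauth2/token
# 2. Run the same Terraform config as a different local user; if it works there,
#    the problem is likely in your own profile/keychain.
sudo -u otheruser -i bash -c 'cd /path/to/same/config && terraform plan'
```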