How to view differences between config file and real resources

I've created a Kubernetes cluster using kops, with its config file stored in S3.
The problem is that I've modified some resources manually (such as EC2 properties).
I would like to know if there is some way to view the changes I've made manually.
Hope you can help me.

Assuming you have used the AWS Config service to audit the configurations of your AWS resources, you can view the changes either in the AWS Config console or via the AWS CLI.
Please refer to Viewing Configuration Details to see the required changes.
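For the CLI route, a minimal hedged sketch (the instance id is a placeholder, and it assumes AWS Config was already recording the resource when you made the manual change):

# Show the recorded configuration history for one EC2 instance
aws configservice get-resource-config-history \
    --resource-type AWS::EC2::Instance \
    --resource-id i-0123456789abcdef0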

The way I do this is with kops' Terraform output (the --target=terraform flag; see https://github.com/kubernetes/kops/blob/master/docs/terraform.md). Then:
1. Create the cluster via Terraform.
2. Make some change manually.
3. Run terraform plan. This will show the diff between the current resources and the config. Either hit apply to revert the manual changes, or codify the manual changes and re-apply.
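A hedged end-to-end sketch of that flow (cluster name, state bucket, and zone are placeholders):

# Generate Terraform config instead of having kops apply changes directly
kops create cluster \
    --name=mycluster.example.com \
    --state=s3://my-kops-state-bucket \
    --zones=us-east-1a \
    --target=terraform \
    --out=./kops-terraform
cd kops-terraform
terraform init
terraform apply
# ...modify an EC2 property by hand in the console...
terraform plan    # shows the manual drift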

Try kubediff from Weaveworks.
https://github.com/weaveworks/kubediff

Related

Reverse Engineering AWS Web ACL and WAF Rules

I'm trying to replicate an existing AWS WAF and ACL configuration into Terraform so that, going forward, the config of the WAF rules etc. can be controlled and monitored via Terraform.
The idea being that further configuration can be added via a Terraform repo's deployment.
I've looked at the import options, but I haven't been able to locate any specific resources that allow the WAF config to be exported; I've mainly come across EC2 examples.
Is there a tool within Terraform, or another tool, that will allow me to pull the current WAF data as TF code so I can begin editing from there? Or do I have to replicate this configuration manually first and then run "terraform plan" to check that nothing is due to be changed? (This would confirm that the code matches the current config.)
Thanks in advance
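For what it's worth, a hedged illustration of the import route mentioned in the question, assuming WAFv2 (the resource name and ID are placeholders, and import only populates state, so you still write the matching resource block by hand):

# aws_wafv2_web_acl imports take the form ID/NAME/SCOPE
terraform import aws_wafv2_web_acl.main \
    a1b2c3d4-5678-90ab-cdef-EXAMPLE11111/my-web-acl/REGIONAL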

Deploy new container revision to Cloud Run without changing Terraform

I am setting up a CI/CD environment for a GCP project that involves Cloud Run. While setting up everything via Terraform is pretty much straightforward, I cannot figure out how to update the environment when the code changes.
The documentation says:
Make a change to the configuration file.
But that couples the application deployment to the Terraform configuration, which should be responsible only for infrastructure deployment.
Ideally, I use terraform to provision the infrastructure, and another CI step to build and deploy the container.
Is there a best-practice here?
I ended up separating Cloud Run service creation (which is still done in Terraform) and code deployment into two different workflows.
The key component was to make Terraform ignore the actually deployed image, so that when the code-deployment workflow is done, Terraform won't complain that the Cloud Run image differs from the one it manages. I achieved this by setting ignore_changes = [template[0].spec[0].containers[0].image] on the google_cloud_run_service resource.
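A minimal sketch of that resource (service name, location, and image are placeholders):

resource "google_cloud_run_service" "app" {
  name     = "app"
  location = "us-central1"

  template {
    spec {
      containers {
        # Initial image only; the code-deployment workflow replaces it later.
        image = "gcr.io/my-project/app:initial"
      }
    }
  }

  lifecycle {
    # Ignore the image so out-of-band deploys don't show up as drift.
    ignore_changes = [
      template[0].spec[0].containers[0].image,
    ]
  }
}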

Setting up CodePipeline with Terraform

I am new to Terraform and building a CI setup. When I want to create a CodePipeline that is going to be connected to a GitHub repo, do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me? Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
do I run specific commands inside my Terraform codebase that will reach out to AWS and create the CodePipeline config/instance for me?
Yes, you use the aws_codepipeline resource, which will create a new pipeline in AWS.
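A hedged skeleton of that resource (the referenced IAM role, artifact bucket, CodeStar connection, and CodeBuild project are assumed to be defined elsewhere; all names are placeholders):

resource "aws_codepipeline" "pipeline" {
  name     = "my-pipeline"
  role_arn = aws_iam_role.codepipeline.arn

  artifact_store {
    location = aws_s3_bucket.artifacts.bucket
    type     = "S3"
  }

  stage {
    name = "Source"
    action {
      name             = "Source"
      category         = "Source"
      owner            = "AWS"
      provider         = "CodeStarSourceConnection"
      version          = "1"
      output_artifacts = ["source_output"]
      configuration = {
        ConnectionArn    = aws_codestarconnections_connection.github.arn
        FullRepositoryId = "my-org/my-repo"
        BranchName       = "main"
      }
    }
  }

  stage {
    name = "Build"
    action {
      name            = "Build"
      category        = "Build"
      owner           = "AWS"
      provider        = "CodeBuild"
      version         = "1"
      input_artifacts = ["source_output"]
      configuration = {
        ProjectName = aws_codebuild_project.build.name
      }
    }
  }
}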
Or would I set this CodePipeline up manually inside AWS console and hook it up to Terraform after the fact?
You can also import existing resources into Terraform.
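For a pipeline that was built in the console first, the import is by pipeline name (the name here is a placeholder, and you still need a matching resource block in your code):

terraform import aws_codepipeline.pipeline my-existing-pipeline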
I see you submitted this eight months ago, so I am pretty sure you have your answer, but for those searching who come across this question, here are my thoughts on it.
As most of you have researched, Terraform is infrastructure as code (IaC). As IaC, it needs to be executed somewhere: either locally or inside a pipeline. A pipeline consists of docker containers that emulate a local environment and run the commands to deploy your code. There is more to it than that, but the premise of understanding how Terraform runs remains the same.
So to the magic question: Terraform is code, and if you intend to use a pipeline (Jenkins, AWS, GitLab, and more), then you need a code repository to put all your code into; in this case, a repository where you can store your Terraform code so a pipeline can consume it when deploying. There are other reasons why you should use a code repository, but your question is directed at Terraform and its usage with a pipeline.
Now the magnificent argument, the chicken or the egg: when to create your pipeline and how to do it. To your original question, you could do both. You could store all your Terraform code in a repository (I recommend this), clone it down, and run Terraform locally to create your pipeline. This would be ideal for saving time and leveraging automation. Newbies: you will have to research Terraform state files, which you need to back up in some form or shape once the pipeline is deployed for you (see the sketch below).
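A common way to handle that backup is a remote state backend; a hedged sketch with placeholder bucket and table names:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket"   # assumed pre-created bucket
    key            = "codepipeline/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"             # optional state locking
  }
}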
If you are not so comfortable with Terraform, the GUI in AWS is also fine, and you can easily configure it to hook your pipeline into GitHub to run jobs.
In both scenarios, you must set up Terraform and AWS locally on your machine or within the pipeline to deploy your code. This article is pretty good and will give you a basic understanding of setting up Terraform.
Don't forget to configure AWS on your local machine. For you newbies using a pipeline, you can leverage some of the pipeline links to get you started. Remember one thing: within AWS CodePipeline, you have to use IAM roles and not access keys. That will make more sense once you have gone through the first link. Please also go to YouTube and search for "Terraform for beginners in AWS"; various videos can provide a lot more substance to help you get started.

How to update Terraform state with manual changes done on resources

I had provisioned some resources on AWS, including EC2 instances, but afterwards we attached some extra security groups to those instances. Terraform has now detected this and says it will roll it back as per the configuration file.
Let's say I had the code below, which attaches an SG to my EC2:
vpc_security_group_ids = ["sg-xxxx"]
My problem now is how to update the terraform.tfstate file so that it does not detach the manually attached security groups.
I can solve it as below:
1. Refresh the Terraform state file with terraform refresh, which will update the state file.
2. Then update my Terraform configuration file manually with the security group IDs that were attached manually.
But that is only feasible for a small setup. What if we have a complex scenario? Do we have any other mechanism in Terraform that would detect the drift and update it?
Thanks!!
There is no way Terraform will update your source code when detecting drift on AWS.
The process you mention is right:
1. Report the manual changes done in AWS into the Terraform code.
2. Do a terraform plan. It will refresh the state and show you whether there is still a difference.
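Tied to the question's snippet, a brief illustration (sg-yyyy stands in for the manually attached group):

# Reflect the manual attachment in the code...
vpc_security_group_ids = ["sg-xxxx", "sg-yyyy"]
# ...then terraform plan should report no change for this attribute.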
You can use terraform import with the id to import the remote changes into your Terraform state file. Later, use terraform plan to check whether the change is reflected in the code.
This can also be achieved by updating the Terraform state file manually, but updating this file by hand is not best practice.
Also, if you are updating your AWS resources (created by Terraform) manually or outside the Terraform code, it defeats the whole purpose of Infrastructure as Code.
If you are looking to manage complex infrastructure on AWS using Terraform, it is very good to follow best practices, and one of them is that all changes should be done via code.
Hope this helps.
terraform import <resource>.<resource_name> [unique_id_from_aws]
You may need to temporarily comment out any provider/resource that relies on the output of the manually created resource.
After running the above, un-comment the dependencies and run terraform refresh.
The accepted answer is technically not correct.
As per my testing:
terraform refresh will update the state file with the current live configuration.
terraform plan will only refresh against the live configuration internally and compare it to the code, but will not actually update the state file.
terraform apply will update the state file to the current live configuration, even if it says there are no changes to apply (use case: make a manual change, then update the TF code to reflect it, and now you want the state file updated).
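A short sketch of that last use case, per the testing described above (the resource edit is assumed to have been made already):

# Manual change made in the console and already reflected in the .tf code:
terraform plan     # compares live config to code; state file is not updated
terraform apply    # may report no changes, but still syncs the state file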

Enable log file rotation to S3

I have enabled this option.
The problem is: if I don't press the snapshot logs button, the logs do not go to S3.
Is there any method through which logs are published to S3 each day?
Or how does the log file rotation option work?
If you are using the default instance profile with Elastic Beanstalk, then AWS automatically creates the permission to rotate the logs to S3.
If you are using a custom instance profile, you have to grant Elastic Beanstalk permission to rotate logs to Amazon S3.
The logs are rotated every 15 minutes.
AWS Elastic Beanstalk: Working with Logs
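For the custom-instance-profile case, a hedged sketch of the extra permissions expressed in Terraform (the role name is a placeholder; the actions and ARNs mirror the bucket access in the managed AWSElasticBeanstalkWebTier policy, so verify them against your account):

resource "aws_iam_role_policy" "eb_log_rotation" {
  name = "eb-log-rotation"
  role = aws_iam_role.eb_instance_role.id   # assumed existing instance-profile role

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Action = ["s3:Get*", "s3:List*", "s3:PutObject"]
      Resource = [
        "arn:aws:s3:::elasticbeanstalk-*",
        "arn:aws:s3:::elasticbeanstalk-*/*",
      ]
    }]
  })
}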
For a more robust mechanism to push your logs to S3 from any EC2 server instance, you can pair logrotate with S3. I've put all the details in this post as a reference, which should let you achieve exactly what you're describing.
Hope that helps.
NOTICE: if you want to rotate custom log files then, depending on your container, you need to add links to your custom log files in the proper places. For example, consider a Ruby on Rails deployment: if you want to store custom information, e.g. some monitoring using the Oink gem in an oink.log file, add the proper link in /var/app/support/logs using .ebextensions:
.ebextensions/XXXlog.config
files:
  "/var/app/support/logs/oink.log":
    mode: "120400"
    content: "/var/app/current/log/oink.log"
After deployment, this will create the symlink:
/var/app/support/logs/oink.log -> /var/app/current/log/oink.log
I'm not sure why mode 120400 is used; I took it from the example on the AWS docs page http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html (in octal modes, a leading 120 marks a symlink in the Unix file system, and the remaining digits are the permission bits).
This log file rotation is good for archival purposes, but difficult to search and consolidate when you need it most.
Consider using services like Splunk or Loggly.