Reverse Engineering AWS Web ACL and WAF Rules

I'm trying to replicate an existing AWS WAF and web ACL configuration in Terraform so that, going forward, the WAF rules and related config can be controlled and monitored via Terraform.
The idea is that further configuration can then be added via a Terraform repo's deployment.
I've looked at the import options, but I haven't been able to locate any specific resources that allow the WAF config to be exported; I've mainly come across EC2 examples.
Is there a tool within Terraform, or another tool, that will allow me to pull the current WAF data as TF code so that I can begin editing from there? Or do I have to replicate this configuration manually first and then run "terraform plan" to check that nothing is due to be changed? (That would confirm that the code matches the current config.)
Thanks in advance
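For what it's worth, newer Terraform releases (1.5+) support config-driven import, which can generate a first-pass HCL definition of an existing resource rather than requiring you to hand-write it. A minimal sketch for a WAFv2 web ACL; the ID, name, and scope in the import ID are hypothetical placeholders:

    # import.tf -- the import ID format for aws_wafv2_web_acl is "ID/Name/Scope"
    import {
      to = aws_wafv2_web_acl.existing
      id = "a1b2c3d4-5678-90ab-cdef-111122223333/my-web-acl/REGIONAL"
    }

    # then have Terraform write a best-effort definition to review and clean up:
    # terraform plan -generate-config-out=generated.tf

After cleaning up generated.tf, a terraform plan that shows no changes confirms the code matches the live config.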

Related

How to upgrade from AWS WAF v1 to WAF V2

I have AWS WAF Classic that I would like to upgrade to WAFv2 without having to run a Terraform script to create WAFv2. How can I upgrade the current WAF Classic to WAFv2, without disturbing the current Classic configuration, using Terraform?
When migrating a service from one cloud resource to another (WAF Classic to WAFv2 in your case), you have to use the new Terraform resource blocks for the new service; nothing can rewrite or refactor your Terraform code for you. You have to write new Terraform code and create the new resources from it. Terraform import is also not applicable, since WAFv2 is completely different from WAF Classic, and the Terraform resources differ accordingly ("aws_waf_web_acl" vs. "aws_wafv2_web_acl").
I can say that it is not a big issue; just start from scratch. The major change is that aws_waf_rule no longer exists: rules are now defined inline in the web ACL's definition.
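To illustrate, a minimal WAFv2 web ACL with one inline rule might look like this (the names, rate limit, and scope are hypothetical):

    resource "aws_wafv2_web_acl" "example" {
      name  = "example-web-acl"
      scope = "REGIONAL"

      default_action {
        allow {}
      }

      # rules now live inside the ACL instead of a separate aws_waf_rule resource
      rule {
        name     = "rate-limit"
        priority = 1

        action {
          block {}
        }

        statement {
          rate_based_statement {
            limit              = 2000   # requests per 5-minute window, per IP
            aggregate_key_type = "IP"
          }
        }

        visibility_config {
          cloudwatch_metrics_enabled = true
          metric_name                = "rate-limit"
          sampled_requests_enabled   = true
        }
      }

      visibility_config {
        cloudwatch_metrics_enabled = true
        metric_name                = "example-web-acl"
        sampled_requests_enabled   = true
      }
    }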

How to mark specific resources as exceptions in AWS Config

We have started using AWS Config for compliance reasons, but some resources are exceptions and we would like AWS Config to ignore those specific resources, as they are managed by a third-party CI/CD pipeline. For example, if we have 10 EC2 instances, can we add an exception in AWS Config to skip checking some of those 10 instances? I could not find any way to do this at this point. Is there any workaround?
Thank you.

Extract Entire AWS Setup into storable Files or Deployment Package(s)

Is there some way to 'dehydrate' or extract an entire AWS setup? I have a small application that uses several AWS components, and I'd like to put the project on hiatus so I don't get charged every month.
I wrote / constructed the app directly through the various services' sites, such as VPN, RDS, etc. Is there some way I can extract my setup into files so I can save these files in Version Control, and 'rehydrate' them back into AWS when I want to re-setup my app?
I tried extracting pieces from Lambda and EventBridge, but it seems like I can't just 'replay' these files using the CLI to re-create my application.
Specifically, I am looking to extract all code, settings, connections, etc. for:
Lambda. Code, env variables, layers, scheduling through EventBridge
IAM. Users, roles, permissions
VPC. Subnets, route tables, internet gateways, Elastic IPs, NAT gateways
EventBridge. Cron settings, connections to Lambda functions.
RDS. MySQL instances. Would like to get all DDL. Data in tables is not required.
Thanks in advance!
You could use Former2. It will scan your account and allow you to generate CloudFormation, Terraform, or Troposphere templates. It uses a browser plugin, but there is also a CLI for it.
What you describe is called Infrastructure as Code. The idea is to define your infrastructure as code and then deploy your infrastructure using that "code".
There are a lot of options in this space. To name a few:
Terraform
Cloudformation
CDK
Pulumi
All of these should allow you to import already existing resources. Terraform, at least, has an import command to bring an already existing resource into your IaC project.
This way you could create a project that mirrors what you currently have in AWS.
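For a single resource, the flow with Terraform's import command looks roughly like this; the function name, role ARN, and packaging details are hypothetical placeholders to be reconciled afterwards:

    # main.tf -- a minimal stub so the import has a destination;
    # the attribute values are placeholders, not the real config
    resource "aws_lambda_function" "app" {
      function_name = "my-app-function"
      role          = "arn:aws:iam::123456789012:role/lambda-exec"
      handler       = "index.handler"
      runtime       = "nodejs18.x"
      filename      = "placeholder.zip"
    }

    # shell: pull the real resource into state, then diff
    # terraform import aws_lambda_function.app my-app-function
    # terraform plan    # shows drift between the stub and reality; edit until clean

(Terraform 1.5+ can also generate the HCL for you via import blocks and terraform plan -generate-config-out.)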
Excluded are things that are, strictly speaking, not AWS resources, like:
Code of your Lambdas
MySQL DDL
Depending on the Lambda's deployment "strategy", the code either lives on S3 or was deployed directly to the Lambda service. If it is the former, you just need to find the S3 bucket and download the code from there. If it is the latter, you might need to copy and paste it by hand.
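In the second case, the AWS CLI can still fetch the deployed package: get-function returns a presigned URL to the code bundle. A sketch, with a hypothetical function name:

    # prints a presigned URL to the deployed zip, then downloads it
    aws lambda get-function --function-name my-app-function \
      --query 'Code.Location' --output text
    curl -o my-app-function.zip "$(aws lambda get-function \
      --function-name my-app-function --query 'Code.Location' --output text)"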
When it comes to your MySQL DDL, you need to find a tool to export it. But there are plenty of tools out there that do this.
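For example, mysqldump can export the DDL without any table data (the endpoint, user, and database name here are hypothetical):

    # schema only, no rows
    mysqldump --no-data \
      -h mydb.example.us-east-1.rds.amazonaws.com \
      -u admin -p my_database > schema.sql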
Once you have done that, you should be able to destroy all the AWS resources and deploy them again later from your new IaC.

Terraform: Separate modules vs. one big project

I'm working on a data lake project composed of many services: one VPC (+ subnets, security groups, internet gateway, ...), S3 buckets, an EMR cluster, Redshift, Elasticsearch, some Lambda functions, API Gateway, and RDS.
We can say that some resources are "static": they will be created only once and will not change in the future, like the VPC + subnets and the S3 buckets.
The other resources will change during the development and production project lifecycle.
My question is: what's the best way to manage the structure of the project?
I first started this way:

    modules/
      rds/
        main.tf
        variables.tf
        output.tf
      emr/
      redshift/
      s3/
      vpc/
      elasticsearch/
      lambda/
      apigateway/
    main.tf
    variables.tf
This way I only have to run a single terraform apply and it deploys all the services.
The second option (I have seen some developers use it) is to put each service in a separate folder; you then go into the folder of the service you want to deploy and run terraform apply there.
There will be 2 to 4 developers on this project, and some of us will only work on separate resources.
What strategy would you advise me to follow? Or maybe you have other ideas and best practices?
Thanks for your help.
The way we do it is separate modules for each service, with a “foundational” module that sets up VPCs, subnets, security policies, CloudTrail, etc.
The modules for each service are as self-contained as possible. The module for our RDS cluster for example creates the cluster, the security group, all necessary IAM policies, the Secrets Manager entry, CloudWatch alarms for monitoring, etc.
We then have a deployment “module” at the top that includes the foundational module plus any other modules it needs. One deployment per AWS account, so we have a deployment for our dev account, for our prod account, etc.
The deployment module is where we setup any inter-module communication. For example if web servers need to talk to the RDS cluster, we will create a security group rule to connect the SG from the web server module to the SG from the RDS module (both modules pass back their security group ID as an output).
Think of the deployment as a shopping list of modules and stitching between them.
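A minimal sketch of such a deployment module (the module sources, output names, and port are hypothetical):

    # deployment/main.tf
    module "foundation" {
      source = "../modules/foundation"
    }

    module "rds" {
      source = "../modules/rds"
      vpc_id = module.foundation.vpc_id
    }

    module "web" {
      source = "../modules/web"
      vpc_id = module.foundation.vpc_id
    }

    # the stitching: let the web servers reach the RDS cluster on MySQL's port
    resource "aws_security_group_rule" "web_to_rds" {
      type                     = "ingress"
      from_port                = 3306
      to_port                  = 3306
      protocol                 = "tcp"
      security_group_id        = module.rds.security_group_id
      source_security_group_id = module.web.security_group_id
    }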
If you are working on a module and the change is self-contained, you can do a terraform apply -target=module.modulename to change your thing without disrupting others. When your account has lots of resources this is also handy so plans and applies can run faster.
P.S. I also HIGHLY recommend that you set up remote state for Terraform, stored in S3 with DynamoDB for locking. If you have multiple developers, you DO NOT want to try to manage the state file yourself; you WILL clobber each other's work. I usually have a state.tf file in the deployment module that sets up the remote state.
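For reference, such a state.tf might look like this; the bucket, key, region, and table name are hypothetical, and both the bucket and the DynamoDB table must exist before terraform init:

    # state.tf
    terraform {
      backend "s3" {
        bucket         = "my-company-terraform-state"
        key            = "dev/terraform.tfstate"
        region         = "us-east-1"
        dynamodb_table = "terraform-locks"   # needs a string partition key named "LockID"
        encrypt        = true
      }
    }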

How to view differences between config file and real resources

I've created a Kubernetes cluster using kops, with its config file in S3.
The problem is that I've since modified some resources manually (such as EC2 properties).
I would like to know if there is some way to view the changes I've made manually.
Hope you can help me.
Assuming you have the AWS Config service auditing the configuration of your AWS resources, you can view the changes either in the AWS Config console or by using the AWS CLI.
Please refer to Viewing Configuration Details to see the required changes.
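On the CLI side, a sketch of pulling the recorded change history for one instance (the resource ID is hypothetical):

    aws configservice get-resource-config-history \
      --resource-type AWS::EC2::Instance \
      --resource-id i-0123456789abcdef0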
The way I do this is with kops's Terraform output (the --target=terraform flag): https://github.com/kubernetes/kops/blob/master/docs/terraform.md. Then:
Create the cluster via Terraform.
Do something manually.
Run terraform plan. This will show the diff between the current state and the config. Either hit apply to revert the manual changes, or codify the manual changes and re-apply.
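Concretely, that workflow might look like this (the cluster name, state store, and zone are hypothetical):

    # have kops emit Terraform instead of creating resources directly
    kops create cluster \
      --name=my-cluster.example.com \
      --state=s3://my-kops-state-store \
      --zones=us-east-1a \
      --target=terraform --out=.

    terraform init
    terraform apply
    # ...change something by hand in the EC2 console...
    terraform plan   # shows the drift between the live resources and the config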
Try kubediff from weaveworks.
https://github.com/weaveworks/kubediff