Is there a way on Google Kubernetes Engine to prevent a cluster from accidental deletion?
I know that this can be set at the Compute Engine level as described in the relevant docs.
I cannot seem to find anything similar at the cluster level.
There is no cluster-level deletion protection in GKE yet; the feature you describe, preventing deletion of a cluster and all the resources involved with it, still has work ahead. As you can read in [1], it's a discussion that has been going on for quite a long time (almost 4 years), with arguments both for and against. Some of those flags are set on GKE-managed resources, so only upgrades (or deleting the whole cluster) can touch them, but some flags (like "protected") may not work on other resources. Handling this is therefore still up to the user, who needs to be careful when applying YAMLs that may affect the configuration, deployment cycles, and resources of their clusters. GKE does actually prompt twice (even though it seems like once) when deleting a cluster, see [2], but once again that relies on the client.
I trust this information can be helpful for you.
[1] https://github.com/kubernetes/kubernetes/issues/10179
[2] https://cloud.google.com/kubernetes-engine/docs/how-to/deleting-a-cluster
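If you happen to manage the cluster with Terraform rather than the console, you can at least guard against accidental `terraform destroy` with the `prevent_destroy` lifecycle meta-argument. A minimal sketch (the cluster name, location, and node count are placeholders):

```hcl
resource "google_container_cluster" "primary" {
  name               = "my-cluster"   # placeholder name
  location           = "us-central1"  # placeholder location
  initial_node_count = 1

  lifecycle {
    # Terraform refuses to plan any operation that would destroy this resource
    prevent_destroy = true
  }
}
```

Note this only protects deletions that go through Terraform; it does nothing against someone deleting the cluster in the console or via gcloud. Newer versions of the Google provider also expose a `deletion_protection` argument on `google_container_cluster`.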
My customer recently migrated from on-premises to the AWS Cloud. With SYSDBA locked down, I am aware that the Cloud has taken over many of the manual responsibilities -- if not all. When I review the DBA_ADVISOR_RECOMMENDATIONS there are a number of tuning recommendations. I'm questioning whether PGA/SGA parameters should still be maintained -- or not? TIA
The answer I found is that yes, performance tuning is still appropriate, necessary, and complicated. Resource parameters vary between instances, even within the same federation. The DBA advisor recommendations will be where I focus initial effort (alerts, segments, and bottlenecks). I've increased proactive monitoring and begun trimming tablespaces of allocated but unneeded storage.
Infrastructure team members are creating, deleting, and modifying resources in a GCP project using the console. The security team wants to scan the infrastructure and check whether proper security measures are in place.
I am trying to create a Terraform script which will:
1. Take a project ID as input and list all instances in the given project.
2. Loop over the instances and check whether the security controls are in place.
3. If any security control is missing, the Terraform script will modify the resource (VM).
I have to repeat the same steps for all resources available in the project, like subnets, Cloud Storage buckets, firewalls, etc.
As per my initial investigation, to do such a task we would have to import the resources into Terraform using the "terraform import" command, and after that think about loops.
Now it looks like using GCP's APIs is the best fit for this task; Terraform does not seem to be a good choice for this kind of work, and I am not sure whether it is even achievable with Terraform.
Can somebody provide any directions here?
Curious if by "console" you mean the GCP Console (i.e., by hand), because if you are not already using Terraform to create the resources (and do not plan to in the future), then Terraform is not the correct tool for what you're describing. I'd actually argue it increases the complexity.
Mostly because:
The import feature is not intended for this kind of use case, and we still find regular issues with it. It may work one time for a few resources, but not for entire environments, and not without Terraform becoming the future source of truth. Projects such as terraforming do their best but still face wild-west issues in complex environments. Not all resources even support importing.
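To give a feel for how manual importing is, here is a hedged sketch for a single VM (the project, zone, instance name, and image are all placeholders). Terraform 1.5+ supports declarative import blocks, but you still have to write or generate the matching resource block yourself:

```hcl
# Terraform 1.5+ declarative import (placeholder IDs)
import {
  to = google_compute_instance.vm
  id = "projects/my-project/zones/us-central1-a/instances/my-vm"
}

# The resource block must be authored to match reality,
# or every plan will show spurious diffs.
resource "google_compute_instance" "vm" {
  name         = "my-vm"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"   # placeholder image
    }
  }

  network_interface {
    network = "default"
  }
}
```

Multiply that by every VM, subnet, bucket, and firewall in the project and the scale of the problem becomes clear.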
Terraform will not tell you anything about the VMs that you wouldn't know from the GCP CLI already. If you need more information to make an assessment about the controls, then you will need to use another tool or write some complicated provisioners. Provisioners at best would end up being a wrapper around other tooling you could probably use directly.
Honestly, I'm worried your team is trying to avoid the pain of converting older practices to IaC. It's uncomfortable and challenging, but yields better fruit in the long run than the path you're describing.
I digress: if you have infra created via Terraform, then I'd invest more time in some other practices that can accomplish the same results. Some options are: 1) enforce best practices via parent modules that security has "blessed", 2) implement some CI on your Terraform, 3) AWS has Config and Systems Manager; I'm not sure if GCP has an equivalent, but I would look around. It's also worth evaluating different technologies for different layers of abstraction. What checks your OS might be different from what checks your security groups, and that's OK. Knowing is half the battle and might make for a saner first version than automatic remediation.
With or without Terraform, there is an ecosystem of both products and open-source projects that can help with compliance or control enforcement. Take a look at tools like InSpec, Sentinel, or SaltStack for inspiration.
I have created a Terraform stack for all the required resources that we utilise to build out a virtual data center within AWS: VPC, subnets, security groups, etc.
It all works beautifully :). I am having a constant argument with network engineers who want to have a completely separate state for networking. As a result, we have to manage multiple state files, and it requires 10 to 15 terraform plan/apply commands to bring up the data center. Not only do we have to run the commands multiple times, we also cannot reference the module output variables when creating EC2 instances etc., so "magic" variables are now appearing within variable files. I want to put the scripts that create the EC2 instances, ELBs, etc. within the same directory as the "data center" configuration, so that we manage one state file (encrypted in S3 with a DynamoDB lock) and our Git repo has a one-to-one relationship with our infrastructure. There is also the added benefit that a single terraform plan/apply will build the whole data center in a single command.
The question really is: is it a good idea to manage data center resources (VPC, subnets, security groups) and compute resources in a single state file? Are there any issues that I may come across? Does anybody have experience managing an AWS environment with Terraform this way?
Regards,
David
To begin with, Terraform lets you access output variables from other state files (via the terraform_remote_state data source), so you don't have to use magic variables. The rest is just a matter of style. Do you frequently bring the whole data center infrastructure up? If so, you may consider doing it in one project. If, on the other hand, you only change some things, you may want to make it more modular, relying on output from other projects. Keeping them separate makes planning faster and avoids a very costly terraform destroy mistake.
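As a sketch of that cross-state reference (the bucket, key, output name, and AMI are placeholders): the networking project exposes an output, and the compute project reads it through the terraform_remote_state data source instead of a hard-coded "magic" variable.

```hcl
# In the compute project: read the networking project's state
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "my-tf-state"               # placeholder bucket
    key    = "network/terraform.tfstate" # placeholder key
    region = "eu-west-1"
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"
  # consume the networking state's output instead of a magic variable
  subnet_id     = data.terraform_remote_state.network.outputs.subnet_id
}
```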
In recent years there has been a lot of discussion about layouts for Terraform projects.
Times have also changed with Terraform 1.0, so I think this question deserves some love.
As a result of this we have to manage multiple state files and it requires 10 to 15 terraform plan/apply commands to bring up the data center.
Using modules, it is possible to maintain separate concerns without executing commands for each one.
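As a sketch (the module paths and variable names are hypothetical), a single root configuration can compose the layers so that one plan/apply covers them all while outputs flow between modules:

```hcl
module "network" {
  source     = "./modules/network" # hypothetical local module
  cidr_block = "10.0.0.0/16"
}

module "compute" {
  source    = "./modules/compute"      # hypothetical local module
  subnet_id = module.network.subnet_id # output declared by the network module
}
```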
Not only do we have to run the commands multiple times, we cannot reference the module output variables
Terraform supports output values. Leveraging Terraform Cloud or Terraform remote state, it is possible to introduce dependencies between states.
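The producing state simply declares outputs; for example (names are placeholders), the networking state could export its subnet ID for other states to consume:

```hcl
# In the networking project
output "subnet_id" {
  description = "ID of the main subnet, consumed by other states"
  value       = aws_subnet.main.id
}
```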
A prerequisite to venturing into multiple Terraform states, in my opinion, is using state locking (OP refers to the AWS DynamoDB lock mechanism, but other storage backends support this too).
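The S3-plus-DynamoDB setup OP mentions is configured in the backend block, roughly like this (bucket and table names are placeholders):

```hcl
terraform {
  backend "s3" {
    bucket         = "my-tf-state"                 # placeholder bucket
    key            = "datacenter/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true                          # encrypt state at rest
    dynamodb_table = "terraform-locks"             # placeholder table; enables state locking
  }
}
```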
Generally, having everything in a single state is not the best solution and may be considered an anti-pattern.
Having multiple states is referred to as state isolation.
Why would you want to isolate states?
Reasons are multiple and the benefits are clear:
bugs blast radius. If you introduce a bug somewhere in the code and you apply all the code for the entire datacenter, in the worst possible scenario everything will be affected. If, on the other hand, networking were separated, the worst case is that the bug only affects networking (which in a DC would still be a very severe issue, but better than everything).
state (write) lock. If you use state locking, Terraform will lock the state for any operation that may write to it. This means that with a single state, multiple teams working on separate areas cannot write to the state at the same time; updating the networking blocks instance provisioning, for example.
secrets. Secrets are written in plain text to the state. A single state means all teams' secrets end up in the same state (which you must encrypt; OP is correctly doing this). As with anything in security, having all your eggs in one basket is a risk.
A side benefit of isolating state is that file layout may help with code ownership (across teams or project).
How to isolate state?
There are mainly three ways:
via file layout (with or without modules)
via workspaces (not to be confused with Terraform Cloud Workspaces)
a mix of the above ways (here be dragons!)
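With workspaces, the same configuration is planned against a separate state per workspace; inside the code, the current workspace is available as terraform.workspace. A sketch (the AMI and instance sizes are illustrative):

```hcl
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  # larger instances only in the "prod" workspace
  instance_type = terraform.workspace == "prod" ? "m5.large" : "t3.micro"
}
```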
There is no wide consensus on how to do it but for further reading:
https://blog.gruntwork.io/how-to-manage-terraform-state-28f5697e68fa (a 2016 article; I think this is sort of the root of the discussion)
https://terragrunt.gruntwork.io/docs/features/keep-your-terraform-code-dry/
https://www.terraform-best-practices.com/code-structure
A tool worth looking at may be Terragrunt from Gruntwork.
I'm trying to understand the real-world usefulness of AWS CloudFormation. It seems to be a way of describing AWS infrastructure as a JSON file, but even then I'm struggling to understand what benefits that serves (besides potentially "recording" your infrastructure changes in VCS).
What purpose do CloudFormation's JSON files serve? What benefits do they have over using the AWS web console and making changes manually?
CloudFormation gives you the following benefits:
You get to version control your infrastructure. You have a full record of all changes made, and you can easily go back if something goes wrong. This alone makes it worth using.
You have a full and complete documentation of your infrastructure. There is no need to remember who did what on the console when, and exactly how things fit together - it is all described right there in the stack templates.
In case of disaster you can recreate your entire infrastructure with a single command, again without having to remember just exactly how things were set up.
You can easily test changes to your infrastructure by deploying separate stacks, without touching production. Instead of having permanent test and staging environments you can create them automatically whenever you need to.
Developers can work on their own, custom stacks while implementing changes, completely isolated from changes made by others, and from production.
It really is very good, and it gives you both more control, and more freedom to experiment.
First, you seem to underestimate the power of tracking changes in your infrastructure provisioning and configuration in VCS.
Provisioning and editing your infrastructure configuration via a web interface is usually a very lengthy process. Having the configuration in a file, versus having it spread across multiple web dashboards, gives you much-needed perspective and an overall view of what you use and what its configuration is. Also, when you repeatedly configure similar stacks, you can reuse the code and avoid errors or mistakes.
It's also important to note that AWS CloudFormation resource support frequently lags behind the development of services available in the AWS Console. CloudFormation also requires gathering some know-how and time getting used to it, but in the end the benefits prevail.
Google Compute Engine lets you get a group of instances that are semantically local in the sense that only they can talk to each other and all external access has to go through a firewall etc. If I want to run Map-Reduce or other kinds of cluster jobs that are going to induce high network traffic, then I also want machines that are physically local (say, on the same rack). Looking at the APIs and initial documentation, I don't see any way to request that; does anyone know otherwise?
There is no support in GCE right now for specifying rack locality. However, we built the system to work well in the face of large numbers of instances talking to each other in a fully connected way, as long as they are in the same zone.
This is one of the things that allowed MapR to approach the record for a Hadoop TeraSort. You can see that in action in the video of Craig McLuckie's talk from Google I/O:
https://developers.google.com/events/io/sessions/gooio2012/302/
The best way to find out is to test your application and see how it performs.