Duplicating GCP Network and VPN Configuration - google-cloud-platform

I have two GCP projects communicating with each other over a Classic VPN. I'd like to duplicate this entire configuration to another GCP account with two projects. So in addition to the tunnels and gateways, I have one network in each project to duplicate, some firewall rules, and a custom routing rule on one project.
I've found how I can largely dump these using various
gcloud compute [networks | vpn-tunnels | target-vpn-gateways] describe
commands, but looking at the create commands they don't seem set up to be piped to, nor to use this output data as a file, not to mention there are some items that won't be applicable in the new projects.
I'm not just trying to save time, I'm trying to make sure I don't miss anything and I also want a hard copy of sorts, of my current configuration.
Is there any way to do this? thank you!

As clarified in other similar cases - like here and here - it's not possible to clone or duplicate entire projects in Google Cloud Platform. As explained in these other cases, you can use Terraformer to generate Terraform files from existing infrastructure (reverse Terraform) and then recreate the files in your new instance as explained here.
To summarize, you can try this CLI as a possible alternative to copy part of your structure, but as emphasized in this answer here, there is no automatic way or magic tool that will copy everything, so your VMs' configuration, your app contents and your data content won't be duplicated.
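The gap the question describes (describe output that the create commands can't consume, plus fields that must not be carried into a new project) can be closed with a small converter. The script below is an added example and is not code from the question, the answer, or Google; it takes the JSON that `gcloud compute firewall-rules describe --format=json` (one object) or `list --format=json` (an array) emits and turns each rule into an equivalent `gcloud compute firewall-rules create` command for the target project. The sample dump, the project names `src-proj`/`dst-proj` and the network name `prod-net` are constructed for the example. It also reports which server-assigned fields it dropped and which fields it did not know how to map, so nothing disappears silently from the "hard copy".

```python
"""Convert the JSON from `gcloud compute firewall-rules describe|list --format=json`
into `gcloud compute firewall-rules create` commands for another project."""
import json
import shlex

# Server-assigned in the source project; replaying them into the new project is wrong.
NON_PORTABLE = ("id", "selfLink", "creationTimestamp", "kind", "network")
# Fields this converter knows how to express as create flags.
MAPPED = {"name", "direction", "priority", "allowed", "denied", "sourceRanges",
          "destinationRanges", "targetTags", "sourceTags", "description"}


def rule_spec(entries):
    """[{'IPProtocol': 'tcp', 'ports': ['22', '80-443']}, {'IPProtocol': 'icmp'}]
    -> 'tcp:22,tcp:80-443,icmp', the syntax the --rules flag expects."""
    parts = []
    for entry in entries:
        proto = entry["IPProtocol"]
        ports = entry.get("ports") or []
        if ports:
            parts.extend(f"{proto}:{p}" for p in ports)
        else:
            parts.append(proto)
    return ",".join(parts)


def to_create_command(rule, target_project, target_network):
    args = [
        "gcloud", "compute", "firewall-rules", "create", rule["name"],
        f"--project={target_project}",
        f"--network={target_network}",  # replaces the source project's network selfLink
        f"--direction={rule.get('direction', 'INGRESS')}",
        f"--priority={rule.get('priority', 1000)}",
    ]
    if "allowed" in rule:
        args += ["--action=ALLOW", f"--rules={rule_spec(rule['allowed'])}"]
    elif "denied" in rule:
        args += ["--action=DENY", f"--rules={rule_spec(rule['denied'])}"]
    for field, flag in (("sourceRanges", "--source-ranges"),
                        ("destinationRanges", "--destination-ranges"),
                        ("targetTags", "--target-tags"),
                        ("sourceTags", "--source-tags")):
        if rule.get(field):
            args.append(f"{flag}={','.join(rule[field])}")
    if rule.get("description"):
        args.append(f"--description={rule['description']}")
    # shlex.join quotes anything that needs it, so the line is safe to paste into a shell
    return shlex.join(args)


def _rules(json_text):
    data = json.loads(json_text)
    return data if isinstance(data, list) else [data]


def convert_dump(json_text, target_project, target_network):
    return [to_create_command(r, target_project, target_network) for r in _rules(json_text)]


def audit_dump(json_text):
    """Per rule: (name, dropped server-assigned fields, fields nobody mapped)."""
    report = []
    for r in _rules(json_text):
        dropped = sorted(f for f in NON_PORTABLE if f in r)
        unmapped = sorted(set(r) - MAPPED - set(NON_PORTABLE))
        report.append((r["name"], dropped, unmapped))
    return report


# Constructed stand-in for `gcloud compute firewall-rules list --format=json` output.
SAMPLE_DUMP = json.dumps([
    {"name": "allow-ssh-from-office", "kind": "compute#firewall", "id": "1",
     "network": "https://www.googleapis.com/compute/v1/projects/src-proj/global/networks/prod-net",
     "selfLink": "https://www.googleapis.com/compute/v1/projects/src-proj/global/firewalls/allow-ssh-from-office",
     "creationTimestamp": "2020-01-01T00:00:00.000-08:00",
     "direction": "INGRESS", "priority": 1000,
     "sourceRanges": ["203.0.113.0/24"], "targetTags": ["bastion"],
     "allowed": [{"IPProtocol": "tcp", "ports": ["22"]}]},
    {"name": "deny-egress-to-partner-net", "kind": "compute#firewall", "id": "2",
     "network": "https://www.googleapis.com/compute/v1/projects/src-proj/global/networks/prod-net",
     "selfLink": "https://www.googleapis.com/compute/v1/projects/src-proj/global/firewalls/deny-egress-to-partner-net",
     "creationTimestamp": "2020-01-01T00:00:00.000-08:00",
     "direction": "EGRESS", "priority": 900,
     "destinationRanges": ["198.51.100.0/24"],
     "denied": [{"IPProtocol": "all"}],
     "logConfig": {"enable": False}},
])

if __name__ == "__main__":
    print("\n".join(convert_dump(SAMPLE_DUMP, "dst-proj", "prod-net")))
    for name, dropped, unmapped in audit_dump(SAMPLE_DUMP):
        print(f"# {name}: dropped {dropped}, not mapped {unmapped}")
```

Two things the audit surfaces are worth keeping in the hard copy. The `logConfig` on the second rule is reported as unmapped, which is exactly the kind of setting that gets lost in a manual re-creation. And for the VPN side of the setup, `gcloud compute vpn-tunnels describe` never returns the pre-shared key (only a `sharedSecretHash`), so a describe-based dump is not a complete record of the tunnels: the secret has to be re-supplied from wherever it is kept when the tunnels are created in the new projects.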

Related

How to manage multiple Environments within one project (GCP/AWS)

I am building a lab utility to deploy my team's development environments (testing / stress etc).
At present, the pipeline is as follows:
Trigger pipeline via HTTP request; args contain the distribution, web server and web server version using ARGs that are passed to multi-stage Dockerfiles.
Dockerx builds the container (if it doesn't exist in ECR)
Pipeline job pushes that container to ECR (if it doesn't already exist).
Terraform deploys the container using Fargate, sets up VPCs and an ALB to handle ingress externally.
FQDN / TLS is then provisioned on ...com
Previously when I've made tools like this that create environments, environments were managed and deleted solely at project level, given each environment had its own project, which is best practice for isolation and billing tracking purposes. However, given the organisation security constraints of my company, I am limited to only 1 project wherein I can create all the resources.
This means I have to find a way to manage/deploy 30 (the max) environments in one project without it being a bit of a clustered duck.
More or less, I am looking for a way that allows me to keep track, and tear down environments (autonomously) and their associated resources relevant to a certain identifier, most likely these environments can be separated by resource tags/groups.
It appears CDKTF/Pulumi look like a neat way of achieving some form of "high-level" structure, but I am struggling to find ways to use them to do what I want. If anyone can recommend an approach, it'd be appreciated.
I have not tried anything yet, mainly because this is something that requires planning before I start work on it (don't like reaching deadends ha).

Create Infrastructure Documentation from terraform + gitlab-ci system

Our infra pipeline is set up using terraform + gitlab-ci. I have been given the task of providing documentation on the setup, with what's implemented and what's left. I am new to the infra world and finding it hard to come up with a template to start the documentation.
So far I thought of having a table with resources needed with details on dependencies, source of the module, additional notes, etc
If you have a template, can you share OR any other suggestions?
For starters, you could try one or both of the below approaches:
a) create a graph of the Terraform resources using its graph command
b) group and then list all of your resources for a specific tag using AWS Resource Groups, specifically its Create Resource Group functionality
The way I do documentation is to keep it as simple as possible: explain how it works, how to use it, and also provide instructions on how it was set up from scratch, for reference and as an insurance policy, so that if it's destroyed, someone other than the person who set it all up could recreate it.
Since this is just a pipeline there is probably not much to diagram. The structure of documentation I would provide would be something like this and I would add this either as part of the README.md, in Confluence or however your team does documentation.
Summary
1-2 Sentences about the work and why it was created.
How the Repo is Structured
An explanation on how the repo is structured and decisions behind why it was structured the way it was.
How To Use
Provide steps on how a user can use the pipeline
How It Was Created
Provide steps on how it was setup so anybody can manage it and work on it going forward.

How to swap between projects with terraform

Hey, I started migrating infrastructure to terraform and came across a few questions that are hard for me to answer.
How to easily swap between different projects, assuming I have the same resources in a few projects separated by environments. Do I store it all in one tfstate, or do I have multiple ones? Is it stored in one bucket, a few buckets, or somewhere else entirely?
Can you create a new project with some random number at the end and automatically deploy resources to it?
If you can create a new project and deploy to it, how do you enable the APIs for terraform to work, like iam.googleapis.com etc.?
Here are some pieces of an answer to your questions.
If you use only one terraform, you use only 1 tfstate. By the way, when you would like to update a project, you have to take into account all the dependencies in all projects (and you risk breaking other projects), the files are bigger and harder to maintain... I recommend you have 1 terraform per project, and 1 TF state per project. If you use common patterns (IP naming, VM settings, ...) you can create modules to import into the terraform of each project.
2) (and 3) Yes, you can create and deploy them. But I don't recommend it, for separation of concerns. Use one terraform to manage your resources' organisation (projects, folders, ...) and another one to manage your infrastructure.
A good way to think is: Build once, maintain a lot! So the build phase is not the hardest, but having something maintainable, easy to read, concise is hard!!

GCP Deployment Manager - What Dev Ops Tool To Use In Conjunction?

I'm presently looking into GCP's Deployment Manager to deploy new projects, VMs and Cloud Storage buckets.
We need a web front end that authenticated users can connect to in order to deploy the required infrastructure, though I'm not sure what Dev Ops tools are recommended to work with this system. We have an instance of Jenkins and Octopus Deploy, though I see on Google's Configuration Management page (https://cloud.google.com/solutions/configuration-management) they suggest other tools like Ansible, Chef, Puppet and Saltstack.
I'm supposing that through one of these I can update something simple like a name variable in the config.yaml file and deploy a project.
Could I also ensure a chosen name for a project, VM or Cloud Storage bucket fits with a specific naming convention with one of these systems?
Which system do others use and why?
I use Deployment Manager, as all 3rd party tools are reliant upon the presence of GCP APIs, as well as trusting that those APIs are in line with the actual functionality of the underlying GCP tech.
GCP is decidedly behind the curve on API development, which means that even if you wanted to use TF or whatever, at some point you're going to be stuck inside the SDK, anyway. So that's why I went with Deployment Manager, as much as I wanted to have my whole infra/app deployment use other tools that I was more comfortable with.
To specifically answer your question about validating naming schema, what you would probably want to do is write a wrapper script that uses the gcloud deployment-manager subcommand. Do your validation in the wrapper script, then run the gcloud deployment-manager stuff.
Word of warning about Deployment Manager: it makes troubleshooting very difficult. Very often it will obscure the error that can help you actually establish the root cause of a problem. I can't tell you how many times somebody in my office has shouted "UGGH! Shut UP with your Error 400!" I hope that Google takes note from my pointed survey feedback and refactors DM to pass the original error through.
Anyway, hope this helps. GCP has come a long way, but they've still got work to do.
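The answer's suggestion of a wrapper that validates names before handing off to `gcloud deployment-manager` looks like the following. This is an added example, not the answerer's script or anything from Google; the GCP name rules it encodes (Compute Engine resource names, Cloud Storage bucket names) are the published ones, while the company convention `<env>-<team>-<purpose>` with `env` in `dev|test|prod` is a hypothetical stand-in for whatever scheme you enforce. The sample config dict is constructed to look like the `resources:` section of a Deployment Manager `config.yaml`; `load_config` assumes PyYAML is installed and imports it lazily so the validation itself has no dependency.

```python
"""Validate resource names in a Deployment Manager config before deploying it."""
import re
import subprocess

# Published GCP naming limits for the two resource types the question mentions.
GCP_NAME_RULES = {
    "compute.v1.instance": re.compile(r"^[a-z]([-a-z0-9]{0,61}[a-z0-9])?$"),
    "storage.v1.bucket": re.compile(r"^[a-z0-9][a-z0-9._-]{1,61}[a-z0-9]$"),
}
# Hypothetical company convention: <env>-<team>-<purpose>.
CONVENTION = re.compile(r"^(dev|test|prod)-[a-z0-9]+-[a-z0-9-]+$")


def validate_name(rtype, name):
    """Return a list of human-readable problems; empty means the name is acceptable."""
    problems = []
    rule = GCP_NAME_RULES.get(rtype)
    if rule and not rule.match(name):
        problems.append(f"{name!r} violates GCP naming rules for {rtype}")
    if rtype == "storage.v1.bucket" and (name.startswith("goog") or "google" in name):
        problems.append(f"{name!r}: bucket names may not start with 'goog' or contain 'google'")
    if not CONVENTION.match(name):
        problems.append(f"{name!r} does not follow <env>-<team>-<purpose>")
    return problems


def validate_config(config):
    problems = []
    for res in config.get("resources", []):
        problems += validate_name(res.get("type", ""), res.get("name", ""))
    return problems


def load_config(path):
    import yaml  # PyYAML; imported here so validation works without it installed
    with open(path) as f:
        return yaml.safe_load(f)


def deploy(deployment, config_path, project):
    """Run the real deployment only when every name passes."""
    problems = validate_config(load_config(config_path))
    if problems:
        raise SystemExit("refusing to deploy:\n  " + "\n  ".join(problems))
    subprocess.run(
        ["gcloud", "deployment-manager", "deployments", "create", deployment,
         f"--config={config_path}", f"--project={project}"],
        check=True,
    )


# Constructed example of a parsed config.yaml: one good name, two bad ones.
SAMPLE_CONFIG = {
    "resources": [
        {"name": "prod-web-frontend", "type": "compute.v1.instance",
         "properties": {"zone": "us-central1-a", "machineType": "zones/us-central1-a/machineTypes/n1-standard-1"}},
        {"name": "Prod_Assets", "type": "storage.v1.bucket"},          # uppercase, underscore
        {"name": "dev-data-google-backup", "type": "storage.v1.bucket"},  # contains 'google'
    ]
}

if __name__ == "__main__":
    for p in validate_config(SAMPLE_CONFIG):
        print(p)
```

Keeping `validate_config` separate from `deploy` means the check can also run in CI on every change to `config.yaml`, and failing early with the specific name that broke the rule avoids one more of the opaque "Error 400" responses the answer complains about.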

Automating copy of Google Cloud Compute instances between projects

I need to move more than 50 compute instances from a Google Cloud project to another one, and I was wondering if there's some tool that can take care of this.
Ideally, the needed steps could be the following (I'm omitting regions and zones for the sake of simplicity):
Get all instances in source project
For each instance get machine sizing and the list of attached disks
For each disk create a disk-image
Create a new instance, of type machine sizing, in target project using the first disk-image as source
Attach remaining disk-images to new instance (in the same order they were created)
I've been checking on both Terraform and Ansible, but I have the feeling that none of them supports creating disk images, meaning that I could only use them for the last 2 steps.
I'd like to avoid writing a shell script because it doesn't seem a robust option, but I can't find tools that can help me do the whole process either.
Just as a side note, I'm doing this because I need to change the subnet for all my machines, and it seems like you can't do it on already created machines but you need to clone them to change the network.
There is no tool by GCP to migrate the instances from one project to another one.
I was able to find, however, an Ansible module to create Images.
In Ansible, you can specify the "source_disk" when creating a "gcp_compute_image", as mentioned here.
Frederic
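The five steps listed in the question map directly onto gcloud commands, and the part that needs care is ordering: the boot disk is the only one that can seed `instances create`, and the data disks must come back in their original attachment order. The function below is an added example, not the questioner's or answerer's code and not a Google tool; it takes one record as `gcloud compute instances describe --format=json` returns it and emits the ordered command plan for cloning that VM into another project on a different subnet. The sample instance, the project names `src-proj`/`dst-proj`, the subnet `new-subnet` and the `<disk>-img` image naming are constructed for the example.

```python
"""Build the ordered gcloud steps that clone one VM into another project:
image every disk, create the VM from the boot image, then re-create and attach
the data disks in their original order."""


def _leaf(url):
    # describe output names zones, machine types and disks by full selfLink URL;
    # the create flags only want the trailing resource name
    return url.rsplit("/", 1)[-1]


def migration_plan(instance, src_project, dst_project, dst_subnet, dst_zone=None):
    name = instance["name"]
    zone = _leaf(instance["zone"])
    dst_zone = dst_zone or zone
    machine_type = _leaf(instance["machineType"])

    # keep the attachment order the source VM had, whatever order the JSON lists them in
    disks = sorted(instance["disks"], key=lambda d: d["index"])
    boot = [d for d in disks if d.get("boot")]
    if len(boot) != 1:
        raise ValueError(f"{name}: expected exactly one boot disk, found {len(boot)}")

    image_for = {}
    cmds = []
    for d in disks:
        disk = _leaf(d["source"])
        image_for[disk] = f"{disk}-img"  # constructed image naming scheme
        cmds.append(
            f"gcloud compute images create {image_for[disk]} --project={src_project} "
            f"--source-disk={disk} --source-disk-zone={zone} --force"  # --force: disk is still attached
        )

    boot_disk = _leaf(boot[0]["source"])
    cmds.append(
        f"gcloud compute instances create {name} --project={dst_project} --zone={dst_zone} "
        f"--machine-type={machine_type} --subnet={dst_subnet} "
        f"--image={image_for[boot_disk]} --image-project={src_project}"
    )

    for d in disks:
        if d.get("boot"):
            continue
        disk = _leaf(d["source"])
        cmds.append(
            f"gcloud compute disks create {disk} --project={dst_project} --zone={dst_zone} "
            f"--image={image_for[disk]} --image-project={src_project}"
        )
        cmds.append(
            f"gcloud compute instances attach-disk {name} --project={dst_project} --zone={dst_zone} "
            f"--disk={disk} --device-name={d['deviceName']}"
        )
    return cmds


# Constructed stand-in for one `gcloud compute instances describe --format=json` record
# (disks deliberately listed out of index order).
_P = "https://www.googleapis.com/compute/v1/projects/src-proj"
SAMPLE_INSTANCE = {
    "name": "web-1",
    "zone": f"{_P}/zones/europe-west1-b",
    "machineType": f"{_P}/zones/europe-west1-b/machineTypes/n1-standard-2",
    "disks": [
        {"index": 1, "boot": False, "deviceName": "data", "source": f"{_P}/zones/europe-west1-b/disks/web-1-data"},
        {"index": 0, "boot": True, "deviceName": "persistent-disk-0", "source": f"{_P}/zones/europe-west1-b/disks/web-1"},
    ],
}

if __name__ == "__main__":
    print("\n".join(migration_plan(SAMPLE_INSTANCE, "src-proj", "dst-proj", "new-subnet")))
```

Two caveats before running a plan like this against 50 machines. The `--image-project` step only works across projects once whoever runs the target-side commands has been granted `roles/compute.imageUser` on the source project (or on the individual images), and those images contain everything that was on the disks, credentials included, so their IAM deserves the same care as the VMs. And the new instance is created with the target project's defaults: service account, scopes, tags, labels and metadata from the source VM are not carried over by the commands above and would need mapping in the same way as the disks.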