I'm trying to come up with a way in GCP to automatically deploy defined IAM roles, policies and policy bindings to selected GCP projects or all GCP projects.
I am aware that GCP organizations exist and that they can be used to define IAM resources in one place so that child projects inherit them. However, organizations are not mandatory in GCP, and some customers will be using the old structure, where projects exist side by side without inheritance, and will not want to migrate to an organization.
One solution would be to create scripts which iterate over projects and create everything. However, a GCP-native solution would be preferable. Is there a GCP-native way of deploying defined IAM resources like this - and possibly other project-level configurations - to specific GCP projects or all projects, one which works regardless of whether the customer uses organizations and without iterating over projects?
I'm trying to come up with a way in GCP to automatically deploy
defined IAM roles, policies and policy bindings to selected GCP
projects or all GCP projects.
Deployment tools use concise descriptions of resources called configuration files. These tools manage resource state, meaning you declare what you want and they make it so. They are not dynamic: you do not say "sometimes do X and sometimes do Y". You declare the desired state, and if the actual state differs, the tool changes it to match.
Deployment tools are IaC - Infrastructure as Code. The configuration files are the blueprint for your goal of "desired state". You write the configuration files and the tools know how to build the resources that match the desired state.
If your goal is dynamic configuration based upon inputs, conditionals, and/or external factors, IaC-based tools will fail to meet your goal.
For IaC-based tools, you have two well-supported options:
Google Deployment Manager. This is an official Google product. It is vendor-specific.
Terraform Google Provider. Terraform is a HashiCorp product. The Google Provider is developed by Google.
I recommend choosing Terraform and the Google Provider. Terraform is cross-platform, with broad support across the industry. Terraform is very easy to use, and there are numerous training resources, example configurations, Internet guides, getting-started articles, and YouTube videos. I have written a few articles on Terraform with Google Cloud.
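As an illustration of the declarative model described above, here is a minimal Terraform sketch using the Google Provider. The project ID, role, and member are hypothetical placeholders, not values from your environment; repeated applies simply converge the project to this declared state:

    # Hypothetical project and principal; Terraform makes the real
    # binding match this declaration on every apply.
    resource "google_project_iam_member" "viewer" {
      project = "my-project-id"            # assumption: your project ID
      role    = "roles/viewer"
      member  = "user:jane@example.com"    # assumption: the principal
    }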
In your question, you mention writing scripts. That is possible, but I do not recommend it. For one-off configurations, using the Google Cloud CLI in a script is workable and sometimes necessary. Still, the benefits of a deployment language, once mastered, are tremendous.
without iterating over projects?
Unless you implement organizations, Google Cloud projects are separate, independent resources. Deployment tools are project-specific, meaning that if you want to manage resources in more than one project, you must declare that in the deployment configuration. They do not iterate over projects; you declare the projects.
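For example, here is a hedged Terraform sketch of that declaration style, assuming a recent Terraform version and a hypothetical list of project IDs; every target project is written into the configuration rather than discovered at runtime:

    # Assumption: these project IDs exist and you can administer them.
    variable "project_ids" {
      type    = list(string)
      default = ["customer-project-a", "customer-project-b"]
    }

    # One binding per declared project; adding a project means editing
    # the list above, not looping over every project in the account.
    resource "google_project_iam_member" "auditors" {
      for_each = toset(var.project_ids)
      project  = each.value
      role     = "roles/viewer"
      member   = "group:auditors@example.com"  # hypothetical group
    }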
Related
I ran into some problems on Google Cloud when managing paths like /{organization}/{project-id} with some tools (mainly BigQuery with automated transformations).
Because of that, one approach is to create a project without an organization (with a billing account connected), but I am not sure what future problems this might cause.
The only disadvantage I see is that we wouldn't be able to use the GCP organization structure to manage IAM permissions.
If you have only one project, having an organisation or not changes very little. Some features simply aren't available, such as organization policies (but of course, you don't have an organization!), hierarchical firewall rules, or Security Command Center.
You will see a real difference if you manage several projects: IAM is one use case, and so are aggregated Cloud Logging sinks and asset inventory.
However, it's safer, and cleaner, to solve the organization/project path issue you described previously than to create a standalone project.
I'm trying to incorporate Deployment Manager into my project which already has various instances of services up and running. I don't want to write deployments for all of them, and was hoping GCP offered a tool to generate them automatically, since it has exact information about which infrastructure components are up and how they are configured. Does such a tool exist?
I am trying to create a prototype where I can share resources among projects to run a job within the Google Cloud Platform.
Motivation: Let's say there are two projects: Project A and Project B.
I want to use the dataproc cluster created in Project A to run a job in Project B.
The projects are within the same organisation in the GCP platform.
How do I do that?
There are a few ways to manage resources across projects. Probably the most straightforward way to do this is to:
Create a service account with appropriate permissions across your project(s) (a sketch of this step follows the list below).
Set up an Airflow connection with the service account you have created.
You can create workflows that use that connection and then specify the project when you create a Cloud Dataproc cluster.
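As a sketch of the first step, this is roughly what the cross-project service account could look like in Terraform; the project IDs and the roles/dataproc.editor role are assumptions for illustration, not values from the question:

    # The service account lives in Project A (hypothetical IDs).
    resource "google_service_account" "dataproc_runner" {
      project      = "project-a"
      account_id   = "dataproc-runner"
      display_name = "Runs Dataproc jobs across projects"
    }

    # Grant it Dataproc permissions in both projects.
    resource "google_project_iam_member" "dataproc_a" {
      project = "project-a"
      role    = "roles/dataproc.editor"
      member  = "serviceAccount:${google_service_account.dataproc_runner.email}"
    }

    resource "google_project_iam_member" "dataproc_b" {
      project = "project-b"
      role    = "roles/dataproc.editor"
      member  = "serviceAccount:${google_service_account.dataproc_runner.email}"
    }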
Alternate ways you could do this that come to mind:
Use something like the BashOperator or PythonOperator to execute Cloud SDK commands.
Use an HTTP operator to ping the REST endpoints of the services you want to use
Having said that, the first approach, using the operators with that connection, is likely the easiest by far and would be the recommended way to do what you want.
With respect to Dataproc, when you create a job, it will only bind to clusters within a specific project. It's not possible to create jobs in one project against clusters in another. This is because things like logging, auditing, and other job-related semantics are messy when clusters live in another project.
I have a unique opportunity to suggest a workflow for IaC for part of a big company which has a number of technical agencies working for it.
I am trying to work out a solution that would be enterprise-level safe but have as much self-service as possible.
In scope:
Code management [repository per project/environment/agency/company]
Environment handling [build promotion/statefile per env, one statefile, terraform envs etc]
Governance model [Terraform Enterprise/PR system/custom model]
Testing and acceptance [manual acceptance/automated tests (how to test tf files?)/infra test environment]
I have read many articles, but most of them describe a situation of a development team in-house, which is much easier in terms of security and governance.
I would love to learn what the optimal solution is for IaC management and governance in the enterprise. Is Terraform Enterprise a valid option?
I recommend using Terraform modules as Enterprise "libraries" for (infrastructure) code.
Then you can:
version, test, and accept your libraries at the Enterprise level
control what variables developers or clients can set (e.g. provide a module for AWS S3 buckets with a configurable bucket name but restricted ACL options - see the sketch after this list)
provide abstractions over complex, repeated configurations to save time, prevent errors and encourage self-service (e.g. linking AWS API Gateway with AWS Lambda and Dynamodb)
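Here is a minimal sketch of such a library module, with a hypothetical file layout and names, assuming a Terraform version that supports variable validation and an AWS provider version that still accepts the inline acl argument; the bucket name is freely configurable while the ACL is checked against a vetted subset:

    # modules/s3_bucket/variables.tf (hypothetical module layout)
    variable "bucket_name" {
      type = string
    }

    variable "acl" {
      type    = string
      default = "private"
      validation {
        # Clients may only choose from this vetted subset of ACLs.
        condition     = contains(["private", "log-delivery-write"], var.acl)
        error_message = "Only private or log-delivery-write ACLs are allowed."
      }
    }

    # modules/s3_bucket/main.tf
    resource "aws_s3_bucket" "this" {
      bucket = var.bucket_name
      acl    = var.acl
    }

A consumer then calls the module with just a bucket name and cannot set an unvetted ACL.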
For governance, it helps to have controlled cloud provider accounts or environments where every resource is deployed from scratch via Terraform (in addition to sandboxes where users can experiment manually).
For example, you could:
deploy account-level settings from Terraform (e.g. AWS password policy)
tag all Enterprise module resources automatically (see the sketch after this list) with
the person who last deployed changes (e.g. AWS caller ID)
the environment they used (with Terraform interpolation: "${terraform.workspace}")
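For instance, here is a hedged sketch of such automatic tagging inside an Enterprise module; the resource and tag names are illustrative only:

    # Who is deploying (AWS caller identity) and where (the workspace).
    data "aws_caller_identity" "current" {}

    locals {
      enterprise_tags = {
        DeployedBy  = data.aws_caller_identity.current.arn
        Environment = terraform.workspace
      }
    }

    resource "aws_s3_bucket" "example" {
      bucket = "example-enterprise-bucket"  # hypothetical name
      tags   = local.enterprise_tags
    }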
So, there are lots of ways to use Terraform modules to empower your clients / developers without giving up Enterprise controls.
I'm facing a choice between Terraform and gcloud deployment manager.
Both tools provide similar functionality, and unfortunately each lacks support for some resources.
For example:
gcloud can create a service account (terraform cannot)
terraform can manage DNS record set (gcloud cannot)
and many others ...
Questions:
Can you recommend one tool over the other?
What do you think: which tool will have a richer set of available resources in the long run?
Which solution are you using in your projects?
Someone may say this is not a question you should ask on Stack Overflow, but I will answer anyway.
It is possible to combine multiple tools. The primary tool you should run is Terraform. Use Terraform to manage all resources it supports natively, and use the external data source to invoke gcloud (or anything else). While it will not always be elegant, it will get the work done.
In practice, I take the same approach to invoke aws-cli via external.
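Here is a minimal sketch of that pattern using the external data source; it assumes gcloud and jq are installed and uses a hypothetical project ID. Note that the wrapped command must print a flat JSON object of strings to stdout:

    # Invoke gcloud from Terraform via the external data source.
    data "external" "project_info" {
      program = ["bash", "-c",
        "gcloud projects describe my-project --format=json | jq '{projectNumber: .projectNumber}'"
      ]
    }

    output "project_number" {
      value = data.external.project_info.result.projectNumber
    }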
I personally found Deployment Manager harder to get started with for what I wanted to do, although I had previous experience with Terraform, so I may be biased. Terraform for me was easier.
That said, the gcloud command-line tool is extremely good, and as Anton has said, you can feed it in when you need it via external. Also note that this is what Terraform does and has been doing for a long time. They are also, in my experience, quite good at adding new features. Yes, Google Deployment Manager might have them first, as it is Google's in-house tool, but Terraform would never be far behind.
In the long run, Terraform may be easier to integrate with other services, and there is always the option of moving to other providers. On top of that, you have one configuration format to use. As this is what Terraform does, I find the way you structure and work with it very logical and easily understood - something that's valuable if you're going to be sharing and working with other team members.
Deployment Manager is a declarative deployment orchestration tool specifically for Google Cloud Platform. So, if you're all in on Google, or just want to automate your processes on our infrastructure, you can certainly do so with Deployment Manager. Deployment Manager also allows you to integrate with other GCP services such as Identity and Access Management. Cross-platform alternatives such as Puppet, Chef, and Terraform work across multiple cloud providers. They aren't hosted, so you end up setting up your own infrastructure to support them. CloudFormation from AWS is only structured to work within AWS infrastructure, and it integrates well with AWS services.