Generate deployment for project with existing infrastructure

I'm trying to incorporate Deployment Manager into my project which already has various instances of services up and running. I don't want to write deployments for all of them, and was hoping GCP offered a tool to generate them automatically, since it has exact information about which infrastructure components are up and how they are configured. Does such a tool exist?

Related

What is the difference between Cloud Build and Cloud Deploy?

They both seem to be recommended CI/CD tools within Google Cloud, but with similar functionality. When would I use one over the other? Maybe together?
Cloud Build seems to be the de facto tool, while Cloud Deploy says it can do "pipeline and promotion management."
Both of them are designed as serverless offerings, meaning you don't have to manage the underlying infrastructure of your builds, and both have you define delivery pipelines in a YAML configuration file. However, Cloud Deploy additionally requires a Skaffold configuration, which it uses to perform its render and deploy operations.
And according to this documentation,
Google Cloud Deploy is a service that automates delivery of your applications to a series of target environments in a defined sequence.
Cloud Deploy is an opinionated, continuous delivery system currently supporting Kubernetes clusters and Anthos. It picks up after the CI process has completed (i.e. the artifact/images are built) and is responsible for delivering the software to production via a progression sequence defined in a delivery pipeline.
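To make that concrete, here is a minimal sketch of the YAML in which a Cloud Deploy delivery pipeline and its targets are declared; the pipeline, target, and cluster names below are hypothetical placeholders, and Skaffold still renders each release's manifests:

    # clouddeploy.yaml - a minimal sketch of a two-stage pipeline.
    # All names and cluster paths are hypothetical placeholders.
    apiVersion: deploy.cloud.google.com/v1
    kind: DeliveryPipeline
    metadata:
      name: my-app-pipeline
    description: promote releases from staging to production
    serialPipeline:
      stages:
      - targetId: staging
      - targetId: production
    ---
    apiVersion: deploy.cloud.google.com/v1
    kind: Target
    metadata:
      name: staging
    gke:
      cluster: projects/my-project/locations/us-central1/clusters/staging
    ---
    apiVersion: deploy.cloud.google.com/v1
    kind: Target
    metadata:
      name: production
    gke:
      cluster: projects/my-project/locations/us-central1/clusters/prod

Registering it would then be something like gcloud deploy apply --file=clouddeploy.yaml --region=us-central1.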
Google Cloud Build, on the other hand, is a service that executes your builds on Google Cloud.
Cloud Build (GCB) is Google's cloud Continuous Integration/Continuous Delivery (CI/CD) solution. It takes user code stored in Cloud Source Repositories, GitHub, Bitbucket, or other solutions; builds it; runs tests; and saves the results to an artifact repository like Google Container Registry, Artifactory, or a Google Cloud Storage bucket. It also supports complex builds with multiple steps, for example testing and deployments. If you want to extend your CI pipeline, it's as easy as adding an additional step to it. You can take your artifacts, either built or stored locally or at your destination, and easily deploy them to many services with a deployment strategy of your choice.
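For comparison, a Cloud Build config is an ordered list of build steps in YAML; here is a minimal sketch in which deployment is simply one more step (the image name, test command, and Cloud Run service are hypothetical placeholders):

    # cloudbuild.yaml - a minimal sketch of a multi-step build.
    # Image name, test command, and service name are hypothetical
    # placeholders; $SHORT_SHA is populated by build triggers.
    steps:
    # 1. Build the container image.
    - name: 'gcr.io/cloud-builders/docker'
      args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA', '.']
    # 2. Run tests inside the freshly built image.
    - name: 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA'
      entrypoint: 'sh'
      args: ['-c', 'make test']
    # 3. Push the image so the deploy step can pull it.
    - name: 'gcr.io/cloud-builders/docker'
      args: ['push', 'gcr.io/$PROJECT_ID/my-app:$SHORT_SHA']
    # 4. Deployment is just one more step, e.g. to Cloud Run.
    - name: 'gcr.io/cloud-builders/gcloud'
      args: ['run', 'deploy', 'my-app',
             '--image=gcr.io/$PROJECT_ID/my-app:$SHORT_SHA',
             '--region=us-central1']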
It is hard to choose between the two services without more details, and the choice will still depend on your use case. However, their stated missions might make it easier to decide:
Cloud Build's mission is to help GCP users build better software faster, more securely by providing a CI/CD workflow automation product for developer teams and other GCP services.
Cloud Deploy's mission is to make it easier to set up and run continuous software delivery to a Google Kubernetes Engine environment.
In addition, refer to this documentation for pricing information: Cloud Build pricing and Cloud Deploy pricing.

Native GCP solution for automatically deploying IAM resources to projects?

I'm trying to come up with a way in GCP to automatically deploy defined IAM roles, policies and policy bindings to selected GCP projects or all GCP projects.
I am aware that GCP organizations exist and that they can be used to define IAM resources in one place and have them inherited by child projects. However, organizations are not mandatory in GCP, and some customers will be using the old structure where projects exist side by side without inheritance and will not want to migrate to an organization.
One solution would be to create scripts which iterate over projects and create everything. However, a GCP-native solution would be preferable. Is there a GCP-native way of deploying defined IAM resources like this - and possibly other project-level configurations - to specific GCP projects or all projects, one that works regardless of whether the customer uses organizations and without iterating over projects?
I'm trying to come up with a way in GCP to automatically deploy defined IAM roles, policies and policy bindings to selected GCP projects or all GCP projects.
Deployment tools use concise descriptions of resources called configuration files. These tools manage resource state, meaning you declare what you want and they make it so. They are not dynamic: you do not say "sometimes do X and sometimes do Y." You declare the desired state, and if the actual state differs, the tool changes it to match.
Deployment tools are IaC - Infrastructure as Code. The configuration files are the blueprint for your goal of "desired state." You write the configuration files, and the tools know how to build the resources that match the desired state.
If your goal is dynamic configuration based upon inputs, conditionals, and/or external factors, IaC-based tools will fail to meet your goal.
For IaC-based tools, you have two well-supported options:
Google Deployment Manager. This is an official Google product. This product is vendor-specific.
Terraform Google Provider. Terraform is a HashiCorp product. The Google Provider is developed by Google.
I recommend choosing Terraform and the Google Provider. Terraform is cross-platform, and most of the industry supports it. Terraform is very easy to use, and there are numerous training resources, example configurations, Internet guides, getting-started articles, and YouTube videos. I have written a few articles on Terraform with Google Cloud.
In your question, you mention writing scripts. That is possible, but I do not recommend it. For one-off configurations, using the Google Cloud CLI in a script is workable and sometimes necessary, but the benefits of a deployment language, once mastered, are tremendous.
without iterating over projects?
Unless you implement an organization, Google Cloud projects are separate, independent resources. Deployment tools are project-specific, meaning that if you want to manage resources in more than one project, you must declare that in the deployment configuration. They do not iterate over projects; you declare each project.
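To make the declarative, per-project model concrete, here is a minimal sketch of a Deployment Manager config that declares the same binding in two explicitly named projects. It assumes Deployment Manager's virtual iamMemberBinding type; the project IDs, role, and member are hypothetical placeholders.

    # config.yaml - a minimal sketch: one binding declared per target project.
    # Project IDs, the role, and the member are hypothetical placeholders.
    resources:
    - name: binding-project-a
      type: gcp-types/cloudresourcemanager-v1:virtual.projects.iamMemberBinding
      properties:
        resource: customer-project-a   # first target project, declared explicitly
        role: roles/logging.viewer
        member: group:auditors@example.com
    - name: binding-project-b
      type: gcp-types/cloudresourcemanager-v1:virtual.projects.iamMemberBinding
      properties:
        resource: customer-project-b   # second target project, declared explicitly
        role: roles/logging.viewer
        member: group:auditors@example.com

A Terraform configuration would be structured the same way: the target projects are listed in the configuration (or in a variable), not discovered at run time.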

How to back up whole GCP projects (including service and infrastructure config like subnets, firewall rules, etc.)

We are evaluating options to back up whole Google Cloud projects. Everything that could possibly get lost somehow should be saved. What would be a good way to back up and recover networks, subnets, routing, etc.?
Just to be clear: our scope is not only data and files like Compute Engine disks or Storage buckets, but also the whole "how everything is put together" - all code and config describing the infrastructure and services of a GCP project (as far as possible).
Of course we could simply save all code that created resources (e.g. via Deployment Manager or the gcloud SDK), but we also want to cover, as well as possible, anything someone provisioned by hand / via the GUI.
Recursively pulling data with gcloud sdk (e.g. gcloud compute networks ... list/describe for network config) could be an option, but maybe someone has already found a better solution?
Output should be detailed enough to be able to restore a specific resource (better: all contained resources) in a GCP project (e.g. via Deployment Manager).
All constructive ideas are appreciated!
You can use this product to reverse-engineer the infrastructure and generate a tfstate file to use with Terraform:
https://github.com/GoogleCloudPlatform/terraformer
For the rest, there is no magic; you have to write code.

Deploy different resources using Deployment Manager?

I'm planning to use Deployment Manager to deploy a new project for each of our clients.
I'm just wondering: can I do the following using Deployment Manager, or put it into a script/YAML file, so it deploys all components at once from the command shell?
create a new GCP project
create a VPC for the client with custom subnet assigned
create a VM and set the network to the custom VPC/subnet
create an App Engine app with different services using the YAML file
create storage buckets
create a Cloud SQL for PostgreSQL instance
So far, I can deploy only the VM through Deployment Manager. I can create the others individually using the command line, but not through Deployment Manager in one single step.
Thanks for your help.
Deployment Manager should work perfectly for this type of setup. There are a few minor caveats though.
You need to have a project in place from which you can run Deployment Manager
You will need to grant the Deployment Manager service account all the required permissions before creating the deployment (such as Project Creator at the org level). The service account is [PROJECT_NUMBER]@cloudservices.gserviceaccount.com
Next, you will want to call each of the resources individually in your Deployment Manager manifest; luckily, all of these resource APIs are supported by DM (a combined sketch follows this list):
Projects to create the project.
Note: all following resources should make a reference to this resource to create a dependency, so that DM does not try to create them before the project exists, which would result in a failure
VPC and VMs: use something like this
Note: that example includes adding GKE clusters at the end and a VPC peering you won't need, but it demonstrates the creation of a VPC, subnets, firewall rules, and a VM
App Engine
GCS Bucket
SQL instance
As long as your overall config is less than 1 MB, you can place all these resources into a single config.
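To tie the list above together, here is a rough, untested sketch of such a combined config. It uses gcp-types resource types with an explicit project property and $(ref...) references so that DM creates the project before anything inside it; all IDs and names are hypothetical placeholders, and the App Engine and Cloud SQL resources are omitted for brevity.

    # config.yaml - a partial, hypothetical sketch: project, VPC, subnet,
    # VM, and bucket in one deployment. All IDs and names are placeholders.
    resources:
    - name: client-project
      type: gcp-types/cloudresourcemanager-v1:projects
      properties:
        projectId: client-project-1234
        name: client-project
        parent:
          type: organization
          id: "123456789012"
    - name: client-vpc
      type: gcp-types/compute-v1:networks
      properties:
        # The $(ref...) makes DM create the project before the network.
        project: $(ref.client-project.projectId)
        name: client-vpc
        autoCreateSubnetworks: false
    - name: client-subnet
      type: gcp-types/compute-v1:subnetworks
      properties:
        project: $(ref.client-project.projectId)
        name: client-subnet
        region: us-central1
        network: $(ref.client-vpc.selfLink)
        ipCidrRange: 10.10.0.0/24
    - name: client-vm
      type: gcp-types/compute-v1:instances
      properties:
        project: $(ref.client-project.projectId)
        name: client-vm
        zone: us-central1-a
        machineType: zones/us-central1-a/machineTypes/e2-small
        disks:
        - boot: true
          autoDelete: true
          initializeParams:
            sourceImage: projects/debian-cloud/global/images/family/debian-11
        networkInterfaces:
        - subnetwork: $(ref.client-subnet.selfLink)
    - name: client-bucket
      type: gcp-types/storage-v1:buckets
      properties:
        project: $(ref.client-project.projectId)
        name: client-project-1234-assets   # bucket names must be globally unique
        location: US

You would then deploy it with gcloud deployment-manager deployments create client-deployment --config config.yaml.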
If you are new to DM, I recommend trying each of these resources individually to make sure that you have the syntax correct. Trying to debug syntax errors with multiple resources is much more difficult.
I also recommend using the --preview flag before creating or updating resources so that you can make sure that your configurations or changes will come into effect the way you planned.
Finally, you can either write all of this directly into a YAML config, or you can create templates using either jinja or python2, which can be imported into your config.yaml.
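For instance, a config.yaml that imports a hypothetical jinja template and instantiates it twice might look like this:

    # config.yaml - importing a hypothetical template and using it as a type.
    imports:
    - path: vm_template.jinja
    resources:
    - name: client-a-vm
      type: vm_template.jinja
      properties:
        zone: us-central1-a
    - name: client-b-vm
      type: vm_template.jinja
      properties:
        zone: europe-west1-b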
Please take a look at the Deployment Manager Cloud Foundation Toolkit, which is a set of well-designed templates.

Automatically set up an AWS Environment

I wanted to know if there is a way to set up a cloud environment using Amazon Web Services automatically (like by just invoking a batch file...).
I have a scenario where I want to set up the environment with all the requisite things like OS, platforms, etc. I want to automate the entire process of setting up the environment. Is this possible?
I am trying to do Continuous Integration, and as part of CI I want to first set up the environment for the application to be deployed, deploy the application, and then run automated and performance tests. I am using Jenkins to run my automated and performance test cases with Selenium and JMeter. Kindly help me.
You can use different tools based on your requirement.
If you also want to configure a VPC and other network-level configuration, you can use CloudFormation: basically, you create a template and launch your infrastructure from this template file.
https://aws.amazon.com/cloudformation/
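As an illustration, a minimal, hypothetical CloudFormation template creating a VPC with one subnet might look like this (the CIDR ranges and names are placeholders):

    # template.yaml - a minimal sketch: a VPC with one subnet.
    # CIDR ranges and the description are hypothetical placeholders.
    AWSTemplateFormatVersion: '2010-09-09'
    Description: Minimal network skeleton for a CI environment.
    Resources:
      AppVpc:
        Type: AWS::EC2::VPC
        Properties:
          CidrBlock: 10.0.0.0/16
      AppSubnet:
        Type: AWS::EC2::Subnet
        Properties:
          VpcId: !Ref AppVpc        # reference keeps creation order correct
          CidrBlock: 10.0.1.0/24

You would launch it with aws cloudformation create-stack --stack-name ci-env --template-body file://template.yaml.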
If you need to launch a project with a database, an application server (Tomcat, Java, Python, ...), and load balancing and autoscaling configuration, you can use Elastic Beanstalk:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/Welcome.html
OpsWorks or Docker could also be an option depending on your requirements, but they need preconfiguration.
It would be easier to advise a solution if you extended your question with your use case.
