Enabling all APIs in a Google Cloud project - google-cloud-platform

Google Cloud requires an API to be enabled before many things can be done.
Enabling takes just one CLI command and is usually very fast. The CLI even offers to enable an API if I try to do something that requires one that isn't enabled yet. But it still interrupts development.
My question is: why are these APIs not enabled by default? And is it OK to enable them all right after creating a new project, so I don't have to bother enabling them later?
I would like to understand the purpose of this design and learn the best practices.

Well, they're disabled mainly so that you don't incur costs you weren't intending to, so that you're aware of which service you're using at which point, and so that you can track the usage/costs of each of them.
Also, some services like Pub/Sub are dependent on others, and others, such as Container Registry (or Artifact Registry), require a Cloud Storage bucket for artifacts to be stored, and one will be created automatically if you push a Docker image or use Cloud Build. So these are things to be aware of.
Enabling an API takes a bit of time depending on the service, yes, but it's a one-time action per project. I'm not sure what exactly your concerns about the waiting time are, but if you want to keep running commands while a gcloud command is still enabling some APIs, you can use the --async flag, which runs the operation in the background so you don't have to wait for it to complete before running another command.
Lastly, sure, you can just enable them all if you know what you're doing, but at your own risk - it's safer to enable just the ones you need, and, as you might already be aware, you can enable multiple APIs in a single gcloud command. In the example of Container Registry, it uses Cloud Storage, for which you will still be billed.
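For instance, enabling several APIs in one command looks like this (the project ID and the particular services are just illustrative examples):
gcloud services enable \
  pubsub.googleapis.com \
  cloudbuild.googleapis.com \
  artifactregistry.googleapis.com \
  --project=my-project-id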

Enabling services enables access to (often billed) resources.
It's considered good practice to keep this "surface" of resources constrained to those that you(r customers) need; the more services you enable, the greater your potential attack surface and potential bills.
Google provides an increasing number of services (accessible through APIs). It is highly unlikely that you would ever want to access them all.
APIs are enabled by Project. The Project creation phase (including enabling services) is generally only a very small slice of the entire lifetime of a Project, even for those Projects created-and-torn-down on demand.
It's possible to enable the APIs asynchronously, permitting you to enable-not-block each service:
for SERVICE in "containerregistry" "container" "cloudbuild" # ...add any other services you need
do
  gcloud services enable ${SERVICE}.googleapis.com --project=${PROJECT} --async
done
Following on from this, it is good practice to automate your organization's project provisioning (scripts, Terraform, Deployment Manager, etc.). This provides a baseline template for how your projects are created, which services are enabled, default permissions, etc. Then your developers simply fire-and-forget a provisioner (hopefully also checked in to your source control), drink a coffee, and wait while these steps are done for them.
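A very rough sketch of what such a provisioner could do with plain gcloud (the project ID, services and group are hypothetical placeholders; a real baseline would more likely live in Terraform or Deployment Manager):
PROJECT=my-team-sandbox
gcloud projects create ${PROJECT}
gcloud services enable compute.googleapis.com cloudbuild.googleapis.com --project=${PROJECT} --async
gcloud projects add-iam-policy-binding ${PROJECT} --member="group:devs@example.com" --role="roles/viewer"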

Related

Extracting metrics such as CPU utilization into reports via command line/bash scripts?

In Azure, for example, I created a few bash scripts that give me things like average daily CPU utilization over whatever time period I want for any/all VMs, using their command line tool.
I can't seem to figure out how to do this in Google Cloud except by manually using the console (the automatically generated daily usage reports don't seem to give me any CPU info either). So far, numerous searches have told me that using the monitoring function in the Google Cloud console is basically the only way to do this, as the "gcloud" CLI will only report quotas back, which isn't really what I'm after here. I haven't bothered with the Ops Agent install yet, as my understanding is that it just adds additional metrics (to the console) and not functionality to the Google Cloud CLI. Up to this point I've only ever managed Azure and some AWS, so maybe what I'm trying to do isn't even possible in Google Cloud?
Monitoring (formerly Stackdriver) does seem to be neglected by the CLI (gcloud).
There is a gcloud monitoring "group" but even the gcloud alpha monitoring and gcloud beta monitoring commands are limited (and don't include e.g. metrics).
That said, gcloud implements Google's underlying (service) APIs and, for those (increasingly fewer) cases where the CLI does not yet implement an API and its methods, you can use APIs Explorer to find the underlying e.g. Monitoring service directly.
Metrics can be accessed through a query over the underlying time-series data, e.g. projects.timeSeries.query. APIs Explorer provides a form that lets you invoke service methods from the browser too.
You could then use e.g. curl to construct the queries you need for your bash scripts and other tools (e.g. jq) to post-process the data.
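As a rough sketch (the metric filter and time window are illustrative, and this assumes GNU date and jq are available), listing Compute Engine CPU utilization for the last hour and averaging it could look like:
PROJECT=my-project-id
TOKEN=$(gcloud auth print-access-token)
START=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
END=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# Query the Monitoring API for CPU utilization time series, then average all points.
curl -s -G \
  -H "Authorization: Bearer ${TOKEN}" \
  "https://monitoring.googleapis.com/v3/projects/${PROJECT}/timeSeries" \
  --data-urlencode 'filter=metric.type="compute.googleapis.com/instance/cpu/utilization"' \
  --data-urlencode "interval.startTime=${START}" \
  --data-urlencode "interval.endTime=${END}" \
  | jq '[.timeSeries[]?.points[]?.value.doubleValue] | add / length'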
Alternatively, and if you want a more programmatic experience with good error-handling and control over the output formatting, you can use any of the language-specific SDKs (client libraries).
I'd be surprised if someone hasn't written a CLI tool to complement gcloud for monitoring as it's a reasonable need.
It may be worth submitting a feature request on Google's Issue Tracker. I'm unsure whether it would best be placed under Cloud CLI or Monitoring. Perhaps try Monitoring.

How to disable firebase function versioning

Is there any way to disable Google Cloud Functions versioning?
For a long time I've tried to limit the number of versions kept in the Cloud Functions history, or, if that's impossible, disable it completely...
This is something that, at a low level, any infrastructure manager will let you do, but Google intentionally doesn't.
When using Firebase Cloud Functions, there's a lifecycle of a background function. As stated in the documentation:
When you update the function by deploying updated code, instances for older versions are cleaned up along with build artifacts in Cloud Storage and Container Registry, and replaced by new instances.
When you delete the function, all instances and zip archives are cleaned up, along with related build artifacts in Cloud Storage and Container Registry. The connection between the function and the event provider is removed.
There is no need to manually clean up or remove the previous versions, as the Firebase deploy scripts do it automatically.
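For example, it is the ordinary redeploy that triggers that cleanup (the --only flag here simply limits the deploy to functions):
firebase deploy --only functions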
Based on the Cloud Functions Execution Environment:
Cloud Functions run in a fully-managed, serverless environment where Google handles infrastructure, operating systems, and runtime environments completely on your behalf. Each Cloud Function runs in its own isolated secure execution context, scales automatically, and has a lifecycle independent from other functions.
This means that you should not remove build artifacts, since Cloud Functions scale automatically and new instances are built from these artifacts.

How to back up whole GCP projects (including service and infrastructure config like subnets, firewall rules, etc.)

We are evaluating options to back up whole Google Cloud projects. Everything that could possibly get lost somehow should be saved. What would be a good way to back up and recover networks, subnets, routing, etc.?
Just to be clear: our scope is not only data and files like Compute Engine disks or storage buckets, but also the whole "how everything is put together" - all code and config describing the infrastructure and services of a GCP project (as far as possible).
Of course we could simply save all the code that created resources (e.g. via Deployment Manager or the gcloud SDK), but we also want to cover, as well as possible, anything someone provisioned by hand / via the GUI.
Recursively pulling data with the gcloud SDK (e.g. gcloud compute networks ... list/describe for network config) could be an option, but maybe someone has already found a better solution?
The output should be detailed enough to restore a specific resource (better: all contained resources) in a GCP project (e.g. via Deployment Manager).
All constructive ideas are appreciated!
You can use this product to reverse-engineer the infrastructure and generate a tfstate file to use with Terraform:
https://github.com/GoogleCloudPlatform/terraformer
For the rest, there is no magic; you have to write code.
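As an illustration, a Terraformer run for network-related resources might look roughly like this (project, region and resource names are placeholders; check Terraformer's documentation for the resource types it actually supports), with a gcloud describe loop as a fallback for anything it doesn't cover:
terraformer import google --resources=networks,subnetworks,firewall --projects=my-project-id --regions=europe-west1
# Fallback: dump raw JSON descriptions of resources you still manage by hand.
for NETWORK in $(gcloud compute networks list --project=my-project-id --format="value(name)")
do
  gcloud compute networks describe "${NETWORK}" --project=my-project-id --format=json > "network-${NETWORK}.json"
done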

Terraform vs gcloud deployment-manager

I'm facing a choice between Terraform and gcloud Deployment Manager.
Both tools provide similar functionality, and unfortunately neither covers all resources.
For example:
gcloud can create a service account (Terraform cannot)
Terraform can manage a DNS record set (gcloud cannot)
and many others ...
Questions:
Can you recommend one tool over the other?
What do you think, which tool will have a richer set of available resources in the long run?
Which solution are you using in your projects?
Someone may say this is not a question you should ask on Stack Overflow, but I will answer anyway.
It is possible to combine multiple tools. The primary tool you should run is Terraform. Use Terraform to manage all the resources it supports natively, and use the external provider to invoke gcloud (or anything else). While it may not always be very elegant, it will get the work done.
In practice, I use the same approach to invoke aws-cli via external.
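To illustrate the pattern (all names here are hypothetical): Terraform's external data source runs a program that must print a flat JSON object of strings, so a small wrapper around gcloud might look like this:
#!/usr/bin/env bash
# Hypothetical helper for a Terraform "external" data source: it looks up a
# service account email with gcloud and prints it as a JSON object.
set -euo pipefail
EMAIL=$(gcloud iam service-accounts list \
  --project=my-project-id \
  --filter="displayName:deployer" \
  --format="value(email)")
printf '{"email":"%s"}\n' "${EMAIL}"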
I personally found Deployment Manager harder to get started with for what I wanted to do, although I had previous experience with Terraform, so I may be biased. Terraform for me was easier.
That said, the gcloud command line tool is extremely good and, as Anton has said, you can feed it in when you need it via external. Also note that this is what Terraform does and has been doing for a long time. In my experience they are also quite good at adding new features etc. Yes, gcloud Deployment Manager might get them first, as it's Google in-house, but Terraform would never be far behind.
In the long run Terraform may be easier to integrate with other services, and there's always the option of moving to other providers. On top of that, you have one configuration format to use. As this is what Terraform does, I find the way you structure and work with it very logical and easily understood. That's valuable if you're going to be sharing and working with other team members.
Deployment Manager is a declarative deployment orchestration tool specifically for Google Cloud Platform. So, if you're all in on Google, or just want to automate your processes on our infrastructure, you can certainly do so with Deployment Manager. Deployment Manager also allows you to integrate with other GCP services such as Identity and Access Management. Cross-platform alternatives such as Puppet, Chef, and Terraform work across multiple cloud providers. They aren't hosted, so you end up setting up your own infrastructure to support them. CloudFormation from AWS is only structured to work within AWS infrastructure, and it integrates well with AWS services.
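For comparison, the day-to-day workflow of the two tools looks roughly like this (deployment and config names are placeholders):
# Deployment Manager: create or update a deployment from a YAML config.
gcloud deployment-manager deployments create my-deployment --config config.yaml
gcloud deployment-manager deployments update my-deployment --config config.yaml
# Terraform: initialise the working directory, then plan and apply.
terraform init
terraform plan
terraform apply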

Best practice for reconfiguring and redeploying on an AWS auto scaling group

I am new to AWS (Amazon Web Services) as well as to our own custom boto-based Python deployment scripts, but wanted to ask for advice or best practices for a simple configuration management task. We have a simple web application with configuration data for several different backend environments, controlled by a Java system property defined with a -D command-line flag.
The current procedure requires python scripts to completely destroy and rebuild all the virtual infrastructure (load balancers, auto scale groups, etc.) to redeploy the application with a change to the command line parameter. On a traditional server infrastructure, we would log in to the management console of the container, change the variable, bounce the container, and we're done.
Is there a best practice for this operation on AWS environments, or is the complete destruction and rebuilding of all the pieces the only way to accomplish this task in an AWS environment?
It depends on what resources you have to change. AWS is evolving every day at a fast pace. I would suggest you take a look at the AWS API for the resources you need to deal with and check whether you can change a resource without destroying it.
Ex: today you cannot change a Launch Configuration once it is created; you must delete it and create it again with the new configuration. And if you have an Auto Scaling group attached to that Launch Configuration, you will have to update it to point at the new one, and so on.
IMHO I see no problems with your approach, but as I believe there is always room for improvement, I think you can refactor it with the help of the AWS API documentation.
HTH
I think I found the answer to my own question. I know the interface to AWS is constantly changing, and I don't think this functionality is available yet in the Python boto library, but the ability I was looking for is best described as "Modifying Attributes of a Stopped Instance", with --user-data being the attribute in question. Documentation for performing this action using HTTP requests and the AWS command line interface can be found here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Using_ChangingAttributesWhileInstanceStopped.html
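For reference, with the AWS CLI the sequence might look roughly like this (the instance ID and file names are placeholders; depending on your CLI version the user data may need to be base64-encoded first, as shown):
# Stop the instance, replace its user data, then start it again.
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
base64 -w0 new_user_data.sh > new_user_data.b64
aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --attribute userData --value file://new_user_data.b64
aws ec2 start-instances --instance-ids i-1234567890abcdef0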