Google Cloud Platform - Define credentials in code

How do we define credentials in a Java program that connects to Google Cloud Platform to execute code?
There is a standard way of setting the GOOGLE_APPLICATION_CREDENTIALS environment variable, but I want to define the credentials in code. Any suggestions?

Thanks for your response. Understood that defining credentials in code is not recommended by GCP, so I will use Application Default Credentials (ADC).
Adding more info:
Providing credentials to your application
GCP client libraries use a strategy called Application Default Credentials (ADC) to find your application's credentials. When your code uses a client library, the strategy checks for your credentials in the following order:
First, ADC checks to see if the environment variable GOOGLE_APPLICATION_CREDENTIALS is set. If the variable is set, ADC uses the service account file that the variable points to.
If the environment variable isn't set, ADC uses the default service account that Compute Engine, Kubernetes Engine, App Engine, and Cloud Functions provide, for applications that run on those services.
If ADC can't use either of the above credentials, an error occurs.
The following code example illustrates this strategy. The example doesn't explicitly specify the application credentials. However, ADC is able to implicitly find the credentials as long as the GOOGLE_APPLICATION_CREDENTIALS environment variable is set, or as long as the application is running on Compute Engine, Kubernetes Engine, App Engine, or Cloud Functions.
Java Code:
static void authImplicit() {
  // If you don't specify credentials when constructing the client, the client
  // library will look for credentials via the environment variable
  // GOOGLE_APPLICATION_CREDENTIALS.
  Storage storage = StorageOptions.getDefaultInstance().getService();

  System.out.println("Buckets:");
  Page<Bucket> buckets = storage.list();
  for (Bucket bucket : buckets.iterateAll()) {
    System.out.println(bucket.toString());
  }
}
You can find all these details in the GCP documentation: https://cloud.google.com/docs/authentication/production#auth-cloud-app-engine-java
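That said, if you really do need to point at a key file from code rather than via the environment variable, the client builders accept explicit credentials. A minimal sketch, assuming the google-cloud-storage and google-auth-library dependencies are on the classpath; the key file path is a placeholder:

```java
import java.io.FileInputStream;
import java.io.IOException;

import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class ExplicitAuth {
  static Storage authExplicit(String keyPath) throws IOException {
    // Load the service account key directly instead of relying on
    // the GOOGLE_APPLICATION_CREDENTIALS environment variable.
    GoogleCredentials credentials;
    try (FileInputStream keyStream = new FileInputStream(keyPath)) {
      credentials = GoogleCredentials.fromStream(keyStream);
    }
    // Build a client that uses these credentials explicitly.
    return StorageOptions.newBuilder()
        .setCredentials(credentials)
        .build()
        .getService();
  }
}
```

Keep in mind this embeds a dependency on a key file sitting on disk, which is exactly what ADC is designed to avoid on Google-hosted environments.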

Related

How to achieve multiple gcs backends in terraform

Within our team, we each have our own dev project, and then we have a test and a prod environment.
We are currently in the process of migrating from Deployment Manager and the gcloud CLI to Terraform. However, we haven't been able to figure out a way to create isolated backends with the GCS backend. We have noticed that some remote backends support setting a dedicated workspace, but we haven't been able to set up something similar with GCS.
Is it possible to state that Terraform resource A will have a configurable backend that we can adjust per project, or is the equivalent possible with workspaces?
So that we can use either tfvars and var parameters to switch between projects?
As it stands, every time we attempt to make the backend configurable through variables, terraform init fails with:
Error: Variables not allowed
How does one go about creating isolated backends for each project?
Or, if that isn't possible, how can we guarantee that with multiple projects a shared backend state will not collide, causing the state to be incorrect?
Your backend, meaning your backend bucket, must be known when you run your terraform init command.
If you don't want to use workspaces, you have to customize the backend value before running the init. We use make to achieve this: according to the environment, make creates a backend.tf file with the correct backend name, then runs the init command.
EDIT 1
We have this piece of shell script which creates the backend file before triggering the terraform command (it's our Makefile that does this):
cat > $TF_export_dir/backend.tf << EOF
terraform {
  backend "gcs" {
    bucket = "$TF_subsidiary-$TF_environment-$TF_deployed_application_code-gcs-tfstatebackend"
    prefix = "terraform/state"
  }
}
EOF
Of course, the bucket name pattern depends on your project. The $TF_environment variable is the most important one: depending on the value set, a different bucket is reached.
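To make the environment-switching idea concrete, here is a self-contained sketch of the same approach: a shell function that writes backend.tf from a variable, run once per environment. The bucket name pattern (mycorp/myapp) is a placeholder; adapt it to your own project naming.

```shell
# Generate a backend.tf pointing at an environment-specific state bucket,
# mirroring the Makefile technique described above.
gen_backend() {
  env="$1"
  out="$2"
  cat > "$out" << EOF
terraform {
  backend "gcs" {
    bucket = "mycorp-$env-myapp-gcs-tfstatebackend"
    prefix = "terraform/state"
  }
}
EOF
}

# Same Terraform code, two different state buckets:
gen_backend dev  backend-dev.tf
gen_backend prod backend-prod.tf

grep bucket backend-dev.tf
grep bucket backend-prod.tf
```

Running `terraform init` after generating the file for the target environment then initializes against the right bucket, without ever putting a variable inside the backend block.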

The environment variable "GOOGLE_APPLICATION_CREDENTIALS" in Google machines

Background
I have a virtual machine running code that uses the Google SDK for different products (like Google Pub/Sub). According to Google documentation, my machine should have an environment variable called GOOGLE_APPLICATION_CREDENTIALS whose value points to a plain-text file holding the service account key for the application.
I have done it and it's working for me.
The Problem
It sounds like an unsafe practice to store such a key, in plain text, inside a virtual machine. If the machine is hacked, this key will be one of the first targets of the attacker.
I expected to find a solution to "hide" this key file, or just to encrypt it with a key that my application is able to read.
I found some code examples (C#) that allow the programmer to pass the credentials manually to the SDK functions. But it's not a standard way of doing it, and it changes from one product to another (it seems impossible in some products).
What is the best practice to do it?
Have a good read of the following:
https://cloud.google.com/docs/authentication/production
This describes a concept called "Application Default Credentials". The concept here is that a Compute Engine (a virtual machine) has a default service account (that you can configure) associated with it. Applications running on the Compute Engine can thus make requests from that Compute Engine to other GCP services and the requests to those services will implicitly appear to come from the service account configured against the Compute Engine.
The key phrase in the article is:
If the environment variable GOOGLE_APPLICATION_CREDENTIALS isn't set, ADC uses the default service account that Compute Engine, Google Kubernetes Engine, App Engine, Cloud Run, and Cloud Functions provide.
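In practice, this means you can remove the key file from the machine entirely and instead attach an appropriate service account to the instance itself. A sketch with gcloud, where the instance, zone, and service account names are placeholders; note the VM must be stopped before its service account can be changed:

```shell
# Stop the VM (changing the attached service account requires this).
gcloud compute instances stop my-vm --zone=us-central1-a

# Attach a dedicated service account so code on the VM can use ADC
# with no key file on disk.
gcloud compute instances set-service-account my-vm \
    --zone=us-central1-a \
    --service-account=my-app@my-project.iam.gserviceaccount.com \
    --scopes=cloud-platform

gcloud compute instances start my-vm --zone=us-central1-a
```

After this, code on the VM that constructs clients without explicit credentials will authenticate as that service account via the metadata server, and there is no plain-text key to steal.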

don't want to login google cloud with service account

I am new to Google Cloud and this is my first experience with this platform (before this I was using Azure).
I am working on a C# project, and the project has a requirement to save images online; for that, I created Cloud Storage.
Now, for using the services, I found out that I have to download a service account credential file and set the path to that file in the environment variable.
Which is good and working fine:
RxStorageClient = StorageClient.Create();
But the problem is that my whole project is a collection of 27 different projects, all in the same solution, and there are multiple cloud storage accounts involved; I also want to use them with Docker.
So I was wondering: is there any alternative to this service account system, like an API key or a connection string such as Azure provides?
Because I saw this initialization function has some other options to authenticate, but I didn't see any example:
RxStorageClient = StorageClient.Create();
Can anyone please provide a proper example of connecting to Cloud Storage services without this service-account file system?
You can do this instead of relying on the environment variable by downloading credential files for each project you need to access.
So for example, if you have three projects that you want to access storage on, then you'd need code paths that initialize the StorageClient with the appropriate service account key from each of those projects.
StorageClient.Create() can take an optional GoogleCredential object to authorize it (if you don't specify one, it grabs the default application credentials, which can be set, for one, via the GOOGLE_APPLICATION_CREDENTIALS env var).
So on GoogleCredential, check out the FromFile(String) static method, where the String is the path to the service account JSON file.
There are no examples. Service accounts are absolutely required, even if hidden from view, to deal with Google Cloud products. They're part of the IAM system for authenticating and authorizing various pieces of software for use with various products. I strongly suggest that you become familiar with the mechanisms of providing a service account to a given program. For code running outside of Google Cloud compute and serverless products, the current preferred solution involves using environment variables to point to files that contain credentials. For code running on Google services (like Cloud Run, Compute Engine, Cloud Functions), it's possible to provide service accounts by configuration so that the code doesn't need to do anything special.
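For completeness, the per-project explicit-credential path described above looks roughly like this in C#. This is a sketch assuming the Google.Cloud.Storage.V1 and Google.Apis.Auth NuGet packages; the key file names are placeholders for each project's downloaded key:

```csharp
using Google.Apis.Auth.OAuth2;
using Google.Cloud.Storage.V1;

// Load a specific service account key instead of relying on the
// GOOGLE_APPLICATION_CREDENTIALS environment variable.
GoogleCredential credentialA = GoogleCredential.FromFile("project-a-key.json");
StorageClient storageA = StorageClient.Create(credentialA);

// A second client, authorized against a different project's account.
GoogleCredential credentialB = GoogleCredential.FromFile("project-b-key.json");
StorageClient storageB = StorageClient.Create(credentialB);
```

With 27 projects in one solution, each code path can construct its client from the matching key file; in Docker, the files can be mounted as secrets rather than baked into the image.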

Dataflow - Call external API with using private IP

I'm getting an issue calling an external API from a Dataflow job.
Dataflow is running under project A, and the API is hosted in GKE in project B, with Istio. The service account used to run Dataflow has access to resources (like GCS) from project A and B.
The projects don't have a default network, and in order to run Dataflow I needed to set the flag --use_public_ips to false. With that, the job runs, but the API call never reaches the API controller and returns the following error:
I/O error while reading input message; nested exception is org.apache.catalina.connector.ClientAbortException: java.net.SocketTimeoutException"
I tested the same job in a separate environment with a default network and with Dataflow and GKE hosted under the same project. In that environment using --use_public_ips=true, the API call works, and using --use_public_ips=false it doesn't.
My questions are:
1 - What does the --use_public_ips flag change exactly in terms of external access to resources, and how can we configure our services to work with it?
2 - Is there a way to run Dataflow in a project without a default network (subnetwork specified at runtime) and without setting the --use_public_ips flag to false?
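For context on the flags in question: with public IPs disabled, Dataflow worker VMs have only private addresses, so all of their egress must be routed inside the VPC (e.g. via VPC peering to reach the GKE cluster, or Cloud NAT for the public internet); and a default network is not required as long as an explicit subnetwork is passed. A sketch of a launch command for a Python pipeline, where the project, region, and subnetwork names are placeholders:

```shell
python my_pipeline.py \
    --runner=DataflowRunner \
    --project=project-a \
    --region=us-central1 \
    --subnetwork=https://www.googleapis.com/compute/v1/projects/project-a/regions/us-central1/subnetworks/my-subnet \
    --no_use_public_ips
```

The subnetwork here must be one the Dataflow service account can use, and it must have a network path to the API endpoint in project B for the private-IP call to succeed.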

Turn off the v0.1 and v1beta1 endpoints via GCP console

I have a Flutter + Firebase app, and received an email about "Legacy GAE and GCF Metadata Server endpoints will be turned down on April 30, 2020". I updated it to v1 or whatever, and at the end of the email it suggests to turn off the endpoints completely. I'm using Google Cloud Functions and the email says
If you are using App Engine Standard or Cloud Functions, set the following environment variable: DISABLE_LEGACY_METADATA_SERVER_ENDPOINTS=true.
Upon further research this can be done through the console (https://cloud.google.com/compute/docs/storing-retrieving-metadata#custom). It says to add it as custom metadata (https://cloud.google.com/compute/docs/storing-retrieving-metadata#disable-legacy-endpoints) but I'm not sure if I'm doing this right.
For additional info, the email was triggered from a few cloud functions I have where I used the firebase admin to send push notifications (via cloud messaging)
The custom metadata feature you mention is meant to be used with Compute Engine: it allows you to pass arbitrary values to your project or instance, and to set startup and shutdown scripts. It's a handy way to pass common environment variables to all the GCE VMs in your project. You can also use custom metadata in App Engine flexible environment instances, because they are actually Compute Engine VMs in your project running your App Engine code.
Cloud Functions and App Engine Standard are fundamentally different in that they don't run in your project but in a Google-owned project. This makes your project-wide custom metadata unreachable to them.
For this reason, for Cloud Functions you'll need to set a CF-specific environment variable by either:
using the --set-env-vars flag when deploying your function with the gcloud functions deploy command
adding it to the environment variables section of your function when creating it via the Developer Console
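The first option from the list above looks roughly like this; the function name, trigger, and runtime are placeholders for your own deployment:

```shell
# Redeploy the function with the legacy metadata endpoints disabled.
gcloud functions deploy sendPushNotification \
    --runtime=nodejs10 \
    --trigger-http \
    --set-env-vars=DISABLE_LEGACY_METADATA_SERVER_ENDPOINTS=true
```

You would repeat this (or use the Console's environment variables section) for each of the Cloud Functions that triggered the email.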