Access service account ID at runtime of google cloud run service - google-cloud-platform

Does anyone have an idea how I can access, at runtime, the email address of the service account that is running my Cloud Run service?
When deploying the service with gcloud, I use a specific service account to run the service.
At runtime I need the email/ID of this service account in order to do blob signing using IAMCredentialsService.
Is there a way to get the service account ID somehow? The ComputeCredential object I have at hand doesn't provide this information. Right now I have to set an environment variable containing the service account email address, which I then read at runtime within the service.

From inside your Cloud Run container, make a GET request to this URL:
http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email
with this header:
Metadata-Flavor: Google
If you have difficulty getting the value, provide your language and I will see if I can provide a code sample for you.
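For example, in Python, a minimal sketch using only the standard library (it works only inside Google Cloud, where metadata.google.internal resolves):

import urllib.request

# Ask the metadata server for the runtime service account's email.
req = urllib.request.Request(
    "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email",
    headers={"Metadata-Flavor": "Google"},
)
with urllib.request.urlopen(req) as resp:
    service_account_email = resp.read().decode("utf-8")
print(service_account_email)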

See more in the documentation: https://cloud.google.com/run/docs/reference/container-contract#metadata-server
Container instance metadata server
Cloud Run container instances expose a metadata server that you can use to retrieve details about your container instance, such as the project ID, region, instance ID or service accounts. It can also be used to generate tokens for the runtime service account.
You can access this data from the metadata server using simple HTTP requests to the http://metadata.google.internal/ endpoint with the Metadata-Flavor: Google header: no client libraries are required. For more information, see Getting metadata.

Related

How can I create a Firebase function that disables billing on my application if the bill is above a threshold?

I've read a lot of documentation, but all of it requires me to use OAuth2 tokens or to set environment variables pointing to a JSON file that contains my credentials... That's fine when I run this locally, but how would I run this in the cloud, for example in Firebase Functions?
There is an example of capping billing in the public docs. Note that you'll need to assign the Billing Account Administrator role to the runtime service account (generally App Engine's default service account, but you can change that) for your Cloud Function for it to work. The client libraries handle authentication and authorization automatically based on the permissions assigned to that particular service account.
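The heart of the documented example is a call that detaches the billing account from the project. A rough sketch, assuming the google-api-python-client library and a placeholder project ID (the docs version first compares the reported cost against the budget before disabling anything):

from googleapiclient import discovery

PROJECT_ID = "my-project"  # placeholder; use your own project ID
PROJECT_NAME = f"projects/{PROJECT_ID}"

billing = discovery.build("cloudbilling", "v1", cache_discovery=False)

# Detaching the billing account disables billing for the project.
body = {"billingAccountName": ""}
billing.projects().updateBillingInfo(name=PROJECT_NAME, body=body).execute()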

Do we really need to bind Oracle service in PCF , can't we just use credentials mentioned in service?

I have a question: what is the difference if I just use an Oracle/MySQL service provided by PCF without binding it? What difference does it make? I can access the DB using the credentials anyway.
There are two differences that come to mind:
When you create a service through the Cloud Foundry marketplace, that will create backing resources for the service but in most cases it does not create credentials. The act of binding a service to your app, in most cases with most service brokers, will actually create service credentials for you. When you unbind, again with most brokers, the service credentials are destroyed. This makes it easy to regenerate your service credentials, just unbind/rebind the service and restart your app. The net result is that if you don't bind, there are no credentials.
Most people do not want to include credentials with the actual application (see https://12factor.net/ for details why). They want to be able to provide configuration external to the app. On Cloud Foundry this commonly amounts to binding a service.
Having said that, how do you want to provide the credentials to your application?
Service bindings are there to try and make life as a developer easier but you don't have to use them. If you want to pass in the configuration some other way, like via environment variables, a config file, or using a config service (Spring Cloud Config Server or Vault) those are fine options too.
If you do not want to bind a service to your app, the only thing you'll need to do is to create a service key instead. A service key is like a binding, but not associated with an application. It will also generate a set of unique credentials. You can then take the credentials from your service key and feed them to your app in the way that works best for you.
Ex:
cf create-service-key service-instance key-name
cf service-key service-instance key-name
The first command creates the service key, the second will display its credentials.
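You could then feed those credentials to the app through environment variables. A minimal sketch in Python (DB_CREDENTIALS is a hypothetical variable your deployment tooling would populate with the JSON credentials block from the service key):

import json
import os

# Hypothetical: the JSON credentials block from `cf service-key`,
# stored in a single environment variable by your deployment tooling.
creds = json.loads(os.environ["DB_CREDENTIALS"])
db_uri = creds["uri"]        # key names depend on the service broker
db_user = creds["username"]
db_password = creds["password"]
# Hand these to whatever database driver your app uses.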

Authenticate Google Storage object with access token in python

I am new to Google Cloud. I am trying to access Google Cloud Storage buckets to upload files, and I use a Storage object to access a bucket programmatically in Python. I am able to authenticate the Storage object with 'key.json', but I am unsure how the application will access the 'key.json' file securely when it runs in the cloud. Also, is there a way to authenticate the Storage object using an access token in Python?
Thanks in advance!
But I am unsure how the application will access the 'key.json' file securely when it runs in the cloud.
Review the details that I wrote below. Once you have selected your environment, you might not need a service account JSON file at all, because the metadata server is available to provide your code with credentials. This is the best case, and it is secure. On my personal website, I have written many articles that show how to create, manage, and store Google credentials and secrets.
Also, is there a way to authenticate the Storage object using an access token in Python?
All access is via an OAuth access token. The following link shows details using the metadata server, which I cover in more detail below.
Authenticating applications directly with access tokens
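If you already hold an access token, you can wrap it in a credentials object. A minimal sketch, assuming the google-cloud-storage and google-auth libraries (the token, project, and bucket names are placeholders):

from google.cloud import storage
from google.oauth2.credentials import Credentials

# Hypothetical: an OAuth2 access token obtained elsewhere.
credentials = Credentials(token="ya29.EXAMPLE_TOKEN")
client = storage.Client(project="my-project", credentials=credentials)

bucket = client.bucket("my-bucket")
bucket.blob("hello.txt").upload_from_string("hello world")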
There are three items to consider:
My code is not running in Google Cloud
My code is running in Google Cloud on a "compute" type of service with access to the metadata server
My code is running in Google Cloud without access to the metadata server.
1) My code is not running in Google Cloud
This means your code is running on your desktop or even in another cloud such as AWS. You are responsible for providing the method of authorization. There are two primary methods: 1) Service Account JSON key file; 2) Google OAuth User Authorization.
Service Account JSON key file
This is what you are using now with key.json. The credentials are stored in the file and are used to generate an OAuth access token. You must protect that file, as it contains your Google Cloud secrets. You can specify the key.json directly in your code or via the environment variable GOOGLE_APPLICATION_CREDENTIALS.
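A minimal sketch of the key-file approach, assuming the google-cloud-storage library (bucket and file names are placeholders):

from google.cloud import storage

# Load credentials explicitly from the service account key file.
client = storage.Client.from_service_account_json("key.json")

bucket = client.bucket("my-bucket")
bucket.blob("report.csv").upload_from_filename("report.csv")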
Google OAuth User Authorization
This method requires the user to log in to Google Accounts requesting an OAuth scope for Cloud Storage. The end result is an OAuth Access Token (just like a Service Account) that authorizes access to Cloud Storage.
Getting Started with Authentication
2) My code is running in Google Cloud on a "compute" type of service with access to the metadata server
Notice the word "metadata" server. For Google Cloud compute services, Google provides a metadata server that provides applications running on that compute service (Compute Engine, Cloud Functions, Cloud Run, etc) with credentials. If you use Google SDK Client libraries for your code, the libraries will automatically select the credentials for you. The metadata server can be disabled (denied access through role/scope removal), so you need to evaluate what you are running on.
Storing and retrieving instance metadata
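On these services, no key file is needed at all; a sketch assuming the google-cloud-storage library:

from google.cloud import storage

# No key file: the client library obtains credentials for the runtime
# service account automatically from the metadata server.
client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name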
3) My code is running in Google Cloud without access to the metadata server.
This is a similar scenario to #1. However, you are now limited to using only a service account, unless this is a web server type of service that can present the Google Accounts authorization flow to the user.

How to allow a public Google Run Instance to communicate to a *private* Google Run Instance?

I have two Docker images, A and B, running on Google Cloud Run. A needs a small memory footprint and slow scaling (it is the front end) and B needs a large memory footprint and heavy scaling under load (it is the backend).
I have made A public (allUsers can reach :80) and B private (I didn't check the checkbox).
Since a Cloud Run instance doesn't have a static IP but a dynamic URL, how can I make A "speak" to B (over HTTP) while keeping B inaccessible from the wild?
Right now, the only workaround I have found is to open HTTP ports to allUsers for both, use a subdomain name for B (like b.my.app), and call http://b.my.app from A.
This is a very bad solution, since B can be reached from outside Google's network.
Since service B is private (requires authentication), service A will need to include an HTTP Authorization header in requests to service B.
The header looks like this:
Authorization: Bearer <replace_with_token>
The token is an OAuth 2.0 Identity Token (not an Access Token). The IAM member email address for the User Credentials or Service Account is added to service B with the role roles/run.invoker.
You will still need to call the endpoint URL (xxx.y.run.app) of service B. That does not change unless you also implement Custom Domains.
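From inside service A, you can fetch an identity token from the metadata server with service B's URL as the audience, then call B with it. A minimal sketch in Python using only the standard library (the service B URL is a placeholder):

import urllib.request

B_URL = "https://b-xxxxxxxx-uc.a.run.app"  # placeholder URL of service B

# Ask the metadata server for an identity token with B's URL as audience.
token_req = urllib.request.Request(
    "http://metadata.google.internal/computeMetadata/v1/instance/"
    "service-accounts/default/identity?audience=" + B_URL,
    headers={"Metadata-Flavor": "Google"},
)
with urllib.request.urlopen(token_req) as resp:
    id_token = resp.read().decode("utf-8")

# Call service B with the token in the Authorization header.
call = urllib.request.Request(B_URL, headers={"Authorization": "Bearer " + id_token})
with urllib.request.urlopen(call) as resp:
    print(resp.read().decode("utf-8"))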
A nice feature of Cloud Run is that when authentication is required, the Cloud Run proxy handles it for you. The proxy sits in front of Cloud Run and blocks all unauthorized requests; your instance is never launched, so there is no billing time while hackers try to get through.
In one of the articles on my website, I show how to generate the identity token in Go (link); another, a three-part series, does it using curl (link). There are numerous articles on the Internet that explain this as well. In further articles, I explain how Cloud Run Identity works (link) and how Cloud Run Identity Based Access Control works (link).
Review the --service-account option which allows you to set the service account to use for identity (link).
Cloud Run Authentication documentation (link).

terraform GCE service account stanza

I'm creating an instance on GCP and am running into some issues using the service account stanza. When I do this:
service_account {
  email  = "terraformdeploy-dev@projectxxx.org.com.iam.gserviceaccount.com"
  scopes = []
}
The instance does provision with that service account, but all of the Cloud API access scopes show as disabled in the UI.
If I do this:
service_account {
  email  = "terraformdeploy-dev@projectxxx.org.com.iam.gserviceaccount.com"
  scopes = ["cloud-platform"]
}
the instance provisions with full access to all the APIs. The weird thing is that the above service account doesn't have access to all of those APIs. I'm confused about how to use the service account stanza here, as the documentation isn't very clear.
Can I just assign the service account or do I need to specify the service account and the scopes that it has?
GCE offers two methods of limiting the API access an instance can perform, and you're getting caught between the two. The first is that a GCE instance has access scopes, which limit ANY API request from that machine to those services. For example, if your GCE instance does not allow GCS write operations, then regardless of the service account associated with the instance, you cannot perform GCS write operations.
You could SSH in, authenticate with the project owner's account, and try to write to GCS, and it would still fail. This provides an extra layer of security, and it is primarily useful when you know that 'instances in this instance group will only ever need GCS read and Stackdriver API access'.
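You can see exactly which scopes an instance was granted by asking the metadata server. A small sketch in Python, standard library only, run on the instance itself:

import urllib.request

# List the access scopes granted to this instance's default service account.
req = urllib.request.Request(
    "http://metadata.google.internal/computeMetadata/v1/instance/"
    "service-accounts/default/scopes",
    headers={"Metadata-Flavor": "Google"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))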
Now, the service account associated with the instance is only used when a client library or gcloud command looks up credentials in the 'application default' location. So if your application includes its own service account JSON key and reads from it, the instance's associated service account doesn't matter.
In other words, the service account you specify makes all applications default to performing API requests using that account's credentials.
Also, do keep in mind that there are much more fine-grained scopes than just 'cloud-platform'.