I have a service listening on 'https://myapp.a.run.app/dosomething'. To leverage the scalability features of Cloud Run, the controller for 'dosomething' sends off 10 requests to 'https://myapp.a.run.app/smalltask'. With the app configured to service only one request per instance, I expect 10 instances to spin up, each do its small task, and return (all within the timeout period).
But I don't know how to properly authenticate those requests, so all 10 result in 403s. I manually pass in a bearer token with the initial request, though I expect to add some API proxy at some point. But without said API proxy, what's the right way to send the requests such that they are accepted? The app is running as a user that does have permission to access the endpoint.
Authenticating service-to-service
If your architecture is using multiple services, these services will likely need to communicate with each other.
You can use synchronous or asynchronous service-to-service communication:
For asynchronous communication, use:
Cloud Tasks for one-to-one asynchronous communication
Pub/Sub for one-to-many asynchronous communication
Cloud Scheduler for regularly scheduled asynchronous communication
Cloud Workflows for service orchestration.
For synchronous communication:
One service invokes another one over HTTP using its endpoint URL. In this use case, it's a good idea to ensure that each service is only able to make requests to specific services. For instance, if you have a login service, it should be able to access the user-profiles service, but it probably shouldn't be able to access the search service.
First, you'll need to configure the receiving service to accept requests from the calling service:
Grant the Cloud Run Invoker (roles/run.invoker) role to the calling service's identity on the receiving service. By default, this identity is PROJECT_NUMBER-compute@developer.gserviceaccount.com.
In the calling service, you'll need to:
Create a Google-signed OpenID Connect (OIDC) ID token with the audience (aud) set to the URL of the receiving service. This value must include the scheme prefix (http:// or https://), and custom domains are currently not supported for the aud value.
Include the ID token in an Authorization: Bearer ID_TOKEN header. While the container is running on Cloud Run (fully managed), you can get this token from the metadata server. If the application is running outside Google Cloud, you can generate an ID token from a service account key file (see the sketch below).
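As an illustration, here's a minimal Python sketch of the calling side, using the google-auth and requests libraries; the endpoint URL is a placeholder. On Cloud Run, fetch_id_token obtains the token from the metadata server; outside Google Cloud it can fall back to a service account key file pointed to by GOOGLE_APPLICATION_CREDENTIALS.

```python
# Minimal sketch: call a protected Cloud Run endpoint with a
# Google-signed ID token (requires google-auth and requests).
import requests
import google.auth.transport.requests
from google.oauth2 import id_token

RECEIVING_URL = "https://myapp.a.run.app/smalltask"  # placeholder endpoint

def call_protected_service(url: str) -> requests.Response:
    auth_req = google.auth.transport.requests.Request()
    # The audience must be the receiving service's URL, including
    # the https:// scheme prefix.
    token = id_token.fetch_id_token(auth_req, url)
    return requests.get(url, headers={"Authorization": f"Bearer {token}"})

resp = call_protected_service(RECEIVING_URL)
print(resp.status_code)
```

Each of the 10 smalltask requests in the original question could be sent this way, and the token can be reused across calls until it expires (about an hour).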
For a full guide, with examples in Node, Python, Go, Java, and other languages, see: Authenticating service-to-service
Related
I have a scenario where I am consuming an external API that only responds if you are authenticated. The auth is client-credentials based, i.e. service-to-service, not intended for end users.
I am designing a client microservice that talks to this external API. However, once this microservice scales, how do I share the access token returned by the external API between all instances of the client microservice?
Thank you so much for reading, and have a nice day!
Note: I am using AWS ECS.
You would need to store the token in some central location where other instances of the service can read it. AWS Secrets Manager, AWS Parameter Store, and DynamoDB are all good possible locations for storing that token.
Also, you won't be able to use the ECS integration with Secrets Manager or Parameter Store for this. Since the value can change while ECS tasks are running, you'll need to write custom code in your application that reads and updates the value as needed.
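For illustration, a rough Python sketch of that custom code, using boto3 and Parameter Store, might look like the following; the parameter name and the fetch_new_token helper are hypothetical, and the expiry-margin logic is just one way to do it.

```python
# Rough sketch: share an OAuth access token across ECS tasks via
# AWS Systems Manager Parameter Store. Assumes boto3 is installed and
# the task role can read/write the parameter. The parameter name and
# fetch_new_token() are hypothetical.
import json
import time
import boto3

ssm = boto3.client("ssm")
PARAM_NAME = "/myapp/external-api/access-token"  # hypothetical

def fetch_new_token() -> dict:
    """Placeholder for the client-credentials call to the external API."""
    raise NotImplementedError

def get_shared_token() -> str:
    try:
        raw = ssm.get_parameter(Name=PARAM_NAME, WithDecryption=True)
        cached = json.loads(raw["Parameter"]["Value"])
        if cached["expires_at"] > time.time() + 60:  # 60 s safety margin
            return cached["access_token"]
    except ssm.exceptions.ParameterNotFound:
        pass
    fresh = fetch_new_token()  # e.g. {"access_token": ..., "expires_in": ...}
    ssm.put_parameter(
        Name=PARAM_NAME,
        Value=json.dumps({
            "access_token": fresh["access_token"],
            "expires_at": time.time() + fresh["expires_in"],
        }),
        Type="SecureString",
        Overwrite=True,
    )
    return fresh["access_token"]
```

Note that this naive read-then-write flow can race when several tasks refresh at once; in the worst case each fetches its own fresh token, which most client-credentials providers tolerate.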
I'm having some trouble setting uptime checks for some Cloud Run services that don't allow unauthenticated invocations.
For context, I'm using Cloud Endpoints + ESPv2 as an API gateway that's connected to a few Cloud Run services.
The ESPv2 container/API gateway allows unauthenticated invocations, but the underlying Cloud Run services do not (since requests to these backends flow via the API gateway).
Each Cloud Run service has an internal health check endpoint that I'd like to hit periodically via Cloud Monitoring uptime checks.
This serves the purpose of ensuring that my Cloud Run services are healthy, but also gives the added benefit of reduced cold boot times, as the containers are kept 'warm'.
However, since the protected Cloud Run services expect a valid authorisation header, all of the requests from Cloud Monitoring fail with a 403.
From the Cloud Monitoring UI, it looks like you can only configure a static auth header, which won't work in this case. I need to be able to dynamically create an auth header per request sent from Cloud Monitoring.
I can see that Cloud Scheduler supports this already. I have a few internal endpoints on the Cloud Run services (that aren't exposed via the API gateway) that are hit via Cloud Scheduler, and I am able to configure an OIDC auth header on each request. Ideally, I'd be able to do the same with Cloud Monitoring.
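For reference, here is roughly what that Cloud Scheduler OIDC configuration looks like with the Python client (google-cloud-scheduler); the project, schedule, endpoint, and service account below are all placeholders.

```python
# Sketch: a Cloud Scheduler job that pings a protected Cloud Run
# endpoint with an OIDC auth header. Names and schedule are hypothetical.
from google.cloud import scheduler_v1

client = scheduler_v1.CloudSchedulerClient()
parent = "projects/my-project/locations/us-central1"  # placeholder

job = scheduler_v1.Job(
    name=f"{parent}/jobs/healthcheck-warmer",
    schedule="*/5 * * * *",  # every 5 minutes
    http_target=scheduler_v1.HttpTarget(
        uri="https://my-service.a.run.app/internal/health",  # placeholder
        http_method=scheduler_v1.HttpMethod.GET,
        oidc_token=scheduler_v1.OidcToken(
            service_account_email="scheduler-sa@my-project.iam.gserviceaccount.com",
            audience="https://my-service.a.run.app",
        ),
    ),
)
client.create_job(request={"parent": parent, "job": job})
```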
I can see a few workarounds for this, but all of them are less than ideal:
Allow unauthenticated invocations for the underlying Cloud Run services. This will make my internal services publicly accessible and then I will have to worry about handling auth within each service.
Expose the internal endpoints via the API gateway/ESPv2. This is effectively the same as the previous workaround.
Expose the internal endpoints via the API gateway/ESPv2 AND configure some sort of auth. This sort of works, but at the time of writing the only auth methods supported by ESPv2 are API keys and JWT. JWT is already out of the question, but I guess an API key would work. Again, this requires a bit of setup, which I'd rather avoid if possible.
Would appreciate any thought/advice on this.
Thanks!
This simple solution may work for your use case, as it is easier to just use a TCP uptime check on port 443:
Create your own Cloud Run service using https://cloud.google.com/run/docs/quickstarts/prebuilt-deploy.
Create a new uptime check on TCP port 443 against the Cloud Run URL.
Wait a couple of minutes.
Location results: All locations passed
Virginia OK
Oregon OK
Iowa OK
Belgium OK
Singapore OK
Sao Paulo OK
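If you'd rather script the check than click through the UI, a sketch with the Cloud Monitoring Python client (google-cloud-monitoring) could look like the following; the project and host are placeholders. Keep in mind a TCP check on 443 only verifies that the endpoint accepts connections; it never hits your authenticated health route.

```python
# Sketch: create a TCP uptime check on port 443 for a Cloud Run host.
# Assumes google-cloud-monitoring; project and host are placeholders.
from google.cloud import monitoring_v3

client = monitoring_v3.UptimeCheckServiceClient()
parent = "projects/my-project"  # placeholder

config = monitoring_v3.UptimeCheckConfig(
    display_name="cloud-run-tcp-check",
    monitored_resource={
        "type": "uptime_url",
        "labels": {
            "project_id": "my-project",            # placeholder
            "host": "myapp-abc123-uc.a.run.app",   # placeholder host
        },
    },
    tcp_check=monitoring_v3.UptimeCheckConfig.TcpCheck(port=443),
    timeout={"seconds": 10},
    period={"seconds": 300},
)
client.create_uptime_check_config(
    request={"parent": parent, "uptime_check_config": config}
)
```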
I would also like to point out that Cloud Run is a fully managed Google product with a 99.95% monthly uptime SLA and no recent incidents in the past few months, but proactively monitoring it on your end is a very good thing too.
I'm a beginner when it comes to Google Cloud. I have only worked with AWS before, but for this purpose I want to give Google Cloud a try.
I want to create an application where I don't have human users, but instead there are multiple instances of the same client application trying to access the Pub/Sub service. I would like each of these clients to register with my cloud function, which in return will:
create a pub/sub topic that only this client can listen to
return an identifier/key/something that can be used to authenticate the client the next time
How should I handle the authentication in this case? Should I create service credentials for each one of the clients? Or is there a way to provide a custom Identity Provider?
The first question is answered in this answer.
For the second one, the best way is for the user to be identified with Google OAuth (a.k.a. a Google account).
When you create the pub/sub topic for this user, you should have already identified them, so you can set the proper permissions on the topic. Then, the user can simply call the Pub/Sub endpoint as that identity.
GCF, GAE apps, apps running on GKE, etc. all have service accounts associated with them, so there should be no problem properly identifying each client app running there.
If those users don't have an account (e.g. the client app is running outside of GCP), you can ask your human users (the ones running the client apps) to either:
Authenticate with their user account on your client app
Create a service account in GCP and make the client app use it
If those are not options, you can create a service account for each of your users, and provide the proper service account key file to each client.
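As a rough Python sketch of that last option, the per-client flow could look like the following, assuming the google-cloud-pubsub library; the project, client ID, service account name, and role choice are illustrative.

```python
# Sketch: per-client topic with access limited to that client's
# service account. Assumes google-cloud-pubsub; names are placeholders.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
project_id = "my-project"   # placeholder
client_id = "client-42"     # placeholder
client_sa = f"{client_id}@my-project.iam.gserviceaccount.com"  # placeholder

# One topic per registered client.
topic_path = publisher.topic_path(project_id, f"{client_id}-topic")
publisher.create_topic(request={"name": topic_path})

# Grant only this client's service account the right to attach
# subscriptions to / consume from this topic.
policy = publisher.get_iam_policy(request={"resource": topic_path})
policy.bindings.add(
    role="roles/pubsub.subscriber",
    members=[f"serviceAccount:{client_sa}"],
)
publisher.set_iam_policy(
    request={"resource": topic_path, "policy": policy}
)
```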
A request to list all service instances to the Cloud Controller API of Cloud Foundry (API Docs) shows a credentials property in the response body.
I know you can provide credentials in service bindings and service keys through the Open Service Broker API, but how do I fill this global credentials object in a service instance?
IMO, this can only happen during service provisioning, but all the Service Broker API defines in the provisioning response is a dashboard URL and an operation.
I looked at a couple of my lab environments, which have a number of different service brokers installed on them. None of them used the field you're asking about.
That is, with cf curl /v2/service_instances, the dictionary resources[].entity.credentials was always empty.
My understanding is that service credentials are associated with a service binding or a service key, not the service itself. If you want to see the service bindings or service keys, you need to use a different API call.
For example, for service bindings: cf curl /v2/service_instances/<service-instance-guid>/service_bindings. In that output, resources[].entity.credentials should be populated with the service information (hostname, port, username, password, etc.; whatever is provided by the service).
Similarly, service key credentials would be under the API cf curl /v2/service_instances/<service-instance-guid>/service_keys.
Maybe someone else can come along and tell us the purpose of this global field, but at the time of me writing this it appears to be unused.
Hope that helps!
I am trying to obtain data from a web service (publisher).
The web service lets me send the data (message) to any url through a webhook. My plan is to send it to a Google Pub/Sub topic.
However, Google Pub/Sub does not recognize this third-party web service. It returns an HTTP 401 response code, meaning that the web service is not authenticated.
My question is, How can I authenticate it?
Authentication for requests made to Google Cloud Pub/Sub or any other of the Google Cloud Platform services can be accomplished in a couple of different ways. In your case, where you want to make a direct request via the REST API, you'll need to create a service account and authenticate via OAuth 2.0. The Using OAuth 2.0 for Server to Server Applications guide details the process. If the web service you are using supports OAuth 2.0 authentication for requests it makes, then you should basically be set. If it does not, then you will have to take care of acquiring access tokens (and acquiring new ones when they expire) manually.
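As a concrete illustration, a minimal Python sketch of that manual token flow might look like this, with placeholder project, topic, and key-file names:

```python
# Sketch: publish to Pub/Sub via the REST API with an OAuth 2.0
# access token from a service account key file. Assumes google-auth
# and requests are installed; paths and names are placeholders.
import base64
import requests
from google.oauth2 import service_account
from google.auth.transport.requests import Request

creds = service_account.Credentials.from_service_account_file(
    "service-account.json",  # placeholder key file
    scopes=["https://www.googleapis.com/auth/pubsub"],
)
creds.refresh(Request())  # fetch (or renew) the access token

url = "https://pubsub.googleapis.com/v1/projects/my-project/topics/my-topic:publish"
body = {"messages": [{"data": base64.b64encode(b"hello").decode("ascii")}]}
resp = requests.post(
    url, json=body, headers={"Authorization": f"Bearer {creds.token}"}
)
print(resp.status_code, resp.json())
```

If the third-party web service can't attach such a header itself, running a small relay like this between its webhook and Pub/Sub is one way to handle acquiring and renewing the access tokens yourself.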