Cloud Composer Service Account Scope For Running Services within GCP - google-cloud-platform

I am setting up a DAG in Cloud Composer that triggers a number of Cloud Run and Cloud Function services. The service account specified in the Cloud Composer environment (a user-created SA) definitely has permission to invoke both the Cloud Run and Cloud Function services; however, the Cloud Run services return the following error:
The request was not authenticated. Either allow unauthenticated invocations or set the proper Authorization header. Read more at https://cloud.google.com/run/docs/securing/authenticating
The tasks are like so:
# t1: request first report
big3_request = SimpleHttpOperator(
    task_id="big3_request",
    method="GET",
    http_conn_id="trigger_cloud_run_service_conversions_big_3",
    endpoint="",
    response_check=lambda response: response.status_code == 200,
)
I would have thought that the Cloud Composer environment would be able to use the service account's IAM roles, but this doesn't seem to be the case. What do I need to do here to enable the services to run? It looks like I can add the key file of the service account to the connection, but I don't see why this should be necessary if the same service account is used in the Composer environment.

Your service (the SimpleHttpOperator task running in Cloud Composer) needs to provide authentication credentials in the request. More precisely, it needs to
add a Google-signed OpenID Connect (OIDC) ID token as part of the request
The official Google documentation linked in the error message describes different methods for obtaining such a token and making a properly authenticated request to your Cloud Run service endpoint.
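A minimal sketch of one such method from a Python callable in the DAG, assuming the google-auth package (which ships with Composer) and that the worker runs as the environment's service account; the Cloud Run URL is a placeholder:

```python
def bearer_header(token: str) -> dict:
    """Wrap a token in the Authorization header Cloud Run expects."""
    return {"Authorization": f"Bearer {token}"}


def id_token_header(audience: str) -> dict:
    """Fetch a Google-signed OIDC ID token for `audience` using the
    worker's Application Default Credentials (the environment's SA)."""
    import google.auth.transport.requests
    import google.oauth2.id_token

    request = google.auth.transport.requests.Request()
    return bearer_header(google.oauth2.id_token.fetch_id_token(request, audience))


# Hypothetical usage inside a PythonOperator callable:
# import requests
# url = "https://big3-service-xyz-ew.a.run.app"  # placeholder
# resp = requests.get(url, headers=id_token_header(url))
```

Note the audience passed to fetch_id_token must be the Cloud Run service URL itself, otherwise the token is rejected.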

Related

How to enable a Google Cloud Function to be invoked by GitHub Webhook

I set up a GitHub webhook as the trigger for my Cloud Function, so whenever a change is made to the repository the webhook calls the Cloud Function. It works with unauthenticated access, but authenticated access requires some setup.
I already tried using a GCP service account that can only invoke this specific Cloud Function, but the problem is that I can't make GitHub's webhook act as this service account.
Note: I thought about adding a Bearer token check to my Cloud Function, which would add a layer of security, but that wouldn't prevent the Cloud Function from being called anyway, right?
Yes, you need to be authenticated with a Google account (service account or user account) and to be authorized by IAM to invoke the function. Sadly, GitHub webhooks can't use a service account key file to generate a signed token and then securely call your Cloud Function.
However, you can use an API key (which you can add to the URL of your webhook). I wrote an article about this that still works today with API Gateway (the managed version of the ESPv2 used in my article).
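On the Bearer-token idea from the question: GitHub can sign each webhook payload with a shared secret, delivered in the X-Hub-Signature-256 header, and you can verify that signature inside the function. It doesn't make the endpoint private, but it rejects requests that weren't sent by your repository. A standard-library sketch (the secret and payload values are examples):

```python
import hashlib
import hmac


def valid_github_signature(secret: bytes, payload: bytes, signature_header: str) -> bool:
    """Compare GitHub's X-Hub-Signature-256 header to our own HMAC of the body."""
    expected = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature_header)
```

In the function you would read the raw request body and the X-Hub-Signature-256 header, then return 403 when valid_github_signature is False.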

HTTP cloud scheduler job fails to trigger cloud run even with oidc service account authentication

My HTTP Cloud Scheduler job fails to trigger the Cloud Run endpoint. I created a service account and granted it the Cloud Scheduler and Cloud Run Admin roles. On the Cloud Run permissions tab, the account is given the Cloud Run Invoker permission. The Cloud Run endpoint can be triggered from the console and returns successfully. The Cloud Scheduler job can be created if no authentication is required, but when it sends a request, Cloud Run returns a 403 HTTP response. The command used is
gcloud beta scheduler jobs create http *job_name* --schedule="* * * * *" --uri="https://*cloud-run-app-name-*cno4ptsl2q-ew.a.run.app" --http-method=GET --oidc-service-account-email="*project_id_number*@cloudservices.gserviceaccount.com"
When this command is run, an invalid argument error occurs. When I try to create the job in the console instead, it fails with an Unknown Error.
OIDC
needs the URL in the aud (audience) parameter; make sure you have it. Best would be to use OAuth.
OAUTH
you need only the service account and the scope https://www.googleapis.com/auth/cloud-platform
When you use OIDC authentication, you must specify the OIDC audience in your command if you didn't specify it in the URI.
Refer to the documentation on Cloud Scheduler's OIDC audience flag for more info.
It seems that your URI didn't include an audience value.
Retry creating the job after adding the audience flag to your command.
This is my command, which succeeded in creating a Cloud Scheduler job:
gcloud scheduler jobs create http deax-tweets-collection --schedule='* * * * *' \
  --uri='https://job-name-cno4ptsl2q-ew.a.run.app' --http-method='GET' \
  --oidc-service-account-email='XXXXX@project-id.iam.gserviceaccount.com' \
  --oidc-token-audience='https://job-name-cno4ptsl2q-ew.a.run.app'
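Cloud Run rejects ID tokens whose aud claim doesn't match the service URL, which is why the missing audience flag produces a 403. If you capture a token while debugging, you can inspect its claims with a few lines of standard-library Python (this only decodes the payload; it does not verify the signature):

```python
import base64
import json


def jwt_claims(token: str) -> dict:
    """Decode the payload segment of a JWT (no signature verification)."""
    payload = token.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))


# e.g. check that jwt_claims(token)["aud"] equals your Cloud Run service URL
```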

Access a cloud run from another cloud run

I am developing an application where I have hosted the frontend in Cloud Run: public access, no authentication.
Another Cloud Run service hosts the backend. This requires authentication and is not open to the public.
Of course, if I disable authentication on the backend service, everything works smoothly.
Is it possible to access the backend, with authentication enabled, from the frontend Cloud Run service?
Both services are in the same serverless VPC.
As captured in the official docs, the frontend can securely and privately invoke the backend by leveraging the Invoker IAM role:
Grant the frontend's service account the Cloud Run Invoker IAM role on the backend service.
When you issue a request from the frontend to the backend, you must attach an identity token to the request; the official documentation has code examples.
To connect two Cloud Run applications privately, you need to obtain an identity token and add it to the Authorization header of the outbound request to the target service.
For Cloud Run service A (running with service account SA1) to be able to connect to private Cloud Run service B, you need to:
Update the IAM permissions of service B to give SA1 the Cloud Run Invoker role (roles/run.invoker).
Obtain an identity token (a JWT) from the metadata service:
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/identity?audience=URL"
where URL is the URL of service B (i.e. https://*.run.app).
Add the header Authorization: Bearer TOKEN, where TOKEN is the response obtained from the previous command.
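The same two-step flow sketched in Python, assuming the requests package is in the container image; the metadata call only works when running on GCP, where the metadata server is reachable:

```python
METADATA_IDENTITY_URL = (
    "http://metadata.google.internal/computeMetadata/v1/"
    "instance/service-accounts/default/identity"
)


def identity_token_request(audience: str):
    """Build the metadata-server call that mints an ID token for `audience`."""
    return METADATA_IDENTITY_URL, {"audience": audience}, {"Metadata-Flavor": "Google"}


def call_private_service(url: str):
    """Service A calling private service B with an identity token."""
    import requests  # assumed present in the Cloud Run container image

    meta_url, params, headers = identity_token_request(url)
    token = requests.get(meta_url, params=params, headers=headers).text
    return requests.get(url, headers={"Authorization": f"Bearer {token}"})
```

The audience parameter must be service B's own URL, matching the aud claim that Cloud Run checks on the receiving side.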

How to print the error codes during authentication of google cloud platform using service account?

I am using the Cloud Storage and BigQuery services of Google Cloud Platform. I'm using a service account to authenticate my application and the Cloud SDK to perform actions on them. As of now, I'm able to connect and perform actions. I am curious: suppose my service account file gets modified, or the path to that file is somehow lost. Is there any way to verify whether the service account is still valid, so that I can print error codes and take action based on them?
Using Google Cloud SDK commands, you can request additional debugging information:
gcloud has two flags that give the user control over the information displayed:
--log-http logs all HTTP requests made to the server
--verbosity controls which messages are displayed (e.g. error or critical)
The Cloud Storage gsutil tool has two options:
-D requests additional debug information
-DD additionally requests the full HTTP upstream payload
However, to have greater control over processes and errors, use the Google Cloud Client Libraries to authenticate and access Cloud Storage.

How to provide Service Instance specific Credentials in Cloud Foundry with Service Broker API?

A request to list all service instances to the Cloud Controller API of Cloud Foundry (API Docs) shows a credentials property in the response body.
I know you can provide credentials in service bindings and service keys through the Open Service Broker API, but how do I fill this global credentials object in a service instance?
In my opinion, this can only happen during service provisioning, but all the Service Broker API defines in the provisioning response is a dashboard URL and an operation.
I looked at a couple of my lab environments, which have a number of different service brokers installed on them. None of them used the field you're asking about: in cf curl /v2/service_instances, the dictionary resources[].entity.credentials was always empty.
My understanding is that service credentials are associated with a service binding or a service key, not the service itself. If you want to see the service bindings or service keys, you need to use a different API call.
Ex: for a service binding, cf curl /v2/service_instances/<service-instance-guid>/service_bindings. In that output, resources[].entity.credentials should be populated with the service information (i.e. hostname, port, username, password, etc.; whatever is provided by the service).
Similarly, service key credentials are available via cf curl /v2/service_instances/<service-instance-guid>/service_keys.
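Given the resources[].entity.credentials shape described above, pulling the credentials out of a cf curl response in Python might look like the following; the sample response here is invented purely for illustration:

```python
import json


def extract_credentials(api_response: dict) -> list:
    """Collect the entity.credentials dict from each resource in a v2 API response."""
    return [r["entity"]["credentials"] for r in api_response.get("resources", [])]


# Example: parse the JSON printed by
#   cf curl /v2/service_instances/<guid>/service_bindings
sample = json.loads("""
{"resources": [{"entity": {"credentials": {"hostname": "db.local", "port": 5432}}}]}
""")
```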
Maybe someone else can come along and tell us the purpose of this global field, but at the time of me writing this it appears to be unused.
Hope that helps!