Google Cloud Function 403 for internal authenticated requests

I am calling a cloud function from within my GCP project.
I receive a 403 (Permission Denied) when the function is configured with "Allow internal traffic only"; see
https://cloud.google.com/functions/docs/networking/network-settings#ingress_settings
When the ingress restriction is removed, there is no issue: the function responds with status 200.
The function does not allow unauthenticated access; IAM policies are configured.
Following the example from https://cloud.google.com/functions/docs/securing/authenticating#function-to-function:
# main.py
import requests

# TODO<developer>: set these values
REGION = None
PROJECT_ID = None
RECEIVING_FUNCTION = 'hello-get'

# Constants for setting up the metadata server request
# See https://cloud.google.com/compute/docs/instances/verifying-instance-identity#request_signature
function_url = f'https://{REGION}-{PROJECT_ID}.cloudfunctions.net/{RECEIVING_FUNCTION}'
metadata_server_url = \
    'http://metadata/computeMetadata/v1/instance/service-accounts/default/identity?audience='
token_full_url = metadata_server_url + function_url
token_headers = {'Metadata-Flavor': 'Google'}

def hello_trigger(request):
    # Fetch an identity token for the receiving function from the metadata server
    token_response = requests.get(token_full_url, headers=token_headers)
    jwt = token_response.text
    # Call the receiving function with the token in the Authorization header
    function_headers = {'Authorization': f'bearer {jwt}'}
    function_response = requests.get(function_url, headers=function_headers)
    function_response.raise_for_status()
    return function_response.text

def hello_get(req):
    return 'Hello there...'
Deploying the function and the triggering function with desired ingress settings:
gcloud functions deploy hello-get --trigger-http --entry-point hello_get --runtime python37 --ingress-settings internal-only
gcloud functions deploy hello-trigger --trigger-http --entry-point hello_trigger --runtime python37 --ingress-settings all --allow-unauthenticated
Calling hello-trigger returns 403.
Changing ingress of hello-get solves the issue:
gcloud functions deploy hello-get --trigger-http --entry-point hello_get --runtime python37 --ingress-settings all
Now calling hello-trigger returns 200.
The service account used for Cloud Functions is given the Functions Invoker Role for this setup.

When you set the ingress traffic to internal-only, only traffic coming from your VPC or from your VPC Service Controls perimeter is accepted.
Here, the call from your trigger function doesn't come from YOUR VPC, but from another one (a serverless VPC, managed by Google, the land where the Cloud Functions are deployed). Therefore, the ingress setting isn't satisfied and you get a 403.
So, for this you have 2 solutions:
Use only the IAM service to filter who can and cannot invoke your function, and leave the function "public" with ingress=all (the solution proposed by John in his 2nd comment). It's already a high level of security; a sketch of the binding follows.
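A minimal sketch of that binding (the member email is a placeholder; hello-get is the receiving function from the question):
gcloud functions add-iam-policy-binding hello-get \
--member="serviceAccount:TRIGGER_SA@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/cloudfunctions.invoker"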
However, sometimes, for regulatory reasons (or for an old-fashioned security team design), network control is preferred.
If you want to pass through your VPC, you need to:
Create a serverless VPC connector in the same region as your trigger function
Deploy your trigger function with this serverless VPC connector
Set the egress traffic to all (--egress-settings=all)
Like this, all the outgoing traffic of your trigger function will pass through the serverless VPC connector; thus, the traffic is routed into your VPC before trying to reach your "ingress-internal" Cloud Function, and it will be accepted. A gcloud sketch of these steps follows.
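As a sketch (the connector name, region, network, and IP range are assumptions for illustration):
gcloud compute networks vpc-access connectors create my-connector \
--region REGION --network default --range 10.8.0.0/28
gcloud functions deploy hello-trigger --trigger-http --entry-point hello_trigger \
--runtime python37 --vpc-connector my-connector --egress-settings all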
If your function uses the ingress=all setting, anyone can reach it from the internet.
However, if you don't make the function publicly accessible (that is, authorized for unauthenticated users), only valid requests (authenticated AND authorized with the role cloudfunctions.invoker) will be processed by your Cloud Function.
In fact, there is a common layer in front of every Google service, named GFE: Google Front End. This layer is in charge of many things (exposing your service over HTTPS, managing your certificates, discarding OSI layer 4 DDoS attacks, ...), among them checking the authentication header and performing the authorization check against the IAM service.
Therefore, in case of a layer 4 DDoS attack, GFE filters these attacks by default. In case of a layer 7 attack, only valid, authorized requests are allowed, and you pay only for those. The filtering performed by GFE is free.

Related

GCP Config Create Gateway Bug

It doesn't seem possible to create an API Gateway config for an API I've created using:
gcloud api-gateway apis create test-api --project=acme-prd
Then the following command fails
gcloud api-gateway api-configs create 01 \
--api=test-api --openapi-spec=./acme-web-gateway-v2.yaml \
--project=acme-prd --backend-auth-service-account=svc-owner@acme-prd.iam.gserviceaccount.com
With the error:
ERROR: (gcloud.api-gateway.api-configs.create) FAILED_PRECONDITION: API Gateway Management Service Agent does not have permission to create Service Configs for Service "test-api-3qz6mxqfw7klr.apigateway.acme-prd.cloud.goog", or the Service does not exist.
Noting that the service account svc-owner@acme-prd.iam.gserviceaccount.com has Owner privileges on the project.
Is there something I am missing? This is preventing a Terraform deployment. I've used gcloud commands to demonstrate the issue.
Also of note, this does not work in the GCP UI either. :(
Permissions granted to the account being used: [screenshot omitted]
Cheers
KH
To resolve this, you will need to ensure that the Service Agent account has the necessary permissions for the specified service. Check the API Gateway Service Agent and verify that it has the "Service Account User" role on the backend-auth service account; creating API configs (apigateway.apis.create) also requires owner/editor permissions.
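A hedged sketch of that binding (the API Gateway service agent follows the documented service-PROJECT_NUMBER@gcp-sa-apigateway.iam.gserviceaccount.com pattern; PROJECT_NUMBER is a placeholder):
gcloud iam service-accounts add-iam-policy-binding svc-owner@acme-prd.iam.gserviceaccount.com \
--member="serviceAccount:service-PROJECT_NUMBER@gcp-sa-apigateway.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"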
Check in the Google Cloud Console, or with the command gcloud services list, whether the API Gateway API, Service Management API, and Service Control API are enabled, because these APIs are prerequisites. You will need to enable them if they are not already enabled, using the commands below:
gcloud services enable apigateway.googleapis.com
gcloud services enable servicemanagement.googleapis.com
gcloud services enable servicecontrol.googleapis.com
Attaching documents for creating an API, Gateway API access, and troubleshooting, for your reference.
Edit-1:
I have tried to create an API Gateway config using the steps below and successfully created an API config.
Create an API using the command below:
gcloud api-gateway apis create test-api
Create an API config using the command below:
gcloud api-gateway api-configs create 01 --api=test-api --openapi-spec=openapi2-functions.yaml --project=project-id
The output is:
waiting for API Config [01] to be created for API [test-api]...done.
I used the openapi2-functions.yaml file from this doc. Can you check whether your YAML file has any mistakes?
The image below shows the API config that I created. [screenshot omitted]
I followed this guide; you can try to create an API gateway using it and let me know if you have any issues.

Call to K8S version API for the EKS cluster in ap-northeast-2 (seoul) is failing with unauthorized code 401

We are calling the K8S API to get the version of the cluster.
The URL is https://<cluster-endpoint>:443/version .
But the HTTP request is failing with this error: GET request to the remote host failed [HTTP-Code: 401]: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Unauthorized","reason":"Unauthorized","code":401}
This call works in other AWS regions, e.g. us-east-2 (Ohio), us-east-1, ap-south-1, etc., but fails specifically in this region.
I have checked in the IAM console that this region is enabled for the STS service.
While calling the K8S API we are passing an STS token (with standard AWS signature calculation).
So I don't understand why it is failing in only one specific region.
I can access the cluster using the AWS EKS CLI. All operations on the cluster work fine, and kubectl also works. For reference, a token equivalent to ours can be produced with the CLI, as sketched below.
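A sketch (the cluster name is a placeholder): EKS bearer tokens are presigned STS GetCallerIdentity requests, so the token we construct should match what this command emits:
aws eks get-token --cluster-name <cluster-name> --region ap-northeast-2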

Denied AWS Opensearch write permission

I'm trying to connect a Spring Boot application on AWS EKS to AWS OpenSearch, both of which reside in a VPC. Though the connection is successful, I'm unable to write any data to the index.
All the AWS resources (EKS and OpenSearch) are configured using Terraform. I have mentioned the Elasticsearch subnet CIDR in the egress rules attached to the application. Also, the application correctly assumes the EKS service account and the pod role, which I mentioned in the services stanza for Elasticsearch. In the policy attached to the pod role, I see all the permissions mentioned: ESHttpPost, ESHttpGet, ESHttpPut, etc.
This is the error I get,
{"error":{"root_cause": [{"type":"security_exception", "reason":"no
permissions for [indices:data/write/index] and User
[name=arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-
hellodemo-role-1,backend_roles=
[arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-hellodemo
role-1], requested
Tenant=null]"}],"type":"security_exception", "reason":"no
permissions for [indices:data/write/index] and User
[name=arn:aws:iam::ACCOUNT_NO:role/helloworld demo-eks-PodRle-
hellodemo-role-1,
backend_roles=[arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-
PodRle-hellodemo role-1], requested Tenant=null]"},"status":403}
Is there anything that I'm missing out on while configuring?
This error can be resolved by assigning the pod role to the additional_roles key in the Elasticsearch Terraform module. This is then taken care of internally by AWS when it receives a request from EKS. The same mapping can also be applied directly, as sketched below.
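A minimal sketch of the equivalent mapping through the OpenSearch security plugin's role-mapping REST API (the domain endpoint is a placeholder, all_access stands in for whatever role you actually use, and older Elasticsearch-based domains use _opendistro/_security instead of _plugins/_security):
curl -XPUT "https://<domain-endpoint>/_plugins/_security/api/rolesmapping/all_access" \
-H 'Content-Type: application/json' \
-d '{"backend_roles": ["arn:aws:iam::ACCOUNT_NO:role/helloworld-demo-eks-PodRle-hellodemo-role-1"]}'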

Google Cloud API Gateway User Authentication

I am trying to implement user authentication via JWTs in Google Cloud API Gateway.
I have configured the security requirement object and a security definitions object in the API config as per the documentation:
securityDefinitions:
  google_id_token:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "https://accounts.google.com"
    x-google-jwks_uri: "https://www.googleapis.com/oauth2/v3/certs"
security:
  - google_id_token: []
And the backend service is a Cloud Run service:
x-google-backend:
  address: https://my-apis-fskhw40mta-uk.a.run.app
However, when I call the API with my user bearer token, I receive a 403 error and this error in the Stackdriver logs:
"jwt_authn_access_denied{Audiences_in_Jwt_are_not_allowed}"
The Python client code to call the API is:
import subprocess
import urllib3
from urllib.parse import urlencode

# Get the caller's identity token via gcloud (strip the trailing newline)
id_token = subprocess.run(['gcloud', 'auth', 'print-identity-token'],
                          capture_output=True, text=True).stdout.strip()
http = urllib3.PoolManager()
encoded_args = urlencode({'arg1': "val1"})
r = http.request(
    'GET',
    API_URL + "/run-api?" + encoded_args,
    headers={"Authorization": f"Bearer {id_token}"}
)
What is the correct way to include an audience when using a user account (not a service account)?
Update 1
I have found one way to do it; however, I'm not sure it is correct. If I add 32555940559.apps.googleusercontent.com to the securityDefinitions so it looks like this:
securityDefinitions:
  google_id_token:
    authorizationUrl: ""
    flow: "implicit"
    type: "oauth2"
    x-google-issuer: "https://accounts.google.com"
    x-google-jwks_uri: "https://www.googleapis.com/oauth2/v3/certs"
    x-google-audiences: "https://oauth2.googleapis.com/token, 32555940559.apps.googleusercontent.com"
it will allow access while Cloud Run permits unauthenticated calls; however, I still cannot call Cloud Run with authentication enabled. Cloud Run returns a 403 error due to the API Gateway service account not having permissions, even though it has Cloud Run Invoker.
Is there anything special I need to do to enable API Gateway to call Cloud Run, other than granting Cloud Run Invoker?
Adding 32555940559.apps.googleusercontent.com is not recommended, since this is the default. Ideally the audience should be unique for every service, which is why we normally use the service's own URL for this purpose. This prevents the tokens being reused, e.g. by a malicious or insecure server, to authenticate to other services which expect a different audience.
You can specify the audience you want to use when you create your identity token. For example: gcloud auth print-identity-token --audiences "https://service-acldswax.fx.gateway.dev"
You can specify the same audience in x-google-audiences to make this work. Alternatively, the service name prefixed with "https://" will be accepted by default. This can be specified as "host" in the API specification file and would normally be something like "api.example.com".
Note that anyone can generate a valid identity token which will be accepted by the gateway. The gateway is performing authentication, but not authorization. You can either do authorization in the app, or for a private API you may wish to use a different OAuth2 client.
When this is set up correctly you should be able to connect to the API gateway, but you will probably want your Cloud Run service to be locked down, to prevent the gateway from being bypassed. As you mentioned, the permission required to do this is included in the "Cloud Run Invoker" role; this needs to be granted to the gateway's service account on the Cloud Run service or one of its containing resources (e.g. project, folder, organization), as sketched below.
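A sketch of that grant on the service itself (region and the gateway's service account are placeholders; my-apis is the Cloud Run service from the question):
gcloud run services add-iam-policy-binding my-apis \
--region=REGION \
--member="serviceAccount:GATEWAY_SA@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/run.invoker"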
I would suggest running the following commands to confirm/check the settings again :
Verify URL and API config in the gateway: gcloud api-gateway gateways describe $GATEWAY --location $REGION
Verify gateway config service account and backend URL (in base64 encoded document.contents): gcloud api-gateway api-configs describe --api $API $API_CONFIG --view FULL
Verify permissions on the Cloud Run service: gcloud run services get-iam-policy $SERVICE --region $REGION

Getting permission denied error when calling Google cloud function from Cloud scheduler

I am trying to invoke a Google Cloud Function, which is HTTP-triggered, from Cloud Scheduler.
But whenever I run the Cloud Scheduler job, it always fails with a permission denied error:
{
  httpRequest: {
    status: 403
  }
  insertId: "14igacagbanzk3b"
  jsonPayload: {
    @type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
    jobName: "projects/***********/locations/europe-west1/jobs/twilio-cloud-scheduler"
    status: "PERMISSION_DENIED"
    targetType: "HTTP"
    url: "https://europe-west1-********.cloudfunctions.net/function-2"
  }
  logName: "projects/*******/logs/cloudscheduler.googleapis.com%2Fexecutions"
  receiveTimestamp: "2020-09-20T15:11:13.240092790Z"
  resource: {
    labels: {
      job_id: "***********"
      location: "europe-west1"
      project_id: "**********"
    }
    type: "cloud_scheduler_job"
  }
  severity: "ERROR"
  timestamp: "2020-09-20T15:11:13.240092790Z"
}
Solutions I tried:
Tried putting the Google Cloud Function in the same region as App Engine, as suggested by some users.
Gave the Google-provided Cloud Scheduler service account service-****@gcp-sa-cloudscheduler.iam.gserviceaccount.com the Owner role and the Cloud Functions Admin role.
My cloud function has an ingress setting of Allow all traffic.
My Cloud Scheduler job only works when I run the command below:
gcloud functions add-iam-policy-binding cloud-function --member="allUsers" --role="roles/cloudfunctions.invoker"
On the Cloud Scheduler page, you have to add a service account to use to call the private Cloud Function. In the Cloud Scheduler setup, you have to:
Click on SHOW MORE on the bottom
Select Add OIDC token in the Auth Header section
Add a service account email in the service account email for the Scheduler
Fill in the Audience with the same base URL as the Cloud Functions (the URL provided when you deployed it)
The service account email for the Scheduler must be granted the role cloudfunctions.invoker. An equivalent CLI setup is sketched below.
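A sketch of the equivalent job creation from the CLI (the schedule, service account, and project values are placeholders; the URL matches the question's function):
gcloud scheduler jobs create http twilio-cloud-scheduler \
--location=europe-west1 \
--schedule="*/10 * * * *" \
--uri="https://europe-west1-PROJECT_ID.cloudfunctions.net/function-2" \
--oidc-service-account-email=SCHEDULER_SA@PROJECT_ID.iam.gserviceaccount.com \
--oidc-token-audience="https://europe-west1-PROJECT_ID.cloudfunctions.net/function-2"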
In my case the problem was related to a restricted ingress setting on the Cloud Function. I had set it to 'allow internal traffic only', but that allows only traffic from services using the VPC, whereas Cloud Scheduler doesn't use it, as per the doc's explanation:
Internal-only HTTP functions can only be invoked by HTTP requests that are created within a VPC network, such as those from Kubernetes Engine, Compute Engine, or the App Engine Flexible Environment. This means that events created by or routed through Pub/Sub, Eventarc, Cloud Scheduler, Cloud Tasks and Workflows cannot trigger these functions.
So the proper way to do it is:
set the ingress to 'all traffic'
remove the permission for allUsers with the role Cloud Function Invoker
add the permission for the created service account with the role Cloud Function Invoker
or just set that permission globally for the service account in the IAM console (you could do that when creating the service account as well); see the commands sketched after this list
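The remove/add steps above might look like this (function name, region, and service account are placeholders based on the question):
gcloud functions remove-iam-policy-binding function-2 --region=europe-west1 \
--member="allUsers" --role="roles/cloudfunctions.invoker"
gcloud functions add-iam-policy-binding function-2 --region=europe-west1 \
--member="serviceAccount:SCHEDULER_SA@PROJECT_ID.iam.gserviceaccount.com" \
--role="roles/cloudfunctions.invoker"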
If you tried all of the above (which should be the first things to look at, such as adding an OIDC token, giving your service account the Cloud Function Invoker and/or Cloud Run Invoker role (for 2nd gen functions), etc.), please also check the following:
For me the only thing that fixed this was adding the following Google internal service account to IAM:
service-YOUR_PROJECT_NUMBER@gcp-sa-cloudscheduler.iam.gserviceaccount.com
And giving this internal service account the following role:
Cloud Scheduler Service Agent
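That grant can be sketched as (project ID and number are placeholders):
gcloud projects add-iam-policy-binding PROJECT_ID \
--member="serviceAccount:service-YOUR_PROJECT_NUMBER@gcp-sa-cloudscheduler.iam.gserviceaccount.com" \
--role="roles/cloudscheduler.serviceAgent"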
See also:
https://cloud.google.com/scheduler/docs/http-target-auth
And especially for this case:
https://cloud.google.com/scheduler/docs/http-target-auth#add