https://cloud.google.com/solutions/authentication-in-http-cloud-functions
The document suggests setting up a Google Cloud Storage bucket and then granting the service account the "storage.buckets.get" permission on that bucket.
That permission is then used to authenticate access to the HTTP Google Cloud Functions.
We are talking about authenticating the HTTP Cloud Functions, yet we are borrowing a permission from a Google Cloud Storage bucket. This seems like a hack to me.
If we could just set up permissions directly on each Cloud Function through the Google Cloud Console, that would be great.
Are you using the authentication solution suggested by Google in the document above, or do you have better approaches?
To set up "storage.buckets.get", does it mean I grant the service account the "Storage Object Viewer" role?
The solution proposed in the link you posted is indeed one of the ways. In fact, you can use any other Google Cloud Platform product (not only Storage buckets) to check the chosen account's permissions on it.
An alternative that can work is the following (a rough sketch in code follows the steps):
Prepare a Cloud Function that has the authorized users' emails listed.
The Cloud Function retrieves the 'Authorization' header of the incoming HTTP request, which contains the token generated for the account that made the request.
The function calls the tokeninfo endpoint with the token from that header to retrieve the account's email (from the JSON response body). The URL returning the email will look like this:
url = "https://www.googleapis.com/oauth2/v1/tokeninfo?fields=email&access_token
=" + token_from_the_request_header;
The function verifies that the returned email is in the list of authorized ones...
...and if yes, executes the function's logic.
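A minimal sketch of those steps, assuming a Python HTTP Cloud Function and the requests library; the allowlist contents and function name are placeholders:

import requests

# Placeholder allowlist of accounts permitted to call this function.
AUTHORIZED_EMAILS = {"some-account@my-project.iam.gserviceaccount.com"}

def guarded_function(request):
    # Expecting "Authorization: Bearer <token>" on the incoming request.
    token = request.headers.get("Authorization", "").replace("Bearer ", "", 1)
    if not token:
        return ("Missing token", 401)

    # Ask the tokeninfo endpoint which account the token belongs to.
    resp = requests.get(
        "https://www.googleapis.com/oauth2/v1/tokeninfo",
        params={"fields": "email", "access_token": token},
    )
    if resp.status_code != 200:
        return ("Invalid token", 401)

    if resp.json().get("email") not in AUTHORIZED_EMAILS:
        return ("Forbidden", 403)

    # ...the function's actual logic goes here...
    return ("OK", 200)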
To use Cloud Functions you need to put your modules in buckets. By granting the account the 'storage.buckets.get' permission on the bucket, you authorize the service account to trigger your HTTP Cloud Function; and similarly, you revoke authorization from another service account by removing its 'storage.buckets.get' permission.
To set up the 'storage.buckets.get' permission, you either select "Storage Admin" from the standard roles, pick 'storage.legacyBucketReader' / 'storage.legacyBucketWriter' from the legacy roles, or define a custom role with the 'storage.buckets.get' permission.
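As an illustration, granting such a legacy role with the Python Cloud Storage client could look roughly like this (bucket and service account names are placeholders):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-function-bucket")

# Add the service account to a legacy role that carries storage.buckets.get.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.legacyBucketReader",
    "members": {"serviceAccount:invoker@my-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)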
Our AWS account has a large number of buckets that different users have access to. There is a Lambda function that selects data from S3 and returns it to the client via API Gateway. The client can specify in the API request which bucket Lambda should read from. But how do I check that the client is requesting a bucket they actually have permission for?
In the IAM policies, I can only state that they can access a specific API resource, but that resource is shared by everyone. In a Lambda authorizer, I can't get information about the user's rights and permissions (or can I?).
Please tell me how to solve this issue. Which way should I go?
P.S. These have to be users authorized in Amazon; I can't give them my JWT with my data.
It would be your responsibility to implement the authentication and permission requirements in your own code. The person making the request via API Gateway is not an IAM User, so AWS does not recognise them and cannot grant access based on the normal AWS permission model.
Your code would need to:
Recognise and authenticate the user
Determine what resources (buckets) that user is permitted to access
Only provide access to permitted resources
How to do this is your decision. You should start with a way of identifying and authenticating the user.
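A rough sketch of that pattern in a Python Lambda handler; it assumes a Lambda authorizer in front of API Gateway has already authenticated the caller, and the USER_BUCKETS mapping stands in for your own permission store:

import boto3

# Placeholder mapping of authenticated principals to permitted buckets.
USER_BUCKETS = {
    "user-123": {"reports-bucket", "exports-bucket"},
}

s3 = boto3.client("s3")

def handler(event, context):
    # A Lambda authorizer passes the principal it authenticated in the
    # request context.
    user_id = event["requestContext"]["authorizer"]["principalId"]
    params = event.get("queryStringParameters") or {}
    bucket = params.get("bucket")

    # Only serve buckets this user is explicitly permitted to read.
    if bucket not in USER_BUCKETS.get(user_id, set()):
        return {"statusCode": 403, "body": "Forbidden"}

    obj = s3.get_object(Bucket=bucket, Key=params.get("key"))
    return {"statusCode": 200, "body": obj["Body"].read().decode("utf-8")}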
I am setting up a service account in GCP in order to call the Directory API.
But I always get a permission error: "Not Authorized to access this resource/api".
I have set up this role but no luck. How do I know which permissions I need to configure in order to call the API?
The Google Directory API is not a part of GCP, hence any roles / permissions you assign to your service account will not work.
You have to create a role and assign it to a user in order to be able to work with this API.
Your service account is not a Domain Admin, so it doesn't have access. You can, however, enable domain-wide delegation and make the service account impersonate a domain admin so your requests will be accepted.
This page describes how to allow members and resources to impersonate, or act as, an Identity and Access Management (IAM) service account. It also explains how to see which members are able to impersonate a given IAM service account.
Have a look at this answer, which may be useful to you. One more document that you may find helpful is "Authorising your request".
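As an illustration, a delegated Directory API call in Python could look roughly like this, assuming the google-auth and google-api-python-client libraries; the key file, scope, and admin address are placeholders:

from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/admin.directory.user.readonly"]

credentials = service_account.Credentials.from_service_account_file(
    "key.json", scopes=SCOPES
)
# Impersonate a domain admin; plain service account credentials are rejected.
delegated = credentials.with_subject("admin@example.com")

directory = build("admin", "directory_v1", credentials=delegated)
users = directory.users().list(customer="my_customer", maxResults=10).execute()
for user in users.get("users", []):
    print(user["primaryEmail"])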
I have a Cloud Function whose access is restricted by Cloud IAM. I have an external service (Auth0) that fires hooks when something happens. I want such a hook to trigger my Cloud Function, but the hook must first authorize itself with Cloud IAM.
What I want to do:
Create a new member auth0-hooks
Give that member the Cloud Function Invoker permission
In the hook's code, fetch an IAM token from Google (via the metadata server?)
Use that token within the request to the Cloud Function trigger URL
Trigger access through Cloud IAM and the given token
I am currently stuck at the step of creating a new member auth0-hooks. I thought that would be trivial, but quickly figured out that there is no way to simply add a new member. I thought about creating a service account but was unsure whether a service account can be used from outside GCP (by requesting its access token via the Google metadata server).
That's where I am currently stuck.
The service account is the correct way. A service account is a technical account: like a user account, but for servers.
You can grant permissions to it. When you need to use this service account from outside the GCP environment, you need to create a service account key file, which contains a private key (it's a secret, keep it safe!). With this service account key file you are able to generate the identity token required by your hook to call the Cloud Function and be authenticated and authorized.
The Google Cloud Auth libraries, available in several languages, help you with this.
Note: metadata servers are internal services on Google Cloud only, not reachable externally.
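For instance, with the Python auth library, generating and using such an identity token could look roughly like this (the key file path and function URL are placeholders):

from google.auth.transport.requests import AuthorizedSession
from google.oauth2 import service_account

FUNCTION_URL = "https://us-central1-my-project.cloudfunctions.net/my-function"

# Build ID token credentials from the key file; the audience must be the
# URL of the Cloud Function being called.
credentials = service_account.IDTokenCredentials.from_service_account_file(
    "key.json", target_audience=FUNCTION_URL
)

# AuthorizedSession refreshes the token and attaches it as a Bearer header.
session = AuthorizedSession(credentials)
response = session.get(FUNCTION_URL)
print(response.status_code, response.text)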
How do you set up multi-account (multi-project) access in GCP? It is possible in AWS by using AssumeRole; does anyone know how to do it in Google Cloud (GCP)?
I tried to find the AWS equivalent in GCP, but was not able to find any documentation.
As documented, AssumeRole in AWS returns a set of temporary security credentials that you can use to access AWS resources that you might not normally have access to.
In AWS you can create one set of long-term credentials in one account. Then you can use temporary security credentials to access all the other accounts by assuming roles in those accounts.
The equivalent of the above in GCP would be creating short-lived credentials for service accounts to impersonate their identities (Documentation link).
Accordingly, in GCP you have the “caller” and the “limited-privilege service account” for whom the credential is created.
To implement this scenario, first use the handy documentation on Service Accounts and Cloud IAM permission roles in GCP, since each account is a Service Account with specific role permissions, in order to understand how accounts work in GCP.
The link I posted above provides detailed information on the flows that allow a caller to create short-lived credentials for a service account, and on the supported credential types.
Additionally, this link can assist you in visualizing and understanding the resource hierarchy architecture in GCP, and gives examples of how to structure your projects according to your organization's structure.
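A short Python sketch of impersonation with the google-auth library; the target service account and project are placeholders:

import google.auth
from google.auth import impersonated_credentials
from google.cloud import storage

# Caller credentials: whatever identity this code currently runs as.
source_credentials, _ = google.auth.default()

# Short-lived credentials for the limited-privilege service account,
# comparable to the temporary credentials returned by AWS AssumeRole.
target_credentials = impersonated_credentials.Credentials(
    source_credentials=source_credentials,
    target_principal="limited-sa@other-project.iam.gserviceaccount.com",
    target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
    lifetime=300,  # seconds
)

# Use the impersonated identity against resources in the other project.
client = storage.Client(project="other-project", credentials=target_credentials)
for bucket in client.list_buckets():
    print(bucket.name)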
The basic answer is service accounts; limited-time service account credentials are available.
For assigning permissions across projects (but still in the same organization), you can create a custom role.
For letting any user assume the identity of a service account, use the "Service Account User" role.
For limited-time authorization tokens, you have OAuth 2.0 for server-to-server calls, particularly with JWTs where available.
I'd like to give an application access to one of my Google Storage buckets by giving it a suitable OAuth2 token.
If I understand https://cloud.google.com/storage/docs/authentication correctly, then there is no way to limit a token to a specific bucket.
What is the easiest / recommended way to create a token with limited access? I guess I could create an entirely new Google account just for this purpose, adjust the ACLs of the bucket to give access to the new user as well, and then create an OAuth token using that user. But that seems... awkward and not very scalable.
(In case it matters: the application is using the OAuth2 device flow, i.e. it gives me a Google URL that I have to visit and use to log in)
You can create a service account, with access limited to the required bucket.
https://cloud.google.com/iam/docs/understanding-service-accounts
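A sketch of that setup with the Python Cloud Storage client, granting the service account read access on only this one bucket (all names are placeholders):

from google.cloud import storage

client = storage.Client()
bucket = client.bucket("example-bucket")

# Grant objectViewer on this bucket only; give the service account no
# project-level roles, so its token is useful for nothing else.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append({
    "role": "roles/storage.objectViewer",
    "members": {"serviceAccount:app@my-project.iam.gserviceaccount.com"},
})
bucket.set_iam_policy(policy)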
Use bucket ACLs and grant access to the user's email address on the bucket. The email address must belong to a Google Account or G Suite account that the user logs in with.
The following gsutil example grants john.doe@example.com WRITE permission on the bucket example-bucket:
gsutil acl ch -u john.doe@example.com:WRITE gs://example-bucket
Documentation:
GSUTIL - ACLs