Authorized JavaScript Origins wildcard alternative - google-cloud-platform

We've recently introduced Google Single Sign-On to our platform. It works well, except for one issue. All our development branches are automatically assigned a URL that looks something like https://{branch-name}.ourdomain.com. As of right now, we have to manually add the authorized origin for each environment, which is not scalable for us.
Is there a solution, such as an API we can use in our deployment process, that doesn't require us to authorize from the same origin for all our branches and do a redirect dance? Ideally we would add https://*.ourdomain.com as an authorized origin, but that doesn't seem to be allowed in Google Cloud Platform.

There is no API for adding authorized origins dynamically; for now it must be done manually in the Google console. The OAuth engineering team is still evaluating the best way an API could be deployed, as this carries many security risks that need to be properly assessed. JavaScript origins cannot contain certain characters, including wildcard characters ('*'), to ensure the security and privacy of accounts. You need to add the exact URIs the application is going to use as JavaScript origins. Unfortunately, there is no good alternative workaround for your use case; the only option is to add each environment manually.
Note: There are several feature requests for this, such as "Can't update Google Cloud Javascript Origin domains via API", but it is unlikely they will be implemented soon.
Refer to Google API: Authorized JavaScript Origins for more information.

Alternatively, you can redirect the user to the base domain that is registered in the Google Cloud console, and redirect back to the original site with the token post-authentication.
Please take a look at this article: https://www.kcoleman.me/2016/12/28/wildcard-google-auth-for-multiple-subdomains.html
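Below is a minimal sketch of that redirect approach, assuming a dedicated https://auth.ourdomain.com host is the single origin registered in the console; the return_to parameter and /auth/callback path are illustrative names, not something prescribed by Google or the article.

```typescript
// On any branch subdomain (e.g. https://feature-x.ourdomain.com):
// send the user to the single authorized origin, remembering where they came from.
function startSignIn(): void {
  const returnTo = encodeURIComponent(window.location.origin);
  window.location.href = `https://auth.ourdomain.com/login?return_to=${returnTo}`;
}

// On https://auth.ourdomain.com (the only origin registered in the Google console):
// after Google sign-in completes, send the ID token back to the branch site.
function finishSignIn(idToken: string): void {
  const returnTo = new URLSearchParams(window.location.search).get("return_to") ?? "";
  // Validate return_to against *.ourdomain.com to avoid an open redirect.
  if (!/^https:\/\/[a-z0-9-]+\.ourdomain\.com$/.test(returnTo)) {
    throw new Error("Unexpected return_to origin");
  }
  // Pass the token in the URL fragment so it is not sent to the branch server or logged there.
  window.location.href = `${returnTo}/auth/callback#id_token=${encodeURIComponent(idToken)}`;
}
```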

Related

Can we have revisions URL use the custom domain in Google Cloud Run instead of the "assigned by GCP" URLs

We are web developers building a deployment preview service with Google Cloud Run. Not a lot of experience with this... ;-)
We have mapped a custom domain to the service and the problem is that when the developers are pushing revisions, GCP returns the revision URLs assigned by GCP, not revision URLs using the custom domain.
This is problematic for us because of cross-origin issues, and the way we whitelist apps that can call our APIs, etc.
So my question is: is there a way to have revision URLs be subdomains of our custom domain, or something like that?
Would like...
https://branch-name---service-name-123456.a.run.app
if possible to become something like...
https://branch-name---service-name-preview.customdomain.com

AWS Cognito OIDC Customizations

https://consumerdatastandardsaustralia.github.io/standards/#security-profile
I am trying to set up AWS Cognito as an OIDC provider. I am able to create a user pool; however, a lot of custom data is needed. For example, Cognito's ".well-known/openid-configuration" returns a few details but is missing introspection_endpoint, revocation_endpoint, claims_supported, etc.
Similarly, customization of the /authorize endpoint with additional claims is needed.
Any help or suggestions would be really helpful.
Regards & Thanks
Claims can be somewhat customised with a lambda: https://docs.aws.amazon.com/cognito/latest/developerguide/user-pool-lambda-pre-token-generation.html
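For example, here is a minimal sketch of such a pre token generation trigger as a TypeScript Lambda handler; the custom:tenant attribute and the suppressed claim are illustrative, not attributes Cognito defines for you:

```typescript
// Pre Token Generation trigger: adds or suppresses claims in the ID token.
// The response shape below follows the documented V1 trigger event format.
export const handler = async (event: any) => {
  event.response = {
    claimsOverrideDetails: {
      claimsToAddOrOverride: {
        // "custom:tenant" is a made-up custom attribute used here for illustration.
        tenant: event.request.userAttributes["custom:tenant"] ?? "default",
      },
      claimsToSuppress: ["phone_number"],
    },
  };
  return event;
};
```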
introspection_endpoint and revocation_endpoint are not core OAuth; they are extensions. I have found Cognito does not generally implement extensions, and many parts of OAuth 2.0 core are not implemented either.
Cognito is missing many features you may expect to get out of the box, and there is a seemingly large and opaque backlog which support constantly references when you point out that a standard feature is missing.
no silent refresh capability in the hosted UI, so no safe way to store the refresh token.
no support for custom auth flow in the hosted UI
no passwordless support in the hosted UI
no ability to pre-populate a field in the hosted UI (e.g. username)
no ability to customise the plethora of obscure error messages in the custom UI
fixed now, but for years the email addresses were case sensitive!
If you choose not to use the hosted UI, there is no way to get any OAuth scopes.
There are many non-OAuth cognito-idp calls that you may be able to use with the access token: https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cognito-idp/index.html
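For example, GetUser needs only the user's access token; a rough sketch with the AWS SDK for JavaScript v3 (the region is just an example):

```typescript
import {
  CognitoIdentityProviderClient,
  GetUserCommand,
} from "@aws-sdk/client-cognito-identity-provider";

// GetUser is authorized by the access token itself (it must carry the
// aws.cognito.signin.user.admin scope), not by SigV4-signed AWS credentials.
async function fetchProfile(accessToken: string): Promise<Record<string, string>> {
  const client = new CognitoIdentityProviderClient({ region: "ap-southeast-2" });
  const user = await client.send(new GetUserCommand({ AccessToken: accessToken }));
  return Object.fromEntries(
    (user.UserAttributes ?? []).map((a) => [a.Name ?? "", a.Value ?? ""])
  );
}
```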
If these don't fit your needs, I would suggest you consider other auth services, or adjust your expectations if you choose to move forward with Cognito. Good luck!

How to access secrets in a static site hosted in an S3 bucket

I'm new, and since I could not find relevant information in my searches, I decided to ask for your advice.
I created a SPA (React) that receives a token, validates it, and if the token is valid, renders some content. That SPA is hosted in S3.
Now, I want to add some API keys (sensitive ones). Adding them to the code (manually or during the build of the bundle) would be a bad idea, no?
I thought about storing them in AWS, for example in Secrets Manager, and using the SDK (JS) to retrieve them. But here is my doubt: I don't want to hardcode the AWS credentials for the SDK in the code, nor use something like Cognito, since the authentication would be done by this app through the token that it receives. What would be the best way to achieve this? I would appreciate advice, and it would help if you could point me to some resources.
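For concreteness, the Secrets Manager idea would look roughly like this with the JS SDK (v3); the secret name and region are made up, and this is exactly the point where the SDK needs credentials from somewhere:

```typescript
import {
  SecretsManagerClient,
  GetSecretValueCommand,
} from "@aws-sdk/client-secrets-manager";

// GetSecretValue is a signed API call, so the SDK must obtain AWS credentials
// from somewhere, which is exactly the problem for a public static SPA.
// "my-app/api-keys" and the region are placeholder values.
async function loadApiKeys(): Promise<Record<string, string>> {
  const client = new SecretsManagerClient({ region: "us-east-1" });
  const result = await client.send(
    new GetSecretValueCommand({ SecretId: "my-app/api-keys" })
  );
  return JSON.parse(result.SecretString ?? "{}");
}
```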
Feel free to make as many suggestions as you want. Thanks.

Google Cloud Run service URL (discovery)

I am running several Cloud Run services which are automatically assigned URLs in the following format:
https://SERVICE_NAME-XXXXXXX-ew.a.run.app/
This is not particularly easy to work with, or to pass these URLs to clients. An alternative is to use a custom domain, but this requires hardcoding subdomains in DNS records (as far as I understand), and I would like to avoid that and use the default URLs.
What is the best practice for working with these URLs? I can imagine keeping a service->URL mapping and passing it to clients, but I would like to avoid reinventing the wheel.
Edit: I've released an external tool called runsd that lets you do this. Check it out: https://github.com/ahmetb/runsd
Thanks for this question! "Service discovery by name" for Cloud Run is very much an active area of work, though there are no timelines we can share yet.
You can see a prototype of me running this on Cloud Run here: https://twitter.com/ahmetb/status/1233147619834118144
APIs like the linked Google Cloud Service Directory are geared more towards custom/DIY service discovery you might want to build into your RPC stack, such as gRPC. It's more of a managed name directory that you can integrate with your RPC layer.
If you are interested in participating in an alpha for this feature in the future, drop me an email at ahmetb at google.
You can use Service Directory, a service currently in beta.
At service deployment
Create your service with a name and the URL as metadata
In your code
Request the service metadata with its name, and get the URL
Use the URL
You can't use the endpoint feature of the service because you don't have an IP/port.
However, for now, there is no client library, so you have to use the API directly.
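A rough sketch of the lookup step against the Service Directory REST API (v1beta1, where those key/value pairs are exposed as "metadata"); the project, location, namespace and the "url" metadata key are placeholders you would choose at deploy time:

```typescript
// Resolve a Service Directory entry by name and read the Cloud Run URL that
// was stored in its metadata at deploy time. All names below are placeholders.
async function lookupServiceUrl(accessToken: string, service: string): Promise<string> {
  const name =
    `projects/my-project/locations/europe-west1/namespaces/prod/services/${service}`;
  const resp = await fetch(
    `https://servicedirectory.googleapis.com/v1beta1/${name}:resolve`,
    { method: "POST", headers: { Authorization: `Bearer ${accessToken}` } }
  );
  if (!resp.ok) throw new Error(`Resolve failed: ${resp.status}`);
  const body = await resp.json();
  // v1beta1 returns the service with its key/value pairs under "metadata".
  return body.service.metadata.url;
}
```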

List subdomains attached to a Google Apps domain?

Referring to the Admin SDK APIs, I don't see anything that would allow me to list the subdomains defined for a Google Apps domain (via the CPanel in Domain Settings -> Domain Names).
Is there any way to collect this information directly? Even the older provisioning API doesn't support any listing of subdomains and/or domain aliases.
There are no domain-specific API calls that allow you to list out secondary domains and domain aliases. However, you can try listing all users (with aliases) and groups (with aliases) in order to see which domains are in use. Technically, this may not be all domains (some may not have any users/groups associated with them yet). But for practical purposes it should serve.
Obviously the performance would be much better if you could perform a single API call and get all domains but there's currently no method for that. This is the only workaround I'm aware of.
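A rough sketch of that workaround with the Node.js googleapis client, assuming an auth client already authorized for the Directory API user and group read-only scopes; it simply collects the domain part of every primary address and alias:

```typescript
import { google } from "googleapis";

// Collect every domain seen in user and group addresses (primary + aliases).
// `auth` is assumed to be an OAuth2/JWT client with the
// admin.directory.user.readonly and admin.directory.group.readonly scopes.
async function domainsInUse(auth: any): Promise<Set<string>> {
  const directory = google.admin({ version: "directory_v1", auth });
  const domains = new Set<string>();
  const addDomain = (email?: string | null) => {
    if (email?.includes("@")) domains.add(email.split("@")[1].toLowerCase());
  };

  // Page through all users in the account.
  let pageToken: string | undefined;
  do {
    const res = await directory.users.list({ customer: "my_customer", maxResults: 500, pageToken });
    for (const user of res.data.users ?? []) {
      addDomain(user.primaryEmail);
      (user.aliases ?? []).forEach(addDomain);
    }
    pageToken = res.data.nextPageToken ?? undefined;
  } while (pageToken);

  // Page through all groups as well.
  pageToken = undefined;
  do {
    const res = await directory.groups.list({ customer: "my_customer", maxResults: 200, pageToken });
    for (const group of res.data.groups ?? []) {
      addDomain(group.email);
      (group.aliases ?? []).forEach(addDomain);
    }
    pageToken = res.data.nextPageToken ?? undefined;
  } while (pageToken);

  return domains;
}
```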