I followed the tutorial on the Google Cloud Run page and I have created a small, private Google Cloud Run API. Now I can use curl as described here to make requests to my API:
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" SERVICE_URL
So far so good. Now I would like to build a slackbot. The slackbot should respond to slash commands, and whenever a certain slash command is issued it should 1) authenticate itself with the API and then 2) issue a command.
Is that possible? I looked through the entire Slack API documentation but could not find an example in which a Slack bot has to authenticate itself with another service. Could someone point me to a guide or tutorial where the author implemented a private API on Google Cloud that is called from a slackbot?
It's not possible. Instead of giving Slack the ability to make an authenticated request to your Cloud Run instance, configure the service to allow unauthenticated access and verify that each request genuinely came from Slack by checking the token included in the request.
This is described in Slack's Events API documentation:
token: The shared-private callback token that authenticates this callback to the application as having come from Slack. Match this against what you were given when the subscription was created. If it does not match, do not process the event and discard it.
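As a minimal sketch of that check, assuming a Python/Flask service on Cloud Run and that the verification token from Slack is stored in an environment variable called SLACK_TOKEN (names chosen here for illustration, not from the question):

import os
from flask import Flask, request, abort

app = Flask(__name__)

@app.route("/slack/command", methods=["POST"])
def slash_command():
    # Slash commands arrive as form-encoded POSTs that include a "token" field.
    if request.form.get("token") != os.environ["SLACK_TOKEN"]:
        abort(403)  # token mismatch: discard, it did not come from Slack
    # ...handle the command here...
    return "Command received"

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))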
Related
I have a Spring Boot based web application which already authenticates the user. I would like to create some AWS APIs using AWS API Gateway and a Spring Boot REST app deployed on EC2. The user will log into the existing application and call the new AWS APIs from browser AJAX calls (like an SPA, I guess, but I have a server and can use client credentials/secrets if needed). What is the best way to handle security for the APIs? My first thought was to generate a JWT signed with a shared server-side secret and verify it with an AWS Lambda, but this seems a bit non-standard. OAuth2 might be better but might also be overkill, and I am not sure the performance would be good. A few requirements:
The token passed to the API should be a user-specific token and have an expiration (and hence a refresh token).
The user is already logged into my web app, and calling the API should not issue a login challenge.
AWS API Gateway should be able to verify the token before passing it to the application.
Any token passed to the API should probably be generated by the logged-in web application. The server knows the user is authenticated and should generate the user access token on behalf of the user. The AWS API should be able to figure out what privileges the user has based on the user principal or scopes in the token.
I've thought about using AWS Cognito, but I don't want to require the users to pre-exist in a user pool. Cognito seems geared more toward authentication than authorization, and I would not be using the Cognito login functionality. I don't know if it's possible to use the oauth2/token endpoint alone with Cognito.
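As a rough sketch of the shared-secret JWT idea mentioned in the question (not a recommendation), assuming PyJWT, a TOKEN-type Lambda authorizer on API Gateway, and an environment variable SHARED_JWT_SECRET that matches what the web app signs with (all names illustrative):

import os
import jwt  # PyJWT

SECRET = os.environ["SHARED_JWT_SECRET"]  # same secret the web app uses to sign

def handler(event, context):
    # API Gateway passes the Authorization header value as authorizationToken.
    token = event["authorizationToken"].replace("Bearer ", "")
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        raise Exception("Unauthorized")  # API Gateway turns this into a 401
    return {
        "principalId": claims.get("sub", "unknown"),  # assumes the user id is in "sub"
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": "Allow",
                "Resource": event["methodArn"],
            }],
        },
        # Pass scopes through so the backend can make authorization decisions.
        "context": {"scopes": claims.get("scope", "")},
    }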
I am trying to build a web application where a user logs in and accesses protected data in a Cloud Storage bucket (Firebase Authentication is used for login). ACLs are set on the objects in Cloud Storage. The user should be able to read only the objects he has access to.
I want to get a Bearer access token for the user, with his scopes. When I send a REST request with the bearer token, I should be able to read the objects in the bucket that he has access to; if the user doesn't have access, I should get an access denied message. I can't use a service account here, as I am getting data specific to a user.
How can I do this? Suggestions please, and thanks in advance.
To use a Bearer token with your application, you will need OAuth 2.0. This is the authentication method Cloud Storage uses to allow access to the system via REST calls. To use it, you need to add the line below to the headers of every request that requires authentication, as described in the official documentation here.
Authorization: Bearer <oauth2_token>
This is how you provide token-based authentication to Cloud Storage. As for generating the token, you can try two different approaches.
One is to use the OAuth 2.0 Playground, where you can specify the API you want the authorization token for.
The other, as specified here, is to run the command gcloud auth print-access-token, which gives you an access token for the default gcloud configuration.
Once you have the token and have configured the request header with it, you should be good to go.
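For example, a minimal request sketch in Python (the bucket and object names below are placeholders):

import requests

token = "<oauth2_token>"  # e.g. the output of: gcloud auth print-access-token
resp = requests.get(
    "https://storage.googleapis.com/storage/v1/b/my-bucket/o/my-object.txt",
    headers={"Authorization": f"Bearer {token}"},
    params={"alt": "media"},  # download the object contents instead of metadata
)
print(resp.status_code)  # 403 if the token's account has no access to the object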
Let me know if the information helped you!
I have created Cloud Functions on the GCP console; some are triggered by Firestore and some are HTTP endpoints. I have secured the former using Firebase Auth, but the latter HTTP endpoints are not secured, as I didn't find any way to authenticate them. Please help, as I am new to GCP.
Here's a code sample that shows how to allow only users who pass a Firebase ID token as a Bearer token in the Authorization header of the HTTP request, or in a __session cookie, to trigger the Cloud Function.
Alternatively this StackOverflow post may be of help.
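A minimal sketch of that pattern in a Python HTTP Cloud Function, assuming the client sends the Firebase ID token in an Authorization: Bearer header (the function name and response are illustrative):

import firebase_admin
from firebase_admin import auth
from flask import abort

firebase_admin.initialize_app()

def secured_endpoint(request):
    # Reject requests that do not carry a valid Firebase ID token.
    header = request.headers.get("Authorization", "")
    if not header.startswith("Bearer "):
        abort(401)
    try:
        decoded = auth.verify_id_token(header[len("Bearer "):])
    except Exception:
        abort(401)
    return f"Hello, {decoded['uid']}"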
I currently have a Lex chatbot that I would like to integrate with both Twilio and Cognito (in the sense that only Cognito-authenticated users will be able to communicate with Twilio and the Lex bot).
To this end, I've created an API Gateway that handles Twilio requests and pushes them to a Lambda function that interacts with Lex. I've also added a Cognito authorizer to my API Gateway that blocks users from interacting with Twilio if they are unauthenticated.
I don't currently have a back end app (long story), so for now users login to a Cognito-hosted UI that redirects to Google's homepage.
The problem? I haven't yet found a way to connect the authentication credentials given out at a user's Cognito log-in (which occurs on a web browser) to the API Gateway that communicates with Twilio (since Twilio is making the initial API calls). Currently there is no such connection, so all communication with Twilio (and therefore the lexbot) is blocked. I can't push the relevant tokens to Twilio when it makes the API call.
I have two feelings:
The issue probably comes from the fact that there is no connection between the web-based login and the text messages the end user sends to Twilio to kick off the whole process.
It seems like I will have to use a Custom Lambda Function Authorizer (I'd like to avoid this, if possible)
If it helps, I used this tutorial as a starting point.
Any ideas?
Any and all help or suggestions would be greatly appreciated!
Twilio developer evangelist here.
The HTTP requests that Twilio makes to the API gateway are completely disconnected from your user in a browser. About the best thing you can do to exclude users that haven't signed in is to store their phone numbers somewhere within AWS and, like you said, use a custom authorizer to check that the From phone number on the incoming webhook matches one of your users.
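As one hedged illustration of that idea, here is a sketch that does the lookup inside the webhook-handling Lambda itself rather than in an authorizer, assuming the signed-in users' phone numbers live in a DynamoDB table named signed_in_users (an illustrative name, not from the question):

import boto3
from urllib.parse import parse_qs

table = boto3.resource("dynamodb").Table("signed_in_users")  # illustrative name

def handler(event, context):
    # Twilio sends the SMS webhook as a form-encoded body; "From" is the sender's number.
    params = parse_qs(event.get("body") or "")
    from_number = params.get("From", [""])[0]
    if "Item" not in table.get_item(Key={"phone_number": from_number}):
        return {"statusCode": 403, "body": "Not an authenticated user"}
    # ...otherwise forward the message text to Lex as before and return TwiML...
    return {"statusCode": 200, "body": "<Response></Response>"}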
We are using Google Cloud Platform for big data analytics. For processing we are currently using Google Cloud Dataproc and Spark Streaming.
I want to submit a Spark job using the REST API, but when I call the URI with the API key, I get the error below:
{
  "error": {
    "code": 403,
    "message": "The caller does not have permission",
    "status": "PERMISSION_DENIED"
  }
}
URI :- https://dataproc.googleapis.com/v1/projects/orion-0010/regions/us-central1-f/clusters/spark-recon-1?key=AIzaSyA8C2lF9kT*************SGxAipL0
I created the API key from the Google console > API Manager.
While API keys can be used for associating calls with a developer project, it's not actually used for authorization. Dataproc's REST API, like most other billable REST APIs within Google Cloud Platform, uses oauth2 for authentication and authorization. If you want to call the API programmatically, you'll likely want to use one of the client libraries such as the Java SDK for Dataproc which provides convenience wrappers around the low-level JSON protocols, as well as giving you handy thick libraries for using oauth2 credentials.
You can also experiment with the direct REST API using Google's API explorer where you'll need to click the button on the top right that says "Authorize requests using OAuth 2.0".
I also noticed you used us-central1-f under the regions/ path for the Dataproc URI; note that Dataproc's regions don't map one-to-one with Compute Engine zones or regions; rather, Dataproc's regions will each contain multiple Compute Engine zones or regions. Currently there is only one Dataproc region available publicly, which is called global and is capable of deploying clusters into all Compute Engine zones. For an easy illustration of using an oauth2 access token, you can simply use curl along with gcloud if you have the gcloud CLI installed:
PROJECT='<YOUR PROJECT HERE>'
ACCESS_TOKEN=$(gcloud beta auth application-default print-access-token)
curl \
--header "Authorization: Bearer ${ACCESS_TOKEN}" \
--header "Content-Type: application/json" \
https://dataproc.googleapis.com/v1/projects/${PROJECT}/regions/global/clusters
Keep in mind that the ACCESS_TOKEN printed by gcloud here expires by nature (after about an hour). The key concept is that the token you pass along in HTTP headers for each request will generally be a "short-lived" token, and by design you'll have code which separately fetches new tokens, using a "refresh token", whenever the access tokens expire; this helps protect against accidentally compromising long-lived credentials. This "refresh" flow is part of what the thick auth libraries handle under the hood.
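As one illustration of such a library in Python, google-auth takes care of this refresh flow for you (assuming Application Default Credentials are configured, e.g. via gcloud auth application-default login):

import google.auth
import google.auth.transport.requests

# Loads Application Default Credentials; the library refreshes access tokens
# from the long-lived refresh token or service account key as they expire.
credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(google.auth.transport.requests.Request())
print(credentials.token)  # a short-lived access token, usable in a Bearer header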