How to execute workflow from gcp cloud tasks - google-cloud-platform

I'm trying to execute a workflow from Cloud Tasks, but I immediately get a 401 error.
Here is the code to enqueue the task:
req := &taskspb.CreateTaskRequest{
    Parent: fmt.Sprintf("projects/%v/locations/us-central1/queues/myqueue", projectID),
    Task: &taskspb.Task{
        PayloadType: &taskspb.Task_HttpRequest{
            HttpRequest: &taskspb.HttpRequest{
                HttpMethod: taskspb.HttpMethod_POST,
                Url:        fmt.Sprintf("https://workflowexecutions.googleapis.com/v1/projects/%v/locations/us-central1/workflows/myworkflow/executions", projectID),
                Body:       bodyJson,
                AuthorizationHeader: &taskspb.HttpRequest_OidcToken{
                    OidcToken: &taskspb.OidcToken{
                        ServiceAccountEmail: serviceAccount,
                    },
                },
            },
        },
    },
}
_, err = client.CreateTask(ctx, req)
The call fails with an UNAUTHENTICATED 401 error.
The service account I'm using has the Workflows Invoker role.
What am I missing here?

A 401 UNAUTHENTICATED response from workflowexecutions.googleapis.com usually means the credential type is wrong rather than that a role is missing. Cloud Tasks supports two token types for HTTP targets: OIDC ID tokens, which are meant for endpoints that verify identity tokens (such as Cloud Run and Cloud Functions), and OAuth access tokens, which are what Google APIs like the Workflow Executions API expect. Since your task targets a *.googleapis.com URL with an OidcToken, switch the authorization header to an OAuth token (taskspb.HttpRequest_OauthToken / taskspb.OAuthToken) for the same service account. Also confirm that the service account has the Workflows Invoker role (roles/workflows.invoker) on the target workflow or project.
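One way to verify that the token type is the issue (a sketch; the queue name, location, and service-account email below are placeholders taken from the question) is to create the same task from the gcloud CLI with an OAuth access token, which Google APIs such as workflowexecutions.googleapis.com accept:

```shell
# Create an HTTP task that calls the Workflow Executions API with an
# OAuth access token instead of an OIDC ID token.
gcloud tasks create-http-task \
  --queue=myqueue \
  --location=us-central1 \
  --url="https://workflowexecutions.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/workflows/myworkflow/executions" \
  --method=POST \
  --oauth-service-account-email="SA_EMAIL" \
  --body-content='{"argument": "{}"}'
```

If this succeeds while the OIDC version fails, the equivalent fix in the Go code is to replace the OidcToken authorization header with an OAuthToken one.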

Related

Cloudfunction v2 invoked by pubsub logs "Either allow unauthenticated invocations or set the proper Authorization header."

I've created a cloudfunction v2 that's invoked by a pubsub topic/subscription messagePublished event. However, whenever it's triggered, I get this error:
{
  httpRequest: {
    latency: "0s"
    protocol: "HTTP/1.1"
    remoteIp: "xx.xxx.xx.xx"
    requestMethod: "POST"
    requestSize: "3912"
    requestUrl: "https://handler-function-cloud-custodian-xxxxxx-uc.a.run.app/?__GCP_CloudEventsMode=CUSTOM_PUBSUB_projects%2Fchase-test-custodian%2Ftopics%2Fevent-topic-cloud-custodian"
    serverIp: "xxx.xxx.xx.xx"
    status: 403
    userAgent: "APIs-Google; (+https://developers.google.com/webmasters/APIs-Google.html)"
  }
  insertId: "xxxxx"
  labels: {
    goog-managed-by: "cloudfunctions"
  }
  logName: "projects/chase-test-custodian/logs/run.googleapis.com%2Frequests"
  receiveTimestamp: "2023-01-30T17:45:14.427320714Z"
  resource: {
    labels: {…}
    type: "cloud_run_revision"
  }
  severity: "WARNING"
  spanId: "xxxxxx"
  textPayload: "The request was not authenticated. Either allow unauthenticated invocations or set the proper Authorization header. Read more at https://cloud.google.com/run/docs/securing/authenticating Additional troubleshooting documentation can be found at: https://cloud.google.com/run/docs/troubleshooting#unauthorized-client"
  timestamp: "2023-01-30T17:45:14.422306Z"
  trace: "projects/chase-test-custodian/traces/xxxxxx"
}
I tried adding the "allUsers" principal with "Cloud Functions Invoker" role to the cloud function, but I get the same error regardless.
The subscription was created by Terraform when I specified it as the Cloud Functions IAM member, using the TF in the gist below:
https://gist.github.com/chase-replicated/0aa241db7da7e31fa63601fcd3308e91
I believe you need to grant the invoker permission to the service account associated with the function so Google can authorize the call. You can also try creating a new service account for the function.
If that doesn't help, I would suggest opening a support case with Google Cloud, as they can dig through your logs and monitoring (and possibly your available service accounts) to investigate.
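Since a v2 Cloud Function is served by Cloud Run, the invoker binding generally has to be granted on the underlying Cloud Run service rather than on the Cloud Functions resource; a hedged sketch (the service name, region, and the subscription's service-account email are placeholders):

```shell
# Allow the Pub/Sub push subscription's service account to invoke
# the Cloud Run service that backs the gen2 function.
gcloud run services add-iam-policy-binding handler-function-cloud-custodian \
  --region=us-central1 \
  --member="serviceAccount:PUSH_SA_EMAIL" \
  --role="roles/run.invoker"
```

The push subscription must also be configured to attach an OIDC token (a push auth service account); otherwise the request arrives unauthenticated regardless of IAM bindings.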

How to impersonate a service account from a different project when creating a Cloud Run service in a workflow?

A workflow fails to start due to a permission-denied error when trying to impersonate a service account from a different project.
given:
Projects:
project1
project2
Service Accounts:
sa1#project1 with roles:
Workflows Admin
Cloudrun Admin
Service Account Token Creator
Service Account User
sa2#project2
Workflows:
A workflow1 in project1 (creates a cloudrun instance with serviceAccountName=sa2#project2)
Result:
{
  "body": {
    "error": {
      "code": 403,
      "message": "Permission 'iam.serviceaccounts.actAs' denied on service account sa2#project2 (or it may not exist).",
      "status": "PERMISSION_DENIED"
    }
  },
  "code": 403,
  "headers": {
    "Alt-Svc": "h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000,h3-Q050=\":443\"; ma=2592000,h3-Q046=\":443\"; ma=2592000,h3-Q043=\":443\"; ma=2592000,quic=\":443\"; ma=2592000; v=\"46,43\"",
    "Cache-Control": "private",
    "Content-Length": "244",
    "Content-Type": "application/json; charset=UTF-8",
    "Date": "Wed, 14 Sep 2022 10:53:24 GMT",
    "Server": "ESF",
    "Vary": "Origin",
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "SAMEORIGIN",
    "X-Xss-Protection": "0"
  },
  "message": "HTTP server responded with error code 403",
  "tags": [
    "HttpError"
  ]
}
The error message "Permission 'iam.serviceaccounts.actAs' denied on service account sa2#project2" indicates that a user needs permission to impersonate a service account in order to attach that service account to a resource. In other words, the user needs the iam.serviceAccounts.actAs permission on the service account.
There are several predefined roles that allow a principal to impersonate a service account:
Service Account User
Service Account Token Creator
Workload Identity User
Alternatively, you can grant a different predefined role, or a custom role, that includes permissions to impersonate service accounts.
Service Account User (roles/iam.serviceAccountUser): This role includes the iam.serviceAccounts.actAs permission, which allows principals to indirectly access all the resources that the service account can access. For example, if a principal has the Service Account User role on a service account, and the service account has the Cloud SQL Admin role (roles/cloudsql.admin) on the project, then the principal can impersonate the service account to create a Cloud SQL instance.
You can try granting the Service Account User role on sa2#project2 to the principal that is creating the Cloud Run instance.
Refer to the link for more information on impersonating service accounts.
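As a sketch of that grant (assuming the # in the account names above stands for @, and the workflow runs as sa1):

```shell
# Let sa1 (used by the workflow) attach/impersonate sa2 when creating
# the Cloud Run service. This binding lives on sa2 itself, in project2.
gcloud iam service-accounts add-iam-policy-binding \
  sa2@project2.iam.gserviceaccount.com \
  --member="serviceAccount:sa1@project1.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
```

Note that the binding must be granted on sa2 (in project2), not on project1.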
My client is a large corporation.
Therefore the project-level configuration to attach service accounts across projects is disabled (the iam.disableCrossProjectServiceAccountUsage constraint is enforced).
This is the root cause of my problem and I cannot change it.
More information is available here:
https://cloud.google.com/iam/docs/impersonating-service-accounts#attaching-different-project
My workaround:
I needed this as it seemed the simplest way to access an external BigQuery dataset and project.
Solution:
Export a private key for sa2#project2 and pass it as a secret to the application layer.
Use the key file to impersonate the sa2#project2 service account.
Example (the sqlalchemy-bigquery dialect accepts the key file via credentials_path):
engine = create_engine('bigquery://project2', location="asia-northeast1", credentials_path='/path/to/key.json')

Amplify API REST with AWS_IAM: Request failed with status code 403

I'm trying to execute API calls from ReactNative AWS Amplify to API Gateway endpoint using AWS_IAM authorization.
I do it by calling (all Amplify initialization params are set):
import { API, Auth } from "aws-amplify";
...
API.get("MyApiName", "/resource")
.then(resp => { ... })
.catch(e => console.log(JSON.stringify(e));
I have console printout like:
{
"message":"Request failed with status code 403",
"name":"Error",
"stack": "...",
"headers":{
"Accept":"application/json, text/plain, */*",
"User-Agent":"aws-amplify/3.8.23 react-native",
"x-amz-date":"20210908T172556Z",
"X-Amz-Security-Token":"IQoJb3...",
"Authorization":"AWS4-HMAC-SHA256 Credential=ASIA23GCUWEDETN632PS/20210908/us-east-1/execute-api/aws4_request, SignedHeaders=host;user-agent;x-amz-date;x-amz-security-token, Signature=2a06fb4d8eb672164bfd736790fb1658edef1240d12a38afb599a9e33020c3cd"
...
}
So, it looks like the request is Signed!
I use a Cognito User Pool and the corresponding Identity Pool. They are both set up properly, because the same settings work for authorized access to S3 storage via AWS Amplify.
The authenticated role for the Cognito Identity Pool has execute-api permission to invoke the API resource method. It also has permission to invoke the Lambda that is linked to the API's resource method.
All looks fine, but I am still getting the 403 Forbidden error.
What's missing here?
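For reference, the execute-api permission described in the question would look roughly like the following inline policy on the Identity Pool's authenticated role (the role name, account ID, API ID, and resource path are placeholders):

```shell
# Attach an inline policy allowing the authenticated Cognito role to
# invoke the API Gateway resource under IAM (SigV4) authorization.
aws iam put-role-policy \
  --role-name Cognito_MyPoolAuth_Role \
  --policy-name AllowInvokeMyApi \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "execute-api:Invoke",
      "Resource": "arn:aws:execute-api:us-east-1:ACCOUNT_ID:API_ID/*/GET/resource"
    }]
  }'
```

A common cause of a correctly signed request still getting 403 is a Resource ARN that does not match the stage, method, or path actually being called, so it is worth comparing the ARN against the exact request.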

Getting permission denied error when calling Google cloud function from Cloud scheduler

I am trying to invoke an HTTP-triggered Google Cloud Function from Cloud Scheduler.
But whenever I run the Cloud Scheduler job, it always fails with a permission denied error:
{
  httpRequest: {
    status: 403
  }
  insertId: "14igacagbanzk3b"
  jsonPayload: {
    #type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
    jobName: "projects/***********/locations/europe-west1/jobs/twilio-cloud-scheduler"
    status: "PERMISSION_DENIED"
    targetType: "HTTP"
    url: "https://europe-west1-********.cloudfunctions.net/function-2"
  }
  logName: "projects/*******/logs/cloudscheduler.googleapis.com%2Fexecutions"
  receiveTimestamp: "2020-09-20T15:11:13.240092790Z"
  resource: {
    labels: {
      job_id: "***********"
      location: "europe-west1"
      project_id: "**********"
    }
    type: "cloud_scheduler_job"
  }
  severity: "ERROR"
  timestamp: "2020-09-20T15:11:13.240092790Z"
}
Solutions I tried:
Putting the Google Cloud Function in the same region as App Engine, as suggested by some users.
Granting the Google-provided Cloud Scheduler service agent service-****@gcp-sa-cloudscheduler.iam.gserviceaccount.com the Owner role and the Cloud Functions Admin role.
My Cloud Function has the ingress setting "Allow all traffic".
My Cloud Scheduler job only works when I run the command below:
gcloud functions add-iam-policy-binding cloud-function --member="allUsers" --role="roles/cloudfunctions.invoker"
On the Cloud Scheduler page, you have to add a service account to use to call the private Cloud Function. In the Cloud Scheduler setup, you have to:
Click SHOW MORE at the bottom
Select Add OIDC token in the Auth Header section
Enter the service account email the Scheduler should use
Fill in the Audience with the same base URL as the Cloud Function (the URL provided when you deployed it)
The Scheduler's service account must be granted the Cloud Functions Invoker role (roles/cloudfunctions.invoker)
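The same setup can be scripted (a sketch; the project ID, schedule, and service-account email are placeholders, and the job name is taken from the question's logs):

```shell
# Create a Scheduler job that attaches an OIDC token for the function;
# the audience must match the function's URL.
gcloud scheduler jobs create http twilio-cloud-scheduler \
  --location=europe-west1 \
  --schedule="*/5 * * * *" \
  --uri="https://europe-west1-PROJECT_ID.cloudfunctions.net/function-2" \
  --http-method=POST \
  --oidc-service-account-email="SA_EMAIL" \
  --oidc-token-audience="https://europe-west1-PROJECT_ID.cloudfunctions.net/function-2"
```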
In my case the problem was related to a restricted ingress setting on the Cloud Function. I had set it to "Allow internal traffic only", but that allows only traffic from services inside a VPC network, which Cloud Scheduler is not, per the documentation:
Internal-only HTTP functions can only be invoked by HTTP requests that are created within a VPC network, such as those from Kubernetes Engine, Compute Engine, or the App Engine Flexible Environment. This means that events created by or routed through Pub/Sub, Eventarc, Cloud Scheduler, Cloud Tasks and Workflows cannot trigger these functions.
So the proper way to do it is:
set the ingress to "Allow all traffic"
remove the permission for allUsers with the Cloud Functions Invoker role
add the permission for the created service account with the Cloud Functions Invoker role
or just grant that role globally to the service account in the IAM console (you can also do that when creating the service account)
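The steps above can be sketched with gcloud (the function name and region are taken from the question's logs; the service-account email is a placeholder):

```shell
# 1. Open ingress to all traffic (other deploy settings are reused).
gcloud functions deploy function-2 --region=europe-west1 --ingress-settings=all

# 2. Drop the public invoker binding.
gcloud functions remove-iam-policy-binding function-2 --region=europe-west1 \
  --member="allUsers" --role="roles/cloudfunctions.invoker"

# 3. Grant invoker to the Scheduler's service account only.
gcloud functions add-iam-policy-binding function-2 --region=europe-west1 \
  --member="serviceAccount:SA_EMAIL" --role="roles/cloudfunctions.invoker"
```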
If you have tried all of the above (which should be the first things to look at: adding an OIDC token, granting your service account the Cloud Functions Invoker and/or Cloud Run Invoker role (for 2nd-gen functions), etc.), please also check the following.
For me, the only thing that fixed this was adding the following Google-managed service agent to IAM:
service-YOUR_PROJECT_NUMBER@gcp-sa-cloudscheduler.iam.gserviceaccount.com
and granting it the following role:
Cloud Scheduler Service Agent
See also:
https://cloud.google.com/scheduler/docs/http-target-auth
And especially for this case:
https://cloud.google.com/scheduler/docs/http-target-auth#add

AWS API Gateway WebSocket Connection Error

I created an API with AWS API Gateway and Lambda, the same as 'https://github.com/aws-samples/simple-websockets-chat-app'. But the API is not working. I get an error when I try to connect: "WebSocket connection to 'wss://b91xftxta9.execute-api.eu-west-1.amazonaws.com/dev' failed: Error during WebSocket handshake: Unexpected response code: 500"
My connection code:
var ws = new WebSocket("wss://b91xftxta9.execute-api.eu-west-1.amazonaws.com/dev");
ws.onopen = function (d) {
  console.log(d);
};
Try adding $context.error.validationErrorString and $context.integrationErrorMessage to the logs for the stage.
I added a bunch of stuff to the Log Format section, like this:
{ "requestId":"$context.requestId", "ip": "$context.identity.sourceIp",
"requestTime":"$context.requestTime", "httpMethod":"$context.httpMethod",
"routeKey":"$context.routeKey", "status":"$context.status",
"protocol":"$context.protocol", "errorMessage":"$context.error.message",
"path":"$context.path",
"authorizerPrincipalId":"$context.authorizer.principalId",
"user":"$context.identity.user", "caller":"$context.identity.caller",
"validationErrorString":"$context.error.validationErrorString",
"errorResponseType":"$context.error.responseType",
"integrationErrorMessage":"$context.integrationErrorMessage",
"responseLength":"$context.responseLength" }
In early development this allowed me to see this type of error:
{
"requestId": "QDu0QiP3oANFPZv=",
"ip": "76.54.32.210",
"requestTime": "21/Jul/2020:21:37:31 +0000",
"httpMethod": "POST",
"routeKey": "$default",
"status": "500",
"protocol": "HTTP/1.1",
"integrationErrorMessage": "The IAM role configured on the integration or API Gateway doesn't have permissions to call the integration. Check the permissions and try again.",
"responseLength": "35"
}
Try using wscat -c wss://b91xftxta9.execute-api.eu-west-1.amazonaws.com/dev in a terminal. This should let you connect to it. If you don't have wscat installed, run npm install -g wscat.
To get more details, enable logging for your API: Stages -> Logs/Tracing -> CloudWatch Settings -> Enable CloudWatch Logs. Then send a connection request again and monitor your API logs in CloudWatch. In my case, I had the following error:
Execution failed due to configuration error: API Gateway does not have permission to assume the provided role {arn_of_my_role}
So I added API Gateway to my role's Trust Relationships, as mentioned here, and that fixed the problem.
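The trust-relationship change described above amounts to letting API Gateway assume the integration role; a hedged sketch (the role name is a placeholder):

```shell
# Replace the role's trust policy so apigateway.amazonaws.com may assume it.
aws iam update-assume-role-policy \
  --role-name my-websocket-integration-role \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "apigateway.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'
```

Note that update-assume-role-policy replaces the whole trust policy, so merge in any existing principals (e.g. lambda.amazonaws.com) rather than overwriting them.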