Unable to create a scheduler using the @google-cloud/scheduler package - google-cloud-platform

I got this error while creating a scheduler job. The code worked when I ran it locally, but when I deployed it to a VM on GCP itself it started failing with this error:
Error: 7 PERMISSION_DENIED: Request had insufficient authentication scopes.
    at Object.callErrorFromStatus (/app/node_modules/@grpc/grpc-js/build/src/call.js:31:26)
    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client.js:180:52)
    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:336:141)
    at Object.onReceiveStatus (/app/node_modules/@grpc/grpc-js/build/src/client-interceptors.js:299:181)
    at /app/node_modules/@grpc/grpc-js/build/src/call-stream.js:160:78
    at processTicksAndRejections (internal/process/task_queues.js:79:11) {
  code: 7,
  details: 'Request had insufficient authentication scopes.',
  metadata: Metadata {
    internalRepr: Map {
      'google.rpc.errorinfo-bin' => [Array],
      'grpc-status-details-bin' => [Array],
      'grpc-server-stats-bin' => [Array]
    },
    options: {}
  },
  statusDetails: [
    ErrorInfo {
      metadata: [Object],
      reason: 'ACCESS_TOKEN_SCOPE_INSUFFICIENT',
      domain: 'googleapis.com'
    }
  ],
  reason: 'ACCESS_TOKEN_SCOPE_INSUFFICIENT',
  domain: 'googleapis.com',
  errorInfoMetadata: {
    service: 'cloudscheduler.googleapis.com',
    method: 'google.cloud.scheduler.v1.CloudScheduler.CreateJob'
  }
}

If you use the default service account on your Compute Engine instance, you have to update the access scopes: either allow full access to all Cloud APIs or add only the Cloud Scheduler scope.
If you don't use the default service account, there are no scopes to select (scope selection is a legacy mechanism and isn't offered for non-default service accounts).
Note: you have to stop the VM, change the scopes/service account, and then restart the VM.
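For context, creating a job with @google-cloud/scheduler looks roughly like the sketch below. The project ID, location, target URL, and the optional keyFilename path are placeholder assumptions, not taken from the question; the client code itself doesn't need to change for the scope fix, and an explicit service-account key is shown only as an alternative to relying on the VM's default credentials.

// Minimal sketch (assumed names): create a Cloud Scheduler job from Node.js.
const {CloudSchedulerClient} = require('@google-cloud/scheduler');

async function createJob() {
  // On a VM with the right scopes, default credentials are picked up automatically;
  // keyFilename is only needed if you prefer an explicit service-account key file.
  const client = new CloudSchedulerClient(/* {keyFilename: '/path/to/key.json'} */);

  const parent = client.locationPath('my-project-id', 'us-central1');
  const job = {
    httpTarget: {
      uri: 'https://example.com/task-handler', // placeholder target
      httpMethod: 'POST',
    },
    schedule: '*/5 * * * *', // every 5 minutes
    timeZone: 'Etc/UTC',
  };

  const [response] = await client.createJob({parent, job});
  console.log('Created job:', response.name);
}

createJob().catch(console.error);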

Related

Deploying a model to the regional endpoint on AI Platform Prediction succeeds, but deploying the same model to the global endpoint fails

I have a scikit-learn model saved in Cloud Storage which I am attempting to deploy with AI Platform Prediction. When I deploy this model to a regional endpoint, the deployment completes successfully:
➜ gcloud ai-platform versions describe regional_endpoint_version --model=regional --region us-central1
Using endpoint [https://us-central1-ml.googleapis.com/]
autoScaling:
  minNodes: 1
createTime: '2020-12-30T15:21:55Z'
deploymentUri: <REMOVED>
description: testing deployment to a regional endpoint
etag: <REMOVED>
framework: SCIKIT_LEARN
isDefault: true
machineType: n1-standard-4
name: <REMOVED>
pythonVersion: '3.7'
runtimeVersion: '2.2'
state: READY
However, when I try to deploy the exact same model, using the same Python/runtime versions, to the global endpoint, the deployment fails, saying there was an error loading the model:
(aiz) ➜ stanford_nlp_a3 gcloud ai-platform versions describe public_object --model=global
Using endpoint [https://ml.googleapis.com/]
autoScaling: {}
createTime: '2020-12-30T15:12:11Z'
deploymentUri: <REMOVED>
description: testing global endpoint deployment
errorMessage: 'Create Version failed. Bad model detected with error: "Error loading
the model"'
etag: <REMOVED>
framework: SCIKIT_LEARN
machineType: mls1-c1-m2
name: <REMOVED>
pythonVersion: '3.7'
runtimeVersion: '2.2'
state: FAILED
I tried making the .joblib object public to rule out a permissions difference between the two deployments, but the deployment to the global endpoint still failed. I removed the deploymentUri from the post since I have been experimenting with the permissions on this model object, but the paths are identical in the two model versions.
The machine types for the two deployments have to be different, and for the regional deployment I use min nodes = 1 while for global I can use min nodes = 0, but other than that and the etags everything else is exactly the same.
I couldn't find any information in the AI Platform Prediction regional endpoints docs page which indicated certain models could only be deployed to a certain type of endpoint. The "Error loading the model" error message doesn't give me a lot to go on since it doesn't appear to be a permissions issue with the model file.
When I add the --log-http option to the create version command, I see that the error code is 3, but the message doesn't reveal any additional information:
➜ ~ gcloud ai-platform versions create $VERSION_NAME \
--model=$MODEL_NAME \
--origin=$MODEL_DIR \
--runtime-version=2.2 \
--framework=$FRAMEWORK \
--python-version=3.7 \
--machine-type=mls1-c1-m2 --log-http
Using endpoint [https://ml.googleapis.com/]
=======================
==== request start ====
...
...
the final response from the server looks like this:
---- response start ----
status: 200
-- headers start --
<headers>
-- headers end --
-- body start --
{
  "name": "<name>",
  "metadata": {
    "@type": "type.googleapis.com/google.cloud.ml.v1.OperationMetadata",
    "createTime": "2020-12-30T22:53:30Z",
    "startTime": "2020-12-30T22:53:30Z",
    "endTime": "2020-12-30T22:54:37Z",
    "operationType": "CREATE_VERSION",
    "modelName": "<name>",
    "version": {
      <version info>
    }
  },
  "done": true,
  "error": {
    "code": 3,
    "message": "Create Version failed. Bad model detected with error: \"Error loading the model\""
  }
}
-- body end --
total round trip time (request+response): 0.096 secs
---- response end ----
----------------------
Creating version (this might take a few minutes)......failed.
ERROR: (gcloud.ai-platform.versions.create) Create Version failed. Bad model detected with error: "Error loading the model"
Can anyone explain what I am missing here?

Unable to Create New GCP Endpoints in Cloud Run

I'm using Cloud Run, Endpoints and Cloud Functions to build an API service. There are multiple endpoints running completely fine, but I'm no longer able to deploy any new endpoints.
The Cloud Run environment has an error that prevents it from making a call to the corresponding Cloud Function. Oddly enough, all other endpoints work fine, but I'm unable to create new endpoints.
I found this article: https://cloud.google.com/endpoints/docs/openapi/troubleshoot-response-errors but it's only for the BAD_GATEWAY error code.
All the code deploys completely fine; there are no errors when deploying the Cloud Function, the Cloud Run service, or the OpenAPI YAML file.
Error in response:
{
  "code": 13,
  "message": "INTERNAL_SERVER_ERROR",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
Error in Cloud Run:
5#5: *33 invalid URL prefix in "", client: xxxxx, server: , request: "GET /user HTTP/1.1", host: "[my cloud run host]"
GET 500 404 B 4 ms python-requests/2.22.0 [cloud run host]/user
The main.py file:
def user(request):
    return "Ok"
The yaml file:
/user:
  x-google-backend:
    address: https://[cloud functions host]/user
  get:
    summary: Retrieves a user.
    operationId: getUser
    responses:
      '200':
        description: A successful response
      '400':
        description: BAD_REQUEST
If we look at your YAML:
/user:
  x-google-backend:
    address: https://[cloud functions host]/user
  get:
    summary: Retrieves a user.
    operationId: getUser
    responses:
      '200':
        description: A successful response
      '400':
        description: BAD_REQUEST
... pay particular attention to the x-google-backend section. Notice that this exists within the /user path section. Now notice that the address is a URL with a path. You don't want a path part in the URL, just the address of the host (and optional port). Change the start of your YAML to:
/user:
  x-google-backend:
    address: https://[cloud functions host]
(The /user portion was removed from the address line)

Error deploying IAM role in Google Deployment Manager

I am trying to assign a custom IAM role to a user (Google account) in a GCP project via Deployment Manager, but I received a 403 error code.
I have followed the sample provided in the Google Cloud Platform repo:
https://github.com/GoogleCloudPlatform/deploymentmanager-samples/tree/master/community/cloud-foundation/templates/iam_member
Basically I created a configuration YAML file with the following content:
imports:
  - path: ../iam_member.py
    name: iam_member.py

resources:
  - name: iam-member-oval-unity-test-0
    type: iam_member.py
    properties:
      projectId: oval-unity-88908
      type: string
      roles:
        - role: roles/GARawDataViewer
          members:
            - user:<USER_EMAIL>
GARawDataViewer is a custom role created in the project oval-unity-88908, and <USER_EMAIL> is the email address of the user to whom I am trying to assign the custom IAM role.
Finally, I deployed it by running the following command:
gcloud deployment-manager deployments create deployment-oval-unity-member-test --config examples/oval_unity_member.yaml
After running the gcloud deployment-manager command, I received the following error message:
- code: CONDITION_NOT_MET
  location: /deployments/deployment-oval-unity-member-test/resources/get-iam-policy-iam-member-oval-unity-test-0-0-0->$.properties->$.policy
  message: |-
    InputMapping for field [policy] for method [setIamPolicy] could not be set from input, mapping was: [$.gcpIamMemberBinding($.intent, $.inputs.policy.response, $.resource.properties)], and evaluation context was:
    {
      "deployment" : {
        "id" : 4858392305054927640,
        "name" : "deployment-oval-unity-member-test"
      },
      "extensions" : {
        "EnableAdditionalJsonPathFunctions" : true,
        "EnableGoogleTypeProviderFunctionsExperiment" : true
      },
      "inputs" : {
        "policy" : {
          "error" : {
            "code" : "403",
            "message" : "{\"code\":403,\"message\":\"The caller does not have permission\",\"status\":\"PERMISSION_DENIED\",\"statusMessage\":\"Forbidden\",\"requestPath\":\"https://cloudresourcemanager.googleapis.com/v1/projects/oval-unity-88908:getIamPolicy\",\"httpMethod\":\"POST\"}"
          }
        }
      },
      "intent" : "CREATE",
      "matches" : [ ],
      "project" : "dm-creation-project-0",
      "requestId" : "f3c7f0c4-1ff7-3e26-a060-b0adc068866d",
      "resource" : {
        "name" : "get-iam-policy-iam-member-oval-unity-test-0-0-0",
        "previous" : { },
        "properties" : {
          "member" : "<USER_EMAIL_ADDRESS>",
          "resource" : "oval-unity-88908",
          "role" : "roles/GARawDataViewer"
        },
        "self" : { }
      }
    }
Error was:
Parameter for gcpIamMemberBinding at position 1 is not of type map, value was [null]
The interesting thing is that I have been able to deploy successfully when assigning a predefined role such as roles/editor, but it fails when using the custom role.
I have even tried using the full path to the custom role, projects/oval-unity-88908/roles/GARawDataViewer, but it still shows the same error.
Do you have any idea how could I solve this issue?
Thanks in advance!
The issue might be that you did not give the service account used by Deployment Manager the proper rights to manage IAM. As described here, you can possibly fix this issue by completing the following steps:
Go to the IAM page in the GCP Console of your project.
If prompted, select your project from the list.
Look for the Google APIs service account, which has an email address in the following format: [PROJECT_NUMBER]@cloudservices.gserviceaccount.com.
Grant that Google APIs service account the roles/owner role.
Let me know if you need further help!

Cannot get simple AWS web socket publish to work

I wrote this very simple client to publish a message to AWS IoT over the WebSocket protocol, using the JavaScript device SDK (https://github.com/aws/aws-iot-device-sdk-js):
var awsIot = require('aws-iot-device-sdk');
var device = awsIot.device({
  region: "us-west-2",
  protocol: "wss",
  clientId: "ARUNAVS SUPER TEST",
  host: "iot.us-west-2.amazonaws.com",
  port: "443"
});
device
  .on('connect', function() {
    console.log('connect');
    device.publish('abcd', JSON.stringify({ test_data: 1 }));
  });
device
  .on('message', function(topic, payload) {
    console.log('message', topic, payload.toString());
  });
device
  .on('error', function(error) {
    console.log('error', error);
  });
I am getting the following error (after importing admin credentials, as described in https://github.com/aws/aws-iot-device-sdk-js#websockets):
node testCode.js
error { Error: unexpected server response (403)
    at ClientRequest._req.on (/Users/arunavs/mrtests/node_modules/ws/lib/WebSocket.js:653:21)
    at emitOne (events.js:96:13)
    at ClientRequest.emit (events.js:188:7)
    at HTTPParser.parserOnIncomingClient (_http_client.js:472:21)
    at HTTPParser.parserOnHeadersComplete (_http_common.js:105:23)
    at TLSSocket.socketOnData (_http_client.js:361:20)
    at emitOne (events.js:96:13)
    at TLSSocket.emit (events.js:188:7)
    at readableAddChunk (_stream_readable.js:177:18)
    at TLSSocket.Readable.push (_stream_readable.js:135:10)
  type: 'error',
  target:
   WebSocket {
     domain: null,
     _events: {},
     _eventsCount: 0,
     _maxListeners: undefined,
     readyState: 3,
     bytesReceived: 0,
     extensions: null,
     protocol: '',
     _binaryType: 'arraybuffer',
     _finalize: [Function: bound finalize],
     _closeFrameReceived: false,
     _closeFrameSent: false,
     _closeMessage: '',
     _closeTimer: null,
     _finalized: true,
The SDK fails to give any reason why I am getting a 403.
Note: according to https://github.com/aws/aws-iot-device-sdk-js/blob/234d170c865586f4e49e4b0946100d93f367ee8f/device/index.js#L142, the code is even presigning the request with SigV4, since my output also contains
url: 'wss://iot.us-west-2.amazonaws.com:443/mqtt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential= .......
Has anyone seen an error like this?
I think you are publishing to a thing whose policy does not allow all users to connect to it.
Can you post the details of the policy of the thing that you are trying to publish a message to?
On the Create a policy page, in the Name field, type a name for the policy (for example, MyIoTButtonPolicy). In the Action field, type iot:Connect. In the Resource ARN field, type *. Select the Allow checkbox. This allows all clients to connect to AWS IoT.
Read more about POLICIES.
PS: This is just a wild guess. Please post policy details in the question so that I can be sure.
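If it helps, here is a rough sketch of creating an equivalent policy with the aws-sdk Iot client instead of the console. The policy name, the iot:Connect/iot:Publish actions, and the wide-open "*" resource mirror the console steps quoted above and are examples for testing only, not a production recommendation; whether an IoT policy or the IAM permissions on your signing credentials is the actual cause depends on your setup.

// Sketch only: create an AWS IoT policy equivalent to the console steps above.
// Assumes the aws-sdk package and credentials that allow iot:CreatePolicy.
var AWS = require('aws-sdk');
var iot = new AWS.Iot({ region: 'us-west-2' });

var policyDocument = {
  Version: '2012-10-17',
  Statement: [
    {
      Effect: 'Allow',
      Action: ['iot:Connect', 'iot:Publish'], // connect + publish, as in the question
      Resource: '*'                           // wide open, for testing only
    }
  ]
};

iot.createPolicy({
  policyName: 'MyIoTButtonPolicy',            // example name from the quoted docs
  policyDocument: JSON.stringify(policyDocument)
}, function (err, data) {
  if (err) console.log('error', err);
  else console.log('created policy', data.policyName);
});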

AWS Amplify React Native, GET request error 403 status code

I enabled access to unauthenticated identities to do some quick testing before integrating authentication. My configuration code is the following,
Amplify.configure({
  Auth: {
    identityPoolId: 'us-east-1:example',
    region: 'us-east-1',
    userPoolId: 'us-east-1_example',
    userPoolWebClientId: 'us-east-1_example'
  },
  API: {
    endpoints: [
      {
        name: "example-name",
        endpoint: "https://example.execute-api.us-east-1.amazonaws.com/prod/example-path"
      },
    ]
  }
});
and my GET request code is the following,
example() {
  const apiName = 'example-name';
  const path = '/example-path';
  API.get(apiName, path).then(response => {
    console.log(response)
  }).catch(error => {
    console.log(error)
  })
}
I followed everything on GitHub, and my API Gateway and Lambda functions work correctly when I run a test and through Postman. But on React Native it gives me a 403 status code without any detailed explanation. Does this have to do with accessing it using an unauthenticated identity? Also, I used "example" in my code to hide my personal information; I typed everything in correctly, since I'm not getting any syntax error (the identity pool registers access every time I run it, but CloudWatch doesn't show any log of API Gateway access).
The endpoint in Amplify.configure is the Invoke URL from API Gateway; you just need to include the stage (/prod in this case) and not the other routes. The routes are passed as the path parameter of the API.get() (and similar) calls.
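As a sketch based on the configuration in the question (same placeholder names), the endpoint stops at the stage and the route moves into the API.get() call:

Amplify.configure({
  API: {
    endpoints: [
      {
        name: "example-name",
        // Invoke URL + stage only; no /example-path here
        endpoint: "https://example.execute-api.us-east-1.amazonaws.com/prod"
      },
    ]
  }
});

// The route is supplied as the path argument instead:
API.get('example-name', '/example-path').then(response => {
  console.log(response)
}).catch(error => {
  console.log(error)
})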