Can you help me with Google Workflow? - google-admin-sdk

I'm getting the error:
"in step "readGcpadmin": {"message":"HTTP body unsupported with: 'GET'","tags":["ValueError"]}"
and I don't know how to solve it. Here is the code below, with sensitive data hidden:
- readGcpadmin:
    call: http.get
    args:
      url: https://admin.googleapis.com/admin/directory/v1/users
      #method: get
      headers:
        Authorization: "Bearer [My token]"
        Content-type: "application/json"
      #body:
        #domain: [my domain.page]
      #query:
      auth:
        type: OAuth2
        #scope: https://www.googleapis.com/auth/cloud-platform
      #timeout: 20
    result: teste
- returnResult:
    return: ${teste.body}
When I try it from the terminal, it works:
curl \
'https://admin.googleapis.com/admin/directory/v1/users?domain=MyDomain&key=MyKey' \
--header 'Authorization: Bearer MyToken' \
--header 'Accept: application/json' \
--compressed

Here is a similar working example; notice the query section, which your code is missing:
readItem:
  call: http.get
  args:
    url: ${"https://storage.googleapis.com/storage/v1/b/"+bucket+"/o"}
    auth:
      type: OAuth2
    query:
      prefix: ${prefix}
      fields: items/name,items/bucket
  result: documentValue
  next: documentFound

The problem really was the position of the "auth" field: it was enough to put "auth: OAuth2" directly under the header section. In addition, I removed the "Content-type" field. Thank you, guys!
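For reference, here is roughly what the corrected step can look like with the query block in place, still passing the bearer token in a header as in the working curl command (the token and domain values are placeholders):

- readGcpadmin:
    call: http.get
    args:
      url: https://admin.googleapis.com/admin/directory/v1/users
      headers:
        Authorization: "Bearer [My token]"   # placeholder token, as in the question
        Accept: "application/json"
      query:
        domain: "mydomain.example"           # placeholder: the Workspace domain to list users for
    result: teste
- returnResult:
    return: ${teste.body}

Alternatively, you can drop the Authorization header and keep the auth block (type: OAuth2) under args, provided the workflow's service account is allowed to call the Directory API.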

Related

Multipart upload with presigned urls - Scaleway S3-compatible object storage

I’m trying to get multipart upload working on Scaleway Object Storage (S3 compatible) with presigned urls, and I’m getting errors (403) on the preflight request generated by the browser, even though my CORS settings seem to be set correctly (basically a wildcard on allowed headers and origins).
The error comes with a 403 status code and is as follows:
<?xml version='1.0' encoding='UTF-8'?>
<Error><Code>AccessDenied</Code><Message>Access Denied.</Message><RequestId>...</RequestId></Error>
I’ve been stuck on this one for a while now. I tried to copy the pre-flight request from my browser to reproduce it elsewhere and tweak it a little bit.
Removing the query params from the url of the pre-flight request makes the request successful (it returns a 200 with the Access-Control-Allow-* response headers correctly set), but this is obviously not the browser behavior...
This doesn’t work (secrets, keys and names have been changed):
curl 'https://bucket-name.s3.fr-par.scw.cloud/tmp-screenshot-2021-01-20-at-16-21-33.png?AWSAccessKeyId=XXXXXXXXXXXXXXXXXXXX&Expires=1638217988&Signature=NnP1XLlcvPzZnsUgDAzm1Uhxri0%3D&partNumber=1&uploadId=OWI1NWY5ZGrtYzE3MS00MjcyLWI2NDAtNjFkYTM1MTRiZTcx' -X OPTIONS -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://domain.tech/' -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://domain.tech' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: no-cors' -H 'Sec-Fetch-Site: cross-site' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
This works (secrets, keys and names have been changed):
curl 'https://bucket-name.s3.fr-par.scw.cloud/tmp-screenshot-2021-01-20-at-16-21-33.png' -X OPTIONS -H 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:94.0) Gecko/20100101 Firefox/94.0' -H 'Accept: */*' -H 'Accept-Language: en-US,en;q=0.5' --compressed -H 'Referer: http://domain.tech/' -H 'Access-Control-Request-Method: PUT' -H 'Origin: http://domain.tech' -H 'DNT: 1' -H 'Connection: keep-alive' -H 'Sec-Fetch-Dest: empty' -H 'Sec-Fetch-Mode: no-cors' -H 'Sec-Fetch-Site: cross-site' -H 'Pragma: no-cache' -H 'Cache-Control: no-cache'
The url comes from the aws-sdk and is generated this way:
const S3Client = new S3({
  credentials: {
    accessKeyId: env.SCW_ACCESS_KEY,
    secretAccessKey: env.SCW_SECRET_KEY,
  },
  endpoint: `https://s3.${env.SCW_REGION}.scw.cloud`,
})

S3Client.getSignedUrlPromise('uploadPart', {
  Bucket: bucket,
  Key: key,
  UploadId: multipartUpload.UploadId,
  PartNumber: idx + 1,
})
and used this way in the frontend:
// url being the url generated in backend as demonstrated above
const response = await fetch(url, {
  method: 'PUT',
  body: filePart,
  signal: abortController.signal,
})
If anyone can give me a hand with this, that would be great!
As it turns out, Scaleway Object Storage is not fully S3-compatible in this case.
Here is a workaround:
Install the aws4 library to sign the request easily (or follow this Scaleway doc to manually sign your request).
Form your request exactly as stated in this other Scaleway doc (this is where the aws-sdk behavior differs: it generates a url with AWSAccessKeyId, Expires and Signature query params that make the Scaleway API fail, while the Scaleway API only wants partNumber and uploadId).
Return the generated url and headers to the frontend.
// Backend code
const signedRequest = aws4.sign(
  {
    method: 'PUT',
    path: `/${key}?partNumber=${idx + 1}&uploadId=${multipartUpload.UploadId}`,
    service: 's3',
    region: env.SCW_REGION,
    host: `${bucket}.s3.${env.SCW_REGION}.scw.cloud`,
  },
  {
    accessKeyId: env.SCW_ACCESS_KEY,
    secretAccessKey: env.SCW_SECRET_KEY,
  },
)

return {
  url: `https://${signedRequest.host}${signedRequest.path}`,
  headers: Object.keys(signedRequest.headers).map((key) => ({
    key,
    value: signedRequest.headers[key] as string,
  })),
}
And then in the frontend:
// Frontend code
const headers = signedRequest.headers.reduce<Record<string, string>>(
  (acc, h) => ({ ...acc, [h.key]: h.value }),
  {},
)

const response = await fetch(signedRequest.url, {
  method: 'PUT',
  body: filePart,
  headers,
  signal: abortController.signal,
})
Scaleway is aware of this issue, as I discussed it directly with their support team, and they are putting in some effort to be as compliant with S3 as possible, so it might be fixed by the time you read this.
Thanks to them for the really quick response time and for taking this seriously.

SignatureDoesNotMatch Error in AWS V4 Signed Request

I have generated a presigned url for a GET request using the same algorithm mentioned in the AWS documentation.
That works.
But when I POST/PUT data, I get a signature mismatch error.
Below is a sample curl:
curl --location --request POST 'https://<bucket_name>.s3.ap-south-1.amazonaws.com/testFolder/testing1.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<access_key_id>%2F20211006%2Fap-south-1%2Fs3%2Faws4_request&X-Amz-Date=20211006T113405Z&X-Amz-Expires=3000&X-Amz-SignedHeaders=content-type%3Bhost%3Bx-amz-content-sha256%3Bx-amz-date&X-Amz-Signature=' \
--header 'x-amz-content-sha256: UNSIGNED-PAYLOAD' \
--header 'x-amz-date: 20211006T113405Z' \
--header 'Content-Type: text/plain'

Google Cloud Workflow - Fetch Bearer Token Step

In my use case I want to create a step in Google Cloud Workflows where I can pass my username and password and get the resulting bearer token back in a variable. I was wondering what the yaml config for such a workflow would look like.
My endpoint expects a request in the following manner:
POST 'https://cloud.business.io/v2/login'
--header 'Content-Type: application/x-www-form-urlencoded'
--data-urlencode 'username=business@xyz.io'
--data-urlencode 'password=APassword123'
Found it with a little trial and error!
call: http.post
args:
  url: https://cloud.business.io/v2/login
  headers:
    'Content-Type': 'application/x-www-form-urlencoded'
  body:
    'username': 'business@xyz.io'
    'password': 'APassword123'
result: BearerToken
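If it helps, here is a minimal sketch of how the returned value could be used in a follow-up step, assuming the login endpoint returns a JSON body with a token field (adjust the field name and the url, which is hypothetical, to match your API):

- callApi:
    call: http.get
    args:
      url: https://cloud.business.io/v2/some-resource            # hypothetical follow-up endpoint
      headers:
        Authorization: ${"Bearer " + BearerToken.body.token}     # assumes the response JSON exposes a "token" field
    result: apiResponse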

How to access and process json array sent to a service resource in Ballerina

I have a POST resource and I want to pass a JSON array as the request payload.
@http:ResourceConfig {
    methods: ["POST"],
    path: "/news-articles/validatetest",
    cors: {
        allowOrigins: ["*"],
        allowHeaders: ["Authorization, Lang"]
    },
    produces: ["application/json"],
    consumes: ["application/json"]
}
resource function validateArticlesTest(http:Caller caller, http:Request req) {
    json[]|error jsonarray = <json[]>req.getJsonPayload();
    io:println(jsonarray);
}
My request is as follows.
curl -X POST http://localhost:9090/news-articles/validatetest -H "Content-Type: application/json" --data '[{"aaa":"amaval", "bbb":"bbbval"},{"ccc":"amaval", "ddd":"bbb val"}]'
But 'jsonarray' is always null when I run this and make the above curl request.
I guess I am not doing this correctly. What is the correct approach to achieve this?
Edit: (Adding the version)
Ballerina version: jBallerina 1.1.3

Why did my POST to AWS API Gateway fail?

I use Postman to test my CloudFormation-created APIs:
POST https://6pppnxxxh.execute-api.eu-central-1.amazonaws.com/Prod/users
I got
{
    "message": "Missing Authentication Token"
}
[Screenshot: my Prod stage]
I double-checked the Prod invoke URL.
How do I solve this problem?
I tried with curl:
curl --header "Content-Type: application/json" --request POST --data '{ "emailaddress" : "acj@rambler.ru", "first name" : "Aca", "last name" : "Ljubascikic", "password" : "bbbac_96"}' https://6pppnxxxh.execute-api.eu-central-1.amazonaws.com/Prod/users
and I get the same issue:
{"message":"Missing Authentication Token"}
How can I test this from the CLI?
According to your screenshot, /Prod/users is defined as a PUT method, but you are sending a POST in your command. I would confirm that first and retry the same request with PUT.
Hope this helps.