I am using the Cloud Datastore to Cloud Storage Text template from Cloud Dataflow.
My Python code correctly submits the request and uses javascriptTextTransformFunctionName to run the correct function in my Google Cloud Storage bucket. Here is a minimized version of the function that is running:
function format(inJson) {
  var output = {};
  output.administrator = inJson.properties.administrator.keyValue.path[0].id;
  return output;
}
And here is the JSON I am looking to format (cut down, but only by removing the other children of "properties"):
"properties": {
"administrator": {
"keyValue": {
"path": [
{
"kind": "Kind",
"id": "5706504271298560"
}
]
}
}
}
}
And I am getting this exception:
java.lang.RuntimeException: org.apache.beam.sdk.util.UserCodeException: javax.script.ScriptException: TypeError: Cannot read property "keyValue" from undefined in <eval> at line number 5
I understand what the error is saying, but I don't know why it's happening. If you take the format function and that JSON and run them through your browser console, you can easily test and see that it pulls out and returns an object with "administrator" equal to "5706504271298560".
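For completeness, here is that console test (the sample JSON wrapped as an object literal):

// The same function and cut-down JSON, pasted into a browser console.
function format(inJson) {
  var output = {};
  output.administrator = inJson.properties.administrator.keyValue.path[0].id;
  return output;
}

var sample = {
  properties: {
    administrator: {
      keyValue: {
        path: [{ kind: "Kind", id: "5706504271298560" }]
      }
    }
  }
};

console.log(format(sample)); // { administrator: "5706504271298560" }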
I have not found the solution to your problem, but I hope to be of some help:
I found this post and this one with the same issue. The first one was fixed by installing a NodeJS library, the second one by changing the kind of quotes used for Java.type().
The official Nashorn docs say to call Java.type with a fully qualified Java class name, and then to call the returned function to instantiate that class from JavaScript.
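For illustration, that pattern looks like this (standard Nashorn, not specific to the Dataflow template):

// Look up a Java class by its fully qualified name, then call the
// returned type object to construct an instance from JavaScript.
var ArrayList = Java.type("java.util.ArrayList");
var list = new ArrayList();
list.add("hello");
print(list.size()); // prints 1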
I'm building Docker images using a Cloud Build trigger. Previously $BRANCH_NAME was working, but now it's giving null.
Thanks in advance.
I will post my comment as an answer, as it is too long for the comment section.
According to this documentation, you should be able to use the $BRANCH_NAME default substitution for builds invoked by triggers.
In the same documentation it is stated that:
If a default substitution is not available (such as with sourceless builds, or with builds that use storage source), then occurrences of the missing variable are replaced with an empty string.
I assume this might be why you are receiving null.
Have you performed any changes? Could you please provide some further information, such as your .yaml/.json file, your trigger configuration and the error you are receiving?
The problem was not in $BRANCH_NAME; I was using the resulting JSON to fetch the branch name, like this:
"source": {
"repoSource": {
"projectId": "project_id",
"repoName": "bitbucket_repo_name",
"branchName": "integration"
}
}
and
I was using build_details['source']['repoSource']['branchName']
but now the response looks like this:
"source": {
"repoSource": {
"projectId": "project_id",
"repoName": "bitbucket_repo_name",
"commitSha": "ght8939jj5jd9jfjfjigk0949jh8wh4w"
}
},
so now I'm using build_details['substitutions']['BRANCH_NAME'] and it's working fine.
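For anyone hitting the same thing, here is a small sketch of that fallback logic (the helper name is hypothetical; it assumes the build resource shape shown above):

// Hypothetical helper: prefer the BRANCH_NAME substitution and fall back to
// repoSource.branchName, since some builds carry commitSha instead.
function getBranchName(buildDetails) {
  const subs = buildDetails.substitutions || {};
  if (subs.BRANCH_NAME) {
    return subs.BRANCH_NAME;
  }
  const repoSource = (buildDetails.source || {}).repoSource || {};
  return repoSource.branchName || null;
}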
I am attempting to create an AWS StepFunctions workflow where I have a Lambda task followed by an ECS/Fargate task.
The Lambda takes an ID as an input and outputs some data in JSON form that is used by the ECS task, which runs a Python script in its container environment. What I would like to do in Step Functions is the following flow:
{ id: 1234 } -> [Lambda] -> { id: 1234, data: {...} }
{ id: 1234, data: {...} } -> [ECS] -> { id: 1234, result: "bar"}
For reference, here is an example configuration of an ECS Task:
https://docs.aws.amazon.com/step-functions/latest/dg/sample-project-container-task-notification.html
I cannot figure out any way to pass the structured JSON input of an ECS Task to the container running the task.
Here are the things I have found so far:
I can pass individual fields of a JSON input to the container by using JSONPath to select individual fields of the input and set them to environment variables. But if I assign the entire input object ($) to an environment variable, then it fails at runtime with a serialization error ([Object] cannot be converted to a string).
I can create an intermediate lambda that takes the input and converts it to a JSON string that is stored in a single key-value in the output, then assign this single string key-value to an environment variable of ECS Task and parse it. However, this requires adding an entire extra Task and a few seconds of runtime + cost.
Here are some things I can't do:
There doesn't seem to be any mechanism in boto3 to get the input of an existing ECS Task. I can get the input of an unassigned Activity, or I can get the input of the entire Execution. But there is no API for just getting the input of an existing, running Task, even though I have a Task Token.
I cannot modify my original Lambda to output JSON as a string. I am using this result in multiple places (parallel tasks), and the other tasks are Lambdas that consume specific subfields of the output as their input.
What is the intended mechanism to pass a structured JSON object defined as the input to a Task to the executing container of an ECS/Fargate Task?
You can use intrinsic functions to format the request before running the task:
const formatRequest = new sfn.Pass(this, 'FormatRequest', {
parameters: {
'request.$': 'States.JsonToString($)'
}
})
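This Pass state turns the whole state input into { "request": "<the input serialized as a JSON string>" }, so downstream states can reference the string at $.request.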
Given that you don't specify a result path in the step that runs the Lambda, the input of your container will be the output of your Lambda, which translates to:
"Overrides": {
"ContainerOverrides": [
{
"Name": "container-name",
"Environment": [
{
"Name": "SOME_ENV_VAR",
"Value.$": "$"
},
But even this is limited to what you can store as an environment variable, so you would need to make sure your JSON is actually a string (with the FormatRequest pass above in place, you would point Value.$ at $.request instead of $).
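On the container side, the script then parses the variable back into an object. A minimal sketch (Node here for illustration; the variable name matches the example above):

// Recover the structured input from the environment variable that
// Step Functions populated with the JSON string.
const raw = process.env.SOME_ENV_VAR;
if (!raw) {
  throw new Error('SOME_ENV_VAR is not set');
}
const input = JSON.parse(raw);
console.log(input.id); // e.g. 1234, following the flow in the question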
What is the intended mechanism to pass a structured JSON object defined as the input to a Task to the executing container of an ECS/Fargate Task?
Take a look at the Input and Output Processing docs: https://docs.aws.amazon.com/step-functions/latest/dg/concepts-input-output-filtering.html
This will help you decide which parts of the JSON input you want passed to the "Run Fargate Task" state (from the example you linked in your question).
Step Functions supports ECS's RunTask action and a couple of its parameters: https://docs.aws.amazon.com/step-functions/latest/dg/connect-ecs.html
For example, suppose my Lambda function outputs this JSON:
{
  "commands": {
    "foo": [
      "some command 1",
      "some command 2"
    ],
    "bar": "something the container does not need"
  }
}
I want my Run Fargate Task to have an InputPath that only picks up the "commands" part of the input. In my state machine, after "Type": "Task", I will put:
"InputPath":"$.commands",
Then in my "Parameters" for my Fargate Task, after "NetworkConfiguration": {....}, I will place the Container Overrides that I want, using JSONPath syntax: https://github.com/json-path/JsonPath. However, I don't want all the input from the JSON, just the value of "foo". Since "InputPath" has already narrowed the effective input down to the "commands" object, the path below is relative to it:
"Overrides": {
"ContainerOverrides": [
{
"Name": "container-name",
"Command.$": "$.commands.foo"
}
]
}
More of the supported syntax is shown here: https://docs.aws.amazon.com/step-functions/latest/dg/connect-ecs.html
I am trying to create a google cloud task from one of my Google Cloud Functions. This function gets triggered when a new object is added to one of my Cloud Storage buckets.
I followed the instructions given here to create my App Engine app (App Engine Quickstart Guide).
Then in my Cloud Function, I added the following code to create a cloud task (as described here - Creating App Engine Tasks)
However, there is something wrong with my task or App Engine call (not sure what).
I am getting the following errors every now and then. Sometimes it works and sometimes it does not.
{ Error: 4 DEADLINE_EXCEEDED: Deadline Exceeded
    at Object.exports.createStatusError (/srv/node_modules/grpc/src/common.js:91:15)
    at Object.onReceiveStatus (/srv/node_modules/grpc/src/client_interceptors.js:1204:28)
    at InterceptingListener._callNext (/srv/node_modules/grpc/src/client_interceptors.js:568:42)
    at InterceptingListener.onReceiveStatus (/srv/node_modules/grpc/src/client_interceptors.js:618:8)
    at callback (/srv/node_modules/grpc/src/client_interceptors.js:845:24)
  code: 4,
  metadata: Metadata { _internal_repr: {} },
  details: 'Deadline Exceeded' }
Do let me know if you need more information and I will add them to this question here.
I had the same problem with Firestore, trying to write one doc at a time; I solved it by returning the combined promise. A Cloud Function needs to know when it is safe to terminate, and if you do not return anything that tracks your pending work, it may cause this error.
My example:
// Fire-and-forget writes: nothing is returned, so the function
// may be terminated before the writes complete.
data.forEach( d => {
  reports.doc(_date).collection('data').doc(`${d.Id}`).set(d);
})
This was the problem in my case: I was writing documents one by one, but I wasn't returning the promises. So I solved it by doing this:
// Collect every pending write...
const _datarwt = [];
data.forEach( d => {
  _datarwt.push( reports.doc(_date).collection('data').doc(`${d.Id}`).set(d) );
})
// ...and wait for all of them before moving on.
const _dataloaded = await Promise.all( _datarwt );
I save the returned promises in an array and await all of them. That solved it for me. Hope this is helpful.
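For context, this is roughly how it sits inside a complete function (a sketch; the trigger path and collection names are hypothetical):

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

const db = admin.firestore();

// Hypothetical Firestore-triggered function. Returning the combined promise
// tells the runtime to keep the instance alive until every write settles.
exports.writeReports = functions.firestore
  .document('uploads/{uploadId}')
  .onCreate((snap) => {
    const data = snap.data().items || [];
    const date = new Date().toISOString().slice(0, 10);
    const writes = data.map((d) =>
      db.collection('reports').doc(date).collection('data').doc(`${d.Id}`).set(d)
    );
    return Promise.all(writes);
  });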
We built a Dialogflow agent using Google Cloud Functions as the webhook, which worked properly until yesterday evening. Around that time I exported the agent and reimported it later on, and it worked for a while.
What stopped working is that agent.context.get('...'); (also agent.getContext('...')) returns undefined, even though the context is set according to the UI and the raw API response.
As an example, I have an intent with a required slot shop and webhook for slot filling enabled.
When I test the agent, the intent named info is matched correctly, and the context info_dialog_params_store also seems to be there.
And here is part of the output context according to the raw API response:
"outputContexts": [
{
"name": "projects/MYAGENTNAME/agent/sessions/0b753e8e-b377-587b-3db6-3c8dc898879b/contexts/info_dialog_params_store",
"lifespanCount": 1,
"parameters": {
"store": "",
"store.original": "",
"kpi": "counts",
"date_or_period": "",
"kpi.original": "trafico",
"date_or_period.original": ""
}
}
In the webhook I mapped the intent correctly to a js function:
let intentMap = new Map();
intentMap.set('info', info);
agent.handleRequest(intentMap);
And the first line of the info function looks like:
function info(agent) {
  store_context = agent.context.get('info_dialog_params_store');
}
Which returns
TypeError: Cannot read property 'get' of undefined
at info (/user_code/index.js:207:36)
at WebhookClient.handleRequest (/user_code/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:303:44)
at exports.dialogflowFirebaseFulfillment.functions.https.onRequest (/user_code/index.js:382:9)
at cloudFunction (/user_code/node_modules/firebase-functions/lib/providers/https.js:57:9)
at /var/tmp/worker/worker.js:762:7
at /var/tmp/worker/worker.js:745:11
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickDomainCallback (internal/process/next_tick.js:128:9)
I am quite sure that I did not change anything which could affect the proper functioning of the agent, except some refactoring.
I also tried with beta features activated as well as deactivated, as I read that there can be issues with environments, but that did not change anything.
Does anyone know in which direction I can investigate further?
I had the same issue; I resolved it by updating dialogflow-fulfillment in package.json:
from "dialogflow-fulfillment": "^0.5.0"
to "dialogflow-fulfillment": "^0.6.0"
I solved the problem by turning off "Beta features".
Actually I could fix it by the following 'magic' steps:
Copied my original function to a text file
Copied and pasted the original example code into the GUI fulfillment code editor (Code on GitHub)
Deployed the function
Created a minimal example for my info function:
function info(agent) {
  store_context = agent.context.get('info_dialog_params_store');
}
Tested it, and it worked
Copied back my original code
Everything was fine again
Hi, I'm implementing a custom auth flow on a Cognito User Pool. I managed to handle the DefineAuthChallenge and CreateAuthChallenge triggers, but not VerifyAuthChallenge.
I use this documentation as a guide: Verify Auth Challenge Response Lambda Trigger
I take the verify lambda's input and add answerCorrect = true to the response, as described in the documentation. The Define and Create challenge parts work as expected with the given information. When verifying the challenge answer, I get InvalidLambdaResponseException: Unrecognizable lambda output as a response. The verify lambda exits successfully, returning this object:
{
  "version": 1,
  "triggerSource": "VerifyAuthChallengeResponse_Authentication",
  "region": "eu-central-1",
  "userPoolId": "eu-central-1_XXXXXXXXX",
  "callerContext": {
    "awsSdkVersion": "aws-sdk-dotnet-coreclr-3.3.12.7",
    "clientId": "2490gqsa3gXXXXXXXXXXXXXXXX"
  },
  "request": {
    "challengeAnswer": "{\"DeviceSub\":\"TestSub\"}",
    "privateChallengeParameters": {
      "CUSTOM_CHALLENGE": "SessionService_SendDevice"
    },
    "userAttributes": {
      "sub": "8624237e-0be8-425e-a2cb-XXXXXXXXXXXX",
      "email_verified": "true",
      "cognito:user_status": "CONFIRMED",
      "email": "X.XXXXXXXX#XXXXXXXXXX.de"
    }
  },
  "response": {
    "answerCorrect": true
  },
  "userName": "8624237e-0be8-425e-a2cb-XXXXXXXXXXXX"
}
Earlier, I ran into the problem that the "challengeAnswer" part was described as a dictionary in the documentation, but it is actually just a string containing the dictionary as JSON. Sadly, I cannot find any information anywhere on why the returned object isn't accepted by Cognito.
Apparently someone had the same problem as me, using JavaScript: GitHub link
Can anyone tell me what the response object should look like so that it is accepted by Cognito? Thank you.
Well, my mistake was not considering the custom authentication flow. I found a different piece of documentation, which by the way is the one you should definitely use:
Customizing your user pool authentication flow
I ran into 2 wrong parts in the documentation here (the trigger sub-pages) and 1 error on my part.
Wrong part 1:
The DefineAuthChallenge and CreateAuthChallenge inputs define the session as a list of challenge results. This is all fine, but the challenge result object has the challenge metadata field displayed incorrectly: it is written as "ChallengeMetaData", when instead it should be "ChallengeMetadata", with a lowercase "d" in "data". This gave me the "Unrecognizable lambda output" error, because "ChallengeMetaData" wasn't what the backend was expecting; it was looking for "ChallengeMetadata", which wasn't present. The first time you enter the DefineAuthChallenge lambda, this error doesn't show up, because the session doesn't contain any challenge answers yet. The moment you verify a challenge, though, the session gets filled, and then the uppercase "D" gives you trouble.
Wrong part 2:
As described in my question, the VerifyAuthChallenge input for "challengeAnswer" is a string, not a dictionary.
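To illustrate the string handling (a minimal Node.js sketch; the comparison logic and the ExpectedSub parameter name are made up for the example):

// Hypothetical VerifyAuthChallengeResponse handler. Note that
// event.request.challengeAnswer is a JSON *string*, so it must be parsed.
exports.handler = async (event) => {
  const answer = JSON.parse(event.request.challengeAnswer);
  // Compare against a value that CreateAuthChallenge would have stored;
  // "ExpectedSub" is a placeholder name for this sketch.
  event.response.answerCorrect =
    answer.DeviceSub === event.request.privateChallengeParameters.ExpectedSub;
  return event;
};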
All these wrong parts are displayed correctly on the first documentation page I linked here, so I would recommend using that instead of the other documentation.
Error on my side:
I didn't really check what happens after you verify a custom challenge via the VerifyAuthChallenge trigger. In the given link, in the image above the headline 'DefineAuthChallenge: The challenges (state machine) Lambda trigger', it clearly states that after the response is verified, the DefineAuthChallenge trigger is invoked again, which I didn't consider.
I hope this saves someone the time it took me to figure it out :-)