I'm building Docker images using a Cloud Build trigger. Previously $BRANCH_NAME was working, but now it's giving null.
Thanks in advance.
I will post my comment as an answer, as it is too long for the comment section.
According to this documentation, you should be able to use the $BRANCH_NAME default substitution for builds invoked by triggers.
In the same documentation it is stated that:
If a default substitution is not available (such as with sourceless
builds, or with builds that use storage source), then occurrences of
the missing variable are replaced with an empty string.
I assume this might be the reason you are receiving NULL.
Have you made any changes? Could you please provide some further information, such as your .yaml/.json file, your trigger configuration, and the error you are receiving?
The problem was not with $BRANCH_NAME; I was using the resulting build JSON to fetch the branch name. It used to look like this:
"source": {
"repoSource": {
"projectId": "project_id",
"repoName": "bitbucket_repo_name",
"branchName": "integration"
}
}
and I was reading the branch with build_details['source']['repoSource']['branchName'],
but now the build JSON looks like this:
"source": {
"repoSource": {
"projectId": "project_id",
"repoName": "bitbucket_repo_name",
"commitSha": "ght8939jj5jd9jfjfjigk0949jh8wh4w"
}
},
So now I'm using build_details['substitutions']['BRANCH_NAME'] and it's working fine.
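For reference, here is a minimal sketch of that lookup, assuming the build details are fetched through the Cloud Build REST API with the googleapiclient discovery client (the project and build IDs are placeholders):

from googleapiclient import discovery

# Cloud Build API client (application default credentials assumed).
cloudbuild = discovery.build('cloudbuild', 'v1')

# 'project_id' and 'build_id' are placeholders for the real identifiers.
build_details = cloudbuild.projects().builds().get(
    projectId='project_id', id='build_id').execute()

# The default substitution is still populated for triggered builds,
# even when the source block only carries a commitSha.
branch_name = build_details.get('substitutions', {}).get('BRANCH_NAME', '')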
I'm creating a Glue job using boto3 create_job. I wanted to pass a parameter to enable continuous logging (no filter) for this new job.
Unfortunately, neither here nor here can I find any useful parameter to enable it.
Any suggestions?
Found a way to do it. It's similar to other arguments; you just need to pass these log-related arguments in DefaultArguments, like this:
import boto3

# Glue client used to create the job; the logging flags go in DefaultArguments.
glueClient = boto3.client('glue')

glueClient.create_job(
    Name="testBoto",
    Role="Role_name",
    Command={
        'Name': "some_name",
        'ScriptLocation': "some_location"
    },
    DefaultArguments={
        "--enable-continuous-cloudwatch-log": "true",
        "--enable-continuous-log-filter": "true"
    }
)
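If the job already exists, the same arguments can be applied with update_job, reusing the glueClient from the snippet above. Note that JobUpdate replaces the job definition, so Role and Command are restated alongside the logging arguments (all values here are the same placeholders as above):

# update_job replaces the job definition, so restate Role and Command
# alongside the continuous-logging arguments.
glueClient.update_job(
    JobName="testBoto",
    JobUpdate={
        'Role': "Role_name",
        'Command': {
            'Name': "some_name",
            'ScriptLocation': "some_location"
        },
        'DefaultArguments': {
            "--enable-continuous-cloudwatch-log": "true",
            "--enable-continuous-log-filter": "true"
        }
    }
)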
We built a Dialogflow agent using Google Cloud Functions as a webhook, which worked properly until yesterday evening. At that point I exported the agent and re-imported it later on, and it worked for a while.
What stopped working is that agent.context.get('...') (also agent.getContext('...')) returns undefined even though the context is set according to the UI and the raw API response.
As an example, I have an intent with a required slot shop and webhook for slot filling enabled.
When I test the agent, the intent named info is matched correctly, and the context info_dialog_params_store also seems to be there.
And here is part of the output context according to the raw API response:
"outputContexts": [
{
"name": "projects/MYAGENTNAME/agent/sessions/0b753e8e-b377-587b-3db6-3c8dc898879b/contexts/info_dialog_params_store",
"lifespanCount": 1,
"parameters": {
"store": "",
"store.original": "",
"kpi": "counts",
"date_or_period": "",
"kpi.original": "trafico",
"date_or_period.original": ""
}
}
In the webhook I mapped the intent correctly to a js function:
let intentMap = new Map();
intentMap.set('info', info);
agent.handleRequest(intentMap);
And the first line of the info function looks like:
function info(agent) {
  store_context = agent.context.get('info_dialog_params_store');
}
Which returns
TypeError: Cannot read property 'get' of undefined
at info (/user_code/index.js:207:36)
at WebhookClient.handleRequest (/user_code/node_modules/dialogflow-fulfillment/src/dialogflow-fulfillment.js:303:44)
at exports.dialogflowFirebaseFulfillment.functions.https.onRequest (/user_code/index.js:382:9)
at cloudFunction (/user_code/node_modules/firebase-functions/lib/providers/https.js:57:9)
at /var/tmp/worker/worker.js:762:7
at /var/tmp/worker/worker.js:745:11
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickDomainCallback (internal/process/next_tick.js:128:9)
I am quite sure that I did not change anything that could affect the proper functioning of the agent, except some refactoring.
I also tried with beta features activated as well as deactivated, as I read that there can be issues with environments, but that did not change anything.
Does anyone know in which direction I can investigate further?
I had the same issue; I resolved it by updating dialogflow-fulfillment in package.json:
from "dialogflow-fulfillment": "^0.5.0"
to "dialogflow-fulfillment": "^0.6.0"
I solved the problem by turning off "Beta features".
Actually I could fix it by the following 'magic' steps:
Copied my original function to a text file
Copied and pasted the original example code into the GUI fulfillment code editor (Code on GitHub)
Deployed the function
Created a minimal example for my info function:
function info(agent) {
  store_context = agent.context.get('info_dialog_params_store');
}
Tested it, and it worked
Copied back my original code
Everything was fine again
Hi, I'm implementing a custom auth flow on a Cognito User Pool. I managed to handle the DefineAuthChallenge and CreateAuthChallenge triggers, but not the VerifyAuthChallenge.
I'm using this documentation as a guide: Verify Auth Challenge Response Lambda Trigger
I take the verify Lambda's input and add answerCorrect = true to the response, as described in the documentation. The DefineAuthChallenge and CreateAuthChallenge parts work as expected with the given information. When verifying the challenge answer, I get InvalidLambdaResponseException: Unrecognizable lambda output as a response. The verify Lambda exits successfully, returning this object:
{
  "version": 1,
  "triggerSource": "VerifyAuthChallengeResponse_Authentication",
  "region": "eu-central-1",
  "userPoolId": "eu-central-1_XXXXXXXXX",
  "callerContext": {
    "awsSdkVersion": "aws-sdk-dotnet-coreclr-3.3.12.7",
    "clientId": "2490gqsa3gXXXXXXXXXXXXXXXX"
  },
  "request": {
    "challengeAnswer": "{\"DeviceSub\":\"TestSub\"}",
    "privateChallengeParameters": {
      "CUSTOM_CHALLENGE": "SessionService_SendDevice"
    },
    "userAttributes": {
      "sub": "8624237e-0be8-425e-a2cb-XXXXXXXXXXXX",
      "email_verified": "true",
      "cognito:user_status": "CONFIRMED",
      "email": "X.XXXXXXXX#XXXXXXXXXX.de"
    }
  },
  "response": {
    "answerCorrect": true
  },
  "userName": "8624237e-0be8-425e-a2cb-XXXXXXXXXXXX"
}
Earlier, I ran into the problem that the "challengeAnswer" part was described as a dictionary in the documentation, but it is actually just a string containing the dictionary as JSON. Sadly, I cannot find any information anywhere on why the returned object isn't accepted by Cognito.
Apparently someone had the same problem as me, using JavaScript: GitHub link
Can anyone tell me what the response object should look like so that it is accepted by Cognito? Thank you.
Well, my mistake was not considering the custom authentication flow as a whole. I found a different documentation page, which, by the way, is the one you should definitely use:
Customizing your user pool authentication flow
I ran into two wrong parts in the documentation here (the trigger sub-pages) and one error on my part.
Wrong part 1:
The DefineAuthChallenge and CreateAuthChallenge session input is defined as a list of challenge results. This is all fine, but the challenge result object has the challenge metadata field displayed incorrectly as "ChallengeMetaData", when it should instead be "ChallengeMetadata", with a lowercase "d" in "data" instead of an uppercase one. This gave me the "Unrecognizable lambda output" error, because "ChallengeMetaData" wasn't what the backend was expecting; it was looking for "ChallengeMetadata", which wasn't present. The first time you enter the DefineAuthChallenge Lambda, this error doesn't show up, because the session doesn't contain any challenge answers yet. The moment you verify a challenge, though, the session gets filled and the uppercase "D" causes trouble.
Wrong part 2:
As described in my question, the VerifyAuthChallenge input "challengeAnswer" is a string, not a dictionary.
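For illustration, here is a minimal Python sketch of a VerifyAuthChallengeResponse handler that treats challengeAnswer as a JSON string. The event shape is the same regardless of the Lambda runtime, and the answer check itself is just a placeholder for whatever your CreateAuthChallenge trigger expects:

import json

def lambda_handler(event, context):
    # challengeAnswer arrives as a JSON-encoded string,
    # e.g. "{\"DeviceSub\":\"TestSub\"}", so parse it first.
    answer = json.loads(event['request']['challengeAnswer'])

    # Placeholder check: compare the parsed answer against whatever your
    # CreateAuthChallenge trigger stored in privateChallengeParameters.
    expected = event['request']['privateChallengeParameters'].get('CUSTOM_CHALLENGE')
    event['response']['answerCorrect'] = expected is not None and answer.get('DeviceSub') is not None

    return event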
All these wrong parts are displayed correctly on the first documentation page I linked here, so I would recommend using that instead of the other documentation.
Error on my side:
I didn't really check what happens after you verify a custom challenge via the VerifyAuthChallenge trigger. In the linked page, in the image above the heading 'DefineAuthChallenge: The challenges (state machine) Lambda trigger', it clearly states that after the response is verified, the DefineAuthChallenge trigger is invoked again, which I hadn't considered.
I hope this saves someone the time it took me to figure it out :-)
Just wondering what the best practice is for determining what permissions I should grant for my CloudFormation template?
After some time spent trying to grant the minimal permissions it requires, I find that that's really time-consuming and error-prone. I note that depending on the state of my stack (brand new vs. some updates vs. delete), I will need different permissions.
I guess it should be possible for there to be some parser that, given a CloudFormation template, can determine the minimum set of permissions it requires?
Maybe I can give ec2:* access to resources tagged Cost Center: My Project Name? Is this OK? But I wonder what happens when I change my project name, for example.
Alternatively, is it OK to grant, say, ec2:* access on the assumption that the CloudFormation parts are usually only executed off CodeCommit/GitHub/CodePipeline and are not something that is likely to be public/easy to hack? Though this sounds like a flawed assumption to me...
In the short term, you can use aws-leastprivilege. But it doesn't support every resource type.
For the long term: as mentioned in this 2019 re:Invent talk, CloudFormation is working towards open-sourcing and migrating most of its resource types to a new public resource schema. One of the benefits of this is that you'll be able to see the permissions required to perform each operation.
E.g. for AWS::ImageBuilder::Image, the schema says
"handlers": {
"create": {
"permissions": [
"iam:GetRole",
"imagebuilder:GetImageRecipe",
"imagebuilder:GetInfrastructureConfiguration",
"imagebuilder:GetDistributionConfiguration",
"imagebuilder:GetImage",
"imagebuilder:CreateImage",
"imagebuilder:TagResource"
]
},
"read": {
"permissions": [
"imagebuilder:GetImage"
]
},
"delete": {
"permissions": [
"imagebuilder:GetImage",
"imagebuilder:DeleteImage",
"imagebuilder:UnTagResource"
]
},
"list": {
"permissions": [
"imagebuilder:ListImages"
]
}
}
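These schemas can also be pulled programmatically from the CloudFormation registry. A minimal sketch with boto3, assuming the AWS::ImageBuilder::Image type above and default credentials/region:

import json
import boto3

cfn = boto3.client('cloudformation')

# describe_type returns the resource provider schema as a JSON string;
# the per-operation IAM permissions are listed under "handlers".
response = cfn.describe_type(Type='RESOURCE', TypeName='AWS::ImageBuilder::Image')
schema = json.loads(response['Schema'])

for operation, spec in schema.get('handlers', {}).items():
    print(operation, spec.get('permissions', []))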
I am using the Cloud Datastore to Cloud Storage Text template from Cloud Dataflow.
My Python code correctly submits the request and uses javascriptTextTransformFunctionName to run the correct function in my Google Cloud Storage bucket.
Here is a minimized part of the code that is running:
function format(inJson) {
  var output = {};
  output.administrator = inJson.properties.administrator.keyValue.path[0].id;
  return output;
}
And here is the JSON I am looking to format, cut down, with only the other children of "properties" removed:
"properties": {
"administrator": {
"keyValue": {
"path": [
{
"kind": "Kind",
"id": "5706504271298560"
}
]
}
}
}
}
And I am getting this exception:
java.lang.RuntimeException:
org.apache.beam.sdk.util.UserCodeException:
javax.script.ScriptException: TypeError: Cannot read
property "keyValue" from undefined in <eval> at line number 5
I understand what the error is saying, but I don't know why it's happening. If you take the format function and that JSON and run them through your browser console, you can easily test and see that it pulls out and returns an object with "administrator" equal to "5706504271298560".
I did not find the solution to your problem, but I hope this will be of some help:
I found this post and this one with the same issue. The first one was fixed by installing the NodeJS library, the second one by changing the kind of quotes used with Java.type().
The official Nashorn docs say to call Java.type with a fully qualified Java class name, and then call the returned function to instantiate a class from JavaScript.