I have a try {} catch (err) {} block in my Jenkins CI/CD pipeline that deploys a YAML template to CloudFormation in AWS. However, when I tested this out and purposely made something incorrect in the YAML file, I can see the stack fails in CloudFormation, but the Jenkins pipeline still carries on to the next stage and eventually marks the build as successful, which is not the desired outcome.
I want to fail the pipeline completely if my CloudFormation stack (or any external process) fails. How can I do this? Is it possible?
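For context, a catch block that only logs the error swallows the non-zero exit of the deploy step, so the stage, and therefore the build, still passes. A sketch of that pattern, assuming the deploy is an aws cloudformation deploy call in an sh step (command, template and stack name are placeholders):
script {
    try {
        sh 'aws cloudformation deploy --template-file template.yaml --stack-name my-stack'
    } catch (err) {
        echo "Deploy failed: ${err.getMessage()}"   // error is swallowed; the build stays green
    }
}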
In your catch block, you can check for a specific error message and re-throw it to fail the build. Something like this:
script {
    try {
        sh 'letsGenerateAnError'
    }
    catch (error) {
        if (error.getMessage().contains("Some error you want to check")) {
            throw error   // re-throwing marks the build as failed
        }
        echo "Error occurred while running, but ignoring it. Message: ${error.getMessage()}"
    }
}
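Applied to the CloudFormation deploy itself, that might look like the sketch below (the command, template and stack name are placeholders); re-throwing, or calling Jenkins' error step, is what actually fails the build:
script {
    try {
        // the aws cli exits non-zero when the stack create/update fails,
        // which makes the sh step throw
        sh 'aws cloudformation deploy --template-file template.yaml --stack-name my-stack'
    } catch (err) {
        echo "CloudFormation deploy failed: ${err.getMessage()}"
        error("Failing the build: ${err.getMessage()}")   // or simply: throw err
    }
}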
Related
We have a Node.js Lambda deployed in AWS. It works fine, but whenever errors happen, the details of the error are not shown in AWS CloudWatch. CloudWatch shows all console.info output but does not show the stack trace of the exception. If we run the Lambda on our local machines, the console logs look like the one below:
****START METHOD EXECUTION****
****END METHOD EXECUTION******
/Users/john.doe/Documents/workspace/myproject/user-service/dbutility.js:45
await connection.query('INSERT INTO users SET ?', record, async function (error, results, fields) {
^
at Handshake.onConnect (/Users/john.doe/Documents/workspace/myproject/user-service/node_modules/mysql/lib/Pool.js:58:9)
at Handshake.<anonymous> (/Users/john.doe/Documents/workspace/myproject/user-service/node_modules/mysql/lib/Connection.js:526:10)
at Handshake._callback (/Users/john.doe/Documents/workspace/myproject/user-service/node_modules/mysql/lib/Connection.js:488:16)
at Sequence.end (/Users/john.doe/Documents/workspace/myproject/user-service/node_modules/mysql/lib/protocol/sequences/Sequence.js:83:24)
at Protocol.handleNetworkError (/Users/john.doe/Documents/workspace/myproject/user-service/node_modules/mysql/lib/protocol/Protocol.js:369:14)
at Connection._handleNetworkError (/Users/john.doe/Documents/workspace/myproject/user-service/node_modules/mysql/lib/Connection.js:418:18)
at Socket.emit (node:events:513:28)
at Socket.emit (node:domain:489:12)
at emitErrorNT (node:internal/streams/destroy:151:8)
But when the same Lambda is deployed in AWS, the CloudWatch logs only show the following:
****START METHOD EXECUTION****
****END METHOD EXECUTION******
In our code, we catch errors using the usual try/catch:
try {
} catch (err) {
    console.error(err);
}
How can we display the error stack trace or details in the CloudWatch logs?
I recently created a scheduled trigger by following this Google page: . But when I did a test run from the Cloud Scheduler interface, the result was a NOT_FOUND error:
{
@type: "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished"
jobName: "projects/myproject/locations/australia-southeast1/jobs/trigger-schedule"
status: "NOT_FOUND"
targetType: "HTTP"
url: "https://cloudbuild.googleapis.com/v1/projects/myproject/triggers/ca55b01d-f4e6-4b8b-b92b-b2e4f380788c:run"
}
I was worried about the location, which is App Engine related; even though there are no instances, the location shows as australia-southeast1, which is correct.
What could be the cause of the error? And what exactly was not found: the job definition or the target?
After running gcloud beta builds triggers run TRIGGER, which is what the scheduled job runs, I found that the cloudbuild.yaml does not exist in the targeted branch.
First, I wish the error in the scheduler had been more meaningful and included some details.
Second, triggers all have conditions for how they are triggered. Maybe the HTTP POST call to the trigger could allow an empty body to use the default condition. In my case, the condition defined in the trigger was branch = test, while my scheduled job definition specified branch = master. This mismatch caused the problem.
Hope this helps others debug scheduled triggers.
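For reference, the scheduled job's HTTP target POSTs a RepoSource body to the trigger's :run endpoint, and the branchName in that body has to match the trigger's condition. Something like the following (value illustrative):
{
  "branchName": "test"
}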
I have enabled X-Ray for my Step Functions state machine, and in the X-Ray trace map, in the Subsegment section, I can locate which step in the state machine has caught an error, but it only says States.TaskFailed with no actual error message (screenshot shown below).
However, if I navigate to the Step Functions execution event history, I can locate the 'TaskStateExited' event, and I see something like:
"name": "xxxxxxx",
"output": {
"Error": "States.TaskFailed",
"Cause": xxxxxxxxxxx (the actual error message)
I wonder if there is a way I can see this error message directly in X-Ray without navigating to the specific execution event history? Since X-Ray is supposed to make monitoring easier and help us debug, how come it's not showing the actual error message in the trace map?
I've only been able to do this manually by running my code in a try/except block which traps the error and then using a subsegment.addError() call to add the exception information to the trace segment before re-throwing the exception. I'm not sure of a way to get X-Ray to do that automatically... here's a thread on the AWS forums that provides a bit of background: https://forums.aws.amazon.com/thread.jspa?threadID=282800&tstart=0
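For illustration, with the Node.js X-Ray SDK that pattern looks roughly like the sketch below; riskyOperation and the subsegment name are placeholders:
const AWSXRay = require('aws-xray-sdk-core');

async function doWork() {
    // getSegment() returns the current segment/subsegment from the X-Ray context
    const subsegment = AWSXRay.getSegment().addNewSubsegment('risky-work');
    try {
        return await riskyOperation();   // placeholder for the real call
    } catch (err) {
        subsegment.addError(err);        // attach the exception details to the trace
        throw err;                       // re-throw so the failure still surfaces
    } finally {
        subsegment.close();
    }
}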
Step Functions sends all the collected Errors and Causes to X-Ray, but they may not appear in the subsegment for the state; they may instead be in the one for the task.
1- In X-Ray, check the "Raw data" tab of the trace. Does the error appear in the JSON there?
2- In the Timeline tab, you should be able to see the error under the task:
This is the state subsegment:
And this is the task subsegment:
If you still can't find the error in X-Ray, please post the state machine definition and the raw trace JSON.
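For orientation, when the error does make it into the trace, the relevant piece of the Raw data JSON looks roughly like this (a sketch; field values are placeholders):
{
  "name": "MyTaskState",
  "error": true,
  "cause": {
    "exceptions": [
      {
        "type": "States.TaskFailed",
        "message": "the actual error message reported by the task"
      }
    ]
  }
}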
I'm using Cloud Run, triggered by a Pub/Sub message.
But when the Cloud Run code hits an error, the application is re-run over and over again.
This seems unnecessary now while testing, because I can see the error in the log and don't need the code to re-run.
Where can I turn this off?
I'm using Node.js.
You can purge your Pub/Sub push subscription, or delete it.
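With gcloud that would be something like the following (the subscription name is a placeholder); deleting removes the subscription entirely, while seeking it to the current time effectively purges the backlog:
# delete the push subscription entirely
gcloud pubsub subscriptions delete my-push-subscription

# or purge it: seek to now, so all previously published messages are acknowledged
gcloud pubsub subscriptions seek my-push-subscription --time=$(date -u +%Y-%m-%dT%H:%M:%SZ)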
Solved it short term by enclosing the whole code block in a try/catch and making sure to always throw err so the error is caught.
After that, instead of returning a 400 status in the catch block, I returned 200, and the Pub/Sub message got acked as if everything was working (even if it was not).
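A minimal sketch of that short-term fix for a Pub/Sub push endpoint on Cloud Run, using Express; handleMessage is a hypothetical handler:
const express = require('express');
const app = express();
app.use(express.json());

app.post('/', async (req, res) => {
    try {
        await handleMessage(req.body.message);   // hypothetical message handler
        res.status(204).send();
    } catch (err) {
        console.error(err);
        // Returning 2xx acks the push message, so Pub/Sub will NOT redeliver it.
        // A 4xx/5xx response would make Pub/Sub retry, i.e. re-run the code.
        res.status(200).send();
    }
});

app.listen(process.env.PORT || 8080);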
We have an AWS Lambda function written in Go, which upon initialisation runs the following to initialise AWS X-Ray:
err := xray.Configure(xray.Config{
    LogLevel:       "info",
    ServiceVersion: "1.2.3",
})
In a separate repository, we have a utils repository which exposes an HTTP library for our internal stuff. This is imported as a git submodule into all other Lambdas. The code is as follows:
ctx, subseg := xray.BeginSubsegment(incomingContext, "Outbound HTTP call")
client := xray.Client(&http.Client{Transport: tr})
// further down
client.Do(req)
// finally
subseg.Close(resp)
This works as expected when deployed on AWS, producing a nice graph.
The problem is running unit tests on the utils repository. In the context of that repository alone, X-Ray has not been configured, so on the BeginSubsegment call I get a panic:
panic: failed to begin subsegment named 'Outbound HTTP call': segment cannot be found.
I want to gracefully handle the case when X-Ray has not been configured, log it, and carry on execution regardless.
How can I properly handle the error from the BeginSubsegment call when it does not return an error object?
In the case of Lambda, this code executes without any panic because Lambda creates a facade segment, and your code then creates subsegments under it. In a non-Lambda environment you have to create a segment first before creating a subsegment; if you don't, it will panic. If you want to log this panic and continue executing your unit tests, I would recommend setting the AWS_XRAY_CONTEXT_MISSING environment variable to LOG_ERROR. It will log the error and continue executing your unit tests.
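For example, in the utils repository's tests the variable could be set before any test runs; a sketch (the package name is illustrative):
package utils_test

import (
    "os"
    "testing"
)

// TestMain runs once for the package. With AWS_XRAY_CONTEXT_MISSING=LOG_ERROR,
// BeginSubsegment logs the missing-segment error and carries on instead of panicking.
func TestMain(m *testing.M) {
    os.Setenv("AWS_XRAY_CONTEXT_MISSING", "LOG_ERROR")
    os.Exit(m.Run())
}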