I have process instances that I want to delete using the POST Async Delete API. We get a response when calling this API, but the process instance count is not reduced. Is there any mistake? I am using Camunda 7.11 and MySQL 5.7.33, and I am calling the Camunda REST API from a Node server. Kindly help me resolve this.
const request = require('request-promise'); // assumes request-promise; `headers` and `processInstanceIds` are defined elsewhere

const options = {
    method: 'POST',
    // note the explicit protocol; the request library needs a full URI
    uri: 'http://localhost:8080/engine-rest/process-instance/delete',
    headers,
    body: {
        processInstanceIds: processInstanceIds,
        deleteReason: 'Removed Process Instances via Script',
        skipCustomListeners: true,
        skipSubprocesses: true
    },
    json: true
};

request(options).then(data => {
    return data;
}).catch(e => { throw e; }); // the arrow body needs a block to contain `throw`
Intermittently getting the following error when connecting to an AWS keyspace using a lambda layer
All host(s) tried for query failed. First host tried, 3.248.244.53:9142: Host considered as DOWN. See innerErrors.
I am trying to query a table in a keyspace using a Node.js Lambda function as follows:
import cassandra from 'cassandra-driver';
import fs from 'fs';
export default class AmazonKeyspace {
    tpmsClient = null;

    constructor () {
        let auth = new cassandra.auth.PlainTextAuthProvider('cass-user-at-xxxxxxxxxx', 'zzzzzzzzz');
        let sslOptions1 = {
            ca: [fs.readFileSync('/opt/utils/AmazonRootCA1.pem', 'utf-8')],
            host: 'cassandra.eu-west-1.amazonaws.com',
            rejectUnauthorized: true
        };
        this.tpmsClient = new cassandra.Client({
            contactPoints: ['cassandra.eu-west-1.amazonaws.com'],
            localDataCenter: 'eu-west-1',
            authProvider: auth,
            sslOptions: sslOptions1,
            keyspace: 'tpms',
            protocolOptions: { port: 9142 }
        });
    }

    getOrganisation = async (orgKey) => {
        const SQL = 'SELECT * FROM organisation WHERE organisation_id = ?;';
        return new Promise((resolve, reject) => {
            this.tpmsClient.execute(SQL, [orgKey], { prepare: true }, (err, result) => {
                if (err) reject(err.message); // reject on any driver error
                else resolve(result.rows);
            });
        });
    };
}
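For context, the class is used from the Lambda roughly like this (a sketch; the handler shape and file name are assumptions). The key point is instantiating the client once, outside the handler, so warm invocations reuse the connection instead of reconnecting on every call:

// Hypothetical Lambda entry point using the class above; creating the
// client outside the handler lets warm invocations reuse the TLS
// connection instead of opening a new one per call.
import AmazonKeyspace from './AmazonKeyspace.js';

const keyspace = new AmazonKeyspace();

export const handler = async (event) => {
    const rows = await keyspace.getOrganisation(event.orgKey);
    return { statusCode: 200, body: JSON.stringify(rows) };
};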
I am basically following this recommended AWS documentation:
https://docs.aws.amazon.com/keyspaces/latest/devguide/using_nodejs_driver.html
It seems that around 10-20% of the time the lambda function (cassandra driver) cannot connect to the endpoint.
I am pretty familiar with Cassandra (I already use a 6 node cluster that I manage) and don't have any issues with that.
Could this be a timeout or do I need more contact points?
I followed the recommended guides and checked the AWS console for errors, but none are shown.
UPDATE:
I am occasionally (about 1 in 50 calls when I invoke the function in parallel with 5 concurrent calls) getting the error below:
"All host(s) tried for query failed. First host tried,
3.248.244.5:9142: DriverError: Socket was closed at Connection.clearAndInvokePending
(/opt/node_modules/cassandra-driver/lib/connection.js:265:15) at
Connection.close
(/opt/node_modules/cassandra-driver/lib/connection.js:618:8) at
TLSSocket.
(/opt/node_modules/cassandra-driver/lib/connection.js:93:10) at
TLSSocket.emit (node:events:525:35)\n at node:net:313:12\n at
TCP.done (node:_tls_wrap:587:7) { info: 'Cassandra Driver Error',
isSocketError: true, coordinator: '3.248.244.5:9142'}
This exception may be caused by throttling on the Amazon Keyspaces side, resulting in the DriverError that you are seeing sporadically.
I would suggest taking a look at this repo, which should help you put measures in place to either prevent this issue or at least reveal the true cause of the exception.
For some of the errors you see in the logs, you will need to investigate Amazon CloudWatch metrics to determine whether you are experiencing throttling or system errors. I've built this AWS CloudFormation template to deploy a CloudWatch dashboard with all the appropriate metrics; it will provide better observability for your application.
A System Error indicates an event that must be resolved by AWS and is often part of normal operations; activities such as timeouts, server faults, or scaling activity can result in server errors. A User Error indicates an event that can usually be resolved by the user, such as an invalid query or exceeding a capacity quota. Amazon Keyspaces passes a System Error back as a Cassandra ServerError. In most cases this is a transient error, in which case you can retry your request until it succeeds. With the Cassandra driver's default retry policy, customers can also experience NoHostAvailableException or AllNodesFailedException, or messages like yours: "All host(s) tried for query failed". This is a client-side exception thrown once all hosts in the load balancing policy's query plan have attempted the request.
Take a look at this retry policy for Node.js, which should help resolve your "All hosts failed" exception or pass back the original exception.
The retry policies in the Cassandra drivers are pretty crude and cannot do more sophisticated things like circuit breaker patterns. You may eventually want to use a "fail fast" retry policy for the driver and handle the exceptions in your application code, as sketched below.
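As a rough illustration of that idea (a sketch under assumptions, not the exact policy from the linked repo; maxRetries is a made-up knob), a custom retry policy for the Node.js driver could look like this:

const cassandra = require('cassandra-driver');

// Minimal sketch of a custom retry policy built on the driver's
// policies.retry.RetryPolicy base class.
class SimpleRetryPolicy extends cassandra.policies.retry.RetryPolicy {
    constructor(maxRetries = 3) {
        super();
        this.maxRetries = maxRetries;
    }

    // Retry transient request errors (e.g. the ServerError that
    // Keyspaces returns) a few times, then fail fast.
    onRequestError(info, consistency, err) {
        if (info.nbRetry < this.maxRetries) {
            return this.retryResult(consistency, true); // retry on the same host
        }
        return this.rejectResult(); // surface the original exception
    }
}

// Plugged into the client alongside the options shown above:
// new cassandra.Client({ ..., policies: { retry: new SimpleRetryPolicy(5) } });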
I am stuck on how I should begin my Lambda handler that will read some data from DynamoDB. I have defined my API Gateway model with the request and response models; therefore, do I need to set any status codes in the Lambda handler? Do I use the API Gateway proxy response event? Any code examples in Java would be helpful.
My notes on what I should include in the Lambda handler:
- Access the DB
- Map over the table
- Find the attribute
- Return the response to the API?
What am I missing? Thank you.
If you decide to use a non-proxy Lambda integration, then you need to define an integration response using a regex. Here is an example using Node.js:
lambda regex
Normally I declare my error messages like this:
const errorMessages = {
    INTERNAL_SERVER_ERROR: {
        message: "Internal server error!",
        code: 500
    },
    ERROR_BODY_INVALID: {
        message: "Invalid request body",
        code: 400
    }
};
Then I pass the error back like this:
exports.handler = function(event, context, callback) {
    try {
        // do something
    } catch (error) {
        // stringify so the integration response's error regex can match on it
        callback(JSON.stringify(errorMessages.INTERNAL_SERVER_ERROR));
    }
};
When the Lambda error regex matches, the error is mapped to the configured response status code.
P.S. If you are using Java, this method does not work and you need to use a proxy integration.
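For comparison, here is a minimal sketch of the proxy-integration alternative in Node.js (reusing the errorMessages object above); with a proxy integration the handler sets the status code itself, so no error regex mapping is needed:

// Proxy-integration handler sketch: the returned object carries the
// status code directly, so no integration response mapping is required.
exports.handler = async (event) => {
    try {
        // do something with event
        return { statusCode: 200, body: JSON.stringify({ ok: true }) };
    } catch (error) {
        const { code, message } = errorMessages.INTERNAL_SERVER_ERROR;
        return { statusCode: code, body: JSON.stringify({ message }) };
    }
};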
I am relatively new to Google Cloud Platform, and I am able to create app services and manage databases. I am attempting to create a handler within Google Cloud Tasks (similar to the NodeJS sample found in this documentation).
However, the documentation fails to clearly address how to connect the deployed service with whatever is making the request. Necessity requires that I have more than one service in my project (one in Node for managing REST, and another in Python for managing geospatial data as asynchronous tasks).
My question: When running multiple services, how does Google Cloud Tasks know which service to direct the task towards?
Screenshot below as proof that I am able to request tasks to a queue.
When using App Engine routing for your tasks, the task is routed to the "default" service. However, you can override this by defining AppEngineRouting (selecting your service, instance, and version) in the AppEngineHttpRequest field.
The sample shows a task routed to the default service's /log_payload endpoint.
const task = {
    appEngineHttpRequest: {
        httpMethod: 'POST',
        relativeUri: '/log_payload',
    },
};
You can update this to:
const task = {
    appEngineHttpRequest: {
        httpMethod: 'POST',
        relativeUri: '/log_payload',
        appEngineRouting: {
            service: 'non-default-service'
        }
    },
};
Learn more about configuring routes.
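For reference, a sketch of creating such a routed task with the @google-cloud/tasks client; the project, location, and queue names below are placeholders:

const { CloudTasksClient } = require('@google-cloud/tasks');

// Sketch: enqueue a task routed to a non-default App Engine service.
// 'my-project', 'us-central1', and 'my-queue' are placeholder values.
async function enqueueLogPayloadTask(payload) {
    const client = new CloudTasksClient();
    const parent = client.queuePath('my-project', 'us-central1', 'my-queue');
    const task = {
        appEngineHttpRequest: {
            httpMethod: 'POST',
            relativeUri: '/log_payload',
            appEngineRouting: { service: 'non-default-service' },
            body: Buffer.from(payload).toString('base64'),
        },
    };
    const [response] = await client.createTask({ parent, task });
    console.log(`Created task ${response.name}`);
}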
I wonder which "services" you are talking about, because it is always the current service. These HTTP requests are dispatched with the HTTP headers HTTP_X_APPENGINE_QUEUENAME and HTTP_X_APPENGINE_TASKNAME, as you can see in the screenshot with sample-tasks and some random numbers. If you want to task other services, those will have to have their own task queue(s).
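To make that dispatch concrete, here is a hedged sketch of a task handler (Express, with the hypothetical /log_payload route from the samples) that checks those headers; App Engine strips X-AppEngine-* headers from external traffic, so their presence means the request came from the task queue service:

const express = require('express');
const app = express();

// Hypothetical handler for the /log_payload endpoint used above.
// X-AppEngine-* headers are stripped from external requests, so a
// present queue name means Cloud Tasks made the call.
app.post('/log_payload', express.text({ type: '*/*' }), (req, res) => {
    const queueName = req.get('X-AppEngine-QueueName');
    const taskName = req.get('X-AppEngine-TaskName');
    if (!queueName) {
        return res.status(403).send('Only Cloud Tasks may call this endpoint');
    }
    console.log(`Task ${taskName} from queue ${queueName}: ${req.body}`);
    res.sendStatus(200);
});

app.listen(process.env.PORT || 8080);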
I want to create a CRUD API using Micronaut and deploy it on AWS Lambda, exposing the different methods with Amazon API Gateway. I could create a different Kotlin project per endpoint (GET, POST, ...), each containing one function, but that's kind of cumbersome, so I'd rather have a single project with all the CRUD functions.
My current application contains two functions: one Supplier (GET) and one Consumer (POST).
Application:
object Application {
    @JvmStatic
    fun main(args: Array<String>) {
        Micronaut.build()
            .packages("micronaut.aws.poc")
            .mainClass(Application.javaClass)
            .start()
    }
}
Supplier:
#FunctionBean("micronaut-aws-poc")
class MicronautAwsPocFunction : Supplier<String> {
override fun get(): String {
println("GET")
return "micronaut-aws-poc"
}
}
Consumer:
#FunctionBean("micronaut-aws-poc-post")
class MicronautAwsPocPostFunction : Consumer<String> {
override fun accept(t: String) {
println("POST $t")
}
}
Then, I have created a resource in Amazon API Gateway with one GET and one POST method. The problem is, no matter which one I call, the MicronautAwsPocFunction is always invoked.
Is it possible/recommended to embed many functions in a single jar?
How can I make POST invocations call the MicronautAwsPocPostFunction instead of the MicronautAwsPocFunction?
In case I wanted an additional PUT function, how could I model it?
I tried a different approach, and this is how I solved it:
Instead of using functions, I switched to Lambda functions using the AWS API Gateway proxy integration. Also take into account the specific AWS Lambda documentation.
I recreated the project with this command: mn create-app micronaut-poc --features aws-api-gateway -l kotlin
Now I have a "normal" REST application with two controllers:
#Controller("/")
class PingController {
#Get("/")
fun index(): String {
return "{\"pong\":true}"
}
}
#Controller("/")
class PongController {
#Post("/")
fun post(): String {
println("PONG!!!!!!!")
return "{\"ping\":true}"
}
}
The magic happens in the AWS API Gateway configuration: we have to configure a proxy resource.
Finally, we can invoke the Lambda from API Gateway using the correct HTTP method. IMPORTANT: set a Host header, otherwise Micronaut will throw a NullPointerException.
I have a multi-endpoint webservice written in Flask and running on API Gateway and Lambda thanks to Zappa.
I have a second, very tiny, Lambda, written in Node, that periodically hits one of the webservice endpoints. I do this by configuring the little Lambda to have Internet access and then using Node's https.request with these options:
const options = {
    hostname: 'XXXXXXXXXX.execute-api.us-east-1.amazonaws.com',
    port: 443,
    path: '/path/to/my/endpoint',
    method: 'POST',
    headers: {
        'Authorization': `Bearer ${s3cretN0tSt0r3d1nTheC0de}`,
    }
};
and this works beautifully. But now I am wondering whether I should instead make the little Lambda invoke the API endpoint directly using the AWS SDK. I have seen other S.O. questions on invoking Lambdas from Lambdas, but I did not see any examples where the target Lambda was a multi-endpoint webservice. All the examples I found used new AWS.Lambda({...}) and then called invokeFunction with params.
Is there a way to pass, say, an event to the target Lambda which contains the path of the specific endpoint I want to call (and the auth headers, etc.)? OR is this just a really dumb idea, given that I have working code already? My thinking is that a direct SDK Lambda invocation might (is this true?) bypass API Gateway and be cheaper, BUT hitting the endpoint directly via API Gateway is better for logging. And since the periodic Lambda runs once a day, it's probably free anyway.
If what I have now is best, that's a fine answer. A Lambda invocation answer would be cool too, since I've not been able to find a good example in which the target Lambda had multiple HTTPS endpoints.
You can invoke the Lambda function directly using the invoke method in the AWS SDK.
var params = {
    ClientContext: "MyApp",
    FunctionName: "MyFunction",
    InvocationType: "Event",
    LogType: "Tail",
    Payload: <Binary String>,
    Qualifier: "1"
};
lambda.invoke(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else console.log(data);               // successful response
    /*
    data = {
        FunctionError: "",
        LogResult: "",
        Payload: <Binary String>,
        StatusCode: 123
    }
    */
});
Refer to the AWS JavaScript SDK lambda.invoke method for more details.
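To address the multi-endpoint part of the question, here is a hedged sketch of invoking the target Lambda with a hand-crafted API Gateway proxy-style event, so the web framework behind it (e.g. Zappa/Flask) can route by path. The function name and field values are placeholders, and depending on the framework adapter more fields (such as requestContext) may be required:

const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({ region: 'us-east-1' });

// The event mimics an API Gateway proxy event so the framework inside
// the target Lambda can route by path; values are placeholders.
const event = {
    httpMethod: 'POST',
    path: '/path/to/my/endpoint',
    headers: { Authorization: 'Bearer <token>' },
    queryStringParameters: null,
    body: null,
    isBase64Encoded: false
};

lambda.invoke({
    FunctionName: 'my-zappa-webservice', // placeholder function name
    InvocationType: 'RequestResponse',
    Payload: JSON.stringify(event)
}, (err, data) => {
    if (err) console.error(err);
    else console.log(JSON.parse(data.Payload)); // { statusCode, headers, body, ... }
});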