I have the following pipeline in AWS:
API Gateway -> Lambda -> Kafka (Amazon MSK) -> Consumers
I need to load test the entire pipeline and each component in it (to identify bottlenecks).
I don't have any prior experience in load testing, so I didn't know where to start. Some blogs mentioned that JMeter can be used for load testing, but later I learned that the pipeline is asynchronous and that it can't be done with JMeter.
How can I load test the pipeline? Is there any standard way to do it?
Any help is greatly appreciated. Thanks!
You can use any load testing tool that is capable of sending a message to the API Gateway and then reading it from Kafka.
When it comes to JMeter, it's capable of both:
API Gateway: Building a WebService Test Plan
Kafka consumer: Apache Kafka - How to Load Test with JMeter
If you want to measure the cumulative duration of the request from the API call until the message gets "consumed", use a Transaction Controller.
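For the Kafka end of the measurement, here is a minimal sketch of a standalone consumer (a plain Apache Kafka Java client rather than a JMeter element) that computes end-to-end latency. It assumes the load generator writes the epoch-millis send time as the message body; the broker address and the topic name "pipeline-topic" are placeholders.

// Reads records from MSK and reports how long each one took to travel
// API Gateway -> Lambda -> Kafka. Assumes the message value is the
// epoch-millis timestamp written by the load generator.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LatencyConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "your-msk-broker:9092"); // placeholder broker
        props.put("group.id", "load-test-latency");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("pipeline-topic")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    long sentAt = Long.parseLong(record.value()); // timestamp set by the load generator
                    long latencyMs = System.currentTimeMillis() - sentAt;
                    System.out.println("End-to-end latency: " + latencyMs + " ms");
                }
            }
        }
    }
}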
I have a solution:
SNS --> SQS --> Lambda --> ES (Elasticsearch)
I want to test this with a heavy load, like 5K or 10K requests per second to SNS.
The test record can be very small (1 KB) and any type of JSON record.
Is there any way to test this load? I didn't find anything native to AWS for this test.
You could try with JMeter. JMeter has support for testing JMS interfaces for messaging systems, and you can use the AWS Java SDK to get an SNS JMS interface.
Agreed, you can use JMeter to run load tests against SNS. Create a Java Request sampler class using the AWS SDK library to publish messages to an SNS topic, build a jar, and install it under lib/ext.
https://github.com/JoseLuisSR/awsmeter
In this repository you can find Java Request sampler classes created to publish messages to a Standard or FIFO topic; depending on the kind of topic, you need to use additional message properties, such as the deduplication ID or group ID for FIFO topics.
There you can also find details on how to subscribe an SQS queue to an SNS topic.
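If you prefer to write the sampler yourself instead of using awsmeter, here is a minimal sketch of a JMeter Java Request sampler that publishes to a standard SNS topic with the AWS SDK for Java v1. The class name, parameter names, and topic ARN are placeholders, and credentials come from the default provider chain.

// Build this into a jar together with the AWS SDK and drop it under lib/ext,
// then pick the class in a "Java Request" sampler.
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.PublishRequest;
import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

public class SnsPublishSampler extends AbstractJavaSamplerClient {

    private AmazonSNS sns;

    @Override
    public Arguments getDefaultParameters() {
        Arguments args = new Arguments();
        args.addArgument("topicArn", "arn:aws:sns:us-east-1:123456789012:load-test"); // placeholder ARN
        args.addArgument("message", "{\"test\":\"payload\"}");                        // placeholder body
        return args;
    }

    @Override
    public void setupTest(JavaSamplerContext context) {
        sns = AmazonSNSClientBuilder.defaultClient(); // uses the default credential chain and region
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.sampleStart();
        try {
            sns.publish(new PublishRequest(context.getParameter("topicArn"),
                                           context.getParameter("message")));
            result.setSuccessful(true);
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage(e.getMessage());
        } finally {
            result.sampleEnd();
        }
        return result;
    }
}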
I'm deploying my .NET Core 2.1 application on AWS Lambda, using the AspNetCoreServer package for proxy routing to my controllers, and I found a problem with this solution: on the first request the Lambda is very slow to execute the controller action, but subsequent requests are fast. I looked at the CloudWatch logs to understand what is happening and saw that the longest time is in ControllerActionInvoker: Route Match before my action is invoked. I would like to know whether I did something wrong or whether .NET Core is just slow on AWS Lambda.
My log evidence:
Here is my first request log:
And my second request log:
Thank you
In fact, the first slow request is not caused only by the Lambda cold start.
With .NET Core in Lambda you have two cold starts: the cold start of the Lambda itself and the cold start of .NET Core itself.
In order to avoid these two cold starts you have to:
Lambda cold start: warm your Lambda by calling it every 5 minutes (see the warmer sketch after this answer)
.NET Core cold start: warm your .NET Core API by calling all your endpoints at startup
Refer to this GitHub issue to learn more about the first slow request in .NET Core (I still hope this problem will be fixed or better handled in future .NET Core releases, but right now you don't have a better option).
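For the first point, here is a minimal sketch of a warmer function (written in Java here purely for illustration, AWS SDK for Java v1) that you would trigger from a CloudWatch Events/EventBridge schedule every 5 minutes. The target function name and warm-up payload are placeholders.

// A tiny scheduled "warmer" Lambda that invokes the target function so it stays warm.
import com.amazonaws.services.lambda.AWSLambda;
import com.amazonaws.services.lambda.AWSLambdaClientBuilder;
import com.amazonaws.services.lambda.model.InvokeRequest;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

public class WarmerHandler implements RequestHandler<Object, String> {

    private final AWSLambda lambda = AWSLambdaClientBuilder.defaultClient();

    @Override
    public String handleRequest(Object input, Context context) {
        InvokeRequest request = new InvokeRequest()
                .withFunctionName("my-aspnetcore-function")   // placeholder function name
                .withPayload("{\"warmup\": true}");           // payload your API can recognize and ignore
        lambda.invoke(request);
        return "warmed";
    }
}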
Cold start (the first Lambda invocation) is not a problem specific to .NET Core.
You can find a timing comparison for different languages in this article.
I've created a Mule application with the following SQS configuration. This lives in my config.xml file.
<sqs:config name="amazonSQSConfiguration" accessKey="${aws.sqs.accessKey}" secretKey="${aws.sqs.secretKey}" url="${aws.sqs.baseUrl}" region="${aws.account.region}" protocol="${aws.sqs.protocol}" doc:name="Amazon SQS: Configuration">
<reconnect count="5" frequency="1000"/>
</sqs:config>
This is problematic for me because when this SQS configuration loads, it tries to connect to the Amazon SQS queue but can't, since the queue is not accessible from my machine.
For MUnit testing purposes, I'm looking for a way to stop it from trying to connect at load time.
Is there a way I can mock this sqs:config?
Please note this is different from mocking the connector in my flow; in this case I need to mock the config.
Otherwise, I'm happy to hear any other suggestions.
Thanks
I have a job that uses a tESBConsumer component to call a remote web service.
This web service takes between 55 and 65 seconds to answer, and the default timeout is set to 60 s: I sometimes get a read timeout on this web service call, so I want to push the timeout a bit higher.
I am deploying the job on a JobServer (not on Karaf, as we have a split configuration: Karaf is for services only, JobServer for the jobs).
Thus, Talend's advice on configuring the timeout is to use the org.apache.cxf.http.conduits-common.cfg file, which is only available on Karaf, not on a JobServer! (See the Talend doc.)
=> Is there a way to configure the read timeout option on a JobServer?
Actually, using the advanced settings of tESBConsumer works in this case, even though they are documented as working only in the Studio.
So the "timeout" fields can be used for a deployment on a JobServer.
I want to schedule (by date and time) a web service call using Spring Integration. I am planning to use the configuration below to invoke the REST web service. I am new to web services and SI. Could any of you help me come up with a scheduler to do this?
<int-http:outbound-gateway request-channel="sampleRequestChannel"
reply-channel="sampleReplyChannel"
url="http://<server details>"
http-method="POST" expected-response-type="java.lang.String" />
To read the data from the DB there are JDBC adapters. One of them is:
<int-jdbc:inbound-channel-adapter>
<int:poller/>
</int-jdbc:inbound-channel-adapter>
with which you can poll some table in the DB periodically for the fresh date-and-time value and send it as a payload into the Spring Integration flow.
Another is <int-jdbc:outbound-gateway>, which is driven by the upstream flow and can be triggered by a user event.
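To show how a poller and the HTTP outbound gateway fit together, here is a hedged sketch in the Spring Integration Java DSL (a swap from the XML style shown above, just for illustration); a cron-based poller fires and the message is POSTed by an HTTP outbound gateway equivalent to the <int-http:outbound-gateway> in the question. The cron expression, payload, and bean names are placeholders, and the URL is kept as the placeholder from the question.

// Assumes Spring Integration 5.x with spring-integration-http on the classpath.
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.http.dsl.Http;

@Configuration
public class ScheduledWsCallConfig {

    @Bean
    public IntegrationFlow scheduledCallFlow() {
        return IntegrationFlows
                // build the request payload each time the cron trigger fires (placeholder: 09:00 daily)
                .fromSupplier(() -> "request body",
                        e -> e.poller(Pollers.cron("0 0 9 * * *")))
                // POST it to the REST endpoint, mirroring the XML outbound gateway above
                .handle(Http.outboundGateway("http://<server details>")
                        .httpMethod(HttpMethod.POST)
                        .expectedResponseType(String.class))
                .channel("sampleReplyChannel")
                .get();
    }
}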