Mock SQS Config

I've created a Mule application with the following Amazon SQS configuration, which lives in my config.xml file:
<sqs:config name="amazonSQSConfiguration" accessKey="${aws.sqs.accessKey}" secretKey="${aws.sqs.secretKey}" url="${aws.sqs.baseUrl}" region="${aws.account.region}" protocol="${aws.sqs.protocol}" doc:name="Amazon SQS: Configuration">
<reconnect count="5" frequency="1000"/>
</sqs:config>
This is problematic for me because when this SQS configuration loads, it tries to connect to the Amazon SQS queue and can't, since the queue is not reachable from my machine.
For MUnit unit-testing purposes, I'm looking for a way to stop this from trying to connect on load.
Is there a way I can mock this sqs:config?
Please note this is different from mocking the connector in my flow; in this case I need to mock the config itself.
Or I'm happy to hear any other suggestions.
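One idea I'm exploring (a sketch, not verified): since the config values are property placeholders, a test could point them at a local SQS-compatible stand-in such as ElasticMQ before the Mule context starts. This assumes the placeholder resolution falls back to system properties (or that these keys are left out of the test properties file); all values below are dummies.

import org.junit.BeforeClass;

public class SqsConfigTestBootstrap {

    // Runs before the Mule app context is created, so the ${...} placeholders
    // in sqs:config can resolve to a local ElasticMQ endpoint instead of the
    // unreachable Amazon SQS queue. Assumes the property placeholder
    // configuration falls back to system properties.
    @BeforeClass
    public static void pointSqsConfigAtLocalStub() {
        System.setProperty("aws.sqs.accessKey", "test-access-key");     // dummy credentials
        System.setProperty("aws.sqs.secretKey", "test-secret-key");
        System.setProperty("aws.sqs.baseUrl", "http://localhost:9324"); // ElasticMQ's default port
        System.setProperty("aws.account.region", "us-east-1");
        System.setProperty("aws.sqs.protocol", "HTTP");
    }
}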
thanks

Related

Serverless and the Observer pattern (is it possible to listen to events from a Lambda?)

I was thinking about serverless architecture and the Observer pattern (I think it does not fit into serverless).
How can I implement the Observer design pattern using a Lambda or similar (serverless architecture)?
My use case is:
Listening to events from the Ethereum blockchain using ethers.js or web3.js (this is really a polling mechanism; for the sake of the process I need to constantly poll the Ethereum RPC API so that it behaves like listening for events). I am doing this from an AWS Lambda function that is activated by a scheduled, time-based event (see the sketch after this list).
Sending some of the data collected in the previous step to a RabbitMQ broker on AWS (Amazon MQ).
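Roughly, one polling iteration looks like this (a simplified Java sketch using the standard eth_getLogs JSON-RPC call instead of ethers.js/web3.js; the endpoint URL, block range, and contract address are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class EthereumLogPoller {

    // One polling iteration: ask the Ethereum RPC API for contract logs in a
    // block range via the standard eth_getLogs JSON-RPC method. A scheduled
    // Lambda would run this on every invocation and remember the last block
    // it processed (e.g. in DynamoDB) so no events are missed between runs.
    public static String pollLogs(String rpcUrl, String fromBlock, String toBlock,
                                  String contractAddress) throws Exception {
        String body = String.format(
                "{\"jsonrpc\":\"2.0\",\"id\":1,\"method\":\"eth_getLogs\","
              + "\"params\":[{\"fromBlock\":\"%s\",\"toBlock\":\"%s\",\"address\":\"%s\"}]}",
                fromBlock, toBlock, contractAddress);

        HttpRequest request = HttpRequest.newBuilder(URI.create(rpcUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        // The JSON result contains the matching event logs; the next step in
        // the pipeline forwards (some of) them to RabbitMQ.
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
    }
}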
With this kind of technology stack (Lambdas, managed services, and so on), is it mandatory to listen from a 100% live (always-on) application?
I can't imagine a Lambda function implementing this pattern, but... ideas are welcome!
Thanks in advance
You can use a third-party service like the Streams API provided by Moralis to listen to smart contract events in real time through a webhook URL.
I think this is 💯 the best option for you, since you are looking for a serverless solution.
Once you have a stream linked to your webhook, you get the event data in your backend as soon as the transaction is confirmed on the blockchain.
I suggest you look at these tutorials on how to set it up:
For easy setup: https://www.youtube.com/watch?v=QtstmvVeI18
For Programmatic Setup: https://www.youtube.com/watch?v=Rzsu52dSfAg
Read more at: https://docs.moralis.io/streams-api
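To show what "getting the event data to your backend" can look like without any always-on server, here is a hedged sketch of an AWS Lambda behind API Gateway acting as the webhook target (the payload handling is illustrative, not the actual Moralis schema):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyRequestEvent;
import com.amazonaws.services.lambda.runtime.events.APIGatewayProxyResponseEvent;

public class StreamWebhookHandler
        implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    @Override
    public APIGatewayProxyResponseEvent handleRequest(APIGatewayProxyRequestEvent request,
                                                      Context context) {
        // The stream provider POSTs confirmed on-chain event data here, so
        // there is no always-on listener process: the Lambda only runs when
        // an event actually arrives.
        String eventPayload = request.getBody();
        context.getLogger().log("Received stream event: " + eventPayload);

        // ... parse the payload and forward the relevant fields to RabbitMQ ...

        // Respond 200 quickly so the provider does not retry the delivery.
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}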

How to load test an asynchronous pipeline?

I have the following pipeline in AWS
API-Gateway -> Lambda -> Kafka(Amazon MSK) -> Consumers
I need to load test the entire pipeline and each component in it (to identify bottlenecks).
I don't have any prior experience with load testing, so I didn't know where to start. Some blogs mentioned that JMeter can be used for load testing, but I later read that because the pipeline is asynchronous, it can't be done using JMeter.
How can I load test the pipeline? Is there any standard way to do it?
Any help is greatly appreciated. Thanks!
You can use any load testing tool that is capable of sending a message to the API Gateway and then reading it from Kafka.
When it comes to JMeter, it's capable of both:
API Gateway: Building a WebService Test Plan
Kafka consumer: Apache Kafka - How to Load Test with JMeter
If you want to measure the cumulative duration of the request from the API until it gets "consumed", wrap both samplers in a Transaction Controller (a plain-Java version of the same idea is sketched below).
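If you prefer scripting the measurement over building a full JMeter plan, the same idea can be sketched in plain Java: stamp each request with a correlation ID, POST it to the API Gateway endpoint, and poll Kafka until it appears. The broker address, topic, and URL below are placeholders.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.UUID;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PipelineLatencyProbe {

    public static void main(String[] args) throws Exception {
        // Placeholder MSK bootstrap servers and topic.
        Properties props = new Properties();
        props.put("bootstrap.servers", "my-msk-broker:9092");
        props.put("group.id", "latency-probe");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-topic"));

            // Join the group and get partition assignments before producing,
            // so the message sent below is not missed.
            consumer.poll(Duration.ofMillis(1000));

            // Stamp the request with an ID so it can be recognised on the consumer side.
            String correlationId = UUID.randomUUID().toString();
            long start = System.nanoTime();

            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://my-api-id.execute-api.us-east-1.amazonaws.com/prod/ingest"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString("{\"id\":\"" + correlationId + "\"}"))
                    .build();
            HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

            // Poll Kafka until the stamped message shows up, then report the
            // cumulative API Gateway -> Lambda -> MSK duration.
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    if (record.value().contains(correlationId)) {
                        System.out.println("End-to-end latency: "
                                + (System.nanoTime() - start) / 1_000_000 + " ms");
                        return;
                    }
                }
            }
        }
    }
}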

How to load test SNS --> SQS --> Lambda

I have a solution:
SNS --> SQS --> Lambda --> ES (Elasticsearch)
I want to test this under heavy load, like 10K or 5K requests to SNS per second.
The test records can be very small (1 KB) and any type of JSON record.
Is there any way to test this load? I didn't find anything native to AWS for this kind of test.
You could try JMeter. JMeter has support for testing JMS interfaces for messaging systems, and you can use the AWS Java SDK to get an SNS JMS interface.
Agreed, you can use JMeter to execute load testing against SNS. Create a Java Request sampler class using the AWS SDK library to publish messages to an SNS topic, build a jar, and install it under JMeter's lib/ext (a minimal sampler sketch follows below).
https://github.com/JoseLuisSR/awsmeter
In this repository you can find Java Request sampler classes created to publish messages to a Standard Topic or FIFO Topic; depending on the kind of topic, you may need other message properties, like a deduplication ID or group ID for FIFO topics.
There you can also find details on subscribing an SQS queue to an SNS topic.
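For reference, a stripped-down Java Request sampler along those lines might look like this (AWS SDK v1 style; the topic ARN and message are placeholder parameters you'd override in JMeter's UI):

import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.PublishRequest;
import org.apache.jmeter.config.Arguments;
import org.apache.jmeter.protocol.java.sampler.AbstractJavaSamplerClient;
import org.apache.jmeter.protocol.java.sampler.JavaSamplerContext;
import org.apache.jmeter.samplers.SampleResult;

public class SnsPublishSampler extends AbstractJavaSamplerClient {

    private AmazonSNS sns;

    @Override
    public Arguments getDefaultParameters() {
        // Shown as editable parameters in the Java Request sampler UI.
        Arguments args = new Arguments();
        args.addArgument("topicArn", "arn:aws:sns:us-east-1:123456789012:my-topic"); // placeholder
        args.addArgument("message", "{\"test\":\"payload\"}");
        return args;
    }

    @Override
    public void setupTest(JavaSamplerContext context) {
        // Credentials and region come from the default provider chain.
        sns = AmazonSNSClientBuilder.defaultClient();
    }

    @Override
    public SampleResult runTest(JavaSamplerContext context) {
        SampleResult result = new SampleResult();
        result.sampleStart();
        try {
            sns.publish(new PublishRequest(
                    context.getParameter("topicArn"),
                    context.getParameter("message")));
            result.setSuccessful(true);
        } catch (Exception e) {
            result.setSuccessful(false);
            result.setResponseMessage(e.getMessage());
        } finally {
            result.sampleEnd(); // JMeter records the elapsed time as the sample duration
        }
        return result;
    }
}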

GAE service running on Flexible Env. as target of a task queue

According to the Google documentation, a service running in the flexible environment can be the target of a push task:
Outside of the standard environment, you can't add tasks to push queues, but a service running in the flexible environment can be the target of a push task. You can specify this using the target parameter when adding a task to a queue or by specifying the default target for the queue in queue.yaml.
However, when I tried to do this I got 404 errors in the flexible service.
That's entirely expected, because the endpoint required for task queues (/_ah/queue/deferred) is not defined in the flexible service.
How do I make a flexible service a valid target for task queues?
Do I have to define that endpoint in my code in some way?
Usually, you'll need to write a handler in your worker service to do the processing after receiving a task. In the case of push tasks, the service will send HTTP requests to whatever URL you specify. If no URL is specified, the default URL /_ah/queue/[QUEUE_NAME] will be used.
Now, from the endpoint you mention, it seems you are using deferred tasks, which are a somewhat special kind. Please see this thread for a workaround that adds the needed URL entry. It mentions Managed VMs, but it should still work.
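So to answer the second question directly: yes, in the flexible environment you define the endpoint in your own code. A minimal sketch, assuming a Java servlet service and a queue named "my-queue" (both placeholders):

import java.io.IOException;
import java.util.stream.Collectors;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Handles push tasks for a queue named "my-queue". If you enqueue tasks with
// an explicit url, map the servlet to that url instead; for deferred tasks
// the thread linked above maps /_ah/queue/deferred the same way.
@WebServlet("/_ah/queue/my-queue")
public class TaskQueueWorkerServlet extends HttpServlet {

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // The task payload arrives as the POST body.
        String payload = req.getReader().lines().collect(Collectors.joining("\n"));

        // ... process the task here ...

        // A 2xx response tells the queue the task succeeded; anything else
        // makes it retry according to the queue's retry configuration.
        resp.setStatus(HttpServletResponse.SC_OK);
    }
}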

Reliably pushing events from API Gateway to Lambda using a queue

I currently have a 3rd party application pushing messages to a Lambda function through API gateway. The Lambda function needs to serialize, log, and push the message to another ESB that I have very little control over.
I'm trying to ensure that there is some kind of recovery mechanism in the case that the Lambda function is either at max load or cannot communicate with the ESB. I've read about Kinesis being a good option for exactly this, but the ESB does not support batching for my use case.
This would put me in a scenario where some messages make it to the ESB while others don't, which would ultimately cause the batch to fail. Then, when the batch is retried, the messages that already succeeded would be duplicated in the ESB.
Is there a way I could utilize the functionality that Kinesis offers without the batching? Is there another AWS offering that better fits my use case? Ideally I would have one message being handled by the Lambda function that stays in the queue until it is successfully pushed into the ESB.
Any tips would be much appreciated.
Thanks,
Matt
The following might be of help to you:
1) Set up API Gateway to write incoming messages to SQS, and 2) then set up a Lambda function on that SQS queue to serialize, log, and push the message to the external endpoint.
For the first part, "How to integrate API Gateway with SQS" will be of help (as already mentioned in the comments).
For the second part, this article might help you more: https://dzone.com/articles/integrate-sqs-and-lambda-serverless-architecture-f
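To make the second part concrete, here is a hedged sketch of the Lambda handler with an SQS event source (the ESB call is a placeholder):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

public class SqsToEsbHandler implements RequestHandler<SQSEvent, Void> {

    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            // Serialize and log the message before forwarding.
            context.getLogger().log("Forwarding message " + message.getMessageId()
                    + ": " + message.getBody());

            // Placeholder for the real ESB call. If it throws, the batch
            // returns to the queue and is retried, so a batch size of 1 gives
            // you "one message stays queued until it reaches the ESB".
            pushToEsb(message.getBody());
        }
        return null;
    }

    private void pushToEsb(String body) {
        // ... HTTP or queue call to the ESB goes here ...
    }
}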
Note that you can also choose what kind of trigger you would like based on your use case (a cron-based poll or an event-based trigger), and you have control over when you delete the message from SQS in your Lambda function. (You can also find very basic code in the Lambda blueprint named "sqs-poller".)
Thanks!