We are trying to use the serverless WebSocket feature of API Gateway on AWS.
During initial testing we observed that the idle connection timeout for such a WebSocket is 10 minutes. We have a requirement to increase this to 30 minutes so that the WebSocket connection does not close.
Is there any setting or alternative way to increase this default idle timeout?
If you look at the table under "API Gateway quotas for configuring and running a WebSocket API" in the AWS documentation, you can see that increasing the idle timeout is currently not supported.
My solution was to send a heartbeat (without any data, since we only need some traffic on the WebSocket so that API Gateway knows the connection is not idle) every 5 minutes, and it has been working well.
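A minimal sketch of that heartbeat from the client side, assuming a Python client built on the third-party websockets package; the endpoint URL and the "heartbeat" action name are placeholders for whatever your API actually uses:

```python
import asyncio
import json

import websockets  # third-party: pip install websockets

# Placeholder: substitute your own WebSocket stage URL.
WSS_URL = "wss://your-api-id.execute-api.us-east-1.amazonaws.com/production"
HEARTBEAT_SECONDS = 5 * 60  # stay comfortably under the 10-minute idle timeout


async def heartbeat(ws):
    while True:
        await asyncio.sleep(HEARTBEAT_SECONDS)
        # Any frame resets API Gateway's idle timer; a tiny message routed to
        # $default (or a dedicated "heartbeat" route) is enough.
        await ws.send(json.dumps({"action": "heartbeat"}))


async def main():
    async with websockets.connect(WSS_URL) as ws:
        hb = asyncio.create_task(heartbeat(ws))  # keep a reference so it isn't dropped
        try:
            async for message in ws:
                print(message)  # handle real messages here
        finally:
            hb.cancel()


asyncio.run(main())
```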
AWS API Gateway timeout after around 30 seconds. I connected API Gateway to a Lambda function that runs far longer than 30 seconds.
So the API call times out after around 30 seconds and returns something like a timeout response.
How do I solve this problem and get my response back from Lambda?
Thank you.
The API Gateway has a maximum integration timeout of 30 seconds (API Gateway Limits), so there is nothing you can do to increase it.
What you could do is accept the request, create an ID, and put it in a queue.
Then you send an HTTP 202 (Accepted) response with the request ID back to the client.
A Lambda function can then be triggered asynchronously from the queue to perform the work.
It persists the result somewhere under the request ID (perhaps only for a limited time).
The client can then use the request ID to poll a second API Gateway endpoint for the status, which returns the response once it is present.
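A minimal sketch of the front half of that pattern, assuming an SQS queue whose URL is provided in a QUEUE_URL environment variable; the response shape and field names are illustrative, not a fixed contract:

```python
import json
import os
import uuid

import boto3

sqs = boto3.client("sqs")


def handler(event, context):
    # The ID the client will later use to poll for the result.
    request_id = str(uuid.uuid4())

    # Hand the long-running work off to a queue-triggered worker Lambda.
    sqs.send_message(
        QueueUrl=os.environ["QUEUE_URL"],
        MessageBody=json.dumps({"requestId": request_id, "payload": event.get("body")}),
    )

    # 202 Accepted: the work is queued, not finished.
    return {
        "statusCode": 202,
        "body": json.dumps({"requestId": request_id}),
    }
```

The worker Lambda would store its result under the same request ID (for example in DynamoDB with a TTL, which covers the "only for a period of time" part), and the status endpoint simply reads that item back.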
You cannot do anything about the API Gateway timeout; it has a hard limit of 30 seconds.
One workaround is to put the Lambda function behind an Application Load Balancer (ALB) instead: the ALB idle timeout is configurable (it defaults to 60 seconds and can be raised), rather than a hard 30-second cap.
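If you go that route, a quick boto3 sketch of raising the ALB idle timeout; the load balancer ARN and the 300-second value are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Raise the ALB idle timeout so long-running Lambda responses are not cut off.
# The ARN and the 300-second value are placeholders.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/app/my-alb/abc123",
    Attributes=[{"Key": "idle_timeout.timeout_seconds", "Value": "300"}],
)
```

Keep in mind that the Lambda function itself is still capped at 15 minutes of execution time.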
Another option is the polling-based approach suggested in the previous answer by @Maurice.
I want to crawl data from a WebSocket data source. WebSocket data is usually an endless stream, while an AWS Lambda function has a timeout limit with a maximum allowed value of 900 seconds.
If my Lambda function acts as a WebSocket client and connects to a WebSocket URL, e.g. wss://ws-feed-public.sandbox.pro.coinbase.com, it receives data for at most 900 seconds and is then terminated.
How can I keep my Lambda function running forever? Thanks!
Right now I'm running my crawler inside a Linux VM; is it possible to migrate it to AWS Lambda?
AWS Lambda functions run for a maximum of 900 seconds (15 minutes).
There is no way to extend this.
You should continue using an Amazon EC2 instance or a container (ECS, Fargate).
Fun fact: When initially released, the limit was 3 minutes. It was later extended to 5 minutes, then to 15 minutes.
TL;DR: Is there a way to trigger an AWS Lambda function or Step Functions state machine based on an external system's WebSocket message?
I'm building a synchronization service which connects to a system that supports WebSockets. I can use timers in Step Functions to wake periodically and call Lambda functions to perform the sync, but I would prefer to subscribe to the WebSocket and perform the sync only when a message is received.
There are plenty of ways to expose websockets in AWS, but I haven't found a way to consume them short of something like an EC2 instance with a custom service running on it. I'm trying to stay in the serverless ecosystem.
It seems like consuming a websocket is a fairly common requirement; have I overlooked something?
Lambdas are ephemeral. They can't sit there waiting for a WebSocket message.
However, I think what you can do is use an Activity task. Once the step function reaches that state it will wait. The activity worker runs on an EC2 instance and subscribes to the WebSocket. When a message is received it polls for a task token (GetActivityTask) and calls SendTaskSuccess. The state machine then continues execution and calls the Lambda that performs the sync.
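A rough sketch of such an activity worker, assuming it runs on an EC2 instance or container and treating the activity ARN and feed URL below as placeholders:

```python
import asyncio
import json

import boto3
import websockets  # third-party: pip install websockets

sfn = boto3.client("stepfunctions")

# Placeholders for your own activity and external feed.
ACTIVITY_ARN = "arn:aws:states:us-east-1:123456789012:activity:sync-trigger"
WSS_URL = "wss://example.com/feed"


async def main():
    async with websockets.connect(WSS_URL) as ws:
        async for message in ws:
            # The state machine is parked on the activity state; fetch its task
            # token and report success so execution moves on to the sync Lambda.
            task = sfn.get_activity_task(activityArn=ACTIVITY_ARN)
            token = task.get("taskToken")
            if token:  # an empty token means no execution is currently waiting
                sfn.send_task_success(
                    taskToken=token,
                    output=json.dumps({"trigger": str(message)}),
                )


asyncio.run(main())
```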
You can use the AWS API Gateway service with Lambda. It supports WebSockets and can trigger a Lambda function on each request.
I'm using Spring JMS to communicate with Amazon SQS queues. I set up a handful of queues and wired up the listeners, but the app isn't sending any messages through them currently. AWS allows 1 million requests per month for free, which I thought should be no problem, but after a month I got billed a small amount for going over that limit.
Is there a way to tune SQS or Spring JMS to keep the requests down?
I'm assuming a request is whenever my app polls the queue to check for new messages. Some queues don't need to be near realtime so I could definitely reduce those requests. I'd appreciate any insights you can offer into how SQS and Spring JMS communicate.
"Normal" JMS clients, when polled for messages, don't poll the server - the server pushes messages to the client and the poll is just done locally.
If the SQS client polls the server, that would be unusual, to say the least, but if it's using REST calls, I can see why it would happen.
Increasing the container's receiveTimeout (default 1 second) might help, but without knowing what the client is doing under the covers, it's hard to tell.
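A different lever worth knowing about: this is not the Spring-side receiveTimeout mentioned above, but the queue-level equivalent of enabling SQS long polling, so that an empty receive waits up to 20 seconds for a message before returning and the listener makes far fewer requests per minute. A boto3 sketch with a placeholder queue URL:

```python
import boto3

sqs = boto3.client("sqs")

# Enable long polling on the queue itself: an empty receive now waits up to
# 20 seconds for a message before returning, so the listener polls far less
# often. The queue URL is a placeholder.
sqs.set_queue_attributes(
    QueueUrl="https://sqs.us-east-1.amazonaws.com/123456789012/my-queue",
    Attributes={"ReceiveMessageWaitTimeSeconds": "20"},
)
```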
We are using wso2am-1.10.0 with a gateway cluster. The throttling tier limits are not behaving as expected.
We have redefined the “Bronze” tier to allow 3 requests in 1 minute. The application is set to “Large”, the subscription to the API is set to “Bronze” and the resource is set to “Unlimited”. We understand that “Bronze” is the most restrictive tier in this case.
We have tested the API with and without a load-balancer:
With load-balancer:
We are always allowed to exceed the 3 requests per minute, and the throttling behaviour then becomes inconsistent, although the allowed count does not seem to be a multiple of the number of gateway workers.
Without load-balancer:
When we call our API directly through one gateway worker of the cluster, that worker allows 4 requests (app-tier limit + 1) before returning a quota failure. Then if we call the API within the same minute, but through another gateway worker of the cluster, that worker allows one more request before failing because of the quota limit.
We have tested the API without clustering the gateways and it works as expected.
Any ideas?
Thanks in advance!