AWS SNS and load balancer - amazon-web-services

I'm currently facing a problem while thinking about an event-driven architecture that uses SNS to decouple some applications.
Imagine an SNS topic: application A produces messages to it, and application B listens and consumes messages from it.
I want each message to be consumed only once, even after I attach one more instance to application B. Should I use a load balancer for this or not?
The idea is to subscribe to the topic using the load balancer URL and let it distribute the messages between the instances of application B.
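One common way to get exactly this once-per-group behavior is the SNS-to-SQS fan-out pattern: subscribe a single SQS queue to the topic, and let every instance of application B poll that same queue. Because a received message becomes invisible to other consumers for the visibility timeout, each message is normally processed by only one instance. A minimal polling worker might look like the sketch below; the queue URL and handler are placeholders, and the SQS client is passed in so the loop can be exercised without AWS:

```python
import json

def poll_once(sqs, queue_url, handle):
    """Receive up to 10 messages from the shared queue, process each,
    and delete on success so no other instance reprocesses it."""
    resp = sqs.receive_message(
        QueueUrl=queue_url,
        MaxNumberOfMessages=10,
        WaitTimeSeconds=20,  # long polling
    )
    processed = 0
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])       # SNS envelope around the payload
        handle(json.loads(body["Message"]))  # original payload from application A
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
        processed += 1
    return processed
```

Note that this assumes a JSON payload; with raw message delivery enabled on the subscription, the SNS envelope is omitted and `msg["Body"]` is the payload itself.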

Related

How to configure AWS EventBridge to send events to a URL internally or privately (VPC)

I have deployed applications A and B on AWS ECS and used an Application Load Balancer to serve them over HTTPS. I have created an AWS EventBridge rule. In my scenario, application A sends an event with data to the rule, and I configured the rule's target as a URL using an API destination (the HTTPS URL of application B). However, I need to avoid making application B publicly accessible. How can I send the event to application B internally, either with an API destination target or via AWS API Gateway?
Is it possible to send the events to an application deployed on the same AWS account as internal communication, using the VPC?
According to this official blog post, you can try using a VPC link:
"the Application Load Balancer can be hidden from public access and connected to API Gateway through VPC Link."
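Following that approach, the private integration can be wired up through the API Gateway v2 API: create a VPC Link into the subnets that hold the internal ALB, then an HTTP_PROXY integration that points at the ALB's listener through that link. A hedged boto3-style sketch (all names, IDs, and ARNs are placeholders, and the client is passed in so the wiring can be checked without AWS):

```python
def wire_private_alb(apigw, api_id, subnet_ids, security_group_ids, listener_arn):
    """Connect an HTTP API to an internal ALB through a VPC Link,
    keeping the ALB hidden from public access."""
    link = apigw.create_vpc_link(
        Name="internal-alb-link",        # placeholder name
        SubnetIds=subnet_ids,
        SecurityGroupIds=security_group_ids,
    )
    integration = apigw.create_integration(
        ApiId=api_id,
        IntegrationType="HTTP_PROXY",
        IntegrationUri=listener_arn,     # the internal ALB's listener ARN
        IntegrationMethod="ANY",
        ConnectionType="VPC_LINK",
        ConnectionId=link["VpcLinkId"],
        PayloadFormatVersion="1.0",
    )
    return integration["IntegrationId"]
```

With this in place, EventBridge can target the public API Gateway endpoint while application B's ALB stays internal.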

Notify all EC2 instances running in ASG

I have a microservice application with multiple instances running in an ASG. Each instance maintains some internal state and exposes Actuator endpoints to refresh that state. I also have some applications running on-prem. The scenario: on some event, I want to call the Actuator endpoints of the applications running in AWS so they refresh their state. The problem is that if I call the load-balanced URL, the call goes to only one instance. So I'm considering the solutions below.
Use SQS and let the on-prem app publish while the AWS app consumes the message. But here too, only one instance will receive the message.
Use SNS, but the listeners are HTTP(S)-based, so the URL would remain the same and I think only one instance would receive the message. (AFAIK)
Any other solution? Please suggest.
Thanks
Use SNS but listeners are http/s based so URL would remain same so I think only one instance would receive the message. (AFAIK)
When using SNS, each server subscribes to the SNS topic individually, and when it subscribes it provides SNS with its own direct HTTP(S) URL (not the load balancer URL). When SNS receives a message, it sends it to every server that is currently subscribed. I'm not sure SNS will submit the request to the Actuator endpoint in the format your application expects, though.
There are likely several solutions you could consider, including some that won't require a code change, such as establishing a VPN connection between your on-premise applications and the VPC that contains your ASG, which would allow you to invoke each machine's refresh endpoint by its unique private IP address.
More simply, if you're using an AWS Classic ELB or an ALB, then repeated calls to the load balancer URL should hit each machine running your application, provided enough calls to the refresh endpoint are made.
That may not meet your use case, though, for example if you must strictly limit refresh calls to one per endpoint. You'd have to experiment with your software and the load balancer's round-robin behavior.
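The per-instance subscription approach has one wrinkle worth noting: before SNS will deliver notifications to an HTTP(S) endpoint, the endpoint must complete a SubscriptionConfirmation handshake by fetching the `SubscribeURL` that SNS POSTs to it. A small sketch of such an endpoint handler, with the HTTP fetcher and refresh hook injected so the logic is testable offline (the function names are hypothetical):

```python
import json

def handle_sns_post(headers, body, fetch_url, refresh_state):
    """Handle a POST from SNS arriving at this instance's direct
    (non-load-balanced) URL."""
    msg = json.loads(body)
    msg_type = headers.get("x-amz-sns-message-type")
    if msg_type == "SubscriptionConfirmation":
        fetch_url(msg["SubscribeURL"])  # confirm the subscription with SNS
        return "confirmed"
    if msg_type == "Notification":
        refresh_state(msg["Message"])   # e.g. invoke the local Actuator refresh
        return "refreshed"
    return "ignored"
```

Each instance would also need to subscribe itself on startup and unsubscribe on shutdown, so terminated ASG instances don't accumulate dead subscriptions.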

How can a Kubernetes load balancer consume a queue message (SQS or other) and pass it to a pod?

I am planning to use AWS SQS to receive messages from a server and then instantly have a Kubernetes load balancer consume them and pass each message to one of the pods.
My biggest concern is how the load balancer could be triggered by AWS SQS.
Is this possible to do?
If yes in what way?
To expand on #Marcin's comment:
There is no integration between the Elastic Load Balancer implementations and SQS, in either direction. Part of the reason is that both implement a pattern that requires a trigger from the outside before they do anything.
To consume a message from SQS, the consumer needs to actively poll SQS for work using the ReceiveMessage API call.
For the load balancer to serve traffic there needs to be a request from the outside it can respond to.
To get an integration between two such passive/reactive services, you need an active component in between. You could, for example, build a fleet of containers or a Lambda function. Lambda can be triggered by SQS (under the hood, Lambda polls SQS) and could subsequently send a request to your ALB, or whichever load balancer you choose.
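The Lambda-in-the-middle idea can be sketched as an SQS-triggered handler that relays each record to the load balancer, which then spreads the requests across the pods behind it. The ALB URL is a placeholder, and the HTTP sender is injected so the handler can be tested without AWS:

```python
def make_handler(post, alb_url):
    """Build a Lambda handler: for each SQS record in the trigger event,
    POST its body to the load balancer's URL."""
    def handler(event, context=None):
        forwarded = 0
        for record in event.get("Records", []):
            post(alb_url, record["body"])  # the LB picks a pod per request
            forwarded += 1
        return {"forwarded": forwarded}
    return handler
```

In a real deployment, `post` would be an HTTP call (e.g. via `urllib.request` or `requests`), and failed records should be left on the queue or reported via partial batch responses so SQS retries them.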

Web hook listener in AWS Lambda

I am writing a simple monitoring system for one of our existing production systems. The system being monitored is an SMPP gateway. The basic requirement is to send a message to the SMPP gateway at a given frequency and receive the message back via a webhook. This ensures the SMPP gateway is functioning as expected; otherwise email alarms are triggered.
This is the flow of my program:
Connect to SMPP gateway
Start a web hook listener on a new thread (server)
Send a test message
Listen for incoming web hooks and notify the parent thread via events
If message web hook was received, exit gracefully, else trigger email alarm.
I have implemented this system in AWS Lambda and assigned an Elastic IP by placing the Lambda function inside a VPC. I am able to send the message to the SMPP gateway, and the gateway attempts to respond via webhook. Unfortunately, the server can't reach the webhook listener at the specified Elastic IP. I searched around and found that one way to implement a webhook listener in AWS Lambda is with an API Gateway trigger. That is of no use here, because it would not guarantee that the same Lambda instance which sent the message via SMPP receives the webhook request.
So my question is: is it possible to run a webhook listener in AWS Lambda and receive requests via an attached Elastic IP?
No, it is not possible to run a web hook listener in AWS Lambda and receive requests via an attached elastic IP.
Lambda functions inside a VPC make outbound requests to the Internet using an Elastic IP attached to a NAT Gateway, via an ENI associated with the container host. Neither the ENI nor the EIP are exclusively bound to one single Lambda invocation. Lambda functions are technically allowed to listen for inbound connections... but they will never arrive via the ENI, and the NAT Gateway is also specifically designed not to allow connections initiated from outside to make their way back in. So there are at least two layers of the design that prevent what you are attempting from being done in this way.

Amazon AWS WebSocket Load Balancing Scale-In

We are in the process of developing a WebSocket application that will run on the same application servers that serve our APIs, which are all within a target group of a new Amazon Application Load Balancer.
I'm not certain that a sticky session would even be needed once the socket is upgraded, however, using listener forwarding rules should make easy work of that.
My concern comes from the scaling actions performed during auto scaling of the target groups, specifically the scale-in action. Scaling is currently based on RequestCountPerTarget, so when instances get terminated because that metric isn't above the threshold, there is no guarantee that the instance has no active WebSocket connections.
I'm assuming this means that when the instance is shut down and terminated, those socket connections would be abruptly interrupted.
What would be the best way to combat this?
Is there another metric that I could use to auto scale-out the group based on the number of active connections per target to better facilitate WebSocket scaling in addition to API requests?
I thought about creating an SNS topic and a LifecycleHook for instance termination on the Auto Scaling group, which I could handle in the WebSocketHandler to tell all sockets on that server that they need to disconnect and reconnect to another server behind the load balancer.
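That lifecycle-hook idea could be sketched as follows: the hook publishes a notification to the SNS topic, the server parses it, drains its sockets, and then completes the lifecycle action so the scale-in can proceed. The drain hook is an assumption, and the Auto Scaling client is injected so the logic can be tested offline:

```python
import json

def handle_lifecycle_message(sns_message, drain_sockets, autoscaling):
    """React to an ASG termination lifecycle hook delivered via SNS."""
    msg = json.loads(sns_message)
    if msg.get("LifecycleTransition") != "autoscaling:EC2_INSTANCE_TERMINATING":
        return False  # e.g. test notifications or launch hooks
    drain_sockets()  # ask connected clients to reconnect to another server
    autoscaling.complete_lifecycle_action(
        LifecycleHookName=msg["LifecycleHookName"],
        AutoScalingGroupName=msg["AutoScalingGroupName"],
        LifecycleActionToken=msg["LifecycleActionToken"],
        LifecycleActionResult="CONTINUE",  # let the termination proceed
    )
    return True
```

The hook's heartbeat timeout bounds how long the drain can take; until `complete_lifecycle_action` is called (or the timeout expires), the instance stays in the `Terminating:Wait` state.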