Web hook listener in AWS Lambda - amazon-web-services

I am writing a simple monitoring system for one of our existing production systems. The system being monitored is an SMPP gateway. The basic requirement is to send a message to the SMPP gateway at a given frequency and receive it back via a web hook, to ensure that the SMPP gateway is functioning as expected; otherwise, email alarms are triggered.
This is the flow of my program:
Connect to SMPP gateway
Start a web hook listener on a new thread (server)
Send a test message
Listen for incoming web hooks and notify the parent thread via events
If message web hook was received, exit gracefully, else trigger email alarm.
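A minimal Python sketch of this flow, assuming the SMPP send, webhook wait, and alarm steps are stand-in callables (the timeout value is a placeholder):

```python
import threading

def run_check(send_test_message, wait_for_webhook, trigger_alarm, timeout=30):
    """One monitoring cycle: returns True if the webhook arrived in time."""
    received = threading.Event()

    # Listener thread: wait_for_webhook should call received.set()
    # once the gateway's web hook request comes in.
    listener = threading.Thread(target=wait_for_webhook, args=(received,),
                                daemon=True)
    listener.start()

    send_test_message()  # e.g. submit_sm over the SMPP bind

    if received.wait(timeout):
        return True       # gateway round-trip succeeded; exit gracefully
    trigger_alarm()       # no webhook within the window; raise email alarm
    return False
```

The parent thread blocks on the `threading.Event` rather than polling, so the exit-or-alarm decision happens as soon as the webhook arrives or the timeout expires.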
I have implemented this system in AWS Lambda and assigned an Elastic IP by placing the Lambda function inside a VPC. I am able to send the message to the SMPP gateway, and the gateway attempts to respond via web hook. Unfortunately, the server can't reach the web hook listener via the specified Elastic IP. I searched around and found that one way to implement a web hook listener in AWS Lambda is to use an API Gateway trigger. This is of no use here because it does not guarantee that the same Lambda instance which sent the message via SMPP will receive the web hook request.
So my question is: is it possible to run a web hook listener in AWS Lambda and receive requests via an attached Elastic IP?

No, it is not possible to run a web hook listener in AWS Lambda and receive requests via an attached Elastic IP.
Lambda functions inside a VPC make outbound requests to the Internet using an Elastic IP attached to a NAT Gateway, via an ENI associated with the container host. Neither the ENI nor the EIP is exclusively bound to a single Lambda invocation. Lambda functions are technically allowed to listen for inbound connections... but those connections will never arrive via the ENI, and the NAT Gateway is also specifically designed not to allow connections initiated from the outside to make their way back in. So there are at least two layers of the design that prevent what you are attempting from being done this way.

Related

How to buffer/delay incoming HTTP requests until a down backend wakes up?

https://companyA.acme.org/custom/api/endpoint1 is hosted on a dedicated ec2 instance
https://companyB.acme.org/another/custom/apiendpoint is hosted on a dedicated ec2 instance
(both EC2 instances run the same core app; each customer can customize their catalog of API endpoints)
Most of the time those EC2 instances are idle, so we secretly want to stop them, but we don't want the customer to care about whether the instance is up or not.
We can accept a 2-second delay in response time when an instance needs to be woken up before answering the customer's API call.
My idea is to intercept all incoming HTTP requests and buffer them before routing and forwarding.
I need a delay to be able to check whether a backend matching the subdomain is up, and wake it up if it is down.
Does anyone know of an existing proxy / load-balancing solution able to buffer/queue HTTP requests, then allow some custom magic (in order to launch the right EC2 instance), then forward the request based on Origin/Referer?
(the answer to the last part probably being every existing proxy)
I was thinking about the following:
NGINX in front of everything (point all Route 53 subdomains to this NGINX)
Catch an AWS event when someone calls https://companyA.acme.com/custom/api/endpoint2
Trigger an AWS Lambda that starts the corresponding EC2 host
But I am not sure how NGINX will handle the request buffering/forwarding while I start the EC2 host.
Bonus question: how do we avoid wasting any time forwarding the request when the backend is already up?
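The wake-up Lambda in such a setup could be sketched roughly as below; the subdomain-to-instance mapping, instance IDs, and event shape are all hypothetical, and this only illustrates the control flow (a cold EC2 start will typically take well over 2 seconds, so the waiter is not a latency guarantee):

```python
# Hypothetical mapping of customer subdomains to their EC2 instance IDs.
SUBDOMAIN_TO_INSTANCE = {
    "companya.acme.org": "i-0123456789abcdef0",
    "companyb.acme.org": "i-0fedcba9876543210",
}

def instance_for_host(host):
    """Resolve a Host header (possibly carrying a port) to an instance ID."""
    return SUBDOMAIN_TO_INSTANCE.get(host.split(":")[0].lower())

def lambda_handler(event, context):
    import boto3  # AWS SDK, available in the Lambda runtime

    instance_id = instance_for_host(event["host"])
    if instance_id is None:
        return {"status": "unknown-host"}

    ec2 = boto3.client("ec2")
    ec2.start_instances(InstanceIds=[instance_id])
    # Block until the instance reports running, so the proxy's retry
    # toward the upstream has a live backend to hit.
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    return {"status": "started", "instance": instance_id}
```

On the NGINX side, the buffering window would come from its upstream retry behavior rather than from this function.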

How to configure AWS EventBridge to send events to a URL internally/privately (VPC)

I have deployed my applications A and B on AWS ECS and used an Application Load Balancer to serve them over HTTPS. I have created an AWS EventBridge rule. In my scenario, application A sends an event with data to the rule, and I configured the rule's target as a URL using an API destination (the HTTPS URL of application B). But I need to keep application B from becoming publicly accessible. How can I send the event to application B internally, using either an API destination target or AWS API Gateway?
Is it possible to send the events to an application deployed in the same AWS environment as internal communication by using a VPC?
According to this official blog, you can try using VPC link.
"the Application Load Balancer can be hidden from public access and connected to API Gateway through VPC Link."

Notify all EC2 instances running in an ASG

I have a microservice application with multiple instances running in an ASG. All these instances maintain some internal state. The application exposes Actuator endpoints to refresh its state. I also have some applications running on-prem. The scenario is: on some event, I want to call the Actuator endpoints of the applications running in AWS to refresh their state. The problem is, if I call the load-balanced URL, the call goes to only one instance. So I'm thinking of the solutions below.
Use SQS and let the on-prem app publish and the AWS app consume the message. But here also, only one instance will receive the message.
Use SNS, but the listeners are HTTP(S)-based, so the URL would remain the same, so I think only one instance would receive the message. (AFAIK)
Any other solution? Please suggest.
Thanks
Use SNS but listeners are http/s based so URL would remain same so I think only one instance would receive the message. (AFAIK)
When using SNS, each server would subscribe to the SNS topic, and when each server subscribes it would provide SNS with its direct HTTP(S) URL (not the load balancer URL). When SNS receives a message, it sends it to each server that is currently subscribed. I'm not sure SNS will submit the request to the actuator endpoint in the format your application needs, though.
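Each instance would also have to handle SNS's subscription-confirmation handshake and unwrap the notification envelope itself. A minimal sketch of that webhook-side classification (the field names follow SNS's documented JSON envelope; the returned actions are hypothetical labels):

```python
import json

def handle_sns_post(body):
    """Classify an incoming SNS POST body; returns (action, payload)."""
    msg = json.loads(body)
    msg_type = msg.get("Type")
    if msg_type == "SubscriptionConfirmation":
        # Fetch this URL (e.g. with urllib) to confirm the subscription.
        return ("confirm", msg["SubscribeURL"])
    if msg_type == "Notification":
        # The payload published to the topic; trigger the Actuator
        # refresh from here rather than letting SNS hit it directly.
        return ("refresh", msg["Message"])
    return ("ignore", None)
```

Having the handler trigger the refresh itself sidesteps the format mismatch: SNS posts its own envelope, and your code translates it into whatever call the Actuator endpoint expects.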
There are likely several solutions you could consider, including ones that won't require a code change, such as establishing a VPN connection between your on-premise applications and the VPC that contains your ASG, which would allow you to invoke each machine's refresh endpoint by its unique private IP address.
More simply, however, if you're using an AWS Classic ELB or ALB, then repeated calls to the load balancer URL should hit each machine running your application, provided enough calls to the refresh endpoint are made.
This may not meet your use case, though, say if you must strictly limit refresh calls to one per endpoint. You'd have to experiment with your software and the load balancer's round-robin behavior.
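Under the VPN approach, the on-prem side could enumerate the instances' private IPs (via the EC2 API or a maintained list) and call each refresh endpoint directly. A sketch, where the port and path are assumptions based on Spring Boot Actuator defaults:

```python
from urllib import request

def refresh_urls(private_ips, port=8080, path="/actuator/refresh"):
    """Build per-instance refresh URLs; port and path are assumptions."""
    return [f"http://{ip}:{port}{path}" for ip in private_ips]

def refresh_all(private_ips, timeout=5):
    """POST to every instance's refresh endpoint; collect per-URL results."""
    results = {}
    for url in refresh_urls(private_ips):
        try:
            with request.urlopen(request.Request(url, method="POST"),
                                 timeout=timeout) as resp:
                results[url] = resp.status
        except OSError as exc:
            # Unreachable instance: record the error instead of aborting.
            results[url] = str(exc)
    return results
```

Because each instance is addressed individually, every machine gets exactly one refresh call, which avoids relying on round-robin behavior.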

How can a Kubernetes load balancer consume a queue message (SQS or other) and pass it to a pod?

I am planning on using AWS SQS to receive messages from a server and then have a Kubernetes load balancer instantly consume them and pass each message to one of the pods.
My biggest concern is how the load balancer can be triggered by AWS SQS.
Is this possible to do?
If yes in what way?
To expand on @Marcin's comment:
There is no integration between the Elastic Load Balancer implementations and SQS in either direction. Part of the reason is that they both implement a pattern which requires a trigger from the outside for them to do anything.
To consume a message from SQS, the consumer needs to actively poll SQS for work using the ReceiveMessage API call.
For the load balancer to serve traffic there needs to be a request from the outside it can respond to.
To get an integration between two passive/reactive services, you need an active component in between. You could, for example, build a fleet of containers or a Lambda function. Lambda can be triggered via SQS (under the hood, Lambda polls SQS) and could subsequently send a request to your ALB or whichever load balancer you choose.
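A sketch of that active component as an SQS-triggered Lambda; the ALB endpoint is hypothetical, and the `Records` structure is the standard shape of the SQS event Lambda receives:

```python
from urllib import request

ALB_ENDPOINT = "http://internal-my-alb.example.com/ingest"  # hypothetical

def bodies_from_sqs_event(event):
    """Extract the message bodies from an SQS-triggered Lambda event."""
    return [record["body"] for record in event.get("Records", [])]

def lambda_handler(event, context):
    # Forward each queued message to the load balancer as an HTTP POST;
    # the load balancer then routes it to one of the pods as usual.
    for body in bodies_from_sqs_event(event):
        req = request.Request(ALB_ENDPOINT, data=body.encode("utf-8"),
                              method="POST")
        request.urlopen(req, timeout=5)
```

If a forwarded request fails, the unhandled exception makes Lambda return the batch to the queue for retry, which is the default SQS/Lambda error behavior.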

Kafka consumer setup behind a proxy on-prem; producer is in AWS

We have a Kafka setup in AWS and we publish messages from the producer there. We want to consume those messages through a Kafka consumer from on-prem boxes which have access to the Internet through a proxy. Is there any setting in the Kafka consumer so that we can supply the proxy details?
Note: we are able to connect and get messages from a local box (Kafka consumer) which has direct access to the Internet (without a proxy).
You need to set advertised.listeners to the external IP so that clients can correctly connect to it. Otherwise they'll try to connect to the internal IP (since advertised.listeners defaults to listeners unless explicitly set).
Ref: https://kafka.apache.org/documentation/#brokerconfigs
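Concretely, on the AWS broker this would look something like the following in server.properties (addresses are placeholders):

```
# Bind on all local interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# Address handed back to clients in metadata responses --
# must be reachable from the on-prem consumers
advertised.listeners=PLAINTEXT://broker-public.example.com:9092
```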