I need to send data from one ECS container to another. How can I do that? AWS EventBridge allows me to send data from an ECS container to EventBridge, but I could not figure out how to get this data from EventBridge to the other ECS container.
P.S. I have Node.js applications running in ECS containers. I am using an HTTP API Gateway and an Application Load Balancer (ALB).
Answers to questions asked in comments
What kind of data? Text Data
How big is one msg? Small. Just simple objects
Does it have to be real-time or not? No
I need to send data from one ECS container to another. How can I do that?
Usually, when you want your microservices to communicate with each other, SQS is the preferred choice. Using SQS allows you to fully decouple the producer and the consumer of the messages.
In your case, one container would publish messages to the queue, while the second container would poll for messages on a fixed schedule. For this to work, both containers need permissions in their task roles to access SQS, and they would use the AWS SDK to publish and receive the messages.
There are other choices as well, such as SNS and EventBridge, as you noted. However, due to its simplicity, SQS is often the first choice to consider.
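For illustration, here is a minimal sketch of both sides in TypeScript with the AWS SDK v3. The region, account ID, and queue URL are placeholders you would replace with your own; the same operations exist in every SDK language.

```typescript
// Minimal sketch, TypeScript + AWS SDK v3. Region, account ID, and queue
// URL are placeholders. The producer's task role needs sqs:SendMessage;
// the consumer's needs sqs:ReceiveMessage and sqs:DeleteMessage.
import {
  SQSClient,
  SendMessageCommand,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";

const sqs = new SQSClient({ region: "us-east-1" });
const queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue";

// Producer container: publish a small JSON object.
export async function publish(payload: object): Promise<void> {
  await sqs.send(
    new SendMessageCommand({
      QueueUrl: queueUrl,
      MessageBody: JSON.stringify(payload),
    })
  );
}

// Consumer container: long-poll, process, then delete each message.
export async function pollOnce(): Promise<void> {
  const { Messages } = await sqs.send(
    new ReceiveMessageCommand({ QueueUrl: queueUrl, WaitTimeSeconds: 20 })
  );
  for (const msg of Messages ?? []) {
    console.log("received:", msg.Body);
    await sqs.send(
      new DeleteMessageCommand({
        QueueUrl: queueUrl,
        ReceiptHandle: msg.ReceiptHandle!,
      })
    );
  }
}
```

Long polling (WaitTimeSeconds) keeps the consumer cheap: each call blocks for up to 20 seconds instead of hammering the queue with empty receives.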
Related
We have a system that uses Redis pub/sub to communicate between different parts of the system. To keep it simple, we used pub/sub channels to implement different things. On both ends (producer and consumer) we have servers containing code that we see no way to convert into Lambda functions.
We are migrating to AWS and, among other changes, we are trying to replace Redis with a managed pub/sub solution. The required solution is fairly simple: a managed broker that allows publishing a message from one node and subscribing to its reception from zero or more other nodes.
It seems impossible to achieve this with any of the available solutions:
Kinesis - It is a streaming solution for data ingestion (similar to Apache Pulsar)
SNS - From the documentation, it looks like exactly what we need, until we realized that there is no way to connect a server (as opposed to a Lambda) except via a custom HTTP endpoint.
EventBridge - Same issue as with SNS
SQS - It is a queue, not a pub/sub.
Amazon MQ / RabbitMQ - It is a queue, not a pub/sub, and it is not a SaaS solution but rather an installation on a node you own.
I see no reason to omit a feature such as subscribing from a server, which is why I was sure it would be present in one or more of the available solutions. But we went through the documentation and attempted to consume from SNS and EventBridge without success. Are we missing something? How can we achieve what we need?
Example
Assume we have an API server layer deployed on ECS with a load balancer in front. The API has two endpoints: a PUT to update a document, and an SSE endpoint to listen for updates on documents.
Assuming a simple round-robin load balancer, an update for document1 may occur on node1 while a client has an ongoing SSE request for the same document on node2. This can be done with a Redis backbone: node1 publishes on the document1 topic and node2 is subscribed to the same topic. This solution is fast and efficient (in this case at-most-once delivery is perfectly acceptable).
Since this is just an example, we will not consider the WebSocket pub/sub API or other ready-made solutions for this specific use case.
Lambda
The subscriber side cannot be a Lambda. Since two distinct contexts are involved (the SSE HTTP request and the SNS event), two distinct Lambdas would fire, with no way to 'stitch' them together.
SNS + SQS
We hesitate to use SQS in conjunction with SNS, as that solution would add a lot of unneeded complexity:
The number of nodes is not known in advance and they scale, requiring an automated system to increase/reduce the number of SQS queues.
Persistence is not required
Additional latency is introduced
Additional infrastructure cost
HTTP Endpoint
This is the closest thing to a programmatic subscription, but it suffers from issues similar to the SNS + SQS solution:
The number of nodes is unknown, requiring endpoint subscriptions to be added automatically.
Either we expose one endpoint for each node, or we need a particular configuration on the load balancer to route the message to the appropriate node.
Additional API endpoints must be exposed, maintained, and secured.
I've got a rather rare requirement: to deliver an SNS topic message to all microservice instances.
Basically, it's a kind of notification that related data has changed
and all microservice instances should reload their internals from the data source.
We are using Terraform to create our infrastructure, with the Kong API gateway.
Microservice instances can be created on the fly as system load increases,
so subscriptions to the topic cannot be created at the Terraform stage.
Each microservice is a standard Spring Boot app.
My first approach is:
the microservice exposes an HTTP endpoint that can be subscribed to the SNS topic
on startup, the microservice subscribes itself (the endpoint above) to the required SNS topic, and unsubscribes on shutdown.
My problem is determining the individual microservice instance URLs that can be used in the subscription process.
An alternative approach would be to use SQS: create an SQS queue per microservice instance (and subscribe it to the SNS topic).
Maybe I'm doing it wrong at a conceptual level?
Maybe a different architectural approach is required?
It might be easier for the microservices to "pull" the configuration updates by checking an object in Amazon S3 (or at least calling HeadObject to check whether the configuration has changed) rather than trying to "push" the configuration update to all servers.
Or, use AWS Systems Manager Parameter Store and have the servers cache the values for a period (eg 5 minutes) so they aren't always checking the configuration.
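As a rough sketch of the "pull" approach in TypeScript with the AWS SDK v3: the bucket and key are placeholders, and applyConfig() is a hypothetical stand-in for your own reload logic. Comparing ETags via HeadObject means the object is only downloaded when it actually changed.

```typescript
// Sketch: compare the object's ETag via HeadObject and only download the
// configuration when it actually changed. Bucket and key are placeholders;
// applyConfig() stands in for your own reload logic.
import {
  S3Client,
  HeadObjectCommand,
  GetObjectCommand,
} from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });
const location = { Bucket: "my-config-bucket", Key: "config.json" };
let lastETag: string | undefined;

function applyConfig(config: unknown): void {
  console.log("reloading with", config); // hypothetical reload logic
}

export async function reloadConfigIfChanged(): Promise<void> {
  const head = await s3.send(new HeadObjectCommand(location));
  if (head.ETag !== lastETag) {
    lastETag = head.ETag;
    const obj = await s3.send(new GetObjectCommand(location));
    applyConfig(JSON.parse(await obj.Body!.transformToString()));
  }
}
```

Call reloadConfigIfChanged() on a timer (eg every 5 minutes); a HeadObject call is far cheaper than downloading the object on every check.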
Kinda old now, but here is my solution:
create an SNS topic, subscribe an SQS queue to it, forward messages from the SQS queue to Redis pub/sub, and subscribe to the pub/sub channel.
Now all your instances will get the event.
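A minimal sketch of that relay in TypeScript, assuming the node-redis client; the queue URL, Redis URL, and channel name are placeholders. One worker drains the SNS-fed SQS queue and republishes each message on a Redis channel that every instance subscribes to.

```typescript
// Sketch of the relay: drain the SQS queue (subscribed to the SNS topic)
// and republish each message on a Redis channel that every instance
// subscribes to. Queue URL, Redis URL, and channel name are placeholders.
import {
  SQSClient,
  ReceiveMessageCommand,
  DeleteMessageCommand,
} from "@aws-sdk/client-sqs";
import { createClient } from "redis";

const sqs = new SQSClient({ region: "us-east-1" });
const redis = createClient({ url: "redis://localhost:6379" });
const queueUrl = "https://sqs.us-east-1.amazonaws.com/123456789012/fanout-queue";

async function relay(): Promise<void> {
  await redis.connect();
  for (;;) {
    const { Messages } = await sqs.send(
      new ReceiveMessageCommand({ QueueUrl: queueUrl, WaitTimeSeconds: 20 })
    );
    for (const msg of Messages ?? []) {
      await redis.publish("events", msg.Body ?? ""); // fan out to all subscribers
      await sqs.send(
        new DeleteMessageCommand({ QueueUrl: queueUrl, ReceiptHandle: msg.ReceiptHandle! })
      );
    }
  }
}

relay().catch(console.error);
```

Each instance then calls subscribe("events", handler) on its own Redis connection, so a single SQS consumer is enough to reach every node.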
I have a PHP web application running on an EC2 server. The app is integrated with another service, which involves subscribing to a number of webhooks.
The number of requests the server is receiving from these webhooks has become unmanageable, and I'm looking for a more efficient way to deal with data coming from these webhooks.
My initial thought was to use API Gateway to put these requests into an SQS queue and read from this queue in batches.
However, I would like these batches to be read by the EC2 instance, because the webhook-processing code is reused throughout my application.
Is this possible, or am I forced to use a Lambda function with SQS? Is there a better way?
The approach you suggested (API Gateway + SQS) will work just fine. There is no need to use AWS Lambda. You'll want to use the AWS SDK for PHP when writing the application code that receives messages from your SQS queue.
I've used this pattern before and it's a great solution.
. . . am I forced to use a Lambda function with SQS?
SQS plus Lambda is basically free. At this time, you get 1M (million) Lambda calls and 1M (million) SQS requests per month. Remember that each SQS request may contain up to 10 messages, so that's a potential 10M messages, all inside the free tier. Your EC2 instance is likely always on; your Lambda function is not. Even if you only use Lambda to push the SQS data to a data store such as an RDBMS for your EC2 instance to poll periodically, the operation would be bullet-proof and very inexpensive. With the introduction of SQS, you could also transition the common EC2 code to Lambda function(s), which now have a run time of up to 15 minutes.
To cite my sources:
SQS pricing for reference: https://aws.amazon.com/sqs/pricing/
Lambda pricing for reference: https://aws.amazon.com/lambda/pricing/
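For completeness, a sketch of what the SQS-triggered Lambda side could look like in TypeScript. saveToDatabase() is a hypothetical stand-in for writing to whatever store your EC2 code polls.

```typescript
// Sketch: SQS-triggered Lambda. Each invocation carries a batch of up to
// 10 records with the default event source mapping batch size.
// saveToDatabase() is a hypothetical stand-in for your persistence call.
import type { SQSEvent } from "aws-lambda";

async function saveToDatabase(payload: unknown): Promise<void> {
  console.log("persisting", payload); // e.g. an INSERT through your DB client
}

export const handler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    await saveToDatabase(JSON.parse(record.body));
  }
};
```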
I have a general AWS question. I have started using the AWS SDK, but it looks like Lambda functions are the only way to receive events asynchronously from AWS (e.g. CloudWatch Events). I want to write a simple application that registers a callback with AWS for events, but I couldn't find a way to do that so far; since I don't want to use Lambda, I have been polling from my application. Please let me know if polling is the only option, or if there is a better way to solve this without polling.
From the documentation:
You can configure the following AWS services as targets for CloudWatch Events:
Amazon EC2 instances
AWS Lambda functions
Streams in Amazon Kinesis Streams
Delivery streams in Amazon Kinesis Firehose
Amazon ECS tasks
SSM Run Command
SSM Automation
Step Functions state machines
Pipelines in AWS CodePipeline
Amazon Inspector assessment templates
Amazon SNS topics
Amazon SQS queues
Built-in targets
The default event bus of another AWS account
That's a lot more than just Lambda, so I'm not sure why you state in your question that Lambda is the only option. The options of Amazon EC2 instances and Amazon SNS topics both provide a method for Amazon to "push" the events to your services, instead of requiring your services to poll.
With CloudWatch Events you can set rules and trigger a number of different targets, including SQS queues, which you can poll from your EC2 instances.
Lambda is certainly a popular target, but based on the docs there are other targets you can send the events to.
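As a sketch of that wiring in TypeScript with the AWS SDK v3: the rule name, event pattern, and queue ARN are placeholders, and the queue additionally needs a resource policy allowing events.amazonaws.com to send to it.

```typescript
// Sketch: create a CloudWatch Events rule and point it at an SQS queue.
// Rule name, event pattern, and queue ARN are placeholders; the queue
// also needs a resource policy allowing events.amazonaws.com to send to it.
import {
  CloudWatchEventsClient,
  PutRuleCommand,
  PutTargetsCommand,
} from "@aws-sdk/client-cloudwatch-events";

const cwe = new CloudWatchEventsClient({ region: "us-east-1" });

export async function wireRuleToQueue(): Promise<void> {
  await cwe.send(
    new PutRuleCommand({
      Name: "my-rule",
      EventPattern: JSON.stringify({ source: ["aws.ec2"] }), // example pattern
    })
  );
  await cwe.send(
    new PutTargetsCommand({
      Rule: "my-rule",
      Targets: [
        { Id: "sqs-target", Arn: "arn:aws:sqs:us-east-1:123456789012:my-queue" },
      ],
    })
  );
}
```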
The answers above might also be helpful, but one more possible option to address your problem is the following.
You can make use of the AWS SNS service to subscribe to events on AWS resources. SNS can then publish the events to your application endpoint, which is essentially a pub/sub model.
Refer to this link: http://docs.aws.amazon.com/sns/latest/api/API_Subscribe.html
The endpoint can be your HTTP- or HTTPS-based application.
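A minimal sketch of that Subscribe call in TypeScript with the AWS SDK v3; the topic ARN and endpoint URL are placeholders. Note that SNS first POSTs a SubscriptionConfirmation message containing a SubscribeURL that your application must confirm before deliveries begin.

```typescript
// Sketch: subscribe an HTTPS endpoint to an SNS topic. Topic ARN and
// endpoint URL are placeholders. SNS first POSTs a SubscriptionConfirmation
// message with a SubscribeURL that your application must confirm before
// deliveries begin.
import { SNSClient, SubscribeCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({ region: "us-east-1" });

export async function subscribeEndpoint(): Promise<void> {
  await sns.send(
    new SubscribeCommand({
      TopicArn: "arn:aws:sns:us-east-1:123456789012:my-topic",
      Protocol: "https",
      Endpoint: "https://app.example.com/sns-events",
    })
  );
}
```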
I'd like to use AWS IoT to manage a grid of devices. Data from each device must be sent to a queue service (RabbitMQ) hosted on an EC2 instance, which is the starting point for a real-time control application. I read how to make a rule to write data to another service here.
However, there isn't an example for EC2. Using the AWS IoT service, how can I connect to a service on EC2?
Edit:
I have a real-time application developed with Storm that consumes data from RabbitMQ and puts the result of the computation in another RabbitMQ queue. RabbitMQ and Storm run on EC2. I have devices producing data and connected to IoT. The data produced by the devices must be redirected to the queue on EC2 that is the starting point of my application.
I'm sorry if I was not clear.
AWS IoT supports pushing data directly to other AWS services. As you have probably figured out by now, publishing to third-party APIs isn't directly supported.
From the choices AWS offers, Lambda, SQS, SNS, and Kinesis would probably work best for you.
With Lambda, you could directly forward the incoming message using one of RabbitMQ's client APIs.
With SQS, you would put the message into an AWS queue first and then poll this queue, transferring the messages to RabbitMQ.
Kinesis would allow more sophisticated processing, but is probably too complex.
I suggest you program a Lambda in the programming language of your choice, using one of the numerous RabbitMQ client libraries.
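A minimal sketch of such a Lambda in TypeScript, assuming the amqplib client; the broker URL and queue name are placeholders, and the Lambda needs network access to the EC2-hosted broker (typically by running in the same VPC).

```typescript
// Sketch: Lambda target of an AWS IoT rule that forwards each message to
// RabbitMQ with the amqplib client. The broker URL and queue name are
// placeholders; the Lambda must have network access to the EC2 broker
// (typically by running in the same VPC).
import amqp from "amqplib";

export const handler = async (event: unknown): Promise<void> => {
  const conn = await amqp.connect("amqp://user:pass@10.0.0.5:5672");
  try {
    const channel = await conn.createChannel();
    await channel.assertQueue("iot-ingest", { durable: true });
    channel.sendToQueue("iot-ingest", Buffer.from(JSON.stringify(event)));
    await channel.close();
  } finally {
    await conn.close();
  }
};
```

Opening a connection per invocation keeps the sketch simple; for higher throughput you would hoist the connection outside the handler so warm invocations reuse it.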