Is there a way to integrate on-premise IBM MQ with AWS SQS/API Gateway? I checked lots of links but only found that we can migrate a whole IBM MQ deployment to Amazon MQ, not call the on-premise MQ from AWS. Please suggest if anyone has tried this kind of integration.
I’m assuming you have an AWS based application that integrates with SQS and an on-premise application that integrates with IBM MQ, and ultimately you want to communicate effectively between the two applications.
At a functional level IBM MQ provides a client interface, and a bridge between this and the AWS SQS interface is relatively straightforward to create. The more important considerations are non-functional. The IBM MQ client can communicate either directly back to the on-premise MQ instance or via an MQ instance running in AWS. Although it may appear simpler to communicate directly with the on-premise MQ instance, there are a few considerations that can make an MQ instance in AWS the more sensible approach.
Applications often use IBM MQ for its assured delivery capabilities. By building a bridge to AWS SQS, which does not provide the same assured delivery, there is a risk that messages can be lost or duplicated (depending on the implementation of the bridging logic). To minimize the chance of this occurring, you want a reliable network between MQ, the bridge, and SQS. Running an MQ instance in AWS removes any fragile network links, as MQ can transfer messages reliably from on-premise to the MQ instance deployed in AWS, overcoming network issues transparently.
The MQ Client is relatively chatty compared to two MQ instances exchanging messages. Due to the network latency between the on-premise data center and AWS, the chatty nature of the MQ Client can impact the overall performance of the solution.
Therefore, it is often sensible to install a lightweight instance of MQ within your AWS availability zone and allow MQ to transfer the messages from on-premise to AWS efficiently and reliably. To help get you up and running quickly, you can grab the IBM MQ developer container for free on DockerHub here.
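To make the bridging idea concrete, here is a minimal sketch of such a bridge using the IBM MQ classes for JMS and the AWS SDK for Java v2. The host, queue manager, channel, queue name, and SQS queue URL are all placeholder assumptions; the main point is that the MQ message is only acknowledged after SQS has accepted it, which limits (but does not eliminate) the loss/duplication risk described above.

```java
// Minimal MQ-to-SQS bridge sketch. Queue manager "QM1", channel
// "DEV.APP.SVRCONN", queue "DEV.QUEUE.1", host, and the SQS queue URL
// are placeholders, not values from the original question.
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;
import javax.jms.*;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.SendMessageRequest;

public class MqToSqsBridge {
    public static void main(String[] args) throws Exception {
        // IBM MQ client connection back to the queue manager (on-premise or in AWS).
        MQQueueConnectionFactory cf = new MQQueueConnectionFactory();
        cf.setHostName("mq.example.internal");          // hypothetical host
        cf.setPort(1414);
        cf.setQueueManager("QM1");
        cf.setChannel("DEV.APP.SVRCONN");
        cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);

        SqsClient sqs = SqsClient.builder().region(Region.US_EAST_1).build();
        String sqsQueueUrl =
                "https://sqs.us-east-1.amazonaws.com/123456789012/bridge-queue"; // placeholder

        try (Connection conn = cf.createConnection("app", "password")) {
            // CLIENT_ACKNOWLEDGE: only acknowledge the MQ message after SQS accepts it,
            // so a failed forward leaves the message on the MQ queue rather than losing it.
            Session session = conn.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("DEV.QUEUE.1"));
            conn.start();

            while (true) {
                Message msg = consumer.receive(5000);
                if (msg instanceof TextMessage) {
                    String body = ((TextMessage) msg).getText();
                    sqs.sendMessage(SendMessageRequest.builder()
                            .queueUrl(sqsQueueUrl)
                            .messageBody(body)
                            .build());
                    msg.acknowledge();
                }
            }
        }
    }
}
```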
I created an SQS adapter on the on-premise server and called SQS directly from there.
I am building some form of a monitoring agent application that is running on AWS EC2 machines.
I need to be able to send commands to the agent running on a specific EC2 instance and only an agent running on that instance should pick it up and act on it. New EC2 instances can come and go at any point in time.
I can use Kinesis and push all commands for all instances there, and agents can pick up the ones targeted at them. The problem with this is that agents will have to receive a lot of commands that are not for them and filter them out.
I could also use one SQS queue per instance, but that would require creating/deleting a queue every time a new instance is provisioned.
I would like to hear if there are already proven solutions for a similar scenario.
There is already a fully functional feature provided by AWS. I would rather use that than reinvent the wheel, as it is a robust, well-integrated, and proven solution leveraged by thousands of AWS customers to gain operational insight into their instance fleets:
AWS Systems Manager Agent (SSM Agent) is a piece of software that can be installed and configured on an EC2 instance (and it’s pre-installed on many of the default AMIs, including both versions of Amazon Linux, Ubuntu, and various versions of Windows Server). SSM Agent makes it possible to update, manage, and configure these resources. The agent processes requests from the Systems Manager service in the AWS Cloud, and then runs them as specified in the request. SSM Agent then sends status and execution information back to the Systems Manager service by using the Amazon Message Delivery Service.
You can learn more about AWS Systems Manager and the breadth and depth of functionality it provides here.
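For the specific "send a command to one instance" requirement, the Run Command part of Systems Manager already covers it. A minimal sketch with the AWS SDK for Java v2 (the instance ID and shell command are placeholders) might look like this:

```java
// Hypothetical example: send a shell command to one specific instance via
// Systems Manager Run Command. Only the targeted instance's SSM Agent runs it.
import java.util.List;
import java.util.Map;
import software.amazon.awssdk.services.ssm.SsmClient;
import software.amazon.awssdk.services.ssm.model.SendCommandRequest;
import software.amazon.awssdk.services.ssm.model.SendCommandResponse;

public class SendCommandToInstance {
    public static void main(String[] args) {
        try (SsmClient ssm = SsmClient.create()) {
            SendCommandResponse response = ssm.sendCommand(SendCommandRequest.builder()
                    .instanceIds("i-0123456789abcdef0")                 // target only this instance
                    .documentName("AWS-RunShellScript")                 // built-in SSM document
                    .parameters(Map.of("commands", List.of("systemctl restart my-agent")))
                    .build());
            // The command ID can later be used with GetCommandInvocation to read the result.
            System.out.println("Command ID: " + response.command().commandId());
        }
    }
}
```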
Have you considered using Simple Notification Service (SNS)? Each new EC2 instance could subscribe to a topic using e.g. HTTP, and subscriptions from previous instances could be removed.
That way the topic would stay constant regardless of EC2 rotation.
It might be worth noting that SNS supports subscription filter policies, so it can decide which messages are delivered to which endpoint.
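As a rough sketch of how that filtering could look with the AWS SDK for Java v2: the agent's subscription gets a filter policy on a hypothetical instance_id message attribute, and the publisher tags each command with the target instance ID (all ARNs and IDs below are placeholders).

```java
// SNS message filtering sketch: only messages whose "instance_id" attribute
// matches the subscription's filter policy are delivered to that endpoint.
import java.util.Map;
import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.MessageAttributeValue;
import software.amazon.awssdk.services.sns.model.PublishRequest;
import software.amazon.awssdk.services.sns.model.SetSubscriptionAttributesRequest;

public class SnsInstanceFiltering {
    public static void main(String[] args) {
        try (SnsClient sns = SnsClient.create()) {
            // 1. On the subscription created by the agent: only accept its own instance ID.
            sns.setSubscriptionAttributes(SetSubscriptionAttributesRequest.builder()
                    .subscriptionArn("arn:aws:sns:us-east-1:123456789012:agent-commands:sub-id")
                    .attributeName("FilterPolicy")
                    .attributeValue("{\"instance_id\": [\"i-0123456789abcdef0\"]}")
                    .build());

            // 2. The publisher tags each command with the target instance ID.
            sns.publish(PublishRequest.builder()
                    .topicArn("arn:aws:sns:us-east-1:123456789012:agent-commands")
                    .message("{\"action\":\"restart\"}")
                    .messageAttributes(Map.of("instance_id", MessageAttributeValue.builder()
                            .dataType("String")
                            .stringValue("i-0123456789abcdef0")
                            .build()))
                    .build());
        }
    }
}
```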
From my observation, AWS SWF could also be an option here, since Amazon SWF coordinates work across distributed application components and provides SDKs for various platforms. Refer to the official FAQs for a more in-depth understanding: https://aws.amazon.com/swf/faqs/
It's not entirely clear what the volume of the monitoring system messages will be.
But the architecture requirements described sound to me as follows:
The agents on the EC2 instances are (constantly?) polling some centralized service, which is a poll-based architecture.
The messages being sent are for a specific predetermined EC2 instance, which is a push-based architecture.
To support both options without significant filtering of the messages, I suggest you try using an intermediate pub/sub system such as Kafka, which can be managed on AWS by MSK.
Then to differentiate between the instances, create a Kafka topic named by the EC2 instance ID.
This should give you a unique topic, and the instance will easily know to access messages for itself on the topic denoted by its own instance ID.
You can also send/push producer messages to a specific EC2 instance by producing to the topic in the cluster named after its EC2 instance ID, as sketched below.
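A minimal sketch of that pattern with the Apache Kafka Java client follows; the broker address is a placeholder, and the instance ID is read from the EC2 instance metadata endpoint (shown with IMDSv1 for brevity).

```java
// Topic-per-instance messaging sketch: the agent consumes only from the topic
// named after its own EC2 instance ID; the control plane produces to that topic.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class InstanceTopicAgent {
    static String instanceId() throws Exception {
        // IMDSv1 metadata lookup; production code would use IMDSv2 tokens.
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("http://169.254.169.254/latest/meta-data/instance-id")).build();
        return HttpClient.newHttpClient().send(req, HttpResponse.BodyHandlers.ofString()).body();
    }

    public static void main(String[] args) throws Exception {
        String topic = instanceId();                            // e.g. "i-0123456789abcdef0"

        Properties props = new Properties();
        props.put("bootstrap.servers", "msk-broker-1:9092");   // placeholder broker
        props.put("group.id", topic);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        // Agent side: consume only the commands addressed to this instance.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of(topic));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("Command for " + topic + ": " + record.value());
                }
            }
        }
    }

    // Control-plane side: push a command to one instance by producing to its topic.
    static void sendCommand(String targetInstanceId, String command) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "msk-broker-1:9092");   // placeholder broker
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>(targetInstanceId, command));
        }
    }
}
```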
Since there are many EC2 instances coming and going, you will end up with many topics. To handle the volume of topics, you can emit a CloudWatch event on each EC2 termination and check CloudWatch to see which EC2 instances were terminated, and consequently which topics need deleting.
Alternatively, you can trigger a Lambda directly on the EC2 termination event and have it log the termination by writing a file named after the instance ID to an S3 bucket, which you can watch with an additional Lambda that deletes old EC2 instance topics from the Kafka cluster when their instance IDs appear there.
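As a simplified illustration of the cleanup step, here is a hypothetical Lambda handler that reacts to the EC2 state-change event directly and deletes the terminated instance's topic with Kafka's AdminClient, skipping the intermediate S3 step (the broker address and the raw-map event handling are assumptions):

```java
// Sketch of a Lambda handler for the "EC2 Instance State-change Notification"
// event that deletes the terminated instance's Kafka topic via AdminClient.
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import org.apache.kafka.clients.admin.AdminClient;

public class DeleteInstanceTopicHandler implements RequestHandler<Map<String, Object>, String> {
    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        @SuppressWarnings("unchecked")
        Map<String, Object> detail = (Map<String, Object>) event.get("detail");
        String state = (String) detail.get("state");
        String instanceId = (String) detail.get("instance-id");

        if ("terminated".equals(state)) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "msk-broker-1:9092");  // placeholder broker
            try (AdminClient admin = AdminClient.create(props)) {
                // Drop the topic that was dedicated to the terminated instance.
                try {
                    admin.deleteTopics(Collections.singletonList(instanceId)).all().get();
                } catch (Exception e) {
                    throw new RuntimeException("Topic deletion failed for " + instanceId, e);
                }
            }
        }
        return "ok";
    }
}
```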
I'm executing a Flink job with these tools.
I think both can do exactly the same thing with the proper configuration. Does Kinesis Data Analytics do something that EMR cannot do, or vice versa?
Amazon Kinesis Data Analytics is the easiest way to analyze streaming data, gain actionable insights, and respond to your business and customer needs in real time.
Amazon Elastic MapReduce (EMR) provides a managed Hadoop framework that makes it easy, fast, and cost-effective to process vast amounts of data across dynamically scalable Amazon EC2 instances. You can also run other popular distributed frameworks such as Apache Spark, HBase, Presto, and Flink in EMR.
The major difference is maintainability and management from your side.
If you want more independent management and more control, then I would say go for AWS EMR, where it is your responsibility to manage the EMR infrastructure as well as the Apache Flink cluster running on it.
But if you want less control and more focus on application development, and you need to deliver faster (tight deadline), then KDA is the way to go. Here AWS provides all the bells and whistles you need to run your application. It also sets up easily with Amazon S3 as the code source and provides bare-minimum configuration management through the UI.
It scales automatically as well (you need to understand KPUs, though).
It provides the same Flink dashboard where you can monitor your application, plus AWS CloudWatch integration for debugging it.
Please go through this nice presentation and let me know if it helps:
https://www.youtube.com/watch?v=c_LswkrwOvk
I will say one major difference between the two is that Kinesis does not provide a hosted Hadoop service, unlike Elastic MapReduce (now EMR).
I had this same question as well. This video was helpful in explaining a real architecture scenario, and the AWS explanation here tries to show how Kinesis and EMR can fit together, with possible use cases.
I've been doing some server architecture design over the past few weeks and have run into an issue that I need outside help with. I'm creating a game server for a massively multiplayer game, so I need to receive constant updates on entity locations, then broadcast them out to relevant clients.
I've written servers with scale in mind before, but they were stateless servers, so it wasn't all that difficult. If I'm deploying this server on a cloud platform like Google Cloud or AWS, is it better to simply scale the instance that the server is running on, or should I opt for the reverse-proxy approach and deploy the server across multiple instances?
Sorry if this is a vague question. I can provide more details if necessary.
You may want to start here -
https://aws.amazon.com/gaming/
https://aws.amazon.com/gaming/game-server/
You should also consider messaging solutions such as SNS and SQS. If the app can receive push notifications, then SNS might be your best option.
Does Amazon AWS have a decent service for Internet-of-Things-type applications (e.g. Nest thermostat, Wi-Fi-controlled appliances)? We would like to connect up to 2 million devices through the cloud. I can see how you might be able to do this with Amazon SQS and Elastic Beanstalk, but I was hoping there might be a better way that is less custom. For example, is there a good rules engine for SQS messaging?
I know that the NEST thermostat has solved a similar problem.
Thanks,
Mike
Try Temboo. Temboo offers a platform containing methods for 100+ APIs, databases, and code utilities. The Library and the Temboo platform are accessible via the Temboo SDK in seven programming languages, the Temboo REST API, and the Arduino Yún.
I am exploring AWS, and I'd like to implement in Java EE an EC2 app like the Online Photo Processing Service example in Getting Started with Amazon EC2 and Amazon SQS (PDF). It has a web-based client that submits jobs asynchronously to a client-facing web server app that then queues jobs for one or more worker servers to pick up, run, then post back to a results queue. The web server app monitors the results queue and pushes them back to the client. The block diagram is here.
How would you implement an app like this using Java EE, i.e., what technologies would you use for the servers in the diagram? We're using AWS because our research algorithms will require some heavy computation, so we want it to scale. I am comfortable with AWS basics (e.g., most things you can do in their management console - launch instances, etc), I know Java, I understand the Java AWS APIs, but I have little experience on the server side.
There are many possibilities to solve your problem; go with the simplest one for you. Myself, I would build a simple Java EE 6 web application (based on Weld) with the Amazon SQS dependency; this web application would send messages to SQS. Another instance (possibly based on stateless EJBs), again with the Amazon SQS dependency, would read the incoming messages and process them. You can also expose stateless EJBs as web services to process data synchronously, set the EJB pool size for each server instance depending on the processing load you need, etc.
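As a rough sketch of those two halves with the AWS SDK for Java v2 (queue URLs and the job format are placeholders): a servlet on the web-facing server enqueues jobs, and a worker instance long-polls the job queue, processes each message, and posts the result to a results queue.

```java
// Web-facing half: accept a job over HTTP and enqueue it to SQS asynchronously.
import java.io.IOException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import software.amazon.awssdk.services.sqs.SqsClient;
import software.amazon.awssdk.services.sqs.model.*;

@WebServlet("/submitJob")
public class SubmitJobServlet extends HttpServlet {
    private static final String JOB_QUEUE =
            "https://sqs.us-east-1.amazonaws.com/123456789012/job-queue";      // placeholder
    private final SqsClient sqs = SqsClient.create();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String jobSpec = req.getParameter("job");
        sqs.sendMessage(SendMessageRequest.builder()
                .queueUrl(JOB_QUEUE)
                .messageBody(jobSpec)
                .build());
        resp.setStatus(HttpServletResponse.SC_ACCEPTED);   // client checks results later
    }
}

// Worker half: long-poll the job queue, process, publish to the results queue.
class Worker {
    private static final String JOB_QUEUE =
            "https://sqs.us-east-1.amazonaws.com/123456789012/job-queue";      // placeholder
    private static final String RESULT_QUEUE =
            "https://sqs.us-east-1.amazonaws.com/123456789012/result-queue";   // placeholder

    public static void main(String[] args) {
        SqsClient sqs = SqsClient.create();
        while (true) {
            ReceiveMessageResponse response = sqs.receiveMessage(ReceiveMessageRequest.builder()
                    .queueUrl(JOB_QUEUE)
                    .waitTimeSeconds(20)                    // long polling
                    .maxNumberOfMessages(1)
                    .build());
            for (Message message : response.messages()) {
                String result = process(message.body());
                sqs.sendMessage(SendMessageRequest.builder()
                        .queueUrl(RESULT_QUEUE).messageBody(result).build());
                // Delete only after the result has been posted successfully.
                sqs.deleteMessage(DeleteMessageRequest.builder()
                        .queueUrl(JOB_QUEUE).receiptHandle(message.receiptHandle()).build());
            }
        }
    }

    private static String process(String jobSpec) {
        return "processed: " + jobSpec;                     // stand-in for the heavy computation
    }
}
```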
Most of the functionality in J2EE is way over the top for the majority of tasks. Start by trying to implement this with basic servlets. Keep the code in them as stateless as possible to help with scaling. Only when servlets have some architectural flaw that prevents you from completing the task would I move on to something more complex.