How to design a server monitoring system running on AWS

I am building a monitoring agent application that runs on AWS EC2 machines.
I need to be able to send commands to the agent running on a specific EC2 instance and only an agent running on that instance should pick it up and act on it. New EC2 instances can come and go at any point in time.
I could use Kinesis and push all commands for all instances there, and agents could pick up the ones targeted at them. The problem with this is that agents would have to receive a lot of commands that are not for them and filter them out.
I could also use one SQS queue per instance, but that would require creating/deleting a queue every time a new instance is provisioned.
I would like to hear whether there are already proven solutions for a similar scenario.

There is already a fully functional feature provided by AWS. I would rather use that than reinvent the wheel, as it is a robust, well-integrated, and proven solution that's being leveraged by thousands of AWS customers to gain operational insights into their instance fleets:
AWS Systems Manager Agent (SSM Agent) is a piece of software that can be installed and configured on an EC2 instance (and it’s pre-installed on many of the default AMIs, including both versions of Amazon Linux, Ubuntu, and various versions of Windows Server). SSM Agent makes it possible to update, manage, and configure these resources. The agent processes requests from the Systems Manager service in the AWS Cloud, and then runs them as specified in the request. SSM Agent then sends status and execution information back to the Systems Manager service by using the Amazon Message Delivery Service.
You can learn more about AWS Systems Manager and the breadth and depth of functionality it provides here.
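As a rough illustration of the Run Command part, a boto3 sketch for sending a command to one specific instance and reading back the result might look like this (the instance ID, region, and commands are placeholders):

```python
import boto3

# Hypothetical sketch: send a shell command to one specific EC2 instance via
# Systems Manager Run Command. Instance ID, region, and commands are placeholders.
ssm = boto3.client("ssm", region_name="us-east-1")

response = ssm.send_command(
    InstanceIds=["i-0123456789abcdef0"],           # only this instance receives the command
    DocumentName="AWS-RunShellScript",             # built-in SSM document for shell commands
    Parameters={"commands": ["uptime", "df -h"]},
)
command_id = response["Command"]["CommandId"]

# Later, poll for the result of that command on that instance.
result = ssm.get_command_invocation(
    CommandId=command_id,
    InstanceId="i-0123456789abcdef0",
)
print(result["Status"], result["StandardOutputContent"])
```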

Have you considered using Simple Notification Service? Each new EC2 instance could subscribe to a topic (e.g. over HTTP), and stale subscribers could be removed as instances go away.
That way the topic would stay constant regardless of EC2 rotation.
It might be worth noting that SNS supports subscription filters, so it can decide which messages to deliver to which endpoint.
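A rough boto3 sketch of that idea (the topic ARN, endpoint URL, and the instance_id message attribute are assumptions for illustration, not anything from the question):

```python
import json
import boto3

# Hypothetical sketch: each instance subscribes its own HTTP endpoint to a shared
# SNS topic and sets a filter policy so it only receives messages addressed to it.
sns = boto3.client("sns", region_name="us-east-1")

instance_id = "i-0123456789abcdef0"  # placeholder

sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:agent-commands",   # placeholder topic
    Protocol="http",
    Endpoint=f"http://{instance_id}.example.internal:8080/commands",  # placeholder endpoint
    Attributes={
        # Only deliver messages whose "instance_id" attribute matches this instance.
        "FilterPolicy": json.dumps({"instance_id": [instance_id]})
    },
)

# The publisher then tags each command with the target instance:
sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:agent-commands",
    Message="restart-service",
    MessageAttributes={
        "instance_id": {"DataType": "String", "StringValue": instance_id}
    },
)
```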

In my view, AWS SWF could be an option here, since Amazon SWF is designed to coordinate work across distributed application components and it provides SDKs for various platforms. Refer to the official FAQs for a more in-depth understanding: https://aws.amazon.com/swf/faqs/

It is not entirely clear what the volume of the monitoring system's messages will be.
But the architecture requirements described sound to me as follows:
The agents on the EC2 instances are (constantly?) polling some centralized service, which is a poll-based architecture.
The messages being sent are addressed to a specific, predetermined EC2 instance, which is a push-based architecture.
To support both options without significant filtering of the messages, I suggest you try using an intermediate pub/sub system such as Kafka, which on AWS can be managed with MSK.
Then to differentiate between the instances, create a Kafka topic named by the EC2 instance ID.
This gives you a unique topic per instance, and each instance knows to read messages addressed to it from the topic named after its own instance ID.
You can also send/push producer messages to a specific EC2 instance by sending them to the topic in the cluster named after its EC2 instance ID.
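A minimal sketch of that idea using kafka-python (the broker address is a placeholder, the instance ID is read from the instance metadata endpoint, and IMDSv2 token handling is omitted for brevity):

```python
import urllib.request

from kafka import KafkaConsumer, KafkaProducer  # kafka-python

BROKERS = ["my-msk-broker:9092"]  # assumed MSK bootstrap broker address


def send_command(target_instance_id: str, command: str) -> None:
    """Control-plane side: push a command onto the topic named after the target instance."""
    producer = KafkaProducer(bootstrap_servers=BROKERS)
    producer.send(target_instance_id, command.encode())   # topic == target instance ID
    producer.flush()


def run_agent() -> None:
    """Agent side: consume only the topic named after this instance's own ID."""
    # Read this instance's ID from the EC2 metadata service (IMDSv2 token omitted).
    instance_id = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
    ).read().decode()

    consumer = KafkaConsumer(
        instance_id,                      # topic name == this EC2 instance ID
        bootstrap_servers=BROKERS,
        group_id=f"agent-{instance_id}",
        auto_offset_reset="latest",
    )
    for message in consumer:
        command = message.value.decode()
        print(f"received command for {instance_id}: {command}")
        # ... act on the command here ...
```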
Since there are many EC2 instances coming and going, you will end up with many topics. To handle the volume of topics, you can watch EC2 termination events in CloudWatch Events to see which EC2 instances were terminated and consequently which topics need deleting.
Alternatively, you can trigger a Lambda directly on the EC2 termination event and log it by writing a file named after the instance ID to an S3 bucket, which you can watch using an additional Lambda that deletes old EC2 instance topics from the Kafka cluster when their instance IDs appear there.
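A hypothetical sketch of such a cleanup Lambda, assuming it is triggered by a CloudWatch Events / EventBridge rule for EC2 instance state-change notifications and that the function can reach the MSK brokers (the broker address is a placeholder):

```python
from kafka.admin import KafkaAdminClient  # kafka-python

BROKERS = ["my-msk-broker:9092"]  # assumed MSK bootstrap broker address


def handler(event, context):
    """Handle an "EC2 Instance State-change Notification" event."""
    instance_id = event["detail"]["instance-id"]
    state = event["detail"]["state"]

    if state != "terminated":
        return

    # Delete the per-instance topic once the instance is gone.
    admin = KafkaAdminClient(bootstrap_servers=BROKERS)
    admin.delete_topics([instance_id])
    admin.close()
```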

Related

AWS Systems Manager Event for Hybrid Activation

I am using AWS Systems Manager to manage some on-prem [Hybrid Activation] servers. When a server is registered with SSM and joins the fleet, I want some actions to be taken (tags added, SSM docs run, etc.). But I cannot find any documentation on any events generated by SSM when this occurs. I am unsure whether "Association State Change" fits the bill or not.
Currently AWS Systems Manager does not generate any events for this occurrence. Hopefully they will add it in the future.

Send AWS EC2 metrics to AWS Elasticsearch Service Domain for monitoring in Kibana

I am stuck on one point. I have created an EC2 Linux-based instance in AWS.
Now I want to send the EC2 metrics data to the managed Elasticsearch domain for monitoring purposes in Kibana. I went through the CloudWatch console and checked that the instance's metrics are present, but I couldn't figure out how to connect them to the Elasticsearch domain that I have created.
Can anyone please help me with this situation?
There is no built-in mechanism for extraction/streaming of metric data points in real time. You have to develop a custom solution for that, for example a Lambda function which is invoked every minute and which reads data points using get_metric_data. The Lambda would then inject the points into your ES domain.
To invoke a Lambda function periodically, e.g. every minute, you would have to set up a CloudWatch Events rule with a schedule expression. The Lambda function would also need permissions to interact with CloudWatch metrics.
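A rough sketch of such a Lambda using boto3 get_metric_data and posting documents into the domain over HTTP (the domain endpoint, index name, and instance ID are placeholders, and authentication/request signing against the ES domain is omitted for brevity):

```python
import datetime
import json
import urllib.request

import boto3

cloudwatch = boto3.client("cloudwatch")
ES_ENDPOINT = "https://my-es-domain.us-east-1.es.amazonaws.com"  # placeholder domain


def handler(event, context):
    # Read the last minute of CPUUtilization for one instance (placeholder ID).
    now = datetime.datetime.utcnow()
    result = cloudwatch.get_metric_data(
        MetricDataQueries=[{
            "Id": "cpu",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/EC2",
                    "MetricName": "CPUUtilization",
                    "Dimensions": [{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
                },
                "Period": 60,
                "Stat": "Average",
            },
        }],
        StartTime=now - datetime.timedelta(minutes=1),
        EndTime=now,
    )

    # Index each data point as a document in the (placeholder) ec2-metrics index.
    data = result["MetricDataResults"][0]
    for timestamp, value in zip(data["Timestamps"], data["Values"]):
        doc = {
            "instance_id": "i-0123456789abcdef0",
            "metric": "CPUUtilization",
            "timestamp": timestamp.isoformat(),
            "value": value,
        }
        req = urllib.request.Request(
            f"{ES_ENDPOINT}/ec2-metrics/_doc",
            data=json.dumps(doc).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        urllib.request.urlopen(req)
```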
Welcome to SO :)
An alternative to the solution suggested by Marcin is to install metricbeat on the EC2 Instance and configure the metricbeat config file to send metrics to your Managed AWS ES Domain.
This is pretty simple and you should be able to do this fairly quickly.

What is the difference between aws system manager and aws cloudwatch?

I am kind of confused about the difference between AWS Systems Manager and AWS CloudWatch.
Could someone help me understand the difference?
Thank you very much.
They have different purposes.
AWS Systems Manager, at the core of its functionality, allows you to manage a fleet of instances as well as on-premises servers. Using the manager you can update hundreds of instances with just a single command, execute custom scripts on all of them, monitor their patch compliance (i.e. whether all your instances of interest have the latest updates) and so on.
AWS CloudWatch is primarily used as a central location for storing a variety of logs, from your applications (e.g. Lambda execution logs), AWS services and so on. It also allows you to monitor performance metrics of your instances (e.g. CPU utilization) as well as other resources. Other functionality includes responding to live events from resources (e.g. executing a Lambda whenever an instance is terminated).
In short, AWS Systems Manager is a centralized tool to automate management of AWS resources.
Whereas AWS CloudWatch is a centralized tool for monitoring AWS resource logs and metrics.
These short video resources might help -
AWS System Manager -
https://www.youtube.com/watch?v=MK4ZoCs-muo&ab_channel=AmazonWebServices
AWS Cloudwatch -
https://www.youtube.com/watch?v=a4dhoTQCyRA&ab_channel=AmazonWebServices

AWS System Design on Preconfigured EC2s

I have an AWS workflow which is as follows:
API call -> Lambda Function (Paramiko Remote Connect) -> EC2 -> output
Basically, I have an API call, which triggers a lambda function. Within the lambda function, I remote connect to a preconfigured EC2 instance using Python Paramiko, run some commands on the ec2 instance, and then return the output. I have two main concerns with this design: 1.) latency and 2.) scalability.
For Latency:
When I call the API, it takes 8-9 seconds to run, but if I were to run the job directly on the EC2 instance, it would take 1-2 seconds. Do the ssh_client.connect() and ssh_client.exec_command() calls cause significantly increased runtime? Also, I am implementing this on a t2.micro Ubuntu 18.04 free-tier EC2 instance. Would using a paid instance type make a difference in runtime?
For Scalability:
I am sure AWS has a solution for this, but suppose that there are several simultaneous API calls. I am sure that I can't have only 1 available EC2 instance to run the job. Should I have multiple EC2 instances preconfigured and use a load-balancer? What AWS features can I use to scale this system?
If anything is unclear, please ask and I will elaborate.
Rather than using Paramiko, the more "cloud-friendly" method of running commands on an EC2 instance would be to use AWS Systems Manager Run Command, which uses an agent to run commands on the instance. It can even run commands on multiple instances and also on-premises computers that have the agent installed.
Another design choice is to push a "job" message to an Amazon SQS queue. The worker instances can poll the SQS queue asking for work. When they receive a message, they can perform the work. This is more of an asynchronous model because the main system does not 'wait' for the job to finish, so it needs a return path to provide the results (e.g. another SQS queue). However, it is highly scalable and more resilient, with no load balancer required. This is a common design pattern.
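A minimal sketch of that worker loop using boto3 (the queue URLs and the job logic are placeholders):

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
JOBS_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/jobs"       # placeholder
RESULTS_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/results"  # placeholder


def run_job(body: str) -> str:
    # Placeholder for the actual work the preconfigured instance performs.
    return f"processed: {body}"


while True:
    # Long polling (WaitTimeSeconds) keeps the number of empty receives low.
    resp = sqs.receive_message(
        QueueUrl=JOBS_QUEUE_URL,
        MaxNumberOfMessages=1,
        WaitTimeSeconds=20,
    )
    for msg in resp.get("Messages", []):
        result = run_job(msg["Body"])

        # Send the result back on the return path, then delete the job message.
        sqs.send_message(QueueUrl=RESULTS_QUEUE_URL, MessageBody=result)
        sqs.delete_message(QueueUrl=JOBS_QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```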

If you are using AWS to autoscale spot instances of your application, how do you handle logging?

We are looking into adding autoscaling for a portion of our application using AWS Simple Queue Service, which would launch EC2 on-demand or spot instances based on queue backlog.
One question I had is: how do you deal with collecting logs from autoscaled instances? New instances are spun up based on an image, but then they are shut down when complete. Currently, if there is an issue with one of our services which causes it to crash, we have a system to automatically restart the service, but the logs and core dump files are there to review. If we switch to an autoscaling system, where new instances are spun up, how do you get logs and core dump files when there is a failure? Particularly if the instance is spun down.
Good practice is to ship these logs and aggregate them somewhere else, and there are many services such as DataDog and Rapid7 which will do this for you at a cost.
AWS, however, provides CloudWatch Logs, which gives you a central place to store and view logs. It also allows you to give users access to logs in the AWS console without them having to SSH onto a server.
Shipping your logs to CloudWatch Logs requires installing the CloudWatch agent on your server and specifying in its config which logs to ship.
You could install the CloudWatch agent once and create an AMI of that server to use in your autoscaling group, or install and configure the CloudWatch agent in user data every time a server is spun up.
All the information you need to get started can be found here:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/Install-CloudWatch-Agent.html