Monitoring Memory Usage for multiple EC2 instances - amazon-web-services

I am able to monitor a Windows instance's memory usage using custom metrics in CloudWatch.
I followed this blog post to achieve that:
http://blog.krishnachaitanya.ch/2016/03/monitor-ec2-memory-usage-using-aws.html
Using that, I am able to monitor only one instance, and I currently repeat the process on every instance I launch.
Can I do it once for all instances, instead of changing the .json file and enabling the CloudWatch integration on every instance?

If the instances are already launched, you have to do it for each instance. Otherwise, you can create an AMI from the first configured instance, launch the other instances from that AMI, and skip the per-instance setup.
If you have to do it manually, consider a tool like Ansible to do it for you. There is a bit of a learning curve, but it is not difficult.
BTW, adding custom metrics is straightforward for Linux instances: Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances
For Windows instances: Sending Performance Counters to CloudWatch and Logs to CloudWatch Logs Using Amazon EC2 Simple Systems Manager

If your instances have the appropriate instance profile and are running the SSM agent (which they probably are if you launched them from an Amazon-provided AMI), you can use SSM Run Command to run arbitrary PowerShell against one instance or a set of instances (selected by tags). There is even an Amazon-managed SSM document called AWS-ConfigureCloudWatch that is built specifically for this use case.
See http://docs.aws.amazon.com/systems-manager/latest/userguide/run-command.html
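For example, a minimal sketch in Python/boto3 (the Monitor=true tag is a placeholder, the properties payload stands in for the JSON config from the blog post, and the instances need the SSM agent plus an instance profile that allows SSM):

    import json
    import boto3

    ssm = boto3.client("ssm")

    # Placeholder for the CloudWatch config JSON from the blog post.
    cw_config = {"EngineConfiguration": {"PollInterval": "00:00:15",
                                         "Components": [], "Flows": {"Flows": []}}}

    # Push the same config to every instance tagged Monitor=true (assumed tag).
    response = ssm.send_command(
        Targets=[{"Key": "tag:Monitor", "Values": ["true"]}],
        DocumentName="AWS-ConfigureCloudWatch",
        Parameters={"status": ["Enabled"], "properties": [json.dumps(cw_config)]},
    )
    print(response["Command"]["CommandId"])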

Related

Accessing files in EC2 from Lambda

I have a few EC2 servers in AWS. Whenever disk space exceeds a limit, I want to automatically delete some files (maybe a logs folder) on the EC2 instance. I am planning to use Lambda and CloudWatch for this. Can I use Lambda to interact with EC2? If that is not possible, what is an alternative approach to achieve this functionality?
This is not an appropriate use-case for an AWS Lambda function.
AWS Lambda is suitable for tasks where compute is required in response to an event. Your use-case, however, is to manipulate information on an EC2 instance, which does not need cloud compute.
You could run a script on each computer, triggered by a Scheduled Task.
Alternatively, you could use the Systems Manager Run Command (also known as the EC2 Run Command), which allows you to run commands on multiple Amazon EC2 instances and view the results. This could be used to trigger a local script, or it could pass the whole command to run (including the script). It is purpose-built for the type of task you describe.
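As an illustration, here is a hedged sketch of a Lambda handler that uses Run Command to delete week-old files from a hypothetical C:\app\logs folder on all instances tagged Cleanup=true:

    import boto3

    ssm = boto3.client("ssm")

    def handler(event, context):
        # AWS-RunPowerShellScript is an Amazon-managed document; the tag and
        # the log path below are assumptions for this sketch.
        ssm.send_command(
            Targets=[{"Key": "tag:Cleanup", "Values": ["true"]}],
            DocumentName="AWS-RunPowerShellScript",
            Parameters={"commands": [
                r"Get-ChildItem 'C:\app\logs' -Recurse"
                r" | Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-7) }"
                r" | Remove-Item -Force"
            ]},
        )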
AWS Lambda has access to your instances if they are reachable over the internet. If they are not, you can give your Lambda function access by running it inside your VPC, using a NAT instance or NAT gateway.
The problem is: access to your instance does not mean access to the instance's filesystem. To delete the files from Lambda, you have two alternatives:
1) Configure a network filesystem service on your instances and connect to this service from your Lambda function. On Windows you would just "share" your disks, but you would then need an SMB library in your Lambda code, since Lambda (as far as I know) has no native SMB support. Just keep in mind that your security guy will scream out loud when you propose this alternative.
2) Create an "agent" on your EC2 instances, keep it running as a Windows service, and call this agent from your Lambda function. In that case, Lambda starts the execution of the agent, and the agent is responsible for the file deletion.
Another option is to follow Ramesh's suggestion: create a PowerShell script and configure a scheduled job. To make this easier, you can create an image (AMI) containing the PowerShell script and use that image to initialize each instance. The same idea applies to the "agent" in the Lambda alternatives above.
I think that, in any case, you will need to change something on your 150 servers. Using a customized image can simplify this a little, but you will not get a solution without some changes.
According to the following thread, you cannot access files inside an EC2 VM unless you expose them through some other mechanism.
AWS Forum
Quoting from the forum
If you are talking about the underlying EC2 instance, answer is No, you cannot access those files.
However, as a solution to your problem, you can use a scheduled job to clean up your files depending on your usage. You can use a service or a cron job.
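A minimal cleanup sketch in Python, suitable for cron or Task Scheduler (the log folder and the 80% threshold are assumptions):

    import os
    import shutil
    import time

    LOG_DIR = "/var/log/myapp"   # hypothetical log folder
    THRESHOLD = 0.80             # clean up once the disk is more than 80% full
    MAX_AGE = 7 * 24 * 3600      # delete files older than 7 days

    usage = shutil.disk_usage("/")
    if usage.used / usage.total > THRESHOLD:
        cutoff = time.time() - MAX_AGE
        for name in os.listdir(LOG_DIR):
            path = os.path.join(LOG_DIR, name)
            if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
                os.remove(path)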

AWS Lambda run command on EC2 instance and get result

I have an EC2 instance that is running a few processes. I also have a Lambda script that is triggered through various means. I would like this Lambda script to talk to my EC2 instance and get a list of running processes from it (Essentially run ps aux on the EC2 box, and read the output).
Now this is easy enough with just one instance and its instance-id: just SSH in, run the command, get the output, and be on my way. However, I would like to scale this to multiple EC2 instances, for which only the instance-id is known and SSH keys may not be available.
Is such a configuration possible with Lambda and Boto (or other libraries)? Or do I just have to run a microserver on each of my instances that replies with the given information (something I'm really trying to avoid)?
You can do this easily with AWS Systems Manager - Run Command
AWS Systems Manager provides you safe, secure remote management of your instances at scale without logging into your servers, replacing the need for bastion hosts, SSH, or remote PowerShell.
Specifically:
Use the send-command API from your Lambda function to get the list of processes on a group of instances. You can do this by providing a list of instance IDs or even a tag query (see the sketch after this list).
You can also use CloudWatch Events to trigger a Run Command directly.
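For example, a minimal Lambda sketch in Python/boto3 (the Role=worker tag is a placeholder, and the instances need the SSM agent plus an instance profile that allows SSM):

    import time
    import boto3

    ssm = boto3.client("ssm")

    def handler(event, context):
        # Run "ps aux" on every instance carrying the (assumed) Role=worker tag.
        cmd = ssm.send_command(
            Targets=[{"Key": "tag:Role", "Values": ["worker"]}],
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": ["ps aux"]},
        )
        command_id = cmd["Command"]["CommandId"]
        time.sleep(5)  # crude wait; real code should poll until the command finishes
        # Collect the per-instance output.
        invocations = ssm.list_command_invocations(
            CommandId=command_id, Details=True
        )["CommandInvocations"]
        for inv in invocations:
            print(inv["InstanceId"], inv["CommandPlugins"][0]["Output"])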
I don't think there is anything available out of the box for this scenario.
Instead of querying, try an alternate approach: install an agent on all EC2 instances that reports the required information to a central service, or perhaps to a DynamoDB table with InstanceId as the hash key.
You may want to bake this script into the AMI itself as a cron job (executed hourly, perhaps).
With this implementation, you reduce the complexity of managing and running a separate web service on each EC2 instance.
Query the DynamoDB table on demand. There will be a lag, since the data is not real time, but you can always reduce the cron interval to suit your needs.
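A sketch of what such an agent could look like in Python, assuming a hypothetical InstanceProcesses table with InstanceId as the hash key (the metadata lookup uses the IMDSv1 style for brevity):

    import subprocess
    import time
    import urllib.request

    import boto3

    # Discover this instance's ID from the metadata service.
    instance_id = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
    ).read().decode()

    # Capture the process list and upsert it under this instance's ID.
    processes = subprocess.check_output(["ps", "aux"], text=True)
    table = boto3.resource("dynamodb").Table("InstanceProcesses")  # assumed table
    table.put_item(Item={
        "InstanceId": instance_id,
        "Processes": processes,
        "UpdatedAt": int(time.time()),
    })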
As Yeshodhan mentioned, there is no direct approach for this. However, there is one more option.
1) Save your private key file to an S3 bucket, create a Lambda function, and use the Python Fabric module to log in to the remote machines from the Lambda function and execute commands.
The above approach is possible, but I highly recommend launching a separate machine, using a configuration management system (preferably Ansible), and getting the results from the remote machines that way.
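For completeness, a sketch of that key-in-S3 approach using Fabric 2 (bucket, key, host, and user are placeholders, and Fabric has to be bundled into the Lambda deployment package):

    import boto3
    from fabric import Connection  # third-party; package it with the Lambda

    def handler(event, context):
        # Fetch the (hypothetical) private key from S3 into Lambda's /tmp.
        boto3.client("s3").download_file("my-keys-bucket", "app.pem", "/tmp/app.pem")
        conn = Connection(
            host="10.0.1.25",  # placeholder private IP
            user="ec2-user",
            connect_kwargs={"key_filename": "/tmp/app.pem"},
        )
        result = conn.run("ps aux", hide=True)
        return result.stdout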

Amazon EC2 - how to get list of process running on instances via AWS API?

How can I get a list of the processes running on Amazon EC2 instances via the AWS API?
This could be accomplished by using the Amazon EC2 Systems Manager Run Command, which uses an agent installed on EC2 instances to run remote commands.
It takes a bit of configuration, but allows you to run commands on potentially hundreds of instances with one command.
This isn't possible. The EC2 API doesn't provide any actions to perform operations or retrieve data from the operating system layer.

EC2 Event [Running] + Lambda Function

What I need to do is this: when an EC2 instance is launched, a Lambda function (or something else) installs the script that monitors memory and disk usage on the host.
I'm thinking about how I can do that. Can anyone give me an idea?
You don't need a Lambda function. Pass your install script as user data.
See: Running Commands on Your Linux Instance at Launch
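For example, a sketch of launching an instance with such a user-data script via boto3; the AMI ID is a placeholder, and the install commands follow the Linux monitoring-scripts guide (package names and the download URL may have changed):

    import boto3

    # Installs the CloudWatch monitoring scripts and reports memory every 5 minutes.
    USER_DATA = """#!/bin/bash
    yum install -y perl-Switch perl-DateTime perl-Sys-Syslog perl-LWP-Protocol-https
    curl -O https://aws-cloudwatch.s3.amazonaws.com/downloads/CloudWatchMonitoringScripts-1.2.2.zip
    unzip CloudWatchMonitoringScripts-1.2.2.zip -d /opt
    echo '*/5 * * * * root /opt/aws-scripts-mon/mon-put-instance-data.pl --mem-util --from-cron' >> /etc/crontab
    """

    ec2 = boto3.client("ec2")
    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t3.micro",
        MinCount=1, MaxCount=1,
        UserData=USER_DATA,
    )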
It appears that your requirement is to monitor Memory and Disk usage from an Amazon EC2 instance. I will assume that you want to monitor it via Amazon CloudWatch.
Amazon CloudWatch provides default metrics for EC2 instances including CPU utilization, network traffic and disk access. These metrics are visible from the hypervisor. However, CloudWatch cannot see 'inside' the EC2 instance, so it is necessary to run scripts from within the instance to track things like free memory and free disk space. The scripts talk to the operating system to retrieve these metrics, which is why they have to run 'within' the instance.
Some standard monitoring scripts are available for Linux instances: Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances
You can, of course, write your own scripts to send custom metrics to CloudWatch. Once installed, the scripts will run automatically when the instance is restarted.
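As an illustration, a minimal custom-metric sketch in Python (it relies on the third-party psutil package; the namespace and metric name are arbitrary choices):

    import urllib.request

    import boto3
    import psutil  # third-party

    # Tag the datapoint with this instance's ID so each instance gets its own series.
    instance_id = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
    ).read().decode()

    boto3.client("cloudwatch").put_metric_data(
        Namespace="Custom/System",  # arbitrary namespace
        MetricData=[{
            "MetricName": "MemoryUtilization",
            "Dimensions": [{"Name": "InstanceId", "Value": instance_id}],
            "Value": psutil.virtual_memory().percent,
            "Unit": "Percent",
        }],
    )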
If you wish to install these scripts (or your own scripts) on new EC2 instances, there are a couple of methods:
Install the scripts on one instance, then create an Amazon Machine Image (AMI) of that instance that contains a copy of the disk. You can then launch new instances using that AMI, and the scripts will already be installed on the new instances.
Launch the instance(s) with a User Data script to install the monitoring script. Any script passed through User Data will automatically be run the first time that the instance is started.
When you use an Auto Scaling group, you must specify a launch configuration.
Part of the launch configuration is the user-data script, which is executed when the instance boots.
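A sketch of creating such a launch configuration with boto3 (the name, AMI, and bootstrap script path are placeholders):

    import boto3

    autoscaling = boto3.client("autoscaling")
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="monitored-web-lc",   # placeholder name
        ImageId="ami-0123456789abcdef0",              # placeholder AMI
        InstanceType="t3.micro",
        # Hypothetical bootstrap script baked into the AMI.
        UserData="#!/bin/bash\n/opt/bootstrap/install-monitoring.sh\n",
    )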
This can also easily be done from CloudFormation templates, if that is what you use to create the new EC2 VMs.
You can find sample scripts in the AWS documentation.

Boot strapping AWS auto scale instances

We are discussing at a client how to bootstrap auto-scaled AWS instances. Essentially, an instance comes up with hardly anything on it. It has a generic startup script that asks somewhere, "What am I supposed to do next?"
I'm thinking we can use Amazon tags and have the instance itself query AWS, using the awscli tool set, to find out its role. This could supply Puppet info, environment info (dev/stage/prod, for example), and so on. This should be doable with just the DescribeTags privilege. I'm facing resistance, however.
I am looking for suggestions on how a fresh AWS instance can find out its own purpose, whether from AWS or perhaps from a service broker of some sort.
EC2 instances offer a feature called User Data meant to solve this problem. User Data executes a shell script to perform provisioning functions on new instances. A typical pattern is to use the User Data to download or clone a configuration management source repository, such as Chef, Puppet, or Ansible, and run it locally on the box to perform more complete provisioning.
As e-j-brennan states, it's also common to pre-bundle an AMI that has already been provisioned. This approach is faster, since no provisioning needs to happen at boot time, but it is perhaps less flexible, since the instance isn't customized.
You may also be interested in instance metadata, which exposes data such as network details (and, if enabled, tags) via a URL path accessible only to the instance itself.
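A sketch of that pattern in Python: read the instance ID from the metadata service, then look up the instance's own tags, which needs only the ec2:DescribeTags permission (the Role tag is an assumption):

    import urllib.request

    import boto3

    instance_id = urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
    ).read().decode()

    # Look up this instance's tags; the instance profile only needs DescribeTags.
    tags = boto3.client("ec2").describe_tags(
        Filters=[{"Name": "resource-id", "Values": [instance_id]}]
    )["Tags"]
    role = next((t["Value"] for t in tags if t["Key"] == "Role"), None)
    print("This instance's role:", role)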
An instance doesn't have to come up with 'hardly anything on it', though. You can (and should) build your own custom AMI (Amazon Machine Image) with any and all software you need, and when you need to auto-scale, you boot new instances from the AMI you previously created and saved.
http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/getting-started-create-custom-ami.html
I would recommend using AWS Elastic Beanstalk for creating these instances. It makes things easier, since it creates the Auto Scaling groups and launch configurations (boot-up code) for you, which you can edit later. Also, you only pay for the EC2 instances, and you can manage most things from the Beanstalk console.