Monitor activity on AWS EC2 instances - amazon-web-services

I have a requirement from a customer who wants to know what his EC2 instances are being used for. The users are students, and they have elevated permissions on those EC2 instances. He wants a report that tells him which programs are being run on each instance. Are the students using Python? Mining Ethereum? Something else? Is there any tool for Windows and Linux that can help me with this?
I'm thinking of installing an agent on each EC2 instance to collect a list of the processes being executed, but turning a raw process list into something useful will be a lot of work. Any other suggestions?
Thanks

You want to look at AWS Systems Manager (formerly EC2 Systems Manager). There are two features that might help.
Inventory can collect information about patch levels, installed software, etc., and create reports from the results.
For finer detail, Run Command will let you periodically run something like ps -ef across your fleet and report on the results.
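As a rough boto3 sketch of the Run Command idea (the tag key/value here are placeholders, and the instances must already be SSM-managed):

    import boto3

    ssm = boto3.client("ssm")

    # Run "ps -ef" on every SSM-managed instance carrying a hypothetical Team=students tag
    response = ssm.send_command(
        Targets=[{"Key": "tag:Team", "Values": ["students"]}],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["ps -ef"]},
        Comment="periodic process inventory",
    )
    print(response["Command"]["CommandId"])

Each instance's output can then be fetched with get-command-invocation, or sent to S3 or CloudWatch Logs for reporting.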

Look into AWS CloudTrail. It is native to AWS, so it is easy to enable, and it monitors and logs the API activity you would be interested in auditing.
https://aws.amazon.com/cloudtrail/

Related

How to get GPU information from multiple EC2 instances?

I have multiple shared EC2 instances with GPUs that are used by a team, and I want to create a streamlined process for checking which machines have free GPUs. I use SSH to access the instances and can find the relevant GPU information using
nvidia-smi
However, since I have no experience here, how can I go about checking for free GPUs across a given list of instances? So far I can only think of SSH-ing into each one and collecting the information in one place, but I would like a better way to do it.
You can configure all your instances to be managed by AWS Systems Manager (SSM) and this will allow you to run commands on a fleet of instances using the SSM Document AWS-RunShellScript. SSM allows you to remotely execute shell commands on managed instances without having to manually log in and execute the way you would with SSH. This doc should get you started.
You might also wanna look into SSM Automation.
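As a rough sketch of the Run Command approach with boto3 (the instance IDs are placeholders, and the fixed sleep is a stand-in for proper status polling):

    import time
    import boto3

    ssm = boto3.client("ssm")
    instance_ids = ["i-0123456789abcdef0", "i-0fedcba9876543210"]  # placeholder IDs

    # Ask every instance for its GPU utilisation in a single Run Command call
    command_id = ssm.send_command(
        InstanceIds=instance_ids,
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": [
            "nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv"
        ]},
    )["Command"]["CommandId"]

    time.sleep(5)  # naive wait; a real script should poll until the command finishes

    for instance_id in instance_ids:
        result = ssm.get_command_invocation(CommandId=command_id, InstanceId=instance_id)
        print(instance_id, result["Status"])
        print(result["StandardOutputContent"])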

How to get the PowerShell version from all EC2 instances running on the aws account?

In our company we have hundreds of instances, some Linux and some Windows. I need to check which instances need the latest PowerShell installed, so I need a list of the PowerShell versions on all of the instances.
To get the information, I was thinking of somehow using Fleet Manager (from Systems Manager) to gather it for all the instances, but I am not sure how to do it.
Would appreciate any advice.
For anyone looking for it: Run Command with Systems Manager is the answer. Use it with the AWS-RunPowerShellScript document, have it ask each instance for its $PSVersionTable.PSVersion, and dump the output into S3 for easier querying.
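A minimal boto3 sketch of that answer (the tag filter and bucket name are placeholders; AWS-RunPowerShellScript is the stock SSM document):

    import boto3

    ssm = boto3.client("ssm")

    # Ask every tagged Windows instance for its PowerShell version and dump the output to S3
    ssm.send_command(
        Targets=[{"Key": "tag:Platform", "Values": ["windows"]}],   # hypothetical tag
        DocumentName="AWS-RunPowerShellScript",
        Parameters={"commands": ["$PSVersionTable.PSVersion | ConvertTo-Json"]},
        OutputS3BucketName="my-ps-version-reports",                 # placeholder bucket
        OutputS3KeyPrefix="powershell-versions/",
    )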

How to add some new code to an existing EC2 instance

Bear with me, as what I am requesting may be impossible. I am an AWS noob.
So I am going to describe to you the situation I am in...
I am doing a freelance gig and was essentially handed the keys to AWS. That is, I was handed the root user login credentials for the AWS account that powers this website.
Now there are 3 EC2 instances. One of the instances is a linux box that, from what I am being told, is running a Django Python backend.
My new "service" if you will must exist within this instance.
How do I introduce new source code into this instance? Is there a way to pull down the existing source code that lives within it?
I am not being helped by any existing/previous developers, so I have simply been handed the AWS credentials and have no idea where to start.
Is this even possible? That is, is it possible to pull the source code from an EC2 instance and/or modify the code? How do I do this?
EC2 instances are just virtual machines, so you can use SSH/SCP/SFTP to move files to and from them. You can also use the AWS CLI tools to copy things from S3. Dealer's choice...
Now, to get into this instance: if you look in the web console you can find its IP address(es), its security groups (firewall rules), and the name of its key pair. Hopefully they gave you the keys; you need them to SSH in.
You'll also want to check that a security group is applied that has SSH open. Hopefully only to your IP :)
If you don't have the keys, you'll have to create an AMI of the instance so you can launch a new one with a key pair you do have.
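If clicking around the console gets tedious, the same details are also available programmatically; a minimal boto3 sketch, assuming a placeholder instance ID:

    import boto3

    ec2 = boto3.client("ec2")

    # Look up the connection details for one instance (placeholder ID)
    reservations = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])["Reservations"]
    instance = reservations[0]["Instances"][0]

    print("Public IP:      ", instance.get("PublicIpAddress", "none"))
    print("Key pair:       ", instance.get("KeyName", "none"))
    print("Security groups:", [sg["GroupName"] for sg in instance["SecurityGroups"]])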
Amazon has a set of tools for you in the AWS CodeSuite.
The tool used for deploying the code is AWS CodeDeploy. With this service you install an agent onto your host; when triggered, it pulls down an artifact of the code base and installs it on matching hosts. You can even run additional commands through its hook system.
But you probably also want to trigger this to happen, maybe even automatically? CodeDeploy can be orchestrated with AWS CodePipeline.
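As a rough illustration of how a deployment gets kicked off once the agent, application, and deployment group exist (every name and the S3 location below are placeholders):

    import boto3

    codedeploy = boto3.client("codedeploy")

    # Deploy a zipped revision stored in S3 (placeholder names throughout)
    deployment = codedeploy.create_deployment(
        applicationName="my-django-app",
        deploymentGroupName="production",
        revision={
            "revisionType": "S3",
            "s3Location": {
                "bucket": "my-artifacts-bucket",
                "key": "releases/app-v1.zip",
                "bundleType": "zip",
            },
        },
    )
    print(deployment["deploymentId"])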

AWS Lambda run command on EC2 instance and get result

I have an EC2 instance that is running a few processes. I also have a Lambda script that is triggered through various means. I would like this Lambda script to talk to my EC2 instance and get a list of running processes from it (Essentially run ps aux on the EC2 box, and read the output).
Now, this is easy enough with just one instance and its instance ID: just SSH in, run the command, get the output, and be on my way. However, I would like to scale this to multiple EC2 instances, for which only the instance IDs are known and SSH keys may not be available.
Is such a configuration possible with Lambda and Boto (or other libraries)? Or do I just have to run a micro-server on each of my instances that replies with the given information (something I'm really trying to avoid)?
You can do this easily with AWS Systems Manager - Run Command
AWS Systems Manager provides you safe, secure remote management of your instances at scale without logging into your servers, replacing the need for bastion hosts, SSH, or remote PowerShell.
Specifically:
Use the send-command API from a Lambda function to get the list of processes on a group of instances. You can do this by providing a list of instance IDs or even a tag query.
You can also use CloudWatch Events to trigger a Run Command directly
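Inside Lambda, the call can be as small as the following sketch (the tag filter is a placeholder; each instance's output is then available via get-command-invocation or, if configured, S3/CloudWatch Logs):

    import boto3

    ssm = boto3.client("ssm")

    def lambda_handler(event, context):
        # Fan "ps aux" out to every instance carrying a hypothetical Fleet=workers tag
        command = ssm.send_command(
            Targets=[{"Key": "tag:Fleet", "Values": ["workers"]}],
            DocumentName="AWS-RunShellScript",
            Parameters={"commands": ["ps aux"]},
        )
        return {"commandId": command["Command"]["CommandId"]}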
I don't think there is something available out of the box for this scenario.
Instead of querying, try an alternate approach: install an agent on all EC2 instances that reports the required information to a central service or, for example, a DynamoDB table with InstanceId as the hash key.
You may want to bake this script into the AMI itself as a cron job (executed, say, hourly).
With this implementation, you reduce the complexity of managing and running a separate web service on each EC2 instance.
Query the DynamoDB table on demand. There will be a lag, as the data may not be real time, but you can always shorten the cron interval to suit your needs.
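A sketch of what such an agent-side script could look like (the table name, its key schema, and the hourly cron schedule are all assumptions taken from the suggestion above):

    import subprocess
    import time
    import urllib.request
    import boto3

    # Hypothetical table keyed on InstanceId, as suggested above
    table = boto3.resource("dynamodb").Table("InstanceProcesses")

    def instance_id():
        # Instance metadata service (IMDSv1-style call kept short for the example)
        return urllib.request.urlopen(
            "http://169.254.169.254/latest/meta-data/instance-id", timeout=2
        ).read().decode()

    def report():
        processes = subprocess.run(
            ["ps", "-eo", "pid,comm,%cpu"], capture_output=True, text=True
        ).stdout
        table.put_item(Item={
            "InstanceId": instance_id(),
            "Timestamp": int(time.time()),
            "Processes": processes,
        })

    if __name__ == "__main__":
        report()  # schedule with cron, e.g. hourly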
As Yeshodhan mentioned, there is no direct approach for this.
However, there is one more approach.
Save your private key file to an S3 bucket, create a Lambda function, and use the Python Fabric module to log in to the remote machines from the Lambda function and execute commands.
The above approach is possible, but I highly recommend launching a separate machine and using a configuration management system (preferably Ansible) to collect the results from the remote machines.
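For reference, a rough sketch of the Fabric idea (the bucket, key path, host, and user are placeholders, and the fabric package has to be bundled with the Lambda deployment):

    import boto3
    from fabric import Connection  # Fabric 2.x; must be packaged with the function

    def lambda_handler(event, context):
        # Pull the private key from S3 into Lambda's writable /tmp (placeholder bucket/key)
        boto3.client("s3").download_file("my-keys-bucket", "keys/app.pem", "/tmp/app.pem")

        # Log in to one instance and capture the process list (placeholder host/user)
        conn = Connection(
            host="10.0.1.23",
            user="ec2-user",
            connect_kwargs={"key_filename": "/tmp/app.pem"},
        )
        result = conn.run("ps aux", hide=True)
        return {"stdout": result.stdout}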

Bootstrapping AWS auto-scaled instances

We are discussing with a client how to bootstrap auto-scaled AWS instances. Essentially, an instance comes up with hardly anything on it. It has a generic startup script that asks somewhere, "What am I supposed to do next?"
I'm thinking we can use Amazon tags and have the instance itself ask AWS, using the awscli tool set, to find out its role. This could provide Puppet info, environment info (dev/stage/prod, for example), and so on. It should be doable with just the DescribeTags privilege. I'm facing resistance, however.
I am looking for suggestions on how a fresh AWS instance can find out its own purpose, whether from AWS or perhaps from a service broker of some sort.
EC2 instances offer a feature called User Data meant to solve this problem. User Data executes a shell script to perform provisioning functions on new instances. A typical pattern is to use the User Data to download or clone a configuration management source repository, such as Chef, Puppet, or Ansible, and run it locally on the box to perform more complete provisioning.
As e-j-brennan states, it's also common to pre-bundle an AMI that has already been provisioned. This approach is faster, since no provisioning needs to happen at boot time, but it is perhaps less flexible since the instance isn't customized.
You may also be interested in instance metadata, which exposes some data such as network details and tags via a URL path accessible only to the instance itself.
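The tag-lookup idea from the question is only a few lines; a minimal boto3 sketch (the Role and Environment tag keys are just examples):

    import urllib.request
    import boto3

    METADATA = "http://169.254.169.254/latest/meta-data"

    def get_metadata(path):
        # IMDSv1-style call kept short for the example
        return urllib.request.urlopen(f"{METADATA}/{path}", timeout=2).read().decode()

    instance_id = get_metadata("instance-id")
    region = get_metadata("placement/region")

    ec2 = boto3.client("ec2", region_name=region)
    tags = ec2.describe_tags(
        Filters=[{"Name": "resource-id", "Values": [instance_id]}]
    )["Tags"]

    role = next((t["Value"] for t in tags if t["Key"] == "Role"), None)               # hypothetical tag keys
    environment = next((t["Value"] for t in tags if t["Key"] == "Environment"), None)
    print(f"role={role} environment={environment}")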
An instance doesn't have to come up with "hardly anything on it", though. You can (and should) build your own custom AMI (Amazon Machine Image) with any and all software you need running on it, and when you need to auto-scale, you boot the new instance from the AMI you previously created and saved.
http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/getting-started-create-custom-ami.html
I would recommend using AWS Elastic Beanstalk for creating these instances; it makes things easier since it creates the Auto Scaling groups and launch configurations (boot-up code), which you can edit later. Also, you only pay for the EC2 instances, and you can manage most things from the Beanstalk console.