Run a batch file on EC2 from a (python) lambda - amazon-web-services

I can see a generic way of starting an EC2 from lambda in Start and Stop Instances at Scheduled Intervals Using Lambda and CloudWatch.
Suppose I use that method to start an EC2, and suppose the AMI is a Windows Server 2019 image customised to have a .bat file on the desktop, and also suppose I'm using a Python Lambda.
How can I execute this batch file from the lambda? (i.e. just as though someone had RDP'd into the instance and double-clicked on it)
Note: To be very clear, I basically want to start the EC2 using the method given in the AWS docs (above), and, right after the instance has started, run the batch file that will be sitting on the instance's desktop.

I think you have a few concepts mixed together.
AWS Lambda functions run on the Lambda service, without having to use Amazon EC2 instances. This is what makes them "serverless".
If you have a batch file on an Amazon EC2 instance, you would presumably want to run that batch file on the EC2 instance itself, without involving Lambda (since you have got a server).
If you wish to run a script on an EC2 instance when it launches for the first time, you can provide a PowerShell or Command-Line script via the User Data field. Software on the AMI will automatically execute this script the first time that the instance starts.
This script could do all the work itself, or it could simply call another script that is stored on the disk. Some people use the script to download another script from a repository (eg Amazon S3 or GitHub) and then execute the downloaded script.
For more information, see: Running Commands on Your Windows Instance at Launch - Amazon Elastic Compute Cloud
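As a rough illustration of supplying User Data from Python (the AMI ID, instance type, and batch file path below are placeholders, not values from the question), a boto3 launch call could look like this:

    import boto3  # AWS SDK for Python

    ec2 = boto3.client('ec2', region_name='us-east-1')  # assumed region

    # <script> tags tell the Windows launch agent (EC2Launch/EC2Config) to run
    # the contents as a batch script on first boot; the .bat path is hypothetical.
    bat_path = r"C:\Users\Administrator\Desktop\run.bat"
    user_data = "<script>\n" + bat_path + "\n</script>"

    ec2.run_instances(
        ImageId='ami-0123456789abcdef0',  # hypothetical custom Windows AMI
        InstanceType='t3.medium',
        MinCount=1,
        MaxCount=1,
        UserData=user_data,               # boto3 base64-encodes this for you
    )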
If the Amazon EC2 instance is already running and you wish to trigger a script to execute, you can use the AWS Systems Manager Run Command. This works by having an agent on the instance which can be remotely triggered, thereby running scripts without having to login to the instance.
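For example, a minimal sketch of a Python Lambda that starts the instance and then triggers a batch file via Run Command could look like the following (the region, instance ID, and batch file path are assumptions; the SSM Agent must be installed on the instance, and the instance profile and Lambda role need the relevant EC2/SSM permissions):

    import time
    import boto3

    REGION = 'us-east-1'                    # assumed region
    INSTANCE_ID = 'i-0123456789abcdef0'     # assumed instance ID

    ec2 = boto3.client('ec2', region_name=REGION)
    ssm = boto3.client('ssm', region_name=REGION)

    def lambda_handler(event, context):
        # Start the stopped instance and wait until it is running
        ec2.start_instances(InstanceIds=[INSTANCE_ID])
        ec2.get_waiter('instance_running').wait(InstanceIds=[INSTANCE_ID])

        # Crude pause so the SSM Agent can register; polling
        # ssm.describe_instance_information() would be more robust
        time.sleep(60)

        # Run the desktop batch file through the managed PowerShell document
        response = ssm.send_command(
            InstanceIds=[INSTANCE_ID],
            DocumentName='AWS-RunPowerShellScript',
            Parameters={'commands': [r'& "C:\Users\Administrator\Desktop\my-task.bat"']},
        )
        return response['Command']['CommandId']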

Related

How to schedule an AWS CLI script on a Windows EC2 instance

I have a Windows EC2 instance in place. I cannot delete it every day, since we have multiple tools installed on it, such as DBeaver for accessing a Postgres RDS database. Now we have a task of deleting a few S3 folders, and using the MobaXterm tool I can delete them via AWS CLI commands.
However, I am unable to schedule this script to run once daily in the morning. I explored a few posts, but they are not relevant to my problem: there, the user is trying to launch an instance > run a script > delete the instance, which I don't want to do.
What can be done in my case?
At least two options come to mind:
Use Windows Task Scheduler to create a task that will run your script daily directly on the instance
Use AWS Systems Manager State Manager to run a custom document that will execute your script remotely on a daily basis
I would recommend the second option because you would be able to reuse it for other instances if needed.
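A rough sketch of the second option using boto3 (the association name, instance ID, schedule, and script path are placeholders, and the SSM Agent must be running on the instance):

    import boto3

    ssm = boto3.client('ssm', region_name='us-east-1')  # assumed region

    # Create a State Manager association that runs a PowerShell command daily
    ssm.create_association(
        Name='AWS-RunPowerShellScript',              # managed SSM document
        AssociationName='daily-s3-cleanup',          # hypothetical name
        Targets=[{'Key': 'InstanceIds', 'Values': ['i-0123456789abcdef0']}],
        ScheduleExpression='cron(0 6 * * ? *)',      # every day at 06:00 UTC
        Parameters={'commands': [r'& "C:\scripts\delete-s3-folders.ps1"']},
    )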

EC2 instance automation with Python script [duplicate]

This question already has answers here:
Aws Ec2 run script program at startup
(5 answers)
Closed 1 year ago.
I am trying to run a Python script on an EC2 instance. The Python file resides on S3.
I am able to run it manually from the EC2 instance, using an IAM role which allows access to the S3 folder and files.
The question is: how can I automate the start and stop of the EC2 instance whenever needed, how can I invoke/pass a Python file to run upon starting the instance, and how can I stop the instance once the Python file completes execution?
Thanks,
Nikhil
Your requirements seem to be:
Schedule an Amazon EC2 instance to start at a specific time every day
The instance should run a Python script after starting
When the Python script has finished running, Stop the instance
Start EC2 instance on a schedule
You can use Amazon EventBridge to trigger an AWS Lambda function on a schedule.
You can code the Lambda function to call StartInstances() on the EC2 instance to Start it.
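A minimal sketch of such a Lambda function (the region and instance ID are placeholders; the function's role needs ec2:StartInstances):

    import boto3

    ec2 = boto3.client('ec2', region_name='ap-south-1')  # assumed region
    INSTANCE_ID = 'i-0123456789abcdef0'                  # assumed instance ID

    def lambda_handler(event, context):
        # Invoked on a schedule by an EventBridge rule; just starts the instance
        ec2.start_instances(InstanceIds=[INSTANCE_ID])
        return {'started': INSTANCE_ID}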
Run a script on startup
Install a script into the /var/lib/cloud/scripts/per-boot/ directory. This script can download the Python program from S3 and then run it.
When the EC2 instance starts up, it will automatically run any script in that directory.
Stop the instance when the script is finished
At the end of the script, add the command:
shutdown -h now
This will turn off the instance and place it in the Stopped state.
(This assumes that the script is running as root. If it is running as another user, it will need to use sudo shutdown -h now.)
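Putting the "run on startup" and "stop when finished" steps together, a rough per-boot sketch written in Python (the bucket, key, and paths are placeholders; boto3 must be installed on the instance) could be:

    #!/usr/bin/env python3
    # Example file: /var/lib/cloud/scripts/per-boot/run-job.py (must be executable)
    import subprocess
    import boto3

    s3 = boto3.client('s3')

    # Download the Python program from S3 (bucket and key are placeholders)
    s3.download_file('my-bucket', 'jobs/task.py', '/tmp/task.py')

    # Run it and wait for it to finish
    subprocess.run(['python3', '/tmp/task.py'], check=False)

    # Stop the instance once the job is done (per-boot scripts run as root)
    subprocess.run(['shutdown', '-h', 'now'], check=False)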
EC2 instances use cloud-init, which you can customize to run a given script on each boot. You can use regular OS tools from Python to shut down your instance (e.g. shutdown -h now).
Another alternative could be to use a Lambda function instead of an EC2 instance to run the Python script, if the script's maximum execution time is less than 15 minutes. Go serverless with AWS Lambda rather than EC2: just add your script code to an AWS Lambda function and schedule the function from Amazon EventBridge to invoke it.

AWS Lambda run command on EC2 instance and get result

I have an EC2 instance that is running a few processes. I also have a Lambda script that is triggered through various means. I would like this Lambda script to talk to my EC2 instance and get a list of running processes from it (Essentially run ps aux on the EC2 box, and read the output).
Now this is easy enough with just one instance and its instance-id. Just SSH in, run the command, get the output, and be on my way. However, I would like to scale this to multiple EC2 instances, for which only the instance-id is known and SSH keys may not be given.
Is such a configuration possible with Lambda and Boto (or other libraries)? Or do I just have to run a microserver on each of my instances that will reply with the given information (something I'm really trying to avoid)?
You can do this easily with AWS Systems Manager - Run Command
AWS Systems Manager provides you safe, secure remote management of your instances at scale without logging into your servers, replacing the need for bastion hosts, SSH, or remote PowerShell.
Specifically:
Use the send-command API from a Lambda function to get the list of all processes on a group of instances. You can do this by providing a list of instance IDs or even a tag query (a sketch follows this list)
You can also use CloudWatch Events to trigger a Run Command directly
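A rough sketch of such a Lambda function (the region and the Env=prod tag are assumptions; the instances need the SSM Agent and an instance profile that allows Systems Manager):

    import time
    import boto3

    ssm = boto3.client('ssm', region_name='us-east-1')  # assumed region

    def lambda_handler(event, context):
        # Run 'ps aux' on every instance carrying the (assumed) tag Env=prod
        cmd = ssm.send_command(
            Targets=[{'Key': 'tag:Env', 'Values': ['prod']}],
            DocumentName='AWS-RunShellScript',
            Parameters={'commands': ['ps aux']},
        )
        command_id = cmd['Command']['CommandId']
        time.sleep(5)  # crude wait; polling the invocation status is more robust

        # Collect the output from each targeted instance
        results = {}
        invocations = ssm.list_command_invocations(CommandId=command_id)
        for inv in invocations['CommandInvocations']:
            out = ssm.get_command_invocation(
                CommandId=command_id,
                InstanceId=inv['InstanceId'],
            )
            results[inv['InstanceId']] = out.get('StandardOutputContent', '')
        return results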
I don't think there is something available out of the box for this scenario.
Instead of querying, try an alternate approach. Install an agent on all EC2 instances which reports the required information to a central service, or perhaps a DynamoDB table with InstanceId as the hash key.
You may want to bake this script into the AMI itself as a cron job (executed hourly, perhaps).
With this implementation, you reduce the complexity of managing and running a separate web service on each EC2 instance.
Query the DynamoDB table on demand. There will be a lag, as the data may not be real time, but you can always reduce the cron interval per your needs.
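A rough sketch of such an agent script, run from cron on each instance (the table name and attributes are assumptions):

    import subprocess
    import time
    import urllib.request

    import boto3

    # Instance ID from the instance metadata service (IMDSv1 shown for brevity)
    instance_id = urllib.request.urlopen(
        'http://169.254.169.254/latest/meta-data/instance-id', timeout=2
    ).read().decode()

    # Capture the current process list
    processes = subprocess.run(['ps', 'aux'], capture_output=True, text=True).stdout

    # Write it to a hypothetical DynamoDB table keyed on InstanceId
    table = boto3.resource('dynamodb').Table('instance-processes')
    table.put_item(Item={
        'InstanceId': instance_id,
        'UpdatedAt': int(time.time()),
        'Processes': processes,
    })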
As Yeshodhan mentioned, there is no direct approach for this.
However, there is one more approach.
Save your private key file to an S3 bucket, create a Lambda function, and use the Python Fabric module to log in to the remote machines from the Lambda function and execute commands.
The above-mentioned approach is possible, but I highly recommend launching a separate machine, using a configuration management system (preferably Ansible), and getting the results from the remote machines.
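If you do go the Fabric route, a minimal sketch assuming Fabric 2.x is packaged with the Lambda deployment (the host, user, bucket, and key name are placeholders):

    import boto3
    from fabric import Connection  # Fabric 2.x

    # Fetch the private key from S3 into Lambda's writable /tmp directory
    boto3.client('s3').download_file('my-keys-bucket', 'app.pem', '/tmp/app.pem')

    # Connect to the instance (host and user are placeholders) and run a command
    conn = Connection(
        host='10.0.1.25',
        user='ec2-user',
        connect_kwargs={'key_filename': '/tmp/app.pem'},
    )
    result = conn.run('ps aux', hide=True)
    print(result.stdout)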

User-data script doesn't launch with EC2 Instance

Background:
Services used: ec2, autoscaling, s3, sqs, cloudwatch
AMI and Environment: Windows 64-bit
Network: IAM and security group attached
Job: Run a script which starts a program (.exe) which is loaded from S3
I have an Auto Scaling group that launches N instances. The user data script is based on the AWS CLI and a few commands in PowerShell. I was expecting the instances to execute my script upon their initialization. Note that one of the tasks before the job is to first download the AWS CLI using PowerShell, because the rest of the script is based on AWS commands.
What am I missing? I thought the launch would run the script in the user data.
Note that this script was tested on an instance with the same configuration (VPC, security group, etc.).

Automate AWS instance start and stop

I'm running an instance in Amazon AWS and it runs non-stop every day. I'm using an Ubuntu EC2 instance which is running Apache, the Mirth Connect tool, and a LAMP server. I want to run this instance only during a particular time window each day. I prefer not to use any additional AWS services such as CloudWatch. Is there a way we could achieve this?
The main purpose is to use Mirth Connect to fetch data from a MySQL database.
There are 3 solutions.
AWS Data Pipeline - You can schedule the instance start/stop just like cron. It will cost you one hour of a t1.micro instance for every start/stop
AWS Lambda - Define a Lambda function that gets triggered at a predefined time. Your Lambda function can start/stop instances. Your cost will be very minimal or $0
Write a shell script and run it as a cron job or run it on demand. The script will have AWS CLI command to start and stop the instance.
I used Data Pipeline for a long time before moving to Lambda. Data Pipeline is very trivial. Just paste the AWS CLI commands to stop and start instances. Lambda is more involved.
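For the Lambda option, a rough sketch of a single function that both starts and stops an instance, driven by an action field passed in the scheduled event (the field name, region, and instance ID are assumptions):

    import boto3

    ec2 = boto3.client('ec2', region_name='us-east-1')  # assumed region
    INSTANCE_ID = 'i-0123456789abcdef0'                 # assumed instance ID

    def lambda_handler(event, context):
        # Two schedules invoke this function: one with {"action": "start"}
        # in the morning and one with {"action": "stop"} in the evening.
        if event.get('action') == 'start':
            ec2.start_instances(InstanceIds=[INSTANCE_ID])
        else:
            ec2.stop_instances(InstanceIds=[INSTANCE_ID])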
I guess for that you'll need another machine which is on 24x7, on which you can write a cron job in Python using boto, or in any other language like bash.
I don't see how you can start an instance in the stopped state without using another machine.
Or you could have a simple Raspberry Pi at your home which does the on/off work for you using the AWS CLI or simple Python. How about that? ;)