How to run a script on an EC2 instance remotely? - amazon-web-services

I have an EC2 instance and I need to download a file from its D drive through my program. Currently, it's a very annoying process because I can't access the instance directly from my local machine. What I am doing now is running a script on the instance; the instance uploads the file I need to S3, and my program accesses S3 to read the file.
Just wondering whether there is any simpler way to access the drive on the instance instead of going through S3?

I have used AWS Data Pipeline and its Task Runner to execute scripts on a remote instance. The Task Runner waits for a pipeline event published to its worker group.
http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-using-task-runner.html
I use it to execute shell scripts and commands on a schedule. The script to run is uploaded to S3, and the Data Pipeline template specifies the script's path. It works great for periodic tasks, and you can do anything you want on the remote box via the script.
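For reference, a rough boto3 sketch of that setup (the pipeline name, worker group, and script path are all assumptions) would be an on-demand pipeline whose ShellCommandActivity points at the S3 script and targets the worker group your Task Runner polls:
import boto3

dp = boto3.client('datapipeline', region_name='us-west-2')

# Create an empty pipeline, then push a minimal definition into it.
pipeline_id = dp.create_pipeline(name='run-remote-script', uniqueId='run-remote-script-1')['pipelineId']
dp.put_pipeline_definition(
    pipelineId=pipeline_id,
    pipelineObjects=[
        {'id': 'Default', 'name': 'Default', 'fields': [
            {'key': 'scheduleType', 'stringValue': 'ondemand'},
        ]},
        {'id': 'RunMyScript', 'name': 'RunMyScript', 'fields': [
            {'key': 'type', 'stringValue': 'ShellCommandActivity'},
            # The Task Runner polling this worker group picks the job up.
            {'key': 'workerGroup', 'stringValue': 'myWorkerGroup'},
            {'key': 'scriptUri', 'stringValue': 's3://my-bucket/scripts/myscript.sh'},
        ]},
    ],
)
dp.activate_pipeline(pipelineId=pipeline_id)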

You cannot directly download the file from the EC2 instance, except via S3 (or perhaps using the scp command) from your remote EC2.
But to simplify this annoying process you can use AWS Systems Manager.
AWS Systems Manager Run Command allows you to remotely and securely run a set of commands on EC2 instances as well as on-premises servers. Below are the high-level steps to achieve this.
Attach an instance IAM role:
The EC2 instance must have an IAM role with the AmazonSSMFullAccess policy attached. This role enables the instance to communicate with the Systems Manager API.
Install the SSM Agent:
The EC2 instance must have the SSM Agent installed on it. The SSM Agent processes the Run Command requests and configures the instance as per the command.
Execute the command:
Example usage via the AWS CLI:
Execute the following command to retrieve the services running on the instance. Replace Instance-ID with the EC2 instance ID.
aws ssm send-command --document-name "AWS-RunShellScript" --comment "listing services" --instance-ids "Instance-ID" --parameters commands="service --status-all" --region us-west-2 --output text
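The boto3 equivalent is a two-step call, since send-command is asynchronous (the instance ID and the sleep are placeholders):
import time
import boto3

ssm = boto3.client('ssm', region_name='us-west-2')
resp = ssm.send_command(
    InstanceIds=['Instance-ID'],
    DocumentName='AWS-RunShellScript',
    Comment='listing services',
    Parameters={'commands': ['service --status-all']},
)
command_id = resp['Command']['CommandId']

# Give the agent a moment to run the command, then fetch the output.
time.sleep(2)
out = ssm.get_command_invocation(CommandId=command_id, InstanceId='Instance-ID')
print(out['Status'], out['StandardOutputContent'])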
More detailed information: https://www.justdocloud.com/2018/04/01/run-commands-remotely-ec2-instances/

Related

How to scp to ec2 instance via ssm agent using boto3 and send file

Hi, I need to transfer a file to an EC2 machine via the SSM agent. I have successfully installed the ssm-agent on the EC2 instances, and from the UI I am able to start a session via Session Manager and log in to the shell of that EC2 machine.
Now I have tried to automate it via boto3, using the code below:
import boto3

ssm_client = boto3.client('ssm', 'us-west-2')
resp = ssm_client.send_command(
    DocumentName="AWS-RunShellScript",  # One of AWS' preconfigured documents
    Parameters={'commands': ['echo "hello world" >> /tmp/test.txt']},
    InstanceIds=['i-xxxxx'],
)
The above works fine and I am able to create a file called test.txt on the remote machine, but this is via the echo command.
Instead, I need to send a file from my local machine to this remote EC2 machine via the SSM agent, hence I did the following:
Modified "/etc/ssh/ssh_config" with a ProxyCommand as below:
# SSH over Session Manager
host i-* mi-*
ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
Then, in the above code, I have tried to start a session with the code below, and that is also successful:
response = ssm_client.start_session(Target='i-04843lr540028e96a')
Now I am not sure how to use this session response, or how to use this AWS SSM session to send a file.
Environment description:
Source: a pod running in an EKS cluster
Dest: an EC2 machine (which has the ssm agent running)
File to be transferred: an important private key which will be used by some process on the EC2 machine, and it will be different for different machines
Solution tried:
I can push the file to S3 at the source, and the SSM boto3 library can pull it from S3 and store it on the remote EC2 machine.
But I don't want to do the above, because I don't want to store the private key in S3. So I wanted to send the file directly from memory to the remote EC2 machine.
Basically, I wanted to achieve the scp that is mentioned in this AWS document: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-working-with-sessions-start.html#sessions-start-ssh
If you have SSH over SSM set up, you can just use normal scp, like so:
scp file.txt ec2-user@i-04843lr540028e96a:~/
If it isn't working, make sure you have:
The Session Manager plugin installed locally
Your key pair on the instance and locally (you will need to define it in your SSH config, or via the -i switch)
The SSM agent on the instance (installed by default on Amazon Linux 2)
An instance role attached to the instance that allows Session Manager (it needs to be there at boot, so if you just attached it, reboot)
Reference: https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started-enable-ssh-connections.html
If you need more detail, give me more info on your setup, and I'll try to help.
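If you would rather avoid both S3 and SSH entirely, one workaround (a sketch only; the key path is an assumption, and note that Run Command content can be visible in the SSM console and CloudTrail, and command parameters have size limits) is to base64-encode the small file and write it out via Run Command itself:
import base64
import boto3

ssm = boto3.client('ssm', 'us-west-2')

with open('private.key', 'rb') as f:
    payload = base64.b64encode(f.read()).decode()

ssm.send_command(
    InstanceIds=['i-04843lr540028e96a'],
    DocumentName='AWS-RunShellScript',
    Parameters={'commands': [
        # Decode on the instance and lock the file down.
        'echo %s | base64 -d > /etc/myapp/private.key' % payload,
        'chmod 600 /etc/myapp/private.key',
    ]},
)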

Automating the installation of CloudWatch agent

I just want to know if there are other ways to approach this problem:
I have an AWS multi-account setup. The EC2 instances across all the accounts are going to be monitored, and when alerts are triggered via SNS a mail will be sent. For all EC2 instances with Windows Server 2016 and later, Amazon Linux, and Ubuntu 16.04 and 18.04, the SSM agent comes pre-installed. That way I can push the CloudWatch agent via the Systems Manager Run Command to the EC2 instances in each AWS account.
I was wondering: is there a simpler way to ensure that the CloudWatch agent is installed on every new EC2 instance deployed in an AWS account, without installing the agent manually on the instance or via Run Command?
I was thinking of working with tags, something like "IsMonitored" with the value true or false. For example, every day at 17:00 a Lambda function would go over all the instances in that account, search for IsMonitored = false, get those instance IDs, and with a (boto3?) script push the agent onto those instances. This seemed too complicated, so I wanted to check whether there are simpler solutions that would do the same.
Thanks in advance,
Iman
To install a CloudWatch agent on each instance in a particular region, you can implement this with a shell script.
The approach is:
The one-time manual work is to create a default configuration file in Parameter Store for each type of instance: a. for Windows b. for Linux-based
Then, in the shell script, for a particular region:
Get the full list of EC2 instance IDs
Check which platform each machine is using, Windows or Linux-based
If the platform is Windows, use the Windows configuration file from Parameter Store; otherwise use the Linux configuration file
To get the platform name:
platform=$(aws ec2 describe-instances --instance-ids <instance id> --query 'Reservations[*].Instances[*].[Platform]' --output text)
To install the package:
aws ssm send-command --instance-ids <instance id> --document-name "AWS-ConfigureAWSPackage" --parameters "name=AmazonCloudWatchAgent,action=Install,installationType=Uninstall and reinstall" --comment "Install CloudWatch Agent on EC2 Windows/Linux machine"
To start the CloudWatch agent:
aws ssm send-command --instance-ids $one_instance --document-name "AmazonCloudWatch-ManageAgent" --parameters "mode=ec2,optionalRestart=yes,optionalConfigurationSource=ssm,action=configure,optionalConfigurationLocation=AmazonLinuxCloudWatchAgentConfig" --comment "Configure CloudWatch Agent on EC2 Linux machine"
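A rough boto3 version of the whole loop (the region and Parameter Store config names are assumptions, and a real script should paginate describe_instances and wait for the install command to finish before configuring):
import boto3

region = 'us-west-2'
ec2 = boto3.client('ec2', region_name=region)
ssm = boto3.client('ssm', region_name=region)

for reservation in ec2.describe_instances()['Reservations']:
    for instance in reservation['Instances']:
        instance_id = instance['InstanceId']
        # 'Platform' is only present on Windows instances.
        is_windows = instance.get('Platform') == 'windows'
        config = 'WindowsCloudWatchAgentConfig' if is_windows else 'AmazonLinuxCloudWatchAgentConfig'

        # Install the CloudWatch agent package.
        ssm.send_command(
            InstanceIds=[instance_id],
            DocumentName='AWS-ConfigureAWSPackage',
            Parameters={'name': ['AmazonCloudWatchAgent'], 'action': ['Install']},
        )
        # Configure and start it from the Parameter Store config.
        ssm.send_command(
            InstanceIds=[instance_id],
            DocumentName='AmazonCloudWatch-ManageAgent',
            Parameters={
                'action': ['configure'],
                'mode': ['ec2'],
                'optionalConfigurationSource': ['ssm'],
                'optionalConfigurationLocation': [config],
                'optionalRestart': ['yes'],
            },
        )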
One simpler approach could be using a prebaked AMI. First, spin up an EC2 instance with the normal AMI you use. Next, install the CloudWatch agent and create an image. From then on, you can spin up EC2 instances using the new AMI, which has the CloudWatch agent preinstalled.
If a prebaked AMI doesn't work for you, I recommend using an infrastructure-as-code (IaC) tool like Ansible to automate the installation process.

Copying File From S3 To EC2 by User Data Approach

I have been searching for a solution to this task; all I find are CLI approaches, which I don't want.
I simply want:
I have an S3 bucket, which has one private file; the file can be an image, a zip file, anything.
And I want that when I launch any EC2 instance, it takes that file from the S3 bucket into a directory on the EC2 instance.
And for this, I want to use only the EC2 User Data approach.
The User Data field in Amazon EC2 passes information to the instance that is accessible to applications running on the instance.
Amazon EC2 instances launched with Amazon-provided AMIs (eg Amazon Linux 2) include a program called Cloud-Init that looks at the User Data and, if a script is provided, runs that script the first time that the instance is booted.
Therefore, you can configure a script (passed via User Data) that will run when the instance is first launched. The script will run as the root user. Your script could copy a file from Amazon S3 by using the AWS Command-Line Interface (CLI), like this:
#!/bin/bash
# Copy the file from S3, then hand ownership to the default user
aws s3 cp s3://my-bucket/foo.txt /home/ec2-user/foo.txt
chown ec2-user /home/ec2-user/foo.txt
Please note that you will need to assign an IAM Role to the instance that has permission to access the bucket. The AWS CLI will use these permissions for the file copy.
You mention that you do not wish to use the AWS CLI. You could, instead, write a program that calls the Amazon S3 API using your preferred programming language (eg Python), but using the CLI is much simpler.
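If you do go the programmatic route, a minimal boto3 sketch of the same copy might look like this (the bucket name, key, and paths are just examples):
import shutil
import boto3

# Download the private object using the instance role's credentials.
s3 = boto3.client('s3')
s3.download_file('my-bucket', 'foo.txt', '/home/ec2-user/foo.txt')

# Equivalent of the chown in the shell version above.
shutil.chown('/home/ec2-user/foo.txt', user='ec2-user')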

Terminate AWS EC2 instance when SSM Run Command status changes

I would like to (1) launch an AWS EC2 instance, (2) run a shell script (that sends output to an S3 bucket) and (3) terminate the instance automatically when the script terminates, all remotely without logging into the instance. I have managed to get parts (1) and (2) working using the AWS CLI commands aws ec2 run-instances and aws ssm send-command. I am struggling with part (3) - getting the instance to terminate automatically when the script completes.
I have seen in the AWS docs that you can use CloudWatch to monitor the SSM Run Command status, and I thought that this might be a solution - when the status changes, terminate the instance. Is this a feasible option? If so, how do you implement it using AWS CLI?
Within the SSM script, you can issue a command to the operating system to shut down the computer. If you launched the instance with a Shutdown behavior of Terminate, then this will terminate the instance.
Alternatively, the script can retrieve the Instance ID of the instance it is running on, and issue the aws ec2 terminate-instances command, specifying its own Instance ID.
See: Self-Terminating AWS EC2 Instance?
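As a sketch of the second approach, the final step of the script could look something like this, assuming the instance role is allowed to call ec2:TerminateInstances (the metadata paths and token TTL are the standard IMDSv2 ones; treat the rest as an illustration, not a drop-in):
import urllib.request
import boto3

def imds(path, token):
    # Small helper (hypothetical) for IMDSv2 metadata lookups.
    req = urllib.request.Request('http://169.254.169.254' + path,
                                 headers={'X-aws-ec2-metadata-token': token})
    return urllib.request.urlopen(req).read().decode()

# Get an IMDSv2 session token, then this instance's own ID and region.
token_req = urllib.request.Request(
    'http://169.254.169.254/latest/api/token',
    method='PUT',
    headers={'X-aws-ec2-metadata-token-ttl-seconds': '21600'},
)
token = urllib.request.urlopen(token_req).read().decode()
instance_id = imds('/latest/meta-data/instance-id', token)
region = imds('/latest/meta-data/placement/region', token)

# The script terminates the very instance it is running on.
boto3.client('ec2', region_name=region).terminate_instances(InstanceIds=[instance_id])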

ec2 instance with role AmazonEC2RoleforSSM is unable to do EC2 operations in ansible

I have an instance with the AmazonEC2RoleforSSM role. I want to run an Ansible task on this machine which provisions EC2 instances, without setting AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
This doesn't work as expected; it always requires AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to be set. Is there a way to do this?
Jaks, could you explain a little bit more about what you're trying to do?
Having an instance profile with the AmazonEC2RoleforSSM policy will allow the instance to call the Systems Manager APIs and be treated as a managed instance, allowing you to use features like Run Command, Inventory, Patch Manager and the like. It will not, however, grant the instance permission to call EC2 APIs (e.g. run-instances).
What is the specific operation you're performing that's failing and what error message are you getting?
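If launching instances from that box is the goal, one option (a sketch; the role name is an example, and in practice you would scope the policy down rather than use AmazonEC2FullAccess) is to attach an additional EC2 policy to the instance's role, after which boto3/Ansible on the instance can pick up the instance profile credentials automatically:
import boto3

# Run this from an administrative context, not from the instance itself.
iam = boto3.client('iam')
iam.attach_role_policy(
    RoleName='my-ec2-instance-role',  # the role behind the instance profile
    PolicyArn='arn:aws:iam::aws:policy/AmazonEC2FullAccess',
)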
AWS Systems Manager requires the SSM role to be attached in order for the SSM Agent on the EC2 instance to work. Once the SSM agent is installed on a particular EC2 instance, you can freely execute commands on it from AWS Systems Manager.
I guess that after the installation of the SSM agent you can execute the Ansible script freely (it's not related to the access key issue). Is that OK?
Documents to execute commands with SSM:
Executing Commands Using Systems Manager Run Command
Executing Commands from the Console