Launch and install applications on an EC2 instance using the AWS SDK

I don't know if this is possible in the first place.
The requirement is to launch an EC2 instance using the AWS SDK (I know this is possible) based on some application logic.
Then I want to install some application on the newly launched instance, let's say Docker.
Is this possible using the SDK? Or is my idea itself wrong and there is a better solution for this scenario?
Can I run a command on a running instance using the SDK?

Yes, you can install anything on EC2 when it is launched by providing a script/commands in the user data section. This is also possible from the AWS SDK: https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_UserData.html
You can pass a command like yum install docker in the user data:
UserData='yum install docker'
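For example, a minimal boto3 sketch (the AMI ID, instance type and key pair name below are placeholders; on Amazon Linux the user data is typically a full shell script with a shebang so cloud-init executes it on first boot):

import boto3

ec2 = boto3.client('ec2')

# User data script executed on first boot (placeholder content).
user_data = """#!/bin/bash
yum install -y docker
service docker start
"""

response = ec2.run_instances(
    ImageId='ami-xxxxxxxx',       # placeholder AMI ID
    InstanceType='t2.micro',      # placeholder instance type
    KeyName='my-key-pair',        # placeholder key pair name
    MinCount=1,
    MaxCount=1,
    UserData=user_data,           # boto3 base64-encodes this for you
)
print(response['Instances'][0]['InstanceId'])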

Installing applications by running commands on a running instance is possible using boto3 through SSM (the instance needs the SSM agent and an instance profile that allows Systems Manager).

import boto3

ssm_client = boto3.client('ssm')
response = ssm_client.send_command(
    InstanceIds=['i-xxxxxxxx'],
    DocumentName="AWS-RunShellScript",
    Parameters={'commands': ['echo "abc" > 1.txt']},
)
command_id = response['Command']['CommandId']
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-xxxxxxxx',
)
print(output)
The SSM client runs commands on one or more managed instances.
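Note that calling get_command_invocation immediately after send_command can fail with an InvocationDoesNotExist error because the invocation isn't registered yet. A small sketch that waits for the command to finish first, using boto3's built-in SSM command_executed waiter:

# Wait for the command to complete before fetching its output.
waiter = ssm_client.get_waiter('command_executed')
waiter.wait(
    CommandId=command_id,
    InstanceId='i-xxxxxxxx',
)
output = ssm_client.get_command_invocation(
    CommandId=command_id,
    InstanceId='i-xxxxxxxx',
)
print(output['StandardOutputContent'])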
You can also wrap this in a function:

def execute_commands_on_linux_instances(ssm_client, commands, instance_ids):
    response = ssm_client.send_command(
        DocumentName="AWS-RunShellScript",  # one of AWS' preconfigured documents
        Parameters={'commands': commands},
        InstanceIds=instance_ids,
    )
    return response

ssm_client = boto3.client('ssm')
commands = ['ifconfig']
instance_ids = ['i-xxxxxxxx']
execute_commands_on_linux_instances(ssm_client, commands, instance_ids)

Related

How to run AWS CLI command from .NET Core

Is it possible to run an AWS CLI command from a .NET Core app? I need to automatically sync the contents of a folder with S3. I use the AWS SDK for other setup, but the AWS SDK does not contain an s3 sync method.
I tried creating a .NET Core console app and a .bat file with (for the test, only checking the version of aws):
aws --version
PAUSE
And started it from .NET:
string pathToRun = @"C:\Users\Adam\source\repos\StaticWeb\StaticWeb\run.bat";
Process p = new Process();
p.StartInfo.FileName = pathToRun;
// Run the process and wait for it to complete
p.Start();
p.WaitForExit();
Error
aws --version
'aws' is not recognized as an internal or external command,
operable program or batch file.
If I run run.bat manually, it works properly.
I have installed both the 32-bit and 64-bit AWS CLI on my computer.
I've found the solution: I replaced the aws keyword with the full path to the AWS CLI executable, so I don't need the .bat file anymore.
string command = $"/C start \"\" \"C:/Program Files/Amazon/AWSCLIV2/aws.exe\" --version";
ProcessStartInfo info = new ProcessStartInfo();
info.FileName = "cmd.exe";
info.Arguments = command;
info.WindowStyle = System.Diagnostics.ProcessWindowStyle.Hidden;
var process = new Process();
process.StartInfo = info;
process.Start();

Can't run remote PowerShell commands in custom CloudFormation AMI (WinServer 2012)

The scenario I'm going to describe works OK on a stock Windows Server 2012 AMI from Amazon; I'm facing issues only with a custom AMI.
I created a custom AMI for Windows Server 2012 by creating an image from an EC2 machine.
Just before creating the custom AMI, I used the Ec2ConfigServiceSetting.exe to make sure:
The instance receives a new machine name based on its IP.
The password of the user is changed on boot.
The instance is provisioned using the script I have in place in UserData.
I also shut down the instance using Sysprep from the Ec2ConfigServiceSetting before creating the image for the custom AMI.
However, when I run a remote PowerShell command (from C# code, if it matters), it doesn't work. From C#-land, the command gets executed OK, but nothing happens on the machine.
Let's say my remote PS command launches a program in the remote machine (agent.exe). My script looks a little bit like:
Set-Location C:\path\in\disk
$env:Path = "C:\some\thing;" + $env:Path
C:\path\to\agent.exe --daemon
Once I log into the Ec2 instance, agent.exe --daemon is NOT running. However, if I first log into the instance, then run the remote PowerShell command, agent.exe --daemon DOES run.
This works perfectly with a stock AMI from Amazon, so I can only assume there's some configuration I'm missing for this to work (and, why does it work if I first log in using RDesktop?)
In the past we found some issues regarding SSL initialization when no user profile is loaded, so in our provisioning script (UserData) we do some things someone might consider shenanigans:
net user Administrator hardcoded-password
net user ec2-user hardcoded-password /add
$pwd = (ConvertTo-SecureString 'hardcoded-password' -AsPlainText -Force)
$cred = New-Object System.Management.Automation.PSCredential('Administrator', $pwd)
Start-Process cmd -LoadUserProfile -Credential $cred

Automatically "stop" Sagemaker notebook instance after inactivity?

I have a Sagemaker Jupyter notebook instance that I keep leaving online overnight by mistake, unnecessarily costing money...
Is there any way to automatically stop the Sagemaker notebook instance when there is no activity for say, 1 hour? Or would I have to make a custom script?
You can use Lifecycle configurations to set up an automatic job that will stop your instance after inactivity.
There's a GitHub repository which has samples that you can use. In the repository there's an auto-stop-idle script which will shut down your instance once it's idle for more than 1 hour.
What you need to do is:
1. create a Lifecycle configuration using the script, and
2. associate the configuration with the instance. You can do this when you edit or create a notebook instance.
If you think 1 hour is too long, you can tweak the script. This line has the value.
You could also use CloudWatch + Lambda to monitor Sagemaker and stop when your utilization hits a minimum. Here is a list of what's available in CW for SM: https://docs.aws.amazon.com/sagemaker/latest/dg/monitoring-cloudwatch.html.
For example, you could set a CW alarm to trigger when CPU utilization falls below ~5% for 30 minutes and have that trigger a Lambda which would shut down the notebook.
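A minimal sketch of such a Lambda handler (the notebook instance name is a placeholder, and this assumes the alarm is wired up to invoke the function, e.g. through an SNS topic or an EventBridge rule):

import boto3

sagemaker = boto3.client('sagemaker')

NOTEBOOK_INSTANCE_NAME = 'my-notebook'  # placeholder notebook instance name

def lambda_handler(event, context):
    # Triggered by the CloudWatch alarm: stop the notebook instance if it is still running.
    status = sagemaker.describe_notebook_instance(
        NotebookInstanceName=NOTEBOOK_INSTANCE_NAME
    )['NotebookInstanceStatus']
    if status == 'InService':
        sagemaker.stop_notebook_instance(NotebookInstanceName=NOTEBOOK_INSTANCE_NAME)
    return {'status': status}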
After we burned quite a lot of money by forgetting to turn off these machines, I decided to create a script. It's based on AWS's script, but provides an explanation of why the machine was or was not killed. It's pretty lightweight because it does not use any additional infrastructure like Lambda.
Here is the script and the guide on installing it! It's just a simple lifecycle configuration!
Unfortunately, automatically stopping the notebook instance when there is no activity is not possible in SageMaker today. To avoid leaving them on overnight, you can write a cron job that checks whether any notebook instances are running at night and stops them if needed.
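A rough sketch of such a nightly check with boto3 (run it from cron or any scheduler you like; it stops every notebook instance that is still in service):

import boto3

sagemaker = boto3.client('sagemaker')

# Find every notebook instance that is still running and stop it.
paginator = sagemaker.get_paginator('list_notebook_instances')
for page in paginator.paginate(StatusEquals='InService'):
    for notebook in page['NotebookInstances']:
        name = notebook['NotebookInstanceName']
        print('Stopping ' + name)
        sagemaker.stop_notebook_instance(NotebookInstanceName=name)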
SageMaker Studio Notebook Kernels can be terminated by attaching the following lifecycle configuration script to the domain.
#!/bin/bash
# This script installs the idle notebook auto-checker server extension to SageMaker Studio
# The original extension has a lab extension part where users can set the idle timeout via a Jupyter Lab widget.
# In this version the script installs the server side of the extension only. The idle timeout
# can be set via a command-line script which will also be created by this script and placed into the
# user's home folder
#
# Installing the server side extension does not require Internet connection (as all the dependencies are stored in the
# install tarball) and can be done via VPCOnly mode.
set -eux
# timeout in minutes
export TIMEOUT_IN_MINS=120
# Should already be running in user home directory, but just to check:
cd /home/sagemaker-user
# By working in a directory starting with ".", we won't clutter up users' Jupyter file tree views
mkdir -p .auto-shutdown
# Create the command-line script for setting the idle timeout
cat > .auto-shutdown/set-time-interval.sh << EOF
#!/opt/conda/bin/python
import json
import requests
TIMEOUT=${TIMEOUT_IN_MINS}
session = requests.Session()
# Getting the xsrf token first from Jupyter Server
response = session.get("http://localhost:8888/jupyter/default/tree")
# calls the idle_checker extension's interface to set the timeout value
response = session.post("http://localhost:8888/jupyter/default/sagemaker-studio-autoshutdown/idle_checker",
                        json={"idle_time": TIMEOUT, "keep_terminals": False},
                        params={"_xsrf": response.headers['Set-Cookie'].split(";")[0].split("=")[1]})
if response.status_code == 200:
    print("Succeeded, idle timeout set to {} minutes".format(TIMEOUT))
else:
    print("Error!")
    print(response.status_code)
EOF
chmod +x .auto-shutdown/set-time-interval.sh
# "wget" is not part of the base Jupyter Server image, you need to install it first if needed to download the tarball
sudo yum install -y wget
# You can download the tarball from GitHub or alternatively, if you're using VPCOnly mode, you can host on S3
wget -O .auto-shutdown/extension.tar.gz https://github.com/aws-samples/sagemaker-studio-auto-shutdown-extension/raw/main/sagemaker_studio_autoshutdown-0.1.5.tar.gz
# Or instead, could serve the tarball from an S3 bucket in which case "wget" would not be needed:
# aws s3 --endpoint-url [S3 Interface Endpoint] cp s3://[tarball location] .auto-shutdown/extension.tar.gz
# Installs the extension
cd .auto-shutdown
tar xzf extension.tar.gz
cd sagemaker_studio_autoshutdown-0.1.5
# Activate studio environment just for installing extension
export AWS_SAGEMAKER_JUPYTERSERVER_IMAGE="${AWS_SAGEMAKER_JUPYTERSERVER_IMAGE:-'jupyter-server'}"
if [ "$AWS_SAGEMAKER_JUPYTERSERVER_IMAGE" = "jupyter-server-3" ] ; then
eval "$(conda shell.bash hook)"
conda activate studio
fi;
pip install --no-dependencies --no-build-isolation -e .
jupyter serverextension enable --py sagemaker_studio_autoshutdown
if [ "$AWS_SAGEMAKER_JUPYTERSERVER_IMAGE" = "jupyter-server-3" ] ; then
conda deactivate
fi;
# Restarts the jupyter server
nohup supervisorctl -c /etc/supervisor/conf.d/supervisord.conf restart jupyterlabserver
# Waiting for 30 seconds to make sure the Jupyter Server is up and running
sleep 30
# Calling the script to set the idle timeout and activate the extension
/home/sagemaker-user/.auto-shutdown/set-time-interval.sh
Resources:
https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html
https://github.com/aws-samples/sagemaker-studio-lifecycle-config-examples/blob/main/scripts/install-autoshutdown-server-extension/on-jupyter-server-start.sh

AWS Sagemaker - Install External Library and Make it Persist

I have a SageMaker instance up and running, and I have a few libraries that I frequently use with it, but each time I restart the instance they get wiped and I have to reinstall them. Is it possible to install my libraries into one of the Anaconda environments and have the change persist?
The supported way to do this for Sagemaker notebook instances is with Lifecycle Configurations.
You can create an onStart lifecycle hook that can install the required packages into the respective Conda environments each time your notebook instance starts.
Please see the following blog post for more details
https://aws.amazon.com/blogs/machine-learning/customize-your-amazon-sagemaker-notebook-instances-with-lifecycle-configurations-and-the-option-to-disable-internet-access/
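As a rough sketch, the on-start script can activate the target conda environment and pip install the packages, and the lifecycle configuration can be created through boto3; the configuration name, environment name and package names below are placeholders:

import base64
import boto3

sagemaker = boto3.client('sagemaker')

# On-start script: runs every time the notebook instance starts.
# The environment name and packages are placeholders -- adjust to your setup.
on_start_script = """#!/bin/bash
set -e
source /home/ec2-user/anaconda3/bin/activate python3
pip install some-package another-package
source /home/ec2-user/anaconda3/bin/deactivate
"""

sagemaker.create_notebook_instance_lifecycle_config(
    NotebookInstanceLifecycleConfigName='install-my-packages',  # placeholder name
    OnStart=[{'Content': base64.b64encode(on_start_script.encode()).decode()}],
)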
When creating your model, you can specify the requirements.txt as an environment variable.
For example:
env = {
    'SAGEMAKER_REQUIREMENTS': 'requirements.txt',  # path relative to `source_dir` below
}

sagemaker_model = TensorFlowModel(
    model_data='s3://mybucket/modelTarFile',
    role=role,
    entry_point='entry.py',
    code_location='s3://mybucket/runtime-code/',
    source_dir='src',
    env=env,
    name='model_name',
    sagemaker_session=sagemaker_session,
)
This ensures that the requirements file is installed after the Docker container is created, before any code runs on it.

Running Python script on AWS EC2

Apologies if this is a repeat, but I couldn't find anything worthwhile to accomplish my task.
I have an instance and I have figured out how to start and stop it using boto3, and that works, but the real problem is running the script once the instance is up. I would like to wait for the script to finish and then stop the instance.
python /home/ubuntu/MyProject/TechInd/EuropeRun.py &
python /home/ubuntu/FTDataCrawlerEU/EuropeRun.py &
Reading quite a few posts points in the direction of Lambda and AWS Elastic Beanstalk, but those don't appear simple.
Any suggestion is greatly appreciated.
Regards
DC
You can use the following code.
import boto3
import botocore
import os
from termcolor import colored
import paramiko


def stop_instance(instance_id, region_name):
    client = boto3.client('ec2', region_name=region_name)
    while True:
        try:
            client.stop_instances(
                InstanceIds=[
                    instance_id,
                ],
                Force=False
            )
        except Exception as e:
            print(e)
        else:
            break
    # Waiter to wait till the instance is stopped
    waiter = client.get_waiter('instance_stopped')
    try:
        waiter.wait(
            InstanceIds=[
                instance_id,
            ]
        )
    except Exception as e:
        print(e)


def ssh_connect(public_ip, cmd):
    # Join the paths using directory name and file name, to avoid OS conflicts
    key_path = os.path.join('path_to_aws_pem', 'file_name.pem')
    key = paramiko.RSAKey.from_private_key_file(key_path)
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    # Connect/ssh to an instance
    while True:
        try:
            client.connect(hostname=public_ip, username="ubuntu", pkey=key)
            # Execute a command after connecting/ssh to an instance
            stdin, stdout, stderr = client.exec_command(cmd)
            print(stdout.read())
            # Close the client connection once the job is done
            client.close()
            break
        except Exception as e:
            print(e)


# Main/other module where you're doing other jobs:
# Get the public IP address of the EC2 instance; I assume you already have a handle to the instance.
# You can use any alternate method to fetch/get the public IP of your EC2 instance.
public_ip = ec2_instance.public_ip_address
# Get the instance ID of the EC2 instance
instance_id = ec2_instance.instance_id
# Command to run/execute the Python scripts
cmd = "nohup python /home/ubuntu/MyProject/TechInd/EuropeRun.py & python /home/ubuntu/FTDataCrawlerEU/EuropeRun.py &"
ssh_connect(public_ip, cmd)
print(colored('Script execution finished !!!', 'green'))
# Shut down/stop the instance
stop_instance(instance_id, region_name)
You can execute your shutdown command through python code once your script is done.
An example using ls:
from subprocess import call
call(["ls", "-l"])
But for something that simple, Lambda is much easier and more resource-efficient. You only need to upload your script to S3 and then execute the Lambda function through boto3.
Actually, you can just copy-paste your script code into the Lambda console if you don't have any dependencies.
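For invoking the function from your application, a minimal sketch (the function name is a placeholder):

import json
import boto3

lambda_client = boto3.client('lambda')

response = lambda_client.invoke(
    FunctionName='europe-run',      # placeholder function name
    InvocationType='Event',         # asynchronous; use 'RequestResponse' to wait for the result
    Payload=json.dumps({}),
)
print(response['StatusCode'])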
Some options for running the script automatically at system startup:
Call the script via the EC2 User-Data
Configure the AMI to start the script on boot via an init.d script, or an #reboot cron job.
To shut down the instance after the script is complete, add some code at the end of the script to either initiate an OS shutdown, or call the AWS API (via boto3) to stop the instance.
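For instance, the tail end of the script could look something like this sketch (it asks the instance metadata endpoint for its own instance ID; with IMDSv2 enforced you would need to fetch a session token first, and the region is a placeholder):

import boto3
import requests

# Ask the instance metadata service for this instance's ID (IMDSv1 style).
instance_id = requests.get(
    'http://169.254.169.254/latest/meta-data/instance-id', timeout=2
).text

ec2 = boto3.client('ec2', region_name='eu-west-1')  # placeholder region
ec2.stop_instances(InstanceIds=[instance_id])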