Start EC2 Windows instance and log on using boto

I want to start a Windows EC2 instance and log on using my credentials. The following script creates an EC2 instance and waits until it is running.
The problem is that after this I have to manually go to the AWS console, download the remote desktop shortcut, and then log on using my Windows credentials (I am using my own AMI which has my credentials saved). What I want is for boto to start my machine without going to the AWS console. Do you have any idea how to do this?
import boto
import boto.ec2
from settings import AWS_ACCESS_KEY, AWS_SECRET_ACCESS_KEY
from settings import BUCKET_NAME
import time
import os

conn = boto.ec2.connect_to_region("us-west-2",
                                  aws_access_key_id=AWS_ACCESS_KEY,
                                  aws_secret_access_key=AWS_SECRET_ACCESS_KEY)

# Create an instance
reservation = conn.run_instances(
    'ami-c8910***',
    key_name='*****',
    instance_type='t1.micro',
    security_groups=['R***rFarm'])
instance = reservation.instances[0]

# wait until the EC2 instance is initiated
while instance.state != 'running':
    time.sleep(5)
    instance.update()  # updates instance metadata
    print "Instance state: %s" % (instance.state)
print "instance %s done!" % (instance.id)

The remote desktop shortcut is a simple text file with a ".rdp" file extension. So you can create it yourself:
if instance.platform == u'windows':
    fobj = open("%s.rdp" % (instance.ip_address), "w")
    fobj.write("auto connect:i:1\n")
    fobj.write("full address:s:%s\n" % (instance.ip_address))
    fobj.write("username:s:Administrator\n")
    fobj.close()
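If the script itself runs on a Windows machine, you can then open the generated .rdp file straight from Python, which launches the Remote Desktop client without going through the AWS console. A minimal sketch, assuming the file was just written by the code above (os.startfile is Windows-only; mstsc is the standard Remote Desktop client):
import os
import subprocess

rdp_file = "%s.rdp" % (instance.ip_address)
os.startfile(rdp_file)                 # opens the .rdp file with its default handler
# or invoke the Remote Desktop client explicitly:
# subprocess.call(["mstsc", rdp_file])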

Airflow SSHOperator: How To Securely Access Pem File Across Tasks?

We are running Airflow via AWS's managed MWAA Offering. As part of their offering they include a tutorial on securely using the SSH Operator in conjunction with AWS Secrets Manager. The gist of how their solution works is described below:
1. Run a Task that fetches the pem file from a Secrets Manager location and stores it on the filesystem at /tmp/mypem.pem.
2. In the SSH Connection, include the extra information that specifies the file location:
{"key_file":"/tmp/mypem.pem"}
3. Use the SSH Connection in the SSHOperator.
In short the workflow is supposed to be:
Task1 gets the pem -> Task2 uses the pem via the SSHOperator
All of this is great in theory, but it doesn't actually work, because Task1 may run on a different node from Task2, which means Task2 can't access the /tmp/mypem.pem file that Task1 wrote. According to AWS Support, AWS is aware of this limitation, but now we need to find another way to do this.
Question
How can we securely store and access a pem file that can then be used by Tasks running on different nodes via the SSHOperator?
I ran into the same problem. I extended the SSHOperator to do both steps in one call.
In AWS Secrets Manager, two keys are added for airflow to retrieve on execution.
{variables_prefix}/airflow-user-ssh-key : the value of the private key
{connections_prefix}/ssh_airflow_user : ssh://replace.user@replace.remote.host?key_file=%2Ftmp%2Fairflow-user-ssh-key
from typing import Optional, Sequence
from os.path import basename, splitext
from airflow.models import Variable
from airflow.providers.ssh.operators.ssh import SSHOperator
from airflow.providers.ssh.hooks.ssh import SSHHook


class SSHOperator(SSHOperator):
    """
    SSHOperator to execute commands on given remote host using the ssh_hook.

    :param ssh_conn_id: :ref:`ssh connection id<howto/connection:ssh>`
        from airflow Connections.
    :param ssh_key_var: name of Variable holding private key.
        Creates "/tmp/{variable_name}.pem" to use in SSH connection.
        May also be inferred from "key_file" in "extras" in "ssh_conn_id".
    :param remote_host: remote host to connect (templated)
        Nullable. If provided, it will replace the `remote_host` which was
        defined in `ssh_hook` or predefined in the connection of `ssh_conn_id`.
    :param command: command to execute on remote host. (templated)
    :param timeout: (deprecated) timeout (in seconds) for executing the command. The default is 10 seconds.
        Use conn_timeout and cmd_timeout parameters instead.
    :param environment: a dict of shell environment variables. Note that the
        server will reject them silently if `AcceptEnv` is not set in SSH config.
    :param get_pty: request a pseudo-terminal from the server. Set to ``True``
        to have the remote process killed upon task timeout.
        The default is ``False`` but note that `get_pty` is forced to ``True``
        when the `command` starts with ``sudo``.
    """

    template_fields: Sequence[str] = ("command", "remote_host")
    template_ext: Sequence[str] = (".sh",)
    template_fields_renderers = {"command": "bash"}

    def __init__(
        self,
        *,
        ssh_conn_id: Optional[str] = None,
        ssh_key_var: Optional[str] = None,
        remote_host: Optional[str] = None,
        command: Optional[str] = None,
        timeout: Optional[int] = None,
        environment: Optional[dict] = None,
        get_pty: bool = False,
        **kwargs,
    ) -> None:
        super().__init__(
            ssh_conn_id=ssh_conn_id,
            remote_host=remote_host,
            command=command,
            timeout=timeout,
            environment=environment,
            get_pty=get_pty,
            **kwargs,
        )
        if ssh_key_var is None:
            key_file = SSHHook(ssh_conn_id=self.ssh_conn_id).key_file
            key_filename = basename(key_file)
            key_filename_no_extension = splitext(key_filename)[0]
            self.ssh_key_var = key_filename_no_extension
        else:
            self.ssh_key_var = ssh_key_var

    def import_ssh_key(self):
        with open(f"/tmp/{self.ssh_key_var}", "w") as file:
            file.write(Variable.get(self.ssh_key_var))

    def execute(self, context):
        self.import_ssh_key()
        super().execute(context)
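For context, a short sketch of how this extended operator might be wired into a DAG; the DAG id, module path, task id, and the connection/Variable names are assumptions based on the Secrets Manager keys described above:
from datetime import datetime
from airflow import DAG
# assuming the extended operator above lives in a local module, e.g. my_ssh_operator.py
from my_ssh_operator import SSHOperator

with DAG("ssh_example", start_date=datetime(2023, 1, 1), schedule_interval=None) as dag:
    run_remote = SSHOperator(
        task_id="run_remote_command",
        ssh_conn_id="ssh_airflow_user",
        ssh_key_var="airflow-user-ssh-key",
        command="whoami",
    )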
The answer by holly is good. I am sharing a different way I solved this problem. I used the strategy of converting the SSH Connection into a URI and then putting that into Secrets Manager under the expected connections path, and everything worked great via the SSHOperator. Below are the general steps I took.
Generate an encoded URI
import json
from pathlib import Path
from airflow.models.connection import Connection

pem = Path("/my/pem/file.pem").read_text()

myconn = Connection(
    conn_id="connX",
    conn_type="ssh",
    host="10.x.y.z",
    login="mylogin",
    extra=json.dumps(dict(private_key=pem)),
)
print(myconn.get_uri())
Put that URI under the environment's configured path in Secrets Manager. The important note here is to enter the value in the Plaintext field without including a key. For example, name the secret:
airflow/connections/connX and under Plaintext include only the URI value.
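If you prefer to create the secret programmatically instead of through the console, boto3's Secrets Manager client can do the same thing; a minimal sketch, assuming the URI printed above and the default airflow/connections prefix:
import boto3

secrets_client = boto3.client("secretsmanager")

# The secret name must match the connections_prefix configured for the environment;
# the SecretString is the plain URI value printed by myconn.get_uri() above.
secrets_client.create_secret(
    Name="airflow/connections/connX",
    SecretString="<the URI value from get_uri()>",
)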
Now in the SSHOperator you can reference this connection Id like any other.
remote_task = SSHOperator(
    task_id="ssh_and_execute_command",
    ssh_conn_id="connX",
    command="whoami",
)

Save file into EC2 directly through Lambda

I've made a Lambda function that stores a binary file into S3 and it works fine.
Now I would instead like to save this file directly onto my EC2 instance's storage volume.
I searched a lot but I couldn't work out whether it's possible. Do you know?
I've already made an SSH connection (inside the Lambda) to run SSH commands, but I don't know how to use it in my case, or whether it is the right way to save my data. Do you have any ideas?
I know it is possible to connect S3 to EC2, but first I would like to understand the option above.
Thanks
I made a solution (Python):
Using the Boto3 and Paramiko packages, I build an SSH client to EC2 and then move my file to S3 with the AWS CLI.
In case it is useful for anyone, part of the code is below:
import json
import boto3
import paramiko

def lambda_handler(event, context):
    # My parameters
    myBucket = "lorem"
    myPemKeyFile = "lorem.pem"
    myEc2Username = "lorem"
    ec2_client = boto3.client('ec2')
    s3_client = boto3.client("s3")
    OutFileName = "lorem.txt"

    # PREPARING FOR SSH CLIENT
    try:
        # GETTING INSTANCE INFORMATION
        describeInstance = ec2_client.describe_instances()
        hostPublicIP = []
        # fetching public IP address of the running instances
        for i in describeInstance['Reservations']:
            for instance in i['Instances']:
                if instance["State"]["Name"] == "running":
                    hostPublicIP.append(instance['PublicIpAddress'])
        #print(hostPublicIP)

        # DOWNLOADING PEM FILE FROM S3
        s3_client.download_file(myBucket, myPemKeyFile, '/tmp/file.pem')
        # reading pem file and creating key object
        key = paramiko.RSAKey.from_private_key_file("/tmp/file.pem")

        # CREATING paramiko.SSHClient
        ssh_client = paramiko.SSHClient()
        # setting policy to connect to unknown host
        ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        host = hostPublicIP[0]
        #print("Connecting to : " + host)
        # connecting to server
        ssh_client.connect(hostname=host, username=myEc2Username, pkey=key)
        #print("Connected to :" + host)
    except Exception:
        raise Exception('Oops, there was a crash preparing the SSH client! 500')

    # MOVING FILE INTO S3
    commands = [
        "aws s3 mv ~/directoryFrom/" + OutFileName + " s3://" + myBucket + "/" + OutFileName
    ]
    try:
        for command in commands:
            stdin, stdout, stderr = ssh_client.exec_command(command)
            SSHout = stdout.read()
    except Exception:
        raise Exception('Oops, something happened to the SSH client. Moving the file to S3 didn\'t run. 500')
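For the original direction (pushing a file from the Lambda onto the EC2 instance rather than to S3), the same connected Paramiko client can also be used over SFTP. A minimal sketch, assuming the ssh_client from the handler above is already connected and that the paths are placeholders:
# hypothetical paths
local_path = "/tmp/lorem.txt"             # file produced inside the Lambda
remote_path = "/home/lorem/lorem.txt"     # destination on the EC2 instance

sftp = ssh_client.open_sftp()
sftp.put(local_path, remote_path)         # copy the file onto the instance over SFTP
sftp.close()
ssh_client.close()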

How to check the cluster status of Redshift using boto3 module in Python?

I need to check whether a Redshift cluster is paused or available using Python. I am aware of the boto3 module, which provides a describe_clusters() method, but I am not sure how to proceed from there to write a Python script.
You could try
import boto3
import pandas as pd

DWH_CLUSTER_IDENTIFIER = 'Your clusterId'
KEY = 'Your AWS Key'
SECRET = 'Your AWS Secret'

# Get client to connect to Redshift
redshift = boto3.client('redshift',
                        region_name="us-east-1",
                        aws_access_key_id=KEY,
                        aws_secret_access_key=SECRET
                        )

# Get cluster list as it is in the AWS console
myClusters = redshift.describe_clusters()['Clusters']

if len(myClusters) > 0:
    df = pd.DataFrame(myClusters)
    myCluster = df[df.ClusterIdentifier == DWH_CLUSTER_IDENTIFIER.lower()]
    print("My cluster status is: {}".format(myCluster['ClusterAvailabilityStatus'].item()))
else:
    print('No clusters available')
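If you only need the status of one known cluster, describe_clusters also accepts a ClusterIdentifier argument, so you can skip pandas entirely; a minimal sketch under the same credential assumptions as above:
import boto3

redshift = boto3.client('redshift',
                        region_name="us-east-1",
                        aws_access_key_id=KEY,
                        aws_secret_access_key=SECRET)

# Filtering by ClusterIdentifier returns only the one cluster
resp = redshift.describe_clusters(ClusterIdentifier=DWH_CLUSTER_IDENTIFIER)
status = resp['Clusters'][0]['ClusterAvailabilityStatus']  # e.g. "Available" or "Paused"
print(status)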

Trying to create a python function that creates an ec2 with aws marketplace software

I am trying to make a function that launches an EC2 instance containing AWS Marketplace software. Which calls from the boto3 documentation would you recommend? I am having trouble finding ones that:
launch a new EC2 instance
use the AWS Marketplace software product code to launch with the AMI
Thank you
You will most likely want to make use of the DescribeImages and RunInstances actions, which are available methods in the boto3 API (describe_images and run_instances).
The following snippet is a brief example that uses a product code from the AWS Marketplace to launch a new EC2 instance using the image ID:
import boto3

def main():
    client = boto3.client("ec2")

    # Ubuntu 18.04 LTS - Bionic
    product_id = "3b73ef49-208f-47e1-8a6e-4ae768d8a333"

    response = client.describe_images(
        Filters=[{"Name": "name", "Values": [f"*{product_id}*"]}]
    )
    images = response["Images"]
    image = images[0]
    image_id = image["ImageId"]  # ami-02ad37ec9b98d835f

    response = client.run_instances(
        ImageId=image_id,
        InstanceType="t2.micro",
        MaxCount=1,
        MinCount=1,
        SubnetId="<your_subnet_id>",
    )
    print(response)

if __name__ == "__main__":
    main()
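If you have the Marketplace product code itself rather than part of an image name, describe_images can also filter on it directly; a minimal sketch, with a placeholder product code:
import boto3

client = boto3.client("ec2")

# "product-code" is a supported DescribeImages filter; the value below is a placeholder.
response = client.describe_images(
    Owners=["aws-marketplace"],
    Filters=[{"Name": "product-code", "Values": ["<your_product_code>"]}],
)
for image in response["Images"]:
    print(image["ImageId"], image.get("Name"))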

How to find vm in particular folder using pyvmomi

Hi, I am new to Python and I am exploring pyvmomi. I want to fetch VM info. I have one data center, i.e. "DataCenter1".
In that data center there are two folders, LinuxServer and WindowsServer, and these folders contain VMs. I want to fetch each VM name together with its respective folder name:
DataCenter1
|
|----LinuxServer
| |---RHEL-VM
| |---Ubuntu-VM
|
|----WindowsServer
| |---win2k12r2-VM
| |---win2k8r2-VM
My code:
from pyVim.connect import SmartConnect, Disconnect
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
context.verify_mode = ssl.CERT_NONE
connect = SmartConnect(host="172.0.0.0", user="root", pwd="****", port=int("443"), sslContext=context)
datacenter = connect.content.rootFolder.childEntity[0]
print(datacenter)
vms = datacenter.vmFolder.childEntity
for i in vms:
    print(i.name)
    # Here I want to fetch the vm name and its respective folder name
Disconnect(connect)
Here I am able to fetch all VM names, but I want to fetch the folder name of each respective VM.
Is there any method for this?
Can you please guide me?
Here you will get the parent name of that VM, i.e. your folder name, if it exists:
from pyVim.connect import SmartConnect, Disconnect
import ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
context.verify_mode = ssl.CERT_NONE
connect = SmartConnect(host="172.0.0.0", user="root", pwd="****", port=int("443"), sslContext=context)
datacenter = connect.content.rootFolder.childEntity[0]
print(datacenter)
vms = datacenter.vmFolder.childEntity
for vm in vms:
    print(vm.parent.name)
Disconnect(connect)
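Note that vmFolder.childEntity can contain both folders and VMs, so to pair every VM with the folder it actually lives in (e.g. LinuxServer or WindowsServer) you may need to walk the folder tree recursively. A minimal sketch, assuming the connect object from above and pyVmomi's vim types:
from pyVmomi import vim

def print_vms_with_folder(entity, folder_name):
    # Recurse into folders and print each VM together with its containing folder.
    if isinstance(entity, vim.Folder):
        for child in entity.childEntity:
            print_vms_with_folder(child, entity.name)
    elif isinstance(entity, vim.VirtualMachine):
        print(folder_name, entity.name)

datacenter = connect.content.rootFolder.childEntity[0]
print_vms_with_folder(datacenter.vmFolder, datacenter.name)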
I use Python 3.6; a full example is below. It logs in to vSphere and prints every virtual machine name.
#!/usr/bin/env python3.6
# encoding: utf-8
from pyVim import connect
import ssl

def login():
    ssl_context = ssl.SSLContext(ssl.PROTOCOL_SSLv23)
    ssl_context.verify_mode = ssl.CERT_NONE
    si = connect.SmartConnect(host='192.168.0.1', user='root', pwd='password',
                              sslContext=ssl_context)
    print(si)
    print('\nHello World!\n')
    print('If you got here, you authenticated into vCenter.')
    data_center = si.content.rootFolder.childEntity[0]
    vms = data_center.vmFolder.childEntity
    for vm in vms:
        print(vm.name)

if __name__ == '__main__':
    login()
result:
'vim.ServiceInstance:ServiceInstance'
Hello World!
If you got here, you authenticated into vCenter.
sclautoesxd12v03
sclautoesxd12v04
sclautoesxd12v07
sclautoesxd12v09
sclautoesxd12v11
sclautoesxd12v12
sclautoesxd12v13
sclautoesxd12v16
sclautoesxd12v17
sclautoesxd12v01
sclautoesxd12v02
sclautoesxd12v05
sclautoesxd12v06
sclautoesxd12v08
sclautoesxd12v10
sclautoesxd12v14
sclautoesxd12v15