Copy AMI to another region using boto3

I'm trying to automate "Copy AMI" functionality I have on my AWS EC2 console, can anyone point me to some Python code that does this through boto3?

From EC2 — Boto 3 documentation:
response = client.copy_image(
    ClientToken='string',
    Description='string',
    Encrypted=True|False,
    KmsKeyId='string',
    Name='string',
    SourceImageId='string',
    SourceRegion='string',
    DryRun=True|False
)
Make sure you send the request to the destination region, passing in a reference to the SourceRegion.

To be more precise: let's say the AMI you want to copy is in us-east-1 (the source region) and your requirement is to copy it into us-west-2 (the destination region).
Create the boto3 EC2 client for the us-west-2 region and then pass us-east-1 as the SourceRegion:
import boto3

session1 = boto3.client('ec2', region_name='us-west-2')
response = session1.copy_image(
    Name='DevEnv_Linux',
    Description='Copied this AMI from region us-east-1',
    SourceImageId='ami-02a6ufwod1f27e11',
    SourceRegion='us-east-1'
)
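The same call can also produce an encrypted copy; the Encrypted and KmsKeyId parameters shown in the skeleton above control this. A minimal sketch, assuming a hypothetical KMS key alias that exists in the destination region:
response = session1.copy_image(
    Name='DevEnv_Linux_Encrypted',
    SourceImageId='ami-02a6ufwod1f27e11',
    SourceRegion='us-east-1',
    Encrypted=True,
    KmsKeyId='alias/my-ami-key'  # hypothetical key alias in us-west-2
)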

I use high-level resources like EC2.ServiceResource most of the time, so the following is the code I use to combine the EC2 resource with the low-level client:
import boto3

source_image_id = '....'
profile = '...'

source_region = 'us-west-1'
source_session = boto3.Session(profile_name=profile, region_name=source_region)
ec2 = source_session.resource('ec2')
ami = ec2.Image(source_image_id)

target_region = 'us-east-1'
target_session = boto3.Session(profile_name=profile, region_name=target_region)
target_ec2 = target_session.resource('ec2')
target_client = target_session.client('ec2')
response = target_client.copy_image(
    Name=ami.name,
    Description=ami.description,
    SourceImageId=ami.id,
    SourceRegion=source_region
)
target_ami = target_ec2.Image(response['ImageId'])
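If you need to block until the copied AMI is ready before tagging it or launching from it, the EC2 client also exposes an image_available waiter. A minimal sketch, reusing the target_client, response, and target_region variables from above:
waiter = target_client.get_waiter('image_available')
waiter.wait(
    ImageIds=[response['ImageId']],
    WaiterConfig={'Delay': 15, 'MaxAttempts': 80}  # poll every 15s, up to 20 minutes
)
print('Copy finished, AMI is available in ' + target_region)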


ClientError: Failed to download data. Please check your s3 objects and ensure that there is no object that is both a folder as well as a file

I'm trying to execute a SageMaker job but I get this error:
ClientError: Failed to download data. Cannot download s3://pocaaml/sagemaker/xsell_sc1_test/model/model_lgb.tar.gz, a previously downloaded file/folder clashes with it. Please check your s3 objects and ensure that there is no object that is both a folder as well as a file.
I do have model_lgb.tar.gz at that S3 path.
This is my code:
import boto3
import sagemaker
from time import gmtime, strftime
from sagemaker import get_execution_role
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.sklearn.processing import SKLearnProcessor

project_name = 'xsell_sc1_test'
s3_bucket = "pocaaml"
prefix = "sagemaker/" + project_name
account_id = "029294541817"
s3_bucket_base_uri = "{}{}".format("s3://", s3_bucket)
dev = "dev-{}".format(strftime("%y-%m-%d-%H-%M", gmtime()))

region = sagemaker.Session().boto_region_name
print("Using AWS Region: {}".format(region))

# Get a SageMaker-compatible role used by this Notebook Instance.
role = get_execution_role()

boto3.setup_default_session(region_name=region)
boto_session = boto3.Session(region_name=region)
s3_client = boto3.client("s3", region_name=region)
sagemaker_boto_client = boto_session.client("sagemaker")  # is this one needed?
sagemaker_session = sagemaker.session.Session(
    boto_session=boto_session, sagemaker_client=sagemaker_boto_client
)

sklearn_processor = SKLearnProcessor(
    framework_version="0.23-1", role=role, instance_type='ml.m5.4xlarge', instance_count=1
)

PREPROCESSING_SCRIPT_LOCATION = 'funciones_altas.py'

preprocessing_input_code = sagemaker_session.upload_data(
    PREPROCESSING_SCRIPT_LOCATION,
    bucket=s3_bucket,
    key_prefix="{}/{}".format(prefix, "code")
)
preprocessing_input_data = "{}/{}/{}".format(s3_bucket_base_uri, prefix, "data")
preprocessing_input_model = "{}/{}/{}".format(s3_bucket_base_uri, prefix, "model")
preprocessing_output = "{}/{}/{}/{}/{}".format(s3_bucket_base_uri, prefix, dev, "preprocessing", "output")

processing_job_name = params["project_name"].replace("_", "-") + "-preprocess-{}".format(strftime("%d-%H-%M-%S", gmtime()))

sklearn_processor.run(
    code=preprocessing_input_code,
    job_name=processing_job_name,
    inputs=[ProcessingInput(input_name="data",
                            source=preprocessing_input_data,
                            destination="/opt/ml/processing/input/data"),
            ProcessingInput(input_name="model",
                            source=preprocessing_input_model,
                            destination="/opt/ml/processing/input/model")],
    outputs=[
        ProcessingOutput(output_name="output",
                         destination=preprocessing_output,
                         source="/opt/ml/processing/output")],
    wait=False,
)
preprocessing_job_description = sklearn_processor.jobs[-1].describe()
In funciones_altas.py I'm using ohe_altas.tar.gz, not model_lgb.tar.gz, which makes this error even stranger.
Can you help me?
It looks like you are using the SageMaker-generated execution role, and the error is related to S3 permissions.
Here are a couple of things you can do:
Make sure the policies attached to the role actually grant access to your bucket.
Check whether the objects in your bucket are encrypted; if so, also attach a KMS policy to the role you are linking to the job. https://aws.amazon.com/premiumsupport/knowledge-center/s3-403-forbidden-error/
You can always create your own role as well and pass its ARN to the code that runs the processing job, as sketched below.
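A minimal sketch of that last option, assuming a hypothetical role ARN you created with the S3 (and, if needed, KMS) permissions the job requires:
custom_role_arn = "arn:aws:iam::111122223333:role/my-sagemaker-processing-role"  # hypothetical role ARN

sklearn_processor = SKLearnProcessor(
    framework_version="0.23-1",
    role=custom_role_arn,  # explicit role instead of get_execution_role()
    instance_type="ml.m5.4xlarge",
    instance_count=1,
    sagemaker_session=sagemaker_session
)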

How to reboot the Aurora Instance with Failover using Lambda?

Below is my code that already works (it reboots the Aurora RDS instance from Lambda, but without failover).
import boto3

region = 'ap-northeast-1'
instances = 'myAuroraInstanceName'
rds = boto3.client('rds', region_name=region)

def lambda_handler(event, context):
    rds.reboot_db_instance(
        DBInstanceIdentifier=instances
    )
    print('Rebooting your DB Instance: ' + str(instances))
Please take a second to read the documentation; it shows clearly how to specify the failover setting:
rds.reboot_db_instance(
    DBInstanceIdentifier=instances,
    ForceFailover=True
)
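Dropped into your existing handler, a minimal sketch (same region and instance name as above; per the RDS documentation, ForceFailover requires the instance to be configured for Multi-AZ):
import boto3

region = 'ap-northeast-1'
instances = 'myAuroraInstanceName'
rds = boto3.client('rds', region_name=region)

def lambda_handler(event, context):
    # Reboot with a forced failover
    rds.reboot_db_instance(
        DBInstanceIdentifier=instances,
        ForceFailover=True
    )
    print('Rebooting your DB Instance with failover: ' + str(instances))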

Trying to create a python function that creates an ec2 with aws marketplace software

I am trying to write a function that launches an EC2 instance running AWS Marketplace software. Which boto3 calls would you recommend? I'm having trouble finding ones that:
launch a new EC2 instance
use the AWS Marketplace software product code to launch with the AMI
Thank you
You will most likely want to make use of the DescribeImages and RunInstances actions, which are available as methods on the boto3 EC2 client (describe_images and run_instances).
The following snippet is a brief example that uses a product code from the AWS Marketplace to launch a new EC2 instance using the image ID:
import boto3

def main():
    client = boto3.client("ec2")

    # Ubuntu 18.04 LTS - Bionic
    product_id = "3b73ef49-208f-47e1-8a6e-4ae768d8a333"

    response = client.describe_images(
        Filters=[{"Name": "name", "Values": [f"*{product_id}*"]}]
    )
    images = response["Images"]
    image = images[0]
    image_id = image["ImageId"]  # ami-02ad37ec9b98d835f

    response = client.run_instances(
        ImageId=image_id,
        InstanceType="t2.micro",
        MaxCount=1,
        MinCount=1,
        SubnetId="<your_subnet_id>",
    )
    print(response)

if __name__ == "__main__":
    main()
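As an alternative to matching the product ID inside the image name, describe_images also accepts a product-code filter. A minimal sketch, assuming a hypothetical Marketplace product code:
response = client.describe_images(
    Owners=["aws-marketplace"],
    Filters=[{"Name": "product-code", "Values": ["exampleproductcode123"]}]  # hypothetical product code
)
image_id = response["Images"][0]["ImageId"]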

Iteration not working in Lambda as I want to run lambda in two regions listed in code

Hi, I have this simple Lambda function which stops all EC2 instances tagged with AutoOff. I have set up a for loop so that it works for two regions, us-east-1 and us-east-2. I am running the function in the us-east-2 region.
The problem is that only the instance located in us-east-2 is stopping; the other instance (located in us-east-1) is not. What modifications can I make?
Please advise, as I am new to Python and the boto library.
import boto3
import logging

# setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# define the connection
ec2 = boto3.resource('ec2')
client = boto3.client('ec2', region_name='us-east-1')
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all running EC2 instances.
    filters = [
        {
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
    # filter the instances
    instances = ec2.instances.filter(Filters=filters)
    # locate all running instances
    RunningInstances = [instance.id for instance in instances]
    # print the instances for logging purposes
    # print(RunningInstances)
    # make sure there are actually instances to shut down
    if len(RunningInstances) > 0:
        # perform the shutdown
        shuttingDown = ec2.instances.filter(InstanceIds=RunningInstances).stop()
        print(shuttingDown)
    else:
        print("Nothing to see here")
You are creating two ec2 resource objects and one ec2 client. You are only using one of the resource objects, and not using the client at all. You are also setting the region in your loop on a different resource object (conn) from the one you are actually using (ec2).
Change all of this:
ec2 = boto3.resource('ec2')
client = boto3.client('ec2', region_name='us-east-1')
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)
To this:
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    ec2 = boto3.resource('ec2', region_name=region)
Also your indentation is all wrong in the code in your question. I hope that's just a copy/paste issue and not how your code is really indented, because indentation is syntax in Python.
The loop you have here
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)
first assigns a us-east-1 resource to the conn variable, then on the second iteration overwrites it with a us-east-2 resource, and only then does your function run.
So what you can do is put that loop inside your function and move the current body of the function inside the loop, as sketched below.
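A minimal sketch of that rearrangement, using the same tag filter and regions as above:
import boto3

ec2_regions = ['us-east-1', 'us-east-2']
filters = [
    {'Name': 'tag:AutoOff', 'Values': ['True']},
    {'Name': 'instance-state-name', 'Values': ['running']}
]

def lambda_handler(event, context):
    # Repeat the stop logic once per region instead of picking a single region up front.
    for region in ec2_regions:
        ec2 = boto3.resource('ec2', region_name=region)
        instances = ec2.instances.filter(Filters=filters)
        running_ids = [instance.id for instance in instances]
        if running_ids:
            print(ec2.instances.filter(InstanceIds=running_ids).stop())
        else:
            print('Nothing to see here in ' + region)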

AWS Python script vs AWS CLI

I downloaded the AWS CLI and was able to successfully list objects from my bucket, but doing the same from a Python script does not work; the request fails with a Forbidden error.
How should I configure boto to use the same default AWS credentials as the AWS CLI?
Thank you
import sys
import time
import logging
import urllib, subprocess, boto, boto.utils, boto.s3

logger = logging.getLogger("test")
formatter = logging.Formatter('%(asctime)s %(message)s')
file_handler = logging.FileHandler("test.log")
file_handler.setFormatter(formatter)
stream_handler = logging.StreamHandler(sys.stderr)
logger.addHandler(file_handler)
logger.addHandler(stream_handler)
logger.setLevel(logging.INFO)

# wait until user data is available
while True:
    logger.info('**************************** Test starts *******************************')
    userData = boto.utils.get_instance_userdata()
    if userData:
        break
    time.sleep(5)

bucketName = ''
deploymentDomainName = ''

if bucketName:
    from boto.s3.key import Key
    s3Conn = boto.connect_s3('us-east-1')
    logger.info(s3Conn)
    bucket = s3Conn.get_bucket('testbucket')
    key = Key(bucket)
    key.key = 'test.py'
    key.get_contents_to_filename('test.py')
The CLI command is:
aws s3api get-object --bucket testbucket --key test.py my.py
Is it possible to use the latest Python SDK from Amazon (Boto 3)? If so, set up your credentials as outlined here: Boto 3 Quickstart.
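Once that is done, boto3 resolves the same shared credentials the CLI uses, so no keys need to appear in the script. A minimal sketch of the listing done with boto3 and the default credential chain:
import boto3

# Uses the same ~/.aws/credentials profile as the CLI -- no keys in code.
s3 = boto3.client("s3")
for obj in s3.list_objects_v2(Bucket="testbucket").get("Contents", []):
    print(obj["Key"])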
Also, you might check your environment variables. If they don't exist, that is okay; if they exist but don't match the credentials on your account, that could be the problem, as some AWS SDKs and other tools will use environment variables over the config files.
*nix:
echo $AWS_ACCESS_KEY_ID && echo $AWS_SECRET_ACCESS_KEY
Windows:
echo %AWS_ACCESS_KEY_ID% & echo %AWS_SECRET_ACCESS_KEY%
(sorry if my windows-foo is weak)
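If it is still unclear which identity your script ends up using, a quick sanity check with boto3 and STS:
import boto3

# Prints the account and ARN of whatever credentials boto3 actually resolved.
print(boto3.client("sts").get_caller_identity())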
When you use the CLI, by default it takes credentials from the ~/.aws/credentials file, but for running boto you will have to specify the access key and secret key in your Python script:
import boto
import boto.s3.connection

access_key = 'put your access key here!'
secret_key = 'put your secret key here!'

conn = boto.connect_s3(
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    host='bucketname.s3.amazonaws.com',
    # is_secure=False,  # uncomment if you are not using ssl
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
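For comparison, a minimal boto3 sketch of the same operation as the CLI command above, relying on the shared credentials file instead of keys embedded in the script:
import boto3

# Equivalent of: aws s3api get-object --bucket testbucket --key test.py my.py
s3 = boto3.client("s3")
s3.download_file("testbucket", "test.py", "my.py")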