I want to retrieve the VPC peering connection ID in boto, i.e. what "aws ec2 describe-vpc-peering-connections" does on the CLI. I couldn't locate a boto equivalent. Is it possible to retrieve it in boto?
boto3 is different from the previous boto. Here is the solution in boto3:
import boto3
prevar = boto3.client('ec2')
var1 = prevar.describe_vpc_peering_connections()
print(var1)
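The response is a plain dict, so you can pull the IDs out directly. A minimal sketch (assuming at least one peering connection exists):
import boto3
client = boto3.client('ec2')
response = client.describe_vpc_peering_connections()
# each entry in 'VpcPeeringConnections' carries its 'VpcPeeringConnectionId'
for pcx in response['VpcPeeringConnections']:
    print(pcx['VpcPeeringConnectionId'])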
In boto you would use boto.vpc.get_all_vpc_peering_connections(). For example, first create a peering connection between two VPCs:
import boto.vpc
c = boto.vpc.connect_to_region('us-east-1')
vpcs = c.get_all_vpcs()
vpc_peering_connection = c.create_vpc_peering_connection(vpcs[0].id, vpcs[1].id)
Then get all VPC peering connection IDs:
import boto.vpc
conn = boto.vpc.connect_to_region('us-east-1')
vpcpeering = conn.get_all_vpc_peering_connections()
for peering in vpcpeering:
    print(peering.id)
If you know the accepter VPC ID and the requester VPC ID, you can get the peering connection ID this way:
import boto.vpc
conn = boto.vpc.connect_to_region('us-east-1')
peering = conn.get_all_vpc_peering_connections(
    filters={'accepter-vpc-info.vpc-id': 'vpc-12345abc',
             'requester-vpc-info.vpc-id': 'vpc-cba54321'})[0]
print(peering.id)
If that's the only VPC peering connection in your environment, an easier way:
import boto.vpc
conn = boto.vpc.connect_to_region('us-east-1')
peering = conn.get_all_vpc_peering_connections()[0]
print(peering.id)
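For completeness, the same accepter/requester filtering works in boto3 through the Filters parameter. A sketch using the same hypothetical VPC IDs as above:
import boto3
client = boto3.client('ec2')
response = client.describe_vpc_peering_connections(
    Filters=[
        {'Name': 'accepter-vpc-info.vpc-id', 'Values': ['vpc-12345abc']},
        {'Name': 'requester-vpc-info.vpc-id', 'Values': ['vpc-cba54321']}
    ]
)
print(response['VpcPeeringConnections'][0]['VpcPeeringConnectionId'])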
Below is my working code, which reboots an Aurora RDS instance from Lambda without failover. How do I make it reboot with failover?
import boto3
region = 'ap-northeast-1'
instances = 'myAuroraInstanceName'
rds = boto3.client('rds', region_name=region)
def lambda_handler(event, context):
    rds.reboot_db_instance(
        DBInstanceIdentifier=instances
    )
    print('Rebooting your DB Instance: ' + str(instances))
Please take a second to read the documentation; it shows very clearly how to specify the failover setting:
rds.reboot_db_instance(
    DBInstanceIdentifier=instances,
    ForceFailover=True
)
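Putting it together, a sketch of the full handler with failover forced (same hypothetical instance name as above):
import boto3

region = 'ap-northeast-1'
instances = 'myAuroraInstanceName'
rds = boto3.client('rds', region_name=region)

def lambda_handler(event, context):
    # ForceFailover=True makes the reboot happen through a failover
    rds.reboot_db_instance(
        DBInstanceIdentifier=instances,
        ForceFailover=True
    )
    print('Rebooting your DB Instance: ' + str(instances))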
I am trying to connect to an AWS OpenSearch domain from AWS Lambda using the opensearch Python client (for development purposes, not production).
I was trying the following:
from opensearchpy import OpenSearch
import boto3
from requests_aws4auth import AWS4Auth
import os
import config
my_region = os.environ['AWS_REGION']
service = 'es' # still es???
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, my_region, service, session_token=credentials.token)
openSearch_endpoint = config.openSearch_endpoint
# something wrong here:
openSearch_client = OpenSearch(hosts = [openSearch_endpoint], auth = awsauth)
as per the following blogs:
https://aws.amazon.com/blogs/database/indexing-metadata-in-amazon-elasticsearch-service-using-aws-lambda-and-python/
https://docs.aws.amazon.com/opensearch-service/latest/developerguide/search-example.html
but it does not work: it fails to authenticate with "errorMessage": "AuthorizationException(403, '')". However, if I don't use the Python client but simply go through requests instead:
import requests
host = config.openSearch_endpoint
url = host + '/' + '_cat/indices?v'
# this one works:
r = requests.get(url, auth=awsauth)
then my Lambda function does communicate with the OpenSearch domain.
I consulted the OpenSearch() documentation, but it is not clear to me how its parameters map to boto3 session credentials and/or to AWS4Auth. So what should this line
openSearch_client = OpenSearch(hosts = [openSearch_endpoint], auth = awsauth)
be?
I actually managed to find the solution a couple of hours later:
import os
import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth
import config

my_region = os.environ['AWS_REGION']
service = 'es'  # still 'es', even for OpenSearch domains
credentials = boto3.Session().get_credentials()
awsauth = AWS4Auth(credentials.access_key, credentials.secret_key, my_region, service, session_token=credentials.token)
host = config.openSearch_endpoint

# the key differences: http_auth (not auth) and RequestsHttpConnection
openSearch_client = OpenSearch(
    hosts=[host],
    http_auth=awsauth,
    use_ssl=True,
    verify_certs=True,
    ssl_assert_hostname=False,
    ssl_show_warn=False,
    connection_class=RequestsHttpConnection
)
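To verify the client is authenticating, a quick smoke test reusing openSearch_client from above could be:
# should return cluster metadata instead of an AuthorizationException
print(openSearch_client.info())
# equivalent of GET _cat/indices?v
print(openSearch_client.cat.indices(format='json'))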
OpenSearch Service requires port 443 for incoming requests, so you need to add a new inbound rule to the security group attached to your OpenSearch Service domain.
Try the rule:
Type: HTTPS
Protocol: TCP
Port range: 443
Source: 0.0.0.0/0 (Anywhere-IPv4)
Additionally, your OpenSearch Service domain should have a resource-based policy that allows your Lambda function's role to perform requests against it.
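For reference, here is a hedged sketch of attaching such a policy with boto3; the domain name, account ID, region, and role name are placeholders to replace with your own:
import json
import boto3

opensearch = boto3.client('opensearch')

# hypothetical ARNs: substitute your account, region, domain and Lambda role
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/my-lambda-role"},
        "Action": "es:ESHttp*",
        "Resource": "arn:aws:es:us-east-1:123456789012:domain/my-domain/*"
    }]
}

opensearch.update_domain_config(
    DomainName='my-domain',
    AccessPolicies=json.dumps(policy)
)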
Hi, I have this simple Lambda function which stops all EC2 instances tagged with Auto_off. I have set up a for loop so that it works for two regions, us-east-1 and us-east-2. I am running the function in the us-east-2 region.
The problem is that only the instance located in us-east-2 is stopping; the other instance (located in us-east-1) is not. What modifications can I make?
Please advise, as I am new to Python and the boto library.
import boto3
import logging

# setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# define the connection
ec2 = boto3.resource('ec2')
client = boto3.client('ec2', region_name='us-east-1')
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all running EC2 instances.
    filters = [
        {
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
    # filter the instances
    instances = ec2.instances.filter(Filters=filters)
    # locate all running instances
    RunningInstances = [instance.id for instance in instances]
    # print the instances for logging purposes
    # print(RunningInstances)
    # make sure there are actually instances to shut down
    if len(RunningInstances) > 0:
        # perform the shutdown
        shuttingDown = ec2.instances.filter(InstanceIds=RunningInstances).stop()
        print(shuttingDown)
    else:
        print("Nothing to see here")
You are creating two instances of the ec2 resource and one instance of the ec2 client. You are only using one of the resource instances, and you are not using the client at all. You are also setting the region in your loop on a different resource object from the one you are actually using.
Change all of this:
ec2 = boto3.resource('ec2')
client = boto3.client('ec2', region_name='us-east-1')
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)
To this:
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    ec2 = boto3.resource('ec2', region_name=region)
Also, the indentation in the code in your question was all wrong. I hope that's just a copy/paste issue and not how your code is really indented, because indentation is syntax in Python.
The loop you do here
ec2_regions = ['us-east-1','us-east-2']
for region in ec2_regions:
conn = boto3.resource('ec2', region_name=region)
first assigns a us-east-1 connection to the conn variable, then on the second iteration overwrites it with a us-east-2 connection, and only after that does your function run.
So what you can do is put that loop inside your function and move the current body of the function inside that loop, as in the sketch below.
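Putting both answers together, a sketch of the corrected function (same AutoOff tag and regions as the question) might look like:
import boto3

ec2_regions = ['us-east-1', 'us-east-2']

def lambda_handler(event, context):
    for region in ec2_regions:
        # create the resource in each region, inside the handler
        ec2 = boto3.resource('ec2', region_name=region)
        filters = [
            {'Name': 'tag:AutoOff', 'Values': ['True']},
            {'Name': 'instance-state-name', 'Values': ['running']}
        ]
        running = [instance.id for instance in ec2.instances.filter(Filters=filters)]
        if running:
            ec2.instances.filter(InstanceIds=running).stop()
            print('Stopping in {}: {}'.format(region, running))
        else:
            print('Nothing to stop in {}'.format(region))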
I'm trying to automate the "Copy AMI" functionality I have on my AWS EC2 console; can anyone point me to some Python code that does this through boto3?
From EC2 — Boto 3 documentation:
response = client.copy_image(
ClientToken='string',
Description='string',
Encrypted=True|False,
KmsKeyId='string',
Name='string',
SourceImageId='string',
SourceRegion='string',
DryRun=True|False
)
Make sure you send the request to the destination region, passing in a reference to the SourceRegion.
To be more precise:
Let's say the AMI you want to copy is in us-east-1 (the source region).
Your requirement is to copy this into us-west-2 (the destination region).
Create the boto3 EC2 client in the us-west-2 region, then pass us-east-1 as the SourceRegion.
import boto3

session1 = boto3.client('ec2', region_name='us-west-2')
response = session1.copy_image(
    Name='DevEnv_Linux',
    Description='Copied this AMI from region us-east-1',
    SourceImageId='ami-02a6ufwod1f27e11',
    SourceRegion='us-east-1'
)
I use high-level resources like EC2.ServiceResource most of the time, so the following is the code I use to combine the EC2 resource with the low-level client:
import boto3

source_image_id = '....'
profile = '...'
source_region = 'us-west-1'

source_session = boto3.Session(profile_name=profile, region_name=source_region)
ec2 = source_session.resource('ec2')
ami = ec2.Image(source_image_id)

target_region = 'us-east-1'
target_session = boto3.Session(profile_name=profile, region_name=target_region)
target_ec2 = target_session.resource('ec2')
target_client = target_session.client('ec2')

response = target_client.copy_image(
    Name=ami.name,
    Description=ami.description,
    SourceImageId=ami.id,
    SourceRegion=source_region
)
target_ami = target_ec2.Image(response['ImageId'])
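Note that copy_image returns as soon as the copy starts. If you need to block until the new AMI is usable, a sketch using the standard EC2 image_available waiter (reusing target_client and response from above) would be:
# wait until the copied AMI reaches the 'available' state in the target region
waiter = target_client.get_waiter('image_available')
waiter.wait(ImageIds=[response['ImageId']])
print('AMI {} is now available in {}'.format(response['ImageId'], target_region))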
I'm trying to use boto to manage an AWS OpsWorks stack, but I can't get a connection and I don't understand why.
With the same credentials I can connect with the .ec2 or .sns libs, and both are in the stack's region (eu-west-1).
When I try to connect, conn is None.
Here's my code:
import boto.opsworks
AWS_ACCESS_KEY_ID = ' ... '
AWS_SECRET_ACCESS_KEY = ' ... '
conn = boto.opsworks.connect_to_region('eu-west-1',
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
    validate_certs=False)
print(conn) # conn is None
conn.describe_layers()
Is it possible to connect to opsworks from outside AWS?
Thank you!
There's only one OpsWorks endpoint available, in us-east-1, so you have to connect to that region even if your stacks live elsewhere.
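So a minimal sketch, assuming the same credentials as in the question, is to connect to us-east-1 and manage the eu-west-1 stack through it:
import boto.opsworks

# OpsWorks only exposes the us-east-1 endpoint; stacks in other regions
# are still managed through it
conn = boto.opsworks.connect_to_region('us-east-1',
    aws_access_key_id=AWS_ACCESS_KEY_ID,
    aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
print(conn)  # no longer None
print(conn.describe_stacks())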