I'm trying to get EBS volume IDs so I can create a Zabbix LLD discovery rule.
https://github.com/omni-lchen/zabbix-cloudwatch/blob/master/awsLLD.py
def getEBS(a, r):
    account = a
    aws_account = awsAccount(account)
    aws_access_key_id = aws_account._aws_access_key_id
    aws_secret_access_key = aws_account._aws_secret_access_key
    aws_region = r
    #component = c
    # Init LLD Data
    lldlist = []
    llddata = {"data": lldlist}
    # Connect to EC2 service
    conn = awsConnection()
    conn.ebsConnect(aws_region, aws_access_key_id, aws_secret_access_key)
    ebsConn = conn._aws_connection
    # Save EBS function results in a list
    functionResultsList = []
    # Save volume names in a list
    tdata = []
    # Get a list of EBS volumes
    functionResults = ebsConn.get_all_volumes()
output:
[Volume:vol-029213f06d66eadac, Volume:vol-00fbd5dfaebd79e83, Volume:vol-0eeb126d13ecf0eed, Volume:vol-09a1f3446b3f78ea5]
I'm having issues parsing the above output to get:
vol-029213f06d66eadac
vol-0eeb126d13ecf0eed
vol-09a1f3446b3f78ea5
I know I need to write something like:
for la in functionResultsList:
    print la[0]
to get the first element, but I don't know how to continue.
I removed the list and used:
for la in range(len(functionResults)):
    print functionResults[la]
output:
Volume:vol-029213f06d66eadac
Volume:vol-00fbd5dfaebd79e83
Volume:vol-0eeb126d13ecf0eed
Volume:vol-09a1f3446b3f78ea5
I'm new to Python and don't really know what a list actually is, but I don't think that's a reason for a downvote.
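For what it's worth, get_all_volumes() returns a list of boto Volume objects, not strings; "Volume:vol-..." is just each object's repr, and the ID itself lives on the object's id attribute. Here is a minimal sketch of pulling the IDs out and feeding them into the LLD structure (the {#VOLUMEID} macro name is my own example, not taken from the linked script):
import json

# functionResults is the list of boto Volume objects from get_all_volumes()
for volume in functionResults:
    print volume.id  # e.g. vol-029213f06d66eadac

# Build the Zabbix LLD JSON; {#VOLUMEID} is an example macro name.
lldlist = [{"{#VOLUMEID}": volume.id} for volume in functionResults]
llddata = {"data": lldlist}
print json.dumps(llddata)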
Related
I'm trying to use the following two times in the same function, but I get an invalid syntax error in an AWS Lambda function. I am trying to create these two different files in the same S3 bucket. Please help.
This works fine:
s3 = boto3.resource('s3', region_name=<region-name>, aws_access_key_id=AWS_ACCESS_KEY_ID,
                    aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
s3.Object(<bucket_name>, 'filename.txt').put(Body = "somedata")
But when I use this, it gives an invalid syntax error:
s3 = boto3.resource('s3', region_name=<region-name>, aws_access_key_id=AWS_ACCESS_KEY_ID,
                    aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
s3.Object(<bucket_name>, 'filename.txt').put(Body = "somedata")
s3.Object(<bucket_name>, 'differentfilename.txt').put(Body = "some else data")
Just add a
time.sleep(0.1)
sleep in between the two calls. It's just a workaround.
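For reference, here is a runnable sketch of that workaround (the region and bucket name are placeholders, and credentials are assumed to come from the environment):
import time
import boto3

s3 = boto3.resource('s3', region_name='us-east-1')  # placeholder region

s3.Object('my-bucket', 'filename.txt').put(Body='somedata')
time.sleep(0.1)  # short pause between the two puts, as suggested above
s3.Object('my-bucket', 'differentfilename.txt').put(Body='some else data')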
I am trying to export the past 15 minutes of access logs from CloudWatch to an S3 bucket. When I run it, it successfully stores logs in S3, but a lot of logs are missing: for example, CloudWatch has about 30 logs for the past 15 minutes, while S3 ends up with only about 3.
import math
from datetime import datetime, timedelta
import boto3

group_name = '/aws/elasticbeanstalk/my-env/var/app/current/storage/logs/laravel.log'
s3 = boto3.client('s3')
log_file = boto3.client('logs')
now = datetime.now()
deletionDate = now - timedelta(minutes=15)
start = math.floor(deletionDate.replace(second=0, microsecond=0).timestamp()*1000)
end = math.floor(now.replace(second=0, microsecond=0).timestamp()*1000)
destination_bucket = 'past15mins-log'
prefix = 'lambda2-test-log/' + str(start) + '-' + str(end)
# TODO implement
response = log_file.create_export_task(
    logGroupName=group_name,
    fromTime=start,
    to=end,
    destination=destination_bucket,
    destinationPrefix=prefix
)
if not response['ResponseMetadata']['HTTPStatusCode'] == 200:
    raise Exception('fail ' + str(start) + " - " + str(end))
The documentation says it is an asynchronous call; my guess is that since there are 3 servers I get logs from, that is what's causing the problem?
Thanks in advance.
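One thing worth knowing: create_export_task is asynchronous, and CloudWatch Logs allows only one active (pending or running) export task per account and region at a time, so submitting several exports back to back can drop some of them. A sketch of polling the task to completion before starting the next one, reusing the log_file client and response from above:
import time

task_id = response['taskId']
while True:
    desc = log_file.describe_export_tasks(taskId=task_id)
    code = desc['exportTasks'][0]['status']['code']
    if code == 'COMPLETED':
        break
    if code in ('CANCELLED', 'FAILED'):
        raise Exception('export task ' + task_id + ' ended with status ' + code)
    time.sleep(5)  # wait before polling again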
Hi, I have this simple Lambda function which stops all EC2 instances tagged with AutoOff. I have set up a for loop so that it works for two regions, us-east-1 and us-east-2. I am running the function in the us-east-2 region.
The problem is that only the instance located in us-east-2 is stopping; the other instance (located in us-east-1) is not. What modifications can I make?
Please suggest, as I am new to Python and the boto library.
import boto3
import logging

# setup simple logging for INFO
logger = logging.getLogger()
logger.setLevel(logging.INFO)

# define the connection
ec2 = boto3.resource('ec2')
client = boto3.client('ec2', region_name='us-east-1')
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)

def lambda_handler(event, context):
    # Use the filter() method of the instances collection to retrieve
    # all running EC2 instances.
    filters = [
        {
            'Name': 'tag:AutoOff',
            'Values': ['True']
        },
        {
            'Name': 'instance-state-name',
            'Values': ['running']
        }
    ]
    # filter the instances
    instances = ec2.instances.filter(Filters=filters)
    # locate all running instances
    RunningInstances = [instance.id for instance in instances]
    # print the instances for logging purposes
    # print RunningInstances
    # make sure there are actually instances to shut down
    if len(RunningInstances) > 0:
        # perform the shutdown
        shuttingDown = ec2.instances.filter(InstanceIds=RunningInstances).stop()
        print shuttingDown
    else:
        print "Nothing to see here"
You are creating 2 instances of ec2 resource, and 1 instance of ec2 client. You are only using one instance of ec2 resource, and not using the client at all. You are also setting the region in your loop on a different resource object from the one you are actually using.
Change all of this:
ec2 = boto3.resource('ec2')
client = boto3.client('ec2', region_name='us-east-1')
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)
To this:
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    ec2 = boto3.resource('ec2', region_name=region)
Also, your indentation is all wrong in the code in your question. I hope that's just a copy/paste issue and not how your code is really indented, because indentation is syntax in Python.
The loop you do here:
ec2_regions = ['us-east-1', 'us-east-2']
for region in ec2_regions:
    conn = boto3.resource('ec2', region_name=region)
first assigns a us-east-1 connection to the conn variable, then on the second iteration overwrites it with a us-east-2 one, and only after that does your function run.
So what you can do is put that loop inside your function and move the current body of the function inside that loop, as sketched below.
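Putting both answers together, a minimal sketch of the reworked handler (same tag and state filters as in the question) could look like this:
import boto3

def lambda_handler(event, context):
    for region in ['us-east-1', 'us-east-2']:
        # Create the resource per region inside the handler,
        # so every region is actually processed.
        ec2 = boto3.resource('ec2', region_name=region)
        filters = [
            {'Name': 'tag:AutoOff', 'Values': ['True']},
            {'Name': 'instance-state-name', 'Values': ['running']},
        ]
        running = [i.id for i in ec2.instances.filter(Filters=filters)]
        if running:
            ec2.instances.filter(InstanceIds=running).stop()
            print('Stopping in ' + region + ': ' + str(running))
        else:
            print('Nothing to see here in ' + region)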
I'm trying to automate the "Copy AMI" functionality I have in my AWS EC2 console. Can anyone point me to some Python code that does this through boto3?
From EC2 — Boto 3 documentation:
response = client.copy_image(
    ClientToken='string',
    Description='string',
    Encrypted=True|False,
    KmsKeyId='string',
    Name='string',
    SourceImageId='string',
    SourceRegion='string',
    DryRun=True|False
)
Make sure you send the request to the destination region, passing in a reference to the SourceRegion.
To be more precise:
Let's say the AMI you want to copy is in us-east-1 (the source region).
Your requirement is to copy this into us-west-2 (the destination region).
Create the boto3 EC2 client for the us-west-2 region and then pass us-east-1 as the SourceRegion.
import boto3

session1 = boto3.client('ec2', region_name='us-west-2')
response = session1.copy_image(
    Name='DevEnv_Linux',
    Description='Copied this AMI from region us-east-1',
    SourceImageId='ami-02a6ufwod1f27e11',
    SourceRegion='us-east-1'
)
I use high-level resources like EC2.ServiceResource most of the time, so the following is the code I use to work with both the EC2 resource and the low-level client:
import boto3

source_image_id = '....'
profile = '...'
source_region = 'us-west-1'

source_session = boto3.Session(profile_name=profile, region_name=source_region)
ec2 = source_session.resource('ec2')
ami = ec2.Image(source_image_id)

target_region = 'us-east-1'
target_session = boto3.Session(profile_name=profile, region_name=target_region)
target_ec2 = target_session.resource('ec2')
target_client = target_session.client('ec2')

response = target_client.copy_image(
    Name=ami.name,
    Description=ami.description,
    SourceImageId=ami.id,
    SourceRegion=source_region
)
target_ami = target_ec2.Image(response['ImageId'])
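As a follow-up, if the next step depends on the copy being finished, you can block until the new AMI is available using the standard EC2 image_available waiter (a sketch continuing from the variables above):
# Wait in the target region until the copied AMI reaches 'available'
waiter = target_client.get_waiter('image_available')
waiter.wait(ImageIds=[response['ImageId']])
print(target_ami.state)  # should now be 'available'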
I want to start 10 instances, get their instance IDs, and get their private IP addresses.
I know this can be done using the AWS CLI, so I'm wondering if there are any such scripts already written, so I don't have to reinvent the wheel.
Thanks
I recommend using Python and the boto package for such automation. Python is clearer than bash. You can use the following page as a starting point: http://boto.readthedocs.org/en/latest/ec2_tut.html
On the off chance that someone in the future comes across my question, I thought I'd give my (somewhat) final solution.
Using Python and the boto package that was suggested, I have the following Python script.
It's pretty well commented, but feel free to ask if you have any questions.
import boto
import time
import sys

IMAGE = 'ami-xxxxxxxx'
KEY_NAME = 'xxxxx'
INSTANCE_TYPE = 't1.micro'
SECURITY_GROUPS = ['xxxxxx'] # If multiple, separate by commas
COUNT = 2 # number of servers to start

private_dns = [] # will be populated with the private DNS name of each instance

print 'Connecting to AWS'
conn = boto.connect_ec2()

print 'Starting instances'
# start the instances
reservation = conn.run_instances(IMAGE, instance_type=INSTANCE_TYPE, key_name=KEY_NAME, security_groups=SECURITY_GROUPS, min_count=COUNT, max_count=COUNT)#, dry_run=True)
#print reservation # debug

print 'Waiting for instances to start'
# ONLY CHECKS IF RUNNING, MAY NOT BE SSH READY
for instance in reservation.instances: # doing this for every instance we started
    while not instance.update() == 'running': # while it's not running (probably 'pending')
        print '.', # trailing comma is intentional to print on same line
        sys.stdout.flush() # make the thing print immediately instead of buffering
        time.sleep(2) # let the instance start up
print 'Done\n'

for instance in reservation.instances:
    instance.add_tag("Name", "Hadoop Ecosystem") # tag the instance
    private_dns.append(instance.private_dns_name) # add its private DNS name to the list
    print instance, 'is ready at', instance.private_dns_name # print to console

print private_dns
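For anyone using boto3 today, here is a minimal equivalent sketch (the AMI ID, key name, and region are placeholders):
import boto3

ec2 = boto3.resource('ec2', region_name='us-east-1')  # placeholder region

# Start the instances; ImageId and KeyName are placeholders.
instances = ec2.create_instances(
    ImageId='ami-xxxxxxxx',
    InstanceType='t2.micro',
    KeyName='xxxxx',
    MinCount=10,
    MaxCount=10,
)

for instance in instances:
    instance.wait_until_running()  # block until the instance is running
    instance.reload()              # refresh attributes after the state change
    print(instance.id, instance.private_ip_address)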