I need to capture a variable value for Linux EC2 instances, namely this dimension:

'Name': 'device',
'Value': "xvdg"

The disk name (device name) varies across EC2 Linux instances, e.g. sdh, xvdg, etc.
I am using the boto3 SDK.
Any help is much appreciated.
Current code (the device name is hard-coded):
cloudwatch.put_metric_alarm(
    AlarmName=prefix + ' - Elastic Search Disc % Availability is less than the configured threshold value - Please increase size of EBS Volume',
    ComparisonOperator='LessThanThreshold',  # note: 'LowerThanThreshold' is not a valid value
    EvaluationPeriods=3,
    MetricName='disk_used_percent',
    Namespace='CWAgent',
    Period=300,
    Statistic='Average',
    Threshold=75,
    AlarmActions=[snstopicarn],
    AlarmDescription='Standard Disc Alert - Elastic Search Disc % Availability is less than the configured threshold value - Please increase size of EBS Volume',
    Dimensions=[
        {
            'Name': 'InstanceId',
            'Value': instance["InstanceId"]
        },
        {
            'Name': 'path',
            'Value': "/mnt/elasticsearch"
        },
        {
            'Name': 'device',
            'Value': "xvdi"  # hard-coded device name that needs to be dynamic
        },
        {
            'Name': 'fstype',
            'Value': "ext4"
        },
    ]
)
You can use describe_instances and extract the DeviceName (see the boto3 documentation).
import boto3

instance_id = <instance_id>

client = boto3.client('ec2')
response = client.describe_instances(
    InstanceIds=[
        instance_id,
    ],
)

device_name = response['Reservations'][0]['Instances'][0]['BlockDeviceMappings'][0]['DeviceName']
print(device_name)  # e.g. /dev/xvda
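To plug this into the alarm from the question, here is a minimal sketch (untested, and assuming the first block-device mapping is the volume you care about). Note that BlockDeviceMappings returns names like "/dev/xvdg", while the CloudWatch agent's "device" dimension usually omits the "/dev/" prefix (and may carry a partition suffix), so adjust the string handling to whatever your CWAgent metrics actually report:

import boto3

ec2 = boto3.client('ec2')

def device_dimension_for(instance_id, mapping_index=0):
    # Look up the block-device mapping and strip the "/dev/" prefix so the
    # value matches the CWAgent "device" dimension, e.g. "/dev/xvdg" -> "xvdg"
    resp = ec2.describe_instances(InstanceIds=[instance_id])
    mappings = resp['Reservations'][0]['Instances'][0]['BlockDeviceMappings']
    return mappings[mapping_index]['DeviceName'].replace('/dev/', '')

# Then, inside your existing alarm-creation loop:
#   {'Name': 'device', 'Value': device_dimension_for(instance["InstanceId"])},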
When using describe_alarms_for_metric (note that this is a CloudWatch client call, not EC2), the request syntax looks like this:

response = cloudwatch.describe_alarms_for_metric(
    MetricName='string',
    Namespace='string',
    Statistic='SampleCount'|'Average'|'Sum'|'Minimum'|'Maximum',
    ExtendedStatistic='string',
    Dimensions=[
        {
            'Name': 'string',
            'Value': 'string'
        },
    ],
    Period=123,
    Unit='Seconds'|'Microseconds'|'Milliseconds'|'Bytes'|'Kilobytes'|'Megabytes'|'Gigabytes'|'Terabytes'|'Bits'|'Kilobits'|'Megabits'|'Gigabits'|'Terabits'|'Percent'|'Count'|'Bytes/Second'|'Kilobytes/Second'|'Megabytes/Second'|'Gigabytes/Second'|'Terabytes/Second'|'Bits/Second'|'Kilobits/Second'|'Megabits/Second'|'Gigabits/Second'|'Terabits/Second'|'Count/Second'|'None'
)
Once you have that response, you can pull the device dimension back out of it:

for dim in response['MetricAlarms'][0]['Dimensions']:
    if dim['Name'] == 'device':
        print(dim['Value'])  # e.g. xvdi
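For the metric in the question, a filled-in call might look like the sketch below (the values are the ones from the original alarm). Be aware that DescribeAlarmsForMetric matches on the metric's dimensions, so depending on how the alarm was created you may need to supply the full dimension set, including the device dimension itself:

cloudwatch = boto3.client('cloudwatch')

response = cloudwatch.describe_alarms_for_metric(
    MetricName='disk_used_percent',
    Namespace='CWAgent',
    Statistic='Average',
    Period=300,
    Dimensions=[
        {'Name': 'InstanceId', 'Value': instance_id},      # instance_id as in the code above
        {'Name': 'path', 'Value': '/mnt/elasticsearch'},
        {'Name': 'fstype', 'Value': 'ext4'},
    ],
)

for alarm in response['MetricAlarms']:
    for dim in alarm['Dimensions']:
        if dim['Name'] == 'device':
            print(dim['Value'])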
Related
With minimal experience in Python, I am looking for ideas on building a Lambda that tags AWS resources, especially EC2 instances and their volumes, based on a prefix in the instance Name tag.
Examples:
dev-cassandra-01
dev-cassandra-02
I would like to set up auto-tagging with a predefined set of tags for the prefix "dev-cassandra", which should apply to any new instances created with this prefix.
Similarly, a different set of tags would apply to different application instances.
This can be handled by a provisioning tool for regular instances, but not for existing ASG instances.
You can attach tags at instance creation time with the resource method create_instances, via its 'TagSpecifications' parameter.
Alternatively, you can add or edit tags on an existing instance with the client method create_tags; for this method you need the instance ID first.
The prefix logic can be added to the Python script as desired (see the sketch after the examples below).
Here are two examples of these methods:
create_instances:

import boto3

# Getting a resource object with AWS credentials
s = boto3.Session(
    region_name=<region_name>,
    aws_access_key_id=<aws_access_key>,
    aws_secret_access_key=<aws_secret_access_key>,
)
ec2 = s.resource('ec2')

# Some of these options are optional
instance = ec2.create_instances(
    ImageId=<ami_id>,
    MinCount=1,
    MaxCount=1,
    InstanceType=<instance_type>,
    KeyName=<instance_key_pair>,
    IamInstanceProfile={
        'Name': <instance_profile>
    },
    SecurityGroupIds=[
        <instance_security_group>,
    ],
    TagSpecifications=[
        {
            'ResourceType': 'instance',
            'Tags': [
                {
                    'Key': <key_name>,
                    'Value': <value_data>,
                },
            ]
        },
    ],
)
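Since the question also mentions tagging volumes: TagSpecifications accepts more than one resource type, so the volumes created at launch can be tagged in the same create_instances call. A sketch of just that parameter, reusing the placeholders from the example above:

TagSpecifications=[
    {
        'ResourceType': 'instance',
        'Tags': [{'Key': <key_name>, 'Value': <value_data>}],
    },
    {
        # Volumes created together with the instance receive these tags
        'ResourceType': 'volume',
        'Tags': [{'Key': <key_name>, 'Value': <value_data>}],
    },
],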
create_tags:

# Next line is for getting a client object from the resource object
ec2_client = ec2.meta.client
ec2_client.create_tags(
    Resources=[
        <instance_id>,
    ],
    Tags=[
        {
            'Key': 'Name',
            'Value': <dev-cassandra><_your_name>
        },
        {
            'Key': <key_name>,
            'Value': <value_data>,
        },
    ]
)
It is worth mentioning that the "instance name" is itself just a tag with the key Name.
For example:

{
    'Key': 'Name',
    'Value': <dev-cassandra><_your_name>
}
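As mentioned above, here is a rough sketch of the prefix logic (untested; the tag sets and the trigger wiring are placeholders to adapt). It checks whether an instance's Name tag starts with a prefix such as "dev-cassandra" and, if so, applies a predefined set of tags to the instance and to its attached volumes:

import boto3

# Hypothetical mapping from Name-tag prefix to the tags that should be applied
PREFIX_TAGS = {
    'dev-cassandra': [
        {'Key': 'Team', 'Value': 'cassandra'},
        {'Key': 'Environment', 'Value': 'dev'},
    ],
}

ec2 = boto3.resource('ec2')

def tag_instance_and_volumes(instance_id):
    instance = ec2.Instance(instance_id)
    name = next((t['Value'] for t in (instance.tags or []) if t['Key'] == 'Name'), '')
    for prefix, tags in PREFIX_TAGS.items():
        if name.startswith(prefix):
            instance.create_tags(Tags=tags)
            for volume in instance.volumes.all():   # tag the attached EBS volumes too
                volume.create_tags(Tags=tags)
            break

In a Lambda triggered by, for example, an EventBridge rule on RunInstances events, you would call this function with the new instance ID taken from the event.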
I am updating a provisioned Service Catalog product from a Lambda function. It works fine for many products, but for one provisioned product the update call succeeds while the underlying CloudFormation stack is not updated by Service Catalog.
Here is the code of the Lambda function:
import json
import boto3

def lambda_handler(event, context):
    client = boto3.client('servicecatalog')
    response = client.update_provisioned_product(
        AcceptLanguage='en',
        ProvisionedProductId='pp-3mio2kzru2yc2',
        ProductId='prod-zpvv57zereqfu',
        ProvisioningArtifactId='pa-k3cx2pkgge4ce',
        ProvisioningParameters=[
            {
                'Key': 'ScheduledScalingInDesiredInstances',
                'Value': '0',
                'UsePreviousValue': False
            },
            {
                'Key': 'ScheduledScalingInMaxInstances',
                'Value': '0',
                'UsePreviousValue': False
            },
            {
                'Key': 'ScheduledScalingInMinInstances',
                'Value': '0',
                'UsePreviousValue': False
            },
            {
                'Key': 'ScheduledScalingInCron',
                'Value': 'cron(42 19 * * ? *)',
                'UsePreviousValue': False
            },
            {
                'Key': 'EnvironmentName',
                'UsePreviousValue': True
            },
            {
                'Key': 'ClusterName',
                'UsePreviousValue': True
            },
        ]
    )
The Lambda function has the required permissions; the same role is used in another Lambda function that successfully updates its CloudFormation stack via a provisioned Service Catalog product.
What could be the reason?
The issue is resolved.
It was a parameter mismatch: I was trying to update a few parameters while a dependent parameter needed to be updated as well.
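For anyone debugging a similar case, the provisioning record that update_provisioned_product returns can be inspected to see why an update did not apply. A minimal sketch (response here is the return value of the update_provisioned_product call above):

client = boto3.client('servicecatalog')

record_id = response['RecordDetail']['RecordId']
record = client.describe_record(Id=record_id)

print(record['RecordDetail']['Status'])            # e.g. IN_PROGRESS, SUCCEEDED, FAILED
for error in record['RecordDetail'].get('RecordErrors', []):
    # Parameter-mismatch style failures surface here
    print(error['Code'], error['Description'])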
I want to execute a spark-submit job on an AWS EMR cluster based on a file-upload event on S3. I am using an AWS Lambda function to capture the event, but I have no idea how to submit a spark-submit job to the EMR cluster from the Lambda function.
Most of the answers I found talk about adding a step to the EMR cluster, but I do not know whether the added step can fire "spark-submit --with args".
You can; I had to do the same thing last week!
Using boto3 for Python (other languages will have a similar solution), you can either start a cluster with the step already defined, or attach a step to an already running cluster.
Defining the cluster with the step
import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    cluster_id = conn.run_job_flow(
        Name='ClusterName',
        ServiceRole='EMR_DefaultRole',
        JobFlowRole='EMR_EC2_DefaultRole',
        VisibleToAllUsers=True,
        LogUri='s3n://some-log-uri/elasticmapreduce/',
        ReleaseLabel='emr-5.8.0',
        Instances={
            'InstanceGroups': [
                {
                    'Name': 'Master nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'MASTER',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 1,
                },
                {
                    'Name': 'Slave nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'CORE',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 2,
                }
            ],
            'Ec2KeyName': 'key-name',
            'KeepJobFlowAliveWhenNoSteps': False,
            'TerminationProtected': False
        },
        Applications=[{
            'Name': 'Spark'
        }],
        Configurations=[{
            "Classification": "spark-env",
            "Properties": {},
            "Configurations": [{
                "Classification": "export",
                "Properties": {
                    "PYSPARK_PYTHON": "python35",
                    "PYSPARK_DRIVER_PYTHON": "python35"
                }
            }]
        }],
        BootstrapActions=[{
            'Name': 'Install',
            'ScriptBootstrapAction': {
                'Path': 's3://path/to/bootstrap.script'
            }
        }],
        Steps=[{
            'Name': 'StepName',
            'ActionOnFailure': 'TERMINATE_CLUSTER',
            'HadoopJarStep': {
                'Jar': 's3n://elasticmapreduce/libs/script-runner/script-runner.jar',
                'Args': [
                    "/usr/bin/spark-submit", "--deploy-mode", "cluster",
                    's3://path/to/code.file', '-i', 'input_arg',
                    '-o', 'output_arg'
                ]
            }
        }],
    )
    return "Started cluster {}".format(cluster_id)
Attaching a step to an already running cluster
As per here
import sys
import time

import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    # chooses the first cluster which is Running or Waiting
    # possibly can also choose by name or already have the cluster id
    clusters = conn.list_clusters()
    # choose the correct cluster
    clusters = [c["Id"] for c in clusters["Clusters"]
                if c["Status"]["State"] in ["RUNNING", "WAITING"]]
    if not clusters:
        sys.stderr.write("No valid clusters\n")
        sys.exit(1)
    # take the first relevant cluster
    cluster_id = clusters[0]
    # code location on your emr master node
    CODE_DIR = "/home/hadoop/code/"
    # spark configuration example (spark-submit uses --conf key=value)
    step_args = ["/usr/bin/spark-submit", "--conf", "your-configuration",
                 CODE_DIR + "your_file.py", '--your-parameters', 'parameters']
    step = {"Name": "what_you_do-" + time.strftime("%Y%m%d-%H:%M"),
            'ActionOnFailure': 'CONTINUE',
            'HadoopJarStep': {
                'Jar': 's3n://elasticmapreduce/libs/script-runner/script-runner.jar',
                'Args': step_args
            }}
    action = conn.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
    return "Added step: %s" % (action)
AWS Lambda function Python code if you want to execute a Spark jar via spark-submit; this version posts the job to the Livy /batches endpoint running on the EMR master node:
from botocore.vendored import requests
import json

def lambda_handler(event, context):
    headers = {"content-type": "application/json"}
    # Livy batches endpoint on the EMR master node
    url = 'http://ip-address.ec2.internal:8998/batches'
    payload = {
        # The original listed three jars in one string; Livy expects the main
        # application jar in 'file' and dependency jars in 'jars'
        'file': 's3://Bucket/Orchestration/SparkCode.jar',
        'jars': [
            's3://Bucket/Orchestration/RedshiftJDBC41.jar',
            's3://Bucket/Orchestration/mysql-connector-java-8.0.12.jar',
        ],
        'className': 'Main Class Name',
        'args': [event.get('rootPath')]
    }
    res = requests.post(url, data=json.dumps(payload), headers=headers, verify=False)
    json_data = json.loads(res.text)
    return json_data.get('id')
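If you need to know when the batch finishes, Livy lets you poll the batch by the ID returned above. A minimal sketch under the same endpoint assumptions, reusing the imports from the handler:

def get_batch_state(batch_id):
    # GET /batches/{batchId}/state returns e.g. {"id": 1, "state": "running"}
    url = 'http://ip-address.ec2.internal:8998/batches/{}/state'.format(batch_id)
    res = requests.get(url, headers={"content-type": "application/json"}, verify=False)
    return json.loads(res.text).get('state')   # starting, running, success, dead, ...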
I can't find code for this with boto3. I'm able to get the ELB name and the instance IDs separately, but I can't link them together to find which instances are attached to which ELB.
Classic Load Balancer
The boto3 describe_load_balancers() function returns, among other fields, a list of instances:
{
    'LoadBalancerDescriptions': [
        {
            'LoadBalancerName': 'string',
            'DNSName': 'string',
            ....
            'Instances': [
                {
                    'InstanceId': 'string'
                },
            ],
            ....
        },
    ],
    'NextMarker': 'string'
}
The Instances section returns the IDs of the instances for the load balancer.
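As a quick sketch of that mapping (field names as in the response above):

import boto3

elb = boto3.client('elb')

# Map each Classic Load Balancer name to the IDs of its registered instances
for lb in elb.describe_load_balancers()['LoadBalancerDescriptions']:
    instance_ids = [i['InstanceId'] for i in lb['Instances']]
    print(lb['LoadBalancerName'], instance_ids)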
Application Load Balancer (ELBv2)
This one is harder. The Application Load Balancer has multiple Target Groups. Ports on instances are registered to a Target Group.
The only command that seems to list instances in a Target Group is describe_target_health(), which returns the instance and port (because one instance can serve multiple targets):
{
    'TargetHealthDescriptions': [
        {
            'Target': {
                'Id': 'i-0f76fade',
                'Port': 80,
            },
            'TargetHealth': {
                'Description': 'Given target group is not configured to receive traffic from ELB',
                'Reason': 'Target.NotInUse',
                'State': 'unused',
            },
        },
        {
            'HealthCheckPort': '80',
            'Target': {
                'Id': 'i-0f76fade',
                'Port': 80,
            },
            'TargetHealth': {
                'State': 'healthy',
            },
        },
    ],
    'ResponseMetadata': {
        '...': '...',
    },
}
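Putting that together for an Application Load Balancer, here is a sketch that walks load balancer -> target groups -> targets (all calls are on the elbv2 client; it assumes the target type is instance, since target IDs can also be IP addresses or Lambda ARNs):

import boto3

elbv2 = boto3.client('elbv2')

# Collect the instance IDs registered behind each Application Load Balancer
for lb in elbv2.describe_load_balancers()['LoadBalancers']:
    target_groups = elbv2.describe_target_groups(LoadBalancerArn=lb['LoadBalancerArn'])
    instance_ids = set()
    for tg in target_groups['TargetGroups']:
        health = elbv2.describe_target_health(TargetGroupArn=tg['TargetGroupArn'])
        for desc in health['TargetHealthDescriptions']:
            instance_ids.add(desc['Target']['Id'])
    print(lb['LoadBalancerName'], sorted(instance_ids))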
This is my solution, which I got to work. Thank you John.

elbList = boto3.client('elb')
ec2 = boto3.resource('ec2')

bals = elbList.describe_load_balancers()
for elb in bals['LoadBalancerDescriptions']:
    print('ELB DNS Name : ' + elb['DNSName'])
    for ec2Id in elb['Instances']:
        running_instances = ec2.instances.filter(
            Filters=[{'Name': 'instance-state-name', 'Values': ['running']},
                     {'Name': 'instance-id', 'Values': [ec2Id['InstanceId']]}])
        for instance in running_instances:
            print("    Instance : " + instance.public_dns_name)
This is a simple script to find the list of instances attached to an ELB; as input you provide the ELB name.

#!/usr/bin/env python3
import boto3

elb_name = input("What is the ELB name? :: ")
print("\n")
print("THE LIST OF INSTANCES ATTACHED TO THIS ELB IS \n")

elbList = boto3.client('elb')
bals = elbList.describe_load_balancers()

for elb in bals['LoadBalancerDescriptions']:
    if elb_name == elb['LoadBalancerName']:
        for inst in elb['Instances']:
            print(inst['InstanceId'])
I have a web-server, load-balancing, auto-scaling, VPC Beanstalk environment with an RDS DB instance attached.
I use the EB CLI's eb create with --database to create Beanstalk environments.
I'd like to use boto3's create_environment instead.
Although I'm using OptionSettings to define the RDS database configuration, it creates the environment without RDS.
Does anyone know how to create an environment with an RDS instance using boto3?
Here is the boto3 call I'm using, with only the RDS portion of my OptionSettings shown:
eb_client = boto3.client('elasticbeanstalk')

response = eb_client.create_environment(
    ApplicationName='APP',
    EnvironmentName='ENV',
    CNAMEPrefix='CNAME',
    Tier={
        'Name': 'WebServer',
        'Type': 'Standard'
    },
    SolutionStackName='64bit Amazon Linux ...',
    OptionSettings=[
        ...
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBEngineVersion',
            'Value': '5.6'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBPassword',
            'Value': 'PASSWORD_HASH'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBAllocatedStorage',
            'Value': '5'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBInstanceClass',
            'Value': 'db.t2.micro'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBEngine',
            'Value': 'mysql'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBUser',
            'Value': 'ebroot'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBDeletionPolicy',
            'Value': 'Snapshot'
        },
        ...
    ]
)