I have a web server Elastic Beanstalk environment with load balancing, auto scaling, a VPC, and an RDS DB instance attached.
I use the EB CLI (eb create with --database) to create Beanstalk environments.
I'd like to use boto3 create_environment instead.
Although I'm using OptionSettings to define the RDS database configuration, it creates the environment without RDS.
Does anyone know how to create an environment with an RDS instance using boto3?
Here is the boto3 call I'm using, showing only the RDS portion of my OptionSettings:
eb_client = boto3.client('elasticbeanstalk')
response = eb_client.create_environment(
    ApplicationName='APP',
    EnvironmentName='ENV',
    CNAMEPrefix='CNAME',
    Tier={
        'Name': 'WebServer',
        'Type': 'Standard'
    },
    SolutionStackName='64bit Amazon Linux ...',
    OptionSettings=[
        ...
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBEngineVersion',
            'Value': '5.6'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBPassword',
            'Value': 'PASSWORD_HASH'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBAllocatedStorage',
            'Value': '5'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBInstanceClass',
            'Value': 'db.t2.micro'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBEngine',
            'Value': 'mysql'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBUser',
            'Value': 'ebroot'
        },
        {
            'Namespace': 'aws:rds:dbinstance',
            'OptionName': 'DBDeletionPolicy',
            'Value': 'Snapshot'
        },
        ...
    ]
)
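To see which options the new environment actually ended up with, one way (a sketch using the boto3 elasticbeanstalk client; APP and ENV are the placeholders from the call above) is to read the resolved configuration back and look for the aws:rds:dbinstance namespace:
settings = eb_client.describe_configuration_settings(
    ApplicationName='APP',
    EnvironmentName='ENV'
)
# Print any RDS-related option settings that were actually applied.
for option in settings['ConfigurationSettings'][0]['OptionSettings']:
    if option['Namespace'] == 'aws:rds:dbinstance':
        print(option['OptionName'], option.get('Value'))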
I need to capture the device name value for Linux EC2 instances.
'Name': 'device',
'Value': "xvdg"
The disk name (device name) changes across EC2 Linux instances, e.g. sdh, xvdg, etc.
I am using the boto3 SDK.
Any help is much appreciated.
Current code (the device name is hard-coded):
cloudwatch.put_metric_alarm(
    AlarmName=prefix + ' - Elastic Search Disc % Availability is less than the configured threshold value - Please increase size of EBS Volume',
    ComparisonOperator='LessThanThreshold',
    EvaluationPeriods=3,
    MetricName='disk_used_percent',
    Namespace='CWAgent',
    Period=300,
    Statistic='Average',
    Threshold=75,
    AlarmActions=[snstopicarn],
    AlarmDescription='Standard Disc Alert - Elastic Search Disc % Availability is less than the configured threshold value - Please increase size of EBS Volume',
    Dimensions=[
        {
            'Name': 'InstanceId',
            'Value': instance["InstanceId"]
        },
        {
            'Name': 'path',
            'Value': "/mnt/elasticsearch"
        },
        {
            'Name': 'device',
            'Value': "xvdi"
        },
        {
            'Name': 'fstype',
            'Value': "ext4"
        },
    ]
)
You can use describe_instances and extract DeviceName from the response (see the boto3 documentation).
import boto3

instance_id = <instance_id>
client = boto3.client('ec2')
response = client.describe_instances(
    InstanceIds=[
        instance_id,
    ],
)
device_name = response['Reservations'][0]['Instances'][0]['BlockDeviceMappings'][0]['DeviceName']
print(device_name)  # /dev/xvda
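Note that BlockDeviceMappings returns the full device path (e.g. /dev/xvda), while the alarm above uses the short name (xvdi) for the CWAgent device dimension. A minimal sketch, assuming the dimension value is simply the path without the /dev/ prefix:
# Strip the '/dev/' prefix so the value matches the CWAgent 'device' dimension.
device = device_name.replace('/dev/', '')
# ...then use it in the alarm instead of the hard-coded value:
# {'Name': 'device', 'Value': device}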
When using describe_alarms_for_metric, the code would look like this (fill in the metric name, namespace, and dimensions of the alarm you are interested in):
response = cloudwatch.describe_alarms_for_metric(
    MetricName='string',
    Namespace='string',
    Statistic='SampleCount'|'Average'|'Sum'|'Minimum'|'Maximum',
    ExtendedStatistic='string',
    Dimensions=[
        {
            'Name': 'string',
            'Value': 'string'
        },
    ],
    Period=123,
    Unit='Seconds'|'Microseconds'|'Milliseconds'|'Bytes'|'Kilobytes'|'Megabytes'|'Gigabytes'|'Terabytes'|'Bits'|'Kilobits'|'Megabits'|'Gigabits'|'Terabits'|'Percent'|'Count'|'Bytes/Second'|'Kilobytes/Second'|'Megabytes/Second'|'Gigabytes/Second'|'Terabytes/Second'|'Bits/Second'|'Kilobits/Second'|'Megabits/Second'|'Gigabits/Second'|'Terabits/Second'|'Count/Second'|'None'
)
You can then pull the device dimension out of the matching alarm and reuse it in cloudwatch.put_metric_alarm(....):
for dimension in response['MetricAlarms'][0]['Dimensions']:
    if dimension['Name'] == 'device':
        print(dimension['Value'])  # xvdi
When I create a record in my hosted zone via the AWS Web Console, I can select the Routing Policy as "Simple".
When I try to create the same record programmatically via boto3, I seem to have no option to set a Routing Policy, and it is "Latency" by default.
What am I missing?
r53.change_resource_record_sets(
    HostedZoneId=hz_id,
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': root_domain,
                'Type': 'A',
                'Region': region,
                'AliasTarget': {
                    'DNSName': f's3-website.{region}.amazonaws.com',
                    'EvaluateTargetHealth': False,
                    'HostedZoneId': s3_hz_id,
                },
                'SetIdentifier': str(uuid.uuid4())
            }
        }]
    }
)
Removing Region and SetIdentifier works for me here. Route 53 decides the routing policy from the fields present on the record set: a record with Region and SetIdentifier is created as a latency routing record, while a record with none of those routing fields is a simple record.
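For reference, a minimal sketch of the simple-record version of the same call, which is just the question's snippet without Region and SetIdentifier:
r53.change_resource_record_sets(
    HostedZoneId=hz_id,
    ChangeBatch={
        'Changes': [{
            'Action': 'UPSERT',
            'ResourceRecordSet': {
                'Name': root_domain,
                'Type': 'A',
                # No Region, SetIdentifier, Weight, etc.,
                # so this becomes a simple routing policy record.
                'AliasTarget': {
                    'DNSName': f's3-website.{region}.amazonaws.com',
                    'EvaluateTargetHealth': False,
                    'HostedZoneId': s3_hz_id,
                },
            }
        }]
    }
)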
Is it possible to get AMI details like operating system type, operating system version, and the software that was used to build the AMI, without creating an EC2 instance from it?
I know that I can get the details by creating an EC2 instance from the AMI.
I want to get these details without creating an EC2 instance.
Sometimes. It all depends on who created the AMI. In general, an AMI provides the following information:
'Architecture': 'i386'|'x86_64'|'arm64',
'CreationDate': 'string',
'ImageId': 'string',
'ImageLocation': 'string',
'ImageType': 'machine'|'kernel'|'ramdisk',
'Public': True|False,
'KernelId': 'string',
'OwnerId': 'string',
'Platform': 'Windows',
'ProductCodes': [
    {
        'ProductCodeId': 'string',
        'ProductCodeType': 'devpay'|'marketplace'
    },
],
'RamdiskId': 'string',
'State': 'pending'|'available'|'invalid'|'deregistered'|'transient'|'failed'|'error',
'BlockDeviceMappings': [
    {
        'DeviceName': 'string',
        'VirtualName': 'string',
        'Ebs': {
            'DeleteOnTermination': True|False,
            'Iops': 123,
            'SnapshotId': 'string',
            'VolumeSize': 123,
            'VolumeType': 'standard'|'io1'|'gp2'|'sc1'|'st1',
            'Encrypted': True|False,
            'KmsKeyId': 'string'
        },
        'NoDevice': 'string'
    },
],
'Description': 'string',
'EnaSupport': True|False,
'Hypervisor': 'ovm'|'xen',
'ImageOwnerAlias': 'string',
'Name': 'string',
'RootDeviceName': 'string',
'RootDeviceType': 'ebs'|'instance-store',
'SriovNetSupport': 'string',
'StateReason': {
    'Code': 'string',
    'Message': 'string'
},
'Tags': [
    {
        'Key': 'string',
        'Value': 'string'
    },
],
'VirtualizationType': 'hvm'|'paravirtual'
So while you can get the architecture, unless the creator included a Name, Description, or Tags with the information you are looking for, you may be out of luck.
Yes. You can query an AMI using the CLI, the console, or the APIs.
An example CLI command to query an AMI by its AMI ID is below:
aws ec2 describe-images --region us-east-1 --image-ids ami-XXXXXXXXXX
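The boto3 equivalent is describe_images, which returns the fields listed above (a sketch; the region and image ID are the same placeholders as in the CLI example):
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')
response = ec2.describe_images(ImageIds=['ami-XXXXXXXXXX'])

# Inspect whatever metadata the AMI creator provided.
image = response['Images'][0]
print(image.get('Name'), image.get('Description'), image.get('Architecture'))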
I am trying to create a simple VM based on the example given here.
I want to add a custom service account to this VM.
My config looks something like this:
def GenerateConfig(context):
    """Create instance with disks."""
    resources = [{
        'type': 'compute.v1.instance',
        'name': 'vm-' + context.env['deployment'],
        'properties': {
            'zone': context.properties['zone'],
            'disks': [{
                'deviceName': 'boot',
                'type': 'PERSISTENT',
                'boot': True,
                'autoDelete': True,
                'initializeParams': {
                    'diskName': 'disk-' + context.env['deployment'],
                }
            }],
            'networkInterfaces': [{
                'network': '...',
                'subnetwork': '...',
                'no-address': True,
            }],
            'tags': {
                'items': [context.env['deployment']]
            },
            'service-account': ''.join(['custom-compute#',
                                        context.env['project'],
                                        '.iam.gserviceaccount.com']),
            'scopes': ['https://www.googleapis.com/auth/devstorage.read_only',
                       'https://www.googleapis.com/auth/logging.write',
                       'https://www.googleapis.com/auth/monitoring.write',
                       'https://www.googleapis.com/auth/trace.append']
        }
    }]
    return {'resources': resources}
I am able to successfully create the deployment. However, when I describe the newly created instance, it doesn't have any service account associated with the VM.
I couldn't find any example of adding a service account to a Deployment Manager template. I have also tried the "serviceAccount" key instead of "service-account", without any success.
Does anyone know what I am missing?
I found the answer in the Deployment Manager reference docs.
The required change was:
'serviceAccounts': [{
    'email': '....',
    'scopes': '...'
}]
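Put together with the question's template, the properties block would look roughly like this (a sketch: serviceAccounts replaces the service-account and scopes keys, the scopes list is the one from the question, and the email follows the standard name@project.iam.gserviceaccount.com form):
'properties': {
    # ... zone, disks, networkInterfaces, tags as in the question ...
    'serviceAccounts': [{
        'email': ''.join(['custom-compute@',
                          context.env['project'],
                          '.iam.gserviceaccount.com']),
        'scopes': ['https://www.googleapis.com/auth/devstorage.read_only',
                   'https://www.googleapis.com/auth/logging.write',
                   'https://www.googleapis.com/auth/monitoring.write',
                   'https://www.googleapis.com/auth/trace.append']
    }]
}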
I want to execute a spark-submit job on an AWS EMR cluster based on a file upload event on S3. I am using an AWS Lambda function to capture the event, but I have no idea how to submit a spark-submit job to the EMR cluster from the Lambda function.
Most of the answers I found talk about adding a step to the EMR cluster, but I do not know whether that step can fire "spark-submit --with args".
You can; I had to do the same thing last week!
Using boto3 for Python (other languages would definitely have a similar solution), you can either start a cluster with the step already defined, or attach a step to an already running cluster.
Defining the cluster with the step
import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    cluster_id = conn.run_job_flow(
        Name='ClusterName',
        ServiceRole='EMR_DefaultRole',
        JobFlowRole='EMR_EC2_DefaultRole',
        VisibleToAllUsers=True,
        LogUri='s3n://some-log-uri/elasticmapreduce/',
        ReleaseLabel='emr-5.8.0',
        Instances={
            'InstanceGroups': [
                {
                    'Name': 'Master nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'MASTER',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 1,
                },
                {
                    'Name': 'Slave nodes',
                    'Market': 'ON_DEMAND',
                    'InstanceRole': 'CORE',
                    'InstanceType': 'm3.xlarge',
                    'InstanceCount': 2,
                }
            ],
            'Ec2KeyName': 'key-name',
            'KeepJobFlowAliveWhenNoSteps': False,
            'TerminationProtected': False
        },
        Applications=[{
            'Name': 'Spark'
        }],
        Configurations=[{
            "Classification": "spark-env",
            "Properties": {},
            "Configurations": [{
                "Classification": "export",
                "Properties": {
                    "PYSPARK_PYTHON": "python35",
                    "PYSPARK_DRIVER_PYTHON": "python35"
                }
            }]
        }],
        BootstrapActions=[{
            'Name': 'Install',
            'ScriptBootstrapAction': {
                'Path': 's3://path/to/bootstrap.script'
            }
        }],
        Steps=[{
            'Name': 'StepName',
            'ActionOnFailure': 'TERMINATE_CLUSTER',
            'HadoopJarStep': {
                'Jar': 's3n://elasticmapreduce/libs/script-runner/script-runner.jar',
                'Args': [
                    "/usr/bin/spark-submit", "--deploy-mode", "cluster",
                    's3://path/to/code.file', '-i', 'input_arg',
                    '-o', 'output_arg'
                ]
            }
        }],
    )
    return "Started cluster {}".format(cluster_id)
Attaching a step to an already running cluster
As per here
import sys
import time

import boto3

def lambda_handler(event, context):
    conn = boto3.client("emr")
    # chooses the first cluster which is Running or Waiting
    # possibly can also choose by name or already have the cluster id
    clusters = conn.list_clusters()
    # choose the correct cluster
    clusters = [c["Id"] for c in clusters["Clusters"]
                if c["Status"]["State"] in ["RUNNING", "WAITING"]]
    if not clusters:
        sys.stderr.write("No valid clusters\n")
        sys.exit()
    # take the first relevant cluster
    cluster_id = clusters[0]
    # code location on your emr master node
    CODE_DIR = "/home/hadoop/code/"
    # spark configuration example
    step_args = ["/usr/bin/spark-submit", "--conf", "your-configuration",
                 CODE_DIR + "your_file.py", '--your-parameters', 'parameters']
    step = {
        "Name": "what_you_do-" + time.strftime("%Y%m%d-%H:%M"),
        'ActionOnFailure': 'CONTINUE',
        'HadoopJarStep': {
            'Jar': 's3n://elasticmapreduce/libs/script-runner/script-runner.jar',
            'Args': step_args
        }
    }
    action = conn.add_job_flow_steps(JobFlowId=cluster_id, Steps=[step])
    return "Added step: %s" % action
AWS Lambda function Python code if you want to execute a Spark jar by POSTing it to the cluster's Livy REST endpoint (the /batches API on port 8998):
import json

from botocore.vendored import requests

def lambda_handler(event, context):
    headers = {"content-type": "application/json"}
    url = 'http://ip-address.ec2.internal:8998/batches'
    payload = {
        # Application jar goes in 'file'; dependency jars are listed in 'jars'
        # (Livy batch API fields).
        'file': 's3://Bucket/Orchestration/SparkCode.jar',
        'jars': ['s3://Bucket/Orchestration/RedshiftJDBC41.jar',
                 's3://Bucket/Orchestration/mysql-connector-java-8.0.12.jar'],
        'className': 'Main Class Name',
        'args': [event.get('rootPath')]
    }
    res = requests.post(url, data=json.dumps(payload), headers=headers, verify=False)
    json_data = json.loads(res.text)
    return json_data.get('id')