Find the cost of each resource in AWS - amazon-web-services

I have a Python script to get the cost of each service for the last month, but I want the cost of each resource, using the tag attached to that resource. For example, I have the cost for the RDS service, but I am using two database instances, so I want the separate cost of each instance. The two database instances have different tags.
TAGS:
First database instance --> Key: Name Value:rds1
Second database instance --> Key: Name Value:rds2
My output should show the tag of the resource and its cost, for example:
rds1 - 15$
rds2 - 10$
Can anyone help me achieve this?
I have attached the output I got for cost based on service.
Output for cost based on service

Similar work can be found here: Boto3 get_cost_and_usage filter Getting zero cost based on tag.
Make sure that the tags you are listing are correct.
On top of this, you can see all of the tags created in your account, whether user-defined or AWS-managed, here: https://us-east-1.console.aws.amazon.com/billing/home#/tags.
With the boto3 Cost Explorer client, you can use the list_cost_allocation_tags function to get a list of cost allocation tags.
import boto3

start_date = '2022-07-01'
end_date = '2022-08-30'

client = boto3.client('ce')

tags_response = None
try:
    tags_response = client.list_cost_allocation_tags(
        Status='Inactive',  # 'Active'|'Inactive'
        # TagKeys=[
        #     'Key',
        # ],
        Type='UserDefined',  # 'AWSGenerated'|'UserDefined'
        # NextToken='string',
        # MaxResults=100,
    )
except Exception as e:
    print(e)

cost_allocation_tags = tags_response['CostAllocationTags']
print(cost_allocation_tags)

print("-" * 5 + ' Input TagValues with comma separation ' + "-" * 5)
for cost_allocation_tag in cost_allocation_tags:
    tag_key = cost_allocation_tag['TagKey']
    tag_type = cost_allocation_tag['Type']
    tag_status = cost_allocation_tag['Status']

    tag_values = str(input(
        f'TagKey: {tag_key}, Type: {tag_type}, Status: {tag_status} -> '))
    if tag_values == "":
        continue

    tag_values_parsed = tag_values.strip().split(',')
    if tag_values_parsed == []:
        continue

    cost_usage_response = None
    try:
        cost_usage_response = client.get_cost_and_usage(
            TimePeriod={
                'Start': start_date,
                'End': end_date
            },
            Metrics=['AmortizedCost'],
            Granularity='MONTHLY',  # 'DAILY'|'MONTHLY'|'HOURLY'
            Filter={
                'Tags': {
                    'Key': tag_key,
                    'Values': tag_values_parsed,
                    'MatchOptions': [
                        'EQUALS'  # 'EQUALS'|'ABSENT'|'STARTS_WITH'|'ENDS_WITH'|'CONTAINS'|'CASE_SENSITIVE'|'CASE_INSENSITIVE'
                    ]
                },
            },
            # GroupBy=[
            #     {
            #         'Type': 'SERVICE',  # 'DIMENSION'|'TAG'|'COST_CATEGORY'
            #         'Key': 'string',
            #     },
            # ],
        )
        print(cost_usage_response)
    except Exception as e:
        print(e)
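To get output in exactly the shape the question asks for (tag value and its cost), one option is to group the query by the tag key instead of filtering on individual values. A minimal sketch, assuming the Name tag has been activated as a cost allocation tag and reusing the dates above:
import boto3

client = boto3.client('ce')
response = client.get_cost_and_usage(
    TimePeriod={'Start': '2022-07-01', 'End': '2022-08-30'},
    Granularity='MONTHLY',
    Metrics=['AmortizedCost'],
    GroupBy=[{'Type': 'TAG', 'Key': 'Name'}],  # assumes 'Name' is an activated cost allocation tag
)
for result_by_time in response['ResultsByTime']:
    for group in result_by_time['Groups']:
        # group['Keys'] looks like ['Name$rds1']; split off the tag value
        tag_value = group['Keys'][0].split('$', 1)[-1] or '(untagged)'
        amount = group['Metrics']['AmortizedCost']['Amount']
        print(f"{tag_value} - {float(amount):.2f}$")
This prints one line per tag value, e.g. "rds1 - 15.00$" and "rds2 - 10.00$", with untagged spend grouped separately.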

Related

How to fetch/get Google cloud transfer job's run history details in python?

I have a few Google Cloud transfer jobs running in my GCP account, which transfer data from Azure to a GCS bucket.
As per this document - https://cloud.google.com/storage-transfer/docs/reference/rest/v1/transferJobs/get?apix_params=%7B%22jobName%22%3A%22transferJobs%2F213858246512856794%22%2C%22projectId%22%3A%22merlincloud-gcp-preprod%22%7D
the "get" method can fetch details of the job such as name, description, bucketName, status, includePrefixes, storageAccount, and so on.
Here's the sample output of the "get" method:
{
    "name": "transferJobs/<job_name>",
    "description": "<description given while creating job>",
    "projectId": "<project_id>",
    "transferSpec": {
        "gcsDataSink": {
            "bucketName": "<destination_bucket>"
        },
        "objectConditions": {
            "includePrefixes": [
                "<prefix given while creating job>"
            ],
            "lastModifiedSince": "2021-06-30T18:30:00Z"
        },
        "transferOptions": {},
        "azureBlobStorageDataSource": {
            "storageAccount": "<account_name>",
            "container": "<container_name>"
        }
    },
    "schedule": {
        "scheduleStartDate": {
            "year": 2021,
            "month": 7,
            "day": 1
        },
        "startTimeOfDay": {
            "hours": 13,
            "minutes": 45
        },
        "repeatInterval": "86400s"
    },
    "status": "ENABLED",
    "creationTime": "2021-07-01T06:08:19.392111916Z",
    "lastModificationTime": "2021-07-01T06:13:32.460934533Z",
    "latestOperationName": "transferOperations/transferJobs-<job_name>"
}
Now, how do I fetch the run history details of a particular job in Python?
By "run history details" I mean the metrics (data transferred, number of files, status, size, duration) displayed in the GTS console, as shown in the picture below.
I'm unfamiliar with the transfer service but I'm very familiar with GCP.
The only other resource that's provided by the service is transferOperations.
Does that provide the data you need?
If not (!), it's possible that Google hasn't exposed this functionality beyond the Console. This happens occasionally even though the intent is always to be (public) API first.
One way you can investigate is to check the browser's developer tools 'network' tab to see what REST API calls the Console is making to fulfill the request. Another way is to use the equivalent gcloud command and tack on --log-http to see the underlying REST API calls that way.
As #DazWilkin mentioned, I was able to fetch each job's run history details using the transferOperations - list API.
I wrote a Cloud Function to fetch GTS metrics by making API calls.
It first makes a transferJobs - list API call, fetches the list of jobs, and keeps only the details of the required jobs. It then makes a transferOperations call, passing the job name, to fetch the run history details.
Here's the code:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
from datetime import datetime
import logging

"""
requirements.txt
google-api-python-client==2.3.0
oauth2client==4.1.3
"""


class GTSMetrics:
    def __init__(self):
        self.project = "<your_gcp_project_name>"
        self.source_type_mapping = {"gcsDataSource": "Google Cloud Storage",
                                    "awsS3DataSource": "Amazon S3",
                                    "azureBlobStorageDataSource": "Azure Storage"}
        self.transfer_job_names = ["transferJobs/<your_job_name>"]
        self.credentials = GoogleCredentials.get_application_default()
        self.service = discovery.build('storagetransfer', 'v1', credentials=self.credentials)
        self.metric_values = {}

    def build_run_history_metrics(self, job=None):
        try:
            if job:
                operation_filters = {"projectId": self.project, "jobNames": [job['name']]}
                request = self.service.transferOperations().list(name='transferOperations',
                                                                 filter=operation_filters)
                while request is not None:
                    response = request.execute()
                    if 'operations' in response:
                        self.metric_values['total_runs'] = len(response['operations'])
                        metadata = response['operations'][0]['metadata']
                        status = metadata['status'] if 'status' in metadata else ""
                        start_time = metadata['startTime'] if 'startTime' in metadata else ""
                        end_time = metadata['endTime'] if 'endTime' in metadata else ""
                        start_time_object = datetime.strptime(start_time[:-4], "%Y-%m-%dT%H:%M:%S.%f")
                        end_time_object = datetime.strptime(end_time[:-4], "%Y-%m-%dT%H:%M:%S.%f")
                        gts_copy_duration = end_time_object - start_time_object
                        self.metric_values['latest_run_status'] = status
                        self.metric_values['latest_run_time'] = str(start_time_object)
                        self.metric_values['latest_run_errors'] = ""
                        self.metric_values['start_time'] = str(start_time_object)
                        self.metric_values['end_time'] = str(end_time_object)
                        self.metric_values['duration'] = gts_copy_duration.total_seconds()
                        if status == "FAILED":
                            if 'errorBreakdowns' in metadata:
                                errors = metadata['errorBreakdowns'][0]['errorCount']
                                error_code = metadata['errorBreakdowns'][0]['errorCode']
                                self.metric_values['latest_run_errors'] = f"{errors} - {error_code}"
                        elif status == "SUCCESS":
                            counters = metadata['counters']
                            data_bytes = counters['bytesCopiedToSink'] if 'bytesCopiedToSink' in counters else '0 B'
                            obj_from_src = str(counters['objectsFoundFromSource']) if 'objectsFoundFromSource' in counters else 0
                            obj_copied_sink = str(counters['objectsCopiedToSink']) if 'objectsCopiedToSink' in counters else 0
                            data_skipped_bytes = counters['bytesFromSourceSkippedBySync'] if 'bytesFromSourceSkippedBySync' in counters else '0 B'
                            data_skipped_files = counters['objectsFromSourceSkippedBySync'] if 'objectsFromSourceSkippedBySync' in counters else '0'
                            self.metric_values['data_transferred'] = data_bytes
                            self.metric_values['files_found_in_source'] = obj_from_src
                            self.metric_values['files_copied_to_sink'] = obj_copied_sink
                            self.metric_values['data_skipped_in_bytes'] = data_skipped_bytes
                            self.metric_values['data_skipped_files'] = data_skipped_files
                    break
                    # request = self.service.transferOperations().list_next(previous_request=request,
                    #                                                       previous_response=response)
        except Exception as e:
            logging.error(f"Exception in build_run_history_metrics - {str(e)}")

    def build_job_metrics(self, job):
        try:
            transfer_spec = list(job['transferSpec'].keys())
            source = ""
            source_type = ""
            if "gcsDataSource" in transfer_spec:
                source_type = self.source_type_mapping["gcsDataSource"]
                source = job['transferSpec']["gcsDataSource"]["bucketName"]
            elif "awsS3DataSource" in transfer_spec:
                source_type = self.source_type_mapping["awsS3DataSource"]
                source = job['transferSpec']["awsS3DataSource"]["bucketName"]
            elif "azureBlobStorageDataSource" in transfer_spec:
                source_type = self.source_type_mapping["azureBlobStorageDataSource"]

            frequency = "Once"
            schedule = list(job['schedule'].keys())
            if "repeatInterval" in schedule:
                interval = job['schedule']['repeatInterval']
                if interval == "86400s":
                    frequency = "Every day"
                elif interval == "604800s":
                    frequency = "Every week"
                else:
                    frequency = "Custom"

            prefix = ""
            if 'objectConditions' in transfer_spec:
                obj_con = job['transferSpec']['objectConditions']
                if 'includePrefixes' in obj_con:
                    prefix = job['transferSpec']['objectConditions']['includePrefixes'][0]

            self.metric_values['job_description'] = job['description']
            self.metric_values['job_name'] = job['name']
            self.metric_values['source_type'] = source_type
            self.metric_values['source'] = source
            self.metric_values['destination'] = job['transferSpec']['gcsDataSink']['bucketName']
            self.metric_values['frequency'] = frequency
            self.metric_values['prefix'] = prefix
        except Exception as e:
            logging.error(f"Exception in build_job_metrics - {str(e)}")

    def build_metrics(self):
        try:
            request = self.service.transferJobs().list(pageSize=None, pageToken=None, x__xgafv=None,
                                                       filter={"projectId": self.project})
            while request is not None:
                response = request.execute()
                for transfer_job in response['transferJobs']:
                    if transfer_job['name'] in self.transfer_job_names:
                        # fetch job details
                        self.build_job_metrics(job=transfer_job)
                        # fetch run history details for the job
                        self.build_run_history_metrics(job=transfer_job)
                request = self.service.transferJobs().list_next(previous_request=request, previous_response=response)
            logging.info(f"GTS Metrics - {str(self.metric_values)}")
        except Exception as e:
            logging.error(f"Exception in build_metrics - {str(e)}")


def build_gts_metrics(request):
    gts_metrics = GTSMetrics()
    gts_metrics.build_metrics()

AWS Creating RDS instance failed on VPCSecurityGroupIds

Forgive my ignorance, I am new to AWS.
When I attempt to run the following Python code sample, the CreateDBInstance call fails with an invalid security group error, but the group ID "appears" to be correct: the value is right, just enclosed in double quotes rather than single quotes, and the GroupName is blank. I have confirmed that the security group ID is correct. What do I need to pass into VpcSecurityGroupIds to avoid the failure? I have tried hardcoding both the group ID and the group name without success.
response = rds_client.create_db_instance(
    DBInstanceIdentifier=rds_identifier,
    DBName=db_name,
    DBInstanceClass='db.t2.micro',
    Engine='mariadb',
    MasterUsername='masteruser',
    MasterUserPassword='mymasterpassw0rd1!',
    VpcSecurityGroupIds=[sg_id_number],
    AllocatedStorage=20,
    Tags=[
        {
            'Key': 'POC-Email',
            'Value': admin_email
        },
        {
            'Key': 'Purpose',
            'Value': 'AWS Developer Study Guide Demo'
        }
    ]
)
I think you need to pass the security group ID enclosed in double quotes, for example:
VpcSecurityGroupIds=["sg-123456"]
Refer to: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.create_db_instance
You can run the code below on Python 3.7 (your RDS instance will be placed in the default VPC, so you need a security group from the default VPC):
import boto3

def lambda_handler(event, context):
    rds_client = boto3.client('rds')
    response = rds_client.create_db_instance(
        DBInstanceIdentifier="rds-identifier",
        DBName="db_name",
        DBInstanceClass='db.t2.micro',
        Engine='mariadb',
        MasterUsername='masteruser',
        MasterUserPassword='mymasterpassw0rd1!',
        VpcSecurityGroupIds=['sg-***'],  # security group in the default VPC
        AllocatedStorage=20,
        Tags=[
            {
                'Key': 'POC-Email',
                'Value': "admin_email"
            },
            {
                'Key': 'Purpose',
                'Value': 'AWS Developer Study Guide Demo'
            }
        ]
    )
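If you are not sure which security group IDs belong to the default VPC, a small lookup sketch along these lines (not part of the original answer, just one way you might find them) can run before the create_db_instance call:
import boto3

ec2_client = boto3.client('ec2')

# Find the default VPC, then list its security groups
vpcs = ec2_client.describe_vpcs()
default_vpc_id = next(v['VpcId'] for v in vpcs['Vpcs'] if v.get('IsDefault'))

groups = ec2_client.describe_security_groups(
    Filters=[{'Name': 'vpc-id', 'Values': [default_vpc_id]}]
)
sg_ids = [g['GroupId'] for g in groups['SecurityGroups']]
print(sg_ids)  # pass one of these IDs in VpcSecurityGroupIds=['sg-...']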

How to start an EC2 instance through Apache Guacamole?

In my project, some EC2 instances will be shut down. These instances will only be connected when the user needs to work.
Users will access the instances using a clientless remote desktop gateway called Apache Guacamole.
If an instance is stopped, how can it be started through Apache Guacamole?
Home Screen
Guacamole is, essentially, an RDP/VNC/SSH client, and I don't think you can get the instances to start up by themselves, since there is no wake-on-LAN feature or anything like it out of the box.
I used to have a similar issue; we always kept one instance up and running and used it to run the AWS CLI to start up the instances we wanted.
Alternatively, you could modify the calls from Guacamole to invoke a Lambda function that checks whether the instance you wish to connect to is running and starts it if not; but then you would have to deal with the timeout for starting a session from Guacamole (not sure whether this is a configurable value from the web admin console or files), or set up another way of getting feedback for when your instance becomes available.
There was a discussion on the Guacamole mailing list regarding a Wake-on-LAN feature, and one approach was proposed. It is based on a script that monitors connection attempts and launches instances when needed.
Although it is more of a workaround, maybe it will be helpful for you. For a proper solution, it is possible to develop an extension.
You may find the discussion and a link to the script here:
http://apache-guacamole-general-user-mailing-list.2363388.n4.nabble.com/guacamole-and-wake-on-LAN-td7526.html
http://apache-guacamole-general-user-mailing-list.2363388.n4.nabble.com/Wake-on-lan-function-working-td2832.html
There is unfortunately no very simple solution. The Lambda approach is the way we solved it.
Guacamole has a feature that logs accesses to CloudWatch Logs.
Next, we need the connection_id and the username/id as tags on the instance. We assign these tags automatically with our back-end tool when starting the instances.
Now, when a user connects to a machine, a log entry is written to CloudWatch Logs.
A filter is applied to pick out only login attempts and trigger the Lambda.
The triggered Lambda script checks whether there is a stopped instance whose tags match the current connection attempt, plus other constraints, for example whether the instance has expired.
If so, the instance is started, and in roughly 40 seconds the user is able to connect.
The Lambda script looks like this:
# receive information from the CloudWatch event, parse it, and call the function that starts instances
import re
import boto3
import datetime
from conn_inc import *
from start_instance import *

def lambda_handler(event, context):
    # Variables
    region = "eu-central-1"
    cw_region = "eu-central-1"
    # Clients
    ec2Client = boto3.client('ec2')
    # Session
    session = boto3.Session(region_name=region)
    # Resource
    ec2 = session.resource('ec2', region)
    print(event)
    # print("awsdata: ", event['awslogs']['data'])
    userdata = {}
    userdata = get_userdata(event['awslogs']['data'])
    print("logDataUserName: ", userdata["logDataUserName"], "connection_ids: ", userdata["startConnectionId"])
    start_instance(ec2, ec2Client, userdata["logDataUserName"], userdata["startConnectionId"])
import boto3
import datetime
from datetime import date
import gzip
import json
import base64
from start_related_instances import *

def start_instance(ec2, ec2Client, logDataUserName, startConnectionId):
    # Boto 3
    # Use the filter() method of the instances collection to retrieve
    # all stopped EC2 instances which have the tag connection_ids.
    instances = ec2.instances.filter(
        Filters=[
            {
                'Name': 'instance-state-name',
                'Values': ['stopped'],
            },
            {
                'Name': 'tag:connection_ids',
                'Values': [f"*{startConnectionId}*"],
            }
        ]
    )
    # print("instances: ", list(instances))
    # check if instances are found
    if len(list(instances)) == 0:
        print("No instances with connectionId ", startConnectionId, " found that is stopped.")
    else:
        for instance in instances:
            print(instance.id, instance.instance_type)
            expire = ""
            connectionName = ""
            for tag in instance.tags:
                if tag["Key"] == 'expire':  # get expiration date
                    expire = tag["Value"]
            if expire == "":
                print("Start instance: ", instance.id, ", no expire found")
                ec2Client.start_instances(
                    InstanceIds=[instance.id]
                )
            else:
                print("Check if instance already expired.")
                splitDate = expire.split(".")
                expire = datetime.datetime(int(splitDate[2]), int(splitDate[1]), int(splitDate[0]))
                args = date.today().timetuple()[:6]
                today = datetime.datetime(*args)
                if expire >= today:
                    print("Instance is not yet expired.")
                    print("Start instance: ", instance.id, "expire: ", expire, ", today: ", today)
                    ec2Client.start_instances(
                        InstanceIds=[instance.id]
                    )
                else:
                    print("Instance not started, because it already expired: ", instance.id, "expiration: ", f"{expire}", "today:", f"{today}")

def get_userdata(cw_data):
    compressed_payload = base64.b64decode(cw_data)
    uncompressed_payload = gzip.decompress(compressed_payload)
    payload = json.loads(uncompressed_payload)
    message = ""
    log_events = payload['logEvents']
    for log_event in log_events:
        message = log_event['message']
        # print(f'LogEvent: {log_event}')
    # regex = r"\'.*?\'"
    # m = re.search(str(regex), str(message), re.DOTALL)
    logDataUserName = message.split('"')[1]  # get the username logged into guacamole, e.g. "Adm_EKoester_1134faD"
    startConnectionId = message.split('"')[3]  # get the connection id of the connection which should be started
    # create dict
    dict = {}
    dict["connected"] = False
    dict["disconnected"] = False
    dict["error"] = True
    dict["guacamole"] = payload["logStream"]
    dict["logDataUserName"] = logDataUserName
    dict["startConnectionId"] = startConnectionId
    # check for connected or disconnected
    ind_connected = message.find("connected to connection")
    ind_disconnected = message.find("disconnected from connection")
    # print("ind_connected: ", ind_connected)
    # print("ind_disconnected: ", ind_disconnected)
    if ind_connected > 0 and not ind_disconnected > 0:
        dict["connected"] = True
        dict["error"] = False
    elif ind_disconnected > 0 and not ind_connected > 0:
        dict["disconnected"] = True
        dict["error"] = False
    return dict
The CloudWatch Logs trigger for the Lambda looks like this:
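The screenshot is not reproduced here. As a rough sketch of the same wiring, a CloudWatch Logs subscription filter on the Guacamole log group can be pointed at the Lambda with boto3; the log group name, filter pattern, and Lambda ARN below are hypothetical placeholders, not values from the original setup:
import boto3

logs_client = boto3.client('logs')

# Hypothetical names; adjust to your own log group and Lambda
log_group_name = '/guacamole/access-log'
lambda_arn = 'arn:aws:lambda:eu-central-1:123456789012:function:start-guacamole-instance'

logs_client.put_subscription_filter(
    logGroupName=log_group_name,
    filterName='guacamole-connection-attempts',
    # Forward only messages that contain "connected to connection"
    filterPattern='"connected to connection"',
    destinationArn=lambda_arn,
)
Note that the Lambda also needs a resource-based permission (lambda add_permission) allowing logs.amazonaws.com to invoke it.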

AWS cost and usage for all the instances

I would like to get the usage cost report of each instance in my AWS account for a period of time.
I'm able to get linked_account_id and service in the output, but I need instance_id as well. Please help.
import argparse
import boto3
import datetime

cd = boto3.client('ce', 'ap-south-1')

results = []
token = None
while True:
    if token:
        kwargs = {'NextPageToken': token}
    else:
        kwargs = {}
    data = cd.get_cost_and_usage(
        TimePeriod={'Start': '2019-01-01', 'End': '2019-06-30'},
        Granularity='MONTHLY',
        Metrics=['BlendedCost', 'UnblendedCost'],
        GroupBy=[
            {'Type': 'DIMENSION', 'Key': 'LINKED_ACCOUNT'},
            {'Type': 'DIMENSION', 'Key': 'SERVICE'}
        ], **kwargs)
    results += data['ResultsByTime']
    token = data.get('NextPageToken')
    if not token:
        break

print('\t'.join(['Start_date', 'End_date', 'LinkedAccount', 'Service', 'blended_cost', 'unblended_cost', 'Unit', 'Estimated']))
for result_by_time in results:
    for group in result_by_time['Groups']:
        blended_cost = group['Metrics']['BlendedCost']['Amount']
        unblended_cost = group['Metrics']['UnblendedCost']['Amount']
        unit = group['Metrics']['UnblendedCost']['Unit']
        print(result_by_time['TimePeriod']['Start'], '\t',
              result_by_time['TimePeriod']['End'], '\t',
              '\t'.join(group['Keys']), '\t',
              blended_cost, '\t',
              unblended_cost, '\t',
              unit, '\t',
              result_by_time['Estimated'])
As far as I know, Cost Explorer can't report usage per instance. There is a feature, Cost and Usage Reports, which produces a detailed billing report as dump files, and in these files you can see the instance ID.
The report can also be connected to AWS Athena. Once you have done that, you can query the files directly in Athena.
Here is my Presto example:
select
lineitem_resourceid,
sum(lineitem_unblendedcost) as unblended_cost,
sum(lineitem_blendedcost) as blended_cost
from
<table>
where
lineitem_productcode = 'AmazonEC2' and
product_operation like 'RunInstances%'
group by
lineitem_resourceid
The result is
lineitem_resourceid unblended_cost blended_cost
i-***************** 279.424 279.424
i-***************** 139.948 139.948
i-******** 68.198 68.198
i-***************** 3.848 3.848
i-***************** 0.013 0.013
where the resource ID contains the instance ID. The cost is summed over all usage in this month. For other types of product_operation, it will contain different resource IDs.
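If you want to run that query from Python instead of the Athena console, here is a minimal boto3 sketch; the database, table, and results bucket names are placeholders for whatever your Cost and Usage Report integration created:
import time
import boto3

athena = boto3.client('athena', region_name='ap-south-1')

query = """
select lineitem_resourceid,
       sum(lineitem_unblendedcost) as unblended_cost,
       sum(lineitem_blendedcost) as blended_cost
from cur_database.cur_table
where lineitem_productcode = 'AmazonEC2'
  and product_operation like 'RunInstances%'
group by lineitem_resourceid
"""

execution = athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={'Database': 'cur_database'},                 # placeholder database
    ResultConfiguration={'OutputLocation': 's3://my-athena-results/'},  # placeholder bucket
)

# Poll until the query finishes, then print the result rows
query_id = execution['QueryExecutionId']
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)['QueryExecution']['Status']['State']
    if state in ('SUCCEEDED', 'FAILED', 'CANCELLED'):
        break
    time.sleep(2)

if state == 'SUCCEEDED':
    rows = athena.get_query_results(QueryExecutionId=query_id)['ResultSet']['Rows']
    for row in rows[1:]:  # skip the header row
        print([col.get('VarCharValue') for col in row['Data']])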
You can add an individual tag to all instances (e.g. Id) and then group by that tag:
GroupBy=[
    {
        'Type': 'TAG',
        'Key': 'Id'
    },
],
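Note that the tag has to be activated as a cost allocation tag before Cost Explorer can group by it. A minimal sketch of the complete call with this grouping, reusing the time period from the script above (the Id tag key is just an example):
import boto3

cd = boto3.client('ce', 'ap-south-1')
data = cd.get_cost_and_usage(
    TimePeriod={'Start': '2019-01-01', 'End': '2019-06-30'},
    Granularity='MONTHLY',
    Metrics=['BlendedCost', 'UnblendedCost'],
    GroupBy=[
        {'Type': 'TAG', 'Key': 'Id'},  # 'Id' must be an activated cost allocation tag
    ],
)
for result_by_time in data['ResultsByTime']:
    for group in result_by_time['Groups']:
        # Keys look like ['Id$i-0abc...'] for tagged resources and ['Id$'] for untagged cost
        print(group['Keys'][0], group['Metrics']['UnblendedCost']['Amount'])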

cannot make more than one spot request using boto3 in one day

Greetings,
My code works as long as it's the first spot request of the day. If I terminate the instance and make another spot request, it just gives me back my old request.
Is this something with my code or with AWS? Is there a workaround?
I have tried cloning my AMI and then using the cloned AMI, changing the price, or changing the number of instances in the spec,
but it is still not working.
#!/home/makayo/.virtualenvs/boto3/bin/python
"""
http://boto3.readthedocs.io/en/latest/reference/services/ec2.html#EC2.Client.describe_spot_instance_requests
"""
import boto3
import time

myid =
s = boto3.Session()
ec2 = s.resource('ec2')
client = boto3.client('ec2')
images = list(ec2.images.filter(Owners=[myid]))

def getdate(datestr):
    ix = datestr.replace('T', ' ')
    ix = ix[0:len(ix) - 5]
    idx = time.strptime(ix, '%Y-%m-%d %H:%M:%S')
    return idx

zz = sorted(images, key=lambda images: getdate(images.creation_date))
# last_ami
myAmi = zz[len(zz) - 1]
# earliest
# myAmi = latestAmi = zz[0]

"""
[{u'DeviceName': '/dev/sda1', u'Ebs': {u'DeleteOnTermination': True, u'Encrypted': False, u'SnapshotId': 'snap-d8de3adb', u'VolumeSize': 50, u'VolumeType': 'gp2'}}]
"""

# myimageId = 'ami-42870a55'
myimageId = myAmi.id
print(myimageId)

mysubnetId =
myinstanceType = 'c4.4xlarge'
mykeyName = 'spot-coursera'
# make sure to adjust this, but don't do multiple in a loop as it can fail!!!
mycount = 2
# make sure to adjust this, but don't do multiple in a loop as it can fail!!!
myprice = '5.0'
mytype = 'one-time'
myipAddr =
myallocId = ''
mysecurityGroups = ['']
# mydisksize = 70
mygroupId =
# mygroupId =
myzone = 'us-east-1a'
myvpcId = 'vpc-503dba37'
# latestAmi.block_device_mappings[0]['Ebs']['VolumeSize'] = mydisksize
# diskSpec = latestAmi.block_device_mappings[0]['Ebs']['VolumeSize']

response2 = client.request_spot_instances(
    DryRun=False,
    SpotPrice=myprice,
    ClientToken='string',
    InstanceCount=1,
    Type='one-time',
    LaunchSpecification={
        'ImageId': myimageId,
        'KeyName': mykeyName,
        'SubnetId': mysubnetId,
        # 'SecurityGroups': mysecurityGroups,
        'InstanceType': myinstanceType,
        'Placement': {
            'AvailabilityZone': myzone,
        }
    }
)
# print(response2)
myrequestId = response2['SpotInstanceRequests'][0]['SpotInstanceRequestId']

XX = True
while XX:
    response3 = client.describe_spot_instance_requests(
        # DryRun=True,
        SpotInstanceRequestIds=[
            myrequestId,
        ]
        # Filters=[
        #     {
        #         'Name': 'string',
        #         'Values': [
        #             'string',
        #         ]
        #     },
        # ]
    )
    # print(response3)
    request_status = response3['SpotInstanceRequests'][0]['Status']['Code']
    if request_status == 'fulfilled':
        print(myrequestId, request_status)
        XX = False
    elif 'pending' in request_status:
        print(myrequestId, request_status)
        time.sleep(5)
    else:
        XX = False
        print(myrequestId, request_status)

"""
instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
while len(list(instances)) == 0:
    instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])
for instance in instances:
    print(instance.id, instance.instance_type)
    response = instance.modify_attribute(Groups=[mygroupId])
    print(response)
"""
This is wrong:
ClientToken='string',
Or, at least, it's wrong most of the time, as you should now realize.
The purpose of the token is to ensure that EC2 does not process the same request twice, due to retries, bugs, or a multitude of other reasons.
It doesn't matter (within reason -- 64 characters, max, ASCII, case-sensitive) what you send here, but you need to send something different with each unique request.
A client token is a unique, case-sensitive string of up to 64 ASCII characters. It is included in the response when you describe the instance. A client token is valid for at least 24 hours after the termination of the instance. You should not reuse a client token in another call later on.
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/Run_Instance_Idempotency.html
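A minimal sketch of the fix, generating a fresh token per request with the standard uuid module; only the ClientToken line differs from the question's call, and the launch specification values below are placeholders:
import uuid
import boto3

client = boto3.client('ec2')

response = client.request_spot_instances(
    SpotPrice='5.0',
    ClientToken=str(uuid.uuid4()),  # unique per request, so repeated runs create new spot requests
    InstanceCount=1,
    Type='one-time',
    LaunchSpecification={
        'ImageId': 'ami-xxxxxxxx',      # placeholder
        'KeyName': 'spot-coursera',
        'SubnetId': 'subnet-xxxxxxxx',  # placeholder
        'InstanceType': 'c4.4xlarge',
        'Placement': {'AvailabilityZone': 'us-east-1a'},
    },
)
print(response['SpotInstanceRequests'][0]['SpotInstanceRequestId'])
You can also simply omit ClientToken entirely and let EC2 treat every call as a new request.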