boto3 EC2 script email: One email for all status checks

I am getting one email per instance that fails status checks. I want to get one email for all status checks.
Here is my code:
import boto3
import smtplib

client = boto3.client("ec2")
clientsns = boto3.client("sns")

status = client.describe_instance_status(IncludeAllInstances=True)
#failed_instances = []
for i in status["InstanceStatuses"]:
    # failed_instances.append(i[{'Instance'})]
    in_status = i['InstanceStatus']['Details'][0]['Status']
    sys_status = i['SystemStatus']['Details'][0]['Status']
    # check statuses of failed instances
    if (in_status != 'passed') or (sys_status != 'passed'):
        msg = f'The following instances failed status checks, {i["InstanceId"]}'
        clientsns.publish(TopicArn='arn:aws:sns:us-west-1:462518063038:test', Message=msg)

Try something like this:
import boto3
import botocore
from boto3 import Session

boto3.setup_default_session(profile_name='account2')

def get_tag(tags, key='Name'):
    if not tags:
        return ''
    for tag in tags:
        if tag['Key'] == key:
            return tag['Value']
    return ''

client = boto3.client("ec2")
conn = boto3.resource('ec2')

#instances = conn.instances.filter()
instances = conn.instances.filter(
    Filters=[{'Name': 'instance-state-name', 'Values': ['running']}])

filter_for = {
    "running": [{"Name": "instance-state-name", "Values": ["running"]}],
}
ec2instance = client.describe_instance_status(IncludeAllInstances=True, Filters=filter_for["running"])

failed_instances = []
for i in ec2instance["InstanceStatuses"]:
    in_status = i['InstanceStatus']['Details'][0]['Status']
    sys_status = i['SystemStatus']['Details'][0]['Status']
    # collect every instance that failed either status check
    if (in_status != 'passed') or (sys_status != 'passed'):
        failed_instances.append(i["InstanceId"])

if len(failed_instances) > 0:
    # new_line = '\n'
    # msg = f'The following instances failed status checks:{new_line} {new_line.join(failed_instances)}'
    # #msg = f'The following instances failed status checks, {failed_instances}'
    # clientsns.publish(TopicArn='arn:aws:sns:us-west-1:462518063038:test', Message=msg)
    for j in failed_instances:
        instance = [x for x in list(instances) if x.id == j][0]
        instance_name = get_tag(instance.tags)
        print(instance_name, instance.id, instance.instance_type)
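If you want the single SNS email rather than the printout, the commented-out lines above are the idea: join the collected IDs into one message and publish once. A minimal sketch, reusing the topic ARN from the question:

import boto3

clientsns = boto3.client("sns")

# One publish call for the whole batch instead of one per instance.
if failed_instances:
    msg = 'The following instances failed status checks:\n' + '\n'.join(failed_instances)
    clientsns.publish(TopicArn='arn:aws:sns:us-west-1:462518063038:test', Message=msg)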

Related

How can I integrate SSLCommerz into my Django app?

I have created an ecommerce site, and now I want to integrate a payment method. By adding SSLCommerz to my site, every payment method used in Bangladesh will be taken care of. But I don't know how to add it to my Django app. Please help!
They said something about a session, but I did not get it. Here is their GitHub repo: https://github.com/sslcommerz/SSLCommerz-Python
There is a wrapper library called "sslcommerz-lib". To use it, you'll first need an account on their sandbox environment. After completing the registration, collect your user credentials from the email they send.
First, install the package with pip install sslcommerz-lib
Import the library in your module: from sslcommerz_lib import SSLCOMMERZ
Instantiate an object of the SSLCOMMERZ class with your sandbox credentials:
sslcz = SSLCOMMERZ({ 'store_id': <your_store_id>, 'store_pass': <your_password>, 'issandbox': True })
Build a dictionary with some info about the transaction and customer. In a real application, most of this data will be collected from user input.
data = {
    'total_amount': "100.26",
    'currency': "BDT",
    'tran_id': "tran_12345",
    'success_url': "http://127.0.0.1:8000/payment-successful",  # if the transaction is successful, the user will be redirected here
    'fail_url': "http://127.0.0.1:8000/payment-failed",  # if the transaction fails, the user will be redirected here
    'cancel_url': "http://127.0.0.1:8000/payment-cancelled",  # if the user cancels the transaction, they will be redirected here
    'emi_option': "0",
    'cus_name': "test",
    'cus_email': "test@test.com",
    'cus_phone': "01700000000",
    'cus_add1': "customer address",
    'cus_city': "Dhaka",
    'cus_country': "Bangladesh",
    'shipping_method': "NO",
    'multi_card_name': "",
    'num_of_item': 1,
    'product_name': "Test",
    'product_category': "Test Category",
    'product_profile': "general",
}
Get the API response:
response = sslcz.createSession(data)
After doing chores like updating the db etc., redirect the user to the 'GatewayPageURL' from the response we got earlier:
from django.shortcuts import redirect
return redirect(response['GatewayPageURL'])
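Putting it all together in a Django view, a minimal sketch might look like this (the view name is hypothetical, and the credentials and transaction values are the placeholders from above):

from django.shortcuts import redirect
from sslcommerz_lib import SSLCOMMERZ

def start_payment(request):  # hypothetical view name
    sslcz = SSLCOMMERZ({'store_id': '<your_store_id>',  # placeholder credentials
                        'store_pass': '<your_password>',
                        'issandbox': True})
    data = {
        'total_amount': "100.26",
        'currency': "BDT",
        'tran_id': "tran_12345",  # should be unique per transaction
        'success_url': "http://127.0.0.1:8000/payment-successful",
        'fail_url': "http://127.0.0.1:8000/payment-failed",
        'cancel_url': "http://127.0.0.1:8000/payment-cancelled",
        'emi_option': "0",
        'cus_name': "test",
        'cus_email': "test@test.com",
        'cus_phone': "01700000000",
        'cus_add1': "customer address",
        'cus_city': "Dhaka",
        'cus_country': "Bangladesh",
        'shipping_method': "NO",
        'num_of_item': 1,
        'product_name': "Test",
        'product_category': "Test Category",
        'product_profile': "general",
    }
    response = sslcz.createSession(data)
    # update your db / order state here, then send the user to the gateway
    return redirect(response['GatewayPageURL'])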
SSLCOMMERZ - Python (sslcommerz-lib)
Note: when using this wrapper with the sandbox environment, set issandbox to True; for the live environment, set issandbox to False. (Details: Test or Sandbox Account.)
settings = { 'store_id': 'testbox', 'store_pass': 'qwerty', 'issandbox': True }
sslcommerz = SSLCOMMERZ(settings)
Installation
pip install sslcommerz-lib
Authentication Keys
You can find your store_id and store_pass at the API Documentation Page. Create an account on SSLCOMMERZ, log in and visit this link: https://developer.sslcommerz.com/registration/
Usage
Create an Initial Payment Request Session
from sslcommerz_lib import SSLCOMMERZ
settings = { 'store_id': 'testbox', 'store_pass': 'qwerty', 'issandbox': True }
sslcz = SSLCOMMERZ(settings)
post_body = {}
post_body['total_amount'] = 100.26
post_body['currency'] = "BDT"
post_body['tran_id'] = "12345"
post_body['success_url'] = "your success url"
post_body['fail_url'] = "your fail url"
post_body['cancel_url'] = "your cancel url"
post_body['emi_option'] = 0
post_body['cus_name'] = "test"
post_body['cus_email'] = "test@test.com"
post_body['cus_phone'] = "01700000000"
post_body['cus_add1'] = "customer address"
post_body['cus_city'] = "Dhaka"
post_body['cus_country'] = "Bangladesh"
post_body['shipping_method'] = "NO"
post_body['multi_card_name'] = ""
post_body['num_of_item'] = 1
post_body['product_name'] = "Test"
post_body['product_category'] = "Test Category"
post_body['product_profile'] = "general"
response = sslcz.createSession(post_body) # API response
print(response)
# Need to redirect user to response['GatewayPageURL']
Validate payment with IPN
from sslcommerz_lib import SSLCOMMERZ
settings = { 'store_id': 'test_testemi', 'store_pass': 'test_testemi#ssl', 'issandbox': True }
sslcz = SSLCOMMERZ(settings)
post_body = {}
post_body['tran_id'] = '5E121A0D01F92'
post_body['val_id'] = '200105225826116qFnATY9sHIwo'
post_body['amount'] = "10.00"
post_body['card_type'] = "VISA-Dutch Bangla"
post_body['store_amount'] = "9.75"
post_body['card_no'] = "418117XXXXXX6675"
post_body['bank_tran_id'] = "200105225825DBgSoRGLvczhFjj"
post_body['status'] = "VALID"
post_body['tran_date'] = "2020-01-05 22:58:21"
post_body['currency'] = "BDT"
post_body['card_issuer'] = "TRUST BANK, LTD."
post_body['card_brand'] = "VISA"
post_body['card_issuer_country'] = "Bangladesh"
post_body['card_issuer_country_code'] = "BD"
post_body['store_id'] = "test_testemi"
post_body['verify_sign'] = "d42fab70ae0bcbda5280e7baffef60b0"
post_body['verify_key'] = "amount,bank_tran_id,base_fair,card_brand,card_issuer,card_issuer_country,card_issuer_country_code,card_no,card_type,currency,currency_amount,currency_rate,currency_type,risk_level,risk_title,status,store_amount,store_id,tran_date,tran_id,val_id,value_a,value_b,value_c,value_d"
post_body['verify_sign_sha2'] = "02c0417ff467c109006382d56eedccecd68382e47245266e7b47abbb3d43976e"
post_body['currency_type'] = "BDT"
post_body['currency_amount'] = "10.00"
post_body['currency_rate'] = "1.0000"
post_body['base_fair'] = "0.00"
post_body['value_a'] = ""
post_body['value_b'] = ""
post_body['value_c'] = ""
post_body['value_d'] = ""
post_body['risk_level'] = "0"
post_body['risk_title'] = "Safe"
if sslcz.hash_validate_ipn(post_body):
    response = sslcz.validationTransactionOrder(post_body['val_id'])
    print(response)
else:
    print("Hash validation failed")
Get the status or details of a Payment Request by sessionkey
from sslcommerz_lib import SSLCOMMERZ
settings = { 'store_id': 'testbox', 'store_pass': 'qwerty', 'issandbox': True }
sslcz = SSLCOMMERZ(settings)
sessionkey = 'A8EF93B75B8107E4F36049E80B4F9149'
response = sslcz.transaction_query_session(sessionkey)
print(response)
Get the status or details of a Payment Request by tranid
from sslcommerz_lib import SSLCOMMERZ
settings = { 'store_id': 'testbox', 'store_pass': 'qwerty', 'issandbox': True }
sslcz = SSLCOMMERZ(settings)
tranid = '59C2A4F6432F8'
response = sslcz.transaction_query_tranid(tranid)
print(response)

Cross Account Cloudtrail log transfer through Cloudwatch and Kinesis data stream

I am using CloudWatch subscriptions to send the CloudTrail logs of one account to another. The account receiving the logs has a Kinesis data stream, which receives the logs from the CloudWatch subscription and invokes the standard Lambda function provided by AWS to parse and store the logs in an S3 bucket of the receiving account.
The log files written to the S3 bucket look like this:
{"eventVersion":"1.08","userIdentity":{"type":"AssumedRole","principalId":"AA:i-096379450e69ed082","arn":"arn:aws:sts::34502sdsdsd:assumed-role/RDSAccessRole/i-096379450e69ed082","accountId":"34502sdsdsd","accessKeyId":"ASIAVAVKXAXXXXXXXC","sessionContext":{"sessionIssuer":{"type":"Role","principalId":"AROAVAVKXAKDDDDD","arn":"arn:aws:iam::3450291sdsdsd:role/RDSAccessRole","accountId":"345029asasas","userName":"RDSAccessRole"},"webIdFederationData":{},"attributes":{"mfaAuthenticated":"false","creationDate":"2021-04-27T04:38:52Z"},"ec2RoleDelivery":"2.0"}},"eventTime":"2021-04-27T07:24:20Z","eventSource":"ssm.amazonaws.com","eventName":"ListInstanceAssociations","awsRegion":"us-east-1","sourceIPAddress":"188.208.227.188","userAgent":"aws-sdk-go/1.25.41 (go1.13.15; linux; amd64) amazon-ssm-agent/","requestParameters":{"instanceId":"i-096379450e69ed082","maxResults":20},"responseElements":null,"requestID":"a5c63b9d-aaed-4a3c-9b7d-a4f7c6b774ab","eventID":"70de51df-c6df-4a57-8c1e-0ffdeb5ac29d","readOnly":true,"resources":[{"accountId":"34502914asasas","ARN":"arn:aws:ec2:us-east-1:3450291asasas:instance/i-096379450e69ed082"}],"eventType":"AwsApiCall","managementEvent":true,"eventCategory":"Management","recipientAccountId":"345029149342"}
{"eventVersion":"1.08","userIdentity":{"type":"AssumedRole","principalId":"AROAVAVKXAKPKZ25XXXX:AmazonMWAA-airflow","arn":"arn:aws:sts::3450291asasas:assumed-role/dev-1xdcfd/AmazonMWAA-airflow","accountId":"34502asasas","accessKeyId":"ASIAVAVKXAXXXXXXX","sessionContext":{"sessionIssuer":{"type":"Role","principalId":"AROAVAVKXAKPKZXXXXX","arn":"arn:aws:iam::345029asasas:role/service-role/AmazonMWAA-dlp-dev-1xdcfd","accountId":"3450291asasas","userName":"dlp-dev-1xdcfd"},"webIdFederationData":{},"attributes":{"mfaAuthenticated":"false","creationDate":"2021-04-27T07:04:08Z"}},"invokedBy":"airflow.amazonaws.com"},"eventTime":"2021-04-27T07:23:46Z","eventSource":"logs.amazonaws.com","eventName":"CreateLogStream","awsRegion":"us-east-1","sourceIPAddress":"airflow.amazonaws.com","userAgent":"airflow.amazonaws.com","errorCode":"ResourceAlreadyExistsException","errorMessage":"The specified log stream already exists","requestParameters":{"logStreamName":"scheduler.py.log","logGroupName":"dlp-dev-DAGProcessing"},"responseElements":null,"requestID":"40b48ef9-fc4b-4d1a-8fd1-4f2584aff1e9","eventID":"ef608d43-4765-4a3a-9c92-14ef35104697","readOnly":false,"eventType":"AwsApiCall","apiVersion":"20140328","managementEvent":true,"eventCategory":"Management","recipientAccountId":"3450291asasas"}
The problem with log lines in this form is that Athena cannot parse them, so I am unable to query the logs using Athena.
I tried modifying the blueprint Lambda function to save the log file as a standard JSON result, which would make it easy for Athena to parse the files.
Eg:
{'Records': ['{"eventVersion":"1.08","userIdentity":{"type":"AssumedRole","principalId":"AROAVAVKXAKPBRW2S3TAF:i-096379450e69ed082","arn":"arn:aws:sts::345029149342:assumed-role/RightslineRDSAccessRole/i-096379450e69ed082","accountId":"345029149342","accessKeyId":"ASIAVAVKXAKPBL653UOC","sessionContext":{"sessionIssuer":{"type":"Role","principalId":"AROAVAVKXAKPXXXXXXX","arn":"arn:aws:iam::34502asasas:role/RDSAccessRole","accountId":"345029asasas","userName":"RDSAccessRole"},"webIdFederationData":{},"attributes":{"mfaAuthenticated":"false","creationDate":"2021-04-27T04:38:52Z"},"ec2RoleDelivery":"2.0"}},"eventTime":"2021-04-27T07:24:20Z","eventSource":"ssm.amazonaws.com","eventName":"ListInstanceAssociations","awsRegion":"us-east-1","sourceIPAddress":"188.208.227.188","userAgent":"aws-sdk-go/1.25.41 (go1.13.15; linux; amd64) amazon-ssm-agent/","requestParameters":{"instanceId":"i-096379450e69ed082","maxResults":20},"responseElements":null,"requestID":"a5c63b9d-aaed-4a3c-9b7d-a4f7c6b774ab","eventID":"70de51df-c6df-4a57-8c1e-0ffdeb5ac29d","readOnly":true,"resources":[{"accountId":"3450291asasas","ARN":"arn:aws:ec2:us-east-1:34502asasas:instance/i-096379450e69ed082"}],"eventType":"AwsApiCall","managementEvent":true,"eventCategory":"Management","recipientAccountId":"345029asasas"}]}
The modified code for the blueprint Lambda function looks like this:
import base64
import json
import gzip
from io import BytesIO
import boto3

def transformLogEvent(log_event):
    return log_event['message'] + '\n'

def processRecords(records):
    for r in records:
        data = base64.b64decode(r['data'])
        striodata = BytesIO(data)
        with gzip.GzipFile(fileobj=striodata, mode='r') as f:
            data = json.loads(f.read())
        recId = r['recordId']
        if data['messageType'] == 'CONTROL_MESSAGE':
            yield {
                'result': 'Dropped',
                'recordId': recId
            }
        elif data['messageType'] == 'DATA_MESSAGE':
            result = {}
            result["Records"] = {}
            events = []
            for e in data['logEvents']:
                events.append(e["message"])
            result["Records"] = events
            print(result)
            if len(result) <= 6000000:
                yield {
                    'data': result,
                    'result': 'Ok',
                    'recordId': recId
                }
            else:
                yield {
                    'result': 'ProcessingFailed',
                    'recordId': recId
                }
        else:
            yield {
                'result': 'ProcessingFailed',
                'recordId': recId
            }

def putRecordsToFirehoseStream(streamName, records, client, attemptsMade, maxAttempts):
    failedRecords = []
    codes = []
    errMsg = ''
    # if put_record_batch throws for whatever reason, response['xx'] will error out, adding a check for a valid
    # response will prevent this
    response = None
    try:
        response = client.put_record_batch(DeliveryStreamName=streamName, Records=records)
    except Exception as e:
        failedRecords = records
        errMsg = str(e)
    # if there are no failedRecords (put_record_batch succeeded), iterate over the response to gather results
    if not failedRecords and response and response['FailedPutCount'] > 0:
        for idx, res in enumerate(response['RequestResponses']):
            # (if the result does not have a key 'ErrorCode' OR if it does and is empty) => we do not need to re-ingest
            if 'ErrorCode' not in res or not res['ErrorCode']:
                continue
            codes.append(res['ErrorCode'])
            failedRecords.append(records[idx])
        errMsg = 'Individual error codes: ' + ','.join(codes)
    if len(failedRecords) > 0:
        if attemptsMade + 1 < maxAttempts:
            print('Some records failed while calling PutRecordBatch to Firehose stream, retrying. %s' % (errMsg))
            putRecordsToFirehoseStream(streamName, failedRecords, client, attemptsMade + 1, maxAttempts)
        else:
            raise RuntimeError('Could not put records after %s attempts. %s' % (str(maxAttempts), errMsg))

def putRecordsToKinesisStream(streamName, records, client, attemptsMade, maxAttempts):
    failedRecords = []
    codes = []
    errMsg = ''
    # if put_records throws for whatever reason, response['xx'] will error out, adding a check for a valid
    # response will prevent this
    response = None
    try:
        response = client.put_records(StreamName=streamName, Records=records)
    except Exception as e:
        failedRecords = records
        errMsg = str(e)
    # if there are no failedRecords (put_record_batch succeeded), iterate over the response to gather results
    if not failedRecords and response and response['FailedRecordCount'] > 0:
        for idx, res in enumerate(response['Records']):
            # (if the result does not have a key 'ErrorCode' OR if it does and is empty) => we do not need to re-ingest
            if 'ErrorCode' not in res or not res['ErrorCode']:
                continue
            codes.append(res['ErrorCode'])
            failedRecords.append(records[idx])
        errMsg = 'Individual error codes: ' + ','.join(codes)
    if len(failedRecords) > 0:
        if attemptsMade + 1 < maxAttempts:
            print('Some records failed while calling PutRecords to Kinesis stream, retrying. %s' % (errMsg))
            putRecordsToKinesisStream(streamName, failedRecords, client, attemptsMade + 1, maxAttempts)
        else:
            raise RuntimeError('Could not put records after %s attempts. %s' % (str(maxAttempts), errMsg))

def createReingestionRecord(isSas, originalRecord):
    if isSas:
        return {'data': base64.b64decode(originalRecord['data']), 'partitionKey': originalRecord['kinesisRecordMetadata']['partitionKey']}
    else:
        return {'data': base64.b64decode(originalRecord['data'])}

def getReingestionRecord(isSas, reIngestionRecord):
    if isSas:
        return {'Data': reIngestionRecord['data'], 'PartitionKey': reIngestionRecord['partitionKey']}
    else:
        return {'Data': reIngestionRecord['data']}

def lambda_handler(event, context):
    print(event)
    isSas = 'sourceKinesisStreamArn' in event
    streamARN = event['sourceKinesisStreamArn'] if isSas else event['deliveryStreamArn']
    region = streamARN.split(':')[3]
    streamName = streamARN.split('/')[1]
    records = list(processRecords(event['records']))
    projectedSize = 0
    dataByRecordId = {rec['recordId']: createReingestionRecord(isSas, rec) for rec in event['records']}
    putRecordBatches = []
    recordsToReingest = []
    totalRecordsToBeReingested = 0
    for idx, rec in enumerate(records):
        if rec['result'] != 'Ok':
            continue
        projectedSize += len(rec['data']) + len(rec['recordId'])
        # 6000000 instead of 6291456 to leave ample headroom for the stuff we didn't account for
        if projectedSize > 6000000:
            totalRecordsToBeReingested += 1
            recordsToReingest.append(
                getReingestionRecord(isSas, dataByRecordId[rec['recordId']])
            )
            records[idx]['result'] = 'Dropped'
            del(records[idx]['data'])
        # split out the record batches into multiple groups, 500 records at max per group
        if len(recordsToReingest) == 500:
            putRecordBatches.append(recordsToReingest)
            recordsToReingest = []
    if len(recordsToReingest) > 0:
        # add the last batch
        putRecordBatches.append(recordsToReingest)
    # iterate and call putRecordBatch for each group
    recordsReingestedSoFar = 0
    if len(putRecordBatches) > 0:
        client = boto3.client('kinesis', region_name=region) if isSas else boto3.client('firehose', region_name=region)
        for recordBatch in putRecordBatches:
            if isSas:
                putRecordsToKinesisStream(streamName, recordBatch, client, attemptsMade=0, maxAttempts=20)
            else:
                putRecordsToFirehoseStream(streamName, recordBatch, client, attemptsMade=0, maxAttempts=20)
            recordsReingestedSoFar += len(recordBatch)
            print('Reingested %d/%d records out of %d' % (recordsReingestedSoFar, totalRecordsToBeReingested, len(event['records'])))
    else:
        print('No records to be reingested')
    return {"records": records}
My end goal is to store the result on S3 as JSON so that it can be queried easily with Athena.
The line where the transformation happens is:
elif data['messageType'] == 'DATA_MESSAGE':
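For reference, my understanding is that Athena's JSON SerDe wants one JSON object per line rather than a wrapped {'Records': [...]} document, and that a Firehose transform is expected to return its data field base64-encoded. So a rework of this branch might look roughly like the following untested sketch (reusing data, recId, and base64 from the listing above):

elif data['messageType'] == 'DATA_MESSAGE':
    # Each CloudTrail event message is already a JSON string, so joining
    # them with newlines yields newline-delimited JSON (one object per line).
    payload = '\n'.join(e['message'] for e in data['logEvents']) + '\n'
    encoded = base64.b64encode(payload.encode('utf-8')).decode('utf-8')
    if len(encoded) <= 6000000:
        yield {'data': encoded, 'result': 'Ok', 'recordId': recId}
    else:
        yield {'result': 'ProcessingFailed', 'recordId': recId}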
Any help in this would be greatly appreciated.

Calling Web Service method throwing Remote Disconnected error in Zeep

I am using the Python Zeep library to call Maximo web services and I am able to fetch the WSDL without any issues. When I try to query using one of the methods in the web service, it throws this error:
requests.exceptions.ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))
# imports implied by the snippet
from requests import Session
from requests.auth import HTTPBasicAuth
from zeep import Client, Settings
from zeep.transports import Transport

multiasset_wsdl = "http://maximoqa.xyz.com:9080/meaweb/services/MULTIASSET?wsdl"
work_order = 'NA0000211'

# set the session (Username, Password and tt are defined elsewhere in my script)
session = Session()
session.auth = HTTPBasicAuth(Username, Password)
session.verify = False
transport = Transport(session=session)
settings = Settings(strict=False, force_https=False)

maximo_multiasset_client = Client(multiasset_wsdl, transport=transport, settings=settings)
multiasset_type_factory = maximo_multiasset_client.type_factory('ns0')

mult_asset_query = multiasset_type_factory.QueryMULTIASSETType(
    baseLanguage="?",
    creationDateTime=tt.get_now(),
    maximoVersion="?",
    maxItems="1",
    messageID="?",
    rsStart="0",
    transLanguage="EN",
    uniqueResult=False,
    MULTIASSETQuery=multiasset_type_factory.MULTIASSETQueryType(
        operandMode=multiasset_type_factory.OperandModeType('AND'),
        orderby="?",
        WHERE=f"WONUM='{work_order}'",
        WORKORDER=None
    )
)

print('Calling Query MultiAsset')
query_response = maximo_multiasset_client.service.QueryMULTIASSET(mult_asset_query)
Appreciate any help on this issue.

AWS boto3 unable to put tags after creating an AMI

I'm trying to put tags after creating an AMI from an instance using boto3, and I'm getting this error:
botocore.exceptions.ParamValidationError: Parameter validation failed:
Unknown parameter in input: "TagSpecifications", must be one of:
BlockDeviceMappings, Description, DryRun, InstanceId, Name, NoReboot
Here is my code; can you please check what I'm doing wrong?
It works for snapshots but fails for images.
import xlrd
import boto3
import datetime

client = boto3.client('ec2')

# Give the location of the file
loc = ("/Users/user1/Documents/aws-python/aws-tag-test (1).xlsx")

# To open Workbook
wb = xlrd.open_workbook(loc)
sheet = wb.sheet_by_index(0)

# For row 0 and column 0
#print (sheet.cell_value(0, 0))
nowtime = datetime.datetime.now()
nowdate = (nowtime.strftime("%Y-%m-%d %H-%M"))
print(nowdate)
#print (nowtime)

server_ids = []
instancename = []
for i in range(1, sheet.nrows):
    server_ids.append(sheet.cell_value(i, 1))
    instancename.append(sheet.cell_value(i, 0))
    #print (sheet.cell_value(i,1))
# excel closed

for i in range(len(server_ids)):
    print(server_ids[i], instancename[i])
    response = client.create_image(
        Description='ami ' + instancename[i] + ' ' + str(nowdate),
        InstanceId=server_ids[i],
        Name='ami ' + instancename[i] + ' ' + str(nowdate),
        NoReboot=True,
        DryRun=False,
        TagSpecifications=[
            {
                'ResourceType': 'image',
                'Tags': [
                    {
                        'Key': 'Name',
                        'Value': 'ami-' + instancename[i] + '-' + str(nowdate)
                    },
                    {
                        'Key': 'date',
                        'Value': datetime.datetime.now().strftime("%Y-%m-%d")
                    }
                ]
            },
        ]
    )
    print(response)
Really appreciate your help.
Yes, it is now available. Not sure exactly when, but TagSpecifications support on create_image was definitely added sometime after the original comments, so upgrading boto3/botocore should make the parameter validation error go away.
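If you are stuck on an older boto3/botocore that does not accept TagSpecifications on create_image yet, a workaround sketch is to create the image first and then tag it with create_tags using the returned ImageId (the instance ID and tag values below are placeholders):

import boto3

client = boto3.client('ec2')

# Create the AMI without TagSpecifications, then tag the resulting image.
response = client.create_image(
    InstanceId='i-0123456789abcdef0',  # placeholder
    Name='ami example 2021-01-01 00-00',
    NoReboot=True,
)
client.create_tags(
    Resources=[response['ImageId']],
    Tags=[
        {'Key': 'Name', 'Value': 'ami-example'},  # placeholder tag values
        {'Key': 'date', 'Value': '2021-01-01'},
    ],
)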

Websocket stress test with Autobahn Testsuite

I'm trying to run a stress test against my WebSocket server. On the client side I run the following script from this site:
import time, sys
from twisted.internet import defer, reactor
from twisted.internet.defer import Deferred, returnValue, inlineCallbacks
from autobahn.twisted.websocket import connectWS, \
                                       WebSocketClientFactory, \
                                       WebSocketClientProtocol

class MassConnectProtocol(WebSocketClientProtocol):
    didHandshake = False

    def onOpen(self):
        print("websocket connection opened")
        self.factory.test.onConnected()
        self.factory.test.protos.append(self)
        self.didHandshake = True

class MassConnectFactory(WebSocketClientFactory):
    protocol = MassConnectProtocol

    def clientConnectionFailed(self, connector, reason):
        if self.test.onFailed():
            reactor.callLater(float(self.retrydelay)/1000., connector.connect)

    def clientConnectionLost(self, connector, reason):
        if self.test.onLost():
            reactor.callLater(float(self.retrydelay)/1000., connector.connect)

class MassConnect:
    def __init__(self, name, uri, connections, batchsize, batchdelay, retrydelay):
        print('MassConnect init')
        self.name = name
        self.uri = uri
        self.batchsize = batchsize
        self.batchdelay = batchdelay
        self.retrydelay = retrydelay
        self.failed = 0
        self.lost = 0
        self.targetCnt = connections
        self.currentCnt = 0
        self.actual = 0
        self.protos = []

    def run(self):
        print('MassConnect runned')
        self.d = Deferred()
        self.started = time.clock()
        self.connectBunch()
        return self.d

    def onFailed(self):
        self.failed += 1
        sys.stdout.write("!")
        return True

    def onLost(self):
        self.lost += 1
        #sys.stdout.write("*")
        return False
        return True

    def onConnected(self):
        print("onconnected")
        self.actual += 1
        if self.actual % self.batchsize == 0:
            sys.stdout.write(".")
        if self.actual == self.targetCnt:
            self.ended = time.clock()
            duration = self.ended - self.started
            print " connected %d clients to %s at %s in %s seconds (retries %d = failed %d + lost %d)" % (self.currentCnt, self.name, self.uri, duration, self.failed + self.lost, self.failed, self.lost)
            result = {'name': self.name,
                      'uri': self.uri,
                      'connections': self.targetCnt,
                      'retries': self.failed + self.lost,
                      'lost': self.lost,
                      'failed': self.failed,
                      'duration': duration}
            for p in self.protos:
                p.sendClose()
            #self.d.callback(result)

    def connectBunch(self):
        if self.currentCnt + self.batchsize < self.targetCnt:
            c = self.batchsize
            redo = True
        else:
            c = self.targetCnt - self.currentCnt
            redo = False
        for i in xrange(0, c):
            factory = MassConnectFactory(self.uri)
            factory.test = self
            factory.retrydelay = self.retrydelay
            connectWS(factory)
            self.currentCnt += 1
        if redo:
            reactor.callLater(float(self.batchdelay)/1000., self.connectBunch)

class MassConnectTest:
    def __init__(self, spec):
        self.spec = spec
        print('MassConnectTest init')

    @inlineCallbacks
    def run(self):
        print self.spec
        res = []
        for s in self.spec['servers']:
            print s['uri']
            t = MassConnect(s['name'],
                            s['uri'],
                            self.spec['options']['connections'],
                            self.spec['options']['batchsize'],
                            self.spec['options']['batchdelay'],
                            self.spec['options']['retrydelay'])
            r = yield t.run()
            res.append(r)
        returnValue(res)

def startClient(spec, debug=False):
    test = MassConnectTest(spec)
    d = test.run()
    return d

if __name__ == '__main__':
    spec = {}
    spec['servers'] = [{'name': 'test', 'uri': "ws://127.0.0.1:8080"}]
    spec['options'] = {'connections': 1000, 'batchsize': 500, 'batchdelay': 1000, 'retrydelay': 200}
    startClient(spec, False)
But after running this script, no connections are established on the server side. The server seems to be configured properly, because when I connect to it with a different client (for example, a web browser), the WebSocket connection is established fine. I also checked with a network sniffer, and the script doesn't seem to produce any WebSocket connections at all.
What did I do wrong in this script?
The massconnect.py script you used was supposed to be invoked from another part of the autobahntestsuite, such as the wstest command:
$ echo '{"servers": [{"name": "test", "uri":"ws://127.0.0.1:8080"} ], "options": {"connections": 1000,"batchsize": 500, "batchdelay": 1000, "retrydelay": 200 }}' > spec.json
$ wstest -m massconnect --spec spec.json
If you want to copy massconnect directly, I think it's missing the command to start the Twisted deferred tasks:
if __name__ == '__main__':
    spec = {}
    spec['servers'] = [{'name': 'test', 'uri': "ws://127.0.0.1:8080"}]
    spec['options'] = {'connections': 1000, 'batchsize': 500, 'batchdelay': 1000, 'retrydelay': 200}
    startClient(spec, False)
    reactor.run()  # <-- add this
And check your Python indentation: either some of it got corrupted when pasting here, or the original code had incorrect indentation in some class and function definitions.