Sending SMS notifications to multiple phone numbers using SNS with boto3 - amazon-web-services

I am trying to send an event-driven notification through SMS using AWS SNS.
To start, I am trying to write a script that sends a message to a single number first.
Here is my code for that:
import json
import boto3

def lambda_handler(event, context):
    client = boto3.client("sns")
    response = client.publish(
        PhoneNumber="+91 xxxxxxxxxx",
        Message="Hello World!"
    )
    print(response)
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
The problem is that I am not receiving any message from this.
Could anyone help me out in achieving this use case of mine?
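For the multiple-numbers goal in the title, one option is to loop over a list and publish to each number (another is to subscribe the numbers to an SNS topic and publish once). Below is a rough sketch of the loop approach; the helper names are my own, and note that SNS expects E.164-formatted numbers such as '+919876543210' with no spaces, which may be why the message above never arrives:

```python
import re

def normalize_e164(number):
    """Strip spaces/dashes; SNS expects E.164, e.g. '+919876543210'."""
    cleaned = re.sub(r"[ \-()]", "", number)
    if not re.fullmatch(r"\+\d{8,15}", cleaned):
        raise ValueError("not a valid E.164 number: %r" % number)
    return cleaned

def send_sms_to_all(numbers, message):
    # boto3 imported lazily so the helper above stays testable offline
    import boto3
    client = boto3.client("sns")
    return [
        client.publish(PhoneNumber=normalize_e164(n), Message=message)
        for n in numbers
    ]
```

Also worth checking: in newer accounts, SNS SMS starts in a sandbox where only verified destination numbers receive messages.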

Related

Testing a HTTP API is working with a boto3 Lambda function

I'm not too sure how the formatting works when using json and boto3 in the same file. The function works as it should, but I don't know how to get a response from the API without an internal server error.
I don't know whether it is a permissions issue or the code is wrong.
import boto3
import json

def lambda_handler(event, context):
    client = boto3.resource('dynamodb')
    table = client.Table('Visit_Count')
    input = {'Visits': 1}
    table.put_item(Item=input)
    return {
        'statusCode': 200
        body: json.dumps("Hello, World!")
    }
Instead of the bare name body it should be the quoted string 'body', and a comma is missing after 'statusCode': 200. Other than that, you should check the CloudWatch logs for any further Lambda errors.
Anon and Marcin were right; I just tried it and it worked.
Your Lambda role also needs to have the dynamodb:PutItem permission.
import boto3
import json

def lambda_handler(event, context):
    client = boto3.resource('dynamodb')
    table = client.Table('Visit_Count')
    input = {'Visits': 1}
    table.put_item(Item=input)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
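For the permissions side, a minimal IAM policy statement for the Lambda role might look like the following sketch; the region and account ID are placeholders you would replace with your own:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "dynamodb:PutItem",
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Visit_Count"
        }
    ]
}
```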

SQS fifo trigger invoke Lambda Function (1 message - 1 invocation)

I have an SQS FIFO queue triggering a Lambda function.
I sent 10 messages (all different), and the Lambda was invoked just once.
Details:
SQS
Visibility timeout: 30 min
Delivery delay: 0 secs
Receive Message Wait Time: 0 secs
Lambda:
Batch size: 1
timeout: 3secs
I don't see any errors on Lambda invocations.
I don't want to touch the delivery delay, but if I increase it, it seems to work.
The average duration is less than 1.5 ms.
Any ideas how I can achieve one invocation per message?
Should I increase the delivery delay or the timeout?
The messages are being sent from an ECS task with the following code:
from flask import Flask, request, redirect, url_for, send_from_directory, jsonify
from werkzeug.utils import secure_filename
import os
import random
import boto3

app = Flask(__name__)
s3 = boto3.client('s3')
sqs = boto3.client('sqs', region_name='eu-west-1')

@app.route('/', methods=['GET'])
def hello_world():
    return 'Hello World!'

@app.route('/upload', methods=['POST'])
def upload():
    print(str(random.randint(0, 9)))
    file = request.files['file']
    if file:
        filename = secure_filename(file.filename)
        file.save(filename)
        s3.upload_file(
            Bucket=os.environ['bucket'],
            Filename=filename,
            Key=filename
        )
        resp = sqs.send_message(
            QueueUrl=os.environ['queue'],
            MessageBody=filename,
            MessageGroupId=filename
        )
        return jsonify({
            'msg': "OK"
        })
    else:
        return jsonify({
            'msg': "NOT OK"
        })
Check if this helps:
The message deduplication ID is the token used for deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren't delivered during the 5-minute deduplication interval.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagededuplicationid-property.html
At least it explains why it works when you increase the delivery delay.
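Following the documentation quoted above, one way to rule deduplication out is to set the deduplication ID explicitly rather than relying on content-based deduplication. A sketch, with illustrative helper names:

```python
import hashlib

def dedup_id_for(body):
    """Content-derived ID: identical bodies dedupe within the 5-minute
    window, distinct bodies get distinct IDs."""
    return hashlib.sha256(body.encode("utf-8")).hexdigest()

def send_unique(sqs_client, queue_url, filename):
    # Distinct MessageGroupIds allow the FIFO queue to deliver messages
    # in parallel; an explicit MessageDeduplicationId makes the
    # deduplication behaviour visible instead of implicit.
    return sqs_client.send_message(
        QueueUrl=queue_url,
        MessageBody=filename,
        MessageGroupId=filename,
        MessageDeduplicationId=dedup_id_for(filename),
    )
```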

AWS codepipeline stage approval using API gateway and lambda not working

We have an AWS CodePipeline for an Angular application's build and deployment.
I have added an approval stage where the user needs to give approval using the link provided in a mail.
The following is the pipeline workflow:
when the pipeline reaches the approval stage, it sends out a mail to an SNS topic. The mail body contains links for Approve and Reject; these are API Gateway URLs that pass the pipeline details and the user's approval to the integrated Lambda.
example of API url: https://xxxxx.execute-api.us-east-1.amazonaws.com/v0/pipeline-approval?action=Approved&pipeline=pipeline-name-here=release-approval&pipelineexecutionid=xxxxxxx
The mail I'm receiving contains a Symantec click-tracking link, as it's coming from AWS: https://clicktime.symantec.com/34RBC48WLEQRUG7Vc?u=https%3A%2F%xxxxx.execute-api.us-east-1.amazonaws.com%2Fv0%2Fpipeline-approval%3Faction%3DApproved%26pipeline%3Dpiplinexxx%3Drelease-approval%26pipelineexecutionid%3xxxxxx
Below is the Lambda function code:
import json
import logging
import re
import time
import boto3

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

MAX_WAIT_FOR_RESPONSE = 10
WAIT_INCREMENT = 1

def handler(event, context):
    logger.info('REQUEST RECEIVED:\n %s', event)
    logger.info('REQUEST RECEIVED:\n %s', context)
    pipeline = event["queryStringParameters"]['pipeline']
    stage = event["queryStringParameters"]['stage']
    action = event["queryStringParameters"]['action']
    approval_action = 'transition'
    pipelineexecutionid = event["queryStringParameters"]['pipelineexecutionid']
    client = boto3.client('codepipeline')
    r = client.get_pipeline_state(name=pipeline)['stageStates']
    print(r)
    s = next((x for x in r if x['stageName'] == stage and x['latestExecution']['pipelineExecutionId'] == pipelineexecutionid), None)
    print(s)
    s1 = s['actionStates']
    print(s1)
    s2 = next((y for y in s1 if y['actionName'] == approval_action), None)
    print(s2)
    t = s2['latestExecution']['token']
    print(t)
    client.put_approval_result(
        pipelineName=pipeline,
        stageName=stage,
        actionName=approval_action,
        result={
            'summary': 'Automatically approved by Lambda.',
            'status': action
        },
        token=t
    )
    logger.info("Status message: %s", client.put_approval_result)
    if action == 'Approved':
        return {"statusCode": 200, "body": json.dumps('Thank you for approving the release!!')}
    elif action == 'Rejected':
        return {"statusCode": 200, "body": json.dumps('rejected.')}
The issue is that after reaching the approval stage, the pipeline sends out the mail but does not wait for user input; it automatically gets Approved or Rejected within 2 to 5 seconds.
Please help me understand what is going wrong here: why is the Lambda not waiting for a response from the API, and why is it approving or rejecting automatically?

Asyncio is not working within my Python3.7 lambda

I am trying to create a Python 3.7 Lambda which correctly uses asyncio for concurrency.
I have tried many different code variations, but here is the latest block. I am using AWS X-Ray to look at the timing, and it is easy to verify that the async calls are not working correctly: all these tasks and calls are being executed synchronously.
import json
import boto3
import asyncio
from botocore.exceptions import ClientError
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# xray
patch_all()

def lambda_handler(event, context):
    tasks = []
    dict_to_populate = {}
    for item in list:
        tasks.append(asyncio.ensure_future(do_work(item, dict_to_populate)))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(*tasks))
    loop.close()

async def do_work(item, dict_to_populate):
    # assume regions are obtained
    for region in regions:
        response_vpcs = describe_vpcs(obj['Id'], session_assumed, region)
        if 'Vpcs' in response_vpcs:
            for vpc in response_vpcs['Vpcs']:
                # process
                pass
I expect to see the do_work functions start at essentially the same time (asynchronously), but according to X-Ray they all run synchronously. The code does populate dict_to_populate as expected, just not concurrently.
This is how I have done it in my AWS Lambda: I wanted to make 4 POST requests and then collect all the responses. Hope this helps.
loop = asyncio.get_event_loop()
if loop.is_closed():
    loop = asyncio.new_event_loop()
# In the perform_traces method I do all the POST requests
task = loop.create_task(perform_traces(payloads, message, contact_centre))
unique_match, error = loop.run_until_complete(task)
loop.close()
In the perform_traces method, this is how I have used wait with the session:
future_dds_responses = []
async with aiohttp.ClientSession() as session:
    for payload in payloads:
        future_dds_responses.append(dds_async_trace(session, payload, contact_centre))
    dds_responses, pending = await asyncio.wait(future_dds_responses)
In dds_async_trace, this is how I have done the POST using the aiohttp.ClientSession session:
async with session.post(pds_url,
                        data=populated_template_payload,
                        headers=PDS_HEADERS,
                        ssl=ssl_context) as response:
    status_code = response.status
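The underlying point in the answer above is that ordinary boto3 calls are blocking, so wrapping them in coroutines does not make them concurrent; only genuinely awaitable I/O (like the aiohttp calls shown) runs in parallel. Here is a minimal, self-contained sketch of the pattern, with asyncio.sleep standing in for an awaitable call; the names are illustrative, not from the original post:

```python
import asyncio

async def do_work(item, results):
    # asyncio.sleep stands in for awaitable I/O; a plain boto3 call
    # here would block the event loop and serialize all the tasks.
    await asyncio.sleep(0.2)
    results[item] = item * 2

async def main(items, results):
    # gather schedules all coroutines on the running loop at once
    await asyncio.gather(*(do_work(i, results) for i in items))

def lambda_handler(event, context):
    results = {}
    # asyncio.run (Python 3.7+) creates and closes a fresh event loop,
    # avoiding "event loop is closed" errors on warm Lambda starts.
    asyncio.run(main(range(4), results))
    return results
```

With four 0.2-second tasks, the whole handler finishes in roughly 0.2 seconds rather than 0.8, which is the concurrency the question was after.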

AWS CodePipeline Custom Lambda Function Runs Forever and Never Returns

I have a simple AWS CodePipeline with the standard "Source" -> "Build" -> "Deploy" stages that work fine, and I am trying to add my own custom final pipeline stage, which is a single AWS Lambda function. The problem is that my last, custom Lambda function runs multiple times and, after a very long time, errors with the following message:
Please see the attached screenshot for the whole pipeline:
When the pipeline reaches this final step, it spins for a very long time with the "Blue (In-Progress)" status before showing an error, as shown here:
Here is my Lambda function code:
from __future__ import print_function
import hashlib
import time
import os
import boto3
import json
from botocore.exceptions import ClientError

def lambda_handler(event, context):
    # Test
    AWS_ACCESS_KEY = 'ASDF1234'  # placeholder
    AWS_SECRET_KEY = 'ASDF1234'  # placeholder
    SQS_TESTING_OUTPUT_STATUS_QUEUE_NAME = 'TestingOutputQueue'
    # Get the code pipeline
    code_pipeline = boto3.client('codepipeline')
    # Get the job_id
    for key, value in event.items():
        print(key, value)
    job_id = event['CodePipeline.job']['id']
    DATA = json.dumps(event)
    # Create a connection to the SQS Notification service
    sqs_resource_connection = boto3.resource(
        'sqs',
        aws_access_key_id=AWS_ACCESS_KEY,
        aws_secret_access_key=AWS_SECRET_KEY,
        region_name='us-west-2'
    )
    # Get the queue handle
    print("Waiting for notification from AWS ...")
    queue = sqs_resource_connection.get_queue_by_name(QueueName=SQS_TESTING_OUTPUT_STATUS_QUEUE_NAME)
    messageContent = ""
    cnt = 1
    # Replace sender@example.com with your "From" address.
    # This address must be verified with Amazon SES.
    SENDER = 'ME'  # placeholder
    # Replace recipient@example.com with a "To" address. If your account
    # is still in the sandbox, this address must be verified.
    RECIPIENTS = ['YOU']  # placeholder
    # If necessary, replace us-west-2 with the AWS Region you're using for Amazon SES.
    AWS_REGION = "us-east-1"
    # The subject line for the email.
    SUBJECT = "Test Case Results"
    # The email body for recipients with non-HTML email clients.
    BODY_TEXT = ("Test Case Results Were ...")
    # The HTML body of the email.
    BODY_HTML = """<html>
    <head></head>
    <body>
      <h1>Amazon SES Test (SDK for Python)</h1>
      <p>%s</p>
    </body>
    </html>
    """ % (DATA)
    # The character encoding for the email.
    CHARSET = "UTF-8"
    # Create a new SES resource and specify a region.
    client = boto3.client('ses', region_name=AWS_REGION)
    # Try to send the email.
    try:
        # Provide the contents of the email.
        response = client.send_email(
            Destination={
                'ToAddresses': RECIPIENTS,
            },
            Message={
                'Body': {
                    'Html': {
                        'Charset': CHARSET,
                        'Data': BODY_HTML,
                    },
                    'Text': {
                        'Charset': CHARSET,
                        'Data': BODY_TEXT,
                    },
                },
                'Subject': {
                    'Charset': CHARSET,
                    'Data': SUBJECT,
                },
            },
            Source=SENDER,
            # If you are not using a configuration set, comment or delete the
            # following line
            # ConfigurationSetName=CONFIGURATION_SET,
        )
    # Display an error if something goes wrong.
    except ClientError as e:
        message = e.response['Error']['Message']
        code_pipeline.put_third_party_job_failure_result(jobId=job_id, failureDetails={'message': message, 'type': 'JobFailed'})
        code_pipeline.put_job_failure_result(jobId=job_id, failureDetails={'message': message, 'type': 'JobFailed'})
        print(message)
    else:
        code_pipeline.put_third_party_job_success_result(jobId=job_id)
        code_pipeline.put_job_success_result(jobId=job_id)
        print("Email sent! Message ID:")
        print(response['MessageId'])
    print('Function complete.')
    return "Complete."
How can I get the Lambda to fire once and return, so the pipeline can complete properly?
You are missing an important integration between your Lambda Function and the CodePipeline service.
You MUST notify CodePipeline about the result of your custom step, whether it succeeded or not - see my examples below.
Reporting success:
function reportSuccess(job_id) {
    var codepipeline = new AWS.CodePipeline();
    var params = {
        jobId: job_id,
    };
    return codepipeline.putJobSuccessResult(params).promise();
}
Reporting failure:
function reportFailure(job_id, invoke_id, message) {
    var codepipeline = new AWS.CodePipeline();
    var params = {
        failureDetails: {
            message: message,
            type: 'JobFailed',
            externalExecutionId: invoke_id,
        },
        jobId: job_id,
    };
    return codepipeline.putJobFailureResult(params).promise();
}
The integration was designed this way because one may want to integrate with an external job worker, in which their Lambda starts that worker (example, an approval process), and that worker then takes control and decides whether the whole step succeeded or failed.
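Since the rest of this thread is Python, here is a rough boto3 equivalent of the two Node.js helpers above; the function names are illustrative, and the calls assume the Lambda's execution role permits codepipeline:PutJobSuccessResult and codepipeline:PutJobFailureResult:

```python
def failure_details(message, invoke_id):
    # Shape expected by put_job_failure_result's failureDetails argument.
    return {
        "message": message,
        "type": "JobFailed",
        "externalExecutionId": invoke_id,
    }

def report_result(job_id, succeeded, message="", invoke_id=""):
    # boto3 imported here so the pure helper above stays testable offline.
    import boto3
    cp = boto3.client("codepipeline")
    if succeeded:
        cp.put_job_success_result(jobId=job_id)
    else:
        cp.put_job_failure_result(
            jobId=job_id,
            failureDetails=failure_details(message, invoke_id),
        )
```

Calling report_result(event['CodePipeline.job']['id'], ...) exactly once per invocation, on every code path, is what stops the stage from spinning until it times out.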