SQS FIFO trigger invokes Lambda function only once (1 message - 1 invocation)

I have a SQS FIFO queue triggering a Lambda function.
I sent 10 messages (all different) and the lambda was invoked just once.
Details:
SQS
Visibility timeout: 30 min
Delivery delay: 0 secs
Receive Message Wait Time: 0 secs
Lambda:
Batch size: 1
Timeout: 3 secs
I don't see any errors on Lambda invocations.
I don't want to touch the delivery delay, but if I increase it, it seems to work.
The average duration is less than 1.5 ms.
Any ideas how I can get one invocation per message?
Should I increase the delivery delay or the timeout?
The message is being sent from an ECS task with the following code:
from flask import Flask, request, redirect, url_for, send_from_directory, jsonify
from werkzeug.utils import secure_filename
import os
import random
import boto3

app = Flask(__name__)
s3 = boto3.client('s3')
sqs = boto3.client('sqs', region_name='eu-west-1')

@app.route('/', methods=['GET'])
def hello_world():
    return 'Hello World!'

@app.route('/upload', methods=['POST'])
def upload():
    print(str(random.randint(0, 9)))
    file = request.files['file']
    if file:
        filename = secure_filename(file.filename)
        file.save(filename)
        s3.upload_file(
            Bucket=os.environ['bucket'],
            Filename=filename,
            Key=filename
        )
        resp = sqs.send_message(
            QueueUrl=os.environ['queue'],
            MessageBody=filename,
            MessageGroupId=filename
        )
        return jsonify({
            'msg': "OK"
        })
    else:
        return jsonify({
            'msg': "NOT OK"
        })

Check if this helps:
The message deduplication ID is the token used for deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren't delivered during the 5-minute deduplication interval.
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/using-messagededuplicationid-property.html
At least it explains why it works when you increase the delivery delay.
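Building on that, one way to make every send distinct for deduplication purposes (assuming the queue does not rely on content-based deduplication) is to pass an explicit MessageDeduplicationId, e.g. a fresh UUID per message. A minimal sketch; the helper name is made up:

```python
import uuid

def build_message(queue_url, filename):
    # A fresh UUID per send means two uploads of the same filename are
    # still delivered as two messages, instead of being dropped during
    # the 5-minute deduplication interval.
    return {
        'QueueUrl': queue_url,
        'MessageBody': filename,
        'MessageGroupId': filename,
        'MessageDeduplicationId': str(uuid.uuid4()),
    }

# Usage in the Flask handler would then be:
# resp = sqs.send_message(**build_message(os.environ['queue'], filename))
```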

Related

AWS CodePipeline stage approval using API Gateway and Lambda not working

We have an AWS CodePipeline for the Angular application build and deployment.
I have added an approval stage where the user needs to give approval using the link provided in a mail.
The pipeline workflow is as follows:
When the pipeline reaches the approval stage, it sends a mail to an SNS topic; the mail body contains links for Approve and Reject. These are API Gateway URLs that pass the pipeline details and the user's approval to the integrated Lambda.
Example of the API URL: https://xxxxx.execute-api.us-east-1.amazonaws.com/v0/pipeline-approval?action=Approved&pipeline=pipeline-name-here=release-approval&pipelineexecutionid=xxxxxxx
And the mail I'm receiving contains a Symantec link, as it's coming from AWS: https://clicktime.symantec.com/34RBC48WLEQRUG7Vc?u=https%3A%2F%xxxxx.execute-api.us-east-1.amazonaws.com%2Fv0%2Fpipeline-approval%3Faction%3DApproved%26pipeline%3Dpiplinexxx%3Drelease-approval%26pipelineexecutionid%3xxxxxx
Below is the Lambda function code:
import json
import logging
import re
import time
import boto3

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

MAX_WAIT_FOR_RESPONSE = 10
WAIT_INCREMENT = 1

def handler(event, context):
    logger.info('REQUEST RECEIVED:\n %s', event)
    logger.info('REQUEST RECEIVED:\n %s', context)
    pipeline = event["queryStringParameters"]['pipeline']
    stage = event["queryStringParameters"]['stage']
    action = event["queryStringParameters"]['action']
    approval_action = 'transition'
    pipelineexecutionid = event["queryStringParameters"]['pipelineexecutionid']
    client = boto3.client('codepipeline')
    r = client.get_pipeline_state(name=pipeline)['stageStates']
    print(r)
    s = next((x for x in r if x['stageName'] == stage and x['latestExecution']['pipelineExecutionId'] == pipelineexecutionid), None)
    print(s)
    s1 = s['actionStates']
    print(s1)
    s2 = next((y for y in s1 if y['actionName'] == approval_action), None)
    print(s2)
    t = s2['latestExecution']['token']
    print(t)
    result = client.put_approval_result(
        pipelineName=pipeline,
        stageName=stage,
        actionName=approval_action,
        result={
            'summary': 'Automatically approved by Lambda.',
            'status': action
        },
        token=t
    )
    logger.info("Status message: %s", result)
    if action == 'Approved':
        return {"statusCode": 200, "body": json.dumps('Thank you for approving the release!!')}
    elif action == 'Rejected':
        return {"statusCode": 200, "body": json.dumps('rejected.')}
The issue is that after reaching the approval stage, the pipeline sends out a mail but does not wait for user input; it automatically gets Approved or Rejected within 2 to 5 seconds.
Please help me understand what is going wrong here: why is the Lambda not waiting for a response from the API, and why is it approving or rejecting automatically?
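One thing worth ruling out, given the Symantec click-time rewriter visible in the mail link: corporate mail scanners commonly pre-fetch GET URLs before any human clicks them, which would invoke the Lambda within seconds of the mail arriving. A hedged sketch (an assumption about the cause, not a confirmed fix) is to have the GET render only a confirmation page and perform the approval on an explicit POST, since scanners generally do not submit forms:

```python
import json

def handler(event, context):
    # Hypothetical two-step flow: link scanners issue GETs, so the GET
    # only renders a confirmation form; the approval happens on POST.
    method = event.get('httpMethod', 'GET')
    params = event.get('queryStringParameters') or {}
    if method == 'GET':
        body = ('<form method="POST"><button type="submit">'
                'Confirm {}</button></form>').format(params.get('action', ''))
        return {'statusCode': 200,
                'headers': {'Content-Type': 'text/html'},
                'body': body}
    # POST: safe to call put_approval_result here (omitted for brevity).
    return {'statusCode': 200, 'body': json.dumps('Approval recorded.')}
```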

Sending SMS notifications to multiple phone numbers using SNS with boto3

I am trying to send an event-driven notification via SMS using AWS SNS.
I am trying to write a script to send a message to a single number first.
Here is my code:
import json
import boto3

def lambda_handler(event, context):
    client = boto3.client("sns")
    response = client.publish(
        PhoneNumber="+91 xxxxxxxxxx",
        Message="Hello World!"
    )
    print(response)
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
The thing is, I am not receiving any message from this.
Could anyone help me achieve this use case?
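For the multiple-numbers part of the title: SNS's publish call takes a single PhoneNumber, so one straightforward sketch is to fan out with a loop (the helper name and injectable client are my own additions for illustration). Setting the AWS.SNS.SMS.SMSType attribute to Transactional gives the message higher delivery priority, which is a common fix when SMS silently fails to arrive:

```python
def send_sms_to_many(numbers, message, client=None):
    # client is injectable for testing; defaults to the real SNS client.
    if client is None:
        import boto3
        client = boto3.client('sns')
    responses = []
    for number in numbers:
        # publish() accepts exactly one PhoneNumber, hence the loop.
        responses.append(client.publish(
            PhoneNumber=number,
            Message=message,
            MessageAttributes={
                # Transactional SMS are prioritized over Promotional.
                'AWS.SNS.SMS.SMSType': {
                    'DataType': 'String',
                    'StringValue': 'Transactional',
                }
            }))
    return responses
```

For many recipients, subscribing the numbers to an SNS topic and publishing once to the topic is usually the better design than looping.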

Cloud Function with Pub/Sub trigger topic is not working

I have written a simple Cloud Function to print the data and context from a Pub/Sub trigger.
import base64

def main(event, context):
    """Background Cloud Function to be triggered by Pub/Sub.
    Args:
        event (dict): The dictionary with data specific to this type of
            event. The data field contains the PubsubMessage message. The
            attributes field will contain custom attributes if there are any.
        context (google.cloud.functions.Context): The Cloud Functions event
            metadata. The event_id field contains the Pub/Sub message ID. The
            timestamp field contains the publish time.
    """
    print("""This Function was triggered by messageId {} published at {}
    """.format(context.event_id, context.timestamp))
    if 'data' in event:
        name = base64.b64decode(event['data']).decode('utf-8')
    else:
        name = 'World'
    print('Hello {}!'.format(name))
The Cloud Function deploys successfully, but whenever I publish a message to the trigger topic, I cannot see any function execution statement in the logs.
I have already verified that I am calling the main function only and publishing to the right Pub/Sub topic.
I cannot see any error statements, so I am not able to debug.
Any suggestion would be helpful.
I tested your function in the Python 3.8 runtime and everything works fine. Are you using the same Pub/Sub topic you push new messages to?
This is the code that I used on my computer to send Pub/Sub messages:
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
# The `topic_path` method creates a fully qualified identifier
# in the form `projects/{project_id}/topics/{topic_id}`
topic_path = publisher.topic_path("myprojectID", "test")

for n in range(1, 10):
    data = u"Message number {}".format(n)
    # Data must be a bytestring
    data = data.encode("utf-8")
    # When you publish a message, the client returns a future.
    future = publisher.publish(topic_path, data=data)
    print(future.result())
print("Published messages.")
requirements.txt
google-cloud-pubsub
This is the full function code:
import base64

def hello_pubsub(event, context):
    print("""This Function was triggered by messageId {} published at {}
    """.format(context.event_id, context.timestamp))
    if 'data' in event:
        name = base64.b64decode(event['data']).decode('utf-8')
    else:
        name = 'World'
    print('Hello {}!'.format(name))
Expected output:
This Function was triggered by messageId 1449686821351887 published at 2020-08-20T21:26:30.600Z
The logs may appear with a delay of 10-30 secs in Stackdriver.
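A quick way to separate code problems from trigger problems is to exercise the decoding logic locally, without GCP at all. A minimal sketch (decode_event is my own helper mirroring the body of hello_pubsub):

```python
import base64

def decode_event(event):
    # Mirrors the decoding branch of hello_pubsub: a Pub/Sub push event
    # carries its payload base64-encoded under the 'data' key.
    if 'data' in event:
        return base64.b64decode(event['data']).decode('utf-8')
    return 'World'

# Simulate what Pub/Sub would deliver for the message "PubSub":
event = {'data': base64.b64encode('PubSub'.encode('utf-8')).decode('utf-8')}
print('Hello {}!'.format(decode_event(event)))
```

If this prints the expected greeting locally, the function body is fine and the investigation should focus on the trigger topic binding and deployment region.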

Asyncio is not working within my Python 3.7 Lambda

I am trying to create a Python 3.7 Lambda which correctly uses asyncio for threading.
I have tried many different code variations, but here is the latest block. I am using AWS X-Ray to look at the timing, and it is easy to verify that the async is not working correctly. All these tasks and calls are being done synchronously.
import json
import boto3
import asyncio
from botocore.exceptions import ClientError
from aws_xray_sdk.core import xray_recorder
from aws_xray_sdk.core import patch_all

# xray
patch_all()

def lambda_handler(event, context):
    tasks = []
    dict_to_populate = {}
    for item in items:  # items, regions, obj, session_assumed defined elsewhere
        tasks.append(asyncio.ensure_future(do_work(item, dict_to_populate)))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.gather(*tasks))
    loop.close()

async def do_work(item, dict_to_populate):
    # assume regions are obtained
    for region in regions:
        response_vpcs = describe_vpcs(obj['Id'], session_assumed, region)
        if 'Vpcs' in response_vpcs:
            for vpc in response_vpcs['Vpcs']:
                # process
                pass
I expect to see the do_work functions started at essentially the same time (asynchronously), but they are all synchronous according to X-Ray. It is processing synchronously and is populating dict_to_populate as expected.
This is how I have done it in my AWS Lambda: I wanted to make 4 POST requests and then collect all the responses. Hope this helps.
loop = asyncio.get_event_loop()
if loop.is_closed():
    loop = asyncio.new_event_loop()
# In the perform_traces method I do all the POST requests
task = loop.create_task(perform_traces(payloads, message, contact_centre))
unique_match, error = loop.run_until_complete(task)
loop.close()
In the perform_traces method, this is how I have used wait with the session:
future_dds_responses = []
async with aiohttp.ClientSession() as session:
    for payload in payloads:
        future_dds_responses.append(dds_async_trace(session, payload, contact_centre))
    dds_responses, pending = await asyncio.wait(future_dds_responses)
In dds_async_trace, this is how I have done the POST using the aiohttp.ClientSession session:
async with session.post(pds_url,
                        data=populated_template_payload,
                        headers=PDS_HEADERS,
                        ssl=ssl_context) as response:
    status_code = response.status
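The likely root cause in the question's code is that describe_vpcs is a blocking boto3 call: nothing inside do_work ever awaits, so the event loop runs each task to completion in sequence. The answer above fixes this with aiohttp, whose calls are genuinely awaitable; another option, if you want to keep boto3, is to push each blocking call onto the loop's thread-pool executor. A sketch of that approach, with time.sleep standing in for the blocking AWS call:

```python
import asyncio
import time

def blocking_call(region):
    # Stand-in for a blocking boto3 call such as describe_vpcs().
    time.sleep(0.2)
    return region

async def do_work(loop, regions):
    # run_in_executor moves each blocking call onto a worker thread,
    # so the event loop can overlap them instead of running in series.
    futures = [loop.run_in_executor(None, blocking_call, r) for r in regions]
    return await asyncio.gather(*futures)

def lambda_handler(event, context):
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(
            do_work(loop, ['eu-west-1', 'us-east-1', 'us-west-2']))
    finally:
        loop.close()
```

With three 0.2 s calls, the handler finishes in roughly 0.2 s instead of 0.6 s, which is the overlap X-Ray should also show.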

How to do HTTP long polling with Django Channels

I'm trying to implement HTTP long polling for a web request, but can't seem to find a suitable example in the Channels documentation; everything is about WebSockets.
What I need to do when consuming the HTTP message is either:
wait for a message on a Group that will be sent when a certain model is saved (probably using signals), or
wait for a timeout, if no message is received,
and then return something to the client.
Right now I have the code that can be seen in the examples:
def http_consumer(message):
    # Make standard HTTP response - access ASGI path attribute directly
    response = HttpResponse("Hello world! You asked for %s" % message.content['path'])
    # Encode that response into message format (ASGI)
    for chunk in AsgiHandler.encode_response(response):
        message.reply_channel.send(chunk)
So I have to return something from this http_consumer that indicates I have nothing to send for now, but I can't block here. Maybe I can just not return anything? Then I have to catch the new message on a specific Group, or reach the timeout, and send the response to the client.
It seems that I will need to store the message.reply_channel somewhere so that I can respond later, but I'm at a loss as to how to:
catch the group message and generate the response
generate a response when no message was received (timeout); maybe the delay server can work here?
So, the way I ended up doing this is described below.
In the consumer, if I find that I have no immediate response to send, I will store the message.reply_channel on a Group that will be notified in the case of relevant events, and schedule a delayed message that will be triggered when the max time to wait is reached.
group_name = group_name_from_mac(mac_address)
Group(group_name).add(message.reply_channel)
message.channel_session['will_wait'] = True
delayed_message = {
    'channel': 'long_polling_terminator',
    'content': {'mac_address': mac_address,
                'reply_channel': message.reply_channel.name,
                'group_name': group_name},
    'delay': settings.LONG_POLLING_TIMEOUT
}
Channel('asgi.delay').send(delayed_message, immediately=True)
Then, two things can happen. Either we get a message on the relevant Group and a response is sent early, or the delayed message arrives signalling that we have exhausted the time we had to wait and must return a response indicating that there were no events.
In order to trigger the message when a relevant event occurs I'm relying on Django signals:
class PortalConfig(AppConfig):
    name = 'portal'

    def ready(self):
        from .models import STBMessage
        post_save.connect(notify_new_message, sender=STBMessage)

def notify_new_message(sender, **kwargs):
    mac_address = kwargs['instance'].set_top_box.id
    layer = channel_layers['default']
    group_name = group_name_from_mac(mac_address)
    response = JsonResponse({'error': False, 'new_events': True})
    group = Group(group_name)
    for chunk in AsgiHandler.encode_response(response):
        group.send(chunk)
When the timeout expires, I get a message on the long_polling_terminator channel and I need to send a message that indicates that there are no events:
def long_polling_terminator(message):
    reply_channel = Channel(message['reply_channel'])
    group_name = message['group_name']
    mac_address = message['mac_address']
    layer = channel_layers['default']
    boxes = layer.group_channels(group_name)
    if message['reply_channel'] in boxes:
        response = JsonResponse({'error': False, 'new_events': False})
        write_http_response(response, reply_channel)
        return
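write_http_response is referenced here but never shown. Presumably it does the same ASGI encoding seen in the earlier consumers; a minimal sketch of what it might look like (the injectable encoder parameter is my own addition so the helper can be exercised without Channels installed):

```python
def write_http_response(response, reply_channel, encoder=None):
    # Encode a Django response into ASGI message chunks and send each
    # chunk down the stored reply channel (Channels 1.x API).
    if encoder is None:
        from channels.handler import AsgiHandler  # Channels 1.x
        encoder = AsgiHandler.encode_response
    for chunk in encoder(response):
        reply_channel.send(chunk)
```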
The last thing to do is remove this reply_channel from the Group, and I do this in an http.disconnect consumer:
def process_disconnect(message, group_name_from_mac):
    if message.channel_session.get('will_wait', False):
        reply_channel = Channel(message['reply_channel'])
        mac_address = message.channel_session['mac_address']
        group_name = group_name_from_mac(mac_address)
        Group(group_name).discard(reply_channel)