AWS - Read SQS Message via Lambda

The code below is copied from the AWS documentation; my code is almost the same except for the queue URL definition part.
I want to print out the message body in JSON format, but the output seems to have some extra things. How can I get rid of them without using substring?
import boto3

# Create SQS client
sqs = boto3.client('sqs')

# Placeholder for the queue URL definition mentioned above
queue_url = 'SQS_QUEUE_URL'

# Receive message from SQS queue
response = sqs.receive_message(
    QueueUrl=queue_url,
    AttributeNames=[
        'SentTimestamp'
    ],
    MaxNumberOfMessages=1,
    MessageAttributeNames=[
        'All'
    ],
    VisibilityTimeout=0,
    WaitTimeSeconds=0
)

message = response['Messages'][0]
receipt_handle = message['ReceiptHandle']

# Delete received message from queue, as in the AWS docs sample this is copied from
sqs.delete_message(
    QueueUrl=queue_url,
    ReceiptHandle=receipt_handle
)
print('Received and deleted message: %s' % message)
The printed output has the following format:
START RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70 Version: $LATEST
{JSON body}
END RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70
REPORT RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70 Duration: 914.38 ms Billed Duration: 1000 ms Memory Size: 128 MB Max Memory Used: 71 MB Init Duration: 247.03 ms
What I really want is just the {JSON body}. How can I get rid of the rest?

Unfortunately you can't remove
START RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70 Version: $LATEST
END RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70
REPORT RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70 Duration: 914.38 ms Billed Duration: 1000 ms Memory Size: 128 MB Max Memory Used: 71 MB Init Duration: 247.03 ms
from CloudWatch Logs. These START/END/REPORT lines are emitted by the Lambda service for every invocation; they are separate log events, not part of what your print statement outputs.
However, you can use log event filters in the console to locate the specific {JSON body} of interest; for example, a filter pattern like -"START" -"END" -"REPORT" should exclude the platform lines. It is the most basic and fastest solution.
More complex filtering of your logs is also possible, but I think this is not what you are after.
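Within the function itself you can also print only the message body instead of the whole message dict. A minimal sketch, assuming the body is a JSON string (which depends on what the producer sends):
import json

# message = response['Messages'][0], as in the snippet above
body = json.loads(message['Body'])  # parse only the Body field
print(json.dumps(body))             # prints just the {JSON body}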

Related

AWS Lambda hangs until it times out on Stripe invoices list

I am using AWS Lambda to host a Node.js service that fetches my open invoices on Stripe, executes a payment, and updates my database.
The problem is that most of the time, but not all the time (sometimes everything goes how it should), it hangs on the call to the invoice list and does nothing.
Here's the part of the code where the log stops:
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY, {
  maxNetworkRetries: 1,
  timeout: 2000
});
[other imports]

const microservice = async (event, context, callback) => {
  [some code including database connection]
  console.log('retrieving all open invoices...')
  let invoices;
  try {
    invoices = await stripe.invoices.list({
      status: 'open',
      limit: 100,
    });
    console.log(invoices.data.length + ' data retrieved.');
  } catch (error) {
    console.log('Unable fetch stripe invoices : ', error);
    console.log('Exiting due to stripe connection error.');
    reports.push(new Report('Unable fetch stripe invoices', 'ERROR'));
    return {
      statusCode: 500,
    };
  }
  [code that process invoices]
  return {};
};

module.exports.microservice = microservice;
And here is the log output:
START RequestId: d628aa1e-dee6-4cc6-9ce0-f7c11cf73249 Version: $LATEST
2021-10-26T00:04:05.741Z d628aa1e-dee6-4cc6-9ce0-f7c11cf73249 INFO Connecting to database...
2021-10-26T00:04:05.929Z d628aa1e-dee6-4cc6-9ce0-f7c11cf73249 INFO Executing (default): SELECT 1+1 AS result
2021-10-26T00:04:05.931Z d628aa1e-dee6-4cc6-9ce0-f7c11cf73249 INFO Connection has been established successfully.
2021-10-26T00:04:05.931Z d628aa1e-dee6-4cc6-9ce0-f7c11cf73249 INFO retrieving all open invoices...
END RequestId: d628aa1e-dee6-4cc6-9ce0-f7c11cf73249
REPORT RequestId: d628aa1e-dee6-4cc6-9ce0-f7c11cf73249 Duration: 15015.49 ms Billed Duration: 15000 ms Memory Size: 400 MB Max Memory Used: 40 MB
2021-10-26T00:04:20.754Z d628aa1e-dee6-4cc6-9ce0-f7c11cf73249 Task timed out after 15.02 seconds
And when it goes all right, it looks like this:
START RequestId: e5fb6b08-adf9-433f-b1da-fd9ec29dde31 Version: $LATEST
2021-10-25T14:35:03.369Z e5fb6b08-adf9-433f-b1da-fd9ec29dde31 INFO Connecting to database...
2021-10-25T14:35:03.590Z e5fb6b08-adf9-433f-b1da-fd9ec29dde31 INFO Executing (default): SELECT 1+1 AS result
2021-10-25T14:35:03.600Z e5fb6b08-adf9-433f-b1da-fd9ec29dde31 INFO Connection has been established successfully.
2021-10-25T14:35:03.600Z e5fb6b08-adf9-433f-b1da-fd9ec29dde31 INFO retrieving all open invoices...
2021-10-25T14:35:04.011Z e5fb6b08-adf9-433f-b1da-fd9ec29dde31 INFO 0 data retrieved.
2021-10-25T14:35:04.011Z e5fb6b08-adf9-433f-b1da-fd9ec29dde31 INFO Everything went smoothly !
END RequestId: e5fb6b08-adf9-433f-b1da-fd9ec29dde31
REPORT RequestId: e5fb6b08-adf9-433f-b1da-fd9ec29dde31 Duration: 646.58 ms Billed Duration: 647 ms Memory Size: 400 MB
I don't get why it hangs with no error or log...
Network issues can happen for various reasons. In your case, you can try reducing the limit (e.g. limit: 30) and setting your client library to retry the connection, with maxNetworkRetries: 3 or whatever number fits your application. When this is set, Stripe will retry the connection when a timeout error occurs.
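For reference, a minimal sketch of those two settings using Stripe's Python client (the Node constructor in the question takes the equivalent maxNetworkRetries and timeout options; the key and values here are placeholders):
import stripe

stripe.api_key = "sk_test_..."  # placeholder secret key
stripe.max_network_retries = 3  # retry requests that fail with network errors or timeouts

# Fetch open invoices in smaller pages to shorten each call
invoices = stripe.Invoice.list(status="open", limit=30)
print(len(invoices.data), "invoices retrieved")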
This is a perfect match for Step Functions use cases. It would allow you to orchestrate the steps of getting the invoices and processing them, and to easily design a retry mechanism in case of errors.
It is not really a solution, but what is causing the issue is that Stripe takes a very long time to return fewer than 100 results.
We found a workaround that avoids fetching this list.

AWS (GovCloud) Lambda Destination Not Triggering

I am working in AWS GovCloud and have the following configuration in AWS Lambda:
A Lambda function which decodes a payload
A Kinesis Stream set as a trigger for the aforementioned function
A Lambda Destination (we have tried Lambda functions as well as SQS, SNS)
No matter the configuration, I cannot seem to get Lambda to trigger the destination function (or queue in the event of SQS).
Here is the current Lambda function. I have tried many permutations of the result/return payload, to no avail.
import base64
import json

def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))
    for record in event['Records']:
        payload = base64.b64decode(record['kinesis']['data']).decode('utf-8', 'ignore')
        print("Success")
        result = {
            "statusCode": 202,
            "headers": {
                #'Content-Type': 'application/json',
            },
            "body": f'{payload}'
        }
    return json.dumps(result)
I then send a message to Kinesis with the AWS CLI (I have noted that "Test" in the console does not observe destinations, as per Jared Short).
Every 0.1s: aws kinesis put-records --stream-name test-stream --records Data=SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IGZyb20gdGhlIEFXUyBDTEkh,PartitionKey=partitionkey1    Thu Jul 8 19:03:54 2021

{
    "FailedRecordCount": 0,
    "Records": [
        {
            "SequenceNumber": "49619938447946944252072058244333476686328287240252293122",
            "ShardId": "shardId-000000000000"
        }
    ]
}
Using CloudWatch metrics and logs, I am able to observe the function being triggered by the messages sent to Kinesis every 0.1 seconds.
The metrics charts indicate a success (as I expect).
Here is an example log from Cloudwatch:
START RequestId: 0cf3fb87-06e6-4e35-9de8-b30147e7be9d Version: $LATEST
Loading function
Success
END RequestId: 0cf3fb87-06e6-4e35-9de8-b30147e7be9d
REPORT RequestId: 0cf3fb87-06e6-4e35-9de8-b30147e7be9d Duration: 1.27 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB Init Duration: 113.64 ms
START RequestId: e663fa4a-2d0b-42d6-9e38-599712b71101 Version: $LATEST
Success
END RequestId: e663fa4a-2d0b-42d6-9e38-599712b71101
REPORT RequestId: e663fa4a-2d0b-42d6-9e38-599712b71101 Duration: 1.04 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB
START RequestId: b1373bbe-d2c6-49fb-a71f-dcedaf9210eb Version: $LATEST
Success
END RequestId: b1373bbe-d2c6-49fb-a71f-dcedaf9210eb
REPORT RequestId: b1373bbe-d2c6-49fb-a71f-dcedaf9210eb Duration: 0.98 ms Billed Duration: 1 ms Memory Size: 128 MB Max Memory Used: 51 MB
START RequestId: e0382653-9c33-44d6-82a7-a82f0f416297 Version: $LATEST
Success
END RequestId: e0382653-9c33-44d6-82a7-a82f0f416297
REPORT RequestId: e0382653-9c33-44d6-82a7-a82f0f416297 Duration: 1.05 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB
START RequestId: f9600ef5-419f-4271-9680-7368ccc5512d Version: $LATEST
Success
However, the CloudWatch logs/metrics for the destination Lambda function or SQS queue clearly show that the destination is not being triggered.
Over the course of troubleshooting, I have over-provisioned IAM policies on the Lambda function execution role, so I am fairly confident it is not an IAM-related issue. Additionally, both functions share the same execution role.
One thing I am not clear on after reviewing the AWS documentation and third-party information is the criteria by which AWS determines success or failure for a given function. I am currently researching the invocation docs in search of what might be wrong here, but my interpretation is that AWS knows our function is successful based on the above CloudWatch metrics showing a 100% success rate.
Does anyone know what I am doing wrong, or how to troubleshoot the destination trigger for Lambda?
Edit: As pointed out, the code is not correct for multiple record events. This is the result of senseless troubleshooting changes made to the code while trying to get the Destination to trigger. Even something as simple as this does not invoke the destination.
import base64
import json

def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))
    # for record in event['Records']:
    #     payload = base64.b64decode(record['kinesis']['data']).decode('utf-8', 'ignore')
    #     print("Success")
    #     result = {
    #         "statusCode": 202,
    #         "headers": {
    #             'Content-Type': 'application/json',
    #         },
    #         "body": '{"Success":True, payload}'
    #     }
    return { "result": "OK" }
So, the question: can someone demonstrate that it is possible to have a Kinesis stream event source trigger a Lambda function which successfully invokes a Lambda destination in AWS GovCloud?

How to get the user login duration from CloudWatch Logs Insights

How can I get the user login duration by writing a query?
2020-04-24T17:41:21.216+05:30  END RequestId: 8caf16f6-b1ef-434b-a318-c8e3e545a737
2020-04-24T17:41:21.216+05:30  REPORT RequestId: 8caf16f6-b1ef-434b-a318-c8e3e545a737 Duration: 368.89 ms Billed Duration: 400 ms Memory Size: 448 MB Max Memory Used: 121 MB Init Duration: 1037.35 ms
2020-04-24T17:41:21.215+05:30  Login successfull
2020-04-24T17:41:21.215+05:30  ((1, 'Admin', 'krish', 'kittu', '6,7,8,9', 1, 'PRA0001))'),)
2020-04-24T17:41:20.846+05:30  START RequestId: 8caf16f6-b1ef-434b-a318-c8e3e545a737 Version: $LATEST
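One approach, assuming the duration you want is that of the Lambda invocation handling the login: CloudWatch Logs Insights automatically parses the REPORT line into the @duration field, so a minimal query sketch would be:
filter @type = "REPORT"
| fields @timestamp, @requestId, @duration
| sort @timestamp desc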

CloudWatch runs Lambda task but does not restart the process

I don't know how to properly describe the trouble I'm having, but I'll show it to you.
I have a Lambda function designed to download a log of GPS bus positions and save it to S3.
import boto3
from datetime import datetime, timedelta
import urllib.request

now = datetime.now() - timedelta(hours=3, minutes=0)
datetimestamp = now.strftime("%d%m%Y%H%M")
print(datetimestamp)

bucket = "gps-onibus-rio-janeiro"
s3folder = "schedules"
filename = "GPS" + datetimestamp + ".csv"
filepath = '/tmp/' + filename

baseURL = 'http://dadosabertos.rio.rj.gov.br/apiTransporte/apresentacao/csv/onibus.cfm'
urllib.request.urlretrieve(baseURL, filepath)

def lambda_handler(event, context):
    s3 = boto3.client('s3')
    key = s3folder + '/' + filename
    s3.upload_file(filepath, bucket, key)
It's working perfectly. I then created a CloudWatch rule to run it every five minutes. The problem is, it runs but keeps referring to the first file and does not create a new file once the five minutes have passed. I'll show you the log.
220320201306
START RequestId: 4cc5df77-dfbb-45e9-85e9-31c6021446fb Version: 5
END RequestId: 4cc5df77-dfbb-45e9-85e9-31c6021446fb
REPORT RequestId: 4cc5df77-dfbb-45e9-85e9-31c6021446fb Duration: 739.87 ms Billed Duration: 800 ms Memory Size: 256 MB Max Memory Used: 82 MB Init Duration: 893.84 ms
START RequestId: 3c76af58-0213-48dc-be77-912abb6212a3 Version: 5
END RequestId: 3c76af58-0213-48dc-be77-912abb6212a3
REPORT RequestId: 3c76af58-0213-48dc-be77-912abb6212a3 Duration: 251.99 ms Billed Duration: 300 ms Memory Size: 256 MB Max Memory Used: 84 MB
START RequestId: 7dc6cb90-1551-4b9e-aee4-3a8df3ddf74b Version: 5
END RequestId: 7dc6cb90-1551-4b9e-aee4-3a8df3ddf74b
REPORT RequestId: 7dc6cb90-1551-4b9e-aee4-3a8df3ddf74b Duration: 186.91 ms Billed Duration: 200 ms Memory Size: 256 MB Max Memory Used: 84 MB
START RequestId: 8d68e7d1-9b44-4830-8e01-f6e33b150dc2 Version: 5
END RequestId: 8d68e7d1-9b44-4830-8e01-f6e33b150dc2
REPORT RequestId: 8d68e7d1-9b44-4830-8e01-f6e33b150dc2 Duration: 234.85 ms Billed Duration: 300 ms Memory Size: 256 MB Max Memory Used: 85 MB
START RequestId: 8ae750c4-73c3-4425-90aa-653b0a1be6e8 Version: 5
END RequestId: 8ae750c4-73c3-4425-90aa-653b0a1be6e8
REPORT RequestId: 8ae750c4-73c3-4425-90aa-653b0a1be6e8 Duration: 184.37 ms Billed Duration: 200 ms Memory Size: 256 MB Max Memory Used: 85 MB
START RequestId: 6c80cc04-f06e-43a9-8764-bdee41619b05 Version: 5
END RequestId: 6c80cc04-f06e-43a9-8764-bdee41619b05
REPORT RequestId: 6c80cc04-f06e-43a9-8764-bdee41619b05 Duration: 221.07 ms Billed Duration: 300 ms Memory Size: 256 MB Max Memory Used: 85 MB
Is there any way to force it to begin again after five minutes to create a new file?
Move the filename construction code inside the handler. Code outside the handler runs only once, when Lambda initializes the execution environment; later invocations reuse that environment, so the timestamp, filename, and downloaded file are never refreshed. A sketch of the fix follows.
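A minimal sketch of that change, keeping the rest of the code as posted:
import boto3
from datetime import datetime, timedelta
import urllib.request

bucket = "gps-onibus-rio-janeiro"
s3folder = "schedules"
baseURL = 'http://dadosabertos.rio.rj.gov.br/apiTransporte/apresentacao/csv/onibus.cfm'

def lambda_handler(event, context):
    # Built per invocation, so every run gets a fresh timestamp and filename
    now = datetime.now() - timedelta(hours=3, minutes=0)
    datetimestamp = now.strftime("%d%m%Y%H%M")
    filename = "GPS" + datetimestamp + ".csv"
    filepath = '/tmp/' + filename

    # Download a fresh snapshot and upload it to S3
    urllib.request.urlretrieve(baseURL, filepath)
    s3 = boto3.client('s3')
    key = s3folder + '/' + filename
    s3.upload_file(filepath, bucket, key)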

zeit now with flask + telegram

I am using https://zeit.co (free) and was thinking of setting up a webhook for a Telegram chat bot.
I sent a message from the Telegram app on my phone, and it is supposed to post JSON to the webhook URL. It does post the data, but I cannot get the JSON. It seems zeit.co cannot handle the JSON?
It is as if something gets stuck whenever I try to call request.json.
@app.route("/new_message", methods=["POST", "GET"])
def telegram_webhook_handler():
    try:
        print(request.json)
        if request.method == 'POST':
            r = request.get_json()
            chat_id = r['message']['chat']['id']
            text = "how are you?"
            send_message(CHAT_ID, text)
        else:
            send_message(CHAT_ID, "This is a get")
    except Exception as e:
        print(e)
        pass
    return jsonify({"ping": "pong"})
The error message from zeit.co
12/27 01:42 PM (40s)
REPORT RequestId: 3462880b-09d4-11e9-b07e-77492ad19973 Duration: 300021.80 ms Billed Duration: 300000 ms Memory Size: 1024 MB Max Memory Used: 42 MB
12/27 01:42 PM (40s)
2018-12-27T12:42:42.838Z 3462880b-09d4-11e9-b07e-77492ad19973 Task timed out after 300.02 seconds
Any idea how I can get the webhook data?
Cheers
Your code has exceeded its duration limit.
Duration: 300021.80 ms Billed Duration: 300000 ms
If you want to increase the duration limit, you will have to upgrade your Zeit account.