Cloudformation CustomResource "Response is not valid JSON" - amazon-web-services

I'm trying to implement a Custom Resource for the first time. Whenever I try to deploy the containing stack, creation of the Custom Resource fails with "Response is not valid JSON", but I'm not sure why.
My Custom Resource code is below:
import json
import os
import requests
import logging

# https://stackoverflow.com/questions/37703609
log = logging.getLogger(__name__)
log.setLevel(logging.DEBUG)


def handler(event, context):
    try:
        log.debug(f'DEBUG - called!')
        log.info(f'Event: {event}')
        if event['RequestType'] in ['Create', 'Update']:
            pass
            # I _will_ have logic here eventually - when this is working!
        else:
            log.info('Non-create/update RequestType')
        # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/crpg-ref-responses.html
        responseData = dict(
            {
                'Status': 'SUCCESS',
                'PhysicalResourceId': event['PhysicalResourceId']
                    if 'PhysicalResourceId' in event
                    else context.log_stream_name
            },
            **{key: event[key] for key in
               ['StackId', 'RequestId', 'LogicalResourceId']}
        )
        log.debug(f'Response data: {responseData}')
        requests.put(event['ResponseURL'], data=responseData)
    except Exception as e:
        log.error(f'Lambda failed! {e}')
        # https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/crpg-ref-responses.html
        responseData = dict(
            {
                'Status': 'FAILED',
                'Reason': f'See logs in {context["logStreamName"]}',
                'PhysicalResourceId': event['PhysicalResourceId']
                    if 'PhysicalResourceId' in event
                    else context.log_stream_name,
            },
            **{key: event.get(key, '') for key in
               ['StackId', 'RequestId', 'LogicalResourceId']}
        )
        log.debug(f'Response data: {responseData}')
        requests.put(event['ResponseURL'], data=responseData)
And some example log messages from a Create/Delete loop are below:
START RequestId: f151f270-acc6-4cff-b4c0-0eafaf4e5fef Version: $LATEST
[DEBUG] 2021-03-22T19:29:09.33Z f151f270-acc6-4cff-b4c0-0eafaf4e5fef DEBUG - called!
[INFO] 2021-03-22T19:29:09.33Z f151f270-acc6-4cff-b4c0-0eafaf4e5fef Event: {'RequestType': 'Create', 'ServiceToken': 'arn:aws:lambda:us-east-1:119281758091:function:prod-stage-ApplicationSta-FetchCommitHistoryFuncti-T7Z15V6TDZSO', 'ResponseURL': 'https://cloudformation-custom-resource-response-useast1.s3.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-1%3A119281758091%3Astack/prod-stage-ApplicationStack/94d24fa0-8b44-11eb-a285-0ec6977a61f1%7CFetchCommitsCustomResource%7C1d80a8f0-f72c-4e50-8039-6c0509dd24ee?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20210322T192907Z&X-Amz-SignedHeaders=host&X-Amz-Expires=7200&X-Amz-Credential=AKIA6L7Q4OWT3GW5BT7K%2F20210322%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=4cc0d2590330f42e8928bdfc2bf5a30c31bfb6228d681e11acf25ef17b8dde2c', 'StackId': 'arn:aws:cloudformation:us-east-1:119281758091:stack/prod-stage-ApplicationStack/94d24fa0-8b44-11eb-a285-0ec6977a61f1', 'RequestId': '1d80a8f0-f72c-4e50-8039-6c0509dd24ee', 'LogicalResourceId': 'FetchCommitsCustomResource', 'ResourceType': 'AWS::CloudFormation::CustomResource', 'ResourceProperties': {'ServiceToken': 'arn:aws:lambda:us-east-1:119281758091:function:prod-stage-ApplicationSta-FetchCommitHistoryFuncti-T7Z15V6TDZSO'}}
[DEBUG] 2021-03-22T19:29:09.293Z f151f270-acc6-4cff-b4c0-0eafaf4e5fef Response data: {'Status': 'SUCCESS', 'PhysicalResourceId': '2021/03/22/[$LATEST]8801d1d794c04d25ad31fa43223cbe93', 'StackId': 'arn:aws:cloudformation:us-east-1:119281758091:stack/prod-stage-ApplicationStack/94d24fa0-8b44-11eb-a285-0ec6977a61f1', 'RequestId': '1d80a8f0-f72c-4e50-8039-6c0509dd24ee', 'LogicalResourceId': 'FetchCommitsCustomResource'}
END RequestId: f151f270-acc6-4cff-b4c0-0eafaf4e5fef
REPORT RequestId: f151f270-acc6-4cff-b4c0-0eafaf4e5fef Duration: 444.73 ms Billed Duration: 445 ms Memory Size: 128 MB Max Memory Used: 61 MB Init Duration: 463.55 ms
START RequestId: 616f7c98-1d76-4459-96d3-6257c5620a99 Version: $LATEST
[DEBUG] 2021-03-22T19:29:20.640Z 616f7c98-1d76-4459-96d3-6257c5620a99 DEBUG - called!
[INFO] 2021-03-22T19:29:20.640Z 616f7c98-1d76-4459-96d3-6257c5620a99 Event: {'RequestType': 'Delete', 'ServiceToken': 'arn:aws:lambda:us-east-1:119281758091:function:prod-stage-ApplicationSta-FetchCommitHistoryFuncti-T7Z15V6TDZSO', 'ResponseURL': 'https://cloudformation-custom-resource-response-useast1.s3.amazonaws.com/arn%3Aaws%3Acloudformation%3Aus-east-1%3A119281758091%3Astack/prod-stage-ApplicationStack/94d24fa0-8b44-11eb-a285-0ec6977a61f1%7CFetchCommitsCustomResource%7C85da252d-1d0d-4fe5-84eb-755f1e6dc646?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20210322T192920Z&X-Amz-SignedHeaders=host&X-Amz-Expires=7200&X-Amz-Credential=AKIA6L7Q4OWT3GW5BT7K%2F20210322%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=0bd2e0a28783980b9014289cd852bf02252cdf5cd11f4405d00707002e6e39e9', 'StackId': 'arn:aws:cloudformation:us-east-1:119281758091:stack/prod-stage-ApplicationStack/94d24fa0-8b44-11eb-a285-0ec6977a61f1', 'RequestId': '85da252d-1d0d-4fe5-84eb-755f1e6dc646', 'LogicalResourceId': 'FetchCommitsCustomResource', 'PhysicalResourceId': 'prod-stage-ApplicationStack-FetchCommitsCustomResource-1VTI0Q0VKIJDM', 'ResourceType': 'AWS::CloudFormation::CustomResource', 'ResourceProperties': {'ServiceToken': 'arn:aws:lambda:us-east-1:119281758091:function:prod-stage-ApplicationSta-FetchCommitHistoryFuncti-T7Z15V6TDZSO'}}
[INFO] 2021-03-22T19:29:20.640Z 616f7c98-1d76-4459-96d3-6257c5620a99 Non-create/update RequestType
[DEBUG] 2021-03-22T19:29:20.640Z 616f7c98-1d76-4459-96d3-6257c5620a99 Response data: {'Status': 'SUCCESS', 'PhysicalResourceId': 'prod-stage-ApplicationStack-FetchCommitsCustomResource-1VTI0Q0VKIJDM', 'StackId': 'arn:aws:cloudformation:us-east-1:119281758091:stack/prod-stage-ApplicationStack/94d24fa0-8b44-11eb-a285-0ec6977a61f1', 'RequestId': '85da252d-1d0d-4fe5-84eb-755f1e6dc646', 'LogicalResourceId': 'FetchCommitsCustomResource'}
END RequestId: 616f7c98-1d76-4459-96d3-6257c5620a99
REPORT RequestId: 616f7c98-1d76-4459-96d3-6257c5620a99 Duration: 174.97 ms Billed Duration: 175 ms Memory Size: 128 MB Max Memory Used: 61 MB
The JSON looks valid to me (in particular, PhysicalResourceId is non-empty, which was an issue when I was first fetching it with just event['PhysicalResourceId']). What's the issue?

I think you may need to convert the dictionary into a JSON string in your requests.put call.
Try something like:
requests.put(event['ResponseURL'], data=json.dumps(responseData))
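To make the suggestion concrete, here is a minimal sketch of sending the response with the body serialized explicitly (the send_response helper name and its layout are just illustrative, not your exact code):

import json
import requests

def send_response(event, context, status, reason=None):
    # Build the response body CloudFormation expects, then serialize it explicitly.
    response_body = {
        'Status': status,
        'Reason': reason or f'See logs in {context.log_stream_name}',
        'PhysicalResourceId': event.get('PhysicalResourceId', context.log_stream_name),
        'StackId': event['StackId'],
        'RequestId': event['RequestId'],
        'LogicalResourceId': event['LogicalResourceId'],
    }
    # data=json.dumps(...) sends a JSON string; passing the dict directly makes
    # requests form-encode it, which CloudFormation rejects as "Response is not valid JSON".
    requests.put(event['ResponseURL'], data=json.dumps(response_body))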

Box SDK client as_user request requires higher privileges than provided by the access token

I have this code in my Django project:
# implementation
module_dir = os.path.dirname(os.path.dirname(os.path.dirname(__file__)))  # get current directory
box_config_path = os.path.join(module_dir, 'py_scripts/transactapi_funded_trades/config.json')  # the config JSON downloaded from Box
config = JWTAuth.from_settings_file(box_config_path)  # create a config via the JSON file
client = Client(config)  # create a client via the config
user_to_impersonate = client.user(user_id='8********6')  # get the main user
user_client = client.as_user(user_to_impersonate)  # impersonate the main user
The above code is what I use to switch from the service account that Box creates to the main account user with ID 8********6. No error is thrown so far, but when I implement the actual logic to retrieve the files, I get this:
[2022-09-13 02:50:26,146: INFO/MainProcess] GET https://api.box.com/2.0/folders/0/items {'headers': {'As-User': '8********6',
'Authorization': '---LMHE',
'User-Agent': 'box-python-sdk-3.3.0',
'X-Box-UA': 'agent=box-python-sdk/3.3.0; env=python/3.10.4'},
'params': {'offset': 0}}
[2022-09-13 02:50:26,578: WARNING/MainProcess] "GET https://api.box.com/2.0/folders/0/items?offset=0" 403 0
{'Date': 'Mon, 12 Sep 2022 18:50:26 GMT', 'Transfer-Encoding': 'chunked', 'x-envoy-upstream-service-time': '100', 'www-authenticate': 'Bearer realm="Service", error="insufficient_scope", error_description="The request requires higher privileges than provided by the access token."', 'box-request-id': '07cba17694f7ea32f0c2cd42790bce39e', 'strict-transport-security': 'max-age=31536000', 'Via': '1.1 google', 'Alt-Svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"'}
b''
[2022-09-13 02:50:26,587: WARNING/MainProcess] Message: None
Status: 403
Code: None
Request ID: None
Headers: {'Date': 'Mon, 12 Sep 2022 18:50:26 GMT', 'Transfer-Encoding': 'chunked', 'x-envoy-upstream-service-time': '100', 'www-authenticate': 'Bearer realm="Service", error="insufficient_scope", error_description="The request requires higher privileges than provided by the access token."', 'box-request-id': '07cba17694f7ea32f0c2cd42790bce39e', 'strict-transport-security': 'max-age=31536000', 'Via': '1.1 google', 'Alt-Svc': 'h3=":443"; ma=2592000,h3-29=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"'}
URL: https://api.box.com/2.0/folders/0/items
Method: GET
Context Info: None
It says it needs higher access. What might I be doing wrong? I've been stuck with this particular problem for a little over a week now, so any help is highly appreciated.
Can you test to see if the user is in fact being impersonated?
Something like this:
from boxsdk import JWTAuth, Client


def main():
    """main function"""
    auth = JWTAuth.from_settings_file('./.jwt.config.json')
    auth.authenticate_instance()
    client = Client(auth)

    me = client.user().get()
    print(f"Service account user: {me.id}:{me.name}")

    user_id_to_impersonate = '18622116055'
    folder_of_user_to_impersonate = '0'

    user_to_impersonate = client.user(user_id=user_id_to_impersonate).get()
    # the .get() is just to be able to print the impersonated user
    print(f"User to impersonate: {user_to_impersonate.id}:{user_to_impersonate.name}")

    user_client = client.as_user(user_to_impersonate)

    items = user_client.folder(folder_id=folder_of_user_to_impersonate).get_items()
    print(f"Items in folder:{items}")

    # we need a loop to actually get the items' info
    for item in items:
        print(f"Item: {item.type}\t{item.id}\t{item.name}")


if __name__ == '__main__':
    main()
Check out my output:
Service account user: 20344589936:UI-Elements-Sample
User to impersonate: 18622116055:Rui Barbosa
Items in folder:<boxsdk.pagination.limit_offset_based_object_collection.LimitOffsetBasedObjectCollection object at 0x105fffe20>
Item: folder 172759373899 Barduino User Folder
Item: folder 172599089223 Bookings
Item: folder 162833533610 Box Reports
Item: folder 163422716106 Box UI Elements Demo

AWS (GovCloud) Lambda Destination Not Triggering

I am working in AWS GovCloud. I have the following configuration in AWS Lambda:
A Lambda function which decodes a payload
A Kinesis Stream set as a trigger for the aforementioned function
A Lambda Destination (we have tried Lambda functions as well as SQS, SNS)
No matter the configuration, I cannot seem to get Lambda to trigger the destination function (or queue in the event of SQS).
Here is the current Lambda function. I have tried many permutations of the result/return payload to no avail.
import base64
import json


def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))
    for record in event['Records']:
        payload = base64.b64decode(record['kinesis']['data']).decode('utf-8', 'ignore')
    print("Success")
    result = {
        "statusCode": 202,
        "headers": {
            # 'Content-Type': 'application/json',
        },
        "body": '{payload}'
    }
    return json.dumps(result)
I then send a message to Kinesis with the AWS CLI (I have noted that "Test" in the console does not trigger destinations, as per Jared Short).
Every 0.1s: aws kinesis put-records --stream-name test-stream --records Data=SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IGZyb20gdGhlIEFXUyBDTEkh,PartitionKey=partitionkey1 Thu Jul 8 19:03:54 2021
{
    "FailedRecordCount": 0,
    "Records": [
        {
            "SequenceNumber": "49619938447946944252072058244333476686328287240252293122",
            "ShardId": "shardId-000000000000"
        }
    ]
}
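For reference, a rough boto3 equivalent of that CLI call (a sketch only; the stream name and base64 payload are taken from the command above):

import base64
import boto3

kinesis = boto3.client('kinesis')

# Same record the CLI sends; put_records expects raw bytes, so decode the base64 first.
kinesis.put_records(
    StreamName='test-stream',
    Records=[{
        'Data': base64.b64decode('SGVsbG8sIHRoaXMgaXMgYSB0ZXN0IGZyb20gdGhlIEFXUyBDTEkh'),
        'PartitionKey': 'partitionkey1',
    }],
)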
Using CloudWatch metrics and logs, I am able to observe the function being triggered by the messages sent to Kinesis every 0.1 seconds.
The metrics charts indicate success (as I expect).
Here is an example log from Cloudwatch:
START RequestId: 0cf3fb87-06e6-4e35-9de8-b30147e7be9d Version: $LATEST
Loading function
Success
END RequestId: 0cf3fb87-06e6-4e35-9de8-b30147e7be9d
REPORT RequestId: 0cf3fb87-06e6-4e35-9de8-b30147e7be9d Duration: 1.27 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB Init Duration: 113.64 ms
START RequestId: e663fa4a-2d0b-42d6-9e38-599712b71101 Version: $LATEST
Success
END RequestId: e663fa4a-2d0b-42d6-9e38-599712b71101
REPORT RequestId: e663fa4a-2d0b-42d6-9e38-599712b71101 Duration: 1.04 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB
START RequestId: b1373bbe-d2c6-49fb-a71f-dcedaf9210eb Version: $LATEST
Success
END RequestId: b1373bbe-d2c6-49fb-a71f-dcedaf9210eb
REPORT RequestId: b1373bbe-d2c6-49fb-a71f-dcedaf9210eb Duration: 0.98 ms Billed Duration: 1 ms Memory Size: 128 MB Max Memory Used: 51 MB
START RequestId: e0382653-9c33-44d6-82a7-a82f0f416297 Version: $LATEST
Success
END RequestId: e0382653-9c33-44d6-82a7-a82f0f416297
REPORT RequestId: e0382653-9c33-44d6-82a7-a82f0f416297 Duration: 1.05 ms Billed Duration: 2 ms Memory Size: 128 MB Max Memory Used: 51 MB
START RequestId: f9600ef5-419f-4271-9680-7368ccc5512d Version: $LATEST
Success
However, viewing the CloudWatch logs/metrics for the destination Lambda function or SQS queue clearly shows that the destination is not being triggered.
Over the course of troubleshooting, I have over-provisioned IAM policies to the Lambda function execution role so I am fairly confident that it is not an IAM related issue. Additionally, both functions are sharing the same execution role.
One thing I am not clear on after reviewing AWS documentation and third-party information is the criteria by which AWS determines success or failure for a given function. I am currently researching the invocation docs in search of what might be wrong here, but my interpretation is that AWS knows our function is successful based on the above CloudWatch metrics showing a 100% success rate.
Does anyone know what I am doing wrong or how to try to troubleshoot the destination trigger for lambda?
Edit: As pointed out, the code is not correct for multiple record events. This is a function of senseless troubleshooting/changes to the code to get the Destination to trigger. Even something as simple as this does not invoke the destination.
import base64
import json


def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))
    # for record in event['Records']:
    #     payload = base64.b64decode(record['kinesis']['data']).decode('utf-8', 'ignore')
    #     print("Success")
    #     result = {
    #         "statusCode": 202,
    #         "headers": {
    #             'Content-Type': 'application/json',
    #         },
    #         "body": '{"Success":True, payload}'
    #     }
    return { "result": "OK" }
So, the question: Can someone demonstrate it is possible to have a Kinesis Stream Event Source Trigger a Lambda Function which successfully triggers a Lambda destination in AWS Govcloud?

How do I fix a program that runs properly on a local server, but does not run after deploying to AWS?

I am relatively new to programming, especially with regard to the problem I am facing: making POST requests through Amazon Web Services and working with API requests.
The program I currently have is written below.
from chalice import Chalice
import requests, json
import alpaca_trade_api as tradeapi

app = Chalice(app_name='tradingview-webhook-alerts')

API_KEY = 'API'
SECRET_KEY = 'SECRET'
BASE_URL = "https://paper-api.alpaca.markets"
ORDERS_URL = "{}/v2/orders".format(BASE_URL)
HEADERS = {'APCA-API-KEY-ID': API_KEY, 'APCA-API-SECRET-KEY': SECRET_KEY}


@app.route('/GDX', methods=['POST'])
def GDX():
    request = app.current_request
    webhook_message = request.json_body
    p = 1 - (webhook_message['close'] / webhook_message['high'])
    if p < .0175:  # if the high of the 15m candle is more than 1.75% above the close, the execution will not occur
        data = {
            "symbol": webhook_message['ticker'],  # access whatever the payload message is; the payload message, I believe, is what the alert from Trading_View will send
            "qty": 8,
            "side": "buy",
            "type": "limit",
            "limit_price": webhook_message['close'],
            "time_in_force": "gtc",
            "order_class": "bracket",
            # need to find out the average max profit per trade
            "take_profit": {
                "limit_price": webhook_message['close'] * 1.0085  # take 0.85% profit
            },
            "stop_loss": {
                "stop_price": webhook_message['close'] * 0.95,  # stop loss of 5%
                "limit_price": webhook_message['close'] * 0.93
            }
        }
        r = requests.post(ORDERS_URL, json=data, headers=HEADERS)
        response = json.loads(r.content)
        print(response)
        print(p)
        return {
            'message': 'I bought the stock!',
            'webhook_message': webhook_message
        }
    else:
        return {
            'message': 'stock not purchased',
            'webhook_message': webhook_message
        }


@app.route('/buy_SLV', methods=['POST'])
def buy_stock():
    request = app.current_request
    webhook_message = request.json_body
    data = {
        "symbol": webhook_message['ticker'],  # access whatever the payload message is; the payload message, I believe, is what the alert from Trading_View will send
        "qty": 4,
        "side": "buy",
        "type": "limit",
        "limit_price": webhook_message['close'],
        "time_in_force": "gtc",
        "order_class": "bracket",
        "take_profit": {
            "limit_price": webhook_message['close'] * 1.008  # take 0.8% profit
        },
        "stop_loss": {
            "stop_price": webhook_message['close'] * 0.95,  # stop loss of 5%
            "limit_price": webhook_message['close'] * 0.94
        }
    }
    r = requests.post(ORDERS_URL, json=data, headers=HEADERS)
    response = json.loads(r.content)
    print(response)
    print(response.keys())
    return {
        'message': 'I bought the stock!',
        'webhook_message': webhook_message
    }


@app.route('/GDX_UpperBB', methods=['POST'])
def GDX_UpperBB():
    request = app.current_request
    webhook_message = request.json_body
    api = tradeapi.REST(API_KEY, SECRET_KEY, base_url=BASE_URL)
    ids = []
    orders = api.list_orders(
        limit=100,
        nested=True  # show nested multi-leg orders
    )
    GDX_orders = [o for o in orders if o.symbol == 'GDX']
    for i in GDX_orders:
        ids.append(i.id)
    if len(orders) > 0:
        print(ids)
        for i in ids:
            api.cancel_order(i)
    else:
        print('there are no orders')
    return {
        'message': 'I bought the stock!',
        'webhook_message': webhook_message
    }
The first two methods have been running fine (both through AWS and on a local server). The third method (GDX_UpperBB) is what causes everything to stop working. When I run the program on a local server and call the GDX_UpperBB method, it executes without issue, but when I deploy the program through the AWS API via Chalice, I get a 502 Bad Gateway response with an "Internal server error" message.
When I go into AWS and test the method, this is the console response I get (I removed roughly the first half of the response because it was long and everything in it indicated success):
Mon Oct 26 00:46:51 UTC 2020 : Received response. Status: 200, Integration latency: 15 ms
Mon Oct 26 00:46:51 UTC 2020 : Endpoint response headers: {Date=Mon, 26 Oct 2020 00:46:51 GMT, Content-Type=application/json, Content-Length=127, Connection=keep-alive, x-amzn-RequestId=621b56f9-6bee-43af-8fe2-7f2cbeb7420e, X-Amz-Function-Error=Unhandled, x-amzn-Remapped-Content-Length=0, X-Amz-Executed-Version=$LATEST, X-Amzn-Trace-Id=root=1-5f961c7b-0a211c4e04be837554d0857f;sampled=0}
Mon Oct 26 00:46:51 UTC 2020 : Endpoint response body before transformations: {"errorMessage": "Unable to import module 'app': No module named 'alpaca_trade_api'", "errorType": "Runtime.ImportModuleError"}
Mon Oct 26 00:46:51 UTC 2020 : Lambda execution failed with status 200 due to customer function error: Unable to import module 'app': No module named 'alpaca_trade_api'. Lambda request id: 621b56f9-6bee-43af-8fe2-7f2cbeb7420e
Mon Oct 26 00:46:51 UTC 2020 : Method completed with status: 502
All help is appreciated.
You need to either add the dependency to your requirements.txt file or add the package code to a vendor directory off your main project directory.
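As a sketch of the two options (the project layout below is illustrative, not your exact tree), Chalice bundles whatever is listed in requirements.txt, or whatever is copied into vendor/, into the deployed Lambda package:

# Option 1: tradingview-webhook-alerts/requirements.txt
alpaca-trade-api
requests

# Option 2: copy the package source into a vendor/ directory
# tradingview-webhook-alerts/
# ├── app.py
# ├── requirements.txt
# └── vendor/
#     └── alpaca_trade_api/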

AWS Lambda function fails while query Athena

I am attempting to write a simple Lambda function to query a table in Athena, but after a few seconds I see "Status: FAILED" in the CloudWatch logs.
There is no descriptive error message about the cause of the failure.
My test code is below:
import json
import time
import boto3

# athena constants
DATABASE = 'default'
TABLE = 'test'

# S3 constant
S3_OUTPUT = 's3://test-output/'

# number of retries
RETRY_COUNT = 1000


def lambda_handler(event, context):
    # created query
    query = "SELECT * FROM default.test limit 2"
    # % (DATABASE, TABLE)

    # athena client
    client = boto3.client('athena')

    # Execution
    response = client.start_query_execution(
        QueryString=query,
        QueryExecutionContext={
            'Database': DATABASE
        },
        ResultConfiguration={
            'OutputLocation': S3_OUTPUT,
        }
    )

    # get query execution id
    query_execution_id = response['QueryExecutionId']
    print(query_execution_id)

    # get execution status
    for i in range(1, 1 + RETRY_COUNT):
        # get query execution
        query_status = client.get_query_execution(QueryExecutionId=query_execution_id)
        query_execution_status = query_status['QueryExecution']['Status']['State']
        if query_execution_status == 'SUCCEEDED':
            print("STATUS:" + query_execution_status)
            break
        if query_execution_status == 'FAILED':
            # raise Exception("STATUS:" + query_execution_status)
            print("STATUS:" + query_execution_status)
        else:
            print("STATUS:" + query_execution_status)
            time.sleep(i)
    else:
        # Did not encounter a break event. Need to kill the query.
        client.stop_query_execution(QueryExecutionId=query_execution_id)
        raise Exception('TIME OVER')

    # get query results
    result = client.get_query_results(QueryExecutionId=query_execution_id)
    print(result)
    return
The logs show the following:
2020-08-31T10:52:12.443-04:00
START RequestId: e5434651-d36e-48f0-8f27-0290 Version: $LATEST
2020-08-31T10:52:13.481-04:00
88162f38-bfcb-40ae-b4a3-0b5a21846e28
2020-08-31T10:52:13.500-04:00
STATUS:QUEUED
2020-08-31T10:52:14.519-04:00
STATUS:RUNNING
2020-08-31T10:52:16.540-04:00
STATUS:RUNNING
2020-08-31T10:52:19.556-04:00
STATUS:RUNNING
2020-08-31T10:52:23.574-04:00
STATUS:RUNNING
2020-08-31T10:52:28.594-04:00
STATUS:FAILED
2020-08-31T10:52:28.640-04:00
....more status: FAILED
....
END RequestId: e5434651-d36e-48f0-8f27-0290
REPORT RequestId: e5434651-d36e-48f0-8f27-0290 Duration: 30030.22 ms Billed Duration: 30000 ms Memory Size: 128 MB Max Memory Used: 72 MB Init Duration: 307.49 ms
2020-08-31T14:52:42.473Z e5434651-d36e-48f0-8f27-0290 Task timed out after 30.03 seconds
I think the role has the right permissions for S3 bucket access (if not, I would have seen an error message about that). No files are created in the bucket either. I am not sure what is going wrong here. What am I missing?
Thanks
The last line in your log shows
2020-08-31T14:52:42.473Z e5434651-d36e-48f0-8f27-0290 Task timed out after 30.03 seconds
To me this looks like the timeout of the Lambda Function is set to 30 seconds. Try increasing it to more than the time the Athena query needs (the maximum is 15 minutes).
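For instance, a quick sketch of raising the timeout with boto3 (the function name is just a placeholder; the same setting is available in the console under the function's general configuration):

import boto3

lambda_client = boto3.client('lambda')

# Give the function up to 5 minutes so the Athena query has time to finish.
lambda_client.update_function_configuration(
    FunctionName='my-athena-query-function',  # placeholder name
    Timeout=300,  # seconds; Lambda's maximum is 900 (15 minutes)
)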

AWS - Read SQS Message via Lambda

The code below is copied from the AWS documentation; my code is almost the same except for the queue URL definition part.
I want to print out the message body in JSON format, but the output seems to have some extra things in it. How can I get rid of them without using a substring?
# Create SQS client
# blah blah

# Receive message from SQS queue
response = sqs.receive_message(
    QueueUrl=queue_url,
    AttributeNames=[
        'SentTimestamp'
    ],
    MaxNumberOfMessages=1,
    MessageAttributeNames=[
        'All'
    ],
    VisibilityTimeout=0,
    WaitTimeSeconds=0
)

message = response['Messages'][0]
receipt_handle = message['ReceiptHandle']

print('Received and deleted message: %s' % message)
The printed message has the following format:
START RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70 Version: $LATEST
{JSON body}
END RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70
REPORT RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70 Duration: 914.38 ms Billed Duration: 1000 ms Memory Size: 128 MB Max Memory Used: 71 MB Init Duration: 247.03 ms
What I really want is just the {JSON body}. How can I get rid of the rest?
Unfortunately you can't remove
START RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70 Version: $LATEST
END RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70
REPORT RequestId: fe107bc8-3829-4600-9bfc-df89f59b0c70 Duration: 914.38 ms Billed Duration: 1000 ms Memory Size: 128 MB Max Memory Used: 71 MB Init Duration: 247.03 ms
from the CloudWatch Logs. This is standard output behavior for a Lambda function.
However, you can use log event filters in the console, which can help with locating the specific {JSON body} of interest. It is the most basic and fastest solution to use.
More complex filtering of your logs is also possible, but I think this is not what you are after.
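As a rough sketch of the same filtering idea outside the console (the log group name and filter term are placeholders), CloudWatch Logs can also be filtered programmatically:

import boto3

logs = boto3.client('logs')

# Pull only log events whose message contains a term from the JSON body;
# Lambda's START/END/REPORT lines won't match the pattern and are skipped.
events = logs.filter_log_events(
    logGroupName='/aws/lambda/my-sqs-reader',   # placeholder log group name
    filterPattern='"some-term-from-the-body"',  # placeholder filter term
)
for event in events['events']:
    print(event['message'])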