Empty datapoints received while retrieving AWS S3 Request metrics

The following is my request:
import boto3
from datetime import datetime

cloudwatch = boto3.client('cloudwatch')

response = cloudwatch.get_metric_statistics(
    Namespace='AWS/S3',
    Dimensions=[
        {
            'Name': 'BucketName',
            'Value': 'foo-bar'
        },
        {
            'Name': 'StorageType',
            'Value': 'AllStorageTypes'
        }
    ],
    MetricName='BytesUploaded',
    StartTime=datetime(2021, 3, 11),
    EndTime=datetime(2021, 3, 14),
    Period=86400,
    Statistics=[
        'Maximum', 'Average'
    ]
)
and this is the response
{'Label': 'BytesUploaded', 'Datapoints': [], 'ResponseMetadata': {'RequestId': '1c6b02e9-9a8f-48e9-a2fd-1e21fd31a096', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '1c6b02e9-9a8f-48e9-a2fd-1e21fd31a096', 'content-type': 'text/xml', 'content-length': '336', 'date': 'Tue, 16 Mar 2021 05:51:05 GMT'}, 'RetryAttempts': 0}}
From the AWS Console, I'm able to see datapoints for the same timestamps. I tried increasing the timeframe, but it still gives the same result.
Can someone help me, please? Thanks.

First, the Period value may be too coarse and you may need to reduce it; in the example I checked, the period was 300 seconds.
Second, try changing EndTime, for example:
from datetime import datetime
from datetime import timedelta

EndTime=datetime.utcnow(),
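Putting both suggestions together, the adjusted call might look like this (only Period and the time window change; everything else is copied from the question, and the 3-day lookback is just an illustration):

from datetime import datetime, timedelta

response = cloudwatch.get_metric_statistics(
    Namespace='AWS/S3',
    Dimensions=[
        {'Name': 'BucketName', 'Value': 'foo-bar'},
        {'Name': 'StorageType', 'Value': 'AllStorageTypes'}
    ],
    MetricName='BytesUploaded',
    StartTime=datetime.utcnow() - timedelta(days=3),
    EndTime=datetime.utcnow(),
    Period=300,  # finer granularity, as suggested above
    Statistics=['Maximum', 'Average']
)
print(response['Datapoints'])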

Nothing being written into the Redshift table [closed]

I have this AWS Lambda function for writing to Redshift. It executes without error but doesn't actually create the table. Does anyone have any thoughts on what might be wrong or what checks I could perform?
import json
import boto3
import botocore.session as bc
from botocore.client import Config

print('Loading function')

secret_arn = 'arn:aws:secretsmanager:<some secret stuff here>'
cluster_id = 'cluster_id'
bc_session = bc.get_session()
region = boto3.session.Session().region_name

session = boto3.Session(
    botocore_session=bc_session,
    region_name=region
)
config = Config(connect_timeout=180, read_timeout=180)
client_redshift = session.client("redshift-data", config=config)

def lambda_handler(event, context):
    query_str = "create table db.lambda_func (id int);"
    try:
        result = client_redshift.execute_statement(Database='db',
                                                    SecretArn=secret_arn,
                                                    Sql=query_str,
                                                    ClusterIdentifier=cluster_id)
        print("API successfully executed")
        print('RESULT: ', result)
        stmtid = result['Id']
        response = client_redshift.describe_statement(Id=stmtid)
        print('RESPONSE: ', response)
    except Exception as e:
        raise Exception(e)
    return str(result)
RESULT: {'ClusterIdentifier': 'redshift-datalake', 'CreatedAt': datetime.datetime(2023, 2, 16, 16, 56, 9, 722000, tzinfo=tzlocal()), 'Database': 'db', 'Id': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa', 'SecretArn': 'arn:aws:secretsmanager:<secret_here>', 'ResponseMetadata': {'RequestId': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa', 'content-type': 'application/x-amz-json-1.1', 'content-length': '249', 'date': 'Thu, 16 Feb 2023 16:56:09 GMT'}, 'RetryAttempts': 0}}

RESPONSE: {'ClusterIdentifier': 'redshift-datalake', 'CreatedAt': datetime.datetime(2023, 2, 16, 16, 56, 9, 722000, tzinfo=tzlocal()), 'Duration': -1, 'HasResultSet': False, 'Id': '648bd5b6-6d3f-4d12-9435-94e316e8dbaa', 'QueryString': 'create table db.lambda_func (id int);', 'RedshiftPid': 0, 'RedshiftQueryId': 0, 'ResultRows': -1, 'ResultSize': -1, 'SecretArn': 'arn:aws:secretsmanager:<secret_here>', 'Status': 'PICKED', 'UpdatedAt': datetime.datetime(2023, 2, 16, 16, 56, 9, 904000, tzinfo=tzlocal()), 'ResponseMetadata': {'RequestId': '15e99ba3-8b63-4775-bd4e-c8d4f2aa44b4', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '15e99ba3-8b63-4775-bd4e-c8d4f2aa44b4', 'content-type': 'application/x-amz-json-1.1', 'content-length': '437', 'date': 'Thu, 16 Feb 2023 16:56:09 GMT'}, 'RetryAttempts': 0}}
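Since the describe_statement output above still shows the statement in 'PICKED' when the handler returns, one check worth performing is to poll the statement until it reaches a terminal status; if it ends in FAILED, the Error field explains why the table never appeared. This is only a sketch of such a check (reusing client_redshift and stmtid from the question), not a confirmed fix:

import time

# Poll the Redshift Data API until the statement reaches a terminal state
# (bounded so the Lambda doesn't hang past its timeout).
desc = client_redshift.describe_statement(Id=stmtid)
for _ in range(60):
    if desc['Status'] in ('FINISHED', 'FAILED', 'ABORTED'):
        break
    time.sleep(1)
    desc = client_redshift.describe_statement(Id=stmtid)

print('Final status:', desc['Status'])
if desc['Status'] == 'FAILED':
    print('Error:', desc.get('Error'))  # explains why the table was not created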

KeyError: 'GroupName'

I'm creating an IAM group and trying to print the group name that gets created. When I try that, it's giving me this error
KeyError: 'GroupName'
Here's my function
import boto3
import botocore.exceptions

def cf_admin_iam_group():
    iam = boto3.client('iam')
    try:
        response = iam.create_group(GroupName='Test')
        print(response['GroupName'])
    except botocore.exceptions.ClientError as error:
        print(error)
When I try to just print(response)
I get the expected output
{'Group': {'Path': '/', 'GroupName': 'Test', 'GroupId': 'AGPAXVCO7KXYHZP24FQFZ', 'Arn':
'arn:aws:iam::526299125232:group/Test', 'CreateDate': datetime.datetime(2022, 9, 13, 19, 17, 51, tzinfo=tzutc())},
'ResponseMetadata': {'RequestId': '7b2e7c6c-a811-497a-b2c5-177c70a0464c',
'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': '7b2e7c6c-a811-497a-b2c5-177c70a0464c',
'content-type': 'text/xml', 'content-length': '490', 'date': 'Tue, 13 Sep 2022 19:17:51 GMT'}, 'RetryAttempts': 0}}
I'm not sure why running print(response['GroupName']) is giving me an error instead of printing the group name.
response is a dictionary, and GroupName is a key nested inside the Group value, so you need to use:
print(response['Group']['GroupName'])
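Applied to the function from the question, the fix would look like this (a sketch reusing the question's names):

import boto3
import botocore.exceptions

def cf_admin_iam_group():
    iam = boto3.client('iam')
    try:
        response = iam.create_group(GroupName='Test')
        # GroupName lives inside the nested 'Group' dictionary.
        print(response['Group']['GroupName'])
    except botocore.exceptions.ClientError as error:
        print(error)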

How do I fix a program that runs properly on a local server, but does not run after deploying to AWS?

I am relatively new to programming, especially when it comes to the problem I am facing: running POST requests through Amazon Web Services and making API requests.
I currently have the program written below.
from chalice import Chalice
import requests, json
import alpaca_trade_api as tradeapi

app = Chalice(app_name='tradingview-webhook-alerts')

API_KEY = 'API'
SECRET_KEY = 'SECRET'
BASE_URL = "https://paper-api.alpaca.markets"
ORDERS_URL = "{}/v2/orders".format(BASE_URL)
HEADERS = {'APCA-API-KEY-ID': API_KEY, 'APCA-API-SECRET-KEY': SECRET_KEY}

@app.route('/GDX', methods=['POST'])
def GDX():
    request = app.current_request
    webhook_message = request.json_body
    p = 1 - (webhook_message['close'] / webhook_message['high'])
    if p < .0175:  # skip the order if the 15m candle's high is more than 1.75% above the close
        data = {
            "symbol": webhook_message['ticker'],  # use whatever ticker the TradingView alert payload sends
            "qty": 8,
            "side": "buy",
            "type": "limit",
            "limit_price": webhook_message['close'],
            "time_in_force": "gtc",
            "order_class": "bracket",
            # need to find out the average max profit per trade
            "take_profit": {
                "limit_price": webhook_message['close'] * 1.0085  # take 0.85% profit
            },
            "stop_loss": {
                "stop_price": webhook_message['close'] * 0.95,  # stop loss of 5%
                "limit_price": webhook_message['close'] * 0.93
            }
        }
        r = requests.post(ORDERS_URL, json=data, headers=HEADERS)
        response = json.loads(r.content)
        print(response)
        print(p)
        return {
            'message': 'I bought the stock!',
            'webhook_message': webhook_message
        }
    else:
        return {
            'message': 'stock not purchased',
            'webhook_message': webhook_message
        }

@app.route('/buy_SLV', methods=['POST'])
def buy_stock():
    request = app.current_request
    webhook_message = request.json_body
    data = {
        "symbol": webhook_message['ticker'],  # use whatever ticker the TradingView alert payload sends
        "qty": 4,
        "side": "buy",
        "type": "limit",
        "limit_price": webhook_message['close'],
        "time_in_force": "gtc",
        "order_class": "bracket",
        "take_profit": {
            "limit_price": webhook_message['close'] * 1.008  # take 0.8% profit
        },
        "stop_loss": {
            "stop_price": webhook_message['close'] * 0.95,  # stop loss of 5%
            "limit_price": webhook_message['close'] * 0.94
        }
    }
    r = requests.post(ORDERS_URL, json=data, headers=HEADERS)
    response = json.loads(r.content)
    print(response)
    print(response.keys())
    return {
        'message': 'I bought the stock!',
        'webhook_message': webhook_message
    }

@app.route('/GDX_UpperBB', methods=['POST'])
def GDX_UpperBB():
    request = app.current_request
    webhook_message = request.json_body
    api = tradeapi.REST(API_KEY, SECRET_KEY, base_url=BASE_URL)
    ids = []
    orders = api.list_orders(
        limit=100,
        nested=True  # show nested multi-leg orders
    )
    GDX_orders = [o for o in orders if o.symbol == 'GDX']
    for i in GDX_orders:
        ids.append(i.id)
    if len(orders) > 0:
        print(ids)
        for i in ids:
            api.cancel_order(i)
    else:
        print('there are no orders')
    return {
        'message': 'I bought the stock!',
        'webhook_message': webhook_message
    }
The first two routes have been running fine (both through AWS and on a local server). The third one (GDX_UpperBB) is what causes everything to stop working. When I run the program on a local server and call the GDX_UpperBB route, it executes without issue, but when I deploy the program to AWS with Chalice, I get a 502 Bad Gateway response with an "Internal server error" message.
When I go into AWS and test the method, this is the console response that I get (I removed roughly the first half of the response because it was long and everything in it reported success):
Mon Oct 26 00:46:51 UTC 2020 : Received response. Status: 200, Integration latency: 15 ms
Mon Oct 26 00:46:51 UTC 2020 : Endpoint response headers: {Date=Mon, 26 Oct 2020 00:46:51 GMT, Content-Type=application/json, Content-Length=127, Connection=keep-alive, x-amzn-RequestId=621b56f9-6bee-43af-8fe2-7f2cbeb7420e, X-Amz-Function-Error=Unhandled, x-amzn-Remapped-Content-Length=0, X-Amz-Executed-Version=$LATEST, X-Amzn-Trace-Id=root=1-5f961c7b-0a211c4e04be837554d0857f;sampled=0}
Mon Oct 26 00:46:51 UTC 2020 : Endpoint response body before transformations: {"errorMessage": "Unable to import module 'app': No module named 'alpaca_trade_api'", "errorType": "Runtime.ImportModuleError"}
Mon Oct 26 00:46:51 UTC 2020 : Lambda execution failed with status 200 due to customer function error: Unable to import module 'app': No module named 'alpaca_trade_api'. Lambda request id: 621b56f9-6bee-43af-8fe2-7f2cbeb7420e
Mon Oct 26 00:46:51 UTC 2020 : Method completed with status: 502
All help is appreciated.
You need to either add the dependency to your requirements.txt file or add the package code to a vendor directory off your main project directory.
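For the requirements.txt route, a minimal sketch of the file would just list the package named in the import error (requests may need to be listed too if it is not there already), followed by a redeploy with chalice deploy:

# requirements.txt at the project root, packaged by `chalice deploy`
alpaca-trade-api
requests

Alternatively, Chalice also bundles anything placed in a vendor/ directory at the top level of the project, which covers dependencies that cannot be installed straight from PyPI.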

How to stream word document in bytes stored in AWS S3 from boto3

Using boto3, I am trying to retrieve a Microsoft Word document stored in S3. However, when I access the object with client.get_object(), the content-length of the Word document is 0, while files with a .txt extension return the correct content-length. Is there a way to decode the Word document in order to write its output to a stream?
I have tested this with .txt and .docx files, and I have also tried using the .decode() method after reading the file, but based on the content being returned, there doesn't seem to be anything to decode.
Accessing a .txt document, I notice that the content-length is 17 (the number of characters in the file) and the contents can be read by calling txt_file.read():
s3 = boto3.client('s3')
txt_file = s3.get_object(Bucket="test_bucket", Key="test.txt")
>>> txt_file
{
    u'Body': <botocore.response.StreamingBody object at 0x7fc5f0074f10>,
    u'AcceptRanges': 'bytes',
    u'ContentType': 'text/plain',
    'ResponseMetadata': {
        'HTTPStatusCode': 200,
        'RetryAttempts': 0,
        'HTTPHeaders': {
            'content-length': '17',
            'accept-ranges': 'bytes',
            'server': 'AmazonS3',
            'last-modified': 'Sat, 06 Jul 2019 02:13:45 GMT',
            'date': 'Sat, 06 Jul 2019 15:58:21 GMT',
            'x-amz-server-side-encryption': 'AES256',
            'content-type': 'text/plain'
        }
    }
}
Accessing a .docx document, I notice that the content-length is 0 (even though the document has the same string written to it as the .txt file), and calling word_file.read() outputs the empty string u'':
s3 = boto3.client('s3')
word_file = s3.get_object(Bucket="test_bucket", Key="test.docx")
>>> word_file
{
    u'Body': <botocore.response.StreamingBody object at 0x7fc5f0074f10>,
    u'AcceptRanges': 'bytes',
    u'ContentType': 'binary/octet-stream',
    'ResponseMetadata': {
        'HTTPStatusCode': 200,
        'RetryAttempts': 0,
        'HTTPHeaders': {
            'content-length': '0',
            'accept-ranges': 'bytes',
            'server': 'AmazonS3',
            'last-modified': 'Thu, 04 Jul 2019 21:51:53 GMT',
            'date': 'Sat, 06 Jul 2019 15:58:30 GMT',
            'x-amz-server-side-encryption': 'AES256',
            'content-type': 'binary/octet-stream'
        }
    }
}
I expect the content-length of both files to reflect the number of bytes in each file; however, only the .txt file is returning data.
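For reference, once the object actually contains data, its bytes can be written to an in-memory stream without any decoding, since a .docx is binary. This is a generic sketch using the bucket and key from the question, not a fix for the zero-length object:

import io
import boto3

s3 = boto3.client('s3')
obj = s3.get_object(Bucket="test_bucket", Key="test.docx")

# Read the raw bytes and keep them in a binary stream; do not .decode() them.
buffer = io.BytesIO(obj['Body'].read())

print(obj['ContentLength'])    # number of bytes S3 reports for the object
print(len(buffer.getvalue()))  # number of bytes actually read from the body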

Facebook graph API : can post on "me/feed" but not on "page_id/feed" (error : 1455002)

I guess the answer to this one is straightforward but I cannot find it. Any help would be very much appreciated.
I. Use case
The application (back-end in python / django) should write on a facebook page.
II. Symptoms
When running the code below against "me/feed", the post is correctly inserted.
When running the code below against "PAGE_ID/feed", an exception is raised (see section IV below).
The scope of the authorisation is publish_stream, manage_pages.
Also, the user_token is from a user in the test domain.
III. Code
## Getting the user_access_token is dealt with before
h = Http()
data = dict(message="Hello", access_token=user_access_token['access_token'])
resp, content = h.request("https://graph.facebook.com/PAGE_ID/feed", "POST", urlencode(data))
IV. Exception generated (using /PAGE_ID/feed)
resp : Response: {'status': '400', 'content-length': '119', 'expires': 'Sat, 01 Jan 2000 00:00:00 GMT', 'www-authenticate': 'OAuth "Facebook Platform" "invalid_request" "(#1) An unknown error occurred"', 'x-fb-rev': '976458', 'connection': 'keep-alive', 'pragma': 'no-cache', 'cache-control': 'no-store', 'date': 'Tue, 22 Oct 2013 21:45:20 GMT', 'access-control-allow-origin': '*', 'content-type': 'text/javascript; charset=UTF-8', 'x-fb-debug': 'HFItWh64ob+3hErv+rgYdFzHlRBVHP7Pg0Eg4hvqYlY='}
content str: {"error":{"message":"(#1) An unknown error occurred","type":"OAuthException","code":1,"error_data":{"kError":1455002}}}