I was testing a Python app using boto3 to access DynamoDB and I got the following error message from boto3.
{'error':
{'Message': u'Signature expired: 20160915T000000Z is now earlier than 20170828T180022Z (20170828T181522Z - 15 min.)',
'Code': u'InvalidSignatureException'}}
I noticed that this is because I'm using the Python package freezegun's freeze_time to freeze the time at 20160915, since the mock data used by the tests is static.
I researched the error a bit and found this answer post. Basically, it says that AWS invalidates signatures a short time after they are created. From my understanding, in my case the signature is marked as created at 20160915 because of freeze_time, but AWS compares it against the current time (the time when the test runs). Therefore, AWS thinks the signature expired almost a year ago and sends back an error.
Is there any way to make AWS ignore that error? Or is it possible to use boto3 to manually modify the date and time the signature is created at?
Please let me know if I'm not explaining my questions clearly. Any ideas are appreciated.
AWS API calls use a timestamp to prevent replay attacks. If your computer's time/date is skewed too far from the actual time, the API calls will be denied.
Running requests from a computer with the date set to 2016 would certainly trigger this failure situation.
The checking is done on the AWS side, so there is nothing you can fix locally aside from using the real date (or somehow forcing Python to use a different date from the rest of your system).
Just came across a similar issue with immobilus. My solution was to replace datetime in botocore.auth with an unmocked version, as suggested by Antonio.
The pytest example would look like this:
import types

import pytest
from immobilus import logic


@pytest.fixture(scope='session', autouse=True)
def _boto_real_time():
    # Swap the (mocked) datetime module used by botocore.auth for a real one.
    from botocore import auth
    auth.datetime = get_original_datetime()


def get_original_datetime():
    # Build a stand-in module exposing the original, unmocked time functions.
    original_datetime = types.ModuleType('datetime')
    original_datetime.mktime = logic.original_mktime
    original_datetime.time = logic.original_time
    original_datetime.gmtime = logic.original_gmtime
    original_datetime.localtime = logic.original_localtime
    original_datetime.strftime = logic.original_strftime
    original_datetime.date = logic.original_date
    original_datetime.datetime = logic.original_datetime
    return original_datetime
Is there any way to make AWS ignore that error?
No
Or is it possible to use boto3 to manually modify the date and time the signature is created at?
You should patch any datetime / time call that is in the auth.py file of the botocore library (source: https://github.com/boto/botocore/blob/develop/botocore/auth.py).
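Since the question uses freezegun rather than immobilus, another option may be to keep botocore out of the freeze entirely. A minimal sketch, assuming your freezegun version supports the ignore argument (which excludes the named modules from patching):
import boto3
from freezegun import freeze_time


# Freeze time for the application code, but leave botocore on the real clock
# so the SigV4 signing timestamp stays current and AWS accepts the request.
@freeze_time("2016-09-15", ignore=['botocore'])
def test_dynamodb_query():
    client = boto3.client('dynamodb', region_name='us-east-1')
    # ... call the code under test, which can now talk to DynamoDB ...
    print(client.list_tables())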
Related
I am using the search_index function from the boto3 library (AWS IoT service) and passing a query to it to get the desired filtered output.
The problem is that it does not immediately reflect changes made in IoT.
If I update a few things in IoT and call search_index right afterwards, I don't see the updated values. I need to call the function again after a few seconds to see them.
for thing in things:
    iot_client.update_thing(thingName=thing, attributePayload=attribute_payload)

result = iot_client.search_index(queryString=query_string)
# result = iot_client.list_things()
If I call list_things or describe_thing here, I can see the updated values for the thing, but I want to use search_index because I'm running a complicated query over different attributes in IoT.
I also checked the index by calling describe_index after update_thing to see if it was in the Building or Rebuilding state, but it always shows Active.
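For reference, a retry loop of the kind described above (call search_index again after a few seconds until the index catches up) could look like this rough sketch; the query string, expected count and timeouts are placeholders:
import time
import boto3

iot_client = boto3.client('iot')


def wait_for_index(query_string, expected_count, timeout=30, interval=2):
    """Poll search_index until it returns at least expected_count matching
    things, or give up after timeout seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = iot_client.search_index(queryString=query_string)
        if len(result.get('things', [])) >= expected_count:
            return result
        time.sleep(interval)  # index not caught up yet; wait and retry
    raise TimeoutError('Fleet index did not reflect the update in time')


# usage after the update_thing calls (placeholders):
# result = wait_for_index('attributes.status:active', expected_count=len(things))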
I am a newcomer to AWS with very little cloud experience. My project is to call an API from NOAA and then parse and save the returned XML document to a database. I have an ASP.NET console app that does this easily and successfully. However, I need to do the same thing in the cloud on a serverless architecture. Here are the steps I want it to take:
Lambda calls the API at NOAA every day at midnight
The API returns an XML doc with results
Parse the data and save the data to a cloud PostgreSQL database
It sounds simple, but I am having one heck of a time figuring out how to do it. I have a database provisioned on AWS, as that is where the data currently goes from my console app. Does anyone have any advice or a resource I could look at? Also, I would prefer to keep this in .NET, but realize I may need to move it to Python.
Thanks in advance everyone!
It's pretty simple, and you can test your code with the simple Python Lambda code below.
Create a new Lambda function with admin access (temporarily set an admin role; you can set the required role later).
Add the following code:
https://github.com/mmakadiya/public_files/blob/main/lambda_call_get_rest_api.py
import json
import urllib3

def lambda_handler(event, context):
    # TODO implement
    http = urllib3.PoolManager()
    r = http.request('GET', 'http://api.open-notify.org/astros.json')
    print(r.data)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
The above code calls a REST API to fetch data. It is just a sample program to help you go further.
MAKE SURE you account for the Lambda maximum run time of 15 minutes; a function cannot run longer than that, so plan accordingly.
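For the full flow (fetch the NOAA XML, parse it, write to PostgreSQL) a rough Python sketch could look like the following. The URL, XML element names, table layout and connection settings are placeholders, psycopg2 would have to be bundled with the deployment package or a Lambda layer, and the daily midnight trigger is configured with an EventBridge/CloudWatch schedule rule rather than in code:
import os
import urllib3
import xml.etree.ElementTree as ET

import psycopg2  # must be bundled with the deployment package or a layer


def lambda_handler(event, context):
    # 1. Call the API (placeholder URL).
    http = urllib3.PoolManager()
    resp = http.request('GET', 'https://www.example.com/noaa-endpoint')

    # 2. Parse the XML response (element names are hypothetical).
    root = ET.fromstring(resp.data)
    rows = [(obs.findtext('stationId'), obs.findtext('value'))
            for obs in root.iter('observation')]

    # 3. Save to PostgreSQL (connection settings from environment variables).
    conn = psycopg2.connect(
        host=os.environ['DB_HOST'],
        dbname=os.environ['DB_NAME'],
        user=os.environ['DB_USER'],
        password=os.environ['DB_PASSWORD'],
    )
    with conn, conn.cursor() as cur:
        cur.executemany(
            'INSERT INTO observations (station_id, value) VALUES (%s, %s)',
            rows,
        )
    conn.close()

    return {'statusCode': 200, 'body': f'inserted {len(rows)} rows'}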
I'm trying to set up a very basic AWS Lambda script, but I'm struggling to get the AWS Lambda Test functionality to recognize the changes I make.
To set up the simplest possible test, I created a new AWS Lambda function for Python 3.7. I then make a simple change in the code as below, add a test output and run Test:
import json

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('I changed this text')
    }
I've verified that Version: is set to $LATEST - yet when I run the test, there is no change in my output - it keeps returning the original code output. Further, if I try to export the function, I get the original code - not my updated code above (despite having saved it).
I realize this seems very basic, but I wanted to check if others have experienced this as well.
Based on feedback, it seems hitting Deploy is required in order to test the updated function.
Even after hitting Deploy, there's a definite delay. Try adding a print statement, hitting Deploy and then testing: it often doesn't pick up the new code right away. EXTREMELY frustrating for debugging. I literally have to refresh my Lambda console page to get the changes to take effect.
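If the console delay keeps biting you, one workaround is to skip the console editor and push and test the code from your own machine, where there is no separate Deploy step to forget. A minimal sketch using boto3 (the function name and zip path are placeholders):
import json
import boto3

lambda_client = boto3.client('lambda')

# Upload the new code: the zip must contain lambda_function.py at its root.
with open('function.zip', 'rb') as f:
    lambda_client.update_function_code(
        FunctionName='my-test-function',
        ZipFile=f.read(),
    )

# Wait until the update has actually been applied before invoking.
waiter = lambda_client.get_waiter('function_updated')
waiter.wait(FunctionName='my-test-function')

# Invoke $LATEST and print the result.
response = lambda_client.invoke(
    FunctionName='my-test-function',
    Payload=json.dumps({}),
)
print(response['Payload'].read().decode())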
When I try to download all log files from an RDS instance, in some cases I get this error in my Python output:
An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed.
I handle the pagination and the throttling correctly (using the Marker parameter and the sleep function).
This is my call:
log_page = request_paginated(rds, DBInstanceIdentifier=id_rds, LogFileName=log, NumberOfLines=1000)
where rds is my boto3 client.
And this is the definition of my function:
def request_paginated(rds, **kwargs):
    return rds.download_db_log_file_portion(**kwargs)
As I said, most of the time this function works, but sometimes it returns:
"An error occurred (InvalidParameterValue) when calling the
DownloadDBLogFilePortion operation: This file contains binary data and
should be downloaded instead of viewed"
Can you help me please? :)
UPDATE: the problem is a known issue with downloading log files that contain non-printable characters. As soon as possible I will try the proposed solution provided by AWS support.
LATEST UPDATE: This is an extract of my discussion with the AWS support team:
There is a known issue with non-binary characters when using the boto-based AWS CLI; however, this issue is not present when using the older Java-based CLI.
There is currently no way to fix the issue you are experiencing while using the boto-based AWS CLI; the workaround is to make the API call from the Java-based CLI.
The AWS team is aware of this issue and working on a way to resolve it, but they do not have an ETA for when a fix will be released.
So the solution is: use the Java-based CLI.
Giuseppe
http://docs.aws.amazon.com/AmazonRDS/latest/APIReference/CommonErrors.html
InvalidParameterValue : An invalid or out-of-range value was supplied
for the input parameter.
An invalid parameter in boto means the data you passed does not comply with what the API expects. It is probably an invalid name you specified: possibly something wrong with your variable id_rds, or maybe your LogFileName, etc. You must comply with the function's argument requirements.
response = client.download_db_log_file_portion(
    DBInstanceIdentifier='string',
    LogFileName='string',
    Marker='string',
    NumberOfLines=123
)
(UPDATE)
For example, LogFileName must be the exact name of a file that exists inside the RDS instance.
For the log file, please make sure it EXISTS inside the instance. Use this AWS CLI command for a quick check:
aws rds describe-db-log-files --db-instance-identifier <my-rds-name>
Check Marker (string) and NumberOfLines (integer) as well for mismatched types or out-of-range values. You can skip them since they are not required, and test with them later.
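For reference, a rough pagination loop over download_db_log_file_portion looks like this: it keeps requesting with the returned Marker while AdditionalDataPending is true (the instance identifier and log file name below are placeholders):
import boto3

rds = boto3.client('rds')


def download_log_file(instance_id, log_file_name):
    """Download a whole RDS log file by following the Marker pagination."""
    chunks = []
    marker = '0'  # '0' starts at the beginning of the file
    while True:
        portion = rds.download_db_log_file_portion(
            DBInstanceIdentifier=instance_id,
            LogFileName=log_file_name,
            Marker=marker,
            NumberOfLines=1000,
        )
        chunks.append(portion.get('LogFileData', ''))
        if not portion.get('AdditionalDataPending'):
            break
        marker = portion['Marker']
    return ''.join(chunks)


# usage (placeholders):
# text = download_log_file('my-rds-instance', 'error/postgresql.log.2017-08-28-18')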
I'm looking for a very simple and free cloud store for small packets of data.
Basically, I want to write a Greasemonkey script that a user can run on multiple machines with a shared data set. The data is primarily just a single number, eight byte per user should be enough.
It all boils down to the following requirements:
simple to develop for (it's a fun project for a few hours, I don't want to invest twice as much in the sync)
store eight bytes per user (or maybe a bit more, but it's really tiny)
ideally, users don't have to sign up (they just get a random key they can enter on all their machines)
I don't need to sign up (it's all Greasemonkey, so there's no way to hide a secret, like a developer key)
there is no private data in the values, so another user getting access to that information by guessing the random key is no big deal
the information is easily recreated (sharing it in the cloud is just for convenience), so another user taking over the 'key' is easily fixed as well
First ideas:
Store on Google Docs with a form as the frontend. Of course, that's kinda ugly and every user needs to set it up again.
I could set up a Google App Engine instance that allows storing a number to a key and retrieving the number by key. It wouldn't be hard, but it still sounds overkill for what I need.
I could create a Firefox add-on instead of a Greasemonkey script and use Mozilla Weave/Sync—which unfortunately doesn't support storing HTML5 local storage yet, so GM isn't enough. Of course I'd have to implement the same for Opera and Chrome then (assuming there are similar services for them), instead of just reusing the user script.
Anybody got a clever idea or a service I'm not aware of?
Update for those who are curious: I ended up going the GAE route (about half a page of Python code). I only discovered OpenKeyval afterwards (see below). The advantage is that it's pretty easy for users to connect on all their machines (just a Google account login, no other key to transfer from machine A to machine B), the disadvantage is that everybody needs a Google account.
OpenKeyval is pretty much what I was looking for.
OpenKeyval was what I was looking for but has apparently been shut down.
I think GAE would be a nice choice. With your storage-size requirements you will never exceed the free 500 MB of GAE's datastore. And it will be easy to port your script across browsers because of the REST nature of your service ;)
I was asked to share my GAE key/value store solution, so here it comes. Note that this code hasn't run for years, so it might be wrong and/or very outdated GAE code:
app.yaml
application: myapp
version: 1
runtime: python
api_version: 1
handlers:
- url: /
  script: keyvaluestore.py
keyvaluestore.py
from google.appengine.api import users
from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app
class KeyValue(db.Model):
    v = db.StringProperty(required=True)


class KeyValueStore(webapp.RequestHandler):
    def _do_auth(self):
        user = users.get_current_user()
        if user:
            return user
        else:
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.out.write('login_needed|' + users.create_login_url(self.request.get('uri')))

    def get(self):
        user = self._do_auth()
        callback = self.request.get('jsonp_callback')
        if user:
            self.response.headers['Content-Type'] = 'text/plain'
            self.response.out.write(self._read_value(user.user_id()))

    def post(self):
        user = self._do_auth()
        if user:
            self._store_value(user.user_id(), self.request.body)

    def _read_value(self, key):
        result = db.get(db.Key.from_path("KeyValue", key))
        return result.v if result else 'none'

    def _store_value(self, k, v):
        kv = KeyValue(key_name=k, v=v)
        kv.put()


application = webapp.WSGIApplication([('/', KeyValueStore)],
                                     debug=True)


def main():
    run_wsgi_app(application)


if __name__ == "__main__":
    main()
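For completeness, reading and writing a value from a client is just a GET and a POST against the root URL once the user is logged in. A rough example with Python's requests, assuming the app is deployed at https://myapp.appspot.com and you already have a valid session cookie (both are placeholders):
import requests

BASE_URL = 'https://myapp.appspot.com/'   # placeholder deployment URL
cookies = {'SACSID': '<session cookie>'}  # placeholder auth cookie

# Store a value: the handler writes the raw request body for the current user.
requests.post(BASE_URL, data='42', cookies=cookies)

# Read it back: the handler returns the stored value, or 'none'.
print(requests.get(BASE_URL, cookies=cookies).text)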
The closest thing I've seen is Amazon's Simple Queue Service.
http://aws.amazon.com/sqs/
I've not used it myself so I'm not sure how the developer key aspect works, but they give you 100,000 free queries a month.