AWS Lambda function is not updating in basic test environment

I'm trying to set up a very basic AWS Lambda script, but I'm struggling to get the AWS Lambda Test functionality to recognize the changes I make.
To set up the simplest possible test, I created a new AWS Lambda function for Python 3.7. I then made a simple change in the code as below, added a test output, and ran Test:
import json

def lambda_handler(event, context):
    # TODO implement
    return {
        'statusCode': 200,
        'body': json.dumps('I changed this text')
    }
I've verified that Version: is set to $LATEST - yet when I run the test, there is no change in my output - it keeps returning the original code output. Further, if I try to export the function, I get the original code - not my updated code above (despite having saved it).
I realize this seems very basic, but I wanted to check if others have experienced this as well.

Based on feedback, it seems that hitting Deploy is required in order to test the updated function.

Even after hitting Deploy, there's a definite delay. Try adding a print statement, hitting Deploy, and then testing: the console often doesn't pick up the new code right away. EXTREMELY frustrating for debugging. I literally have to refresh my Lambda console page to get the changes to take.
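If the console lag becomes a blocker, one workaround is to bypass the console editor and push code programmatically, then wait for the update to finish before invoking. A minimal boto3 sketch, assuming the handler lives in a local lambda_function.py and the function is named my-test-function (both placeholders):

import io
import json
import zipfile

import boto3

lambda_client = boto3.client('lambda')

# Zip the handler file in memory.
buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as zf:
    zf.write('lambda_function.py')

# Push the new code and block until Lambda reports the update as finished.
lambda_client.update_function_code(FunctionName='my-test-function',
                                   ZipFile=buf.getvalue())
lambda_client.get_waiter('function_updated').wait(FunctionName='my-test-function')

# Invoke; this runs the freshly deployed code, no console refresh needed.
response = lambda_client.invoke(FunctionName='my-test-function')
print(json.loads(response['Payload'].read()))

Because the waiter polls until the update has propagated, this avoids the stale-code window that the console Test button can hit.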

Related

search_index function of boto3 does not immediately reflect the changes made in AWS IoT

I am using the search_index function of the boto3 library (AWS IoT service) and passing a query to the function to get the desired filtered output.
The problem with this function is that it does not immediately reflect changes made in IoT.
If I update a few things in IoT and call search_index right afterwards, I am not able to see the updated values. I need to call the function again after a few seconds to see them.
for thing in things:
    iot_client.update_thing(thingName=thing, attributePayload=attribute_payload)

result = iot_client.search_index(queryString=query_string)
# result = iot_client.list_things()
Here, if I call list_things or describe_thing, I can see the updated values for the thing, but I want to use search_index because I am running a complicated query over different attributes in IoT.
I also checked the index by calling describe_index after update_thing to see if it was in the Building or Rebuilding state, but it always shows as Active.
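Since the fleet index appears to update asynchronously, a common workaround is to poll search_index until the expected values show up or a timeout expires. A rough sketch, assuming the caller knows what the updated result should look like (the query string and predicate below are illustrative):

import time

import boto3

iot_client = boto3.client('iot')

def search_index_until(query_string, predicate, timeout=30, interval=2):
    """Poll search_index until predicate(things) is true or timeout expires."""
    deadline = time.time() + timeout
    while True:
        things = iot_client.search_index(queryString=query_string).get('things', [])
        if predicate(things) or time.time() >= deadline:
            return things
        time.sleep(interval)

# Example: wait until every matched thing reports the new attribute value.
things = search_index_until(
    'attributes.stage:production',
    lambda ts: bool(ts) and all(t.get('attributes', {}).get('stage') == 'production'
                                for t in ts))

This doesn't make the index consistent any faster, but it keeps the wait explicit instead of sprinkling sleeps through the code.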

How to edit an already deployed pipeline in Data Fusion?

I am trying to edit a pipeline that is already deployed. I understand that we can duplicate the pipeline and rename it, but how can I make a change in the existing pipeline? Renaming would require a change in the production scheduling jobs as well.
There is one way, through the HTTP calls executor:
Open https://<cdf instance url ..datafusion.googleusercontent.com>/cdap/httpexecutor
Select PUT (to change the pipeline code) from the drop-down and give:
namespaces/<namespace_name>/apps/<pipeline_name>
Go to the body part and paste the new pipeline code (i.e., the exported JSON of the updated pipeline).
Click SEND and the response should come back as "Deploy Complete" with status code 200.
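The same update can also be scripted against the instance's CDAP REST API instead of the in-browser executor. A hedged sketch using Python's requests library (the instance URL, access token, and exported pipeline file are placeholders; the PUT path mirrors the one given above):

import json

import requests

CDF_API = 'https://<cdf instance url>.datafusion.googleusercontent.com/api'
TOKEN = '<access token>'  # e.g. from `gcloud auth print-access-token`

# Load the exported JSON of the updated pipeline.
with open('updated_pipeline.json') as f:
    pipeline_config = json.load(f)

# PUT the new pipeline code over the existing deployed app.
resp = requests.put(
    CDF_API + '/v3/namespaces/<namespace_name>/apps/<pipeline_name>',
    headers={'Authorization': 'Bearer ' + TOKEN},
    json=pipeline_config)
print(resp.status_code, resp.text)  # expect 200 and "Deploy Complete"

Since the pipeline name is unchanged, downstream production schedules keep pointing at the same app.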

Consuming APIs using AWS Lambda

I am a newcomer to AWS with very little cloud experience. The project I have is to call and consume an API from NOAA, then parse the returned XML document and save it to a database. I have an ASP.NET console app that does this pretty easily and successfully. However, I need to do the same thing in the cloud on a serverless architecture. Here are the steps I want it to take:
Lambda calls the NOAA API every day at midnight
the API returns an XML doc with results
parse the data and save it to a cloud PostgreSQL database
It sounds simple, but I am having one heck of a time figuring out how to do this. I have a DB provisioned from AWS, as that is where data currently goes through my console app. Does anyone have any advice or a resource I could look at? Also, I would prefer to keep this in .NET, but realize that I may need to move it to Python.
Thanks in advance everyone!
It's pretty simple, and you can test your code with the simple Python boto3 Lambda code below.
Create a new Lambda function with admin access (temporarily set an Admin role; you can then set the required role).
Add the following code:
https://github.com/mmakadiya/public_files/blob/main/lambda_call_get_rest_api.py
import json
import urllib3

def lambda_handler(event, context):
    # Fetch data from a sample public REST API.
    http = urllib3.PoolManager()
    r = http.request('GET', 'http://api.open-notify.org/astros.json')
    print(r.data)
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Lambda!')
    }
The above code runs a REST API call to fetch the data. This is just a sample program, and it will help you go further.
MAKE SURE that the Lambda function's maximum run time is 15 minutes; it cannot run for more than 15 minutes, so plan accordingly.
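For the "every day at midnight" requirement, the usual serverless pattern is an EventBridge (formerly CloudWatch Events) cron rule that triggers the Lambda. A sketch of the wiring with boto3 (the function name and ARN are placeholders; the same setup can be done in the console or with the AWS SDK for .NET):

import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

FUNCTION_NAME = 'noaa-fetcher'
FUNCTION_ARN = 'arn:aws:lambda:us-east-1:123456789012:function:noaa-fetcher'

# Fire at 00:00 UTC every day.
rule = events.put_rule(Name='noaa-daily-fetch',
                       ScheduleExpression='cron(0 0 * * ? *)')

# Point the rule at the Lambda function.
events.put_targets(Rule='noaa-daily-fetch',
                   Targets=[{'Id': 'noaa-fetcher-target', 'Arn': FUNCTION_ARN}])

# Allow EventBridge to invoke the function.
lambda_client.add_permission(FunctionName=FUNCTION_NAME,
                             StatementId='noaa-daily-fetch-invoke',
                             Action='lambda:InvokeFunction',
                             Principal='events.amazonaws.com',
                             SourceArn=rule['RuleArn'])

Note that Lambda also offers .NET runtimes, so the existing ASP.NET parsing logic could likely be ported rather than rewritten in Python.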

Play! and AWS - provider chain is not respected

This is likely going to be very simple. I have a Play! app that needs to talk to AWS, and I'm trying to keep any secrets out of code, even in dev mode.
I have the following code in a function (minus the case class):
case class AwsCredentials(token: String, accessId: String, secretKey: String)

val client = AWSSecurityTokenServiceClientBuilder.defaultClient()
val token = client.getSessionToken
client.shutdown()

val cred = AmazonS3ClientBuilder.standard()
  .withRegion(Regions.US_EAST_1).getCredentials.getCredentials

AwsCredentials(token.getCredentials.getSessionToken,
  cred.getAWSAccessKeyId,
  cred.getAWSSecretKey)
cred.getAWSAccessKeyId and cred.getAWSSecretKey always fall back to the magic keys, not the ones from the environment variables that can be set, from user.dir/.aws/credentials, or from /home/play/.aws directly (the service does run as play).
Locally, with Play! running via sbt, this works just fine and uses my local keys, but as soon as I deploy it and start it with an init.d script it no longer does. I have confirmed the keys are in the environment variables, and we use other environment variables that work just fine.
Ultimately this is an AWS issue, but since I can get it to work locally with Ammonite as the play user, there must be something amiss in Play.
Thanks!
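One way to see which source the default provider chain actually resolved is to ask the SDK directly. The question uses the AWS SDK for Java, but the chain behaves analogously across SDKs; a small Python boto3 sketch of the same check:

import boto3

creds = boto3.Session().get_credentials()

# 'method' reports where the chain found the keys, e.g. 'env',
# 'shared-credentials-file', or 'iam-role'.
print(creds.method)
print(creds.access_key)

Running the equivalent check as the same user and environment the init.d script uses usually pinpoints which link of the chain is being skipped.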
Sorry for wasting the time of anyone who chose to read and ponder this issue. The underlying issue, I'm nearly positive, is that I need to provide the accessKey to Fine Uploader so that it is on the page, and in role mode I need a token that must be passed as a header; as soon as I do this, I'm not signing the request with the privateKey that matches Fine Uploader's accessKey.
If I figure it out, or want to invest the time to modify the JS code, I'll post back.

Is it possible to make boto3 ignore signature expired error?

I was testing a Python app using boto3 to access DynamoDB and I got the following error message from boto3.
{'error':
    {'Message': u'Signature expired: 20160915T000000Z is now earlier than 20170828T180022Z (20170828T181522Z - 15 min.)',
     'Code': u'InvalidSignatureException'}}
I noticed that it's because I'm using the Python package freezegun's freeze_time to freeze the time at 20160915, since the mock data used by the tests is static.
I researched the error a little and found this answer post. Basically, it says that AWS makes signatures invalid a short time after they are created. From my understanding, in my case the signature is marked as created at 20160915 because of freeze_time, but AWS uses the current time (the time when the test runs). Therefore, AWS thinks this signature expired almost a year ago and sends an error message back.
Is there any way to make AWS ignore that error? Or is it possible to use boto3 to manually modify the date and time the signature is created at?
Please let me know if I'm not explaining my questions clearly. Any ideas are appreciated.
AWS API calls use a timestamp to prevent replay attacks. If your computer's time/date is skewed too far from the actual time, the API calls will be denied.
Running requests from a computer with the date set to 2016 would certainly trigger this failure.
The checking is done on the server side, so there is nothing you can fix locally aside from using the real date (or somehow forcing Python to use a different date from the rest of your system).
I just came across a similar issue with immobilus. My solution was to replace datetime from botocore.auth with an unmocked version, as suggested by Antonio.
The pytest example would look like this:
import types

import pytest
from immobilus import logic

@pytest.fixture(scope='session', autouse=True)
def _boto_real_time():
    # Swap the mocked datetime module used by botocore's signer for the
    # original, unfrozen one.
    from botocore import auth
    auth.datetime = get_original_datetime()

def get_original_datetime():
    original_datetime = types.ModuleType('datetime')
    original_datetime.mktime = logic.original_mktime
    original_datetime.time = logic.original_time
    original_datetime.gmtime = logic.original_gmtime
    original_datetime.localtime = logic.original_localtime
    original_datetime.strftime = logic.original_strftime
    original_datetime.date = logic.original_date
    original_datetime.datetime = logic.original_datetime
    return original_datetime
Is there any way to make AWS ignore that error?
No
Or is it possible to use boto3 to manually modify the date and time the signature is created at?
You should patch any datetime / time call that is in the auth.py file of the botocore library (source: https://github.com/boto/botocore/blob/develop/botocore/auth.py).
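As an alternative to patching botocore by hand, freezegun itself accepts an ignore list of module prefixes to leave unfrozen. Letting botocore keep the real clock means the signature timestamp stays valid while application code still sees the frozen date. A minimal sketch (the DynamoDB call is illustrative):

from datetime import datetime

import boto3
from freezegun import freeze_time

# Freeze time for the code under test, but leave botocore on the real
# clock so request signatures carry a valid timestamp.
with freeze_time('2016-09-15', ignore=['botocore']):
    print(datetime.now())            # 2016-09-15 00:00:00
    dynamodb = boto3.client('dynamodb')
    print(dynamodb.list_tables()['TableNames'])  # signed with the real time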