Logs not appearing for endpoint being created in SageMaker - amazon-web-services

I am trying to create an endpoint via SageMaker. The endpoint's status goes to Failed with the message "The primary container for production variant variantName did not pass the ping health check. Please check CloudWatch logs for this endpoint".
But no logs are created for me to check.
I've been blocked on this for quite some time; does anyone know why this could be happening?

You have missed defining the ping() method in your model_handler.py file.
The model_handler.py file must define two methods, like this:

import logging

logger = logging.getLogger(__name__)

custom_handler = CustomHandler()  # CustomHandler is assumed to be defined/imported elsewhere

# define your own health check for the model over here
def ping():
    return "healthy"

def handle(request, context):  # context is a necessary input, otherwise SageMaker will throw an exception
    if request is None:
        return "SOME DEFAULT OUTPUT"
    try:
        response = custom_handler.predict_fn(request)
        return [response]  # the response must be a list, otherwise SageMaker will throw an exception
    except Exception as e:
        logger.error('Prediction failed for request: {}. \n'
                     .format(request) + 'Error trace :: {} \n'.format(str(e)))
You should look at the reference code in the accepted answer here.

Related

Alexa IoT ESP32

I followed this tutorial (https://aws.amazon.com/blogs/compute/building-an-aws-iot-core-device-using-aws-serverless-and-an-esp32/) and it works with the "Test MQTT" option inside the Amazon console.
But I don't quite understand how to make the request from inside an Alexa skill. I tried the code below, but I get this error:
ERROR: An error occurred (ForbiddenException) when calling the Publish operation: None
Code:
def handle(self, handler_input):
    client = boto3.client('iot-data', region_name='sa-east-1')
    response = client.publish(
        topic='esp32/sub',
        qos=1,
        payload=json.dumps({"sequence": "2", "delay": "2000"})
    )
    handler_input.response_builder.speak("ok")
    return handler_input.response_builder.response
Does anyone know if there is something more to do before making the request, or another tutorial that complements the first one?

REST API not recognizing events (AWS)

I have an API that is connected to a lambda function that has queryStringParameters. On the function end I have
VARIABLE = event['queryStringParameters']['variable']
When I deploy my API and try to use it "api_url"?variable=something,
I get {"message": "Internal server error"}.
In Cloudwatch I get:
[ERROR] KeyError: 'queryStringParameters'
Traceback (most recent call last):
File "/var/task/index.py", line 217, in handler
VARIABLE = event['queryStringParameters']['variable']
To help troubleshoot, I print the event. It does appear in CloudWatch, but as "{}", so it's essentially empty.
When I test the function in the console I use the event:
{ "queryStringParameters": {"variable": "T"}}
and the function works just fine.
I've made APIs connected to Lambda functions before that were almost identical to this one and have had no problem. I'm stumped. Any advice is appreciated.
Based on this AWS guide, you can access the query string parameters using the get method of the event object.
query_string = event.get('queryStringParameters')
if query_string is not None:
    name = query_string.get('variable')
In your Lambda's case, it's quite strange that no queryStringParameters key can be found on the event object at all. It would be worth checking whether a parameter is actually being sent to the REST API, whether the handler's parameters are in the correct order, and whether the parameter is provided on every request.
def event_handler(event, context):  # make sure event comes first
    query_string = event.get('queryStringParameters')
    if query_string is None:
        # return 400 / 422 here to indicate a bad request
        ...
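To make that concrete, here is a minimal sketch of a handler for an API Gateway Lambda proxy integration that returns a 400 instead of raising a KeyError when the query string is missing; the parameter name 'variable' comes from the question, everything else is illustrative.

import json

def handler(event, context):
    # with a Lambda proxy integration, API Gateway places query parameters here;
    # the key is missing or None when no query string was sent
    query_string = event.get('queryStringParameters') or {}
    variable = query_string.get('variable')

    if variable is None:
        return {
            'statusCode': 400,
            'body': json.dumps({'message': "Missing query string parameter 'variable'"})
        }

    return {
        'statusCode': 200,
        'body': json.dumps({'variable': variable})
    }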

How to test a Cloud Function in Google Cloud Platform (GCP)?

I have been trying to find the answer to this but am unable to find it anywhere. In the Cloud Functions section of the Google Cloud Platform console there is a section titled 'Testing', but I have no idea what one is supposed to put there to test the function, i.e. the syntax.
I have attached an image for clarity:
Any help would be much appreciated.
HTTPS Callable functions must be called using the POST method, the Content-Type must be application/json or application/json; charset=utf-8, and the body must contain a field called data for the data to be passed to the method.
Example body:
{
  "data": {
    "aString": "some string",
    "anInt": 57,
    "aFloat": 1.23
  }
}
If you are calling a function by creating your own http request, you may find it more flexible to use a regular HTTPS function instead.
See the documentation for more information.
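If you are testing a callable function from your own code rather than the console, here is a minimal sketch using the requests library; the URL and function name are placeholders, not from the question.

import requests

# placeholder URL; a callable function expects a POST whose JSON body wraps the
# payload in a top-level "data" field, as described above
url = "https://REGION-PROJECT_ID.cloudfunctions.net/yourCallableFunction"
payload = {"data": {"aString": "some string", "anInt": 57, "aFloat": 1.23}}

resp = requests.post(url, json=payload)  # json= sets Content-Type: application/json
print(resp.status_code, resp.json())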
Example with the default hello_world Cloud Function that is inserted automatically whenever you create a new Cloud Function:
def hello_world(request):
    """Responds to any HTTP request.
    Args:
        request (flask.Request): HTTP request object.
    Returns:
        The response text or any set of values that can be turned into a
        Response object using
        `make_response <http://flask.pocoo.org/docs/1.0/api/#flask.Flask.make_response>`.
    """
    request_json = request.get_json()
    if request.args and 'message' in request.args:
        return request.args.get('message')
    elif request_json and 'message' in request_json:
        return request_json['message']
    else:
        return f'Hello World!'
It must be tested with a JSON object as the input args:
{
  "message": "Hello Sun!"
}
Output in the Testing tab:
Hello Sun!
In the Testing tab editor, we pass the function its args as a JSON object (much as we would pass them on the command line with python3 main.py MY_ARG). Since "message" is a key of that JSON, the elif branch finds it and returns the value of that dictionary key as the message, instead of "Hello World!". If we run the function without JSON args, the else: branch is reached and the output is "Hello World!".
This looks to be the same as gcloud functions call, with the JSON required being the same as the --data provided in the CLI.
You can check the docs for examples using the CLI, and the CLI documentation itself for further details.
There are multiple ways you can test your Cloud Function.
1) Use the Google emulator locally if you want to test your code before deployment: https://cloud.google.com/functions/docs/emulator. This gives you a similar localhost HTTP endpoint that you can send requests to for testing your function.
2) Use the GUI on the deployed function: the triggering event is the JSON object that the function expects in the request body. For example:
{
  "key": "value"
}
Depending on how your function code reads the request, this should trigger the function.
Simple Tests for Cloud Pub/Sub:
{"data":"This is data"}
Base64 'Hello World !' message:
{"data":"SGVsbG8gV29ybGQgIQ=="}

Custom error message json object with flask-restful

It is easy to propagate error messages with flask-restful to the client with the abort() method, such as
abort(500, message="Fatal error: Pizza the Hutt was found dead earlier today "
                   "in the back seat of his stretched limo. Evidently, the notorious gangster "
                   "became locked in his car and ate himself to death.")
This will generate the following json output
{
  "message": "Fatal error: Pizza the Hutt was found dead earlier today in the back seat of his stretched limo. Evidently, the notorious gangster became locked in his car and ate himself to death.",
  "status": 500
}
Is there a way to customise the json output with additional members? For example:
{
  "sub_code": 42,
  "action": "redirect:#/Outer/Space",
  "message": "You idiots! These are not them! You've captured their stunt doubles!",
  "status": 500
}
People tend to overuse abort(), when in fact it is very simple to generate your own errors. You can write a function that generates custom errors easily; here is one that matches your JSON:
from flask import jsonify

def make_error(status_code, sub_code, message, action):
    response = jsonify({
        'status': status_code,
        'sub_code': sub_code,
        'message': message,
        'action': action
    })
    response.status_code = status_code
    return response
Then instead of calling abort() do this:
@app.route('/')
def my_view_function():
    # ...
    if need_to_return_error:
        return make_error(500, 42, 'You idiots!...', 'redirect...')
    # ...
I don't have 50 reputation to comment on @dappiu's answer, so I just have to write a new answer, but it really relates to the "Flask-RESTful managed to provide a cleaner way to handle errors" answer, which is very poorly documented here.
The documentation is so bad that it took me a while to figure out how to use it. The key is that your custom exception must inherit from flask_restful's HTTPException. Please note that you cannot use a plain Python Exception.
from flask_restful import Api, HTTPException

class UserAlreadyExistsError(HTTPException):
    pass

custom_errors = {
    'UserAlreadyExistsError': {
        'message': "A user with that username already exists.",
        'status': 409,
    }
}

api = Api(app, errors=custom_errors)
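For completeness, a minimal sketch of raising the custom error from a resource; the resource class and route below are hypothetical, not part of the original answer:

from flask_restful import Resource

class UserRegistration(Resource):  # hypothetical resource
    def post(self):
        # ... parse the request and check whether the username is taken ...
        # raising the custom exception makes Flask-RESTful respond with the
        # 409 entry registered under 'UserAlreadyExistsError' in custom_errors
        raise UserAlreadyExistsError

api.add_resource(UserRegistration, '/users')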
The Flask-RESTful team has done a good job of making custom exception handling easy, but the documentation ruins the effort.
As @Miguel explains, normally you shouldn't use exceptions; just return an error response. However, sometimes you really need an abort mechanism that raises an exception. This may be useful in filter methods, for example. Note that flask.abort accepts a Response object (check this gist):
from flask import abort, make_response, jsonify
json = jsonify(message="Message goes here")
response = make_response(json, 400)
abort(response)
I disagree with @Miguel on the pertinence of abort(). Unless you're using Flask to build something other than an HTTP app (with the request/response paradigm), I believe you should use the HTTPExceptions as much as possible (see the werkzeug.exceptions module). That also means using the aborting mechanism (which is just a shortcut to these exceptions). If instead you opt to explicitly build and return your own errors in views, it leads you into a pattern where you need to check values with a series of if/else/return, which is often unnecessary.
Remember, your functions are more than likely operating in the context of a request/response pipeline. Instead of having to travel all the way back to the view before making a decision, just abort the request at the failing point and be done with it. The framework perfectly understands and has contingencies for this pattern. And you can still catch the exception in case you need to (perhaps to supplement it with additional messages, or to salvage the request).
So, similar to @Miguel's approach, but maintaining the intended aborting mechanism:
from flask import abort, jsonify

def json_abort(status_code, data=None):
    response = jsonify(data or {'error': 'There was an error'})
    response.status_code = status_code
    abort(response)

# then in the app during a request

def check_unique_username(username):
    if UserModel.by__username(username):
        json_abort(409, {'error': 'The username is taken'})

def fetch_user(user_id):
    try:
        return UserModel.get(user_id)
    except UserModel.NotFound:
        json_abort(404, {'error': 'User not found'})
I had to define the code attribute on my subclassed HTTPException for this custom error handling to work properly:
from werkzeug.exceptions import HTTPException
from flask_restful import Api
from flask import Blueprint

api_bp = Blueprint('api', __name__)

class ResourceAlreadyExists(HTTPException):
    code = 400

errors = {
    'ResourceAlreadyExists': {
        'message': "This resource already exists.",
        'status': 409,
    },
}

api = Api(api_bp, errors=errors)
and then later, raise the exception
raise ResourceAlreadyExists
It's obviously late, but in the meantime Flask-RESTful has managed to provide a cleaner way to handle errors, as pointed out by the docs.
The issue opened to suggest the improvement may also help.
Using Flask-RESTful (0.3.8 or higher)
from flask import Flask
from flask_restful import Api

customErrors = {
    'NotFound': {
        'message': "The resource that you are trying to access does not exist",
        'status': 404,
        'anotherMessage': 'Another message here'
    },
    'BadRequest': {
        'message': "The server was not able to handle this request",
        'status': 400,
        'anotherMessage': 'Another message here'
    }
}

app = Flask(__name__)
api = Api(app, catch_all_404s=True, errors=customErrors)
The trick is to use the exception names from the Werkzeug docs.
So, for instance, if you want to handle a 400 request, you should add BadRequest to the customErrors JSON object.
Or if you want to handle 404 errors, use NotFound in your JSON object, and so on.
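As a small sketch of how those entries get used (the resource and data below are made up for illustration), raising the matching Werkzeug exception from a resource makes Flask-RESTful respond with the customErrors entry of the same name:

from flask_restful import Resource
from werkzeug.exceptions import NotFound

ITEMS = {1: {"name": "example"}}  # stand-in data store

class Item(Resource):  # hypothetical resource
    def get(self, item_id):
        item = ITEMS.get(item_id)
        if item is None:
            # rendered using the 'NotFound' entry in customErrors above
            raise NotFound
        return item

api.add_resource(Item, '/items/<int:item_id>')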

Amazon SQS and celery events (not JSON serializable)

I was looking today into Amazon SQS as an alternative to installing my own RabbitMQ on an EC2 instance.
I have followed the documentation as described here
Within a paragraph it says:
SQS does not yet support events, and so cannot be used with celery
events, celerymon or the Django Admin monitor.
I am a bit confused about what "events" means here, e.g. in the scenario below I have a periodic task every minute where I call sendEmail.delay(event) asynchronously.
e.g.
@celery.task(name='tasks.check_for_events')
@periodic_task(run_every=datetime.timedelta(minutes=1))
def check_for_events():
    now = datetime.datetime.utcnow().replace(tzinfo=utc, second=0, microsecond=0)
    events = Event.objects.filter(reminder_date_time__range=(now - datetime.timedelta(minutes=5), now))
    for event in events:
        sendEmail.delay(event)

@celery.task(name='tasks.sendEmail')
def sendEmail(event):
    event.sendMail()
When running it with Amazon SQS I get this error message:
tasks.check_for_events[7623fb2e-725d-4bb1-b09e-4eee24280dc6] raised exception: TypeError(' is not JSON serializable',)
So is that the limitation of SQS pointed out in the documentation, or am I doing something fundamentally wrong?
Many thanks for any advice.
I might have found the solution. Simply refactor the sendMail() logic out of the Event object and into the main task, so there is no need to serialize the object into JSON:
@celery.task(name='tasks.check_for_events')
@periodic_task(run_every=datetime.timedelta(minutes=1))
def check_for_events():
    now = datetime.datetime.utcnow().replace(tzinfo=utc, second=0, microsecond=0)
    events = list(Event.objects.filter(reminder_date_time__range=(now - datetime.timedelta(minutes=5), now)))
    for event in events:
        subject = 'Event Reminder'
        link = None
        message = ...
        sendEmail.delay(subject, message, event.user.email)

@celery.task(name='tasks.sendEmail')
def sendEmail(subject, message, email):
    send_mail(subject, message, settings.DEFAULT_FROM_EMAIL, [email])
This works with both RabbitMQ and Amazon SQS.
For someone returning to this post: this happens when the serializer defined in your Celery runtime config is not able to process the objects passed to the task.
For example, if the config specifies JSON as the required format and a Model object is supplied, the exception above may be raised.
(Q): Is it explicitly necessary to define these parameters?
# CELERY_ACCEPT_CONTENT=['json', ],
# CELERY_TASK_SERIALIZER='json',
# CELERY_RESULT_SERIALIZER='json',
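Following up on that question, here is a minimal sketch of pinning everything to JSON; the setting names follow the old uppercase style used in the commented block above, and passing a primitive such as the event's primary key instead of the model instance is an assumption about how you would adapt the task:

# celeryconfig.py / settings.py
CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'

# With JSON serialization, task arguments must be JSON-serializable, so pass
# e.g. the event's primary key and re-fetch the model inside the task:
#   sendEmail.delay(event.pk)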