I followed this tutorial (https://aws.amazon.com/blogs/compute/building-an-aws-iot-core-device-using-aws-serverless-and-an-esp32/) and it works with the "Test MQTT" option in the Amazon console.
But I don't quite get how to make the request from inside an Alexa skill. I tried the code below, but I get this error:
ERROR: An error occurred (ForbiddenException) when calling the Publish operation: None
Code:
import json

import boto3


# handler method inside the skill's request handler class
def handle(self, handler_input):
    client = boto3.client('iot-data', region_name='sa-east-1')
    response = client.publish(
        topic='esp32/sub',
        qos=1,
        payload=json.dumps({"sequence": "2", "delay": "2000"})
    )
    handler_input.response_builder.speak("ok")
    return handler_input.response_builder.response
Does anyone know if there is something more to do before the request, or another tutorial that complements the first one?
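A ForbiddenException from the iot-data Publish call is an authorization failure, so the first thing to check is whether the skill's Lambda execution role is allowed to publish to the topic. A minimal sketch of attaching such a policy with boto3; the role name, policy name, and account ID below are placeholders, not values from the tutorial:

import json

import boto3

iam = boto3.client('iam')

# Allow publishing to the esp32/sub topic used above (account ID is a placeholder)
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "iot:Publish",
        "Resource": "arn:aws:iot:sa-east-1:123456789012:topic/esp32/sub",
    }],
}

iam.put_role_policy(
    RoleName='my-alexa-skill-lambda-role',  # hypothetical role name
    PolicyName='allow-iot-publish',
    PolicyDocument=json.dumps(policy),
)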
I am trying to create an endpoint via SageMaker. The status of the endpoint goes to Failed with the message "The primary container for production variant variantName did not pass the ping health check. Please check CloudWatch logs for this endpoint".
But there are no logs created to check.
I have been blocked on this for quite some time. Is anyone aware of why this could be happening?
You have missed defining the ping() method in your model_handler.py file.
The model_handler.py file must define two methods, like this:
import logging

logger = logging.getLogger(__name__)

# CustomHandler is your own class that loads the model and runs predictions
custom_handler = CustomHandler()

# define your own health check for the model over here
def ping():
    return "healthy"

def handle(request, context):  # context is a necessary input, otherwise SageMaker will throw an exception
    if request is None:
        return "SOME DEFAULT OUTPUT"
    try:
        response = custom_handler.predict_fn(request)
        return [response]  # the response must be a list, otherwise SageMaker will throw an exception
    except Exception as e:
        logger.error('Prediction failed for request: {}. \n'.format(request)
                     + 'Error trace :: {} \n'.format(str(e)))
You should look at the reference code in the accepted answer here.
Let me frame the issue. I have trained a BlazingText model and have an endpoint deployed.
Within my Notebook instance I can call model.predict and get inferences from the endpoint.
I am now trying to set up a Lambda and an API Gateway for the endpoint. I am having trouble figuring out what the payload is supposed to be for:

invoke_endpoint(EndpointName=mymodel, Body=payload)

I keep getting invalid payload format errors.
This is what my payload looks like when testing the Lambda:

{"instances": "string of text"}

The documentation says the Body takes b'bytes' or file-like objects. I have tinkered around with io with no luck. There are no good blogs or tutorials out there for this particular issue, only a bunch of videos going over the cookie-cutter examples.
import io
import csv
import json
import os

import boto3

# grab environment variables
ENDPOINT_NAME = os.environ['ENDPOINT_NAME']
runtime = boto3.client('runtime.sagemaker')

def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))
    data = json.loads(json.dumps(event))
    payload = data["instances"]
    print(data)
    #print(payload)
    response = runtime.invoke_endpoint(EndpointName=ENDPOINT_NAME,
                                       ContentType='application/json',
                                       Body=payload.getvalue())
    #print(response)
    #result = json.loads(response['Body'].read().decode())
    #print(result)
    #pred = int(result['predictions'][0]['score'])
    #predicted_label = 'M' if pred == 1 else 'B'
    return

This is the error I get:

"errorMessage": "An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (406) from model with message \"Invalid payload format\""
If your payload is what you describe, i.e.:
payload = {"instances":"string of text"}
then you can get it in the form of json string using:
json.dumps(payload)
# which gives:
'{"instances": "string of text"}'
If you want it as a byte array, then you can do the following:
json.dumps(payload).encode()
# which gives:
b'{"instances": "string of text"}'
I am able to send text to my Lambda function from the Lex bot and get a response. But how do I send voice from the bot to the Lambda and get a response back in voice or text format? Please suggest.
The following blog, written by the AWS engineering team, will definitely be helpful in addressing your problem:
https://aws.amazon.com/blogs/machine-learning/capturing-voice-input-in-a-browser/
The following Lambda function code returns voice from the bot:
public Object handleRequest(Map<String, Object> input, Context context) {
    context.getLogger().log("input: " + input);
    LexRequest lexRequest = LexRequestFactory.createLexRequest(input);
    String content = String.format("<speak>Hi! Request came from: %s</speak>",
            lexRequest.getBotName());
    SessionAttributes sessionAttributes = new SessionAttributes();
    Message message = new Message("SSML", content);
    DialogAction dialogAction = new DialogAction("Close", "Fulfilled", message);
    return new LexRespond(sessionAttributes, dialogAction);
}
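For comparison, a sketch of the same Close response written as a Python Lambda handler, using the Lex (V1) event and response format; the SSML content is just an illustration:

def lambda_handler(event, context):
    # Lex V1 puts the bot name under event["bot"]["name"]
    content = "<speak>Hi! Request came from: {}</speak>".format(event["bot"]["name"])
    return {
        "sessionAttributes": event.get("sessionAttributes") or {},
        "dialogAction": {
            "type": "Close",
            "fulfillmentState": "Fulfilled",
            "message": {
                "contentType": "SSML",
                "content": content,
            },
        },
    }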
I am working on a Twitter bot using Tweepy. The bot auto-replies to specific users whose tweet handle it receives as input. The bot was working fine for weeks and then suddenly started throwing 'Bad Authentication Data' errors, or the following to be more precise:
tweepy.error.TweepError: [{'message': 'Bad Authentication data.', 'code': 215}]
Apparently the problem is in this particular part of the code :
import tweepy

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)
ts=api.user_timeline(screen_name=s,count=1)
I have entered the correct keys for the Twitter application. I read about this problem on blogs where people say that it is an issue with POSTFIELDS and that it can be fixed by passing the status as a URL in the api.update_status function. Is that right? If yes, please give me an example of how it can be done. I'm passing the message and the tweet reply ID in the update_status function. Thanks in advance.
I wrote
auth.secure = True
and it fixed the problem in my case. Hope it helps!
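In the context of the question's snippet, that line goes right after the handler is created; a minimal sketch (the key variables are placeholders, as in the question):

import tweepy

auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
auth.secure = True  # the fix from this answer; forces HTTPS on older Tweepy versions
auth.set_access_token(access_key, access_secret)
api = tweepy.API(auth)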
I am facing an issue while requesting a cab through the Uber API with Python.
These are the steps I have followed:

1. Created a session with my server_token.
2. Authorized with my credentials.
3. Got the authorization_url, and user authentication is done.
4. Created an object with the session I got from user authentication.
5. Got the credentials using:

   credentials = new_session.oauth2credential

6. Estimated the ride:

   estimate = client.estimate_ride(product_id=PRODUCT_ID,
                                   start_latitude=xx.xxx, start_longitude=xx.xxx,
                                   end_latitude=xx.xxx, end_longitude=xx.xxx)

7. Fetched the fare amount:

   fare = estimate.json.get('fare')
I try to request a ride with the code below and get the exception:

response = client.request_ride(product_id=PRODUCT_ID,
                               start_latitude=xx.xxx, start_longitude=xx.xxx,
                               end_latitude=xx.xxx, end_longitude=xx.xxx,
                               fare_id=fare.get('fare_id'))

Exception:
ClientError: 401: This endpoint requires at least one of the following scopes: request.delegate.tos_accept, request, request.delegate
Please let me know where I am going wrong. Did I miss any step?
Thanks in advance.
The issue is that you need to add the 'request' privileged scope when creating the token.
from uber_rides.auth import AuthorizationCodeGrant

auth_flow = AuthorizationCodeGrant(
    <CLIENT_ID>,
    <SCOPES>,  # make sure <SCOPES> includes the privileged 'request' scope
    <CLIENT_SECRET>,
    <REDIRECT_URI>
)
auth_url = auth_flow.get_authorization_url()
See more details in the Python ride requests tutorial.
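To round the flow out, a sketch of turning the authorization into a client with the uber_rides SDK once the user has approved access; redirect_url here stands for whatever your REDIRECT_URI receives, including the ?code=... parameter:

from uber_rides.client import UberRidesClient

# Exchange the authorization code embedded in the redirect URL for a session
session = auth_flow.get_session(redirect_url)
client = UberRidesClient(session)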