I'm working on Amazon Echo (Alexa) these days and am quite new to it. I'm using an AWS Lambda function as the endpoint and testing my custom skill on the service simulator and Echosim.io. Skills without audio work fine in this scenario.
The problem is that I'm creating an audio playlist and want Echo/Alexa to play it. I've read that the simulator doesn't support audio streaming at the moment, but I'm unable to stream it on Echosim.io either.
I'm writing the simplest possible code on Lambda, taken from this link.
But the audio is not streaming. I've updated the audio link and also added logs (CloudWatch). The function is being called and returns the response, but no audio plays.
Please help. Is this possible?
So, the simple answer to my question is NO.
I asked the same question on the Alexa developer forum and got this email from the Alexa team:
Hello Faiza, audio streaming is not supported on the service simulator or
echosim. You will need to use an Echo device.
Kim C.
Alexa Skills Team
I tested my skill on an Echo device, and it worked fine.
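For anyone landing here, a minimal sketch of a Python Lambda handler that returns an AudioPlayer.Play directive is below. This is not the exact code from the link in the question; the token and stream URL are placeholders, and the URL must be HTTPS.

```python
# Minimal sketch: Alexa skill handler returning an AudioPlayer.Play directive.
# The token and URL are placeholders; the stream URL must be HTTPS.
def lambda_handler(event, context):
    request_type = event["request"]["type"]

    if request_type in ("LaunchRequest", "IntentRequest"):
        return {
            "version": "1.0",
            "response": {
                "shouldEndSession": True,
                "directives": [{
                    "type": "AudioPlayer.Play",
                    "playBehavior": "REPLACE_ALL",
                    "audioItem": {
                        "stream": {
                            "token": "track-1",                      # placeholder
                            "url": "https://example.com/audio.mp3",  # placeholder
                            "offsetInMilliseconds": 0,
                        }
                    },
                }],
            },
        }

    # AudioPlayer lifecycle events (PlaybackStarted, PlaybackFinished, ...)
    # only need an empty response.
    return {"version": "1.0", "response": {}}
```

Even with a response like this, the audio will only play on a real Echo device, per the Alexa team's email above.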
I don't know the exact answer to your question (yet). I just thought this may help you. We built a tool for local skill development and testing.
BST Tools
Requests and responses from Alexa will be sent directly to your local server, so that you can quickly code and debug without having to do any deployments. I have found this to be very useful for our own development.
We also have a sample project, Streamer, with audio streaming, to demonstrate BST features.
Take a look at this tutorial: BST Emulator
If you are on Python, we also have BSTPy, which will proxy your Python Lambda (expose it as a local HTTP service).
Let me know if you have any questions or need more help.
Unfortunately, I can confirm that @Fayza Nawaz's answer is correct (upvoted).
The test simulator doesn't support audio playback (via AudioPlayer). The web test simulator (ironically, the new interface was launched today: https://developer.amazon.com/blogs/alexa/post/8914b24e-8546-4775-858c-becd800a3c2f/the-new-alexa-skills-kit-developer-console-is-now-generally-available) supports neither finite-length audio files nor continuous audio streaming :(
I opened a similar issue here:
Alexa Skill AudioPlayer: Console test Support poor support/bugs
BTW, I also tested EchoSim and I can confirm it doesn't work.
Another minus is that I can NOT test any (audio-based) Alexa skill on a physical device (Amazon Echo), because I'm from Italy and Amazon does not allow me to purchase a device from Italy, even though I'm perfectly aware that Amazon Alexa doesn't currently support Italian and my skill is in English.
That's very sad ...
I have successfully connected my Jetson Nano to my AWS account following this guide:
https://github.com/awslabs/aws-iot-core-integration-with-nvidia-deepstream
Following this guide, I can see MQTT messages from DeepStream test apps 4 and 5 in the AWS IoT console's MQTT test client.
I now want to get these MQTT messages into a website/web app, preferably. Does anyone know how to do this?
I'm sorry, I'm a novice at these things, and it's entirely possible I have missed something easy somewhere.
Thanks
I'm not sure about the example you provided, but the overall best way to do this is to use WebSockets, which AWS IoT supports. You can use a library to do most of the work for you; the best one is the Paho MQTT client for JavaScript (it also comes in Python and PHP flavors).
Also, just a helpful note: AWS IoT was designed to support extremely complex and demanding IoT networks on a commercial scale. It's incredibly robust but fairly complicated to use if you are just testing/developing. A well-secured small server with the Mosquitto broker on it is much easier to begin learning with; otherwise you will spend lots of time learning how AWS does things and very little time learning about MQTT!
Try this tutorial out and learn about WebSockets, then try the library.
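If it helps, here is a minimal sketch of the Mosquitto route suggested above, using the Python flavor of Paho (1.x callback API) to subscribe to the DeepStream messages. The broker address and topic name are assumptions you'd replace with your own; a web page could do the equivalent over WebSockets with the JavaScript client.

```python
# Minimal paho-mqtt (1.x callback API) subscriber against a local Mosquitto broker.
# "deepstream/detections" is a placeholder topic name.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code", rc)
    client.subscribe("deepstream/detections")

def on_message(client, userdata, msg):
    # Forward msg.payload to your web app here (e.g. over a WebSocket you control).
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1883, keepalive=60)
client.loop_forever()
```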
I am a beginner at cloud computing, and I'm hoping to get some guidance or advice on how to set up a cloud back end connected to IoT devices, with an application running on it to control the behavior of these devices.
Firstly, there are 5 devices that have to connect via 3G or LTE because of the distance between them, so my idea is to connect them to the internet using dynamic public IP addresses and a dynamic DNS server. It seems like I should be using the AWS IoT service to manage these devices. How should I go about doing that, or is there a better approach? The devices all use MQTT and/or a REST API.
The next step is to write an application; I was advised to use AWS Lambda. Am I heading in the right direction? How do I link the devices connected to AWS IoT to AWS Lambda?
I know the question may sound vague, but I am still new and exploring different solutions. Any guidance or recommendations on the right step forward are appreciated.
I assume your devices (or at least one of them) have a 64-bit CPU (x86 or Arm) and run Linux.
It's roughly a 70:30 split:
- 70% of the work needs to focus on building and testing the edge logic.
- 30% of the work goes to the rest (IoT Cloud, Lambda, etc.).
Here is what I suggest.
1/ Code your edge logic first! (This is the piece of code that you ultimately want to run on your devices.)
2/ Test it on the edge by logging in to the devices via SSH (if you can) and running it.
3/ Once you have that done, 70% of the job is over.
4/ The remaining 30% is completing the jigsaw in the cloud. The best place to start: Lambda and Greengrass.
5/ To summarize: you will create Greengrass components in the cloud, install the AWS IoT Greengrass Core software on your device, and then deploy your configuration to the device over the air (OTA).
Now you can use any MQTT client, or the built-in MQTT test client in the AWS IoT console (Test wizard), to send a message to your topic and trigger your edge logic on the device!
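As an illustration of that last step, a rough sketch of triggering the edge logic from a Python MQTT client over mutual TLS is below. The endpoint, certificate file names, topic and payload are placeholders from a typical AWS IoT device setup, not from your project.

```python
# Rough sketch: publish a test message to AWS IoT Core over mutual TLS (port 8883)
# to trigger the edge logic. Endpoint, certificate paths, topic and payload are
# placeholders.
import json
import ssl
import paho.mqtt.client as mqtt

ENDPOINT = "xxxxxxxxxx-ats.iot.us-east-1.amazonaws.com"  # from the AWS IoT console

client = mqtt.Client(client_id="edge-trigger-test")
client.tls_set(
    ca_certs="AmazonRootCA1.pem",
    certfile="device.pem.crt",
    keyfile="private.pem.key",
    tls_version=ssl.PROTOCOL_TLSv1_2,
)
client.connect(ENDPOINT, 8883)
client.loop_start()

# Publish to the topic your Greengrass component subscribes to.
info = client.publish("edge/trigger", json.dumps({"action": "run"}), qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```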
Good luck!
cheers,
ram
I am following this tutorial
https://codelabs.developers.google.com/codelabs/iotcore-heartrate/index.html?index=..%2F..index#0
Now I am able to send heart rate sensor data to Google Cloud BigQuery, Cloud Storage, etc., as clearly described in the tutorial, and I am able to visualize it as well.
But my next question is: how do we get access to the data in real time? For example, if the heart rate data from the Raspberry Pi (3B+) goes above 75, I want to trigger and turn on the LED of the ESP32 that is connected at the receiving end.
In a nutshell, I want to do some actuation on the ESP32 (like the LED blinking I mentioned earlier) based on the sensor data from the Raspberry Pi that goes to Google Cloud. So far I have only succeeded in sending, storing, and visualizing sensor data in Google Cloud. Your help with completing the actuation step would be very valuable, as I am pretty much clueless about how it can be done.
Thanking you
There are a couple of options here. The easiest to stand up is Cloud Functions. The function can be triggered by Pub/Sub messages, and it can also authenticate with the IoT Core Admin SDK (via service accounts) to send a configuration/command back down to the device whose LED you want to light up.
I wrote a blog post about setting up the Cloud to device communication piece:
https://medium.com/google-cloud/cloud-iot-step-by-step-cloud-to-device-communication-655a92d548ca
It covers how to set up the function to do it, although the function code in the example is an HTTP function, which means it is triggered by hitting a URL endpoint instead of Pub/Sub; that part is easy enough to change.
The big piece you'll need to investigate is reading the Pub/Sub message inside the function it triggered. There are good docs on that here:
https://cloud.google.com/functions/docs/calling/pubsub
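As a rough sketch (not production code), a Pub/Sub-triggered background function could decode the telemetry and, above a threshold, send a command back through the IoT Core Admin SDK. The project/registry/device names, the payload field name, and the threshold are assumptions you'd adapt to your setup.

```python
# Sketch of a Pub/Sub-triggered Cloud Function (Python background function).
# Project, region, registry, device, payload field and threshold are placeholders.
import base64
import json

from google.cloud import iot_v1

PROJECT = "my-project"           # placeholder
REGION = "us-central1"           # placeholder
REGISTRY = "heartrate-registry"  # placeholder
DEVICE = "esp32-led"             # placeholder

def on_telemetry(event, context):
    """Triggered by a message on the IoT Core telemetry Pub/Sub topic."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    if payload.get("heart_rate", 0) > 75:
        client = iot_v1.DeviceManagerClient()
        device_path = client.device_path(PROJECT, REGION, REGISTRY, DEVICE)
        # The ESP32 receives this on /devices/esp32-led/commands/#
        client.send_command_to_device(
            request={"name": device_path, "binary_data": b"LED_ON"}
        )
```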
If you have super high throughput, Cloud Functions can get expensive, and at that point you'd want to switch over to something like Dataflow (https://cloud.google.com/dataflow/docs/). The Dataflow job could then either react to telemetry and hit an HTTP-triggered Function when the target condition is met, or authenticate itself with the IoT Admin SDK. I haven't done that before, so I actually don't know how easy/hard it might be.
Having used the Google Cloud IoT Core platform, it seems to be built around the idea of sending configurations down to the device and receiving state back from it.
Google's own documentation suggests using that approach instead of building around sending commands down (as a config) and getting responses back (as a state).
However, at the very end of the documentation they show an example of exactly that.
I am struggling to understand how one would support both approaches. I can see the benefit of how it was designed, but I am also struggling to understand how one would talk to the device using such an idiom, with values and results expressed as config and state.
Has anybody implemented a command/response flow? Is it possible to subscribe to the state topic to retrieve the state of the device in my own application?
Edit based on clarifying comment below:
We've got a beta feature we're calling "Commands" which will do the reboot you're talking about. So the combination of config messages (for persistent configuration that you want sent to a device on startup/connect to IoT Core) and Commands (for fire-and-forget messages like a reboot) can do what you're talking about. Current state is a bit trickier, in that you could either have a callback mechanism where you send a command to ask and listen on the /events/ channel for a response, or have the device report state (the /state/ MQTT topic) and query IoT Core's Admin SDK rather than the device.
Commands just went into open beta, so you should have access to it now. If you're using the gcloud SDK from the command line, you'll need to run gcloud components update; then gcloud beta iot devices --help will show the commands group. If you're using the console, when you drill down to a single device you should now see "Send Command" next to "Update Configuration" in the top bar.
Old answer: As a stab at answering, it sounds like rather than using the state topic, you could/should just use the standard /events/ topic and subscribe to the Pub/Sub topic the device messages go into instead?
It really depends on the volume and number of devices we're talking about in terms of keeping that state machine in sync.
Without knowing specifically what you're implementing, I'd probably do something like send configs down, respond from the device on the /events/ topic, and have a Cloud Function that tracks the Pub/Sub topic and updates something like a Firestore instance with the state of the devices, rather than using the /state/ topic, especially if you're doing something directly in response to the device's state reporting.
Send command to device
To send a command to a device, you will need to use the sendCommandToDevice API call.
Receive a command on the device
To receive a command on the device, subscribe to the /devices/<your-device-id>/commands/# topic.
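As a quick illustration of the device side (a sketch, not official sample code), a Python device could subscribe to its commands topic through the IoT Core MQTT bridge as below. The project/region/registry/device IDs and the key path are placeholders, and the JWT is signed with the key registered for the device.

```python
# Sketch: device-side subscription to the commands topic via the IoT Core MQTT
# bridge. Project/region/registry/device IDs and the key path are placeholders.
import datetime

import jwt  # pyjwt
import paho.mqtt.client as mqtt

PROJECT = "my-project"           # placeholder
REGION = "us-central1"           # placeholder
REGISTRY = "my-registry"         # placeholder
DEVICE = "my-device"             # placeholder
PRIVATE_KEY = "rsa_private.pem"  # key registered for the device

def create_jwt():
    now = datetime.datetime.utcnow()
    claims = {"iat": now, "exp": now + datetime.timedelta(minutes=60), "aud": PROJECT}
    with open(PRIVATE_KEY, "r") as f:
        return jwt.encode(claims, f.read(), algorithm="RS256")

client_id = (
    f"projects/{PROJECT}/locations/{REGION}/registries/{REGISTRY}/devices/{DEVICE}"
)
client = mqtt.Client(client_id=client_id)
client.username_pw_set(username="unused", password=create_jwt())  # username is ignored
client.tls_set()  # system CA bundle; Google's roots.pem also works
client.on_message = lambda c, u, msg: print("command:", msg.payload.decode())

client.connect("mqtt.googleapis.com", 8883)
client.subscribe(f"/devices/{DEVICE}/commands/#", qos=1)
client.loop_forever()
```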
Full examples will eventually be published to the Google Cloud IoT Core samples repos:
Java
NodeJS
Python
I'm in the testing stage of launching an online radio station. I'm using an AWS CloudFormation stack with Adobe Media Server.
My existing instance type is m1.large and my Flash Media Live Encoder is streaming MP3 at 128 kbps, which I think is pretty normal, but it's producing a stream that isn't smooth or stable at all and seems to have a lot of breaks.
Should I pick an instance type with higher specs?
I'm running my test directly off the LiveHLSManifest link, which opens in my iPhone's Safari and plays in the browser's built-in player, which doesn't set any client-side buffering. Could this be the issue?
Testing HLS/HDS links directly in iPhone's Safari was a bad idea. I relied on the built-in player already having some sort of buffering configuration by default, but no... I was able to receive a stable and smooth stream when I used players like Strobe Media Playback, Flowplayer, etc. Hopefully, this answer will save someone some time.