While writing custom skills for Alexa using Lambda functions, I started wondering whether the code for the built-in skills is available anywhere.
Does anyone have an idea about it?
As far as I know, the code is not available.
Also, some of the built-in skills seem to be able to do things that third-party skills cannot, so you may come up short if you are trying to replicate their functionality.
If you are just looking for sample code, the best place to check is the Alexa GitHub samples. They show you how to do a variety of things, from setting up your Lambda function to working with a few different interaction models.
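For a flavour of what those samples boil down to, here is a minimal sketch of a custom-skill Lambda handler that speaks the documented Alexa request/response JSON directly (no SDK; the "HelloIntent" name is made up for illustration):

```javascript
// Minimal Alexa custom-skill handler for AWS Lambda.
// Responds to a hypothetical "HelloIntent" and to skill launch.
exports.handler = (event, context, callback) => {
    let speech = 'Welcome to the sample skill.';

    if (event.request.type === 'IntentRequest' &&
        event.request.intent.name === 'HelloIntent') {
        speech = 'Hello from Lambda!';
    }

    // This is the standard Alexa response envelope.
    callback(null, {
        version: '1.0',
        response: {
            outputSpeech: { type: 'PlainText', text: speech },
            shouldEndSession: true
        }
    });
};
```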
I have installed CakePHP 3 using directions from this tutorial:
https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/php-cakephp-tutorial.html
It is working perfectly, and the installation was actually quite easy. PHP, CakePHP, and MySQL are all working, and I noticed that the newest AWS SDK is installed as a whole in the vendor directory. So I am fully set up to use DynamoDB as a data source as well. You might ask why I would use DynamoDB when I am already using MySQL/MariaDB: the reason is that we have an application in production that already uses DynamoDB, and we should be able to write an admin application in CakePHP on top of DynamoDB. This is not a technical decision; it comes from the business side.
I found a good tutorial by StarTutorial on how to use DynamoDB as a session handler in CakePHP 3:
https://www.startutorial.com/articles/view/using-amazon-dynamodb-as-session-handler-in-cakephp-3
Well, from there it cannot be a long way to using DynamoDB for putting data, getting data and doing scans, can it? Do you have any simple example of how to do it, i.e. how to write data to DynamoDB or do a scan?
I have also read the article:
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/GettingStarted.PHP.html
and this is working fine, no problem. But I would like to keep all the advantages of CakePHP 3: templating, security and so on, the thousands of hours saved by well-written code, and a very fast start on coding, for example, an admin console :)
Thank you,
You could create a Lambda function (in case you want to go serverless) or any other microservice to abstract communication with your DynamoDB. This will definitely simplify your PHP code. You may call Lambda functions directly (via API Gateway), or post messages to SQS for better decoupling. I would recommend the use of SQS -- you'll need some kind of microservice anyway to consume messages and deal with your DynamoDB in a CQRS fashion. Hope it helps!
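A minimal sketch of that first option, assuming API Gateway's Lambda proxy integration and a hypothetical "Products" table (JavaScript shown, since Lambda supports Node out of the box):

```javascript
// Lambda behind API Gateway (proxy integration) that hides DynamoDB
// behind a plain HTTP endpoint. Table and field names are made up.
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient();

exports.handler = async (event) => {
    const item = JSON.parse(event.body); // e.g. { "id": "42", "name": "Widget" }

    await db.put({ TableName: 'Products', Item: item }).promise();

    return {
        statusCode: 200,
        body: JSON.stringify({ saved: item.id })
    };
};
```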
Thank you for your answer. I was looking for an example of how to use the AWS SDK for DynamoDB without adding more complexity to this environment than it already has; your suggestion would mean creating yet another layer instead of using the SDK that already exists. Can you please give a working example of how the AWS SDK is used from CakePHP 3, so that it can use DynamoDB as a data source for its applications without losing its own resources and capabilities (MVC, security, etc.)?
Thank you,
After some hard debugging and bug fixing, I was able to get it working using only the AWS SDK in CakePHP 3.
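For anyone else landing here, the operations the question asks about are small. Here is a sketch using the AWS SDK for JavaScript's DocumentClient; the PHP SDK's Aws\DynamoDb\DynamoDbClient exposes the same PutItem, GetItem, and Scan operations, with Aws\DynamoDb\Marshaler to convert plain arrays to DynamoDB's attribute format. The "Users" table and its fields are hypothetical:

```javascript
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient({ region: 'eu-west-1' });

async function demo() {
    // Write an item.
    await db.put({
        TableName: 'Users',
        Item: { id: 'u1', name: 'Alice', role: 'admin' }
    }).promise();

    // Read a single item by its key.
    const { Item } = await db.get({
        TableName: 'Users',
        Key: { id: 'u1' }
    }).promise();

    // Scan the whole table (fine for small tables; prefer Query at scale).
    const { Items } = await db.scan({ TableName: 'Users' }).promise();

    console.log(Item, Items.length);
}

demo().catch(console.error);
```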
I like fast code execution (that is why I switched from Python to Go) and I do not like dependencies. Amazon recommends using the SDK for simpler authentication (though in Lambda I can get IAM tokens from environment variables) and for the retry-on-error behaviour built into the SDK (a few lines of code, I would think). Yes, it is faster to write my code using the SDK, but what other caveats are there to using the pure HTTP API instead of the SDK? Am I too obsessed with milliseconds? Are such optimizations worth it?
Anything you do with AWS is the result of an API call, whether executed by the CLI, the web console, or an SDK.
The SDKs make it easier to interact with those APIs. While you may be able to come up with some minor improvements for some calls, overall you will spend a lot of time doing it for very little benefit.
I think the stated focus on performance obscures the real trade-offs.
Consider that someone will have to maintain your code: if you use the raw API, the surface to test is small, but AWS APIs might change or be deprecated; if you use an SDK, the next programmer will plug in a new SDK version and hope that it works, and if it doesn't, they'll be bogged down by the sheer weight of the SDK.
Likewise, imagine someone needs to do a security review of this app, or to introduce something not yet covered by the SDK (let's imagine propagating an accounting group from the caller role to the underlying storage).
I don't think there is a clear answer.
Here are my suggestions:
keep it consistent -- either API or SDK (within a given app)
consider the bigger picture (how many apps do you plan to write?)
don't be afraid to switch to the other approach later
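To make the trade-off concrete: with the raw HTTP API you own the things an SDK normally handles, such as SigV4 request signing and retries with backoff. A rough sketch (in JavaScript, with a hypothetical callApi function standing in for one signed HTTP request) of just the retry logic you would be reimplementing:

```javascript
// Retry with exponential backoff and jitter -- roughly what the
// "built-in retry on errors" of an AWS SDK does for you.
// callApi is a hypothetical function that performs one signed request.
async function withRetries(callApi, maxAttempts = 3) {
    for (let attempt = 1; ; attempt++) {
        try {
            return await callApi();
        } catch (err) {
            if (attempt >= maxAttempts) throw err;
            // Backoff base of 100 ms: ~100, ~200, ~400 ms with jitter.
            const delay = Math.random() * 100 * 2 ** (attempt - 1);
            await new Promise((resolve) => setTimeout(resolve, delay));
        }
    }
}
```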
I've had to decide on something similar in the past, with Docker (much nicer APIs and SDKs/libs). Here's how it played out:
For testing, we ended up using a beta version of the Docker Python bindings: the production version was not enough, and the bindings (your "SDK") were overall pretty good and clear.
For log scraping, I used HTTP calls (your "API"), "because performance" -- in reality because the mental load of using the API was comparable to that of the SDK, and because the bindings (SDK) did not support asyncio.
I'm building an Alexa skill that will allow Alexa users to interact with a consumer-facing e-commerce site. The site already has functionality to call a representative. Now, I want to build a voice app as a side project that extends that same option via a conversation. There will be a need for slots like location, category of call, etc. It's basically an application/transactional bot.
In the future, if this is successful, I'd like that same general app to be accessible on different IoT devices (like Google Home, etc.). Therefore, I'd like to abstract out the voice interactions and have the same (general) flow and API to interact with.
This leaves me doing some research on different technologies like api.ai, wit.ai, Lex, etc.
But, since this is an app for Alexa and I already rely on AWS and Amazon in general, I think I'd prefer to use Lex or just write a native Alexa app for now.
I'm having a hard time understanding the differences between the two. I understand that Lex is built on the same deep-learning technology that powers Alexa, and I see that they have similar concepts like intents, slots, etc.
But, I'm looking for any differences between the two services:
Would using Lex allow me to more easily integrate with other devices? Or is there any benefit?
Would using Lex allow me greater flexibility in designing/modifying the flow of a conversation? It seems like Lex is a little more complex and, therefore, might allow greater functionality.
Or is it just that Lex offers nearly the exact same functionality and is just meant for devices that aren't Alexa?
Does Lex offer any more analytics processing than Alexa? In Alexa I can only see intents/slots, but if I could see the actual text in Lex, that would be ideal.
Alexa Skills Kit (ASK) is used to build skills for use in the Alexa ecosystem and devices and lets developers take advantage of all Alexa capabilities such as the Smart Home and Flash Briefing API, streaming audio and rich GUI experiences. Amazon Lex bots support both voice and text and can be deployed across mobile and messaging platforms.
Lex FAQs
In my view (very limited Alexa dev experience) AWS Lex allows greater control over the bot dialog. It defines separate validation and fulfilment code hooks, enables specific prompts for slots on the UI, supports programmatic transitions between intents, gives proper versioning and alias handling, etc... so it seems it's more of an enterprise offering as opposed to "consumer-level" Alexa skills.
But surprisingly, it lacks a few important features: for example, it does not have a built-in "boolean" slot type, so you have to code around yes/no questions, and there are no CloudWatch logs for Lex at all. Also, the (growing) list of integrations will make it more generic.
But despite being a huge AWS fan, I have to say that api.ai seems to be a noticeably more polished, feature-rich proposition, at least for now.
With regards to integrations with other devices, I do not think any of these platforms promise that. It seems that if you target Google Home, then it's their platform; if you target Alexa, then it's Alexa or api.ai (I am not sure whether Google will push this in the future). But if you plan to integrate with chat platforms, or directly into web applications, then I think all the major platforms can give you that now or in the near future.
By the way, have you checked IBM Watson or the Microsoft Bot Framework (with LUIS)? They are also very capable, complete frameworks; don't discount them!
There is a risk in using an external NLP service to process raw text delivered by Alexa over its hobbled native interaction model: Amazon may not certify your skill. This is unfortunate to hear, but their stated reason is the threat of exposing private user data that users may not realize they're sending. This is sickening, because to do anything robust you must avoid Alexa's native NLP system, and I don't believe Lex is advanced much beyond it. You're caught in a bind. This is what may set Alexa back in the long run with respect to natural conversation. We've been preparing our skill in stealth mode, and an Amazon rep said our approach was a "hack" and may not get certification when published. I'm not yet sure what the answer is. Does this raw-text issue exist with Google Home or other voice platforms? Beware.
"Alexa for Business is intended to enable organizations to take advantage of Amazon’s voice enabled assistant, Alexa. Alexa for Business provides Alexa capabilities that make workers more productive, while working alongside all of the other capabilities that Alexa has today like music, smart home controls, shopping, and thousands of third party skills.
Amazon Lex is intended to help build custom conversational interfaces and chat bots for use cases like call centers or application based bots. Bots built with Lex can be highly customized and exist separately from Alexa but they do not take advantage of Alexa’s built in capabilities or third party skills. Both Alexa for Business and Amazon Lex use Amazon’s deep learning capabilities that provide Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU)."
My group and I are first-year computer science students and have been working on a project for school for the last two months or so. We want to create a faculty directory for our university using the Amazon Echo. We already have an API (http://moonlight.cs.sonoma.edu/api/v1/directory/person/). What we want to do is have the user ask Alexa for a faculty member's phone number, email, building name, and office, and have her return the answer from this API.
We do not know how to do this, unfortunately. How do we write code that reads from this API, and how do we implement it? Since our skill is written in JavaScript, I think we would prefer to stick with that. We are completely stuck, however. I apologize if this is vague.
Cheers!
You would need to split your solution up into a few different pieces... you'll need to configure an "Intent Schema" on the Amazon Developer Platform -- this defines the functions that your skill can perform.
For each intent you will then need some sample utterances, which tell Alexa what sort of phrases to listen for. You may also need some Custom Slot Type definitions, depending on what you are doing. For example, if you want Alexa to answer questions such as "Alexa, ask <skill name> what time is the next train from {Station}", then {Station} would be a Custom Slot whose values contain all the station names that your API can get times for.
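For the train example, the classic Intent Schema and sample utterances might look like this (intent, slot, and type names are made up):

```json
{
  "intents": [
    {
      "intent": "NextTrainIntent",
      "slots": [
        { "name": "Station", "type": "LIST_OF_STATIONS" }
      ]
    }
  ]
}
```

with sample utterances such as:

```
NextTrainIntent what time is the next train from {Station}
NextTrainIntent when does the next train leave {Station}
```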
You can implement your Skill using JavaScript with AWS Lambda, or provide your own backend (e.g. Node.js running on AWS Elastic Beanstalk, Heroku, or anywhere else you want to put it). I have a blog post that will walk you through the process -- it uses Python as a demonstration, but the majority of the setup and configuration will be exactly the same if you implement in JavaScript.
If implementing in JavaScript, I recommend looking at the Alexa Skills Kit for Node.js, which is provided by Amazon.
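To give you a starting point, here is a rough sketch of a Lambda-based handler that calls your directory API with Node's built-in http module. The intent name, the slot name, and the fields on the returned records (name, phone) are assumptions; inspect the JSON your API actually returns and adjust:

```javascript
const http = require('http');

const API = 'http://moonlight.cs.sonoma.edu/api/v1/directory/person/';

// Fetch and parse the directory listing.
function fetchDirectory() {
    return new Promise((resolve, reject) => {
        http.get(API, (res) => {
            let body = '';
            res.on('data', (chunk) => { body += chunk; });
            res.on('end', () => resolve(JSON.parse(body)));
        }).on('error', reject);
    });
}

exports.handler = async (event) => {
    // "GetPhoneIntent" and the "FacultyName" slot must match your
    // Intent Schema; both are hypothetical here.
    const name = event.request.intent.slots.FacultyName.value;
    const people = await fetchDirectory();
    const person = people.find((p) => p.name === name); // assumed field

    const text = person
        ? `${person.name}'s phone number is ${person.phone}` // assumed field
        : `I could not find ${name} in the directory.`;

    return {
        version: '1.0',
        response: {
            outputSpeech: { type: 'PlainText', text },
            shouldEndSession: true
        }
    };
};
```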
I am building a dashboard that will monitor production data, and I am able to access this data via web services. The data changes every minute, so I would like to have a page with 4 charts/gauges (the number of systems I am monitoring) that get the data pushed to them with each successive web service call.
Can anyone suggest a good charting kit that would work well with C#? And do you think SignalR would be a good fit here? I have read that Node.js and Socket.IO are options, but I have no experience with Node yet. I would like something along the lines of DevExpress. Perhaps jQuery and something on the front end would work here as well? Thanks!
For such a dashboard, SignalR is definitely a good fit if you already work with .NET and ASP.NET. For a web dashboard in particular, a good graphics library is Raphael, which is open source and pure JavaScript. It's simple and straight to the point, but often less is more. You can build interesting kinds of charts with it.
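To give an idea of how little client code the push side needs, here is a sketch of the browser half using the SignalR JavaScript client with generated hub proxies. The hub name, method name, and redrawGauge function are all hypothetical; the server side would be a C# Hub pushing readings via Clients.All:

```javascript
// Browser side: receive pushed readings and redraw the gauges.
// Assumes jquery.signalR and the /signalr/hubs proxy script are loaded.
$(function () {
    var hub = $.connection.metricsHub; // hypothetical C# Hub name

    // Called by the server via Clients.All.updateMetrics(readings).
    hub.client.updateMetrics = function (readings) {
        readings.forEach(function (value, i) {
            redrawGauge(i, value); // your Raphael drawing code
        });
    };

    $.connection.hub.start();
});
```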
This project may be interesting for you as a sample of those two technologies together. If you press the skulls to raise errors, they will be triggered on a backend simulator and pushed to the dashboard using SignalR. You will notice a pie chart there, which is drawn with Raphael and updates live when new errors are received.
The code of the project is here; it's a bit complex, but maybe you want to have a look anyway. It's based on SignalR 1.x, but the overall concepts are still the same.