Can the Pepper emulator take audio for speech?

I don't have an actual robot right now. I would like to work on a project for educational purposes. I got the QiSDK tutorials running and have a Pepper emulator with Android Studio.
While testing I realized that I can't actually speak to the emulator; I can only simulate speech by typing it into the Dialogue window. Is this a limitation of the Pepper emulator?
Can I really not test Pepper's speech recognition features with the Listen action?

Yes, this is a limitation of the emulator: the speech recognition providers are licensed only for real robots. So you cannot test speech recognition without a real robot, only dialogue management.

Related

C++ Microsoft Speech Platform DTMF emulateRecognition Blocks Speech Recognition

I have an IVR application that can accept speech recognition and DTMF for use with VXML. The application was originally running on a Windows 2003 server as a service with (what looks like) SAPI 5.2 written in C++.
I have been tasked with updating this application to run on Windows 2012 server. To do that I have switched to using Microsoft Speech Platform 10.2 (version 11 wouldn't work at all). The voice recognition works when the service starts and the DTMF works all the time.
The issue arises when you try to use speech recognition after DTMF has been used. Speech recognition will not work until the service is restarted; by "will not work" I mean the application recognizes no speech and therefore doesn't try to identify it. The DTMF continues to work.
I have narrowed the problem down to one line of code that calls out to ISpRecognizer::EmulateRecognition. If I comment this call out then the speech recognition continues to work but the DTMF doesn't process.
I can include code samples if desired but the application is rather large so just let me know what you would like to see.
Has anyone had similar issues?
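For reference, the call in question looks roughly like this. This is a simplified sketch, not the actual application code; it assumes the CreatePhraseFromText helper from sphelper.h and trims error handling.

    #include <windows.h>
    #include <sapi.h>
    #include <sphelper.h>

    // Sketch: inject a DTMF digit (e.g. L"1") into the recognizer as if it
    // had been spoken, by emulating a recognition of that text.
    HRESULT EmulateDtmfDigit(ISpRecognizer *pRecognizer, const wchar_t *pszDigit)
    {
        ISpPhraseBuilder *pPhrase = nullptr;

        // Build an ISpPhrase from plain text (helper from sphelper.h).
        HRESULT hr = CreatePhraseFromText(pszDigit, &pPhrase,
                                          ::GetUserDefaultUILanguage());
        if (FAILED(hr))
            return hr;

        // This is the call that appears to break subsequent speech recognition
        // until the service is restarted; DTMF keeps working.
        hr = pRecognizer->EmulateRecognition(pPhrase);

        pPhrase->Release();
        return hr;
    }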

How to create an application for receiving a video stream from a (WiFi) camera?

I am kind of a beginner in programming, learning it in school, so I really don't know much.
But I want to make a program for the PC, and maybe an app for iPhone/Android, that receives a video stream from a camera and displays it, nothing more.
How do I do this in C++/C# in Visual Studio?
Camera -> WiFi -> PC/Phone
A good library for working with images/video in C++ is OpenCV. I would recommend taking a look at their examples.
http://docs.opencv.org/3.0.0/index.html
Check out the highgui module!
You should first check for cameras that come with a programming API, so that it is easy to write programs that communicate with them.
If the camera's drivers let it work with standard chat apps like Skype, you should be able to use C++ and OpenCV to capture a stream from it. But you can choose the language and tools according to what you want to do with the video stream.
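To give you a feel for it, here is a minimal OpenCV sketch that opens a network stream and shows it in a highgui window. The URL is just a placeholder; the actual MJPEG/RTSP address depends on your camera.

    #include <opencv2/opencv.hpp>

    int main()
    {
        // Many WiFi cameras expose an MJPEG or RTSP URL that VideoCapture can open.
        cv::VideoCapture cap("http://192.168.1.10:8080/video");
        if (!cap.isOpened())
            return 1;                    // could not reach the stream

        cv::Mat frame;
        while (cap.read(frame))
        {
            cv::imshow("Camera", frame); // highgui window
            if (cv::waitKey(30) == 27)   // quit on Esc
                break;
        }
        return 0;
    }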

Develop an application for Windows and Mac OS

I need to develop an application that can run on both Windows and Mac OS X. It is a "monitor" application that needs to display data in real time over an Ethernet connection. I'm interested in performance and graphics. I know C++ very well. Can you help me choose a development tool? Thank you.
JUCE is not just for the music industry; it's for everything. I have used it for music software, image processing, and GUI-only applications.
It's a well-built library that supports all platforms.
You don't need to create separate project files for each platform; JUCE creates them for you.
And it's pure C++.
I would say your two choices are Juce or Qt. Juce is geared toward audio and graphics, making it easy to write fast and powerful DSP algorithms. Although Juce's largest following is among developers making music software, it's fully capable of building general-purpose applications with the same ease as Qt. Qt does have advantages resulting from its greater adoption; you will find plenty of tutorials, books and courses on Qt, but hardly anything on Juce at the moment.
Hopefully that will change, as Juce was bought by ROLI and will likely have more resources behind it soon.
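If you go with Juce, a minimal application skeleton looks roughly like this. It is only a sketch, assuming a Projucer-generated project that provides JuceHeader.h; the window title and size are arbitrary.

    #include <JuceHeader.h>
    #include <memory>

    // One empty window; the same code builds on Windows and Mac OS X.
    class MonitorWindow : public juce::DocumentWindow
    {
    public:
        MonitorWindow()
            : DocumentWindow("Monitor", juce::Colours::darkgrey,
                             DocumentWindow::allButtons)
        {
            setUsingNativeTitleBar(true);
            centreWithSize(800, 600);
            setVisible(true);
        }

        void closeButtonPressed() override
        {
            juce::JUCEApplication::getInstance()->systemRequestedQuit();
        }
    };

    class MonitorApplication : public juce::JUCEApplication
    {
    public:
        const juce::String getApplicationName() override    { return "Monitor"; }
        const juce::String getApplicationVersion() override { return "1.0"; }

        void initialise(const juce::String&) override { window.reset(new MonitorWindow()); }
        void shutdown() override                      { window = nullptr; }

    private:
        std::unique_ptr<MonitorWindow> window;
    };

    // Expands to the platform-specific main() entry point.
    START_JUCE_APPLICATION(MonitorApplication)

Your real-time data coming in over Ethernet would then be read on a background thread (for example with juce::StreamingSocket) and pushed to a content component inside that window.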

Testing cocos2d iPhone using Calabash-iOS

Some people suggested https://github.com/calabash/calabash-ios for iOS tests. I am using cocos2d-iphone; can I write tests for it using Calabash?
Still, Calabash put me off, because the guide says it only works with the simulator, which is not enough, and for real devices I would have to use a service, which just sounds like trouble. Any clean solution? The relevant part of the guide:
After completing this guide you will be able to run tests locally against the iOS Simulator. You can also interactively explore and interact with your application using the Calabash console. Finally, you will be able to test your app on real, non-jailbroken iOS devices via the LessPainful service.
Edit: https://github.com/calabash/calabash-ios/wiki/07-Testing-on-physical-iDevices suggests that device testing is possible... I guess I'll just have to try it myself...
Edit 2: I did get it all working, even hooking up to a remote device that is not plugged into the computer (it just needs to be on the same WiFi). When connecting to the device it helps to use the device's UDID; this was not stated in the calabash-ios docs, or I missed it.
For anyone coming here later, this is the final command that will work:
DEVICE_ENDPOINT=http://192.168.36.180:37265 BUNDLE_ID=build/tabeyou-cal DEVICE_TARGET=thelongudidofyourdevicegoeshere OS=ios6 cucumber
Just replace the IP with your device's IP, the UDID with your device's UDID, and BUNDLE_ID, which should be the target name (I think).
My current question is: how would I identify cocos2d elements like CCMenu, CCSprite, etc.? These all seem to support accessibility identifiers, and I'm sure the Ruby-iOS part could find anything under the hood; in turn that should make it possible to write tests interacting with cocos2d elements.

How to display video in/with Adobe Alchemy?

I want to display video from an unsupported USB camera in AIR (or Flash).
There is an SDK (for the camera) to display the video stream.
My question:
How should the C/C++ routine be built so that it compiles with Adobe Alchemy?
I only want to display the video stream in Adobe AIR (or Flash).
No audio or anything special is needed, only video.
I am working on Linux.
Any ideas?
If you cannot already use the camera from Flash, Alchemy is not going to help you.
Alchemy can only do things that ActionScript can do; it does not help you "get around" the Flash sandbox. The reason people use Alchemy is so that they can compile large legacy codebases and/or open-source libraries.