Chromecast can't load mp3?

I have an issue with a Chromecast app I'm working on. It's an app for Chrome (not for iOS or Android).
I can't figure out why mp3 files aren't loaded by my Chromecast device while mp4 files are (for the record, I'm using the default receiver).
I based my work on this official sample: https://github.com/googlecast/CastVideos-chrome
Here you can see the code: http://pastebin.com/nJGeT7xP
Here is how I send the mp3 file:
if (session) {
    var mediaInfo = new chrome.cast.media.MediaInfo("http://localhost/tom_www/CastTom/music.mp3", "audio/mpeg");
    var request = new chrome.cast.media.LoadRequest(mediaInfo);
    session.loadMedia(
        request,
        onMediaDiscovered.bind(this, 'loadMedia'),
        onMediaError
    );
} else {
    console.log("No session available, connect first");
}
I hope you can help me.
Thank you.

You will need to put the mp3 in a location accessible to the Chromecast. A http://localhost/... URL points at the Chromecast itself, not at your computer. You can do this by using the LAN IP address of the machine running Chrome in the media URL (for example, something like http://192.168.1.10/tom_www/CastTom/music.mp3, with your own address substituted) or by placing the mp3 on a publicly accessible server.

Related

GRPC create channel link for Google Assistant API

I'm trying to write C++ code to create a Google Assistant on a UDOO x86 board (see https://developers.google.com/assistant/sdk/).
I have tried to follow all the steps in the tutorial, but I'm not sure what URL I should pass when I call the CreateChannel method.
I do have the credentials JSON downloaded and am using it through the "GOOGLE_APPLICATION_CREDENTIALS" environment variable.
Object creation:
GAssistantClient greeter(grpc::CreateChannel(
    "google.com", grpc::GoogleDefaultCredentials()));
Constructor:
GAssistantClient(std::shared_ptr<Channel> channel)
    : stub_(Greeter::NewStub(channel)) {}
When I put google.com in there, I get the following error:
E0505 18:30:34.959710444 7635 ssl_transport_security.c:1226] Invalid toplevel subdomain: com
E0505 18:30:34.959836517 7635 ssl_transport_security.c:1226] Invalid toplevel subdomain: com
You have to authenticate using your credentials and then create a channel. After that, you send an audio config request describing your audio format, and then send audio data requests containing the captured audio, and so on.
You can use the ALSA sound API library to capture and play back audio, and then send and receive the audio in the API requests and responses.
You should follow the order described in this doc.
The Service name for the Google Assistant API is embeddedassistant.googleapis.com. So your object creation call would look like:
GAssistantClient greeter(grpc::CreateChannel(
    "embeddedassistant.googleapis.com", grpc::GoogleDefaultCredentials()));

Host web UI with executable application?

This may be a very stupid question, but I have spent nearly 5 hours doing research on the web and found nothing to fully clarify my doubts.
In a few words, I have been asked by a possible employer to develop a certain executable application as part of a "Technical Test". Supposedly they're measuring my expertise working with WCF. I was given two days to develop such an app, and all the information I got about it is the following:
Deliverable:
- An executable that:
  * When the APP is run, it should host a WCF service (SERVICE) as well as a web UI (UI) accessible by web browsers.
  * Through the UI, the user should be able to add or delete messages stored in a database (DB).
  * The UI should also display the current list of messages stored in the DB.
  * If changes are made to the DB, those changes should show up in the UI without the need to reload the page.
- All of the project source code.

Additional notes:
- Use of existing libraries is allowed as long as they are clearly referenced.
Now, I understand that you can host a WCF web service using a console application (among other options) and the service will be alive as long as the application is running. I also know that any web application can access this service by just adding a Service Reference, creating a client of its type and calling its methods. My confusion begins when they ask me to put it all together in one executable application:
When the APP is run, it should host a WCF service (SERVICE) as well as a web UI (UI) accessible by web browsers.
What is that supposed to mean? How can I host a web UI using an executable? Am I supposed to develop something like IIS and at the same time somehow define the HTML and server-side code in the APP?
I did some research and found a class (HttpListener) that allows you to open an HTTP port, listen, and then send back some HTML through it. A very simple class. If this is a solution, I can't see how to implement it. Other than that I couldn't find anything else on the web.
I would appreciate any opinion on the matter; even if I'm not able to develop the solution in time, I would like to know how to do it. And if I'm missing some important basic concept regarding WCF or web hosting, I would greatly appreciate some clarification. Thanks in advance.
You can use OWIN to "self host" web apps.
An overview and further information can be found here - http://codeopinion.com/self-host-asp-net-web-api/
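For illustration, here is a minimal OWIN self-host sketch (this assumes the Microsoft.AspNet.WebApi.OwinSelfHost NuGet package; any controller reached through the default route is hypothetical and not part of the test):
// Minimal OWIN self-host sketch (assumes the Microsoft.AspNet.WebApi.OwinSelfHost
// NuGet package; controllers reached through the route are hypothetical).
using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

class Startup
{
    public void Configuration(IAppBuilder appBuilder)
    {
        // Standard Web API routing, served from inside the console process
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
        appBuilder.UseWebApi(config);
    }
}

class Program
{
    static void Main()
    {
        // The console app keeps serving HTTP on port 9000 until a key is pressed.
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("Listening on http://localhost:9000/");
            Console.ReadKey();
        }
    }
}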
I solved the problem by hosting the web UI in the service itself. A service operation can return anything, even a Stream of bytes with the HTML for the browser to render. Here's the code:
[WebGet(BodyStyle = WebMessageBodyStyle.WrappedRequest)]
public Stream GetUserInterface()
{
    var appDirectoryName = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);
    var htmlFilePath = appDirectoryName + "\\UI.html";
    var buffer = File.ReadAllBytes(htmlFilePath);
    if (WebOperationContext.Current != null)
        WebOperationContext.Current.OutgoingResponse.ContentType = "text/html";
    return new MemoryStream(buffer);
}
As you can see, in the same directory as the executable I placed a UI.html file; this file contains all my UI HTML, JavaScript and CSS. I then convert it to an array of bytes and return that to the browser.
So the only thing I have to do to access the UI is run the application and then browse to this operation, e.g. http://localhost:8080/MyService/GetUserInterface.
For the database part I used SQLite; this way the application becomes a standalone that can be installed on a PC and run immediately, without the need for a database server or web hosting. Exactly what the test requested.
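For illustration, a minimal sketch of what the SQLite part could look like (this assumes the System.Data.SQLite package and a hypothetical Messages table; it is not the actual test code):
// Sketch only: assumes the System.Data.SQLite package and a hypothetical
// Messages(Id INTEGER PRIMARY KEY, Text TEXT) table in messages.db.
using System.Data.SQLite;

public static class MessageStore
{
    const string ConnectionString = "Data Source=messages.db";

    public static void AddMessage(string text)
    {
        using (var connection = new SQLiteConnection(ConnectionString))
        using (var command = new SQLiteCommand("INSERT INTO Messages (Text) VALUES (@text)", connection))
        {
            connection.Open();
            command.Parameters.AddWithValue("@text", text);
            command.ExecuteNonQuery();
        }
    }
}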
Alternatively, the class that I mentioned in my question (HttpListener) can also be used to host the web UI instead of the service. This is another solution.
private static void HostUI()
{
    while (true)
    {
        using (var listener = new HttpListener())
        {
            listener.Prefixes.Add("http://localhost:7070/");
            listener.Start();
            var context = listener.GetContext();
            var response = context.Response;
            //The .html file will be in the same folder where the .exe is
            var appDirectoryName = Path.GetDirectoryName(System.Reflection.Assembly.GetExecutingAssembly().Location);
            var htmlFilePath = appDirectoryName + "\\UI.html";
            var buffer = File.ReadAllBytes(htmlFilePath);
            response.ContentType = "text/html";
            response.ContentLength64 = buffer.Length;
            Console.WriteLine(buffer.Length);
            var output = response.OutputStream;
            output.Write(buffer, 0, buffer.Length);
            output.Close();
            listener.Close();
        }
    }
}
The reason I used an infinite loop is that this HttpListener implementation processes only one request per iteration, so in order to be able to request the UI multiple times you need to keep looping.
Then you can browse to http://localhost:7070/ and you'll see the UI too.
You can put this code in an independent thread to host the Web UI without affecting the main thread.
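For example, a minimal way to start it on a background thread (a sketch, not part of the original answer):
// Start the HttpListener loop on a background thread so the WCF service host
// on the main thread is not blocked (sketch; assumes the HostUI method above).
var uiThread = new System.Threading.Thread(HostUI) { IsBackground = true };
uiThread.Start();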

Open a URL from a GDK card or invoke the browser

I'm trying to select a URL from a card to be opened by the Glass browser. Is there a way to set or invoke this?
From my examination, the Card API's method for getting a URI is only for rendering images:
https://developers.google.com/glass/develop/gdk/reference/com/google/android/glass/app/Card#getImage(int)
You can open a URL in the built-in browser by starting an activity with an ACTION_VIEW intent. For example:
String url = "http://www.whatever.com";
Intent intent = new Intent(Intent.ACTION_VIEW);
intent.setData(Uri.parse(url));
startActivity(intent);

Access cookies with a Firefox Addon content script?

I am trying to translate an add-on from Chrome that someone else created. It has a content script that uses chrome.cookies.get. I can't find a suitable way to port this to Firefox. Is there any way that I can access cookies from a content script in the Add-on SDK?
Here's the original code:
function getCookies(domain, name, callback) {
    chrome.cookies.get({"url": domain, "name": name},
        function(cookie) {
            if (callback) {
                if (cookie) {
                    callback(cookie.value);
                } else {
                    callback(null);
                }
            }
        }
    );
}
A content script doesn't have the necessary privileges to use any advanced API - neither in Firefox nor in Chrome. It can get the cookies for the current page via document.cookie however. Same restrictions apply as for the web page itself - HTTPOnly cookies won't be visible.
In the extension's modules, however, you can use the nsICookieManager2 interface to access cookies. See Access specific cookies by domain/name in Firefox extension for details. If you want that information from a content script, you will have to send a message from the content script to the extension so the extension can retrieve it for you.

Unity3d and Yii webservice

I'm building a server for a game coded in Unity3d. My server is built with Yii, but in the Yii tutorial's web service guide I saw that it uses SoapClient to call functions on the server. In Unity3d I only know WWW and WWWForm for making requests to a server. So, does anybody know how to use a web service in Unity3d to communicate with Yii?
Thank you so much.
You just send data through WWWForm
http://unity3d.com/support/documentation/ScriptReference/WWWForm.html
var highscore_url = "http://www.my-site.com/?r=MyGame/highscore";
And in Yii's Controller: \protected\controller\MyGameController.php
class MyGameController extends Controller
{
    public function actionHighscore()
    {
        // Here you get data from $_REQUEST ($_POST or $_GET)
        // and use Yii's power to output some data,
        // like the Perl example in the link up there
    }
}
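On the Unity side, a minimal C# sketch of the matching call through WWWForm (the URL and the "score" field name are only illustrative assumptions; the answer doesn't define them):
// Sketch of the Unity client side using the legacy WWW/WWWForm API;
// the URL and the "score" field name are illustrative assumptions.
using System.Collections;
using UnityEngine;

public class HighscoreClient : MonoBehaviour
{
    string highscoreUrl = "http://www.my-site.com/?r=MyGame/highscore";

    public void PostScore(int score)
    {
        StartCoroutine(SendScore(score));
    }

    IEnumerator SendScore(int score)
    {
        WWWForm form = new WWWForm();
        form.AddField("score", score);          // arrives in $_POST['score'] inside actionHighscore()
        WWW www = new WWW(highscoreUrl, form);  // attaching a form makes this a POST request
        yield return www;
        if (string.IsNullOrEmpty(www.error))
            Debug.Log("Server replied: " + www.text);
        else
            Debug.Log("Request failed: " + www.error);
    }
}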