Pepper robot: subscribeCamera's name argument

I want to capture an image from Pepper's camera, so first I subscribe to a camera using the subscribeCamera method. I have read the documentation.
The function takes the following parameters:
std::string ALVideoDeviceProxy::subscribeCamera(
    const std::string& Name,
    const int& CameraIndex,
    const int& Resolution,
    const int& ColorSpace,
    const int& Fps)
Parameters:
Name – Name of the subscribing module.
CameraIndex – Index of the camera in the video system (see Camera Indexes).
Resolution – Resolution requested (see Supported resolutions).
ColorSpace – Colorspace requested (see Supported colorspaces).
Fps – Fps (frames per second) requested to the video source (see Supported framerates).
My question is about the first parameter, Name, because the documentation says:
Warning
The same Name could be used only six time.
Why can the name be used only six times? After the sixth time, does the function stop returning a value? So do I have to change the name every six times?

I think the point is more something like "you can't subscribe more than six times with the same name without unsubscribing first".
The subscribe call returns a name for you to refer to.
If that name already exists, it gives you another one,
like:
subscribe( "toto") => toto
subscribe( "toto") => toto_2
subscribe( "toto") => toto_3
...
But only up to six times (partly lazy programming, but not only: if you hit this limit you probably have a design problem, e.g. you forgot to unsubscribe).
So I think the "normal way" is to unsubscribe, and then it works like this:
subscribe( "toto") => "toto"
unsubscribe( "toto") ("toto" is no longer used, so the system can reuse it later)
subscribe( "toto") => "toto"
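To make the described behavior concrete, here is a small self-contained simulation of that naming scheme. This is not the NAOqi API (the class and method names here are invented for illustration); it only models the allocation and six-subscriber limit described above:

```cpp
#include <cassert>
#include <set>
#include <string>

// Simulates the subscriber-name allocation described above: the first
// subscription gets the requested name, repeats get name_2 ... name_6,
// and a seventh concurrent subscription with the same base name fails.
class SubscriberRegistry {
 public:
  // Returns the allocated name, or "" when six names are already in use.
  std::string subscribe(const std::string& name) {
    if (active_.insert(name).second) return name;
    for (int i = 2; i <= 6; ++i) {
      std::string candidate = name + "_" + std::to_string(i);
      if (active_.insert(candidate).second) return candidate;
    }
    return "";  // limit reached: unsubscribe first
  }

  // Frees a previously allocated name so it can be handed out again.
  void unsubscribe(const std::string& name) { active_.erase(name); }

 private:
  std::set<std::string> active_;
};
```

With this model, unsubscribing "toto" and subscribing again really does give "toto" back, which is the point of the answer above.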

librdkafka custom logger, function signature

I'm using the librdkafka C++ API and I would like to change the default behavior of the logger.
In the C API there is the function rd_kafka_conf_set_log_cb() to set the log callback. It takes a function with the signature:
void(*)(const rd_kafka_t *rk, int level, const char *fac, const char *buf)
However I can't figure out what const char *fac does in the function signature. I can see that strings such as "FAIL" or "BGQUEUE" are passed when using it, but I can't find any documentation on what they mean or how to use them.
What is the const char *fac used for, and are there docs on its use or a dictionary for their definitions?
The facility string is a semi-unique name for the context where the log is emitted. It is mainly there to help librdkafka maintainers identify the source of a log line, but can also be used for filtering purposes.
It was originally inspired by Cisco IOS like system logs which came in the form of:
FAC-LVL-SUBFAC: Message...
The librdkafka counterpart would be:
RDKAFKA-7-JOIN: Joining consumer group xyx
where JOIN is the librdkafka logging facility.
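Since the facility can be used for filtering, here is a self-contained sketch of such a filter. The rd_kafka_t parameter is omitted so the code can stand alone; in a real callback you would keep the full signature shown above and call a helper like this from inside it (the function name and the muted-facility set are my own invention):

```cpp
#include <cassert>
#include <set>
#include <string>

// Decide whether a log line should be forwarded, based on its facility.
// librdkafka uses syslog-style levels where lower numbers are more severe,
// so errors (level <= 3) are always kept regardless of facility.
bool should_log(int level, const char* fac,
                const std::set<std::string>& muted_facilities) {
  if (level <= 3) return true;                  // never drop errors
  return muted_facilities.count(fac) == 0;      // drop muted facilities
}
```

A log callback passed to rd_kafka_conf_set_log_cb() could consult this helper and silently return for lines whose facility is muted.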

Tensorflow Serving C++ Syntax

I'm studying TensorFlow Serving and trying to create a custom Servable.
So I read the code of hashmap_source_adapter (an example in TensorFlow Serving),
but there is some code I can't understand.
HashmapSourceAdapter::HashmapSourceAdapter(
    const HashmapSourceAdapterConfig& config)
    : SimpleLoaderSourceAdapter<StoragePath, Hashmap>(
          [config](const StoragePath& path, std::unique_ptr<Hashmap>* hashmap) {
            return LoadHashmapFromFile(path, config.format(), hashmap);
          },
          // Decline to supply a resource footprint estimate.
          SimpleLoaderSourceAdapter<StoragePath,
                                    Hashmap>::EstimateNoResources()) {}

HashmapSourceAdapter::~HashmapSourceAdapter() { Detach(); }
What does [config] on line 4 mean?
Please give me an idea or a hint on what to search for.
The original code is at the link below; I can't understand line 70:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/servables/hashmap/hashmap_source_adapter.cc#L70
Thanks.
The [config] is a capture list for the lambda expression. Since it's not specified otherwise, it captures config by value. This copies whatever config refers to, and makes it visible inside the lambda.
Capturing config is needed, because the code in the lambda expression uses config:
return LoadHashmapFromFile(path, config.format(), hashmap);
For config to mean something inside the lambda expression, it has to be captured. In particular, a lambda expression is basically a short-cut for creating a class. Anything in the capture list (that's actually used inside the lambda expression) becomes a parameter that's passed to the ctor for that class (and the body of the lambda expression becomes the body of an overload of operator()() for that class).
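To illustrate, here is a capture-by-value lambda next to the (rough) class it is shorthand for. The names (Config, Closure, etc.) are invented for this example; only the capture mechanics mirror the TF Serving code:

```cpp
#include <cassert>
#include <string>

struct Config { std::string format() const { return "txt"; } };

// A lambda that captures `config` by value, like [config] in the
// HashmapSourceAdapter constructor: the body uses the captured copy.
std::string describe_with_lambda(const Config& config) {
  auto fn = [config](const std::string& path) {
    return path + "." + config.format();
  };
  return fn("model");
}

// Roughly what the compiler generates for that lambda: a class whose
// constructor stores the captured value and whose operator() is the body.
class Closure {
 public:
  explicit Closure(Config config) : config_(config) {}
  std::string operator()(const std::string& path) const {
    return path + "." + config_.format();
  }
 private:
  Config config_;
};

std::string describe_with_class(const Config& config) {
  Closure fn(config);
  return fn("model");
}
```

Both functions produce the same result, which is the point: the lambda is just a terser spelling of the hand-written closure class.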

Cast video to TV using MediaPlayerElement

I have the following XAML
<MediaPlayerElement x:Name="EmbeddedPlayer" AreTransportControlsEnabled="True" HorizontalAlignment="Stretch" IsDoubleTapEnabled="True" DoubleTapped="OnEmbeddedPlayerDoubleTapped"> </MediaPlayerElement>
According to this official documentation, I should be able to use the available Cast button to cast the video to my TV. The Movies & TV app can do that: when I click the Cast button in that app, it lists the available targets. But when I do the same thing in my app, it asks me to make sure the devices are discoverable, and no progress ring indicating device searching/discovery appears. (I am on a Lumia 635.) Once again, I feel the frustration of a mismatch between documentation and reality!
Is there a complete working example for video/audio casting?
EDIT: I added the simplified code following the third method for device discovery given in the article:
using namespace Windows::Devices::Enumeration;

MainPage::MainPage()
{
    // Other set up
    DeviceWatcher ^deviceWatcher;
    CastingConnection ^castingConnection;

    // Create our watcher and have it find casting devices capable of video casting
    deviceWatcher = DeviceInformation::CreateWatcher(CastingDevice::GetDeviceSelector(CastingPlaybackTypes::Video));

    // Register for watcher events
    deviceWatcher->Added += ref new TypedEventHandler<DeviceWatcher^, DeviceInformation^>(this, &MainPage::DeviceWatcher_Added);
    deviceWatcher->Start();
}

void MainPage::DeviceWatcher_Added(Windows::Devices::Enumeration::DeviceWatcher^ sender, Windows::Devices::Enumeration::DeviceInformation^ args)
{
    Dispatcher->RunAsync(Windows::UI::Core::CoreDispatcherPriority::Normal, ref new DispatchedHandler([args]()
    {
        // Add each discovered device to our listbox
        create_task(CastingDevice::FromIdAsync(args->Id)).then([](CastingDevice^ addedDevice)
        {
            OutputDebugString(("Found cast device " + addedDevice->FriendlyName + "\n")->Data());
        }, task_continuation_context::use_current());
        //castingDevicesListBox.Items.Add(addedDevice);
    }));
}
As I anticipated, no device is discovered. There are probably some extra steps to take care of permissions (allowing the app to discover and cast to devices) etc. that are never specified in the documentation.
Contrary to the documentation, one MUST NOT use MediaPlayerElement but the deprecated MediaElement for casting on mobile. This hint is taken from https://social.msdn.microsoft.com/Forums/en-US/0c37a74f-1331-4fb8-bfdf-3df11d953098/uwp-mediaplayerelement-mediacasting-is-broken-?forum=wpdevelop
This also solves a problem I previously asked about showing video in fullscreen: fullscreen works like a charm with MediaElement on mobile, but not with the supposedly upgraded MediaPlayerElement.
I should have realized this obvious fact given that Microsoft already abandoned Windows 10 Mobile.

Extract/Identify NodeType by Name (or string - identifier)

Hi!
I'm writing a "simple" Maya command in C++, in which I need to select nodes from the scene (like the ls command in MEL).
But I don't know how to identify an MFn::Type based on a string name like "gpuCache".
Currently my (very naive) parser does a simple if that picks the MFn::Type from two options: if the node name is "gpuCache", it sets the filter to MFn::Type::kPluginShape; otherwise it uses kDagNode (or kShape, or whatever fits my needs as a broad match for as many nodes as possible, for later use with typeName() of the MFnDagNode class).
This is the filterByType function that I want to use to convert a type given as a string into an MFn::Type:
MFn::Type Switch::filterByType( MString type )
{
    MFn::Type object_type;
    object_type = MFn::Type::kDagNode;
    MNodeClass node_class( type );
    MGlobal::displayInfo( MString("Type Name: " + node_class.typeName()) );
    return object_type;
}
Can someone help me, or do I need to call a MEL/Python command from C++ (something I really don't want to do) to get this done?
Thanks!

Adobe Adam and Eve (C++ ASL): how to bind Eve variable so to get it updated inside C++ application?

So we know how to compile it, we have seen its demos, and we loved it. We have seen probably only one real-life open-source project based on it. So I look at the samples and see only three quite long C++ applications that could be of interest to me: ASL\test\adam_tutorial\, ASL\test\adam_smoke\, and ASL\test\eve_smoke\. But I still do not get how, having a simple Eve file with:
dialog(name: "Clipping Path")
{
column(child_horizontal: align_fill)
{
popup(name: "Path:", bind: #path, items:
[
{ name: "None", value: empty },
{ name: "Path 1", value: 1 },
{ name: "Path 2", value: 2 }
]);
edit_number(name: "Flatness:", digits: 9, bind: #flatness);
}
button(name: "OK", default: true, bind: #result);
}
in it, and an Adam file bound to it (theoretically, because I do not quite get how to bind Eve to Adam, and I see no tutorial on how to do this), with
sheet clipping_path
{
output:
result <== { path: path, flatness: flatness };
interface:
unlink flatness : 0.0 <== (path == empty) ? 0.0 : flatness;
path : 1;
}
in it, I can make some C++ function of mine be called each time the flatness variable is changed (a simple one printing the new flatness value, for example).
So how do I implement such a thing with Adobe Adam and Eve (and Boost, of course)?
Update
We tried to do it here and it worked, but not in a live-feedback way, only on the dialog close action. And then here; but due to our obsession with compiling everything on Linux, we paused our ASL programming and started investing time into compiling ASL on Linux.
A good place to ask questions about ASL is on the ASL developer mailing list: http://sourceforge.net/mail/?group_id=132417.
You might want to look at the "Begin" test app. Although it only runs on Mac and Windows, it does demonstrate how to wire things up.
The basic idea is that when a layout description (Eve) is parsed, it will call your add_view_proc http://stlab.adobe.com/structadobe_1_1eve__callback__suite__t.html#a964b55af7417ae24aacbf552d1efbda4 with the arguments expression. Normally you use bind_layout_proc for the callback, which will handle the argument evaluation for you and call a simplified callback that takes a dictionary with the arguments.
When your callback is invoked, you would typically create an appropriate widget and associate the dictionary with the widget, or extract the arguments of interest from the dictionary and store them in a struct. Using the bind argument, you can set up callbacks with the associated sheet (Adam), using the monitor_xxxx functions on sheet_t. Usually you'll use monitor_value and monitor_enabled; when called, you set the value or enabled state on the widget. When the widget's value is changed by the user and the widget is invoked (through an event handler, a callback, or whatever mechanism your UI toolkit supports), you call sheet_t::set() to set the value of the cell and then sheet_t::update() to cause the sheet to recalculate.
That's about it. When trying to get Adam/Eve going with a new UI framework, start small. I usually start with just a window containing two checkboxes and wire up Eve first. Once that is going, add Adam and a simple sheet connecting two boolean cells so you can see whether things are happening correctly. Once you have that going, you'll find it's pretty simple to get much more complex UIs wired up.
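The set()/update()/monitor flow described above can be sketched with a toy sheet. To be clear, this is not ASL's actual API; the class and its members are invented to illustrate the dataflow: monitors are registered per cell, set() stages a change, and update() recalculates and notifies, which is how the OP's C++ function would get called whenever flatness changes:

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// Toy stand-in for an Adam sheet: cells hold doubles, monitor callbacks
// fire during update(), mirroring the sheet_t::set()/update()/monitor_value
// flow described above (illustrative names, not ASL's real interface).
class ToySheet {
 public:
  // Stage a new value for a cell (like sheet_t::set()).
  void set(const std::string& cell, double value) {
    pending_[cell] = value;
  }

  // Commit staged values and notify monitors (like sheet_t::update()).
  void update() {
    for (const auto& kv : pending_) {
      values_[kv.first] = kv.second;
      auto it = monitors_.find(kv.first);
      if (it != monitors_.end())
        for (const auto& cb : it->second) cb(kv.second);
    }
    pending_.clear();
  }

  // Register a callback for a cell (like monitor_value on sheet_t).
  void monitor_value(const std::string& cell,
                     std::function<void(double)> cb) {
    monitors_[cell].push_back(std::move(cb));
  }

 private:
  std::map<std::string, double> values_, pending_;
  std::map<std::string, std::vector<std::function<void(double)>>> monitors_;
};
```

Note that nothing fires on set() alone; the callback runs only when update() commits the change, which matches the "set the cell, then update the sheet" sequence described in the answer.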