gRPC-only TensorFlow Serving client in C++

There seems to be a fair amount of information out there about creating a gRPC-only client in Python (and even a few other languages), and I was able to get a working gRPC-only Python client for our implementation.
What I can't seem to find is a case where someone has successfully written the client in C++.
The constraints of the task are as follows:
The build system cannot be bazel, because the final application already has its own build system.
The client cannot include TensorFlow (which in C++ requires bazel to build against).
The application should use gRPC and not HTTP calls for speed.
The application ideally won't call Python or otherwise execute shell commands.
Given the above constraints, and assuming that I extracted and generated the gRPC stubs, is this even possible? If so, can an example be provided?

Turns out, this isn't anything new if you have already done it in Python. Assuming the model has been named "predict" and the input to the model is called "inputs," the following is the Python code:
import logging
import grpc
from grpc import RpcError
from types_pb2 import DT_FLOAT
from tensor_pb2 import TensorProto
from tensor_shape_pb2 import TensorShapeProto
from predict_pb2 import PredictRequest
from prediction_service_pb2_grpc import PredictionServiceStub
class ModelClient:
    """Client facade to work with a TensorFlow Serving gRPC API"""
    host = None
    port = None
    chan = None
    stub = None
    logger = logging.getLogger(__name__)

    def __init__(self, name, dims, dtype=DT_FLOAT, version=1):
        self.model = name
        self.dims = [TensorShapeProto.Dim(size=dim) for dim in dims]
        self.dtype = dtype
        self.version = version

    @property
    def hostport(self):
        """A host:port string representation"""
        return f"{self.host}:{self.port}"

    def connect(self, host='localhost', port=8500):
        """Connect to the gRPC server and initialize prediction stub"""
        self.host = host
        self.port = int(port)
        self.logger.info(f"Connecting to {self.hostport}...")
        self.chan = grpc.insecure_channel(self.hostport)
        self.logger.info("Initializing prediction gRPC stub.")
        self.stub = PredictionServiceStub(self.chan)

    def tensor_proto_from_measurement(self, measurement):
        """Pass in a measurement and return a TensorProto protobuf object"""
        self.logger.info("Assembling measurement tensor.")
        return TensorProto(
            dtype=self.dtype,
            tensor_shape=TensorShapeProto(dim=self.dims),
            string_val=[bytes(measurement)]
        )

    def predict(self, measurement, timeout=10):
        """Execute prediction against the TF Serving service"""
        if self.host is None or self.port is None \
                or self.chan is None or self.stub is None:
            self.connect()
        self.logger.info("Creating request.")
        request = PredictRequest()
        request.model_spec.name = self.model
        if self.version > 0:
            request.model_spec.version.value = self.version
        request.inputs['inputs'].CopyFrom(
            self.tensor_proto_from_measurement(measurement))
        self.logger.info("Attempting to predict against TF Serving API.")
        try:
            return self.stub.Predict(request, timeout=timeout)
        except RpcError as err:
            self.logger.error(err)
            self.logger.error('Predict failed.')
            return None
The following is a working (rough) C++ translation:
#include <iostream>
#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>
#include "grpcpp/create_channel.h"
#include "grpcpp/security/credentials.h"
#include "google/protobuf/map.h"
#include "types.grpc.pb.h"
#include "tensor.grpc.pb.h"
#include "tensor_shape.grpc.pb.h"
#include "predict.grpc.pb.h"
#include "prediction_service.grpc.pb.h"
using grpc::Channel;
using grpc::ClientContext;
using grpc::Status;
using tensorflow::TensorProto;
using tensorflow::TensorShapeProto;
using tensorflow::serving::PredictRequest;
using tensorflow::serving::PredictResponse;
using tensorflow::serving::PredictionService;
typedef google::protobuf::Map<std::string, tensorflow::TensorProto> OutMap;
class ServingClient {
 public:
  ServingClient(std::shared_ptr<Channel> channel)
      : stub_(PredictionService::NewStub(channel)) {}

  // Assembles the client's payload, sends it and presents the response back
  // from the server.
  std::string callPredict(const std::string& model_name,
                          const float& measurement) {
    // Data we are sending to the server.
    PredictRequest request;
    request.mutable_model_spec()->set_name(model_name);

    // Container for the data we expect from the server.
    PredictResponse response;

    // Context for the client. It could be used to convey extra information to
    // the server and/or tweak certain RPC behaviors.
    ClientContext context;

    google::protobuf::Map<std::string, tensorflow::TensorProto>& inputs =
        *request.mutable_inputs();

    tensorflow::TensorProto proto;
    proto.set_dtype(tensorflow::DataType::DT_FLOAT);
    proto.add_float_val(measurement);
    proto.mutable_tensor_shape()->add_dim()->set_size(5);
    proto.mutable_tensor_shape()->add_dim()->set_size(8);
    proto.mutable_tensor_shape()->add_dim()->set_size(105);

    inputs["inputs"] = proto;

    // The actual RPC.
    Status status = stub_->Predict(&context, request, &response);

    // Act upon its status.
    if (status.ok()) {
      std::cout << "call predict ok" << std::endl;
      std::cout << "outputs size is " << response.outputs_size() << std::endl;
      OutMap& map_outputs = *response.mutable_outputs();
      OutMap::iterator iter;
      int output_index = 0;

      for (iter = map_outputs.begin(); iter != map_outputs.end(); ++iter) {
        tensorflow::TensorProto& result_tensor_proto = iter->second;
        std::string section = iter->first;
        std::cout << std::endl << section << ":" << std::endl;

        if ("classes" == section) {
          for (int titer = 0; titer != result_tensor_proto.int64_val_size(); ++titer) {
            std::cout << result_tensor_proto.int64_val(titer) << ", ";
          }
        } else if ("scores" == section) {
          for (int titer = 0; titer != result_tensor_proto.float_val_size(); ++titer) {
            std::cout << result_tensor_proto.float_val(titer) << ", ";
          }
        }
        std::cout << std::endl;
        ++output_index;
      }
      return "Done.";
    } else {
      std::cout << "gRPC call return code: " << status.error_code() << ": "
                << status.error_message() << std::endl;
      return "RPC failed";
    }
  }

 private:
  std::unique_ptr<PredictionService::Stub> stub_;
};
Note that the dimensions here have been specified within the code instead of passed in.
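To mirror the Python client, the shape could instead be passed in; a minimal sketch (dims is a hypothetical parameter, e.g. a std::vector<int64_t>{5, 8, 105}, replacing the three add_dim() calls above):

    // Hypothetical variant: accept the shape instead of hard-coding it.
    for (int64_t d : dims) {
      proto.mutable_tensor_shape()->add_dim()->set_size(d);
    }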
Given the above class, execution can then be as follows:
int main(int argc, char** argv) {
  float measurement[5*8*105] = { ... data ... };

  ServingClient sclient(grpc::CreateChannel(
      "localhost:8500", grpc::InsecureChannelCredentials()));
  std::string model("predict");
  std::string reply = sclient.callPredict(model, *measurement);
  std::cout << "Predict received: " << reply << std::endl;
  return 0;
}
The Makefile used was borrowed from the gRPC C++ examples, with the PROTOS_PATH variable set relative to the Makefile and the following build target (assuming the C++ application is named predict.cc):
predict: types.pb.o types.grpc.pb.o tensor_shape.pb.o tensor_shape.grpc.pb.o resource_handle.pb.o resource_handle.grpc.pb.o model.pb.o model.grpc.pb.o tensor.pb.o tensor.grpc.pb.o predict.pb.o predict.grpc.pb.o prediction_service.pb.o prediction_service.grpc.pb.o predict.o
	$(CXX) $^ $(LDFLAGS) -o $@
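For reference, the stub sources above are produced by the pattern rules in that same example Makefile, which invoke protoc with the gRPC C++ plugin (GRPC_CPP_PLUGIN_PATH is typically resolved via `which grpc_cpp_plugin`); roughly:

%.grpc.pb.cc: %.proto
	protoc -I $(PROTOS_PATH) --grpc_out=. --plugin=protoc-gen-grpc=$(GRPC_CPP_PLUGIN_PATH) $<

%.pb.cc: %.proto
	protoc -I $(PROTOS_PATH) --cpp_out=. $<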

Related

Azure IoT Edge C++ module not sending device to cloud telemetry

I have written an IoT Edge C++ module that is sending the event output using the following function call:
bool IoTEdgeClient::sendMessageAsync(std::string message)
{
bool retVal = false;
LOGGER_TRACE(IOT_CONNECTION_LOG, className + "::sendMessageAsync(...) START");
LOGGER_DEBUG(IOT_CONNECTION_LOG, className + "::sendMessageAsync(...) message : " << message);
Poco::Mutex::ScopedLock lock(_accessMutex);
MESSAGE_INSTANCE *messageInstance = CreateMessageInstance(message);
IOTHUB_CLIENT_RESULT clientResult = IoTHubModuleClient_LL_SendEventToOutputAsync(_iotHubModuleClientHandle, messageInstance->messageHandle, "output1", SendConfirmationCallback, messageInstance);
if (clientResult != IOTHUB_CLIENT_OK)
{
LOGGER_ERROR(IOT_CONNECTION_LOG, className + "::sendMessageAsync(...) ERROR : " << message << " Message id: " << messageInstance->messageTrackingId);
retVal = false;
}
else
{
retVal = true;
}
LOGGER_TRACE(IOT_CONNECTION_LOG, className + "::sendMessageAsync(...) END");
return retVal;
}
The result of the function call IoTHubModuleClient_LL_SendEventToOutputAsync always comes back as IOTHUB_CLIENT_OK. My module name is MicroServer and the configured route is:
FROM /messages/modules/MicroServer/outputs/output1 INTO $upstream
I do not see the SendConfirmationCallback function being called. Also, I do not see any device to cloud message appearing in the IoT hub. Any ideas why this is happening and how to fix it?
It turned out that I need to call this function at least a couple of times per second:
IoTHubModuleClient_LL_DoWork(_iotHubModuleClientHandle);
which I was not doing, so the code was not working properly. Once I started doing it, the code just worked.
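For reference, a minimal sketch of such a pump loop (messagePumpLoop and _running are illustrative names, not from the original module; the LL API does no background work of its own, so DoWork must be called regularly to drive sends and confirmation callbacks):

#include <chrono>
#include <thread>

void IoTEdgeClient::messagePumpLoop()
{
    while (_running)  // _running: illustrative stop flag owned by the class
    {
        {
            // Serialize DoWork with sendMessageAsync on the same mutex,
            // since the LL client is not thread-safe.
            Poco::Mutex::ScopedLock lock(_accessMutex);
            IoTHubModuleClient_LL_DoWork(_iotHubModuleClientHandle);
        }
        std::this_thread::sleep_for(std::chrono::milliseconds(100));
    }
}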

TensorFlow and TFLearn C++ API

First off, I am new to both TensorFlow and Python.
I have Python code that contains a TFLearn DNN network. I need to convert that code to C++ so I can later turn it into a library to be used in mobile application development.
I read about the C++ API for TensorFlow (whose documentation is quite vague and unclear), so I took the code line by line to try converting it.
The first step was loading the saved model that was previously trained and saved in Python (I don't need training to be done in C++, so just loading the TFLearn model is enough).
The Python code to save the model was as follows:
network = input_data(shape=[None, 100, 100, 1], name='input')
network = conv_2d(network, 32, 5, activation='relu')
network = avg_pool_2d(network, 2)
network = conv_2d(network, 64, 5, activation='relu')
network = avg_pool_2d(network, 2)
network = fully_connected(network, 128, activation='relu')
network = fully_connected(network, 64, activation='relu')
network = fully_connected(network, 2, activation='softmax', restore=False)
network = regression(network, optimizer='adam', learning_rate=0.0001,
                     loss='categorical_crossentropy', name='target')
model = tflearn.DNN(network, tensorboard_verbose=0)
model.fit(X, y.toarray(), n_epoch=3, validation_set=0.1, shuffle=True,
          show_metric=True, batch_size=32, snapshot_step=100,
          snapshot_epoch=False, run_id='model_finetuning')
model.save('model/my_model.tflearn')
The Python code to load the model was:
network = input_data(shape=[None, 100, 100, 1], name='input')
network = conv_2d(network, 32, 5, activation='relu')
network = avg_pool_2d(network, 2)
network = conv_2d(network, 64, 5, activation='relu')
network = avg_pool_2d(network, 2)
network = fully_connected(network, 128, activation='relu')
network = fully_connected(network, 64, activation='relu')
network = fully_connected(network, 2, activation='softmax')
network = regression(network, optimizer='adam', learning_rate=0.001,
                     loss='categorical_crossentropy', name='target')
model = tflearn.DNN(network, tensorboard_verbose=0)
model.load('model/my_model.tflearn')
This code worked like a charm in Python, though the saved model was actually four files inside the model folder, as follows:
model
|------------checkpoint
|------------my_model.tflearn.data-00000-of-00001
|------------my_model.tflearn.index
|------------my_model.tflearn.meta
Now I come to the C++ part of it. After a lot of research, I came up with the following code:
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/platform/env.h"
#include <iostream>
using namespace tensorflow;
using namespace std;
int main()
{
Session* session;
Status status = NewSession(SessionOptions(), &session);
if (!status.ok())
{
cerr << status.ToString() << "\n";
return 1;
}
else
{
cout << "Session created successfully" << endl;
}
tensorflow::Tensor input_tensor(tensorflow::DT_FLOAT, tensorflow::TensorShape({1,100,100,1}));
GraphDef graph_def;
status = ReadBinaryProto(Env::Default(), "/home/user/PycharmProjects/untitled/model/my_model.tflearn", &graph_def);
if (!status.ok())
{
cerr << status.ToString() << "\n";
return 1;
}
else
{
cout << "Read Model File" << endl;
}
return 0;
}
And now for my questions: the code compiles correctly (with no faults) using a bazel build (as described in the "short" explanation of the TensorFlow C++ API), but when I try to run it, the model file is not found.
Is what I did in C++ correct? Is this the correct way to load the saved model (and why are four files generated during save)? Or is there another approach?
Is there any "full and decent" manual for the TensorFlow C++ API?
If you just want to load an already trained model, a C++ loader already exists. Directly in TensorFlow, look here and here.
Patwie also has a really good example of loading a saved model: code from Patwie.
typedef std::vector<std::pair<std::string, tensorflow::Tensor>> tensor_dict;

tensorflow::Status LoadModel(tensorflow::Session *sess, std::string graph_fn, std::string checkpoint_fn = "") {
    tensorflow::Status status;

    // Read in the protobuf graph we exported
    tensorflow::MetaGraphDef graph_def;
    status = ReadBinaryProto(tensorflow::Env::Default(), graph_fn, &graph_def);
    if (status != tensorflow::Status::OK())
        return status;

    // create the graph in the current session
    status = sess->Create(graph_def.graph_def());
    if (status != tensorflow::Status::OK())
        return status;

    // restore model from checkpoint, iff checkpoint is given
    if (checkpoint_fn != "") {
        const std::string restore_op_name = graph_def.saver_def().restore_op_name();
        const std::string filename_tensor_name = graph_def.saver_def().filename_tensor_name();

        tensorflow::Tensor filename_tensor(tensorflow::DT_STRING, tensorflow::TensorShape());
        filename_tensor.scalar<std::string>()() = checkpoint_fn;

        tensor_dict feed_dict = {{filename_tensor_name, filename_tensor}};
        status = sess->Run(feed_dict,
                           {},
                           {restore_op_name},
                           nullptr);
        if (status != tensorflow::Status::OK())
            return status;
    } else {
        // virtual Status Run(const std::vector<std::pair<string, Tensor> >& inputs,
        //                    const std::vector<string>& output_tensor_names,
        //                    const std::vector<string>& target_node_names,
        //                    std::vector<Tensor>* outputs) = 0;
        status = sess->Run({}, {}, {"init"}, nullptr);
        if (status != tensorflow::Status::OK())
            return status;
    }

    return tensorflow::Status::OK();
}
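For completeness, a minimal usage sketch (my own, not from Patwie's repository): graph_fn is the .meta file from the four files above, and checkpoint_fn is their common prefix, which covers the .index and .data files.

tensorflow::Session *sess = nullptr;
TF_CHECK_OK(tensorflow::NewSession(tensorflow::SessionOptions(), &sess));
// Paths are illustrative and match the model folder shown earlier.
TF_CHECK_OK(LoadModel(sess, "model/my_model.tflearn.meta",
                      "model/my_model.tflearn"));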
Unfortunately there isn't a "full and decent" manual for the TensorFlow C++ API yet (AFAIK).
I wrote up the steps for saving a TFLearn checkpoint correctly:
...
model = tflearn.DNN(network)

class MonitorCallback(tflearn.callbacks.Callback):
    # The hook name is assumed here; any tflearn.callbacks.Callback method
    # that receives training_state works the same way.
    def on_epoch_end(self, training_state):
        # Create another session to clone the model and avoid affecting the training process
        with tf.Session() as second_sess:
            # Clone the current model
            model2 = model
            # Delete the training ops
            del tf.get_collection_ref(tf.GraphKeys.TRAIN_OPS)[:]
            # Save the checkpoint
            model2.save('checkpoint_' + str(training_state.step) + ".ckpt")
            # Write a text protobuf to have a human-readable form of the model
            tf.train.write_graph(second_sess.graph_def, '.', 'checkpoint_' + str(training_state.step) + ".pbtxt", as_text=True)
        return

mycb = MonitorCallback()
model.fit({'input': X}, {'target': Y}, n_epoch=500, run_id="mymodel", callbacks=mycb)
...
After you have the checkpoint, you can load it in C++:
https://github.com/kecsap/tensorflow_cpp_packaging#load-a-checkpoint-in-c
...and use it for inference:
https://github.com/kecsap/tensorflow_cpp_packaging#inference-in-c
You can also find example code for C, and for how to freeze a model and then load it in C++.

How to handle a configure file on Windows 8

I have a configure file named "Example.CFG" in which arguments for the main function are specified. When I run the code in Visual Studio 2013, it runs successfully, but the configuration file does not supply any arguments to main, so the else branch executes every time. From googling, I understand the reason: ./configure only works on Linux. My problem is how to bypass the configure file and supply the arguments directly to main. My code is below (the code is not mine but downloaded from GitHub):
int main(int argc, void** argv) {
    tricrf::MaxEnt *model;
    vector<string> model_file, train_file, dev_file, test_file, output_file;
    string initialize_method, estimation_method;
    size_t max_iter, init_iter;
    double l1_prior, l2_prior;
    enum {MaxEnt = 0, CRF, TriCRF1, TriCRF2, TriCRF3} model_type;
    bool train_mode = false, testing_mode = false;
    bool confidence = false;

    ////////////////////////////////////////////////////////////////
    /// Reading the configuration file
    ////////////////////////////////////////////////////////////////
    char config_filename[128];
    if (argc > 1) {
        strcpy_s(config_filename, (char*)argv[1]);
    }
    else {
        cout << MAX_HEADER;
        cout << "[Usage] max config_file \n\n";
        exit(1);
    }
My configure file has the following entries:
# sample configuration file
model_type = TriCRF3
# {MaxEnt CRF TriCRF1 TriCRF2 TriCRF3}
mode = both
# {train test both}
train_file = example.data
test_file = example.data
model_file = example.model
cutoff = 1
# feature cutoff by count
true_label = first
# if 'first' is on, it reads first columns as true labels
outside_label = NONE
# it would be used for F1 calculation
binary_model = false
# currently, not support
estimation = LBFGS-L2
# {LBFGS-L1 LBFGS-L2}
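One way around the missing argument is to supply the config path directly: in Visual Studio, set it under Project Properties > Debugging > Command Arguments; alternatively, fall back to a default filename when argv[1] is absent. A sketch of the latter (the hard-coded path is illustrative):

if (argc > 1) {
    strcpy_s(config_filename, (char*)argv[1]);
}
else {
    // Fallback: assume "Example.CFG" is in the working directory instead of exiting
    strcpy_s(config_filename, "Example.CFG");
}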

net-snmp is not changing auth and priv protocol correctly

I'm using the net-snmp library (version 5.7.1) in a C++ program under Linux. I have a web frontend where a user can choose an SNMP version and configure it. SNMPv1 and SNMPv2 are working just fine, but I have some issues with SNMPv3.
Here is a picture of the frontend: Screenshot of web interface (sorry for not uploading it directly here, but I need at least 10 reputation to do this)
When I start the C++ backend and enter all the needed SNMPv3 credentials correctly, everything works fine and the device is reachable. If I then change, for example, the auth protocol from MD5 to SHA but leave the rest of the credentials the same, I would expect the device to no longer be reachable. In reality, it stays reachable. After restarting the backend, the device is (as expected) no longer reachable with the same settings.
After discovering this issue, I ran some tests with different users and different settings, against three devices from different vendors, and I got the same result every time, so it cannot be a device-related issue. The results can be seen here: Test results
My conclusion after testing was that net-snmp seems to cache the selected auth and priv protocol for a given user name. This can be seen clearly in Test 2: the first time I use a user name with a specific protocol, I get the expected result; after changing the protocol, a different result is expected, but I still get the same result as before.
Finally, some information on how the SNMP calls are made:
There is a class called SNMPWrapper, which handles the whole SNMP communication.
Inside the constructor I call init_snmp() to initialize net-snmp.
From the outside, only get(), set() and walk() can be called. Every time one of these methods is called, a new SNMP session is created (first I create a new session with snmp_sess_init(), then I set up what is needed, and finally I open the session with snmp_sess_open()); a condensed sketch of this lifecycle follows below.
After I have made the request and received my answer, I close the session with snmp_sess_close().
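My reconstruction of that per-call lifecycle from the description above, using net-snmp's single-session API (the OID and setup are illustrative):

struct snmp_session sess;
snmp_sess_init(&sess);                  /* set defaults */
/* ... set peername, version and SNMPv3 credentials here ... */
void *handle = snmp_sess_open(&sess);   /* open the session */

struct snmp_pdu *pdu = snmp_pdu_create(SNMP_MSG_GET);
oid anOID[MAX_OID_LEN];
size_t anOID_len = MAX_OID_LEN;
read_objid(".1.3.6.1.2.1.1.1.0", anOID, &anOID_len);
snmp_add_null_var(pdu, anOID, anOID_len);

struct snmp_pdu *response = NULL;
int status = snmp_sess_synch_response(handle, pdu, &response);
/* ... evaluate status and response here ... */
if (response)
    snmp_free_pdu(response);
snmp_sess_close(handle);                /* close after each request */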
Question: Do I have to do any other clean up before changing a protocol in order to get it work correctly?
Edit: I added some code, that shows the described behaviour
int main(int argc, char** argv) {
    struct snmp_session session, session1, *ss, *ss1;
    struct snmp_pdu *pdu, *pdu1;
    struct snmp_pdu *response, *response1;
    oid anOID[MAX_OID_LEN];
    size_t anOID_len = MAX_OID_LEN;
    struct variable_list *vars;
    int status, status1;

    init_snmp("snmpapp");

    const char* user = "md5";
    string authpw = "123123123";
    string privpw = "";
    string ipString = "192.168.15.32";
    char ip[16];
    memset(&ip, 0, sizeof (ip));
    ipString.copy(ip, sizeof (ip) - 1, 0);

    /*
     * First request: AuthProto is MD5, no PrivProto is used. The snmp-get
     * request is successful.
     */
    snmp_sess_init(&session); /* set up defaults */
    session.peername = ip;
    session.version = SNMP_VERSION_3;

    /* set the SNMPv3 user name */
    session.securityName = strdup(user);
    session.securityNameLen = strlen(session.securityName);

    // set the authentication method to MD5
    session.securityLevel = SNMP_SEC_LEVEL_AUTHNOPRIV;
    session.securityAuthProto = usmHMACMD5AuthProtocol;
    session.securityAuthProtoLen = USM_AUTH_PROTO_MD5_LEN;
    session.securityAuthKeyLen = USM_AUTH_KU_LEN;

    if (generate_Ku(session.securityAuthProto,
                    session.securityAuthProtoLen,
                    (u_char *) authpw.c_str(), strlen(authpw.c_str()),
                    session.securityAuthKey,
                    &session.securityAuthKeyLen) != SNMPERR_SUCCESS) {
        // if code reaches here, the creation of the security key was not successful
    }

    cout << "SecurityAuthProto - session: " << session.securityAuthProto[9] << " / SecurityAuthKey - session: " << session.securityAuthKey << endl;

    ss = snmp_open(&session); /* establish the session */
    if (!ss) {
        cout << "Couldn't open session correctly";
        exit(2);
    }
    cout << "SecurityAuthProto - ss: " << ss->securityAuthProto[9] << " / SecurityAuthKey - ss: " << ss->securityAuthKey << endl;

    // send message
    pdu = snmp_pdu_create(SNMP_MSG_GET);
    read_objid(".1.3.6.1.2.1.1.1.0", anOID, &anOID_len);
    snmp_add_null_var(pdu, anOID, anOID_len);
    status = snmp_synch_response(ss, pdu, &response);

    /*
     * Process the response.
     */
    if (status == STAT_SUCCESS && response->errstat == SNMP_ERR_NOERROR) {
        cout << "SNMP-read success" << endl;
    } else {
        cout << "SNMP-read fail" << endl;
    }

    if (response)
        snmp_free_pdu(response);
    if (!snmp_close(ss))
        cout << "Snmp closing failed" << endl;

    /*
     * Second request: Only the authProto is changed from MD5 to SHA1. I expect
     * that the snmp-get fails, but it still succeeds.
     */
    snmp_sess_init(&session1);
    session1.peername = ip;
    session1.version = SNMP_VERSION_3;

    /* set the SNMPv3 user name */
    session1.securityName = strdup(user);
    session1.securityNameLen = strlen(session1.securityName);

    // set the authentication method to SHA1
    session1.securityLevel = SNMP_SEC_LEVEL_AUTHNOPRIV;
    session1.securityAuthProto = usmHMACSHA1AuthProtocol;
    session1.securityAuthProtoLen = USM_AUTH_PROTO_SHA_LEN;
    session1.securityAuthKeyLen = USM_AUTH_KU_LEN;

    if (generate_Ku(session1.securityAuthProto,
                    session1.securityAuthProtoLen,
                    (u_char *) authpw.c_str(), strlen(authpw.c_str()),
                    session1.securityAuthKey,
                    &session1.securityAuthKeyLen) != SNMPERR_SUCCESS) {
        // if code reaches here, the creation of the security key was not successful
    }

    cout << "SecurityAuthProto - session1: " << session1.securityAuthProto[9] << " / SecurityAuthKey - session1: " << session1.securityAuthKey << endl;

    ss1 = snmp_open(&session1); /* establish the session */
    if (!ss1) {
        cout << "Couldn't open session1 correctly";
        exit(2);
    }
    cout << "SecurityAuthProto - ss1: " << ss1->securityAuthProto[9] << " / SecurityAuthKey - ss1: " << ss1->securityAuthKey << endl;

    // send message
    pdu1 = snmp_pdu_create(SNMP_MSG_GET);
    read_objid(".1.3.6.1.2.1.1.1.0", anOID, &anOID_len);
    snmp_add_null_var(pdu1, anOID, anOID_len);
    status1 = snmp_synch_response(ss1, pdu1, &response1);

    /*
     * Process the response.
     */
    if (status1 == STAT_SUCCESS && response1->errstat == SNMP_ERR_NOERROR) {
        cout << "SNMP-read success" << endl;
    } else {
        cout << "SNMP-read fail" << endl;
    }

    if (response1)
        snmp_free_pdu(response1);
    snmp_close(ss1);

    return 0;
}
I found the solution myself:
net-snmp caches the users for every engineID (device). If a user already exists for an engineID and you try to open a new session with that user, net-snmp will use the cached one. So the solution was to clear the list of cached users.
With this code snippet I could resolve my problem:
usmUser* actUser = usm_get_userList();
while (actUser != NULL) {
    usmUser* dummy = actUser;
    usm_remove_user(actUser);
    actUser = dummy->next;
}
I hope I can help somebody else with this.
You can also update the password for an existing user:
for (usmUser* actUser = usm_get_userList(); actUser != NULL; actUser = actUser->next) {
    if (strcmp(actUser->secName, user) == 0) {
        // this method calls generate_Ku with the previous security data but with the specified password
        usm_set_user_password(actUser, "userSetAuthPass", authpw.c_str());
        break;
    }
}

NetworkManager and Qt Problem

I am still new to using Qt4/D-Bus, and I am trying to get a list of access points with the Qt API for sending and receiving D-Bus messages.
I got the following error:
org.freedesktop.DBus.Error.UnknownMethod
Method "GetAccessPoint" with signature "" on interface "org.freedesktop.NetworkManager.Device.Wireless" doesn't exist
The code is:
QStringList *netList = new QStringList();
QDBusConnection sysbus = QDBusConnection::systemBus();
QDBusInterface callNM("org.freedesktop.NetworkManager",
                      "/org/freedesktop/NetworkManager",
                      "org.freedesktop.NetworkManager.Device.Wireless",
                      sysbus);
if (callNM.isValid())
{
    QDBusMessage query = callNM.call("GetAccessPoints");
    if (query.type() == QDBusMessage::ReplyMessage)
    {
        QDBusArgument arg = query.arguments().at(0).value<QDBusArgument>();
        arg.beginArray();
        while (!arg.atEnd())
        {
            QString element = qdbus_cast<QString>(arg);
            netList->append(element);
        }
        arg.endArray();
    } else {
        std::cout << query.errorName().toStdString() << std::endl;
        std::cout << query.errorMessage().toStdString() << std::endl;
    }

    int x = netList->size();
    for (int y = 0; y < x; y++)
    {
        widget.avail_nets->addItem(netList->at(y)); // just print it to my gui from the stringlist array
    }
} else {
    std::cout << "fail" << std::endl;
}
What's wrong? My naming is correct and I am following the exact specs from here.
The method name is GetAccessPoints.
While your error is:
org.freedesktop.DBus.Error.UnknownMethod
Method "GetAccessPoint" with signature "" on interface "org.freedesktop.NetworkManager.Device.Wireless" doesn't exist
Note the missing "s" in "GetAccessPoint": you might have misspelled the method name in your code. Although the code you pasted here uses the correct method name, maybe you fixed it and forgot to rebuild or clean the project?
I had the same issue, but then I noticed that it only happened when I called the GetAccessPoints method on a wired device. Make sure the device is a wireless device (i.e. DeviceType equals NM_DEVICE_TYPE_WIFI), and everything should work fine.
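A small sketch of that check (my own illustration; the device path is an example, and NM_DEVICE_TYPE_WIFI is 2 in the NetworkManager D-Bus spec):

QDBusInterface props("org.freedesktop.NetworkManager",
                     "/org/freedesktop/NetworkManager/Devices/0",
                     "org.freedesktop.DBus.Properties",
                     QDBusConnection::systemBus());
QDBusReply<QVariant> type = props.call("Get",
        "org.freedesktop.NetworkManager.Device", "DeviceType");
if (type.isValid() && type.value().toUInt() == 2 /* NM_DEVICE_TYPE_WIFI */)
{
    // Safe to call GetAccessPoints on this device
}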
I modified this and it works for me:
QDBusInterface callNM("org.freedesktop.NetworkManager",
                      "/org/freedesktop/NetworkManager/Devices/0",
                      "org.freedesktop.NetworkManager.Device.Wireless",
                      sysbus);
and the result is:
"/org/freedesktop/NetworkManager/AccessPoint/2"
"/org/freedesktop/NetworkManager/AccessPoint/1"
I think /org/freedesktop/NetworkManager is not the correct path for a specific (wireless) device.
QDBusInterface dbus_iface("org.freedesktop.NetworkManager",
                          "/org/freedesktop/NetworkManager/Devices/0",
                          "org.freedesktop.NetworkManager.Device.Wireless",
                          bus);
QDBusMessage query = dbus_iface.call("GetAccessPoints");
if (query.type() == QDBusMessage::ReplyMessage) {
    QDBusArgument arg = query.arguments().at(0).value<QDBusArgument>();
    arg.beginArray();
    while (!arg.atEnd()) {
        QString element = qdbus_cast<QString>(arg);
        netList->append(element);
        showAccessPointProperties(element);
    }
    arg.endArray();
} else {
    qDebug() << "got dbus error: " << query.errorName();
    qDebug() << "check the parameters like service, path, interface and method name !!!";
}
Hope this will help.