undefined symbol: kms_element_get_type - gstreamer

I'm creating custom Kurento modules with GStreamer plugins. I have created a new module named "RtmpEndpoint" which extends Endpoint.
I was able to build and install the module and generate the client JS API.
However the module could not be loaded, the error log shows:
(gst-plugin-scanner:3379): GStreamer-WARNING *: Failed to load plugin '/usr/lib/x86_64-linux-gnu/gstreamer-1.5/librtmpendpoint.so': /usr/lib/x86_64-linux-gnu/gstreamer-1.5/librtmpendpoint.so: undefined symbol: kms_element_get_type
I did define the kms_rtmp_endpoint_get_type() function in the source and header files, so I'm confused about why this error is happening. Please help, thanks.
The header file:
typedef struct _KmsRtmpEndpoint KmsRtmpEndpoint;
typedef struct _KmsRtmpEndpointClass KmsRtmpEndpointClass;
struct _KmsRtmpEndpoint
{
KmsElement element;
GstElement *h264depay;
GstElement *pcmudepay;
GstElement *flvmuxer;
GstElement *rtmpsink;
GstPad *videoPad, *audioPad;
gboolean silent;
};
struct _KmsRtmpEndpointClass
{
KmsElementClass parent_class;
};
GType kms_rtmp_endpoint_get_type (void);
and part of the source file:
static GstStaticPadTemplate video_sink = GST_STATIC_PAD_TEMPLATE ("video",
GST_PAD_SINK,
GST_PAD_ALWAYS,
GST_STATIC_CAPS ("application/x-rtp, "
"media = (string) \"video\", "
"clock-rate = (int) 90000, " "encoding-name = (string) \"H264\"")
);
static GstStaticPadTemplate audio_sink = GST_STATIC_PAD_TEMPLATE ("audio",
GST_PAD_SINK,
GST_PAD_ALWAYS,
GST_STATIC_CAPS ("application/x-rtp, "
"media = (string) \"audio\", "
"payload = (int) " GST_RTP_PAYLOAD_PCMU_STRING ", "
"clock-rate = (int) 8000; "
"application/x-rtp, "
"media = (string) \"audio\", "
"encoding-name = (string) \"PCMU\", clock-rate = (int) [1, MAX ]")
);
#define kms_rtmp_endpoint_parent_class parent_class
G_DEFINE_TYPE (KmsRtmpEndpoint, kms_rtmp_endpoint, KMS_TYPE_ELEMENT);

Maybe you shouldn't define kms_rtmp_endpoint_get_type() in the source file; G_DEFINE_TYPE will auto-generate it.


Protobuf: Serialize/DeSerialize C++ to Js

I'm using protobuf to send/receive binary data between C++ and JS, and I'm using QWebChannel to communicate with the HTML client.
Question: how do I deserialize binary data in C++ that was serialized and sent from JS?
Here is what I tried:
//Serialization: Cpp to JS - WORKING
tutorial::PhoneNumber* ph = new tutorial::PhoneNumber();
ph->set_number("555-515-135");
ph->set_type(500);
QByteArray bytes = QByteArray::fromStdString(ph->SerializeAsString());
QString args = "\"" + bytes.toBase64(QByteArray::Base64Encoding) + "\"";
QString JsFunctionCall = QString("DeserializePhoneNumber(%1);").arg(args);
m_pWebView->page()->runJavaScript(JsFunctionCall);
//Deserialization In JS - Js Code - WORKING
var obj = phone_msg.PhoneNumber.deserializeBinary(data);
console.log("PhoneNumber: " + obj.getNumber());
console.log("Type: " + obj.getType());
//Serialization in Js - WORKING
var phNum = new phone_msg.PhoneNumber;
phNum.setNumber("555-515-135");
phNum.setId(500);
var base64Str = btoa(phNum.serializeBinary());
console.log("base64Str: " + base64Str);
//Call Cpp function from Js
MainChannel.SendMsgToCpp(base64Str);
//Deserialization in Cpp - NOT WORKING
bool WebRelay::ReceiveMsgFromJs(QVariant data)
{
QString str = data.toString();
QByteArray bytedata = str.toLatin1();
QByteArray base64data = QByteArray::fromBase64(bytedata);
std::string stdstr = base64data.toStdString();
tutorial::PhoneNumber cppPhNum;
//THIS IS NOT WORKING. Text and id are invalid
cppPhNum.ParseFromArray(base64data.constData(), base64data.size());
qDebug() << "Text:" << cppPhNum.number();
qDebug() << "id:" << cppPhNum.id();
}
Found the problem.
I was getting comma-separated bytes from Js like:
10,7,74,115,73,116,101,109,49,16,45
I split the string on ',' and created a QByteArray:
QStringList strList = str.split(',');
QByteArray bytedata;
foreach(const QString & str, strList)
{
bytedata+= (str.toUInt());
}
std::string stdstr = bytedata.toStdString();
itemData.ParseFromString(stdstr);
It works.
Also in JS, I removed the conversion of the binary data to base64:
var base64Str = phNum.serializeBinary();

Uploading a SQLite database file to an S3 bucket

I'm currently working on a project in Android Studio that implements AWS S3. I use S3 to upload/store my local SQLite database file so that I can download and use it on demand. I realize that this is by no means the optimal way to use databases with S3.
I'm able to upload and download files to and from my S3 bucket. But for some reason, the database file is corrupted according to logcat. The SQLite database file is stored in the device's database folder when it is downloaded.
Here's the implemented code:
public void uploadFile(String fileName) {
File exampleFile = new File(getApplicationContext().getDatabasePath("Login.db").getPath());
try {
BufferedWriter writer = new BufferedWriter(new FileWriter(exampleFile));
writer.append("Example file contents");
writer.close();
} catch (Exception exception) {
Log.e("MyAmplifyApp", "Upload failed", exception);
}
Amplify.Storage.uploadFile(
fileName,
exampleFile,
result -> Log.i("MyAmplifyApp", "Successfully uploaded: " + result.getKey()),
storageFailure -> Log.e("MyAmplifyApp", "Upload failed", storageFailure)
);
}
public void downloadFile() {
Amplify.Storage.downloadFile(
"Login.db",
new File(getApplicationContext().getDatabasePath("Login.db") + ""),
result -> Log.i("MyAmplifyApp", "Successfully downloaded: " + result.getFile().getName()),
error -> Log.e("MyAmplifyApp", "Download Failure", error)
);
}
I'm looking for some insight on this matter. I'm just not sure what is causing the file corruption. I was thinking it could be the file path but I believe that is being navigated correctly.
I suspect that your issue is that the file is in fact corrupted, probably because you use writer.append("Example file contents"); and thus upload a file that contains just that text instead of the database.
You could perhaps add code to do some checks before uploading and after downloading a file.
The following is a snippet from code that does just that (in this case checking that an asset is a valid SQLite file; it's perhaps a little over the top):
/**
* DBHEADER
* The header string (the first 16 bytes) of the SQLite file
* This should never be changed.
*/
private static final String DBHEADER = "SQLite format 3\u0000"; // SQLite File header first 16 bytes
/**
* Constants that represent the various stages of the copy
*/
private static final int
STAGEOPENINGASSETFILE = 0,
STAGEOPENINGDATABASEFILE = 1,
STAGECOPYING = 3,
STAGEFLUSH = 4,
STAGECLOSEDATABSE = 5,
STAGECLOSEASSET = 6,
STAGEALLDONE = 100
;
/**
* Constants for message codes
*/
private static final int
MSGCODE_EXTENDASSETFILE_ADDEDSUBDIRECTORY = 20,
MSGCODE_EXTEANDASSETFILE_EXTENDEDFILENAME = 21,
MSGCODE_CHECKASSETFILEVALIDTY_OPENEDASSET = 30,
MSGCODE_CHECKASSETFILEVALIDITY_OPENFILED = 31,
MSGCODE_CHECKASSETFILEVALIDITY_NOTSQLITEFILE = 32,
MSGCODE_CHECKASSETFILEVALIDITY_VALIDSQLITEFILE = 33,
MSGCODE_COPY_FAILED = 40,
MSGCODE_COPY_OK = 41
;
/**
* The default buffer size, can be changed
*/
private static final int DEFAULTBUFFERSIZE = 1024 * 32; // 32k buffer
....
/**
* Check that the asset file to be copied exists and optionally is a valid
* SQLite file
* @param cntxt The Context
* @param extendedAssetFilename The asset file name including subdirectories
* @param showstacktrace true if the stack-trace should be shown when an exception is trapped
* @param checkheader true if the SQLite file header should be checked
* @return true if the checks are ok
*/
public static boolean checkAssetFileValidity(Context cntxt, String extendedAssetFilename, boolean showstacktrace, boolean checkheader) {
boolean rv = true;
InputStream is;
try {
is = cntxt.getAssets().open(extendedAssetFilename);
messages.add(
new Msg(
MSGCODE_CHECKASSETFILEVALIDTY_OPENEDASSET,
Msg.MESSAGETYPE_INFORMATION,
"Successfully Opened asset file " + extendedAssetFilename
)
);
if (checkheader) {
byte[] fileheader = new byte[DBHEADER.length()];
is.read(fileheader,0,fileheader.length);
if (!(new String(fileheader)).equals(DBHEADER)) {
messages.add(
new Msg(
MSGCODE_CHECKASSETFILEVALIDITY_NOTSQLITEFILE,
Msg.MESSAGETYPE_ERROR,
"Asset file " +
extendedAssetFilename +
" is NOT an SQlite Database, instead found " + (new String(fileheader))
)
);
is.close();
return false;
} else {
messages.add(
new Msg(
MSGCODE_CHECKASSETFILEVALIDITY_VALIDSQLITEFILE,
Msg.MESSAGETYPE_INFORMATION,
"Successfully validated asset file " + extendedAssetFilename +
" . It has a valid SQLite Header."
)
);
}
}
is.close();
} catch (IOException e) {
messages.add(
new Msg(
MSGCODE_CHECKASSETFILEVALIDITY_OPENFILED,
Msg.MESSAGETYPE_ERROR,
"Unable to open asset " + extendedAssetFilename + "."
)
);
if (showstacktrace) {
e.printStackTrace();
}
return false;
}
return rv;
}
The Msg class and associated methods haven't been included; they are just for logging messages. Shout if you want them added.
The main thing is checking the header.

Why can't Oracle resolve the Service Name?

I want to use OCILIB
https://vrogier.github.io/ocilib/doc/html/group___ocilib_cpp_api_demo_list_application.html
using this first example code, running on Visual Studio 2019
// dbconnect.cpp : This file contains the 'main' function. Program execution begins and ends there.
//
#include <iostream>
#include <stdio.h>
#include "ocilib.hpp"
using namespace ocilib;
//Declaration:
int test_odbc()
{
try
{
Environment::Initialize();
Connection con("ORCL", "test", "test");
Statement st(con);
st.Execute("select intcol, strcol from table");
Resultset rs = st.GetResultset();
while (rs.Next())
{
std::cout << rs.Get<int>(1) << " - " << rs.Get<ostring>(2) << std::endl;
}
}
catch (std::exception& ex)
{
std::cout << ex.what() << std::endl;
}
Environment::Cleanup();
return EXIT_SUCCESS;
}
int main()
{
test_odbc();
}
for connection to a locally installed Oracle 12 database.
At Connection con... I get:
ORA-12154: TNS:could not resolve the connect identifier specified
Same when I use orcl.ad001.siemens.net instead of ORCL.
I can connect via SQL Developer without problems.
The tnsnames.ora is as follows:
# tnsnames.ora Network Configuration File: C:\app\atw11a92\virtual\product\12.2.0\dbhome_1\network\admin\tnsnames.ora
# Generated by Oracle configuration tools.
LISTENER_ORCL =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
ORACLR_CONNECTION_DATA =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1521))
)
(CONNECT_DATA =
(SID = CLRExtProc)
(PRESENTATION = RO)
)
)
ORCL =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = orcl.ad001.siemens.net)
)
)
I can tnsping ORCL:
C:\Users\atw11a92>tnsping ORCL
TNS Ping Utility for 64-bit Windows: Version 12.2.0.1.0 - Production on 12-DEC-2019 11:12:09
Copyright (c) 1997, 2016, Oracle. All rights reserved.
Used parameter files:
C:\app\atw11a92\virtual\product\12.2.0\dbhome_1\network\admin\sqlnet.ora
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = orcl.ad001.siemens.net)))
OK (0 msec)
C:\Users\atw11a92>
Any idea? I'm completely discouraged...

gRPC-only Tensorflow Serving client in C++

There seems to be a bit of information out there for creating a gRPC-only client in Python (and even a few other languages), and I was able to get a working gRPC-only client in Python for our implementation.
What I can't seem to find is a case where someone has successfully written the client in C++.
The constraints of the task are as follows:
The build system cannot be bazel, because the final application already has its own build system.
The client cannot include Tensorflow (which requires bazel to build against in C++).
The application should use gRPC and not HTTP calls for speed.
The application ideally won't call Python or otherwise execute shell commands.
Given the above constraints, and assuming that I extracted and generated the gRPC stubs, is this even possible? If so, can an example be provided?
Turns out, this isn't anything new if you have already done it in Python. Assuming the model has been named "predict" and the input to the model is called "inputs," the following is the Python code:
import logging
import grpc
from grpc import RpcError
from types_pb2 import DT_FLOAT
from tensor_pb2 import TensorProto
from tensor_shape_pb2 import TensorShapeProto
from predict_pb2 import PredictRequest
from prediction_service_pb2_grpc import PredictionServiceStub
class ModelClient:
"""Client Facade to work with a Tensorflow Serving gRPC API"""
host = None
port = None
chan = None
stub = None
logger = logging.getLogger(__name__)
def __init__(self, name, dims, dtype=DT_FLOAT, version=1):
self.model = name
self.dims = [TensorShapeProto.Dim(size=dim) for dim in dims]
self.dtype = dtype
self.version = version
@property
def hostport(self):
"""A host:port string representation"""
return f"{self.host}:{self.port}"
def connect(self, host='localhost', port=8500):
"""Connect to the gRPC server and initialize prediction stub"""
self.host = host
self.port = int(port)
self.logger.info(f"Connecting to {self.hostport}...")
self.chan = grpc.insecure_channel(self.hostport)
self.logger.info("Initializing prediction gRPC stub.")
self.stub = PredictionServiceStub(self.chan)
def tensor_proto_from_measurement(self, measurement):
"""Pass in a measurement and return a tensor_proto protobuf object"""
self.logger.info("Assembling measurement tensor.")
return TensorProto(
dtype=self.dtype,
tensor_shape=TensorShapeProto(dim=self.dims),
string_val=[bytes(measurement)]
)
def predict(self, measurement, timeout=10):
"""Execute prediction against TF Serving service"""
if self.host is None or self.port is None \
or self.chan is None or self.stub is None:
self.connect()
self.logger.info("Creating request.")
request = PredictRequest()
request.model_spec.name = self.model
if self.version > 0:
request.model_spec.version.value = self.version
request.inputs['inputs'].CopyFrom(
self.tensor_proto_from_measurement(measurement))
self.logger.info("Attempting to predict against TF Serving API.")
try:
return self.stub.Predict(request, timeout=timeout)
except RpcError as err:
self.logger.error(err)
self.logger.error('Predict failed.')
return None
The following is a working (rough) C++ translation:
#include <iostream>
#include <memory>
#include <string>
#include <grpcpp/grpcpp.h>
#include "grpcpp/create_channel.h"
#include "grpcpp/security/credentials.h"
#include "google/protobuf/map.h"
#include "types.grpc.pb.h"
#include "tensor.grpc.pb.h"
#include "tensor_shape.grpc.pb.h"
#include "predict.grpc.pb.h"
#include "prediction_service.grpc.pb.h"
using grpc::Channel;
using grpc::ClientContext;
using grpc::Status;
using tensorflow::TensorProto;
using tensorflow::TensorShapeProto;
using tensorflow::serving::PredictRequest;
using tensorflow::serving::PredictResponse;
using tensorflow::serving::PredictionService;
typedef google::protobuf::Map<std::string, tensorflow::TensorProto> OutMap;
class ServingClient {
public:
ServingClient(std::shared_ptr<Channel> channel)
: stub_(PredictionService::NewStub(channel)) {}
// Assembles the client's payload, sends it and presents the response back
// from the server.
std::string callPredict(const std::string& model_name,
const float& measurement) {
// Data we are sending to the server.
PredictRequest request;
request.mutable_model_spec()->set_name(model_name);
// Container for the data we expect from the server.
PredictResponse response;
// Context for the client. It could be used to convey extra information to
// the server and/or tweak certain RPC behaviors.
ClientContext context;
google::protobuf::Map<std::string, tensorflow::TensorProto>& inputs =
*request.mutable_inputs();
tensorflow::TensorProto proto;
proto.set_dtype(tensorflow::DataType::DT_FLOAT);
proto.add_float_val(measurement);
proto.mutable_tensor_shape()->add_dim()->set_size(5);
proto.mutable_tensor_shape()->add_dim()->set_size(8);
proto.mutable_tensor_shape()->add_dim()->set_size(105);
inputs["inputs"] = proto;
// The actual RPC.
Status status = stub_->Predict(&context, request, &response);
// Act upon its status.
if (status.ok()) {
std::cout << "call predict ok" << std::endl;
std::cout << "outputs size is " << response.outputs_size() << std::endl;
OutMap& map_outputs = *response.mutable_outputs();
OutMap::iterator iter;
int output_index = 0;
for (iter = map_outputs.begin(); iter != map_outputs.end(); ++iter) {
tensorflow::TensorProto& result_tensor_proto = iter->second;
std::string section = iter->first;
std::cout << std::endl << section << ":" << std::endl;
if ("classes" == section) {
int titer;
for (titer = 0; titer != result_tensor_proto.int64_val_size(); ++titer) {
std::cout << result_tensor_proto.int64_val(titer) << ", ";
}
} else if ("scores" == section) {
int titer;
for (titer = 0; titer != result_tensor_proto.float_val_size(); ++titer) {
std::cout << result_tensor_proto.float_val(titer) << ", ";
}
}
std::cout << std::endl;
++output_index;
}
return "Done.";
} else {
std::cout << "gRPC call return code: " << status.error_code() << ": "
<< status.error_message() << std::endl;
return "RPC failed";
}
}
private:
std::unique_ptr<PredictionService::Stub> stub_;
};
Note that the dimensions here have been specified within the code instead of passed in.
Given the above class, execution can then be as follows:
int main(int argc, char** argv) {
float measurement[5*8*105] = { ... data ... };
ServingClient sclient(grpc::CreateChannel(
"localhost:8500", grpc::InsecureChannelCredentials()));
std::string model("predict");
std::string reply = sclient.callPredict(model, *measurement);
std::cout << "Predict received: " << reply << std::endl;
return 0;
}
The Makefile used was borrowed from the gRPC C++ examples, with the PROTOS_PATH variable set relative to the Makefile and the following build target (assuming the C++ application is named predict.cc):
predict: types.pb.o types.grpc.pb.o tensor_shape.pb.o tensor_shape.grpc.pb.o resource_handle.pb.o resource_handle.grpc.pb.o model.pb.o model.grpc.pb.o tensor.pb.o tensor.grpc.pb.o predict.pb.o predict.grpc.pb.o prediction_service.pb.o prediction_service.grpc.pb.o predict.o
$(CXX) $^ $(LDFLAGS) -o $@

net-snmp is not changing auth and priv protocol correctly

I'm using the net-snmp library (version 5.7.1) in a C++ program under Linux. I have a web frontend where a user can choose an SNMP version and configure it. SNMPv1 and SNMPv2 are working just fine, but I've got some issues with SNMPv3.
Here is a picture of the frontend: Screenshot of Webinterface (Sorry for not uploading it directly here, but I need at least 10 reputation to do this)
When I start the C++ backend and enter all the needed SNMPv3 credentials correctly, everything works fine and the device is reachable. If I change, for example, the auth protocol from MD5 to SHA but leave the rest of the credentials the same, I would expect the device to no longer be reachable. In reality it stays reachable. After restarting the backend, the device is (as expected) no longer reachable with the same settings.
After discovering this issue, I ran some tests. For the tests I used different users and different settings. They were run against three devices from different vendors, and I got the same result every time, so it cannot be a device-related issue. The results can be seen here: Test results
My conclusion after testing was that net-snmp seems to cache the selected auth and priv protocol per user name. This can be seen clearly in Test 2: the first time I use a user name with a specific protocol, I get the expected result. After changing the protocol, a different result is expected, but I still get the same result as before.
At the end some information how the SNMP-calls are made:
There is a class called SNMPWrapper, which handles the whole SNMP communication.
Inside the constructor I call init_snmp() to init net-snmp
From the outside I can only call get(), set() and walk(). Every time one of these methods is called, a new SNMP session is created (first I create a new session with snmp_sess_init(), then I set up the things needed, and finally I open the session with snmp_sess_open()).
After I made the request and received my answer I close the session with snmp_sess_close()
Question: Do I have to do any other clean up before changing a protocol in order to get it work correctly?
Edit: I added some code, that shows the described behaviour
int main(int argc, char** argv) {
struct snmp_session session, session1, *ss, *ss1;
struct snmp_pdu *pdu, *pdu1;
struct snmp_pdu *response, *response1;
oid anOID[MAX_OID_LEN];
size_t anOID_len = MAX_OID_LEN;
struct variable_list *vars;
int status, status1;
init_snmp("snmpapp");
const char* user = "md5";
string authpw = "123123123";
string privpw = "";
string ipString = "192.168.15.32";
char ip[16];
memset(&ip, 0, sizeof (ip));
ipString.copy(ip, sizeof (ip) - 1, 0);
/*
* First request: AuthProto is MD5, no PrivProto is used. The snmp-get
* request is successful
*/
snmp_sess_init(&session); /* set up defaults */
session.peername = ip;
session.version = SNMP_VERSION_3;
/* set the SNMPv3 user name */
session.securityName = strdup(user);
session.securityNameLen = strlen(session.securityName);
// set the authentication method to MD5
session.securityLevel = SNMP_SEC_LEVEL_AUTHNOPRIV;
session.securityAuthProto = usmHMACMD5AuthProtocol;
session.securityAuthProtoLen = USM_AUTH_PROTO_MD5_LEN;
session.securityAuthKeyLen = USM_AUTH_KU_LEN;
if (generate_Ku(session.securityAuthProto,
session.securityAuthProtoLen,
(u_char *) authpw.c_str(), strlen(authpw.c_str()),
session.securityAuthKey,
&session.securityAuthKeyLen) != SNMPERR_SUCCESS) {
//if code reaches here, the creation of the security key was not successful
}
cout << "SecurityAuthProto - session: " << session.securityAuthProto[9] << " / SecurityAuthKey - session: " << session.securityAuthKey << endl;
ss = snmp_open(&session); /* establish the session */
if (!ss) {
cout << "Couldn't open session1 correctly";
exit(2);
}
cout << "SecurityAuthProto - ss: " << ss->securityAuthProto[9] << " / SecurityAuthKey - ss: " << ss->securityAuthKey << endl;
//send message
pdu = snmp_pdu_create(SNMP_MSG_GET);
read_objid(".1.3.6.1.2.1.1.1.0", anOID, &anOID_len);
snmp_add_null_var(pdu, anOID, anOID_len);
status = snmp_synch_response(ss, pdu, &response);
/*
* Process the response.
*/
if (status == STAT_SUCCESS && response->errstat == SNMP_ERR_NOERROR) {
cout << "SNMP-read success" << endl;
} else {
cout << "SNMP-read fail" << endl;
}
if (response)
snmp_free_pdu(response);
if (!snmp_close(ss))
cout << "Snmp closing failed" << endl;
/*
* Second request: Only the authProto is changed from MD5 to SHA1. I expect,
* that the snmp-get fails, but it still succeeds.
*/
snmp_sess_init(&session1);
session1.peername = ip;
session1.version = SNMP_VERSION_3;
/* set the SNMPv3 user name */
session1.securityName = strdup(user);
session1.securityNameLen = strlen(session1.securityName);
// set the authentication method to SHA1
session1.securityLevel = SNMP_SEC_LEVEL_AUTHNOPRIV;
session1.securityAuthProto = usmHMACSHA1AuthProtocol;
session1.securityAuthProtoLen = USM_AUTH_PROTO_SHA_LEN;
session1.securityAuthKeyLen = USM_AUTH_KU_LEN;
if (generate_Ku(session1.securityAuthProto,
session1.securityAuthProtoLen,
(u_char *) authpw.c_str(), strlen(authpw.c_str()),
session1.securityAuthKey,
&session1.securityAuthKeyLen) != SNMPERR_SUCCESS) {
//if code reaches here, the creation of the security key was not successful
}
cout << "SecurityAuthProto - session1: " << session1.securityAuthProto[9] << " / SecurityAuthKey - session1: " << session1.securityAuthKey << endl;
ss1 = snmp_open(&session1); /* establish the session */
if (!ss1) {
cout << "Couldn't open session1 correctly";
exit(2);
}
cout << "SecurityAuthProto - ss1: " << ss1->securityAuthProto[9] << " / SecurityAuthKey - ss1: " << ss1->securityAuthKey << endl;
//send message
pdu1 = snmp_pdu_create(SNMP_MSG_GET);
read_objid(".1.3.6.1.2.1.1.1.0", anOID, &anOID_len);
snmp_add_null_var(pdu1, anOID, anOID_len);
status1 = snmp_synch_response(ss1, pdu1, &response1);
/*
* Process the response.
*/
if (status1 == STAT_SUCCESS && response1->errstat == SNMP_ERR_NOERROR) {
cout << "SNMP-read success" << endl;
} else {
cout << "SNMP-read fail" << endl;
}
if (response1)
snmp_free_pdu(response1);
snmp_close(ss1);
return 0;
}
I found the solution myself:
net-snmp caches the users for every engine ID (device). If there is an existing user for an engine ID and you try to open a new session with that user, net-snmp will use the cached one. So the solution was to clear the list of cached users.
With this code snippet I could resolve my problem:
usmUser* actUser = usm_get_userList();
while (actUser != NULL) {
usmUser* dummy = actUser;
usm_remove_user(actUser);
actUser = dummy->next;
}
I hope I can help somebody else with this.
You can also update the password for an existing user:
for (usmUser* actUser = usm_get_userList(); actUser != NULL; actUser = actUser->next) {
if (strcmp(actUser->secName, user) == 0) {
//this method calls generate_Ku with previous security data but with specified password
usm_set_user_password(actUser, "userSetAuthPass", authpw.c_str());
break;
}
}