How to get the request-id from an SNMP response

Using Wireshark, I can see a request-id field when I run this program (How to get the value of OID in Python using PySnmp). Can I get the request-id number from my Python program as well? I run the program with community "public" and version v2c, and the get-response carries a request-id; this is what I need to fetch. Please help me out with how to do it. Here is the image of the SNMP response in Wireshark.

Well, request-id is something very private to the SNMP mechanics. But you can still get it if you want.
With pysnmp, you can use the built-in observer hooks to have the engine call back your function, passing it the internal data belonging to the SNMP engine as it processes the request.
...
# put this in your script's initialization section
context = {}

def request_observer(snmpEngine, execpoint, variables, cbCtx):
    pdu = variables['pdu']
    request_id = pdu['request-id']
    # this is just a way to pass fetched data from the callback out
    cbCtx['request-id'] = request_id

snmpEngine.observer.registerObserver(
    request_observer,
    'rfc3412.receiveMessage:request',
    cbCtx=context
)
...
Now, once you receive an SNMP message the normal way, the request-id for its PDU should be stored in context['request-id']. You can get practically everything about the underlying SNMP data structures that way.
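For context, here is a minimal end-to-end sketch of the idea above (hedged: it assumes pysnmp 4.x hlapi, and the agent address and OID are placeholders):

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

snmpEngine = SnmpEngine()
context = {}

def request_observer(snmpEngine, execpoint, variables, cbCtx):
    # stash the request-id of the PDU being processed
    cbCtx['request-id'] = variables['pdu']['request-id']

snmpEngine.observer.registerObserver(
    request_observer,
    'rfc3412.receiveMessage:request',
    cbCtx=context
)

# an ordinary v2c GET; agent host and OID are placeholders
errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(snmpEngine,
           CommunityData('public', mpModel=1),  # v2c
           UdpTransportTarget(('192.0.2.1', 161)),
           ContextData(),
           ObjectType(ObjectIdentity('1.3.6.1.2.1.1.1.0')))
)

print(context.get('request-id'))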


How to specify the database in an ArangoDb AQL query?

If I have multiple databases defined on a particular ArangoDB server, how do I specify the database I'd like an AQL query to run against?
Running the query through the REST endpoint that includes the db name (substituted for [DBNAME] below), i.e.:
/_db/[DBNAME]/_api/cursor
doesn't seem to work. The error message says 'unknown path /_db/[DBNAME]/_api/cursor'
Is this something I have to specify in the query itself?
Also: The query I'm trying to run is:
FOR col in COLLECTIONS() RETURN col.name
Fwiw, I haven't found a way to set the "current" database through the REST API. Also, I'm accessing the REST API from C++ using fuerte.
Tom Regner deserves primary credit here for prompting the enquiry that produced this answer. I am posting my findings here as an answer to help others who might run into this.
I don't know if this is a fuerte bug, shortcoming or just an api caveat that wasn't clear to me... BUT...
In order for the '/_db/[DBNAME]/' prefix in an endpoint (e.g. the full endpoint '/_db/[DBNAME]/_api/cursor') to be registered and used in the header of a ::arangodb::fuerte::Request, it is NOT sufficient (as of ArangoDB 3.5.3 and the fuerte version available at the time of this answer) to simply call:
std::unique_ptr<fuerte::Request> request;
const char *endpoint = "/_db/[DBNAME]/_api/cursor";
request = fuerte::createRequest(fuerte::RestVerb::Post,endpoint);
// and adding any arguments to the request using a VPackBuilder...
// in this case the query (omitted)
To have the database name included as part of such a request, you must additionally call the following:
request->header.parseArangoPath(endpoint);
Failure to do so seems to result in an error about an 'unknown path'.
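Putting the two calls together, a minimal sketch of the working construction ([DBNAME] stays a placeholder for your actual database name):

const char *endpoint = "/_db/[DBNAME]/_api/cursor";
std::unique_ptr<fuerte::Request> request =
    fuerte::createRequest(fuerte::RestVerb::Post, endpoint);
// additionally parse the path so the database prefix lands in the header
request->header.parseArangoPath(endpoint);
// ...then attach the query payload via a VPackBuilder as usual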
Note 1: Simply setting the database member variable, i.e.
request->header.database = "[DBNAME]";
does not work.
Note 2: Operations without the leading '/_db/[DBNAME]/' prefix seem to work fine against the 'current' database (which, at least for me, seems to be stuck at '_system', since as far as I can tell there doesn't seem to be an endpoint to change this via the HTTP REST API).
The docs aren't very helpful right now, so just in case someone is looking for a more complete example, please consider the following code.
EventLoopService eventLoopService;
// adjust the connection for your environment!
std::shared_ptr<Connection> conn =
    ConnectionBuilder().endpoint("http://localhost:8529")
        .authenticationType(AuthenticationType::Basic)
        .user(?)      // enter a user with access
        .password(?)  // enter the password
        .connect(eventLoopService);
// create the request
std::unique_ptr<Request> request = createRequest(RestVerb::Post, ContentType::VPack);
// enter the database name (ensure the user has access)
request->header.database = ?;
// API endpoint to submit AQL queries
request->header.path = "/_api/cursor";
// Create a payload to be submitted to the API endpoint
VPackBuilder builder;
builder.openObject();
// here is your query
builder.add("query", VPackValue("for col in collections() return col.name"));
builder.close();
// add the payload to the request
request->addVPack(builder.slice());
// send the request (blocking)
std::unique_ptr<Response> response = conn->sendRequest(std::move(request));
// check the response code - it should be 201
unsigned int statusCode = response->statusCode();
// slice has the response data
VPackSlice slice = response->slices().front();
std::cout << slice.get("result").toJson() << std::endl;
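One caveat worth adding (hedged, based on the documented shape of the cursor API response rather than the code above): the reply also carries an error flag, and large result sets are paginated.

// the reply also has an "error" flag; when the result set spans more
// than one batch, "hasMore" is true and "id" names the server-side cursor
if (slice.get("error").getBool()) {
    std::cerr << slice.get("errorMessage").copyString() << std::endl;
} else if (slice.get("hasMore").getBool()) {
    // fetch further batches with PUT /_api/cursor/{id}
}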

Apache Thrift for just processing, not server

I hope I haven't misunderstood the Thrift concept, but from (example) questions like this one, I see this framework is composed of different modular layers that can be enabled or disabled.
I'm mostly interested in the "IDL part" of Thrift, so that I can create a common interface between my C++ code and an external JavaScript application. I would like to call C++ functions from JS, with binary data transmission, and I've already used the compiler for this.
But my C++ (the server) and JS (the client) applications already exchange data using a C++ webserver with WebSocket support; that part is not provided by Thrift.
So I was thinking to setup the following items:
In JS (already done):
TWebSocketTransport to send data to my "Websocket server" (with host ws://xxx.xxx.xxx.xxx)
TBinaryProtocol to encapsulate the data (using this JS implementation)
The compiled Thrift JS library with the correspondent C++ functions to call (done with the JS compiler)
In C++ (partial):
TBinaryProtocol to encode/decode the data
A TProcessor with handler to get the data from the client and process it
For now, the client is already able to send requests to my websocket server; I can see them arriving in binary form, and I just need Thrift to:
Decode the input
Call the appropriate C++ function
Encode the output
My webserver will send the response to the client, so no "Thrift server" is needed here. I see there is the TProcessor->process() function, and I'm trying to use it when I receive the binary data, but it needs in/out TProtocols. No problem here... but in order to create the TBinaryProtocol I also need a TTransport! If no Thrift server is expected... what Transport should I use?
I tried setting the TTransport to NULL in the TBinaryProtocol constructor, but once I use it, it gives a nullptr exception.
Code is something like:
Init:
boost::shared_ptr<MySDKServiceHandler> handler(new MySDKServiceHandler());
thriftCommandProcessor = boost::shared_ptr<TProcessor>(new MySDKServiceProcessor(handler));
thriftInputProtocol = boost::shared_ptr<TBinaryProtocol>(new TBinaryProtocol(TTransport???));
thriftOutputProtocol = boost::shared_ptr<TBinaryProtocol>(new TBinaryProtocol(TTransport???));
When data arrives:
this->thriftInputProtocol->writeBinary(input); // exception here
this->thriftCommandProcessor->process(this->thriftInputProtocol, this->thriftOutputProtocol, NULL);
this->thriftOutputProtocol->readBinary(output);
I've managed to do it using the following components:
// create the Processor using my compiled Thrift class (from IDL)
boost::shared_ptr<MySDKServiceHandler> handler(new MySDKServiceHandler());
thriftCommandProcessor = boost::shared_ptr<TProcessor>(new ThriftSDKServiceProcessor(handler));
// Transport is needed, I use the TMemoryBuffer so everything is kept in local memory
boost::shared_ptr<TTransport> transport(new apache::thrift::transport::TMemoryBuffer());
// my client/server data is based on binary protocol. I pass the transport to it
thriftProtocol = boost::shared_ptr<TProtocol>(new TBinaryProtocol(transport, 0, 0, false, false));
/* .... when the message arrives through my webserver */
void parseMessage(const byte* input, const int input_size, byte*& output, int& output_size)
{
    // get the transports to write and read Thrift data
    boost::shared_ptr<TTransport> iTr = this->thriftProtocol->getInputTransport();
    boost::shared_ptr<TTransport> oTr = this->thriftProtocol->getOutputTransport();
    // "transmit" my data to Thrift
    iTr->write(input, input_size);
    iTr->flush();
    // make Thrift do its work using the Processor
    this->thriftCommandProcessor->process(this->thriftProtocol, NULL);
    // the output transport (oTr) contains the called procedure's result
    output = new byte[MAX_SDK_WS_REPLYSIZE];
    output_size = oTr->read(output, MAX_SDK_WS_REPLYSIZE);
}
The usual pattern is to store the bits somewhere and use that buffer or data stream as the input, and the same for the output. For certain languages there is a TStreamTransport available; for C++, the TBufferBase class looks promising to me.
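To make that concrete, here is a hedged sketch (not from the original answer) using two TMemoryBuffer instances as the in/out storage; thriftCommandProcessor is the processor built from the question's IDL:

#include <thrift/protocol/TBinaryProtocol.h>
#include <thrift/transport/TBufferTransports.h>

using apache::thrift::protocol::TBinaryProtocol;
using apache::thrift::protocol::TProtocol;
using apache::thrift::transport::TMemoryBuffer;

// two memory buffers play the role of "store the bits somewhere"
boost::shared_ptr<TMemoryBuffer> inBuf(new TMemoryBuffer());
boost::shared_ptr<TMemoryBuffer> outBuf(new TMemoryBuffer());
boost::shared_ptr<TProtocol> inProto(new TBinaryProtocol(inBuf));
boost::shared_ptr<TProtocol> outProto(new TBinaryProtocol(outBuf));

// on each message from the webserver: load the raw bytes...
inBuf->resetBuffer(const_cast<uint8_t*>(input), input_size);
// ...let the processor decode the call, invoke the handler, encode the reply...
thriftCommandProcessor->process(inProto, outProto, NULL);
// ...and hand the encoded reply back to the webserver
uint8_t* replyData;
uint32_t replySize;
outBuf->getBuffer(&replyData, &replySize);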

How to return Transaction id, time stamp on execution of invoke function in chaincode?

I need guidance on returning the transaction id and timestamp to the client interface after each invoke function call.
I have found that stub.GetTxID() is used for getting the transaction id, but peer.Response only takes one payload argument, so I am not able to also return the TxID to the client interface.
You can create a response object to capture the relevant information, marshal it into JSON and return it back, something like this (note the struct fields must be exported, i.e. capitalised, or encoding/json will silently skip them):
type ChaincodeResponse struct {
    TxID string               `json:"txID"`
    Time *timestamp.Timestamp `json:"time"`
}
and then
// rest of the invoke code skipped, here is
// the relevant part:
// GetTxTimestamp returns (*timestamp.Timestamp, error)
ts, err := stub.GetTxTimestamp()
if err != nil {
    return shim.Error(err.Error())
}
resp, err := json.Marshal(ChaincodeResponse{
    TxID: stub.GetTxID(),
    Time: ts,
})
if err != nil {
    return shim.Error(err.Error())
}
// return the json representation of the relevant
// information in the response
return shim.Success(resp)
I'm working on something at the moment that requires all of our transactions to be timestamped. I tried some things based on your code above, but I think the API has moved on considerably since 2017.
Currently, I'm adding a created: stub.GetTxTimestamp() field to all of the things we're putting on the ledger and then reading it back in any queries. Though I'm wondering if the timestamps are already generated and stored, making this unnecessary: do you know if a timestamp is still automatically stored on each item put on the ledger?
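For what it's worth, a minimal sketch of that stamping pattern (the Asset type, its fields and the ledger key are hypothetical, for illustration only):

// hypothetical record type; Created carries the transaction timestamp
type Asset struct {
    Name    string               `json:"name"`
    Created *timestamp.Timestamp `json:"created"`
}

ts, err := stub.GetTxTimestamp()
if err != nil {
    return shim.Error(err.Error())
}
data, err := json.Marshal(Asset{Name: "example", Created: ts})
if err != nil {
    return shim.Error(err.Error())
}
if err := stub.PutState("asset-example", data); err != nil {
    return shim.Error(err.Error())
}
return shim.Success(data)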

IBMIOTF/BlueMix Publish Command Syntax

I am attempting to assemble a small proof-of-concept system on IBM's Bluemix/Internet of Things. Currently this comprises a Raspberry Pi feeding events up to the cloud-based app, which stores those events away and periodically attempts to send down a command, using the following code block:
def sendCmd(command, payload, device="raspberrypi"):
    deviceId = #Fixed value
    global cmdCount
    client.publishCommand("raspberrypi", deviceId, str(command), "json", payload)
    print "Sending '%s' cmd, payload '%s' to device %s" % (command, payload, deviceId)
    cmdCount = cmdCount + 1
As far as I can tell, this appears to be the correct syntax, as described by the documentation:
client.connect()
commandData={'rebootDelay' : 50}
client.publishCommand(myDeviceType, myDeviceId, "reboot", "json", myData)
No exceptions are thrown in this block of code; however, the device is not receiving any commands, and the Cloud Foundry log shows no errors. Is there a subtle point about the syntax I am missing?
This issue boiled down to having instantiated the wrong class on the Raspberry Pi. I had an instance of ibmiotf.application, which registered a function to the variable self.client.commandCallback. However, nothing appeared to be triggering the callback.
Once I instantiated the device with the ibmiotf.device import rather than ibmiotf.application, the command callback started to be called. This required a couple of minor changes to support the slightly different function calls, but they were fairly self-explanatory when running the code.
The Device Class controls Events being published from the unit and determines how to handle commands from upstream, whereas the Application Class handles the receipt of Events and the emission of Commands.
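For reference, a minimal device-side sketch (hedged: the option values are placeholders, following the ibmiotf device client API as I understand it):

import ibmiotf.device

options = {
    "org": "myorg",              # placeholder: your organization id
    "type": "raspberrypi",
    "id": "mydeviceid",          # placeholder: your device id
    "auth-method": "token",
    "auth-token": "mytoken"      # placeholder: your device token
}

client = ibmiotf.device.Client(options)

def myCommandCallback(cmd):
    # cmd.command is the command name, cmd.data the decoded payload
    print("Command received: %s, payload: %s" % (cmd.command, cmd.data))

client.connect()
client.commandCallback = myCommandCallback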

Retrieve service information from WFS GetCapabilities request with GeoExt

This is probably a very simple question but I just can't seem to figure it out.
I am writing a JavaScript app to retrieve layer information from a WFS server via a GetCapabilities request, using GeoExt. GetCapabilities returns information about the WFS server itself (the server's name, who runs it, etc.) in addition to information on the data layers it has on offer.
My basic code looks like this:
var store = new GeoExt.data.WFSCapabilitiesStore({ url: serverURL });
store.on('load', successFunction);
store.on('exception', failureFunction);
store.load();
This works as expected, and when the loading completes, successFunction is called.
successFunction looks like this:
successFunction = function(dataProxy, records, options) {
    doSomeStuff();
};
dataProxy is an Ext.data.DataProxy object, records is a list of records (one for each layer on the WFS server), and options is empty.
And here is where I'm stuck: in this function I can access all the layer information about the data offered by the server. But I also want to extract the server information contained in the XML fetched during store.load() (see below), and I can't figure out how to get it out of the dataProxy object, where I'm sure it must be squirreled away.
Any ideas?
The fields I want are contained in this snippet:
<ows:ServiceIdentification>
  <ows:Title>G_WIS_testIvago</ows:Title>
  <ows:Abstract/>
  <ows:Keywords>
    <ows:Keyword/>
  </ows:Keywords>
  <ows:ServiceType>WFS</ows:ServiceType>
  <ows:ServiceTypeVersion>1.1.0</ows:ServiceTypeVersion>
  <ows:Fees/>
  <ows:AccessConstraints/>
</ows:ServiceIdentification>
Apparently, GeoExt currently discards the server information, undermining the entire premise of my question.
Here is a code snippet that can be used to tell GeoExt to grab it. I did not write this code, but I have tested it and found it works well for me:
https://github.com/opengeo/gxp/blob/master/src/script/plugins/WMSSource.js#L37
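Alternatively (a hedged sketch, separate from the linked snippet): the same capabilities document can be parsed directly with OpenLayers, whose WFS capabilities format exposes the ows:ServiceIdentification block:

OpenLayers.Request.GET({
    url: serverURL,
    params: {SERVICE: "WFS", REQUEST: "GetCapabilities"},
    success: function(response) {
        var caps = new OpenLayers.Format.WFSCapabilities().read(
            response.responseXML || response.responseText);
        // caps.serviceIdentification holds title, abstract, keywords, etc.
        console.log(caps.serviceIdentification.title);
    }
});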