PyUSB has a little utility function called usb.util.find_descriptor which, as the name implies, allows me to search for a descriptor matching specific criteria, for example:
ep = usb.util.find_descriptor(
    intf,
    # match the first OUT endpoint
    custom_match = \
    lambda e: \
        usb.util.endpoint_direction(e.bEndpointAddress) == \
        usb.util.ENDPOINT_OUT)
I need the same functionality in a C++ application built with libusb. However, I don't see anything similar in the libusb API specification. What would be the easiest way to implement the same functionality on top of libusb?
I'm on Linux, if that makes a difference. I'd rather not add any additional dependencies, though, unless strictly required.
Update:
This is what I have so far:
libusb_config_descriptor* config;
int ret = libusb_get_config_descriptor(m_dev, 0 /* config_index */, &config);
if (ret != LIBUSB_SUCCESS) {
    raise_exception(std::runtime_error, "libusb_get_config_descriptor() failed: " << usb_strerror(ret));
}

// Do not assume endpoint_in is 1
const libusb_interface *interface = config->interface;
const libusb_interface_descriptor configuration = interface->altsetting[0];
for (int i = 0; i < configuration.bNumEndpoints; ++i) {
    const libusb_endpoint_descriptor endpoint = configuration.endpoint[i];
    if ((endpoint.bEndpointAddress & LIBUSB_ENDPOINT_IN) == LIBUSB_ENDPOINT_IN) {
        endpoint_in = endpoint.bEndpointAddress >> 8; // 3 first bytes
        log_debug("endpoint_in: " << endpoint_in);
    }
}
Iteration works this way, although it looks quite ugly and is mostly non-reusable. Also, extracting the endpoint number with:
endpoint_in = endpoint.bEndpointAddress >> 8; // 3 first bytes
does not seem to work.
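For reference, the kind of reusable helper I'm after would look roughly like this (an untested sketch relying only on the documented libusb descriptor fields; note that it tests the direction with LIBUSB_ENDPOINT_DIR_MASK and would extract the endpoint number with LIBUSB_ENDPOINT_ADDRESS_MASK rather than a shift):

#include <libusb-1.0/libusb.h>
#include <functional>

// Return the first endpoint of the interface descriptor matching the
// caller-supplied predicate, or nullptr if none matches.
const libusb_endpoint_descriptor*
find_endpoint(const libusb_interface_descriptor& intf,
              const std::function<bool(const libusb_endpoint_descriptor&)>& match)
{
    for (int i = 0; i < intf.bNumEndpoints; ++i) {
        if (match(intf.endpoint[i]))
            return &intf.endpoint[i];
    }
    return nullptr;
}

// Usage mirroring the PyUSB example (first OUT endpoint):
// const libusb_endpoint_descriptor* ep = find_endpoint(
//     interface->altsetting[0],
//     [](const libusb_endpoint_descriptor& e) {
//         return (e.bEndpointAddress & LIBUSB_ENDPOINT_DIR_MASK) == LIBUSB_ENDPOINT_OUT;
//     });
// int number = ep ? (ep->bEndpointAddress & LIBUSB_ENDPOINT_ADDRESS_MASK) : -1;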
I have created a simple Protobuf-based config file and have been using it without any issues until now. The issue is that I added two new items to my settings (Config.proto), and now whatever value I set for the last variable is reflected in the previous one.
The following snapshot demonstrates this better. As you can see below, the values of fv_shape_predictor_path and fv_eyenet_path depend solely on the order in which they are set; the one that is set last changes the other's value.
I made sure the cpp files related to Config.proto are built afresh. I also tested this under Linux, and there it works just fine. It seems it's only happening on Windows! It also doesn't affect any other items in the same settings; it's just these two new ones.
I have no idea what is causing this or how to go about it. For reference, this is what the proto file looks like:
syntax = "proto3";
package FVConfig;
message Config {
FV Configuration = 4 ;
message FAN_MODELS_WEIGHTS{
string fan_2DFAN_4 = 1;
string fan_3DFAN_4 = 2;
string fan_depth = 3;
}
message S3FD_MODELS_WEIGHTS{
string s3fd = 1;
}
message DLIB_MODELS_WEIGHTS{
string dlib_default = 1;
}
message MTCNN_MODELS_WEIGHTS {
string mt_onet = 1;
string mt_pnet = 2;
string mt_rnet = 3;
}
message FV_MODEL_WEIGHTS {
string r18 = 1;
string r50 = 2;
string r101 = 3;
repeated ModelContainer new_models_weights = 4;
message ModelContainer{
string model_name = 1;
string model_weight_path = 2;
string description = 3;
}
}
message FV {
MTCNNDetectorSettings mtcnn = 1 ;
FaceVerificationSettings fv = 2 ;
}
message MTCNNDetectorSettings {
Settings settings = 1;
MTCNN_MODELS_WEIGHTS model_weights = 4;
message Settings {
string mt_device = 2;
int32 mt_webcam_source = 100;
int32 mt_upper_threshold = 600;
int32 mt_hop = 700;
}
}
message FaceVerificationSettings {
Settings settings = 1;
FV_MODEL_WEIGHTS model_weights = 2;
message Settings {
string fv_model_name = 1;
string fv_model_checkpoint_path = 2;
bool fv_rebuild_cache = 3;
bool fv_short_circut = 6;
bool fv_accumulate_score = 7;
string fv_config_file_path = 10;
string fv_img_bank_folder_root = 11;
string fv_cache_folder = 12;
string fv_postfix = 13;
string fv_device = 14;
int32 fv_idle_interval = 15;
bool fv_show_dbg_info = 16;
// these are the new ones
string fv_shape_predictor_path = 17;
string fv_eyenet_path = 18;
}
}
} //end of Config message
What am I missing here? How should I be going about this? Restarting Windows and Visual Studio didn't do any good either. I'm using protobuf 3.11.4 both on Linux and Windows.
This issue seems to only exist in the Windows version of Protobuf 3.11.4 (I didn't test with any newer version, though).
Basically, what happened was that I used to first create a Config object and initialize it with some default values. When I added these two entries to my Config.proto, I forgot to also add initialization entries for them like for the other entries, thinking I was fine with the default (which I assumed would be "").
This doesn't pose any issues under Linux/G++: the program builds, runs just fine, and works as intended. Under Windows, however, this results in the behavior you just witnessed, i.e. setting either of the newly added entries would also set the other entry's value. So basically I either had to create a whole new Config object or explicitly set their values in load_default_config.
To be more concrete, this is the snippet I used for setting some default values in my Protobuf configs.
These reside in a separate header called Utility.h:
inline FVConfig::Config load_default_config()
{
    FVConfig::Config baseCfg;
    auto config = baseCfg.mutable_configuration();
    load_fv_default_settings(config->mutable_fv());
    load_mtcnn_default_settings(config->mutable_mtcnn());
    return baseCfg;
}

inline void load_fv_default_settings(FVConfig::Config_FaceVerificationSettings* fv)
{
    fv->mutable_settings()->set_fv_config_file_path(fv::config_file_path);
    fv->mutable_settings()->set_fv_device(fv::device);
    fv->mutable_settings()->set_fv_rebuild_cache(fv::rebuild_cache);
    ...
    // these two lines were missing previously and, to my surprise, this was indeed
    // the cause of the weird behavior.
    fv->mutable_settings()->set_fv_shape_predictor_path(fv::shape_predictor_path);
    fv->mutable_settings()->set_fv_eyenet_path(fv::eyenet_path);

    auto new_model_list = fv->mutable_model_weights()->mutable_new_models_weights()->Add();
    new_model_list->set_model_name("r18");
    new_model_list->set_description("default");
    new_model_list->set_model_weight_path(fv::model_weights_r18);
}

inline void load_mtcnn_default_settings(FVConfig::Config_MTCNNDetectorSettings* mt)
{
    mt->mutable_settings()->set_mt_device(mtcnn::device);
    mt->mutable_settings()->set_mt_hop(mtcnn::hop);
    ...
}
I'm not sure whether this counts as a bug in Protobuf or just a wrong approach on my part.
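As an aside, a quick way to see which fields actually ended up set is to dump the message with the standard protobuf text API; each field should then show its own value rather than mirroring another one's (debugging sketch):

#include <iostream>

// Print the text form of the fully initialized config for inspection.
FVConfig::Config cfg = load_default_config();
std::cout << cfg.DebugString() << std::endl;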
I'm currently trying to process the absolute value of a drawing tablet's touch ring through the Wintab API. However, despite following the instructions as described in the documentation, it seems the WTOpen call doesn't set any extension settings. Using the touch ring after initializing Wintab still triggers the default events, while the default events for pen inputs are suppressed and all pen inputs are relayed to my application instead.
Here are the relevant segments of code:
...
#include "wintab.h"
#define PACKETDATA (PK_X | PK_Y | PK_Z | PK_NORMAL_PRESSURE | PK_ORIENTATION | PK_TIME | PK_BUTTONS)
#define PACKETMODE 0
#define PACKETTOUCHSTRIP PKEXT_ABSOLUTE
#define PACKETTOUCHRING PKEXT_ABSOLUTE
#include "pktdef.h"
...
internal b32
InitWinTab(HWND Window, window_mapping *Map)
{
    if(!LoadWintabFunctions())
        return false;

    LOGCONTEXT Tablet;
    AXIS TabletX, TabletY, TabletZ, Pressure;
    if(!gpWTInfoA(WTI_DEFCONTEXT, 0, &Tablet))
        return false;

    gpWTInfoA(WTI_DEVICES, DVC_X, &TabletX);
    gpWTInfoA(WTI_DEVICES, DVC_Y, &TabletY);
    gpWTInfoA(WTI_DEVICES, DVC_Z, &TabletZ);
    gpWTInfoA(WTI_DEVICES, DVC_NPRESSURE, &Pressure);

    UINT TouchStripOffset = 0xFFFF;
    UINT TouchRingOffset = 0xFFFF;
    for(UINT i = 0, ScanTag = 0; gpWTInfoA(WTI_EXTENSIONS + i, EXT_TAG, &ScanTag); i++)
    {
        if(ScanTag == WTX_TOUCHSTRIP)
            TouchStripOffset = i;
        if(ScanTag == WTX_TOUCHRING)
            TouchRingOffset = i;
    }

    Tablet.lcOptions |= CXO_MESSAGES;
    Tablet.lcPktData = PACKETDATA;
    Tablet.lcPktMode = PACKETMODE;
    Tablet.lcMoveMask = PACKETDATA;
    Tablet.lcBtnUpMask = Tablet.lcBtnDnMask;
    Tablet.lcInOrgX = 0;
    Tablet.lcInOrgY = 0;
    Tablet.lcInExtX = TabletX.axMax;
    Tablet.lcInExtY = TabletY.axMax;

    if(TouchStripOffset != 0xFFFF)
    {
        WTPKT DataMask;
        gpWTInfoA(WTI_EXTENSIONS + TouchStripOffset, EXT_MASK, &DataMask);
        Tablet.lcPktData |= DataMask;
    }
    if(TouchRingOffset != 0xFFFF)
    {
        WTPKT DataMask;
        gpWTInfoA(WTI_EXTENSIONS + TouchRingOffset, EXT_MASK, &DataMask);
        Tablet.lcPktData |= DataMask;
    }

    Map->AxisMax.x = (r32)TabletX.axMax;
    Map->AxisMax.y = (r32)TabletY.axMax;
    Map->AxisMax.z = (r32)TabletZ.axMax;
    Map->PressureMax = (r32)Pressure.axMax;

    if(!gpWTOpenA(Window, &Tablet, TRUE))
        return false;

    return(TabletX.axMax && TabletY.axMax && TabletZ.axMax && Pressure.axMax);
}
...
case WT_PACKET:
{
    PACKET Packet;
    if(gpWTPacket((HCTX)LParam, (UINT)WParam, &Packet))
    {
        ...
    }
} break;

case WT_PACKETEXT:
{
    PACKETEXT Packet;
    if(gpWTPacket((HCTX)LParam, (UINT)WParam, &Packet))
    {
        ...
    }
} break;
...
The bitmasks for the packet data in the initialization set sensible bits for both extensions and don't overlap with the existing bitmask. No stage of the initialization fails. WT_PACKET gets called only with valid packet data, while WT_PACKETEXT never gets called. Furthermore, calling WTPacketsGet with a PACKETEXT pointer on the HCTX returned by WTOpen fills the packet with garbage from the regular packet queue. This leaves me with the conclusion that somehow WTOpen wasn't notified that the extensions should be loaded, and I'm unable to find what else I should set in the LOGCONTEXT data structure to change that.
Is there a mistake in my initialization? Or is there a way to get a better readout of why the extensions didn't load?
It turned out that a driver setting prevented the extension packets from being sent, in favor of using the touch ring for a different function. Changing this setting resolved the issue. The code itself didn't contain any errors.
As a result of earlier parts of a pipeline, I have two .pb files containing frozen, optimized TensorFlow graphs for slightly different architectures of the same production model. I would like to load them both into the same session in the same C++ program for inference, but of course the graph nodes have lots of conflicting names.
In Python, I was under the impression that you could load the two graphs into the same session within different variable scopes, but in C++ I'm not sure how to do this.
So I've been doing essentially the following, which seems to work, but also seems a bit hacky and fragile to do manually like this, particularly the handling of the caret in the name for control edges. Is this a reasonable way to do this, and/or is there a pre-existing function in the C++ API that I can call instead that accomplishes the same thing?
Status status;
GraphDef graphDef1;
GraphDef graphDef2;
status = ReadBinaryProto(Env::Default(), string("frozen_graph_optimized1.pb"), &graphDef1);
CHECK_STATUS(status, "reading graph1");
status = ReadBinaryProto(Env::Default(), string("frozen_graph_optimized2.pb"), &graphDef2);
CHECK_STATUS(status, "reading graph2");

auto addPrefixToGraph = [](GraphDef& graphDef, const string& prefix) {
    for(int i = 0; i < graphDef.node_size(); ++i)
    {
        auto node = graphDef.mutable_node(i);
        string* name = node->mutable_name();
        *name = prefix + *name;
        int inputSize = node->input_size();
        for(int j = 0; j < inputSize; ++j) {
            string* inputName = node->mutable_input(j);
            if(inputName->size() > 0 && (*inputName)[0] == '^')
                *inputName = "^" + prefix + inputName->substr(1);
            else
                *inputName = prefix + *inputName;
        }
    }
};
addPrefixToGraph(graphDef1, "g1/");
addPrefixToGraph(graphDef2, "g2/");

status = session->Create(graphDef1);
CHECK_STATUS(status, "adding graph1 to session");
status = session->Extend(graphDef2);
CHECK_STATUS(status, "adding graph2 to session");
I am looking for code that can generate a V4 UUID using a UTC timestamp as input.
I want to use this code in my LoadRunner script to pass the UUID in my LoadRunner request.
I'd appreciate it if the code could be provided in C++.
I recall using something like
#define msgIdSize 37 // 36 characters of a GUID string plus the terminating NUL

int GenerateGuid()
{
    typedef struct _GUID
    {
        unsigned long Group1;
        unsigned short Group2;
        unsigned short Group3;
        unsigned char Group4[8];
    } GUID;
    GUID m_guid;
    char msgId[msgIdSize];

    lr_load_dll("ole32.dll");
    CoCreateGuid(&m_guid);
    sprintf(msgId, "%08lx-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x",
            m_guid.Group1, m_guid.Group2, m_guid.Group3,
            m_guid.Group4[0], m_guid.Group4[1], m_guid.Group4[2], m_guid.Group4[3],
            m_guid.Group4[4], m_guid.Group4[5], m_guid.Group4[6], m_guid.Group4[7]);
    lr_save_string(msgId, "msgId");
    return 0;
}
This is basically a call to the CoCreateGuid function (it will work on Windows load generators only), storing the result in the msgId LoadRunner parameter.
Actually, that was the last time I used LoadRunner: as far as I remember, it failed to produce the required volume of large POST requests on the hardware (and I also had to work around several artificial limitations on request size), while Apache JMeter worked like a charm. Just to compare: there you only need to call a single function, ${__UUID}, and that's it. Check out the Writing Your First JMeter Script article if interested.
LoadRunner is a C virtual user, not C++.
I refer you to the built-in functions web_save_timestamp_param() or lr_save_timestamp() as options for your use case.
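Something along these lines (the parameter name tStamp is just an example):

// Saves the current timestamp into the {tStamp} parameter for use in requests.
web_save_timestamp_param("tStamp", LAST);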
I am using the code below to generate a UUID in LoadRunner independent of the OS of the load generators. Please check this link as well: How to generate Universally Unique IDentifier, UUID from LoadRunner independent of the OS.
int lr_guid_gen()
{
    char GUID[40];
    int t = 0;
    char *szTemp = "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx";
    char *szHex = "0123456789abcdef-";
    int nLen = strlen(szTemp);

    for (t = 0; t < nLen + 1; t++)
    {
        int r = rand() % 16;
        char c = ' ';
        switch (szTemp[t])
        {
            case 'x' : { c = szHex[r]; } break;
            case 'y' : { c = szHex[(r & 0x03) | 0x08]; } break; // variant bits: 10xx
            case '-' : { c = '-'; } break;
            case '4' : { c = '4'; } break;
        }
        GUID[t] = (t < nLen) ? c : 0x00;
    }
    lr_save_string(GUID, "PAR_GUID");
    return 0;
}
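If you need the same thing outside of LoadRunner, a plain C++ sketch of the same template-substitution idea, seeding the generator from the current UTC timestamp as the question asks, could look like this (illustrative only):

#include <chrono>
#include <cstdint>
#include <random>
#include <string>

// Generate a version-4/variant-1 UUID string, seeding the PRNG from the
// current UTC timestamp so each run gets a distinct stream.
std::string uuid4_from_timestamp()
{
    const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(
        std::chrono::system_clock::now().time_since_epoch()).count();
    std::mt19937_64 gen(static_cast<std::uint64_t>(ns));
    std::uniform_int_distribution<int> hex(0, 15);

    const char* tmpl = "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx";
    const char* digits = "0123456789abcdef";
    std::string out;
    for (const char* p = tmpl; *p; ++p) {
        if (*p == 'x')      out += digits[hex(gen)];
        else if (*p == 'y') out += digits[(hex(gen) & 0x3) | 0x8]; // variant bits 10xx
        else                out += *p;                             // '-' and the literal '4'
    }
    return out;
}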
When calling WinAPI functions that take callbacks as arguments, there's usually a special parameter for passing some arbitrary data to the callback. When there's no such parameter (e.g. SetWinEventHook), the only way to tell which of the API calls resulted in the invocation of a given callback is to have distinct callbacks. When we know at compile time all the cases in which the given API is called, we can always create a class template with a static method and instantiate it with different template arguments at different call sites. That's a hell of a lot of work, and I don't like doing it.
How do I create callback functions at runtime so that they have distinct function pointers?
I saw a solution (sorry, in Russian) based on runtime assembly generation, but it wasn't portable across x86/x64 architectures.
You can use the closure API of libffi. It allows you to create trampolines, each with a different address. I implemented a wrapping class here, though it's not finished yet (it only supports int arguments and return types; you can specialize detail::type to support more than just int). A more heavyweight alternative is LLVM, though if you're dealing only with C types, libffi will do the job fine.
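A minimal sketch of what that looks like with libffi's closure API (reduced to a single int argument; handler and make_callback are illustrative names, not part of any real API):

#include <ffi.h>
#include <cstdint>
#include <cstdio>

// The callback type the hypothetical API expects: no user-data parameter.
typedef void (*callback_t)(int);

// One shared handler; the per-callback identity arrives via userdata.
static void handler(ffi_cif*, void*, void** args, void* userdata)
{
    int event = *static_cast<int*>(args[0]);
    std::printf("callback %ld got event %d\n",
                (long)reinterpret_cast<std::intptr_t>(userdata), event);
}

// Create a fresh trampoline with its own distinct address, bound to id.
callback_t make_callback(std::intptr_t id)
{
    void* code = nullptr;
    ffi_closure* closure = static_cast<ffi_closure*>(
        ffi_closure_alloc(sizeof(ffi_closure), &code));
    if (!closure)
        return nullptr;

    static ffi_type* argtypes[] = { &ffi_type_sint };
    static ffi_cif cif; // describes void(int); shared by all closures here
    if (ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, &ffi_type_void, argtypes) != FFI_OK ||
        ffi_prep_closure_loc(closure, &cif, handler,
                             reinterpret_cast<void*>(id), code) != FFI_OK)
        return nullptr;

    return reinterpret_cast<callback_t>(code); // call through code, not closure
}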
I've come up with this solution which should be portable (but I haven't tested it):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ID_PATTERN 0x11223344
#define SIZE_OF_BLUEPRINT 128 // needs to be adapted if uniqueCallbackBlueprint is complex...

typedef int (__cdecl * UNIQUE_CALLBACK)(int arg);

/* blueprint for unique callback function */
int uniqueCallbackBlueprint(int arg)
{
    int id = ID_PATTERN;
    printf("%x: Hello unique callback (arg=%d)...\n", id, arg);
    return (id);
}

/* create a new unique callback */
UNIQUE_CALLBACK createUniqueCallback(int id)
{
    UNIQUE_CALLBACK result = NULL;
    char *pUniqueCallback;
    char *pFunction;
    int pattern = ID_PATTERN;
    char *pPattern;
    char *startOfId;
    int i;
    int j;
    int patterns = 0;

    pUniqueCallback = malloc(SIZE_OF_BLUEPRINT);
    if (pUniqueCallback != NULL)
    {
        pFunction = (char *)uniqueCallbackBlueprint;
#if defined(_DEBUG)
        pFunction += 0x256; // variable offset depending on debug information????
#endif /* _DEBUG */
        memcpy(pUniqueCallback, pFunction, SIZE_OF_BLUEPRINT);
        result = (UNIQUE_CALLBACK)pUniqueCallback;

        /* replace ID_PATTERN with requested id */
        pPattern = (char *)&pattern;
        startOfId = NULL;
        for (i = 0; i < SIZE_OF_BLUEPRINT; i++)
        {
            if (pUniqueCallback[i] == *pPattern)
            {
                if (pPattern == (char *)&pattern)
                    startOfId = &(pUniqueCallback[i]);
                if (pPattern == ((char *)&pattern) + sizeof(int) - 1)
                {
                    pPattern = (char *)&id;
                    for (j = 0; j < sizeof(int); j++)
                    {
                        *startOfId++ = *pPattern++;
                    }
                    patterns++;
                    break;
                }
                pPattern++;
            }
            else
            {
                pPattern = (char *)&pattern;
                startOfId = NULL;
            }
        }
        printf("%d pattern(s) replaced\n", patterns);
        if (patterns == 0)
        {
            free(pUniqueCallback);
            result = NULL;
        }
    }
    return (result);
}
Usage is as follows:
int main(void)
{
    UNIQUE_CALLBACK callback;
    int id;

    id = uniqueCallbackBlueprint(5);
    printf(" -> id = %x\n", id);

    callback = createUniqueCallback(0x4711);
    if (callback != NULL)
    {
        id = callback(25);
        printf(" -> id = %x\n", id);
    }

    id = uniqueCallbackBlueprint(15);
    printf(" -> id = %x\n", id);
    getch();
    return (0);
}
I've noted an interesting behavior when compiling with debug information (Visual Studio). The address obtained by pFunction = (char *)uniqueCallbackBlueprint; is off by a variable number of bytes; the correct address can be obtained using the debugger, which displays it. This offset changes from build to build, and I assume it has something to do with the debug information (possibly the incremental-linking jump thunks that debug builds insert between a function pointer and the actual code). This is no problem for the release build, so maybe this should be put into a library which is built as "release".
Another thing to consider would be the byte alignment of pUniqueCallback, which may be an issue. But aligning the beginning of the function to 64-bit boundaries is not hard to add to this code.
Within pUniqueCallback you can implement anything you want (just remember to update SIZE_OF_BLUEPRINT so you don't miss the tail of your function). The function is compiled once and the generated code is re-used at runtime. The initial value of id is replaced when creating the unique function, so the blueprint function can process it.
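One caveat worth adding to the above: on systems with DEP/NX enabled, memory returned by malloc is typically not executable, so calling into the copied blueprint may simply fault. A sketch of the allocation done with VirtualAlloc instead (Windows-specific; it would replace the malloc/free pair above):

#include <windows.h>

// Request readable/writable/executable memory so DEP does not fault the call.
char *pUniqueCallback = (char *)VirtualAlloc(NULL, SIZE_OF_BLUEPRINT,
                                             MEM_COMMIT | MEM_RESERVE,
                                             PAGE_EXECUTE_READWRITE);
// ... memcpy the blueprint and patch the id as before, then:
FlushInstructionCache(GetCurrentProcess(), pUniqueCallback, SIZE_OF_BLUEPRINT);
// Release with VirtualFree(pUniqueCallback, 0, MEM_RELEASE) instead of free().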