Generate a V4 UUID using a timestamp as input in C++

I am looking for code that can generate a V4 UUID using a UTC timestamp as input.
I want to use this code in my LoadRunner script, to pass the UUID in my LoadRunner request.
I would appreciate it if the code were provided in C++.

I recall using something like
int GenerateGuid()
{
    typedef struct _GUID
    {
        unsigned long Group1;
        unsigned short Group2;
        unsigned short Group3;
        unsigned char Group4[8];
    } GUID;
    GUID m_guid;
    char msgId[64]; // 36 characters for the UUID plus terminator, rounded up
    lr_load_dll("ole32.dll");
    CoCreateGuid(&m_guid);
    sprintf(msgId, "%08lx-%04x-%04x-%02x%02x-%02x%02x%02x%02x%02x%02x",
        m_guid.Group1, m_guid.Group2, m_guid.Group3,
        m_guid.Group4[0], m_guid.Group4[1], m_guid.Group4[2], m_guid.Group4[3],
        m_guid.Group4[4], m_guid.Group4[5], m_guid.Group4[6], m_guid.Group4[7]);
    lr_save_string(msgId, "msgId");
    return 0;
}
This is basically a call of the CoCreateGuid function (it will apply to Windows load generators only), storing the result into the msgId LoadRunner parameter.
Actually, that was the last time I used LoadRunner, as far as I remember: it failed to produce the required volume of large POST requests on the hardware (and I also had to work around several artificial limitations on request size), while Apache JMeter worked like a charm. Just to compare, in JMeter you need just a single function call, ${__UUID}, and that's it. Check out the Writing Your First JMeter Script article if interested.

LoadRunner is a C virtual user, not C++.
I refer you to the built-in functions web_save_timestamp_param() or lr_save_timestamp() as options for your use case.
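For reference, a minimal sketch of how those look in a script (tStamp is an arbitrary parameter name of my choosing; the web_ variant assumes a web/HTTP Vuser):

    // Save the current timestamp into the {tStamp} parameter.
    web_save_timestamp_param("tStamp", LAST);
    // Or the protocol-independent variant:
    lr_save_timestamp("tStamp", LAST);
    // {tStamp} can then be referenced in any subsequent request.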

I am using the code below to generate a UUID in LoadRunner, independent of the OS of the load generators. Please check this link as well - How to generate Universally Unique IDentifier, UUID from LoadRunner independent of the OS
int lr_guid_gen()
{
    char GUID[40];
    int t = 0;
    char *szTemp = "xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx";
    char *szHex = "0123456789abcdef-";
    int nLen = strlen(szTemp);
    // Walk the template; the bound is nLen + 1 so the terminating NUL is written too.
    for (t = 0; t < nLen + 1; t++)
    {
        int r = rand() % 16;
        char c = ' ';
        switch (szTemp[t])
        {
            case 'x': c = szHex[r]; break;                  // random hex digit
            case 'y': c = szHex[(r & 0x03) | 0x08]; break;  // variant nibble: 8, 9, a, or b
            case '-': c = '-'; break;
            case '4': c = '4'; break;                       // version nibble for V4
        }
        GUID[t] = (t < nLen) ? c : 0x00;
    }
    lr_save_string(GUID, "PAR_GUID");
    return 0;
}
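Since rand() drives everything in lr_guid_gen(), you can tie the result to the UTC timestamp the original question asks about by seeding the generator with it first; this seeding step is my suggestion, not part of the answer above:

    // Seed rand() with the current UTC time so that lr_guid_gen() derives
    // its UUID from the timestamp; call once before generating.
    srand((unsigned int)time(NULL));
    lr_guid_gen();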

Related

libusb equivalent of PyUSB usb.util.find_descriptor

PyUSB has a little utility function called usb.util.find_descriptor which, as the name implies, allows me to search for a descriptor matching specific criteria, for example:
ep = usb.util.find_descriptor(
intf,
# match the first OUT endpoint
custom_match = \
lambda e: \
usb.util.endpoint_direction(e.bEndpointAddress) == \
usb.util.ENDPOINT_OUT)
I need the same functionality in a C++ application built with libusb. However, I don't see anything similar in the libusb API specification. What would be the easiest way to implement the same functionality on top of libusb?
I'm on Linux, if that makes a difference. However, I'd rather not add any additional dependency unless strictly required.
Update:
This is what I have so far:
libusb_config_descriptor* config;
int ret = libusb_get_config_descriptor(m_dev, 0 /* config_index */, &config);
if (ret != LIBUSB_SUCCESS) {
    raise_exception(std::runtime_error, "libusb_get_config_descriptor() failed: " << usb_strerror(ret));
}
// Do not assume endpoint_in is 1
const libusb_interface *interface = config->interface;
const libusb_interface_descriptor configuration = interface->altsetting[0];
for (int i = 0; i < configuration.bNumEndpoints; ++i) {
    const libusb_endpoint_descriptor endpoint = configuration.endpoint[i];
    if ((endpoint.bEndpointAddress & LIBUSB_ENDPOINT_IN) == LIBUSB_ENDPOINT_IN) {
        endpoint_in = endpoint.bEndpointAddress >> 8; // 3 first bytes
        log_debug("endpoint_in: " << endpoint_in);
    }
}
Iteration works this way, albeit it looks quite ugly and is mostly non-reusable. Also, extracting the endpoint number with:
endpoint_in = endpoint.bEndpointAddress >> 8; // 3 first bytes
does not seem to work.
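For what it's worth, a find_descriptor-style helper can be built around a caller-supplied predicate, much like PyUSB's custom_match. This is only a sketch of one possible approach (find_endpoint is a name I made up, not part of libusb); it also shows why the shift above fails: bEndpointAddress is a single byte, so the endpoint number lives in its low 4 bits, not at >> 8:

#include <libusb-1.0/libusb.h>

// Hypothetical helper: return the first endpoint of an altsetting that
// satisfies a caller-supplied predicate, or nullptr if none matches.
template <typename Predicate>
const libusb_endpoint_descriptor *
find_endpoint(const libusb_interface_descriptor *altsetting, Predicate pred)
{
    for (int i = 0; i < altsetting->bNumEndpoints; ++i) {
        const libusb_endpoint_descriptor *ep = &altsetting->endpoint[i];
        if (pred(ep))
            return ep;
    }
    return nullptr;
}

// Usage, mirroring the PyUSB lambda: match the first OUT endpoint.
int get_out_endpoint_number(libusb_config_descriptor *config)
{
    const libusb_endpoint_descriptor *ep = find_endpoint(
        &config->interface[0].altsetting[0],
        [](const libusb_endpoint_descriptor *e) {
            return (e->bEndpointAddress & LIBUSB_ENDPOINT_DIR_MASK)
                   == LIBUSB_ENDPOINT_OUT;
        });
    // The endpoint *number* is the low 4 bits of bEndpointAddress;
    // bit 7 is the direction, so ">> 8" on a one-byte field yields 0.
    return ep ? (ep->bEndpointAddress & 0x0F) : -1;
}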

When using memory BIOs with OpenSSL, how can you find the 'needed size' for the input BIO?

Here's some sample code which shows how I'm using OpenSSL:
BIO *CreateMemoryBIO() {
if (BIO *bio = BIO_new(BIO_s_mem())) {
BIO_set_mem_eof_return(bio, -1);
return bio;
}
throw std::runtime_error("Could not create memory BIO");
}
m_readBIO = CreateMemoryBIO();
m_writeBIO = CreateMemoryBIO();
SSL_set_bio(m_ssl, m_readBIO, m_writeBIO);
Now, if I do an SSL_read and I get SSL_ERROR_WANT_READ, is there any way for me to find out how much it had tried to read internally (in other words, how much do I need to write with BIO_write to m_readBIO before SSL_read would be satisfied)?
A good lower bound would work for me as well; my issue is that I need to report how much data to read to the layer above me, and it will not return control to me until it has read that much data (and I don't want to degenerate into 1-byte reads).
I'm aware that SSL_read and SSL_write may both alternately read and write due to handshaking and such, but I'm interested in the 'current' read that is being done internally.
If it's not possible with the standard BIO_s_mem, I assume it could be done if I wrote my own BIO which 'remembered' the size of the last read request that failed, so any pointers to documentation on writing custom BIOs (which, to my knowledge, is supported by OpenSSL) would also be appreciated.
Thanks to CristiFati for suggesting BIO_set_callback; it seems to work. If you want to make your comment into an answer, I'll accept it, but I want to put the details here for posterity.
Inside my 'SSLSocket' class:
in the constructor:
BIO_set_callback(m_readBIO, &BIOCallback);
BIO_set_callback_arg(m_readBIO, reinterpret_cast<char*>(this));
long SSLSocket::BIOCallback(
    BIO *in_bio,
    int in_operation,
    const char* in_arg1,
    int in_arg2,
    long in_arg3,
    long in_returnValue)
{
    // in_bio isn't provided for BIO_CB_FREE.
    if (BIO_CB_FREE == in_operation)
    {
        return in_returnValue;
    }
    assert(in_arg1);
    return reinterpret_cast<SSLSocket*>(BIO_get_callback_arg(in_bio))->DoBIOCallback(
        in_bio,
        in_operation,
        in_arg1,
        in_arg2,
        in_arg3,
        in_returnValue);
}
long SSLSocket::DoBIOCallback(
    BIO *in_bio,
    int in_operation,
    const char* in_arg1,
    int in_arg2,
    long in_arg3,
    long in_returnValue)
{
    UNUSED(in_arg3);
    // We only care about the return callback for BIO_read()
    if ((BIO_CB_READ | BIO_CB_RETURN) == in_operation)
    {
        const int shouldRetry = BIO_should_retry(in_bio);
        const int bytesRequested = in_arg2;
        assert(bytesRequested > 0);
        if ((in_returnValue <= 0) && shouldRetry)
        {
            m_needBytes = bytesRequested;
        }
        else if ((in_returnValue > 0) && (in_returnValue < bytesRequested) && shouldRetry)
        {
            m_needBytes = bytesRequested - in_returnValue;
        }
        else
        {
            m_needBytes = 0;
        }
    }
    return in_returnValue;
}
Then I use m_needBytes to decide how much to write in BIO_write().
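To round the answer out, here is a sketch of the driving read loop that consumes m_needBytes (transport_read is a hypothetical blocking read from the real socket, and the buffer sizes are arbitrary):

    // Pump data into the read BIO until SSL_read is satisfied.
    int n = SSL_read(m_ssl, buffer, bufferSize);
    while (n <= 0 && SSL_get_error(m_ssl, n) == SSL_ERROR_WANT_READ)
    {
        // m_needBytes was set by the BIO callback above; read at least
        // that much from the transport and feed it to OpenSSL.
        size_t want = (m_needBytes > 0) ? (size_t)m_needBytes : 1;
        char incoming[4096];
        if (want > sizeof(incoming))
            want = sizeof(incoming);
        size_t got = transport_read(incoming, want); // hypothetical
        BIO_write(m_readBIO, incoming, (int)got);
        n = SSL_read(m_ssl, buffer, bufferSize);
    }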

Create a function with unique function pointer in runtime

When calling WinAPI functions that take callbacks as arguments, there's usually a special parameter for passing some arbitrary data to the callback. When there's no such thing (e.g. SetWinEventHook), the only way to tell which of the API calls resulted in the call of a given callback is to have distinct callbacks. When we know at compile time all the cases in which the given API is called, we can always create a class template with a static method and instantiate it with different template arguments at different call sites. That's a hell of a lot of work, and I don't like doing it.
How do I create callback functions at runtime so that they have distinct function pointers?
I saw a solution (sorry, in Russian) based on runtime assembly generation, but it wasn't portable across x86/x64 architectures.
You can use the closure API of libffi. It allows you to create trampolines, each with a different address. I implemented a wrapping class here, though it isn't finished yet (it only supports int arguments and return types; you can specialize detail::type to support more than just int). A more heavyweight alternative is LLVM, though if you're dealing only with C types, libffi will do the job fine.
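To make the idea concrete, here is a minimal self-contained sketch of the libffi closure API (my own illustration of the technique, not the wrapping class linked above): both callbacks run the same handler, but each trampoline gets a distinct code address, and the bound user_data tells them apart.

#include <ffi.h>
#include <cstdio>
#include <cstdint>

typedef int (*callback_t)(int);

// One shared handler; the bound user_data distinguishes the trampolines.
static void handler(ffi_cif *cif, void *ret, void **args, void *user_data)
{
    (void)cif;
    int arg = *static_cast<int *>(args[0]);
    std::printf("trampoline tagged %p called with %d\n", user_data, arg);
    // Integral returns narrower than ffi_arg must be written as ffi_arg.
    *static_cast<ffi_arg *>(ret) =
        static_cast<int>(reinterpret_cast<std::intptr_t>(user_data));
}

// Build a callback with a unique code address bound to user_data.
static callback_t make_callback(ffi_cif *cif, void *user_data)
{
    void *code = nullptr;
    ffi_closure *closure = static_cast<ffi_closure *>(
        ffi_closure_alloc(sizeof(ffi_closure), &code));
    if (!closure ||
        ffi_prep_closure_loc(closure, cif, handler, user_data, code) != FFI_OK)
        return nullptr; // leaks the closure on failure; fine for a sketch
    return reinterpret_cast<callback_t>(code);
}

int main()
{
    static ffi_cif cif;
    static ffi_type *argtypes[] = { &ffi_type_sint };
    ffi_prep_cif(&cif, FFI_DEFAULT_ABI, 1, &ffi_type_sint, argtypes);

    callback_t a = make_callback(&cif, reinterpret_cast<void *>(1));
    callback_t b = make_callback(&cif, reinterpret_cast<void *>(2));
    // a and b are distinct function pointers, so an API like SetWinEventHook
    // can be handed a different callback per registration.
    std::printf("a=%p b=%p -> %d %d\n",
                reinterpret_cast<void *>(a), reinterpret_cast<void *>(b),
                a(10), b(20));
    return 0;
}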
I've come up with this solution which should be portable (but I haven't tested it):
#define ID_PATTERN 0x11223344
#define SIZE_OF_BLUEPRINT 128 // needs to be adapted if uniqueCallbackBlueprint is complex...

typedef int (__cdecl * UNIQUE_CALLBACK)(int arg);

/* blueprint for unique callback function */
int uniqueCallbackBlueprint(int arg)
{
    int id = ID_PATTERN;

    printf("%x: Hello unique callback (arg=%d)...\n", id, arg);
    return (id);
}

/* create a new unique callback */
UNIQUE_CALLBACK createUniqueCallback(int id)
{
    UNIQUE_CALLBACK result = NULL;
    char *pUniqueCallback;
    char *pFunction;
    int pattern = ID_PATTERN;
    char *pPattern;
    char *startOfId;
    int i;
    int patterns = 0;

    pUniqueCallback = malloc(SIZE_OF_BLUEPRINT);
    if (pUniqueCallback != NULL)
    {
        pFunction = (char *)uniqueCallbackBlueprint;
#if defined(_DEBUG)
        pFunction += 0x256; // variable offset depending on debug information????
#endif /* _DEBUG */
        memcpy(pUniqueCallback, pFunction, SIZE_OF_BLUEPRINT);
        result = (UNIQUE_CALLBACK)pUniqueCallback;

        /* replace ID_PATTERN with requested id */
        pPattern = (char *)&pattern;
        startOfId = NULL;
        for (i = 0; i < SIZE_OF_BLUEPRINT; i++)
        {
            if (pUniqueCallback[i] == *pPattern)
            {
                if (pPattern == (char *)&pattern)
                    startOfId = &(pUniqueCallback[i]);
                if (pPattern == ((char *)&pattern) + sizeof(int) - 1)
                {
                    /* full pattern matched: overwrite it with the new id */
                    pPattern = (char *)&id;
                    for (i = 0; i < sizeof(int); i++)
                    {
                        *startOfId++ = *pPattern++;
                    }
                    patterns++;
                    break;
                }
                pPattern++;
            }
            else
            {
                pPattern = (char *)&pattern;
                startOfId = NULL;
            }
        }
        printf("%d pattern(s) replaced\n", patterns);
        if (patterns == 0)
        {
            free(pUniqueCallback);
            result = NULL;
        }
    }
    return (result);
}
Usage is as follows:
int main(void)
{
    UNIQUE_CALLBACK callback;
    int id;

    id = uniqueCallbackBlueprint(5);
    printf(" -> id = %x\n", id);

    callback = createUniqueCallback(0x4711);
    if (callback != NULL)
    {
        id = callback(25);
        printf(" -> id = %x\n", id);
    }

    id = uniqueCallbackBlueprint(15);
    printf(" -> id = %x\n", id);
    getch();
    return (0);
}
I've noted an interesting behavior when compiling with debug information (Visual Studio). The address obtained by pFunction = (char *)uniqueCallbackBlueprint; is off by a variable number of bytes. The difference can be obtained using the debugger, which displays the correct address. This offset changes from build to build, and I assume it has something to do with the debug information. It is no problem for the release build, so maybe this should be put into a library which is built as "release".
Another thing to consider would be the byte alignment of pUniqueCallback, which may be an issue. But aligning the beginning of the function to a 64-bit boundary is not hard to add to this code.
Within pUniqueCallback you can implement anything you want (just remember to update SIZE_OF_BLUEPRINT so you don't miss the tail of your function). The function is compiled and the generated code is re-used at runtime. The initial value of id is replaced when creating the unique function, so the blueprint function can process it.
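One caveat to add: on Windows versions with DEP enabled, memory returned by malloc() is typically not executable, so calling into the copied blueprint will fault. A sketch of the usual fix (my addition, not part of the answer above) is to allocate the block with execute permission:

    #include <windows.h>

    /* Replacement for the malloc() call above: request memory that is
       both writable and executable. */
    char *pUniqueCallback = (char *)VirtualAlloc(
        NULL, SIZE_OF_BLUEPRINT,
        MEM_COMMIT | MEM_RESERVE,
        PAGE_EXECUTE_READWRITE);
    /* ...copy and patch as before, then release with
       VirtualFree(pUniqueCallback, 0, MEM_RELEASE); instead of free(). */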

libclang get primitive value

How can I get the value of a primitive literal using libclang?
For example, if I have a CXCursor of cursor kind CXCursor_IntegerLiteral, how can I extract the literal value.
UPDATE:
I've run into so many problems using libclang that I highly recommend avoiding it entirely and instead using the C++ interface clang provides. The C++ interface is highly usable and very well documented: http://clang.llvm.org/doxygen/annotated.html
The only purpose I see for libclang now is to generate the ASTUnit object for you, as with the following code (it's not exactly easy otherwise):
ASTUnit *astUnit;
{
    CXIndex index = clang_createIndex(0, 0);
    CXTranslationUnit tu = clang_parseTranslationUnit(
        index, 0,
        clangArgs, nClangArgs,
        0, 0, CXTranslationUnit_None
    );
    astUnit = static_cast<ASTUnit *>(tu->TUData);
}
Now you might say that libclang is stable and the C++ interface isn't. That hardly matters: the time you spend figuring out the AST with libclang and building kludges around it wastes far more of your time anyway. I'd just as soon spend a few hours fixing code that no longer compiles after a version upgrade (if that's even needed).
Instead of reparsing the original file, you already have all the information you need inside the translation unit:
if (kind == CXCursor_IntegerLiteral)
{
    CXSourceRange range = clang_getCursorExtent(cursor);
    CXToken *tokens = 0;
    unsigned int nTokens = 0;
    clang_tokenize(tu, range, &tokens, &nTokens);
    for (unsigned int i = 0; i < nTokens; i++)
    {
        CXString spelling = clang_getTokenSpelling(tu, tokens[i]);
        printf("token = %s\n", clang_getCString(spelling));
        clang_disposeString(spelling);
    }
    clang_disposeTokens(tu, tokens, nTokens);
}
You will see that the first token is the integer itself; the next one is not relevant (e.g. it's ; for int i = 42;).
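If you need the numeric value rather than its spelling, converting the first token is straightforward; a small sketch (my addition):

    // Base 0 lets strtol accept decimal, octal (0...), and hex (0x...) literals.
    CXString spelling = clang_getTokenSpelling(tu, tokens[0]);
    long value = strtol(clang_getCString(spelling), NULL, 0);
    clang_disposeString(spelling);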
If you have access to a CXCursor, you can make use of the clang_Cursor_Evaluate function, for example:
CXChildVisitResult var_decl_visitor(
CXCursor cursor, CXCursor parent, CXClientData data) {
auto kind = clang_getCursorKind(cursor);
switch (kind) {
case CXCursor_IntegerLiteral: {
auto res = clang_Cursor_Evaluate(cursor);
auto value = clang_EvalResult_getAsInt(res);
clang_EvalResult_dispose(res);
std::cout << "IntegerLiteral " << value << std::endl;
break;
}
default:
break;
}
return CXChildVisit_Recurse;
}
Outputs:
IntegerLiteral 42
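For context, the visitor above gets wired up with clang_visitChildren; a sketch, assuming tu is your parsed CXTranslationUnit:

    clang_visitChildren(clang_getTranslationUnitCursor(tu),
                        var_decl_visitor, nullptr);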
I found a way to do this by referring to the original files:
std::string getCursorText(CXCursor cur) {
    CXSourceRange range = clang_getCursorExtent(cur);
    CXSourceLocation begin = clang_getRangeStart(range);
    CXSourceLocation end = clang_getRangeEnd(range);
    CXFile cxFile;
    unsigned int beginOff;
    unsigned int endOff;
    clang_getExpansionLocation(begin, &cxFile, 0, 0, &beginOff);
    clang_getExpansionLocation(end, 0, 0, 0, &endOff);
    ClangString filename = clang_getFileName(cxFile);
    unsigned int textSize = endOff - beginOff;

    FILE *file = fopen(filename.c_str(), "r");
    if (file == 0) {
        exit(ExitCode::CANT_OPEN_FILE);
    }
    fseek(file, beginOff, SEEK_SET);
    char buff[4096];
    char *pBuff = buff;
    if (textSize + 1 > sizeof(buff)) {
        pBuff = new char[textSize + 1];
    }
    pBuff[textSize] = '\0';
    fread(pBuff, 1, textSize, file);
    std::string res(pBuff);
    if (pBuff != buff) {
        delete[] pBuff;
    }
    fclose(file);
    return res;
}
You can actually use a combination of libclang and the C++ interface.
The libclang CXCursor type contains a data field which contains references to the underlying AST nodes.
I was able to successfully access the IntegerLiteral value by casting data[1] to the IntegerLiteral type.
I'm implementing this in Nim so I will provide Nim code, but you can likely do the same in C++.
let literal = cast[clang.IntegerLiteral](cursor.data[1])
echo literal.getValue().getLimitedValue()
The IntegerLiteral type is wrapped like so:
type
  APIntObj* {.importcpp: "llvm::APInt", header: "llvm/ADT/APInt.h".} = object
    # https://github.com/llvm-mirror/llvm/blob/master/include/llvm/ADT/APInt.h
  APInt* = ptr APIntObj
  IntegerLiteralObj* {.importcpp: "clang::IntegerLiteral", header: "clang/AST/Expr.h".} = object
  IntegerLiteral* = ptr IntegerLiteralObj

proc getValue*(i: IntegerLiteral): APIntObj {.importcpp: "#.getValue()".}
# This is implemented by the superclass: https://clang.llvm.org/doxygen/classclang_1_1APIntStorage.html
proc getLimitedValue*(a: APInt | APIntObj): culonglong {.importcpp: "#.getLimitedValue()".}
Hope this helps someone :)

How to coerce an xltypeNum to double in C++ using the Excel 2007 SDK

Well, I am attempting to make my way through developing an Excel add-in. I am trying small functions, with the sample code in the Excel 2007 SDK as a guide. I am having difficulty displaying double-type data in Excel. Assuming the UDF is called DisplayDouble(), the sample code works when a call is placed with a real-typed literal argument such as DisplayDouble(12.3), yet if I use an argument that references real-typed data from a cell, such as DisplayDouble(A1) where cell A1 in the worksheet holds the value 12.3, the sample code does not work.
You can see the sample code below this paragraph. Any hints will help me move along the learning ladder.
_declspec(dllexport) LPXLOPER12 WINAPI DisplayDouble(LPXLOPER12 n)
{
    static XLOPER12 xResult;
    XLOPER12 xlt;
    int error = -1;
    double d;

    switch (n->xltype)
    {
    case xltypeNum:
        d = (double)n->val.num;
        if (d < 0)
            error = xlerrValue;
        xResult.xltype = xltypeNum;
        xResult.val.num = d;
        break;
    case xltypeSRef:
        error = Excel12f(xlCoerce, &xlt, 2, n, TempNum12(xltypeNum));
        if (!error)
        {
            error = -1;
            d = xlt.val.w;
            xResult.xltype = xltypeNum;
            xResult.val.num = d;
        }
        Excel12f(xlFree, 0, 1, &xlt);
        break;
    default:
        error = xlerrValue;
        break;
    }
    if (error != -1)
    {
        xResult.xltype = xltypeErr;
        xResult.val.err = error;
    }
    // Word of caution - returning static XLOPERs/XLOPER12s is not thread-safe;
    // for UDFs declared as thread-safe, use alternate memory allocation mechanisms.
    return (LPXLOPER12)&xResult;
}
Looks like you coerced the value to xltypeNum but are then taking the integer value, with d = xlt.val.w rather than d = xlt.val.num.
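A sketch of the corrected branch (which also passes the target type via TempInt12, the way the SDK's Framework examples usually spell the xlCoerce call):

    case xltypeSRef:
        // Coerce the reference to a number, then read the double member.
        error = Excel12f(xlCoerce, &xlt, 2, n, TempInt12(xltypeNum));
        if (!error)
        {
            error = -1;
            d = xlt.val.num;   // val.num is the double; val.w is an integer
            xResult.xltype = xltypeNum;
            xResult.val.num = d;
        }
        Excel12f(xlFree, 0, 1, &xlt);
        break;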