How to write multiple nodes with OPC-UA at once using open62541? - c++

I am attempting to write multiple nodes in a single request; however, I have not found any documentation or examples on how to do that. Whenever I find anything regarding the issue, only a single node is written. Based on my understanding of the open62541 library (which is not much), I've attempted to do this like so:
void Write_from_3_to_5_piece_queue() {
char NodeID[128];
char NodeID_backup[128];
char aux[3];
bool bool_to_write = false;
strcpy(NodeID_backup, _BaseNodeID);
strcat(NodeID_backup, "POU.AT2.piece_queue["); // this is where I want to write, I need only to append the array index in which to write
UA_WriteRequest wReq;
UA_WriteValue my_nodes[3]; // this is where I start to make things up, I'm not sure this is the correct way to do it
my_nodes[0] = *UA_WriteValue_new();
my_nodes[1] = *UA_WriteValue_new();
my_nodes[2] = *UA_WriteValue_new();
strcpy(NodeID, NodeID_backup);
strcat(NodeID, "3]"); //append third index of array (will write to piece_queue[3])
my_nodes[0].nodeId = UA_NODEID_STRING_ALLOC(_nodeIndex, NodeID);
my_nodes[0].attributeId = UA_ATTRIBUTEID_VALUE;
my_nodes[0].value.hasValue = true;
my_nodes[0].value.value.type = &UA_TYPES[UA_TYPES_BOOLEAN];
my_nodes[0].value.value.storageType = UA_VARIANT_DATA_NODELETE;
my_nodes[0].value.value.data = &bool_to_write;
strcpy(NodeID, NodeID_backup);
strcat(NodeID, "4]");
my_nodes[1].nodeId = UA_NODEID_STRING_ALLOC(_nodeIndex, NodeID);
my_nodes[1].attributeId = UA_ATTRIBUTEID_VALUE;
my_nodes[1].value.hasValue = true;
my_nodes[1].value.value.type = &UA_TYPES[UA_TYPES_BOOLEAN];
my_nodes[1].value.value.storageType = UA_VARIANT_DATA_NODELETE;
my_nodes[1].value.value.data = &bool_to_write;
strcpy(NodeID, NodeID_backup);
strcat(NodeID, "5]");
my_nodes[2].nodeId = UA_NODEID_STRING_ALLOC(_nodeIndex, NodeID);
my_nodes[2].attributeId = UA_ATTRIBUTEID_VALUE;
my_nodes[2].value.hasValue = true;
my_nodes[2].value.value.type = &UA_TYPES[UA_TYPES_BOOLEAN];
my_nodes[2].value.value.storageType = UA_VARIANT_DATA_NODELETE;
my_nodes[2].value.value.data = &bool_to_write;
UA_WriteRequest_init(&wReq);
wReq.nodesToWrite = my_nodes;
wReq.nodesToWriteSize = 3;
UA_WriteResponse wResp = UA_Client_Service_write(_client, wReq);
UA_WriteResponse_clear(&wResp);
UA_WriteRequest_clear(&wReq);
return;
}
At first I didn't have much hope that this would work, but it turns out it actually writes the values I want. The problem is that UA_WriteRequest_clear(&wReq); triggers an exception inside the open62541 library.
Also, I know I can write multiple values to an array specifically; even though that would fix my issue in this particular example, it's not what I mean to do, as this example is just to simplify my problem. Suppose instead that I have a multi-type structure and I want to write to it, all in a single request. I appreciate any help!

First of all, this is bad:
UA_WriteValue my_nodes[3];
my_nodes[0] = *UA_WriteValue_new();
my_nodes[1] = *UA_WriteValue_new();
my_nodes[2] = *UA_WriteValue_new();
my_nodes is already created on the stack, and then you are copying the contents of newly allocated objects into it by dereferencing them. The allocations are never freed, so this definitely leads to memory leaks. You probably want to use UA_WriteValue_init() instead.
Never ever dereference the return value of a new() function.
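For illustration, a minimal sketch of the init approach (this assumes you keep the array on the stack, in which case you also need the workaround described further down before calling UA_WriteRequest_clear):
UA_WriteValue my_nodes[3];
for (size_t i = 0; i < 3; i++)
    UA_WriteValue_init(&my_nodes[i]); /* zero-initializes, no heap allocation */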
Let's go bottom up:
UA_WriteRequest_clear(&wReq) recursively frees all the content of the wReq structure.
This means that it will also call:
UA_Array_delete(wReq.nodesToWrite, wReq.nodesToWriteSize, ...)
which in turn calls UA_free(wReq.nodesToWrite)
And you have:
wReq.nodesToWrite = my_nodes;
with
UA_WriteValue my_nodes[3];
This means that you are assigning a variable that lives on the stack to a pointer, and later this pointer is freed. free() can only release memory that lives on the heap, not on the stack, and therefore it fails.
You have two options now:
If you still want to use the stack, trick UA_WriteRequest_clear into thinking that the array is empty:
wReq.nodesToWrite = NULL;
wReq.nodesToWriteSize = 0;
UA_WriteRequest_clear(&wReq);
Put the nodes on the heap:
Instead of
UA_WriteValue my_nodes[3];
use something like
UA_WriteValue *my_nodes = (UA_WriteValue*)UA_malloc(sizeof(UA_WriteValue) * 3);
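Putting it together, here is a hedged sketch of a complete multi-node write with the array on the heap, so that UA_WriteRequest_clear can legally free everything. The node-id strings, namespace index and client handle are carried over from the question and are assumptions; double-check the exact function names against your open62541 version.
void write_three_booleans(UA_Client *client, UA_UInt16 nsIndex) {
    UA_Boolean value = false;
    const char *ids[3] = { "POU.AT2.piece_queue[3]",
                           "POU.AT2.piece_queue[4]",
                           "POU.AT2.piece_queue[5]" };
    UA_WriteValue *nodes = (UA_WriteValue*)UA_malloc(sizeof(UA_WriteValue) * 3);
    for (size_t i = 0; i < 3; i++) {
        UA_WriteValue_init(&nodes[i]);
        nodes[i].nodeId = UA_NODEID_STRING_ALLOC(nsIndex, ids[i]);
        nodes[i].attributeId = UA_ATTRIBUTEID_VALUE;
        nodes[i].value.hasValue = true;
        /* deep copy, so the request owns the data and clear() may free it */
        UA_Variant_setScalarCopy(&nodes[i].value.value, &value,
                                 &UA_TYPES[UA_TYPES_BOOLEAN]);
    }
    UA_WriteRequest wReq;
    UA_WriteRequest_init(&wReq);
    wReq.nodesToWrite = nodes;
    wReq.nodesToWriteSize = 3;
    UA_WriteResponse wResp = UA_Client_Service_write(client, wReq);
    /* check wResp.responseHeader.serviceResult and wResp.results here */
    UA_WriteResponse_clear(&wResp);
    UA_WriteRequest_clear(&wReq); /* frees nodes, the node ids and the variant data */
}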
Also, I strongly recommend that you use Valgrind or Clang's memory sanitizer to catch all these memory issues.

Related

Create a checkerboard or "interweave" two linked lists, i.e., change the pointers of two linked lists

So I have two linked lists, each holding a color:
1.black->2.black->3.black->4.black->5.black->NULL
1.red ->2.red ->3.red ->4.red ->5.red ->NULL
I want the function to return
1.black->2.red ->3.black->4.red ->5.black->NULL
1.red ->2.black->3.red ->4.black->5.red ->NULL.
Let's name the first pointers firstBlack and firstRed. To achieve this "checkerboard" pattern, I swap the node each pointer points to into the other list with a simple swap, advance the pointer two spots, then repeat until I'm at the end of the list.
while(firstBlack->next != NULL && firstRed->next != NULL) {
Node * temp = firstBlack->next;
firstBlack->next = firstRed->next;
firstRed->next = temp;
firstBlack = firstBlack->next->next;
firstRed = firstRed->next->next;
}
However, the function isn't doing what it's supposed to, although I'm fairly certain that my logic is correct. I am also getting seg faults :(
This is simple enough code; please use a debugger and step through it line by line.
Also, please post the entire code, not just what's in the while loop.
This code should work correctly.
//Some methods to create these linked lists.
pBlackHead = CreateBlackList();
pRedHead = CreateRedList();
firstBlack = pBlackHead;
firstRed = pRedHead;
while (firstBlack->next != NULL && firstRed->next != NULL) {
    Node * temp = firstBlack->next;
    firstBlack->next = firstRed->next;
    firstRed->next = temp;
    firstBlack = firstBlack->next;
    firstRed = firstRed->next;
}
When printing the lists to check correctness, use pBlackHead and pRedHead. A debugger is not currently available on the system I am using, but this should work.
You are advancing two steps without checking end conditions. Because you have an odd number of items, you dereference a null pointer.
You don't need to care which tail originated in which list to swap them:
for(; left->next && right->next; left = left->next, right = right->next) {
std::swap(left->next, right->next);
}
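For completeness, a hedged, self-contained sketch of the one-step swap approach; the Node type, the list construction and the printing helper are made up for this demo:
#include <iostream>
#include <utility>

struct Node {
    int value;
    const char *color;
    Node *next;
};

// Build a 5-node list labelled with the given color (demo helper only).
Node *makeList(const char *color) {
    Node *head = nullptr, *tail = nullptr;
    for (int i = 1; i <= 5; ++i) {
        Node *n = new Node{i, color, nullptr};
        if (!head) head = n; else tail->next = n;
        tail = n;
    }
    return head;
}

void print(Node *n) {
    for (; n; n = n->next)
        std::cout << n->value << "." << n->color << (n->next ? "->" : "->NULL\n");
}

int main() {
    Node *black = makeList("black");
    Node *red = makeList("red");
    // Interleave: swap the tails at each position, then advance one node.
    for (Node *b = black, *r = red; b->next && r->next; b = b->next, r = r->next)
        std::swap(b->next, r->next);
    print(black); // 1.black->2.red->3.black->4.red->5.black->NULL
    print(red);   // 1.red->2.black->3.red->4.black->5.red->NULL
    // (The nodes are leaked here; a real program would free both lists.)
}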

When creating threads using lambda expressions, how to give each thread its own copy of the lambda expression?

I have been working on a program that basically uses brute force to work backward and find a way to reach a given number using a given set of operations. For example, if I give it a set of operations +5, -7, *10, /3, a target number, say 100 (this example probably won't come up with a solution), and a maximum number of moves to solve (let's say 8), it will attempt to find a way to use these operations to get to 100. This part works using a single thread, which I have tested in an application.
However, I wanted it to be faster, so I turned to multithreading. I worked a long time just to get the lambda function working, and after some serious debugging I realized that the solution "combo" is technically found; however, before it is tested, it gets changed. I wasn't sure how this was possible, considering that I thought each thread was given its own copy of the lambda function and its variables to use.
In summary, the program starts off by parsing the input, then passes the information divided up by the parser as parameters into an array of operation objects (somewhat of a functor). It then uses an algorithm that generates combinations, which are then executed by the operation objects. The algorithm, put simply, takes the number of operations, assigns each one a char value (each char value corresponds to an operation), then outputs a char value. It generates all possible combinations.
That is a summary of how my program works. Everything seems to be working fine and in order other than two things. There is another error which I have not added to the title because there is a way to fix it, but I am curious about alternatives; that workaround is also probably not good for my computer.
So, going back to the problem with the lambda expression passed to the thread: here is what I saw using breakpoints in the debugger. It appeared that the two threads were not generating individual combos; they were correctly alternating on the first number, but alternating combos. So it would go 1111, 2211, rather than generating 1111, 2111 (these are generated one char at a time, as the previous paragraph describes, and combined using a stringstream). But once they got out of the loop that fills up the combo, combos would get lost. Execution would randomly switch between the two threads and never test the correct combo, because the combinations seemed to get scrambled randomly. I realized this must have something to do with race conditions and mutual exclusion. I thought I had avoided all of that by not changing any variables from outside the lambda expression, but it appears that both threads are using the same lambda expression.
I want to know why this occurs, and how I can, say, create an array of these expressions and assign each thread its own, or do something similar that avoids having to deal with mutual exclusion altogether.
Now, the other problem happens when I at the end delete my array of operation objects. The code which assigns them and the deleting code is shown below.
operation *operations[get<0>(functions)];
for (int i = 0; i < get<0>(functions); i++)
{
//creates a new object for each operation in the array and sets it to the corresponding parameter
operations[i] = new operation(parameterStrings[i]);
}
delete[] operations;
The get<0>(functions) is where the number of functions is stored in a tuple; it is the number of objects to be stored in the array. parameterStrings is a vector in which the strings used as parameters for the constructor of the class are stored. This code results in an "Exception trace/breakpoint trap." If I use "*operations" instead, I get a segmentation fault in the file where the class is defined, on the first line where it says "class operation". The alternative is just to comment out the delete part, but I am pretty sure that would be a bad idea, considering that the objects are created using the "new" operator and this might cause memory leaks.
Below is the code for the lambda expression and the corresponding code for the creation of the threads. I re-added the code inside the lambda expression so it can be examined for possible causes of race conditions.
auto threadLambda = [&](int thread, char *letters, operation **operations, int beginNumber) {
int i, entry[len];
bool successfulComboFound = false;
stringstream output;
int outputNum;
for (i = 0; i < len; i++)
{
entry[i] = 0;
}
do
{
for (i = 0; i < len; i++)
{
if (i == 0)
{
output << beginNumber;
}
char numSelect = *letters + (entry[i]);
output << numSelect;
}
outputNum = stoll(output.str());
if (outputNum == 23513511)
{
cout << "strange";
}
if (outputNum != 0)
{
tuple<int, bool> outputTuple;
int previousValue = initValue;
for (int g = 0; g <= (output.str()).length(); g++)
{
operation *copyOfOperation = (operations[((int)(output.str()[g])) - 49]);
//cout << copyOfOperation->inputtedValue;
outputTuple = (*operations)->doOperation(previousValue);
previousValue = get<0>(outputTuple);
if (get<1>(outputTuple) == false)
{
break;
}
debugCheck[thread - 1] = debugCheck[thread - 1] + 1;
if (previousValue == goalValue)
{
movesToSolve = g + 1;
winCombo = outputNum;
successfulComboFound = true;
break;
}
}
//cout << output.str() << ' ';
}
if (successfulComboFound == true)
{
break;
}
output.str("0");
for (i = 0; i < len && ++entry[i] == nbletters; i++)
entry[i] = 0;
} while (i < len);
if (successfulComboFound == true)
{
comboFoundGlobal = true;
finishedThreads.push_back(true);
}
else
{
finishedThreads.push_back(true);
}
};
The threads are created here:
thread *threadArray[numberOfThreads];
for (int f = 0; f < numberOfThreads; f++)
{
threadArray[f] = new thread(threadLambda, f + 1, lettersPointer, operationsPointer, ((int)(workingBeginOperations[f])) - 48);
}
If any more of the code is needed to help solve the problem, please let me know and I will edit the post to add the code. Thanks in advance for all of your help.
Your lambda object captures everything by reference ([&]), so each copy of the lambda used by a thread references the same shared objects, and the threads race and clobber each other.
This is assuming things like movesToSolve and winCombo come from captures (it is not clear from the code, but it seems like it). winCombo is updated when a successful result is found, but another thread might immediately overwrite it right after.
So every thread is using the same data, data races abound.
You want to ensure that your lambda works with only three kinds of data:
Private data
Shared, constant data
Properly synchronized mutable shared data
Generally you want almost everything in categories 1 and 2, with as little as possible in category 3.
Category 1 is the easiest, since you can use e.g., local variables within the lambda function, or captured-by-value variables if you ensure a different lambda instance is passed to each thread.
For category 2, you can use const to ensure the relevant data isn't modified.
Finally you may need some shared global state, e.g., to indicate that a value is found. One option would be something like a single std::atomic<Result *> where when any thread finds a result, they create a new Result object and atomically compare-and-swap it into the globally visible result pointer. Other threads check this pointer for null in their run loop to see if they should bail out early (I assume that's what you want: for all threads to finish if any thread finds a result).
A more idiomatic way would be to use std::promise.
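A hedged sketch of that overall shape follows; the search itself is a placeholder, and the point is that each thread gets its own copy of the lambda and its by-value captures, reads only constant shared data, and publishes a result through a single atomic pointer:
#include <atomic>
#include <iostream>
#include <thread>
#include <vector>

struct Result {
    long long combo;
    int moves;
};

// Category 3: shared mutable state, written only through compare-and-swap.
std::atomic<Result*> g_result{nullptr};

int main() {
    const int goal = 100; // category 2: shared, constant data

    // The lambda captures `goal` by value; everything else it touches is local.
    auto worker = [goal](int begin, int end) {
        for (int candidate = begin; candidate < end; ++candidate) {
            if (g_result.load(std::memory_order_acquire))
                return;                        // another thread already won
            if (candidate == goal) {           // stand-in for the real combo test
                Result *r = new Result{candidate, /*moves=*/1};
                Result *expected = nullptr;
                if (!g_result.compare_exchange_strong(expected, r))
                    delete r;                  // lost the race, discard
                return;
            }
        }
    };

    std::vector<std::thread> threads;
    for (int t = 0; t < 4; ++t)
        threads.emplace_back(worker, t * 50, (t + 1) * 50); // each thread copies the lambda
    for (auto &t : threads)
        t.join();

    if (Result *r = g_result.load())
        std::cout << "found " << r->combo << " after " << r->moves << " move(s)\n";
    delete g_result.load();
}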

Setting a pointer out of its memory range

I'm writing some code to do bitmap blending, and my function has a lot of options. I decided to use a switch to handle those options, but then I needed to either put the switch inside the loop (I read that this affects performance) or write a separate loop for each switch case (which makes the code way too big). I decided to do it a third way (see below):
/* When I need to use static value */
BYTE *pointerToValue = (BYTE*)&blendData.primaryValue;
BYTE **pointerToReference = &pointerToValue;
*pointerToReference = *pointerToReference - 3;
/* When I need srcLine's 4th value (where srcLine is a pointer to BYTE array) */
BYTE **pointerToReference = &srcLine;
while (destY2 < destY1) {
destLine = destPixelArray + (destBytesPerLine * destY2++) + (destX1 * destInc);
srcLine = srcPixelArray + (srcBytesPerLine * srcY2++) + (srcX1 * srcInc);
for (LONG x = destX1; x < destX2; x++, destLine += destInc, srcLine += srcInc) {
BYTE neededValue = *(*pointerToReference + 3); //not yet implemented
destLine[0] = srcLine[0];
destLine[1] = srcLine[1];
destLine[2] = srcLine[2];
if (diffInc == BOTH_ARE_32_BIT)
destLine[3] = srcLine[3];
}
}
Sometimes I might need to use srcLine[3] or blendData.primaryValue. srcLine[3] can be accessed easily with *(*pointerToReference + 3); however, to access blendData.primaryValue I need to reduce the pointer by 3 in order to keep the same expression (*(*pointerToReference + 3)).
So here are my questions:
1. Is it safe to set a pointer out of its memory range if it is later going to be brought back?
2. I'm 100% sure that it won't be used while it's out of range, but can I be sure that it won't cause any kind of access violation?
3. Is there some kind of similar alternative that uses one variable to capture the value of srcLine[3] or blendData.primaryValue without an if(), like it's done in my code sample?
Because of #2 (no usage), the answer to #1 is yes, it is perfectly safe. And because of #1, there is no need for #3. :-)
An access violation could only happen if the pointer were actually used.

priority_queue becomes extremely slow in debug mode

I am currently writing an A* pathfinding algorithm for a game and came across a very strange performance problem regarding priority_queue.
I am using a typical 'open nodes list', where I store found, but yet unprocessed nodes. This is implemented as an STL priority_queue (openList) of pointers to PathNodeRecord objects, which store information about a visited node. They are sorted by the estimated cost to get there (estimatedTotalCost).
Now I noticed that whenever the pathfinding method is called, the respective AI thread gets completely stuck and takes several (~5) seconds to process the algorithm and calculate the path. Subsequently I used the VS2013 profiler to see why and where it was taking so long.
As it turns out, the pushing to and popping from the open list (the priority_queue) takes up a very large amount of time. I am no expert in STL containers, but I never had problems with their efficiency before and this is just weird to me.
The strange thing is that this only occurs while using VS's 'Debug' build configuration. The 'Release' configuration works fine for me and the times are back to normal.
Am I doing something fundamentally wrong here, or why is the priority_queue performing so badly for me? The current situation is unacceptable to me, so if I cannot resolve it soon, I will need to fall back to using a simpler container and inserting into the right place manually.
Any pointers as to why this might be occurring would be very helpful!
Here is a snippet of what the profiler shows me:
http://i.stack.imgur.com/gEyD3.jpg
Code parts:
Here is the relevant part of the pathfinding algorithm, where it loops the open list until there are no open nodes:
// set up arrays and other variables
PathNodeRecord** records = new PathNodeRecord*[graph->getNodeAmount()]; // holds records for all nodes
std::priority_queue<PathNodeRecord*> openList; // holds records of open nodes, sorted by estimated rest cost (most promising node first)
// null all record pointers
memset(records, NULL, sizeof(PathNodeRecord*) * graph->getNodeAmount());
// set up record for start node and put into open list
PathNodeRecord* startNodeRecord = new PathNodeRecord();
startNodeRecord->node = startNode;
startNodeRecord->connection = NULL;
startNodeRecord->closed = false;
startNodeRecord->costToHere = 0.f;
startNodeRecord->estimatedTotalCost = heuristic->estimate(startNode, goalNode);
records[startNode] = startNodeRecord;
openList.push(startNodeRecord);
// ### pathfind algorithm ###
// declare current node variable
PathNodeRecord* currentNode = NULL;
// loop-process open nodes
while (openList.size() > 0) // while there are open nodes to process
{
// retrieve most promising node and immediately remove from open list
currentNode = openList.top();
openList.pop(); // ### THIS IS, WHERE IT GETS STUCK
// if current node is the goal node, end the search here
if (currentNode->node == goalNode)
break;
// look at connections outgoing from this node
for (auto connection : graph->getConnections(currentNode->node))
{
// get end node
PathNodeRecord* toNodeRecord = records[connection->toNode];
if (toNodeRecord == NULL) // UNVISITED -> path record needs to be created and put into open list
{
// set up path node record
toNodeRecord = new PathNodeRecord();
toNodeRecord->node = connection->toNode;
toNodeRecord->connection = connection;
toNodeRecord->closed = false;
toNodeRecord->costToHere = currentNode->costToHere + connection->cost;
toNodeRecord->estimatedTotalCost = toNodeRecord->costToHere + heuristic->estimate(connection->toNode, goalNode);
// store in record array
records[connection->toNode] = toNodeRecord;
// put into open list for future processing
openList.push(toNodeRecord);
}
else if (!toNodeRecord->closed) // OPEN -> evaluate new cost to here and, if better, update open list entry; otherwise skip
{
float newCostToHere = currentNode->costToHere + connection->cost;
if (newCostToHere < toNodeRecord->costToHere)
{
// update record
toNodeRecord->connection = connection;
toNodeRecord->estimatedTotalCost = newCostToHere + (toNodeRecord->estimatedTotalCost - toNodeRecord->costToHere);
toNodeRecord->costToHere = newCostToHere;
}
}
else // CLOSED -> evaluate new cost to here and, if better, put back on open list and reset closed status; otherwise skip
{
float newCostToHere = currentNode->costToHere + connection->cost;
if (newCostToHere < toNodeRecord->costToHere)
{
// update record
toNodeRecord->connection = connection;
toNodeRecord->estimatedTotalCost = newCostToHere + (toNodeRecord->estimatedTotalCost - toNodeRecord->costToHere);
toNodeRecord->costToHere = newCostToHere;
// reset node to open and push into open list
toNodeRecord->closed = false;
openList.push(toNodeRecord); // ### THIS IS, WHERE IT GETS STUCK
}
}
}
// set node to closed
currentNode->closed = true;
}
Here is my PathNodeRecord with the 'less' operator overloading to enable sorting in priority_queue:
namespace AI
{
struct PathNodeRecord
{
Node node;
NodeConnection* connection;
float costToHere;
float estimatedTotalCost;
bool closed;
// overload less operator comparing estimated total cost; used by priority queue
// nodes with a higher estimated total cost are considered "less"
bool operator < (const PathNodeRecord &otherRecord)
{
return this->estimatedTotalCost > otherRecord.estimatedTotalCost;
}
};
}
std::priority_queue<PathNodeRecord*> openList
I think the reason is that you have a priority_queue of pointers to PathNodeRecord, and there is no meaningful ordering defined for the pointers.
Try changing it to std::priority_queue<PathNodeRecord> first; if that makes a difference, then all you need is to pass in your own comparator that knows how to compare pointers to PathNodeRecord: it just dereferences the pointers first and then does the comparison.
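A hedged sketch of that comparator approach, keeping the pointer-based queue (PathNodeRecord is reduced here to the one member that matters):
#include <queue>
#include <vector>

// Minimal stand-in for the PathNodeRecord from the question.
struct PathNodeRecord {
    float estimatedTotalCost;
};

// Dereferences the pointers and orders by estimated total cost, so the
// cheapest (most promising) record ends up on top of the queue.
struct CompareByEstimatedCost {
    bool operator()(const PathNodeRecord *a, const PathNodeRecord *b) const {
        return a->estimatedTotalCost > b->estimatedTotalCost;
    }
};

int main() {
    std::priority_queue<PathNodeRecord*,
                        std::vector<PathNodeRecord*>,
                        CompareByEstimatedCost> openList;
    PathNodeRecord a{3.0f}, b{1.0f};
    openList.push(&a);
    openList.push(&b);
    // openList.top() is now &b, the record with the lowest estimated cost.
}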
EDIT:
Taking a wild guess about why you got such an extremely slow execution time: I think the pointers were compared based on their addresses, and the addresses were allocated starting from one point in memory and going up.
That resulted in the extreme case for your heap (the heap as in the data structure, not the memory region), so your heap was effectively a list (a tree where each node has one child node, and so on).
So your operations took linear time. Again, just a guess.
You cannot expect a debug build to be as fast as a release-optimized one, but you seem to do a lot of dynamic allocation, which may interact badly with the debug runtime.
I suggest you add _NO_DEBUG_HEAP=1 to the environment settings in the debug property page of your project.

Why doesn't the function execute completely?

When I try to debug the following function segment, execution breaks (jumps out of the function) at the line pCellTower->m_pCellTowerInfo = pCellInfo:
RILCELLTOWERINFO* pCellInfo = (RILCELLTOWERINFO*)lpData;
CCellTower *pCellTower = (CCellTower*)cbData;
if(pCellTower != NULL)
{
pCellTower->m_pCellTowerInfo = pCellInfo;
}
(the pointer pCellInfo is not set)
Then I tried to comment the line:
RILCELLTOWERINFO* pCellInfo = (RILCELLTOWERINFO*)lpData;
CCellTower *pCellTower = (CCellTower*)cbData;
if(pCellTower != NULL)
{
//pCellTower->m_pCellTowerInfo = pCellInfo;
}
and this way the function executes normally.
Does anyone know what could be wrong?
The most likely explanation is that pCellTower isn't set either. It could contain random bits and end up pointing outside the memory allocated to your app. The OS cannot allow your program to write outside the space allocated to it, so it sends the program some kind of message (Windows: exception, Unix/Linux: signal) that the write was rejected.
If you trace backwards where the cbData value originates from, you'll probably find it is an uninitialized, random value.