When I try to debug the following function segment, execution breaks (jumps out of the function) at the line pCellTower->m_pCellTowerInfo = pCellInfo:
RILCELLTOWERINFO* pCellInfo = (RILCELLTOWERINFO*)lpData;
CCellTower *pCellTower = (CCellTower*)cbData;
if(pCellTower != NULL)
{
    pCellTower->m_pCellTowerInfo = pCellInfo;
}
(the pointer pCellInfo is not set)
Then I tried commenting out the line:
RILCELLTOWERINFO* pCellInfo = (RILCELLTOWERINFO*)lpData;
CCellTower *pCellTower = (CCellTower*)cbData;
if(pCellTower != NULL)
{
    //pCellTower->m_pCellTowerInfo = pCellInfo;
}
and this way the function executes normally.
Does anyone know what could be wrong?
The most likely explanation is that pCellTower isn't set either. It could contain random bits and end up pointing outside the memory allocated to your app. The OS cannot allow your program to write outside the space allocated to it, so it sends the program some kind of notification (on Windows an exception, on Unix/Linux a signal) that the write was rejected.
If you trace backwards where the cbData value originates from, you'll probably find it is an uninitialized, random value.
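Below is a minimal, self-contained sketch of the failure mode described above. It is generic C++ with a made-up callback signature and stand-in struct names, not the real RIL API:

struct Info { int value; };

struct Holder {
    Info *m_pInfo;
};

// Hypothetical callback: lpData carries the payload, cbData is supposed to
// carry the address of a Holder supplied when the callback was registered.
void callback(void *lpData, void *cbData)
{
    Info   *pInfo   = static_cast<Info *>(lpData);
    Holder *pHolder = static_cast<Holder *>(cbData);

    // If the registration code never set cbData, pHolder holds random bits.
    // The NULL check passes, but the write below lands in memory the process
    // does not own, and the OS reports an access violation / SIGSEGV.
    if (pHolder != NULL)
    {
        pHolder->m_pInfo = pInfo;
    }
}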
I am attempting to write multiple nodes in a single request, but I have not found any documentation or examples on how to do that; everything I find regarding the issue writes only a single node. Based on my understanding of the open62541 library (which is not much), I've attempted to do it like so:
void Write_from_3_to_5_piece_queue() {
    char NodeID[128];
    char NodeID_backup[128];
    char aux[3];
    bool bool_to_write = false;

    strcpy(NodeID_backup, _BaseNodeID);
    strcat(NodeID_backup, "POU.AT2.piece_queue["); // this is where I want to write, I need only to append the array index in which to write

    UA_WriteRequest wReq;
    UA_WriteValue my_nodes[3]; // this is where I start to make things up, I'm not sure this is the correct way to do it
    my_nodes[0] = *UA_WriteValue_new();
    my_nodes[1] = *UA_WriteValue_new();
    my_nodes[2] = *UA_WriteValue_new();

    strcpy(NodeID, NodeID_backup);
    strcat(NodeID, "3]"); // append third index of array (will write to piece_queue[3])
    my_nodes[0].nodeId = UA_NODEID_STRING_ALLOC(_nodeIndex, NodeID);
    my_nodes[0].attributeId = UA_ATTRIBUTEID_VALUE;
    my_nodes[0].value.hasValue = true;
    my_nodes[0].value.value.type = &UA_TYPES[UA_TYPES_BOOLEAN];
    my_nodes[0].value.value.storageType = UA_VARIANT_DATA_NODELETE;
    my_nodes[0].value.value.data = &bool_to_write;

    strcpy(NodeID, NodeID_backup);
    strcat(NodeID, "4]");
    my_nodes[1].nodeId = UA_NODEID_STRING_ALLOC(_nodeIndex, NodeID);
    my_nodes[1].attributeId = UA_ATTRIBUTEID_VALUE;
    my_nodes[1].value.hasValue = true;
    my_nodes[1].value.value.type = &UA_TYPES[UA_TYPES_BOOLEAN];
    my_nodes[1].value.value.storageType = UA_VARIANT_DATA_NODELETE;
    my_nodes[1].value.value.data = &bool_to_write;

    strcpy(NodeID, NodeID_backup);
    strcat(NodeID, "5]");
    my_nodes[2].nodeId = UA_NODEID_STRING_ALLOC(_nodeIndex, NodeID);
    my_nodes[2].attributeId = UA_ATTRIBUTEID_VALUE;
    my_nodes[2].value.hasValue = true;
    my_nodes[2].value.value.type = &UA_TYPES[UA_TYPES_BOOLEAN];
    my_nodes[2].value.value.storageType = UA_VARIANT_DATA_NODELETE;
    my_nodes[2].value.value.data = &bool_to_write;

    UA_WriteRequest_init(&wReq);
    wReq.nodesToWrite = my_nodes;
    wReq.nodesToWriteSize = 3;

    UA_WriteResponse wResp = UA_Client_Service_write(_client, wReq);
    UA_WriteResponse_clear(&wResp);
    UA_WriteRequest_clear(&wReq);
    return;
}
At first I didn't have much hope that this would work, but it turns out it actually writes the values that I want. The problem is that UA_WriteRequest_clear(&wReq); triggers an exception inside the open62541 library.
Also, I know I can write multiple values to an array specifically; even though that would fix my issue in this particular example, it's not what I mean to do. This example is only meant to simplify my problem: just suppose I have a multi-type structure and I want to write to it, all in a single request. I appreciate any help!
First of all, this is bad:
UA_WriteValue my_nodes[3];
my_nodes[0] = *UA_WriteValue_new();
my_nodes[1] = *UA_WriteValue_new();
my_nodes[2] = *UA_WriteValue_new();
my_nodes is already created on the stack, and then you are copying the contents of a newly heap-allocated object into it by dereferencing. This definitely leads to memory leaks, because the objects returned by UA_WriteValue_new() are never freed. You probably want to use UA_WriteValue_init() instead.
Never ever dereference the return value of a new() function.
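As a minimal sketch of that suggestion (same variables as in the question), initializing the stack array in place instead of dereferencing new objects would look roughly like this:

UA_WriteValue my_nodes[3];
UA_WriteValue_init(&my_nodes[0]);   // zero-initializes in place, nothing allocated
UA_WriteValue_init(&my_nodes[1]);
UA_WriteValue_init(&my_nodes[2]);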
Let's go bottom up:
UA_WriteRequest_clear(&wReq) recursively frees all content of the wReq structure.
This means that it will also call:
UA_Array_delete(wReq.nodesToWrite, wReq.nodesToWriteSize, ...)
which in turn calls UA_free(wReq.nodesToWrite)
And you have:
wReq.nodesToWrite = my_nodes;
with
UA_WriteValue my_nodes[3];
This means that you are assigning the address of a variable that lives on the stack to a pointer, and later this pointer is freed. free can only release memory that lives on the heap, not on the stack, and therefore it fails.
You have two options now:
If you still want to use the stack, trick UA_clear into thinking that the array is empty:
wReq.nodesToWrite = NULL;
wReq.nodesToWriteSize = 0;
UA_clear(&wReq);
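One caveat worth adding here (this note is an addition, not part of the answer above): the node ids created with UA_NODEID_STRING_ALLOC own heap memory of their own, so with the stack array you would still want to clear each element yourself before it goes out of scope, roughly:

for (size_t i = 0; i < 3; i++)
    UA_WriteValue_clear(&my_nodes[i]);   // frees the ALLOC'd node-id strings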
Put the nodes on the heap:
Instead of
UA_WriteValue my_nodes[3];
use something like
UA_WriteValue *my_nodes = (UA_WriteValue*)UA_malloc(sizeof(UA_WriteValue)*3);
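A rough sketch of the heap option, assuming the same variables as in the question; UA_Array_new is used here because it also zero-initializes the elements, but plain UA_malloc plus UA_WriteValue_init works as well:

UA_WriteValue *my_nodes = (UA_WriteValue*)
    UA_Array_new(3, &UA_TYPES[UA_TYPES_WRITEVALUE]);

/* ... fill my_nodes[0..2] exactly as in the question ... */

UA_WriteRequest_init(&wReq);
wReq.nodesToWrite = my_nodes;
wReq.nodesToWriteSize = 3;

UA_WriteResponse wResp = UA_Client_Service_write(_client, wReq);
UA_WriteResponse_clear(&wResp);
UA_WriteRequest_clear(&wReq);   // now frees heap memory it is allowed to free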
Also, I strongly recommend that you use either Valgrind or Clang's sanitizers to catch all of these memory issues.
I'm guessing this is a basic question: I'm using the f_readdir function in FatFs to read the contents of a directory from a command line, but I'm having trouble getting the function to run multiple times. It works fine the first time around: I type "directory" into the CLI and it displays every file in the directory, one line at a time. Asking it to repeat the f_readdir operation, however (i.e. by typing the "directory" command again after the first one has successfully completed), outputs nothing.
I believe this is because the directory read object isn't being rewound back to zero at the end of the f_readdir operation, and subsequent requests to read the directory are starting at a portion of the index that doesn't exist. That's the best explanation I can come up with at this point, at least. Elm Chan's FatFs website says that "When all directory items have been read and no item to read, a null string is stored into the fno->fname[] without any error. When a null pointer is given to the fno, the read index of the directory object is rewinded."
Here's my code. Ignore the function arguments, they're RTOS stuff to run the command:
void Cmd_directory::run(XString param, XRTOS::XCLI::userIO* io){
    DIR dj;          /* [IN] Directory search object */
    FILINFO fno;     /* [OUT] File information structure */
    FRESULT res;

    res = f_opendir(&dj, "0:");
    while (res == FR_OK && fno.fname[0]) {
        res = f_readdir(&dj, &fno);
        io->print((const char*)fno.fname);
        io->print("\r\n");
    }
    f_closedir(&dj);
}
This while loop was something I found on the web, so unfortunately I don't fully understand how the fname index works, however many times I've read the detailed explanation. Perhaps it doesn't realize it's hitting the end of the directory because of the conditions in my while loop, though it certainly completes and closes the directory successfully. When I run the function again, I can see that the fno object still holds the file information from the previous run.
Things I've tried (and at this point it's worth adding I'm quite new to the world of programming):
&fno = nullptr; //produces "error: lvalue required as left operand of assignment"
FILINFO *p = nullptr; //produces all sorts of errors
fno = p;
//these were shots in the dark
fno.fname[0] = 0;
memset(fno.fname, 0, sizeof(fno.fname));
I imagine this is some basic stuff that I'm just not getting, so forgive me if I'm way off on this. I don't have access to a real programmer IRL, unfortunately, so I'm forced to poll the community.
Oh right, and I'm using the Eclipse IDE, GNU ARM Build tools with an STM32L.
Here:
while (res == FR_OK && fno.fname[0]) {
fno.fname[0] is uninitialised - you get lucky the first time because it happens to contain non-zero junk. The second time it likely contains whatever the function previously left in it - i.e. the NUL from the previous call - which terminates the loop immediately.
The following should work:
res = f_readdir( &dj, &fno ) ;
while( res == FR_OK && fno.fname[0] )
{
    io->print((const char*)fno.fname);
    io->print("\r\n");
    res = f_readdir(&dj, &fno);
}
From your previous attempts, fno.fname[0] = 0; was close - if only you'd not explicitly set it to the loop-termination value! The following minor change to your code should work too:
fno.fname[0] = 1;
while (res == FR_OK && fno.fname[0]) {
    res = f_readdir(&dj, &fno);
    io->print((const char*)fno.fname);
    io->print("\r\n");
}
but it will print a blank line if the directory is empty.
To be honest the semantics of the ELM FatFs f_readdir() are somewhat odd. You might consider a wrapper to give it a more consistent and conventional interface:
FRESULT readdir( DIR* dp,     /* [IN] Directory object */
                 FILINFO* fno /* [OUT] File information structure */ )
{
    FRESULT res = f_readdir( dp, fno ) ;
    if( res == FR_OK && fno->fname[0] == 0 )
    {
        res = FR_NO_FILE ;
    }
    return res ;
}
Then you can write (for example):
while( readdir( &dj, &fno ) == FR_OK )
{
    io->print((const char*)fno.fname);
    io->print("\r\n");
}
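Putting the primary fix back into the command from the question, a complete version might look roughly like this (the class, io->print calls and the "0:" path are taken from the question; the f_opendir error handling is an addition):

void Cmd_directory::run(XString param, XRTOS::XCLI::userIO* io)
{
    DIR dj;                       /* [IN]  Directory search object */
    FILINFO fno;                  /* [OUT] File information structure */

    FRESULT res = f_opendir(&dj, "0:");
    if (res != FR_OK) {
        io->print("f_opendir failed\r\n");
        return;
    }

    res = f_readdir(&dj, &fno);            // read the first entry before testing fname
    while (res == FR_OK && fno.fname[0]) {
        io->print((const char*)fno.fname);
        io->print("\r\n");
        res = f_readdir(&dj, &fno);        // a NUL in fname[0] marks the end of the directory
    }

    f_closedir(&dj);
}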
I have been working on a program that basically uses brute force to work backward and find a way, using a given set of operations, to reach a given number. So, for example, if I gave it the set of operations +5, -7, *10, /3, a target number, say 100 (this example probably won't come up with a solution), and a maximum number of moves to solve it (let's say 8), it would attempt to come up with a sequence of these operations that reaches 100. This part works using a single thread, which I have tested in an application.
However, I wanted it to be faster, and that brought me to multithreading. I spent a long time just getting the lambda function to work, and after some serious debugging I realized that the solution "combo" is technically found; however, before it is tested, it gets changed. I wasn't sure how this was possible, considering that I had thought each thread was given its own copy of the lambda function and its variables to use.
In summary, the program starts by parsing the information, then passes the information, as divided up by the parser, as parameters into an array of operation objects (somewhat of a functor). It then uses an algorithm that generates combinations, which are then executed by the operation objects. The algorithm, in simple terms, takes the number of operations, assigns each one a char value (each char value corresponds to an operation), and outputs char values. It generates all possible combinations.
That is a summary of how my program works. Everything seems to be working fine and in order other than two things. There is a second error which I have not added to the title because there is a way to fix it, but I am curious about alternatives; that workaround is also probably not good for my computer.
So, going back to the problem with the lambda expression passed to the thread: what I saw using breakpoints in the debugger was that the two threads were not generating individual combos. They were properly alternating on the first number, but they were alternating combos as well, so it would go 1111, 2211 rather than generating 1111, 2111 (these are generated as the previous paragraph described, one char at a time, combined using a stringstream). Once they got out of the loop that filled the combo up, combos would get lost: execution would randomly switch between the two and never test the correct combo, because combinations seemed to get scrambled randomly. I realized this must have something to do with race conditions and mutual exclusion. I had thought I had avoided all of that by not changing any variables that come from outside the lambda expression, but it appears that both threads are using the same lambda expression.
I want to know why this occurs, and how I can, say, create an array of these expressions and assign each thread its own, or do something similar that avoids having to deal with mutual exclusion altogether.
Now, the other problem happens when, at the end, I delete my array of operation objects. The code which creates them and the code which deletes them are shown below.
operation *operations[get<0>(functions)];
for (int i = 0; i < get<0>(functions); i++)
{
    // creates a new object for each operation in the array and sets it to the corresponding parameter
    operations[i] = new operation(parameterStrings[i]);
}
delete[] operations;
The get<0>(functions) is where the number of functions is stored in a tuple, and it is the number of objects to be stored in the array. parameterStrings is a vector in which the strings used as parameters for the constructor of the class are stored. This code results in an "Exception trace/breakpoint trap." If I use "*operations" instead, I get a segmentation fault in the file where the class is defined, on the first line where it says "class operation." The alternative is just to comment out the delete part, but I am pretty sure that would be a bad idea, considering that the objects are created using the "new" operator and it might cause memory leaks.
Below is the code for the lambda expression and the corresponding code for the creation of the threads. I re-added the code inside the lambda expression so it can be examined for possible causes of race conditions.
auto threadLambda = [&](int thread, char *letters, operation **operations, int beginNumber) {
    int i, entry[len];
    bool successfulComboFound = false;
    stringstream output;
    int outputNum;
    for (i = 0; i < len; i++)
    {
        entry[i] = 0;
    }
    do
    {
        for (i = 0; i < len; i++)
        {
            if (i == 0)
            {
                output << beginNumber;
            }
            char numSelect = *letters + (entry[i]);
            output << numSelect;
        }
        outputNum = stoll(output.str());
        if (outputNum == 23513511)
        {
            cout << "strange";
        }
        if (outputNum != 0)
        {
            tuple<int, bool> outputTuple;
            int previousValue = initValue;
            for (int g = 0; g <= (output.str()).length(); g++)
            {
                operation *copyOfOperation = (operations[((int)(output.str()[g])) - 49]);
                //cout << copyOfOperation->inputtedValue;
                outputTuple = (*operations)->doOperation(previousValue);
                previousValue = get<0>(outputTuple);
                if (get<1>(outputTuple) == false)
                {
                    break;
                }
                debugCheck[thread - 1] = debugCheck[thread - 1] + 1;
                if (previousValue == goalValue)
                {
                    movesToSolve = g + 1;
                    winCombo = outputNum;
                    successfulComboFound = true;
                    break;
                }
            }
            //cout << output.str() << ' ';
        }
        if (successfulComboFound == true)
        {
            break;
        }
        output.str("0");
        for (i = 0; i < len && ++entry[i] == nbletters; i++)
            entry[i] = 0;
    } while (i < len);
    if (successfulComboFound == true)
    {
        comboFoundGlobal = true;
        finishedThreads.push_back(true);
    }
    else
    {
        finishedThreads.push_back(true);
    }
};
The threads are created here:
thread *threadArray[numberOfThreads];
for (int f = 0; f < numberOfThreads; f++)
{
    threadArray[f] = new thread(threadLambda, f + 1, lettersPointer, operationsPointer, ((int)(workingBeginOperations[f])) - 48);
}
If any more of the code is needed to help solve the problem, please let me know and I will edit the post to add the code. Thanks in advance for all of your help.
Your lambda captures everything it uses by reference ([&]), so each copy of the lambda used by a thread references the same shared objects, and the various threads race and clobber each other.
This is assuming things like movesToSolve and winCombo come from captures (it is not clear from the code, but it seems like it). winCombo is updated when a successful result is found, but another thread might immediately overwrite it right after.
So every thread is using the same data, data races abound.
You want to ensure that your lambda works only on three types of data:
Private data
Shared, constant data
Properly synchronized mutable shared data
Generally you want to have almost everything in category 1 and 2, with as little as possible in category 3.
Category 1 is the easiest, since you can use e.g., local variables within the lambda function, or captured-by-value variables if you ensure a different lambda instance is passed to each thread.
For category 2, you can use const to ensure the relevant data isn't modified.
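A minimal sketch of categories 1 and 2 applied to this kind of thread setup (the variable names here, apart from numberOfThreads, are placeholders rather than the question's own):

#include <sstream>
#include <thread>
#include <vector>

void launchWorkers(int numberOfThreads, const std::vector<char>& letters)
{
    std::vector<std::thread> threads;
    for (int t = 0; t < numberOfThreads; ++t)
    {
        // t is captured by value, so each thread gets its own copy (category 1);
        // letters is shared but only ever read through a const reference (category 2).
        threads.emplace_back([t, &letters]() {
            std::stringstream output;                 // category 1: private to this thread
            output << "thread " << (t + 1) << " sees " << letters.size() << " letters";
            // ... generate and test this thread's share of the combinations
            //     using only local state like `output` ...
        });
    }
    for (auto& th : threads)
        th.join();
}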
Finally you may need some shared global state, e.g., to indicate that a value is found. One option would be something like a single std::atomic<Result *> where when any thread finds a result, they create a new Result object and atomically compare-and-swap it into the globally visible result pointer. Other threads check this pointer for null in their run loop to see if they should bail out early (I assume that's what you want: for all threads to finish if any thread finds a result).
A more idiomatic way would be to use std::promise.
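As a rough sketch of the std::atomic<Result *> idea above (the Result fields mirror winCombo and movesToSolve from the question; everything else here is made up for illustration):

#include <atomic>
#include <thread>
#include <vector>

struct Result {
    long long winCombo;
    int movesToSolve;
};

std::atomic<Result*> g_result{nullptr};   // category 3: the only shared mutable state

// Hypothetical stand-in for running the operation objects on one combination.
bool testCombo(long long combo, int* movesOut)
{
    (void)combo;
    *movesOut = 0;
    return false;   // placeholder: a real test would apply the operations here
}

void worker(long long begin, long long end)
{
    for (long long combo = begin; combo < end; ++combo)
    {
        if (g_result.load(std::memory_order_acquire) != nullptr)
            return;                              // another thread already published a result

        int moves = 0;
        if (testCombo(combo, &moves))
        {
            Result* r = new Result{combo, moves};
            Result* expected = nullptr;
            // Only the first successful thread wins the compare-and-swap; losers clean up.
            if (!g_result.compare_exchange_strong(expected, r))
                delete r;
            return;
        }
    }
}

int main()
{
    std::vector<std::thread> threads;
    threads.emplace_back(worker, 0, 500000);
    threads.emplace_back(worker, 500000, 1000000);
    for (auto& t : threads)
        t.join();
    delete g_result.load();   // at most one Result was published; deleting nullptr is a no-op
}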
I'm currently working on a program, and I'm running into a small issue. It's reading data from a text file, and when the numbers are in ascending order, it runs fine, but when I have numbers in a random order, it crashes. I've debugged it and traced it to this if statement, but I can't figure out what the heck I did wrong.
if(tempNode != NULL)
{
    struct doublyLinkNode* temp = new doublyLinkNode;
    temp->nextNode = tempNode;
    temp->previousNode = tempNode->previousNode;
    temp->nodeValue = noToInsert;
    tempNode->previousNode->nextNode = temp;
    tempNode->previousNode = temp;
    list->count++;
    return true;
} // end if
The list building crashes when a new number to be added precedes the current top of the list. I think the code is attempting to write through an invalid pointer.
Your error is to be expected. You want to insert nodes before the current one (tempNode), and you're using tempNode->previousNode in the code. If tempNode happens to be the first node, what's tempNode->previousNode? Right, NULL (unless you have a circular list, but then you wouldn't have this problem). That means tempNode->previousNode->nextNode = temp; will crash.
As a solution to this part, just add an if:
if(tempNode->previousNode != NULL) tempNode->previousNode->nextNode = temp;
(assuming that everything is initialized properly). Depending on how you implemented the list, you may also need to update the record of which node is the first one, as sketched below.
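A rough sketch of the guarded insertion, put back into the question's if block; the list head member name (firstNode) is an assumption, since the real one isn't shown:

if (tempNode != NULL)
{
    struct doublyLinkNode* temp = new doublyLinkNode;
    temp->nextNode = tempNode;
    temp->previousNode = tempNode->previousNode;
    temp->nodeValue = noToInsert;

    if (tempNode->previousNode != NULL)
        tempNode->previousNode->nextNode = temp;   // splice in after the predecessor
    else
        list->firstNode = temp;                    // temp becomes the new head of the list

    tempNode->previousNode = temp;
    list->count++;
    return true;
} // end if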
EDIT: Pastebin links to the entirety of the code at the bottom
For my CS215 course, I was given a class called string215, which is a basic string class meant to help in understanding dynamic memory allocation and pointer arithmetic with char arrays.
The class was given to me in a very basic skeleton form, with prototypes but no implementations, along with a test function to test my implementations. I CAN NOT use any C string functions in this assignment.
The part of the program that is giving me trouble is the append function, which just appends a parameter string215 object to the end of the current string215 object.
// Add a suffix to the end of this string. Allocates and frees memory.
void string215::append(const string215 &suffix)
{
    char *output = new char[str_len(data)+suffix.length()+1];
    for(int x = 0; x < str_len(data); x++) {
        *output = *data;
        output++;
        data++;
    }
    for(int x = 0; x < suffix.length(); x++) {
        *output = suffix.getchar(x);
        output++;
    }
    *output = '\0';
    output -= (str_len(data)+suffix.length()+1);
    delete[] data;
    data = output;
}
This portion of the code is tested in the 13th test of the test function as shown here:
string215 str("testing");
...
// Test 13: test that append works in a simple case.
curr_test++;
string215 suffix("123");
str.append(suffix);
if (strcmp(str.c_str(), "testing123") != 0) {
cerr << "Test " << curr_test << " failed." << endl;
failed++;
}
Here is the description of the append function:
Add the suffix to the end of this string. Allocates a new, larger, array; copies the old contents, followed by the suffix, to the new array; then frees the old array and updates the pointer to the new one.
My program aborts at the very end of the append function execution with the error message:
Debug Assertion Failed!
Program: [Source path]\dbgdel.cpp
Line: 52
Expression: _BLOCK_TYPE_IS_VALID(pHead->nBlockUse)
...
Abort || Retry || Ignore
I'm fairly certain it has something to do with my very poor memory management. I know it's not a lot to go on, but I've been struggling with this for hours on end and can't seem to figure it out.
Here are pastebins of the .cpp and .h files for this program:
string215.cpp: http://pastebin.com/Xh2SvDKJ
string215.h: http://pastebin.com/JfAJDEVN
Any help at all is greatly appreciated!
Thanks,
RAW-BERRY
You are changing the data pointer before the delete[]. You need to delete[] exactly the same value you got from new[].
Also, you are incrementing the output pointer str_len(data)+suffix.length() times, but you move it back by str_len(data) + suffix.length() + 1.
I would use separate variables for iteration to solve these problems.
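As a rough sketch of that suggestion, an append that indexes with separate variables and never moves data or output might look like this (str_len, length and getchar are the helpers already used in the question's code; their exact signatures may differ):

void string215::append(const string215 &suffix)
{
    int oldLen = str_len(data);              // measured before anything changes
    int sufLen = suffix.length();
    char *output = new char[oldLen + sufLen + 1];

    for (int x = 0; x < oldLen; x++)
        output[x] = data[x];                 // copy the existing characters
    for (int x = 0; x < sufLen; x++)
        output[oldLen + x] = suffix.getchar(x);  // copy the suffix
    output[oldLen + sufLen] = '\0';

    delete[] data;                           // data still points at what new[] returned
    data = output;
}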
You increment output exactly str_len(data) + suffix.length() times. Note that you don't increment output after *output = '\0';.
So to go back to the start, you should use:
output -= (str_len(data) + suffix.length());
By the way, some of the code is not very efficient. For example, getchar uses a loop instead of simply returning data[index]. You use getchar in append, which means that the performance isn't great.
EDIT: As zch says, you use delete[] data after modifying data, but note that even before that you use str_len(data) after modifying data (when deciding how many bytes to skip back), so the calculation is wrong (and my suggestion above is also wrong, because str_len(data) is now zero).
So I think your problem is with the line
for(int x = 0; x < str_len(data); x++) {
Notice that the size of 'data' is changing at each iteration of the loop. As you increment 'x', you are decreasing the length of 'data'. Suppose 'data' is a string holding "hello": in the first iteration of the loop x=0 and str_len(data)=5; in the second iteration x=1 and str_len(data)=4. Thus the for loop executes half as many times as you need it to and 'data' does not end up pointing to the end of the data string