C++ insertion/extraction operator

I am having a bit of trouble with the extraction operator: it keeps reading values across line boundaries instead of stopping at the end of each line.
The function is:
friend std::istream &operator>>(std::istream &in, Points2D &some_points) {
    // If a sequence exists, deallocate it.
    if (some_points.sequence_ != nullptr) {
        delete[] some_points.sequence_;
        some_points.sequence_ = nullptr;
    }
    in >> some_points.size_;
    some_points.sequence_ = new std::array<Object, 2>[some_points.size()];
    for (size_t i = 0; i < some_points.size(); i++) {
        in >> some_points.sequence_[i][0] >> some_points.sequence_[i][1];
    }
    std::cout << std::endl;
    return in;
}
It is supposed to read in the values as pairs. The autograder for this assignment rejects my program because the output is not correct.
Your program is being tested for the following input:
4 1.5 2.5 6.6 9.2 4.5 3.2 5.4
3 200.0 6.0 450.2 8.8 9.6 3.4
Your program produced the following output:
Enter a sequence of points (double)
(1.5, 2.5) (6.6, 9.2) (4.5, 3.2) (5.4, 3) // THIS IS WHERE IT HAPPENS: 5.4 AND 3 ARE PAIRED TOGETHER INSTEAD OF STOPPING
Enter a sequence of points (double)
(0, 6) (450.2, 8.8) (9.6, 3.4) (0, 0) (0, 0) (0, 0) (0, 0) (0, 0) (0, 0)
(0, 0) (0, 0) (0, 0) ... (0, 0) (0, 0) (0, 0)
This line is stated to have 4 pairs; the program should abort because 5.4 has no second value to pair with it.
4 1.5 2.5 6.6 9.2 4.5 3.2 5.4 // (1.5,2.5) (6.6,9.2) (4.5,3.2)
This line of 3 pairs is fine, as every value has a partner.
3 200.0 6.0 450.2 8.8 9.6 3.4 // (200.0,6.0) (450.2,8.8) (9.6,3.4)
My question is: how do I make my program abort when a value has no partner, instead of reading one from the next line?
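One way to do this (a sketch, not the original Points2D class: the Points struct here is made up for illustration) is to read one full line with std::getline, then extract from a std::istringstream. Because the istringstream only contains that single line, a pair with a missing second coordinate fails the extraction, and you can set failbit so the caller can abort:

```cpp
#include <array>
#include <cassert>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical stand-in for Points2D, for illustration only.
struct Points {
    std::vector<std::array<double, 2>> sequence;
};

std::istream& operator>>(std::istream& in, Points& p) {
    std::string line;
    if (!std::getline(in, line)) return in;

    std::istringstream ss(line);  // extraction cannot cross line boundaries
    std::size_t n;
    if (!(ss >> n)) { in.setstate(std::ios::failbit); return in; }

    p.sequence.assign(n, {0.0, 0.0});
    for (std::size_t i = 0; i < n; ++i) {
        // If the second value of a pair is missing, this extraction fails.
        if (!(ss >> p.sequence[i][0] >> p.sequence[i][1])) {
            in.setstate(std::ios::failbit);  // caller can check this and abort
            return in;
        }
    }
    return in;
}
```

With this, the line `4 1.5 2.5 6.6 9.2 4.5 3.2 5.4` leaves the stream in a failed state instead of stealing a value from the next line.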

Related

TFLite segmentation fault when getting inputs and outputs with C++

I'm trying to run a TfLite model on an x86_64 system. Everything seems to be working fine, but when I try to get the input or output tensor with typed_input_tensor(0), I get a null pointer.
My model is a simple HelloWorldNN:
import tensorflow as tf
import numpy as np
from tensorflow import keras
model = tf.keras.Sequential([keras.layers.Dense(units=1, input_shape=[1])])
model.compile(optimizer='sgd', loss='mean_squared_error')
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)
model.fit(xs, ys, epochs=10)
print(model.predict([10.0]))
model.summary()
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("linear.tflite","wb").write(tflite_model)
For the C++ part I cloned the tensorflow git and checked out the commit d855adfc5a0195788bf5f92c3c7352e638aa1109. This is the commit that is necessary for using the Coral hardware I plan to use. I built tensorflow-lite.a and linked it to my application.
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;
std::unique_ptr<tflite::FlatBufferModel> model =
    tflite::FlatBufferModel::BuildFromFile("linear.tflite");
if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk) {
    std::cerr << "Failed to build interpreter." << std::endl;
}
if (interpreter->AllocateTensors() != kTfLiteOk) {
    std::cerr << "Failed to allocate tensors." << std::endl;
}
std::cout << "Number of tensors " << interpreter->tensors_size() << " Num of Inputs ";
tflite::PrintInterpreterState(interpreter.get());
float* input = interpreter->typed_input_tensor<float>(0);
interpreter->Invoke();
float* output = interpreter->typed_output_tensor<float>(0);
If I run the code then both the input and output pointers are null pointers. The output of PrintInterpreterState(interpreter.get()) is the following:
Number of tensors8 Num of Inputs 18446732345621392436
Interpreter has 8 tensors and 3 nodes
Inputs: 4
Outputs: 5
Tensor 0 dense/BiasAdd_int8 kTfLiteInt8 kTfLiteArenaRw 1 bytes ( 0.0 MB) 1 1
Tensor 1 dense/MatMul_bias kTfLiteInt32 kTfLiteMmapRo 4 bytes ( 0.0 MB) 1
Tensor 2 dense/kernel/transpose kTfLiteInt8 kTfLiteMmapRo 1 bytes ( 0.0 MB) 1 1
Tensor 3 dense_input_int8 kTfLiteInt8 kTfLiteArenaRw 1 bytes ( 0.0 MB) 1 1
Tensor 4 dense_input kTfLiteFloat32 kTfLiteArenaRw 4 bytes ( 0.0 MB) 1 1
Tensor 5 dense/BiasAdd kTfLiteFloat32 kTfLiteArenaRw 4 bytes ( 0.0 MB) 1 1
Tensor 6 (null) kTfLiteNoType kTfLiteMemNone 0 bytes ( 0.0 MB) (null)
Tensor 7 (null) kTfLiteNoType kTfLiteMemNone 0 bytes ( 0.0 MB) (null)
Node 0 Operator Builtin Code 114 QUANTIZE
Inputs: 4
Outputs: 3
Node 1 Operator Builtin Code 9 FULLY_CONNECTED
Inputs: 3 2 1
Outputs: 0
Node 2 Operator Builtin Code 6 DEQUANTIZE
Inputs: 0
Outputs: 5
I've no idea where my mistake is. It worked with tensorflow 1.15, but I can't use 1.15 anymore with the Coral hardware. I would be grateful for any help.
Ok, I found my problem. I hadn't updated the include files; they were still from 1.15. :-)
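Independent of the header mismatch, it helps to fail fast when the interpreter hands back a null tensor pointer rather than dereferencing it later. A minimal, library-independent sketch of such a guard (require is a made-up helper; the TFLite calls themselves are only shown in a comment):

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Made-up helper: returns the pointer unchanged, or throws if it is null.
template <typename T>
T* require(T* ptr, const std::string& what) {
    if (ptr == nullptr) {
        throw std::runtime_error(what + " is a null pointer");
    }
    return ptr;
}

// Against the interpreter this would read something like:
//   float* input = require(interpreter->typed_input_tensor<float>(0), "input tensor");
```

This turns a silent segfault into an immediate, named error at the point where the API contract was broken.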

OpenCV col-wise standard deviation result vs MATLAB

I've seen linked questions but I can't understand why MATLAB and OpenCV give different results.
MATLAB Code
>> A = [6 4 23 -3; 9 -10 4 11; 2 8 -5 1]
A =
6 4 23 -3
9 -10 4 11
2 8 -5 1
>> Col_step_1 = std(A, 0, 1)
Col_step_1 =
3.5119 9.4516 14.2945 7.2111
>> Col_final = std(Col_step_1)
Col_final =
4.5081
Using OpenCV and this function:
double getColWiseStd(const cv::Mat& in)
{
    CV_Assert(in.type() == CV_64F);
    cv::Mat meanValue, stdValue, m2, std2;
    cv::Mat colSTD(1, in.cols, CV_64F);
    cv::Mat colMEAN(1, in.cols, CV_64F);
    for (int i = 0; i < in.cols; i++)
    {
        cv::meanStdDev(in.col(i), meanValue, stdValue);
        colSTD.at<double>(i) = stdValue.at<double>(0);
        colMEAN.at<double>(i) = meanValue.at<double>(0);
    }
    std::cout << "\nCOLstd:\n" << colSTD << std::endl;
    cv::meanStdDev(colSTD, m2, std2);
    std::cout << "\nCOLstd_f:\n" << std2 << std::endl;
    return std2.at<double>(0, 0);
}
Applied to the same matrix yields the following:
Matrix:
[6, 4, 23, -3;
9, -10, 4, 11;
2, 8, -5, 1]
COLstd:
[2.867441755680876, 7.71722460186015, 11.67142760000773, 5.887840577551898]
COLstd_f:
[3.187726614989861]
I'm pretty sure that the OpenCV and MATLAB std functions are correct, so I can't find what I'm doing wrong. Am I missing a type conversion? Something else?
The standard deviation you're calculating in OpenCV is normalised by the number of observations (N), whereas MATLAB normalises by N-1 by default (this is known as Bessel's correction). Hence the difference.
You can normalise by N in MATLAB by selecting the second input argument as 1:
Col_step_1 = std(A, 1, 1);
Col_final = std(Col_step_1, 1);
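Alternatively, you can stay in OpenCV and convert its population standard deviation (÷N) into MATLAB's sample one (÷(N-1)) by scaling with sqrt(N/(N-1)). A plain-C++ sketch of the relationship, using the first column {6, 9, 2} of A so no OpenCV is needed to verify the numbers (stdDev is a made-up helper):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Standard deviation: population form divides by N (as cv::meanStdDev does),
// sample form divides by N-1 (as MATLAB's std does by default).
double stdDev(const std::vector<double>& v, bool sample) {
    double mean = 0.0;
    for (double x : v) mean += x;
    mean /= v.size();
    double ss = 0.0;
    for (double x : v) ss += (x - mean) * (x - mean);
    return std::sqrt(ss / (v.size() - (sample ? 1 : 0)));
}
```

For the column {6, 9, 2} the population form gives ~2.8674 (OpenCV's COLstd above) and the sample form gives ~3.5119 (MATLAB's Col_step_1); the two differ exactly by the factor sqrt(N/(N-1)).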

using stringstream and getline to read the first two numbers of each line

What I need to do: I have a vector of lines right now; v[0] is the first line and so on. I would like to read the first number of each line as the challenge and the second number of each line as the judge, and then apply the conditions in the code. I want to use a stringstream to extract the numbers from the lines.
What my code is doing right now: it reads only the first number from each line, so the first number of the first line becomes the challenge, the first number of the second line becomes the judge, the first number of the third line becomes the challenge, and so on.
std::vector<string> v;
string line;
int i;
double challenge;
int judge;
while (getline(cin, line)) {
    if (line.empty()) {
        break;
    }
    v.push_back(line);
}
for (i = 0; i < v.size(); i++) {
    cin >> v[i];
    std::stringstream ss(v[i]);
    ss << v[i];
    ss >> challenge >> judge;
    if (challenge < 1 || challenge > 5) {
        cout << "bad_difficulty" << endl; //must add the condition or empty
        v.erase(v.begin() + i);
    }
    if (judge != 5 || judge != 7) {
        cout << "bad_judges" << endl; //must add the condition or empty
        v.erase(v.begin() + i);
    }
    cout << v[i] << endl;
}
return 0;
}
For example:
Input:
5.1 7 5.4 3.0 9.6 2.9 2.8 2.0 5.4
-3.8 7 2.9 1.1 5.7 7.2 4.8 8.5 3.9
2.2 5 9.4 4.7 7.3 1.9 5.7 6.0 7.1
2.4 6 9.2 5.2 1.0 2.9 4.9 7.4 7.9
2.1 7 7.9 4.9 0.0 7.2 9.1 7.8 6.7 4.3
3.8 5
2.0
4.0 7 2.4 1.9 3.2 8.3 14.8 0.1 9.7
2.5 7 8.4 -8.0 5.0 6.0 8.0 1.3 3.3
1.6 -1 9.5 2.5 5.8 7.9 5.5 1.6 7.9
Output should be:
bad_difficulty
bad_difficulty
2.2 5 9.4 4.7 7.3 1.9 5.7 6.0 7.1
bad_judges
2.1 7 7.9 4.9 0.0 7.2 9.1 7.8 6.7 4.3
3.8 5
bad_judges
4.0 7 2.4 1.9 3.2 8.3 14.8 0.1 9.7
2.5 7 8.4 -8.0 5.0 6.0 8.0 1.3 3.3
bad_judges
Current Output:
bad_difficulty
bad_judges
2.2 5 9.4 4.7 7.3 1.9 5.7 6.0 7.1
bad_judges
2.1 7 7.9 4.9 0.0 7.2 9.1 7.8 6.7 4.3
bad_judges
2.0
bad_judges
2.5 7 8.4 -8.0 5.0 6.0 8.0 1.3 3.3
bad_judges
1.6 -1 9.5 2.5 5.8 7.9 5.5 1.6 7.9
Let's walk through that delete loop.
i = 0
v[0] contains line 1.
Line 1 is 5.1 7
challenge > 5, remove 0. The vector shifts up by 1 v[0] now contains line 2
Judge == 7 do not remove 0
increment i
i = 1
v[1] contains line 3. Line 2 has been skipped
Line 3 is 2.2 5
challenge < 5, do not remove 1.
Judge is not 5 or 7. remove 1. The vector shifts up by 1 v[1] now contains line 4
increment i
i = 2
v[2] contains line 5. Line 4 has been skipped
Line 5 is 2.1 7
challenge < 5, do not remove 2.
Judge is 7. do not remove 2.
increment i
i = 3
v[3] contains line 6.
Line 6 is 3.8 5
challenge < 5, do not remove 3.
Judge is 5. do not remove 3.
increment i
i = 4
v[4] contains line 7.
Line 7 is 2.0
challenge < 5, do not remove 4.
Judge is UNDEFINED! PANIC! PANIC! Crom only knows what happens.
increment i
Anyway, the basic pattern here should be clear. When you remove an element from a vector, all subsequent elements are shifted up. Solution: when you remove an element, do not increment i. An else statement will take care of this for you.
Next because there are two separate if statements there is the possibility that both conditions will be true and v[i] will be removed twice. There are a bunch of ways around this. Manthan Tilva's solution with continue is simple and effective, but this can be handled more obviously with an else if or by rolling both tests into the same if.
Third, values that were not successfully read from the stream are undefined and should not be used. Discard the line without looking any further. if (ss >> challenge >> judge) will help here.
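Putting the three points together, a corrected loop might look like this (a sketch; filterScores is a made-up helper name). The extraction result is checked, both tests sit in one if/else-if chain so a line can only be erased once, and i is only incremented when nothing was erased:

```cpp
#include <cassert>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

void filterScores(std::vector<std::string>& v) {
    for (std::size_t i = 0; i < v.size(); /* increment inside */) {
        std::stringstream ss(v[i]);
        double challenge;
        int judge;
        if (!(ss >> challenge) || challenge < 1 || challenge > 5) {
            std::cout << "bad_difficulty\n";
            v.erase(v.begin() + i);          // do NOT increment: the next line shifted up
        } else if (!(ss >> judge) || (judge != 5 && judge != 7)) {
            std::cout << "bad_judges\n";     // also catches a line with no judge value
            v.erase(v.begin() + i);
        } else {
            std::cout << v[i] << '\n';
            ++i;                             // only advance past lines we keep
        }
    }
}
```

Note the judge test is judge != 5 && judge != 7; the original judge != 5 || judge != 7 is true for every possible value.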

Issues Implementing Halstead's Complexity Metrics

I'm currently practicing with a simple program to understand the equations involved in deriving various metrics from Halstead's software science. I do believe I'm doing it correctly, but I feel like I haven't registered all operands and operators so that I can start with the mathematics.
The program I'm using is:
/*01*/ // counts how many items in sArray[] are also in tArray[]
/*02*/ int matched(int sArray[], int tArray[], int sMax, int tMax)
/*03*/ {
/*04*/     int count, i, first, middle, last;
/*05*/
/*06*/     for (i = 0; i < sMax; ++i)
/*07*/     {
/*08*/         last = tMax - 1;
/*09*/         for (int first = 0; first <= last;)
/*10*/         {
/*11*/             middle = (first + last) / 2;
/*12*/             if (tArray[middle] == sArray[i])
/*13*/             {
/*14*/                 count++;
/*15*/                 break;
/*16*/             }
/*17*/             if (tArray[middle] < sArray[i])
/*18*/             {
/*19*/                 first = middle + 1;
/*20*/             }
/*21*/             else
/*22*/             {
/*23*/                 last = middle - 1;
/*24*/             }
/*25*/         }
/*26*/     }
/*27*/     return count;
/*28*/ }
And I've come out with
n1 = the number of distinct operators = 10
n2 = the number of distinct operands = 9
N1 = the total number of operators = 24
N2 = the total number of operands = 34
These notes show the distinct operators and operands found:
Operators
= Assignment (line 6, 8, 9, 11, 19, 23) = 6
< Less Than (line 6, 17) = 2
++ Increment (line 6, 14) = 2
- Subtract (line 8, 23) = 2
<= Less Than or Equal to (line 9) = 1
+ Addition (line 11, 19) = 2
/ Division (line 11) = 1
== Equal to (line 12) = 1
[] Index (line 2*2, 12*2, 17*2) = 6
break (line 15) = 1
Operands
count (line 4, 14) = 2
i (line 4, 6*3, 12, 17) = 6
first (line 4, 9*2, 11, 19) = 5
middle (line 4, 11, 12, 17, 19, 23) = 6
last (line 4, 8, 9, 11, 23) = 5
sArray (line 2, 12, 17) = 3
tArray (line 2, 12, 17) = 3
sMax (line 2, 6) = 2
tMax (line 2, 8) = 2
Is there anything vital I've missed out? From my understanding:
Operands are values
Operators manipulate and check operands
The point of Halstead's metrics is to answer a lot of questions like "How difficult is the code to read?", "How much effort was put into writing the code?", etc. The formula for Halstead's Difficulty metric provides a hint on how the first question is answered:
Difficulty = (Unique Operators / 2) * (Operands / Unique Operands);
You can see that having more unique operators, obviously, makes the code harder to read.
On braces: a lot of sources on the subject consider {} to be operators, which I don't see the point of. Curly braces act as a structural (punctuation) element and in many ways make code easier to understand, not harder (take, for example, a conditional block with and without braces).
Counting the function name matched is relevant only in a more general context, but not when you measure the metrics of the function implementation (given there is no recursion).
On operators: counting operators can be tricky. For example, [] appearing in function declaration and [] on lines 12 and 17, are actually different things. The first one is array declaration, the second is operator[] - accessing element by index. The same with postfix and prefix ++, having them both in the program makes it harder to read.
The same logic applies to language keywords: for, if, else, break, return. The more of them in the code the harder it is to read.
On types: type names in variable declarations are also tricky. Some attribute them to operators, some to operands. But if we look again at the Difficulty formula, we see that type names fit better with operators, in the sense that having more distinct types in the code makes it harder to read, not easier.
Your counts for the operands seem to be alright.
Operators
= Assignment (line 6, 8, 9, 11, 19, 23) = 6
< Less Than (line 6, 17) = 2
++ Prefix Increment (line 6) = 1
++ Postfix Increment (line 14) = 1
- Subtract (line 8, 23) = 2
<= Less Than or Equal to (line 9) = 1
+ Addition (line 11, 19) = 2
/ Division (line 11) = 1
== Equal to (line 12) = 1
[] declaration (line 2) = 2
[] index (line 12, 17) = 4
for (line 6, 9) = 2
if (line 12, 17) = 2
else (line 21) = 1
break (line 15) = 1
return (line 27) = 1
int declaration = 7
Operands
count (line 4, 14) = 2
i (line 4, 6*3, 12, 17) = 6
first (line 4, 9*2, 11, 19) = 5
middle (line 4, 11, 12, 17, 19, 23) = 6
last (line 4, 8, 9, 11, 23) = 5
sArray (line 2, 12, 17) = 3
tArray (line 2, 12, 17) = 3
sMax (line 2, 6) = 2
tMax (line 2, 8) = 2
Metrics
n1 = 17
n2 = 9
N1 = 37
N2 = 34
Difficulty = (n1 * N2) / (2 * n2) = 32.1
I was referring to Wiki and this page on Virtual Machinery.
By the way, most things said are my opinion, and may not coincide with more official sources.
By the way, here is an exact and strict definition of what should be counted as operators and operands in C++ code: http://www.verifysoft.com/en_halstead_metrics.html.
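For reference, the arithmetic above can be packed into a tiny helper (a sketch; the struct and function names are mine, not from any library). Besides Difficulty it also computes Halstead's vocabulary, length, and Volume = N * log2(n):

```cpp
#include <cassert>
#include <cmath>

// Made-up container for the four Halstead base counts.
struct Halstead {
    double n1, n2;  // distinct operators / distinct operands
    double N1, N2;  // total operators / total operands
};

double vocabulary(const Halstead& h) { return h.n1 + h.n2; }           // n = n1 + n2
double length(const Halstead& h)     { return h.N1 + h.N2; }           // N = N1 + N2
double volume(const Halstead& h)     { return length(h) * std::log2(vocabulary(h)); }
double difficulty(const Halstead& h) { return (h.n1 / 2.0) * (h.N2 / h.n2); }
```

Plugging in the counts from the answer above (n1 = 17, n2 = 9, N1 = 37, N2 = 34) reproduces the Difficulty of about 32.1.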
Firstly, initialize count to 0. Next, operands are not values; they are variables and constant values.
operators
matched -1
() -6
[] -6
{} -6
int -7
for -2
if -2
else -1
return -1
= -6
< -2
<= -1
++ -2
- -2
+ -2
/ -1
== -1
break -1
operands
2 -line no. 11 -1
1 (8,19,23) -3
0 -1
count -3
i -6
first -5
middle -6
last -5
sArray -3
tArray -3
sMax -2
tMax -2
N1=50
N2=40
n1=18
n2=12
The book I am referring to is Software Metrics and Software Metrology By Alain Abran.
You can download it from here -> http://profs.etsmtl.ca/aabran/English/Accueil/ChapersBook/Abran%20-%20Chapter%20005.pdf
I hope it will solve all your doubts.
Function names, braces, type names, all other keywords, and all other well-known operators come under the operators section.
Variables and constant values that are input to any functions or operators are operands.
Hence, I come up with this answer.

Dealing with vector of list of struct

I read a text file as:
std::vector< std::list< struct> >
My data is in the form:
1 0.933 0.9 2 0.865 0.6 3 0.919 0.2 4 0.726 0.5
3 0.854 0.6 5 0.906 0.2 6 0.726 0.5
1 0.906 0.2 2 0.726 0.5
1 0.933 0.2 2 0.865 0.5 4 0.919 0.1 5 0.726 0.5 6 0.933 0.9
Each line consists of some integer numbers, and each integer number has 2 real numbers.
For example, in the first line, the integer number 1 has the two real numbers 0.933 and 0.9.
This is the code for scanning the data:
struct Lines1 {
    int Item;
    float Prob;
    float W;
};
std::istream& operator>>(std::istream &is, Lines1 &d)
{
    return is >> d.Item >> d.Prob >> d.W;
}
float threshold;
std::map<int, float> FFISupp;
std::map<int, vector<int>> AssociatedItem;
std::vector<std::list<Lines1>> data;
void ScanData()
{
    ifstream in;
    in.open(dataFile);
    std::string line;
    while (std::getline(in, line))
    {
        std::stringstream ss(line);
        std::list<Lines1> inner;
        Lines1 info;
        while (ss >> info)
        {
            inner.push_back(info);
        }
        data.push_back(inner);
    }
}
Now I have successfully stored the data from the text file in data, which is a vector of lists of structs.
BUT I haven't succeeded in using this vector of lists of structs (data) to do the following:
1- create a map named FFISupp such that:
FFISupp (key = one of the 6 distinct integer numbers in the data, value = the summation of the probabilities for that number)
For example:
since the integer number 1 appears in the data set in three positions, the total probability for integer number 1 = 0.933 + 0.906 + 0.933 = 2.772
==> The result of FFISupp
FFISupp (1, 2.772)
FFISupp (2, 2.456)
.
.
FFISupp (6,1.659)
2- create a map named AssociatedItem such that:
AssociatedItem (key = one of the 6 distinct integer numbers, value = the items associated with this number)
Associated items means, for example, that the integer number 1 appears in the dataset together with the other integer numbers (2,3,4,5,6)
AssociatedItem (1, (2,3,4,5,6))
AssociatedItem (2, (1,3,4,5,6))
AssociatedItem (3, (1,2,4,5,6))
AssociatedItem (4, (1,2,3,5,6))
AssociatedItem (5, (1,2,3,4,6))
AssociatedItem (6, (1,2,3,4,5))
3- delete from FFISupp every item whose sum of probabilities is < threshold,
and update both FFISupp and AssociatedItem.
For example, if the two items 3 and 6 have total probabilities < threshold, then I will update FFISupp:
FFISupp (1, 2.772)
FFISupp (2, 2.456)
FFISupp (4, 1.645)
FFISupp (5,1.632)
also update AssociatedItem
AssociatedItem (1, (2,4,5))
AssociatedItem (2, (1,4,5))
AssociatedItem (4, (1,2,5))
AssociatedItem (5, (1,2,4))
This is my try:
void Pass()
{
    for (unsigned i = 0; i < data.size() - 1; ++i)
    {
        for (unsigned k = 0; i < data[i].size() - 1; ++k)
        {
            for (unsigned l = k + 1; l < data[i].size(); ++l)
            {
                auto p1 = make_pair(data[i][k].Item, data[i][k].Prob);
                FFISupp[p1.first] += p1.second;
                AssociatedItem[data[i][k].Item].push_back(data[i][l].Item);
            }
        }
    }
    /* update FFISupp and AssociatedItem by erasing all items with <= Min_Threshold */
    std::map<int, float>::iterator current = FFISupp.begin();
    std::map<int, vector<int>>::iterator current2 = AssociatedItem.begin();
    while (current != FFISupp.end())
    {
        if (current->second <= threshold)
        {
            current = FFISupp.erase(current);
            while (current2 != AssociatedItem.end())
            {
                current2 = AssociatedItem.erase(current2);
                ++current2;
            }
        }
        else
            ++current;
    }
}
As I only understand what you meant in stage #1, I'll help only with that.
Your code, as shown above, should iterate over all the elements of the data vector, therefore the stop condition should simply be data.size().
Writing data.size() - 1 reminds me of a C array... well, an std::vector is not an array, and by iterating until data.size() - 1 you lose the last item.
I don't understand what the goals of stage #2 and stage #3 are.
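For stage #1 specifically, there is no need for index arithmetic at all: range-based for loops walk the vector and each inner list directly, which also sidesteps the data[i][k] indexing (std::list has no operator[]). A sketch assuming the Lines1 struct from the question (buildFFISupp is a made-up name):

```cpp
#include <cassert>
#include <cmath>
#include <list>
#include <map>
#include <vector>

struct Lines1 {
    int Item;
    float Prob;
    float W;
};

// Stage 1: sum the probabilities of each distinct item across all lines.
std::map<int, float> buildFFISupp(const std::vector<std::list<Lines1>>& data) {
    std::map<int, float> supp;
    for (const auto& line : data)       // every line of the file
        for (const auto& entry : line)  // every (Item, Prob, W) triple in it
            supp[entry.Item] += entry.Prob;
    return supp;
}
```

On the sample data above this gives supp[1] = 0.933 + 0.906 + 0.933 = 2.772, matching the expected FFISupp(1, 2.772).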