C++ cout with float producing strange results

Currently I have the following:
float some_function(){
    float percentage = 100;
    std::cout << "percentage = " << percentage;
    // more code
    return 0;
}
which gives the output
percentage = 100
However when I add some std::endl like so:
float some_function(){
    float percentage = 100;
    std::cout << "percentage = " << percentage << std::endl;
    // more code
    return 0;
}
This gives the output:
percentage = 1000x6580a8
Adding more endl's just prints out more 0x6580a8's.
What could be causing this? This is compiled with gcc 4.4.3 on Ubuntu 10.04.

The function is written correctly. On my machine (g++ 4.4.3 on Ubuntu 10.04) everything works smoothly.
Are you sure the error isn't caused by some other part of the code?

Your code is perfectly valid. I suspect you are smashing your stack or heap in some other part of your code; that is the most likely cause. Also, 0x6580a8 looks too short to be an object address, and you would not normally get the same address across two runs of the same program.

What if you tried \n?
std::cout << "percentage = " << percentage << "\n";

Is this your actual code, or is there a different sort of stream instead of cout?
It's taking the address of the endl manipulator instead of applying it to the stream, which implies that it can't see the matching version of endl for the stream type you're using.
What happens if you use << "\n" << std::flush instead?
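To see what that failure mode looks like, here is a minimal sketch (not the asker's code) that deliberately prints the manipulator's address instead of applying it; the explicit cast stands in for whatever overload the real stream type is selecting:
#include <iostream>

int main(){
    // Naming an instantiation of std::endl and printing its address
    // reproduces the symptom: "100" immediately followed by a pointer.
    std::ostream& (*manip)(std::ostream&) = &std::endl<char, std::char_traits<char> >;
    // Conditionally-supported cast from function pointer to object pointer;
    // accepted by gcc.
    std::cout << "percentage = " << 100.0f
              << reinterpret_cast<const void*>(manip) << "\n";
    // prints something like: percentage = 1000x6580a8
    return 0;
}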

Related

weak_ptr reset affects shared_ptr?

I'm not very used to using weak_ptr and I'm facing a quite confusing situation. I'm using Intel XE 2019 Composer update 5 (package 2019.5.281) in combination with Visual Studio 2019 ver. 16.2.5. I compile in 64-bit and use standard C++17.
Here is the code for my spike solution:
#include <memory>
#include <iostream>

using namespace std;

int main( int argc, char* argv[] )
{
    shared_ptr<int> sp = make_shared<int>( 42 );
    cout << "*sp = " << *sp << endl;

    weak_ptr<int> wp = sp;
    cout << "*sp = " << *sp << ", *wp = " << *wp.lock() << endl;

    wp.reset();
    cout << "*sp = " << *sp << endl;

    return 0;
}
The output I expected to have is:
*sp = 42
*sp = 42, *wp = 42
*sp = 42
...but here is what I obtained:
*sp = 42
*sp = 42, *wp = 42
*sp = -572662307
What is going on? Is it normal for the shared_ptr to be modified/invalidated when an associated weak_ptr is reset? I'm a little confused by the results I obtained; to tell the truth, I didn't expect this result at all.
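For reference, what I expected boils down to this minimal check (a stand-alone sketch, separate from the spike code above):
#include <cassert>
#include <memory>

int main()
{
    std::shared_ptr<int> sp = std::make_shared<int>( 42 );
    std::weak_ptr<int> wp = sp;

    wp.reset();                      // should drop only the weak reference

    assert( sp.use_count() == 1 );   // sp still owns the int...
    assert( *sp == 42 );             // ...and the value is untouched
    return 0;
}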
EDIT 1
While the bug occurs in the 64-bit configuration, it doesn't in 32-bit. In the latter configuration, the result is as expected.
EDIT 2
The bug occurs only in Debug. When I build in Release, I get the expected result.
It appears it is a real bug on Intel ICC side; I have reported it.
Thanks again for helping me to pin-point this problem.
It looks like a bug in the debug library, involving sentinel values. It's easy to check, using the line I mentioned:
int i = 1; cout << i << " " << ++i << endl;
If the output is 2 2 instead of 1 2, then the compiler is not compliant and possibly still treats such a case as UB. Sentinel values may be applied erroneously in this case by the call to reset(). A similar thing happens when deleting an object created by placement new inside a preallocated static buffer: in debug mode, some implementations overwrite it with sentinel values.
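A complete, stand-alone version of that check (in C++17 the operands of << are sequenced left to right, so a conforming compiler must print 1 2):
#include <iostream>

int main()
{
    int i = 1;
    std::cout << i << " " << ++i << std::endl;   // C++17: must print "1 2"
    return 0;
}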

C++ program behaviour differs when compiled with Cygwin and MinGW

I am fairly new to C++ and have been tinkering with some simple programs to teach myself the basics. I remember a little while ago installing MinGW on my desktop (I think for the C compiler). I decided to go for Cygwin for C++ and it seems to have been working a treat, up until I noticed that it seems to behave differently from the MinGW compiler for this program. I'm probably breaking some coding golden rule, but the reason for this is that Windows CMD uses the MinGW compiler, or I can open the Cygwin shell and use that instead. Variety!
For the program below, I am making notes on the ternary operator and switch statement. I initialize 'a' and 'b', ask for user input, and then use a function to check which is larger.
I then ask for user input again, to overwrite the values in 'a' and 'b' with something else. When compiling from Windows CMD, this works fine: I can overwrite with new input, and the MAXIMUM and MINIMUM macros work fine on the new values. When I compile on Cygwin, however, after the first 'pause' the program skips straight past the two std::cin's and just runs MAXIMUM and MINIMUM on the old values of 'a' and 'b'.
Obviously I have tried just creating two new int variables, 'c' and 'd', and then there is no issue. The values are not immutable in any sense that I am aware of (although I don't know much, so I could be wrong).
I wasn't sure if this was something to do with the auto keyword, so I specified the type as int manually.
I also checked the version of both compilers with 'g++ -v' in Cygwin and at the CMD.
#include <iostream>
#include <cstdlib>

#define MAXIMUM(a,b) ((a > b) ? a : b)
#define MINIMUM(a,b) ((a < b) ? a : b)

int ternary_op(int a, int b)
{
    char x = 'a';
    char y = 'b';
    auto result_var = 0;
    result_var = a > b ? x : y; // Shorthand if statement with syntax (test_condition) ? (if_true) : (if_false)
    return result_var;
}

int main()
{
    auto a = 0;
    auto b = 0;
    auto larger = 0;
    auto smaller = 0;

    std::cout << "Enter an integer: " << "\n";
    std::cin >> a;
    std::cout << "Enter another integer: " << "\n";
    std::cin >> b;

    char greater_var = ternary_op(a,b); // Therefore if condition a > b is satisfied, greater_var is assigned x ('a')
    std::cout << greater_var << std::endl;

    switch(greater_var){
    case 'a':
        std::cout << "First integer " << a << " is larger than second, " << b << std::endl;
        break;
    case 'b':
        std::cout << "Second integer " << b << " is larger than first integer, " << a << std::endl;
        break;
    }

    std::cout << system("cmd /c pause") << std::endl;

    std::cout << "We can also use defined functions to check equivalency and assign variables based upon the result." << "\n";
    std::cout << "Enter an integer: " << std::endl;
    std::cin >> a;
    std::cout << "Enter another integer: " << std::endl;
    std::cin >> b;

    larger = MAXIMUM(a,b);
    smaller = MINIMUM(a,b);
    std::cout << "Larger and smaller numbers determined by defined function: " << larger << ", " << smaller << std::endl;

    std::cout << system("cmd /c pause") << std::endl;

    return 0;
}
As mentioned, making two new variables 'c' and 'd' avoids the issue, and specifying int manually did not change the behaviour under Cygwin. The MinGW version was 8.1.0 while Cygwin's was 7.4.0; I'm not sure whether that simply makes it an older version of the same compiler.
Again, I'm very new to this so I'm just quite confused as to why they would behave so differently. I was also under the impression that different compilers were completely different beasts that simply read from the same standard hymn sheet, so to speak.
Just curious as to what is going on here!
Cheers!
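One thing that would be worth checking: if std::cin dropped into a failed or end-of-file state around the system() call, the second pair of reads would be skipped exactly as described. A hedged sketch of such a check (read_int_checked is a hypothetical helper, not part of the program above):
#include <iostream>
#include <limits>

// Hypothetical helper: read an int, clearing the stream and retrying once
// if it is in a failed state (which would otherwise skip the read).
int read_int_checked()
{
    int value = 0;
    if (!(std::cin >> value)) {
        std::cin.clear();   // reset failbit/eofbit
        std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
        std::cin >> value;  // retry the read
    }
    return value;
}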

Integer to string conversion issues

I am experiencing a few problems with Crypto++'s Integer class. I am using the latest release, 5.6.2.
I'm attempting to convert Integer to string with the following code:
CryptoPP::Integer i("12345678900987654321");
std::ostrstream oss;
oss << i;
std::string s(oss.str());
LOGDEBUG(oss.str()); // Pumps log to console and log file
The output appears to have extra garbage data:
12345678900987654321.ÍÍÍÍÍÍÍÍÍÍÍýýýý««««««««îþîþ
I get the same thing when I output directly to the console:
std::cout << "Dec: " << i << std::endl; // Same result
Additionally, I cannot get precision or scientific notation working. The following will output the same results:
std::cout.precision(5); // Does nothing with CryptoPP::Integer
std::cout << "Dec: " << std::setprecision(1) << std::dec << i << std::endl;
std::cout << "Sci: " << std::setprecision(5) << std::scientific << i << std::endl;
On top of all of this, sufficiently large numbers break the entire thing.
CryptoPP::Integer i("12345");
// Calculate i^16
for (int x = 0; x < 16; x++)
{
i *= i;
}
std::cout << i << std::endl; // Will never finish
Ultimately I'm trying to get something where I can work with large Integer numbers, and can output a string in scientific notation. I have no problems with extracting the Integer library or modifying it as necessary, but I would prefer working with stable code.
Am I doing something wrong, or is there a way that I can get this working correctly?
I'm attempting to convert Integer to string with the following code:
CryptoPP::Integer i("12345678900987654321");
std::ostrstream oss;
oss << i;
std::string s(oss.str());
LOGDEBUG(oss.str()); // Pumps log to console and log file
The output appears to have extra garbage data:
12345678900987654321.ÍÍÍÍÍÍÍÍÍÍÍýýýý««««««««îþîþ
I can't reproduce this with Crypto++ 5.6.2 on Visual Studio 2010. The corrupted output is likely the result of some other issue, not a bug in Crypto++. If you haven't done so already, I'd suggest trying to reproduce this in a minimal program just using CryptoPP::Integer and std::cout, and none of your other application code, to eliminate all other possible problems. If it's not working in a trivial stand-alone test (which would be surprising), there could be problems with the way the library was built (e.g. maybe it was built with a different C++ runtime or compiler version from what your application is using). If your stand-alone test passes, you can add in other string operations, logging code etc. until you find the culprit.
I do notice, though, that you're using std::ostrstream, which is deprecated. You may want to use std::ostringstream instead. This Stack Overflow answer to the question "Why was std::strstream deprecated?" may be of interest, and it may even be the case that the issues mentioned in that answer are causing your problems here.
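A minimal stand-alone test along those lines might look like this (a sketch; the header path is an assumption about how the library is installed):
#include <iostream>
#include <sstream>
#include <cryptopp/integer.h>   // header path may vary per installation

int main()
{
    CryptoPP::Integer i("12345678900987654321");
    std::ostringstream oss;
    oss << i;
    // Expect just the digits (Crypto++ appends a '.' suffix for decimal),
    // with no trailing garbage.
    std::cout << oss.str() << std::endl;
    return 0;
}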
Additionally, I cannot get precision or scientific notation working.
The following will output the same results:
std::cout.precision(5); // Does nothing with CryptoPP::Integer
std::cout << "Dec: " << std::setprecision(1) << std::dec << i << std::endl;
std::cout << "Sci: " << std::setprecision(5) << std::scientific << i << std::endl;
std::setprecision and std::scientific modify floating-point input/output. So with regular integer types in C++ like int or long long this wouldn't work either (though I can see that, especially with arbitrary-length integers like CryptoPP::Integer, being able to output in scientific notation with a specified precision would make sense).
Even if C++ didn't define it like this, Crypto++'s implementation would still need to heed those flags. From looking at the Crypto++ implementation of std::ostream& operator<<(std::ostream& out, const Integer &a), I can see that the only iostream flags it recognizes are std::ios::oct and std::ios::hex (for octal and hex format numbers respectively).
If you want scientific notation, you'll have to format the output yourself (or use a different library).
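For example, a hedged sketch of doing it yourself (to_scientific is a hypothetical helper, not part of Crypto++; it streams the Integer to decimal digits and rebuilds "d.ddd...e+NN" from the string):
#include <sstream>
#include <string>
#include <cryptopp/integer.h>

// Hypothetical helper: format a CryptoPP::Integer as "d.ddd...e+NN".
std::string to_scientific(const CryptoPP::Integer& n, std::size_t precision)
{
    std::ostringstream oss;
    oss << n;                                    // decimal digits plus a '.' suffix
    std::string digits = oss.str();
    if (!digits.empty() && digits[digits.size() - 1] == '.')
        digits.erase(digits.size() - 1);         // drop Crypto++'s base suffix
    bool negative = !digits.empty() && digits[0] == '-';
    if (negative)
        digits.erase(0, 1);

    std::ostringstream out;
    out << (negative ? "-" : "") << digits[0];
    if (precision > 0 && digits.size() > 1)
        out << "." << digits.substr(1, precision);
    out << "e+" << (digits.size() - 1);
    return out.str();
}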
On top of all of this, sufficiently large numbers break the entire thing.
CryptoPP::Integer i("12345");
// Calculate i^16
for (int x = 0; x < 16; x++)
{
i *= i;
}
std::cout << i << std::endl; // Will never finish
That will actually calculate i^(2^16) = i^65536, not i^16, because on each loop you're multiplying i with its new intermediate value, not with its original value. The actual result with this code would be 268,140 digits long, so I expect it's just taking Crypto++ a long time to produce that output.
Here is the code adjusted to produce the correct result:
CryptoPP::Integer i("12345");
CryptoPP::Integer i_to_16(1);
// Calculate i^16
for (int x = 0; x < 16; x++)
{
i_to_16 *= i;
}
std::cout << i_to_16 << std::endl;
LOGDEBUG(oss.str()); // Pumps log to console and log file
The output appears to have extra garbage data:
12345678900987654321.ÍÍÍÍÍÍÍÍÍÍÍýýýý««««««««îþîþ
I suspect what you presented is slightly simplified from what you are doing in real life. I believe the problem is related to LOGDEBUG and the ostringstream, and that you are outputting char*'s rather than string's (though we have not seen the code for your logger).
The std::string returned from oss.str() is temporary. So this:
LOGDEBUG(oss.str());
is slightly different from this:
string t(oss.str());
LOGDEBUG(t);
You should always make a copy of the string in an ostringstream when you intend to use it. Or ensure the use is contained in one statement.
The best way I've found is to have:
// Note: reference, and the char* is used in one statement
void LOGDEBUG(const ostringstream& oss) {
    cout << oss.str().c_str() << endl;
}
Or
// Note: copy of the string below
void LOGDEBUG(string str) {
    cout << str.c_str() << endl;
}
You can't even do this (this one bit me in production):
const char* msg = oss.str().c_str();
cout << msg << endl;
You can't do it because the string returned from oss.str() is temporary. So the char* is junk after the statement executes.
Here's how you fix it:
const string t(oss.str());
const char* msg = t.c_str();
cout << msg << endl;
If you run Valgrind on your program, then you will probably get what should seem to be unexplained findings related to your use of ostringstream and strings.
Here is a similar logging problem: stringstream temporary ostream return problem. Also see Turning temporary stringstream to c_str() in single statement. And here was the one I experienced: Memory Error with std:ostringstream and -std=c++11?
As Matt pointed out in the comment below, you should be using an ostringstream, and not an ostrstream. ostrstream has been deprecated since C++98, and you should have gotten a warning when using it.
So use this instead:
#include <sstream>
...
std::ostringstream oss;
...
But I believe the root of the problem is the way you are using the std::string in the LOGDEBUG function or macro.
Your other questions related to Integer were handled in Softwariness's answer and related comments. So I won't rehash them again.

C++ clock measures time incorrectly

I have a program which reads 2 input files. The first file contains some random words, which are put into a BST and an AVL tree. Then the program looks for the words listed in the second file, reports whether they exist in the trees, and writes an output file with the information gathered. While doing this, the program prints the time spent finding each item. However, the program does not seem to be measuring the time spent at all.
BST* b = new BST();
AVLTree* t = new AVLTree();
string s;

ifstream in;
in.open(argv[1]);
while(!in.eof())
{
    in >> s;
    b->insert(s);
    t->insert(s);
}

ifstream q;
q.open(argv[2]);
ofstream out;
out.open(argv[3]);

int bstItem = 0;
int avlItem = 0;
float diff1 = 0;
float diff2 = 0;
clock_t t1, t1e, t2, t2e;

while(!q.eof())
{
    q >> s;

    t1 = clock();
    bstItem = b->findItem(s);
    t1e = clock();
    diff1 = (float)(t1e - t1)/CLOCKS_PER_SEC;

    t2 = clock();
    avlItem = t->findItem(s);
    t2e = clock();
    diff2 = (float)(t2e - t2)/CLOCKS_PER_SEC;

    if(avlItem == 0 && bstItem == 0)
        cout << "Query " << s << " not found in " << diff1 << " microseconds in BST, " << diff2 << " microseconds in AVL" << endl;
    else
        cout << "Query " << s << " found in " << diff1 << " microseconds in BST, " << diff2 << " microseconds in AVL" << endl;

    out << bstItem << " " << avlItem << " " << s << "\n";
}
The clock() values I get just before entering the while loop and just after it finishes are exactly the same. So it appears as if the program doesn't even run the loop, and it prints 0. I know this is not the case, since the program takes around 10 seconds to finish, as it should. Also, the output file contains correct results, so bad findItem() functions can be ruled out as well.
I did a little research on Stack Overflow and saw that many people experience the same problem, but none of the answers I read solved it.
I solved my problem by using a higher-resolution clock, although clock resolution turned out not to be my problem. I used clock_gettime() from time.h. As far as I know, clock sources with higher resolution than clock() are platform dependent, and the particular method I used in my code is only available on Linux. I still haven't figured out why I wasn't able to obtain sensible results from clock(), but I suspect platform dependency again.
An important note: using clock_gettime() requires you to link against the POSIX real-time extensions when compiling the code.
So you should do:
g++ a.cpp b.cpp c.cpp -lrt -o myProg
where -lrt is the parameter to include POSIX extensions.
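For illustration, a minimal sketch of timing one operation with clock_gettime (the commented line stands in for whatever call you are measuring, e.g. findItem):
#include <time.h>
#include <iostream>

int main()
{
    timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    // ... the code being measured, e.g. b->findItem(s) ...
    clock_gettime(CLOCK_MONOTONIC, &end);

    double micros = (end.tv_sec - start.tv_sec) * 1e6
                  + (end.tv_nsec - start.tv_nsec) / 1e3;
    std::cout << micros << " microseconds" << std::endl;
    return 0;
}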
If (t1e - t1) is less than CLOCKS_PER_SEC, your result will always be 0 because integer division truncates. Cast CLOCKS_PER_SEC to float:
diff1 = (t1e - t1)/((float)CLOCKS_PER_SEC);

C++ program stops without errors or warnings when printing matrix

I can't seem to find any solution for this.
I have a variable 'route' that contains a matrix. If I do:
cout << route << endl;
it works; it prints the memory address.
But if I try:
cout << route[1][1] << endl;
the program just ends without any error or anything.
The debugger says:
"(Suspended : Signal : SIGSEGV:Segmentation fault)"
here is the code:
// Structure is a type I created
Structure ***route = list->searchRoute(startPoint, destination, time);
// should return a matrix

cout << "Avaible routes: \n" << endl;
for(int i = 0; i < 5; i++)
    cout << route[1][1]->startPoint << endl;
A segmentation fault usually means you are accessing memory you are not supposed to access. What is probably happening is that your "matrix" is too small to have an element at the second row/second column, so the fault is raised when you try to access that location (because you do not own it). Make sure you are allocating route correctly and at the right size.
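A hedged sketch of what correct allocation and a guarded access might look like (rows and cols are hypothetical; the real sizes would come from searchRoute):
// Allocate a rows x cols grid of Structure* so route[1][1] is owned memory.
Structure ***route = new Structure**[rows];
for (int r = 0; r < rows; r++) {
    route[r] = new Structure*[cols];
    for (int c = 0; c < cols; c++)
        route[r][c] = NULL;   // fill in real Structure objects later
}

// Guard the access so an undersized matrix fails loudly instead of crashing.
if (rows > 1 && cols > 1 && route[1][1] != NULL)
    cout << route[1][1]->startPoint << endl;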