I'm taking input using cin and storing it into a char variable. My question is whether there is any input that could cause cin.fail() to return true.
I know that trying to store input such as "foo" into an int variable will fail, but is there any case in which this is possible with a char variable?
The overloads of operator>> which take a char follow the normal behavior of a formatted input function; that is, they call rdbuf()->sbumpc() or rdbuf()->sgetc() to perform the extraction. Naturally, if eof is encountered, then eofbit is set. If one of these functions throws an exception, then badbit is set. If either of those bits is set, then failbit is also set. There's no evidence to indicate that the operation would fail otherwise. (This is covered under section [istream] in the C++11 draft standard.) For other types, like int, std::num_get::do_get() is used to convert the characters (similar to scanf). Of course, that conversion can fail, but no conversion is needed if the input is already a char.
Now, the comments are misleading: CTRL+C would kill the application on Linux, while CTRL+Z would send a character that signals EOF on some operating systems (Windows, for example).
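To see this in practice, here is a minimal sketch (an istringstream stands in for reaching end of input on std::cin): extracting into a char fails only when no character can be produced at all.

#include <iostream>
#include <sstream>

int main()
{
    std::istringstream in("   ");  // whitespace only: >> skips it and hits EOF
    char c;
    if (!(in >> c))
        std::cout << "fail=" << in.fail()
                  << " eof=" << in.eof() << '\n';  // prints "fail=1 eof=1"
}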
You can even use an emoji and it would work:
#include <iostream>

int main()
{
    char c;
    if (std::cin >> c)
        std::cout << "Huzzah!";
}
With the input š, this outputs "Huzzah!" as expected (only the first byte of a multi-byte character is stored in c).
I guess not, because a char variable will take only the first character of the given input, no matter how long the input is or what type it looks like (int, long, double, ...).
No, failbit is only set if there's a logical error reading the input stream; physical mishaps, AKA someone ripping out the USB flash drive containing the file from which you're reading, set badbit instead. ;)
Related
I have a problem: cin.ignore() cannot remove input from the buffer.
#include <iostream>
#include <string>

int main() {
    using namespace std;
    int x = 0;
    string k;
    cin >> x;
    cin.ignore(100, '\n');
    cin.clear();
    cin >> k;
    cout << k << endl;
}
For the above code:
input: abc (the program ends right after I input abc)
output: abc
I was really surprised because cin.ignore() did not remove "abc" from the input buffer.
What is wrong with my code?
If I change the positions of cin.ignore() and cin.clear(), it works well, why is that?
This code:
int x=0;
cin >> x;
Causes cin to be put into an error state (specifically, the failbit flag is set) if the input is not convertible to an int.
Per cppreference.com:
std::basic_istream<CharT,Traits>::operator>>:
This function behaves as a FormattedInputFunction. After constructing and checking the sentry object, which may skip leading whitespace, extracts an integer value by calling std::num_get::get().
...
If extraction fails (e.g. if a letter was entered where a digit is expected), value is left unmodified and failbit is set.
(until C++11)
If extraction fails, zero is written to value and failbit is set. For signed integers, if extraction results in the value too large or too small to fit in value, std::numeric_limits<T>::max() or std::numeric_limits<T>::min() (respectively) is written and failbit flag is set. For unsigned integers, if extraction results in the value too large or too small to fit in value, std::numeric_limits<T>::max() is written and failbit flag is set.
(since C++11)
...
Thus, any further I/O operations on the stream are disabled, like ignore(), until you clear() the error state to re-enable I/O.
std::basic_ios<CharT,Traits>::clear:
Sets the stream error state flags by assigning them the value of state. By default, assigns std::ios_base::goodbit which has the effect of clearing all error state flags.
std::basic_istream<CharT,Traits>::ignore:
ignore behaves as an UnformattedInputFunction. After constructing and checking the sentry object, it extracts characters from the stream and discards them until any of the following conditions occurs:
...
C++ named requirements: UnformattedInputFunction:
An UnformattedInputFunction is a stream input function that performs the following:
Constructs an object of type basic_istream::sentry with automatic storage duration and with the noskipws argument set to true, which performs the following
if eofbit or badbit are set on the input stream, sets the failbit as well, and if exceptions on failbit are enabled in this input stream's exception mask, throws ios_base::failure.
flushes the tie()'d output stream, if applicable
Checks the status of the sentry by calling sentry::operator bool(), which is equivalent to basic_ios::good.
If the sentry returned false or sentry's constructor threw an exception:
sets the number of extracted characters (gcount) in the input stream to zero
if the function was called to write to an array of CharT, writes CharT() (the null character) to the first location of the array
...
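Putting the quotes together, the fix is simply to swap the two calls so the error state is cleared before ignore() runs. A minimal sketch of the corrected program, using the same variables as the question:

#include <iostream>
#include <string>

int main() {
    using namespace std;
    int x = 0;
    string k;
    cin >> x;              // typing "abc" makes this fail and sets failbit
    cin.clear();           // re-enable I/O first...
    cin.ignore(100, '\n'); // ...so this can actually discard "abc"
    cin >> k;              // now reads fresh input
    cout << k << endl;
}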
cin.clear(); removes the error flag from cin
and cin.ignore(100, '\n'); discards characters from the input buffer (here, up to 100 characters or until a newline).
Since you declared x as an integer and read into it first, the program expects an integer to be read first. If you type abc instead, you are typing in a string, and the stream raises an error flag because it expected an integer. That's why your program ends right after that.
If you put cin.clear(); right after cin >> x and type abc, the error flag that has been set will be cleared, and the program continues with cin >> k.
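For completeness, the same clear-then-ignore order is what makes the usual "retry until valid" input loop work; a hedged sketch:

#include <iostream>
#include <limits>

int main() {
    using namespace std;
    int x = 0;
    cout << "Enter an integer: ";
    while (!(cin >> x)) {                                     // fails on input like "abc"
        cin.clear();                                          // clear failbit first
        cin.ignore(numeric_limits<streamsize>::max(), '\n');  // then drop the bad line
        cout << "Try again: ";
    }
    cout << "Got " << x << endl;
}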
I'm having a bit of a problem in C++. When I wrote this:
int a = ':';
cout << a;
This printed out 58. It checks out with the ASCII table.
But if I write this:
int a;
cin >> a;
// I type in ':'
cout << a;
This will print out 0. It seems like if I put in any non-numeric input, a will be 0. I expected it to print out the equivalent ASCII number.
Can someone explain this for me? Thank you!
There are two things at work here.
First, ':' is a char, and although a char looks like a piece of text in your source code, it's really just a number (typically, an index into ASCII). This number can be assigned to other numeric types, such as int.
However, to deal with this oddity in a useful way, the IOStreams library treats char specially for a numeric type. When you insert an int into a stream using formatted insertion (e.g. cout << 42), it automatically generates a string that looks like that number; but when you insert a char (e.g. cout << ':'), it does not do that: the character itself is written.
Similarly, when you do formatted extraction, extracting into an int will interpret the user's input string as a number. Setting the char oddity aside, : in this more general sense is not a number, so your cin >> a does not succeed, as there is no string that looks like a number to interpret. (If a were a char, this "decoding" would again be disabled, and the extraction would succeed by simply copying the character from the user input.)
It can be confusing, but you're working in two separate data domains: user input as interpreted by IOStreams, and C++ data types. What is true for one, is not necessarily true for the other.
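A small sketch of the two domains side by side (string streams stand in for std::cin): the same ":" fails as an int but is copied verbatim into a char.

#include <iostream>
#include <sstream>

int main() {
    std::istringstream s1(":"), s2(":");
    int i = 0;
    char c = 0;
    std::cout << static_cast<bool>(s1 >> i) << '\n';  // 0: ":" is not a number
    std::cout << static_cast<bool>(s2 >> c) << '\n';  // 1: the char is just copied
}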
You're declaring a as an int, so operator>> expects digits; you give it punctuation instead, which makes the extraction fail. As a result, since C++11, a is set to 0; before C++11, a would not be modified.
If extraction fails (e.g. if a letter was entered where a digit is expected), value is left unmodified and failbit is set. (until C++11)
If extraction fails, zero is written to value and failbit is set. (since C++11)
And
I expected it to print out the equivalent ASCII number.
No. Even for valid digits, e.g. if you input 1, a will be set to the value 1, not its ASCII code 49.
This will print out 0. It seems like if I put in any non-numeric input, a will be 0. I expected it to print out the equivalent ASCII number.
Since C++11, when extraction fails, 0 is automatically assigned.
However, there is a way to take a char input from std::cin and then print its ASCII value: a type cast.
Here is an example:
#include <iostream>

int main()
{
    char c;
    std::cin >> c;
    std::cout << int(c);
    return 0;
}
Input:
:
Output:
58
Currently I'm self-studying C++ Primer, 5th edition. Here is something I'm not sure about. (I couldn't find the exact relevant question in the F.A.Q.)
Consider this while loop:
while (std::cin >> value) { ... } // value here was defined as int
The text book says:
That expression reads the next number from the standard input and stores that number in value. The input operator (§ 1.2, p. 8) returns its left operand, which in this case is std::cin. This condition, therefore, tests std::cin. When we use an istream as a condition, the effect is to test the state of the stream. If the stream is valid, that is, if the stream hasn't encountered an error, then the test succeeds.
My question is: does std::cin read input into value first and then test the validity of std::cin, or does it test std::cin first and then decide whether to read into value? I'm quite confused about when it "returns its left operand".
Remember that your code is equivalent to:
while (std::cin.operator>>(value)) { }
Or:
while (1) {
    std::cin >> value;
    if (!std::cin) break;
}
The "code" always tries to read from std::cin into value before testing std::cin.
Let's look at the quote:
[...] The input operator (Ā§ 1.2, p. 8) returns its left operand, which in this case is std::cin. [...]
This only means that std::cin.operator>>(value) return std::cin.
This condition, therefore, tests std::cin. When we use an istream as a condition, the effect is to test the state of the stream. If the stream is valid, that is, if the stream hasn't encountered an error, then the test succeeds.
What the text book says is that after trying to read an integer from std::cin into value, the >> operator returns std::cin. If std::cin is in a good state after reading value, then the test passes; otherwise it fails.
Some extra details:
When you do std::cin >> value, you basically call istream::operator>>(int&), and yes, there is a test inside that method: if the test passes, the internal state of std::cin is set to ios_base::goodbit; if it fails, the internal state is set to one of the error flags (eofbit, failbit or badbit).
Depending on the exception mask for std::cin, if the internal test fails, an exception may be thrown.
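For illustration, a minimal sketch of that exception mask (an istringstream stands in for std::cin): with failbit in the mask, a failed extraction throws std::ios_base::failure instead of silently setting flags.

#include <iostream>
#include <sstream>

int main() {
    std::istringstream in("abc");
    in.exceptions(std::ios_base::failbit);  // throw when failbit gets set
    int value = 0;
    try {
        in >> value;                        // "abc" is not an int
    } catch (const std::ios_base::failure& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}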
From your quote:
When we use an istream as a condition, the effect is to test the state of the stream.
This basically mean that:
if (std::cin) { }
Is equivalent to:
if (!std::cin.fail()) { }
And std::cin.fail() checks for failbit or badbit. This means that while (std::cin >> value) { } does not test the eofbit flag by itself and will only fail when the input cannot be converted to an integer value.
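A quick sketch demonstrating that last point: eofbit on its own does not make the stream test false.

#include <iostream>
#include <sstream>

int main() {
    std::istringstream in("42");
    int value = 0;
    in >> value;  // reads 42 and hits the end of the buffer: eofbit is set
    std::cout << in.eof() << ' '
              << static_cast<bool>(in) << '\n';  // prints "1 1": eof, yet not failed
}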
does std::cin read input into value first and then test the validity of std::cin, or does it test std::cin first and then decide whether to read into value
cin first tries to read an int from the standard input, provided cin is in a good state; if the read fails, it puts the stream into a failure state. Regardless of the outcome, it returns the stream itself (i.e. the "left operand", cin), which is what allows you to check for success or failure.
If you wanted to explicitly test the validity of the stream first and only then try to read the value, you would have:
while (cin && cin >> value)
but it's pretty redundant, since, as I've told you, cin will not even try to read value if it's already in a bad state.
There are two tests.
The first test is the condition of the while statement
while(std::cin>>value){...}
This condition tests the result of calling the operator function operator>>.
The second test is a condition within the operator. If the state of the stream std::cin is good, then the function tries to read an integer from the stream. Otherwise it returns std::cin with its current erroneous state.
In the while condition there is an expression
std::cin>>value
This expression must be evaluated, so the condition tests the result of the call to operator>>.
The result of the operator is the stream std::cin, but it can be contextually converted to a bool value due to the operator
explicit operator bool() const;
which returns the state of the stream
!fail().
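A minimal sketch of what that contextual conversion amounts to (an istringstream stands in for std::cin):

#include <iostream>
#include <sstream>

int main() {
    std::istringstream in("7");
    int value = 0;
    in >> value;                              // the call happens first
    bool ok = static_cast<bool>(in);          // then operator bool() is applied,
    std::cout << (ok == !in.fail()) << '\n';  // which is exactly !fail(): prints 1
}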
I assume your "value" is, for example, an int.
The stream tries to read input until the next whitespace.
if eof is found ... -> then the state will be set to "eof", >> will return the stream and the boolean evaluation of the stream will return false
if an error (I/O for example) happens during the reading process, the state will be set to "bad", >> will return the stream and the boolean evaluation of the stream will return false
if whitespace has been found, then a conversion from the read characters to int (the above assumption) will be attempted. If it fails (because the read input is for example: "xx" and not a number) the state of the stream will be set to "fail". >> will return the stream and the boolean evaluation of the stream will return false
if we are so far down the chain, eof was not found, no IO error (or other) happened, and the characters -> int conversion was successful. >> will return the stream and the boolean evaluation of the stream will return true.
And your "value" will contain the appropriate value
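To make the chain above concrete, here is a small sketch that inspects the flags after each kind of extraction (string streams stand in for std::cin, and value is an int as assumed):

#include <iostream>
#include <sstream>

static void report(const std::istream& in) {
    std::cout << "good=" << in.good() << " eof=" << in.eof()
              << " fail=" << in.fail() << " bad=" << in.bad() << '\n';
}

int main() {
    int value = 0;
    std::istringstream ok("42"), bad_input("xx");
    ok >> value;         // conversion succeeds; eof reached while scanning
    report(ok);          // good=0 eof=1 fail=0 bad=0
    bad_input >> value;  // "xx" cannot be converted to int: failbit
    report(bad_input);   // good=0 eof=0 fail=1 bad=0
}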
Presumably you wouldn't have any confusion with a simple function call:
SomeReturnType some_function(int&);
while (some_function(value)) { ... }
The above code will repeatedly call some_function until the return value from the function call, interpreted as a boolean, is false. The function is called for each step in the loop. Whether the function changes the value of value is up to the function. It certainly can do so, and presumably will do so (but that's an issue for the designer of the function).
The loop while (std::cin >> value) {...} is completely equivalent to while (std::cin.operator>>(value)) {...}. This is just a function call to the member function std::istream::operator>>(int&).
The operator first reads the value and then returns a reference to the object. The while statement first calls that operator and then tests the returned value.
Consider the following simple example
#include <string>
#include <sstream>
#include <iomanip>
using namespace std;
int main() {
    string str = "string";
    istringstream is(str);
    is >> setw(6) >> str;
    return is.eof();
}
At first sight, since the explicit width is specified by the setw manipulator, I'd expect the >> operator to finish reading the string after successfully extracting the requested number of characters from the input stream. I don't see any immediate reason for it to try to extract the seventh character, which means that I don't expect the stream to enter the eof state.
When I run this example under MSVC++, it works as I expect it to: the stream remains in good state after reading. However, in GCC the behavior is different: the stream ends up in eof state.
The language standard gives the following list of completion conditions for this version of the >> operator:
n characters are stored;
end-of-file occurs on the input sequence;
isspace(c,is.getloc()) is true for the next available input character c.
Given the above, I don't see any reason for the >> operator to drive the stream into the eof state in the above code.
However, this is what the >> operator implementation in the GCC standard library looks like:
...
__int_type __c = __in.rdbuf()->sgetc();
while (__extracted < __n
       && !_Traits::eq_int_type(__c, __eof)
       && !__ct.is(__ctype_base::space,
                   _Traits::to_char_type(__c)))
  {
    if (__len == sizeof(__buf) / sizeof(_CharT))
      {
        __str.append(__buf, sizeof(__buf) / sizeof(_CharT));
        __len = 0;
      }
    __buf[__len++] = _Traits::to_char_type(__c);
    ++__extracted;
    __c = __in.rdbuf()->snextc();
  }
__str.append(__buf, __len);
if (_Traits::eq_int_type(__c, __eof))
  __err |= __ios_base::eofbit;
__in.width(0);
...
As you can see, at the end of each successful iteration, it attempts to prepare the next __c character for the next iteration, even though the next iteration might never occur. And after the cycle it analyzes the last value of that __c character and sets the eofbit accordingly.
So, my question is: triggering the eof stream state in the above situation, as GCC does - is it legal from the standard point of view? I don't see it explicitly specified in the document. Is both MSVC's and GCC's behavior compliant? Or is only one of them behaving correctly?
The definition for that particular operator>> is not relevant to the setting of the eofbit, as it only describes when the operation terminates, but not what triggers a particular bit.
The description for the eofbit in the standard (draft) says:
eofbit - indicates that an input operation reached the end of an input sequence;
I guess here it depends on how you want to interpret "reached". Note that the gcc implementation correctly does not set failbit, which is defined as
failbit - indicates that an input operation failed to read the expected characters, or
that an output operation failed to generate the desired characters.
So I think eofbit does not necessarily mean that the end of file impeded the extraction of any new characters, just that the end of file has been "reached".
I can't seem to find a more accurate description for "reached", so I guess that would be implementation defined. If this logic is correct, then both MSVC and gcc behaviors are correct.
EDIT: In particular, it seems that eofbit gets set when sgetc() would return eof. This is described both in the istreambuf_iterator section and in the basic_istream::sentry section. So now the question is: when is the current position of the stream allowed to advance?
FINAL EDIT: It turns out that g++ probably has the correct behavior.
Every character scan passes through <locale>, in order to allow different character sets, money formats, time descriptions and number formats to be parsed. While there does not seem to be a thorough description of how operator>> works for strings, there are very specific descriptions of how the do_get functions for numbers, time and money are supposed to operate. You can find them from page 687 of the draft onward.
All of these start off by reading a ctype (the "global" version of a character, as read through locales) from an istreambuf_iterator (for numbers, you can find the call definitions at page 1018 of the draft). Then the ctype is processed, and finally the iterator is advanced.
So, in general, this requires the internal iterator to always point to the next character after the last one read; if that were not the case, you could in theory extract more than you wanted:
string str = "strin1";
istringstream is(str);
is >> setw(6) >> str;
int x;
is >> x;
If the current character of is after the extraction into str were not the eof, then the standard would require that x gets the value 1, since for numeric extraction the standard explicitly requires that the iterator be advanced after the first read.
Since this does not make much sense, and given that all complex extractions described in the standard behave in the same way, it makes sense that the same would happen for strings. Thus, as the position of is falls on the eof after reading 6 characters, the eofbit needs to be set.
This code loops forever:
#include <iostream>
#include <fstream>
#include <sstream>

int main(int argc, char *argv[])
{
    std::ifstream f(argv[1]);
    std::ostringstream ostr;
    while (f && !f.eof())
    {
        char b[5000];
        std::size_t read = f.readsome(b, sizeof b);
        std::cerr << "Read: " << read << " bytes" << std::endl;
        ostr.write(b, read);
    }
}
It's because readsome is never setting eofbit.
cplusplus.com says:
Errors are signaled by modifying the internal state flags:
eofbit - The get pointer is at the end of the stream buffer's internal input array when the function is called, meaning that there are no positions to be read in the internal buffer (which may or may not be the end of the input sequence). This happens when rdbuf()->in_avail() would return -1 before the first character is extracted.
failbit - The stream was at the end of the source of characters before the function was called.
badbit - An error other than the above happened.
Almost the same, the standard says:
[C++11: 27.7.2.3]: streamsize readsome(char_type* s, streamsize n);
32. Effects: Behaves as an unformatted input function (as described in 27.7.2.3, paragraph 1). After constructing a sentry object, if !good() calls setstate(failbit) which may throw an exception, and return. Otherwise extracts characters and stores them into successive locations of an array whose first element is designated by s.
If rdbuf()->in_avail() == -1, calls setstate(eofbit) (which may throw ios_base::failure (27.5.5.4)), and extracts no characters;
If rdbuf()->in_avail() == 0, extracts no characters;
If rdbuf()->in_avail() > 0, extracts min(rdbuf()->in_avail(), n).
33. Returns: The number of characters extracted.
That the in_avail() == 0 condition is a no-op implies that ifstream::readsome itself is a no-op if the stream buffer is empty, but the in_avail() == -1 condition implies that it will set eofbit when some other operation has led to in_avail() == -1.
This seems like an inconsistency, even despite the "some" nature of readsome.
So what are the semantics of readsome and eof? Have I interpreted them correctly? Are they an example of poor design in the streams library?
(Stolen from the [IMO] invalid libstdc++ bug 52169.)
I think this is a customization point, not really used by the default stream implementations.
in_avail() returns the number of chars it can see in the internal buffer, if any. Otherwise it calls showmanyc() to try to detect if chars are known to be available elsewhere, so a buffer fill request is guaranteed to succeed.
In turn, showmanyc() will return the number of chars it knows about, if any, or -1 if it knows that a read will fail, or 0 if it doesn't have a clue.
The default implementation (basic_streambuf) always returns 0, so that is what you get unless you have a stream with some other streambuf overriding showmanyc.
Your loop is essentially read-as-many-chars-as-you-know-is-safe, and it gets stuck when that is zero (meaning "not sure").
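A minimal sketch of that customization point (the class name MemBuf and its setup are made up for illustration): once the internal buffer is exhausted, a showmanyc() override that returns -1 lets readsome() report eof instead of spinning forever.

#include <iostream>
#include <streambuf>

class MemBuf : public std::streambuf {
public:
    MemBuf(char* s, std::streamsize n) { setg(s, s, s + n); }
protected:
    std::streamsize showmanyc() override {
        return -1;  // internal buffer exhausted: we know a read will fail
    }
};

int main() {
    char text[] = "hello";
    MemBuf buf(text, 5);
    std::istream in(&buf);
    char b[16];
    std::cout << in.readsome(b, sizeof b);         // 5: in_avail() sees the buffer
    std::cout << ' ' << in.readsome(b, sizeof b);  // 0: in_avail() is now -1...
    std::cout << ' ' << in.eof() << '\n';          // ...so eofbit is set: "5 0 1"
}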
I don't think that readsome() is meant for what you're trying to do (read from a file on disk)... from cplusplus.com:
The function is intended to be used to read binary data from certain types of asynchronic sources that may wait for more characters, since it stops reading when the local buffer exhausts, avoiding potential unexpected delays.
So it sounds like readsome() is intended for streams from a network socket or something like that, and you probably want to just use read().
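For reference, a minimal sketch of that plain read() approach, using gcount() to handle the final partial block (the path "file.bin" is a placeholder):

#include <fstream>
#include <iostream>
#include <sstream>

int main() {
    std::ifstream f("file.bin", std::ios::binary);
    std::ostringstream ostr;
    char b[5000];
    while (f.read(b, sizeof b) || f.gcount() > 0) {  // the last read may be partial
        ostr.write(b, f.gcount());
    }
    std::cout << ostr.str().size() << " bytes read\n";
}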
If no character is available (i.e. gptr() == egptr() for the std::streambuf), the virtual member function showmanyc() is called. I could have an implementation of showmanyc() which returns an error code; why that may be useful is a different question. However, this could set eofbit. Of course, in_avail() is meant not to fail and not to block, and to just return the characters known to be available. That is, the loop you have above is essentially guaranteed to be an infinite loop unless you have a rather odd stream buffer.
Others have answered why readsome won't set eofbit by design. I will suggest a way to read some bytes until eof, without setting failbit, in an intuitive way, in the same way you were trying to use readsome. This is the result of research in another question.
#include <iostream>
#include <fstream>
#include <sstream>

using namespace std;

streamsize Read(istream &stream, char *buffer, streamsize count)
{
    // This consistently fails on gcc (linux) 4.8.1 with failbit set on read
    // failure. This apparently never fails on VS2010 and VS2013 (Windows 7).
    streamsize reads = stream.rdbuf()->sgetn(buffer, count);

    // This rarely sets failbit on VS2010 and VS2013 (Windows 7) on read
    // failure of the previous sgetn().
    stream.rdstate();

    // On gcc (linux) 4.8.1 and VS2010/VS2013 (Windows 7) this consistently
    // sets eofbit when the stream is at EOF as a consequence of sgetn(). It
    // should also throw if exceptions are set, or return otherwise, and the
    // previous rdstate() restored a failbit on Windows. On Windows, most of
    // the time it sets eofbit even on a real read failure.
    stream.peek();
    return reads;
}

int main(int argc, char *argv[])
{
    ifstream instream("filepath", ios_base::in | ios_base::binary);
    while (!instream.eof())
    {
        char buffer[0x4000];
        size_t read = Read(instream, buffer, sizeof(buffer));
        // Do something with buffer
    }
}
}