I am adding values into a combo box as strings; my code is below.
Platform: Windows XP. I am using Microsoft Visual Studio 2003.
Language: C++.
error encountered -> "Run-Time Check Failure #2 - Stack around the variable 'buffer' was corrupted."
If I increase the size of the buffer to, say, 4 or more, I don't get this error.
My question is not how to fix the error; I am wondering why I get it when the buffer size is 2.
My reasoning was that a buffer of size 2 should be enough: char[0] would store the value and char[1] the null terminating character.
Since a char can store values from 0 to 255, I thought this should be OK, as my inserted values run from 1 to 63 and then from 183 to 200.
CComboBox m_select_combo;
const unsigned int max_num_of_values = 63;
m_select_combo.AddString( "ALL" );
for( unsigned int i = 1; i <= max_num_of_values ; ++i )
{
char buffer[2];
std::string prn_select_c = itoa( i, buffer, 10 );
m_select_combo.AddString( prn_select_c.c_str() );
}
const unsigned int max_num_of_high_sats = 202 ;
for( unsigned int i = 183; i <= max_num_of_high_sats ; ++i )
{
char buffer[2];
std::string prn_select_c = itoa( i, buffer, 10 );
m_select_combo.AddString( prn_select_c.c_str() );
}
Could you guys please give me an idea as to what I'm not understanding?
itoa() zero-terminates its output, so when you call itoa(63, buffer, 10) it writes three characters: '6', '3' and the terminating '\0'. But your buffer is only two characters long.
The itoa() function is best avoided in favour of snprintf() or boost::lexical_cast<>().
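For instance, a minimal sketch of the loop body without itoa, reusing the question's i and m_select_combo (on Visual Studio 2003 the function is spelled _snprintf, which is an assumption about that toolchain):
char buffer[12];                              // comfortably holds any 32-bit value plus the '\0'
_snprintf( buffer, sizeof(buffer), "%u", i ); // plain snprintf on a standard-conforming compiler
m_select_combo.AddString( buffer );
Or, with Boost:
std::string prn_select_c = boost::lexical_cast<std::string>( i );  // needs <boost/lexical_cast.hpp>
m_select_combo.AddString( prn_select_c.c_str() );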
You should read the documentation for itoa.
Consider the following loop:
for( unsigned int i = 183; i <= max_num_of_high_sats ; ++i )
{
char buffer[2];
std::string prn_select_c = itoa( i, buffer, 10 );
m_select_combo.AddString( prn_select_c.c_str() );
}
The first iteration converts the integer 183 to the 3 character string "183", plus a terminating null character. That's 4 bytes, which you are trying to cram into a two byte array. The docs tell you specifically to make sure your buffer is large enough to hold any value; in this case it should be at least the number of digits in max_num_of_high_sats long, plus one for the terminating null.
You might as well make it large enough to hold the maximum value you can store in an unsigned int, which would be 11 (e.g. 10 digits for 4294967295 plus a terminating null).
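Applied to the question's second loop, a possible fix might look like this (the 12-byte buffer is an assumption that covers 10 digits, an optional sign and the terminating null):
for( unsigned int i = 183; i <= max_num_of_high_sats ; ++i )
{
    char buffer[12];                      // large enough for any value an unsigned int can hold
    itoa( i, buffer, 10 );                // writes the digits plus the terminating '\0'
    m_select_combo.AddString( buffer );
}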
The itoa function converts an int to a C-style string in the base given by the third parameter.
As an example, think of printing the int 63 with printf: you need two ASCII bytes, one to store the character '6' and the other to store the character '3', and a third byte must hold the terminating null. In your case the largest value has three digits, so you need 4 bytes in the string.
You are converting an integer to ASCII; that is what itoa does. A number like 183 is four chars as a string: '1', '8', '3', '\0'.
Each character takes one byte, for example character '1' is the value 0x31 in ASCII.
I am using Visual C++ 6.
I am trying to convert a character array (built from single-quoted character literals) into an integer, increment the value by 1, then store the result back into a different character array.
But I keep getting an unexpected value when converting back to characters.
Here is my code
char char_array[4];
char_array[0] = '1';
char_array[1] = '2';
char_array[2] = '3';
char_array[3] = '\0'; //Terminating character
int my_number = atoi(char_array);
printf("my_number = %d" , my_number); // output is 123
my_number++; // works and my_number is incremented =124
printf("now: my_number = %d" , my_number); // output is 124
char result[4]; //declared to store the result
result = itoa(my_number); // Output is unexpected.
printf("%c", result[0]); // Output is 2 instead of 1
printf("%c", result[1]); // Output is 2
printf("%c", result[2]); // Output as 3 instead of 4
It seems that the function itoa() somehow knows the original value 123 and in some weird way knows that I have incremented that value, but the addition is done to the wrong digit: instead of adding 1 to the least significant digit, the addition is done to the most significant digit.
I find it really difficult to believe that your compiler is letting this code through:
char result[4]; //declared to store the result
result = itoa(my_number); // Output is unexpected.
For one reason, you're attempting to reseat an array, which isn't allowed. For another, itoa() normally takes three arguments. Its prototype should look like:
char *itoa(int value, char * str, int base);
So you should be calling it as:
char result[4];
itoa(my_number, result, 10);
Or, if you'd like to use portable functions that don't have possible buffer overflows:
char result[4];
snprintf(result, 4, "%d", my_number);
itoa is not a standard C library function.
You can use
char result[sizeof(int) * CHAR_BIT / 10 * 3 + 4];   // CHAR_BIT comes from <limits.h>
// 10 bits are roughly 3 decimal digits, so this covers the largest int on a 4-byte machine;
// the extra 4 leaves room for '-', '\0', a rounding digit and a spare character
sprintf(result, "%d", my_number);
If you still want to use itoa, consult the documentation of this function (in library/compiler documentation)
my_number has been incremented, so whatever conversion you run on it (itoa() or anything else) will of course see the new value of my_number, which is 124.
Check the code below:
#include<stdio.h>
#include<stdlib.h>
#include<string.h>
int main()
{
char char_array[4];
char_array[0] = '1';
char_array[1] = '2';
char_array[2] = '3';
char_array[3] = '\0'; //Terminating character
int my_number = atoi(char_array);
printf("my_number = %d" , my_number); // output is 123
my_number++; // works and my_number is incremented =124
printf("now: my_number = %d" , my_number); // output is 124
char result[4]; //declared to store the result
snprintf(result,4,"%d",my_number);
printf("%c", result[0]);
printf("%c", result[1]);
printf("%c", result[2]);
return 0;
}
First, itoa(my_number) with a single argument may be wrong; the only form I know of is:
char * itoa ( int value, char * str, int base );
str should be an array long enough to contain any possible value:
(sizeof(int)*8+1) characters for radix 2, i.e. 17 bytes on 16-bit platforms and
33 on 32-bit platforms.
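Following that sizing rule, a hedged sketch of a correct call for this example would be:
char str[33];                  // large enough for any base down to 2 on a 32-bit int
itoa( my_number, str, 10 );    // str now holds "124" followed by '\0'
printf( "%s\n", str );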
I want to convert a number into a char array that is not zero-terminated, without using any predefined C / C++ function (e.g. itoa).
I don't have much space left (I'm working on an embedded application with 5KB program space in total, of which I'm already using 4.862KB), and my output function doesn't accept zero-terminated char arrays; it only takes an array and a length.
EDIT 1: I'm not working for a company :P
EDIT 2: By "doesn't accept" I mean that it simply won't send if there's a 0 byte inside the array.
EDIT 3: I solved it using a modified version of the method from 'Vlad from Moscow' below. Still thanks to all of you, who tried helping me :)
EDIT 4: If anybody cares: The project is setting an AVR based alarm clock using bluetooth.
As my hourly rate is currently zero dollars (I am unemployed), I will show a possible approach. :)
In my opinion the simplest way is to write a recursive function. For example:
#include <iostream>
size_t my_itoa( char *s, unsigned int n )
{
    const unsigned base = 10;
    unsigned digit = n % base;              // least significant digit of n

    size_t i = 0;
    if ( n /= base ) i += my_itoa( s, n );  // emit the more significant digits first

    s[i++] = digit + '0';                   // then append this digit
    return i;                               // number of characters written, no terminator
}

int main()
{
    unsigned x = 12345;
    char s[10];

    std::cout.write( s, my_itoa( s, x ) );

    return 0;
}
The output is
12345
Though I used unsigned int, you can modify the function so that it accepts objects of type int.
If you need to allocate the character array in the function then it will be even simpler and can be non-recursive.
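For illustration, one possible non-recursive sketch along the same lines (the name my_itoa_iter and the 10-byte temporary are assumptions; like the recursive version it returns the length and writes no terminator):
#include <iostream>

size_t my_itoa_iter( char *s, unsigned int n )
{
    char tmp[10];                                        // 10 digits cover any 32-bit unsigned value
    size_t len = 0;
    do
    {
        tmp[len++] = static_cast<char>( n % 10 + '0' );  // collect digits, least significant first
        n /= 10;
    } while ( n != 0 );

    for ( size_t i = 0; i < len; i++ )                   // copy them back in the correct order
        s[i] = tmp[len - 1 - i];

    return len;
}

int main()
{
    char s[10];
    std::cout.write( s, my_itoa_iter( s, 12345 ) );      // prints 12345
    return 0;
}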
A general algorithm:
Personally I can't recommend a recursive algorithm unless I know the stack limits the embedded system imposes (PICs, for example, have a very limited stack depth) and what your current stack usage is.
orignumber = 145976
newnumber = orignumber
remainder = newnumber % 10                              answer: 6
digit = remainder + 30 hex -> store into array          answer: 0x36, ASCII '6'
increment array location
newnumber = newnumber / 10                              answer: 14597
remainder = newnumber % 10                              answer: 7
digit = remainder + 30 hex -> store into array          answer: 0x37, ASCII '7'
increment array location
newnumber = newnumber / 10                              answer: 1459
remainder = newnumber % 10                              answer: 9
digit = remainder + 30 hex -> store into array          answer: 0x39, ASCII '9'
increment array location
newnumber = newnumber / 10                              answer: 145
repeat these 4 steps while newnumber > 0
Array will contain: 0x36 0x37 0x39 0x35 0x34 0x31
null-terminate the array; null termination allows easy calculation of string length and string reversal.
Array will contain: 0x36 0x37 0x39 0x35 0x34 0x31 0x00
finally reverse the array
Array will contain: 0x31 0x34 0x35 0x39 0x37 0x36 0x00
Another option is to fill the array from the end, decrementing a pointer, which avoids reversing the string; a sketch of that variant follows below.
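A minimal sketch of the fill-from-the-end variant (the function name to_digits and the caller-supplied capacity are assumptions; it returns the digit count and writes no terminator, matching the question's requirement):
// Writes the decimal digits of n into the end of buf[0..cap) and returns how many
// digits were written; the digits start at buf + cap - length. Assumes cap >= 10,
// which is enough for any 32-bit unsigned value.
size_t to_digits( unsigned int n, char *buf, size_t cap )
{
    char *p = buf + cap;             // start one past the end and work backwards
    do
    {
        *--p = static_cast<char>( n % 10 + '0' );
        n /= 10;
    } while ( n != 0 );
    return static_cast<size_t>( buf + cap - p );
}
The caller can then hand buf + cap - length and length straight to the output routine, so neither a reversal nor a terminator is needed.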
I'm able to convert most things without a problem (with a Google search if needed), but I cannot figure this one out.
I have a char array like:
char map[40] = {0,0,0,0,0,1,1,0,0,0,1,0,1... etc
I am trying to convert each char to the correct integer, but no matter what I try, I end up with the ASCII value: 48 / 49.
I've tried quite a few different combinations of conversions and casts, but I cannot end up with a 0 or a 1, even as a char.
Can anyone help me out with this?
Thanks.
The ASCII range of the characters representing digits is 48 to 57 (for '0' to '9'). You should subtract the base value 48 from the character to get its integer value.
char map[40] = {'0','0','0','0','0','1','1','0','0','0','1','0','1'...};
int integerMap[40];
for ( int i = 0; i < 40; i++ )
{
    integerMap[i] = map[i] - 48;
    // OR
    //integerMap[i] = map[i] - '0';
}
If the char is a literal, e.g. '0' (note the quotes), to convert to an int you'd have to do:
int i = map[0] - '0';
And of course a similar operation across your map array. It would also be prudent to error-check so you know the resulting int is in the range 0-9.
The reason you're getting 48/49 is because, as you noted, direct conversion of a literal like int i = (int)map[0]; gives the ASCII value of the char.
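A possible sketch of that range check, reusing the map and integerMap arrays declared in the earlier answer's snippet (std::isdigit comes from <cctype>):
#include <cctype>

for ( int i = 0; i < 40; i++ )
{
    if ( std::isdigit( static_cast<unsigned char>( map[i] ) ) )
        integerMap[i] = map[i] - '0';   // '0'..'9' become 0..9
    else
        integerMap[i] = -1;             // flag anything that was not a digit
}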
I was having a small play around with C++ today and came across something I thought was odd, though it is perhaps more likely down to a misunderstanding on my part and a lack of pure C coding recently.
What I was looking to do originally was convert a double into an array of unsigned chars. My understanding was that the 64 bits of the double (sizeof(double) is 8) would then be represented as 8 8-bit chars. To do this I was using reinterpret_cast.
So here's some code to convert from double to char array, or at least that's what I thought it was doing. The problem was that strlen returned 15 instead of 8, and I wasn't sure why.
double d = 0.3;
unsigned char *c = reinterpret_cast<unsigned char*> ( &d );
std::cout << strlen( (char*)c ) << std::endl;
So the strlen was my first issue. But then I tried the following and found that it returned 11, 19, 27, 35. The difference between these numbers is 8, so on some level something right is going on. But why does this not return 15, 15, 15, 15 (as it was returning 15 in the code above)?
double d = 0.3;
double d1 = 0.3;
double d2 = 0.3;
double d3 = 0.3;
unsigned char *c_d = reinterpret_cast<unsigned char*> ( &d );
unsigned char *c_d1 = reinterpret_cast<unsigned char*> ( &d1 );
unsigned char *c_d2 = reinterpret_cast<unsigned char*> ( &d2 );
unsigned char *c_d3 = reinterpret_cast<unsigned char*> ( &d3 );
std::cout << strlen( (char*)c_d ) << std::endl;
std::cout << strlen( (char*)c_d1 ) << std::endl;
std::cout << strlen( (char*)c_d2 ) << std::endl;
std::cout << strlen( (char*)c_d3 ) << std::endl;
So I looked at the addresses of the chars, and they are:
0x28fec4
0x28fec0
0x28febc
0x28feb8
Now this makes sense given that the size of an unsigned char* on my system is 4 bytes, but I thought the correct amount of memory would be allocated by the cast; otherwise reinterpret_cast seems like a pretty dangerous thing... Furthermore, if I do
for (int i = 0; i < 4; ++i) {
double d = 0.3;
unsigned char *c = reinterpret_cast<unsigned char*> ( &d );
std::cout << strlen( (char*)c ) << std::endl;
}
This prints 11, 11, 11, 11!
So what is going on here? Clearly memory is getting overwritten in places, and reinterpret_cast is not working as I thought it would (i.e. I'm using it wrong). Having used strings for so long in C++, you forget these things when you go back to raw char arrays.
So I suppose this is a 3 part question.
Why was strlen initially returning 15?
Why did the 4 strlen calls grow in size?
Why did the loop return 11, 11, 11, 11?
Thanks.
strlen works by iterating through the array that it assumes the passed const char* points at until it finds a char with value 0. This is the null-terminating character that is automatically added to the end of string literals. The bytes that make up the value representation of your double do not end with a null character. The strlen will just keep going past the end of your double object until it finds a byte with value 0.
Consider the string literal "Hello". In memory, with an ASCII compatible execution character set, this will be stored as the following bytes (in hexadecimal):
48 65 6c 6c 6f 00
strlen would read through each of them until it found the byte with value 0 and report how many bytes it has seen so far.
The IEEE 754 double precision representation of 0.3 is:
3F D3 33 33 33 33 33 33
As you can see, there is no byte with value 0, so strlen just won't know when to stop.
Whatever value the function returns is probably just how far it got until it found a 0 in memory, but you've already hit undefined behaviour and so making any guesses about it is pointless.
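If the goal is simply to look at the 8 bytes of the double, a hedged sketch is to loop over sizeof(double) explicitly instead of relying on a terminator:
#include <cstdio>

int main()
{
    double d = 0.3;
    const unsigned char *bytes = reinterpret_cast<const unsigned char*>( &d );
    for ( std::size_t i = 0; i < sizeof d; ++i )    // exactly 8 iterations, no '\0' needed
        std::printf( "%02x ", bytes[i] );           // on a little-endian machine the table above prints in reverse
    std::printf( "\n" );
    return 0;
}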
Your problem is your use of strlen( (char*)c ), because strlen expects a pointer to a null-terminated character string.
It seems like you're expecting some sort of "boundary" between the 8th and 9th byte, since those first 8 bytes were originally a double.
That information is lost once you've cast that memory to a char*. It becomes the responsibility of your code to know how many chars are valid.
A couple of things:
sizeof(double) probably isn't 4. It's usually 8. Use the operator instead of a hard-coded assumption.
The pointer reinterpret_cast<unsigned char*>(&d) does not indicate a pointer to a null-terminated "string". strlen operates by iterating until it finds a null. You're into undefined behavior there.
I'm not using the format specifiers in C correctly. A few lines of code:
int main()
{
char dest[]="stack";
unsigned short val = 500;
char c = 'a';
char* final = (char*) malloc(strlen(dest) + 6);
snprintf(final, strlen(dest)+6, "%c%c%hd%c%c%s", c, c, val, c, c, dest);
printf("%s\n", final);
return 0;
}
What I want is to copy:
final[0] = a random char
final[1] = a random char
final[2] and final[3] = the short value
final[4] = another char ....
My problem is that I want to copy the two bytes of the short int into 2 bytes of the final array.
Thanks.
I'm confused - the problem is that you are saying strlen(dest)+6 which limits the length of the final string to 10 chars (plus a null terminator). If you say strlen(dest)+8 then there will be enough space for the full string.
Update
Even though a short may only be 2 bytes in size, when it is printed as a string each character will take up a byte. So that means it can require up to 5 bytes of space to write a short to a string, if you are writing a number above 10000.
Now, if you write the short to a string as a hexadecimal number using the %x format specifier, it will take up no more than 4 characters.
You need to allocate space for 13 characters - not 11. Don't forget the terminating NULL.
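Putting those counts together, a corrected sketch of the allocation and call might be (the +8 follows the arithmetic above):
char* final = (char*) malloc( strlen(dest) + 8 );   // 2 + 3 + 2 + strlen(dest) + '\0' = 13 for "stack"
snprintf( final, strlen(dest) + 8, "%c%c%hd%c%c%s", c, c, val, c, c, dest );
printf( "%s\n", final );                            // e.g. "aa500aastack"
free( final );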
When formatted, the number (500) takes up three characters, not one. So your snprintf should be given a final length of strlen(dest)+5+3; then also fix your malloc call to match. If you want to compute the string length of the number, you can do that with a call like strlen(itoa(val)) (given a suitable buffer for itoa). Also, don't forget the '\0' at the end of dest: strlen does not count it, which is why you need the extra byte.
The simple answer is that you only allocated enough space for strlen(dest) + 6 characters, when in reality you need 8 extra: 2 chars + 3 chars for the number + 2 chars after + dest (5 chars) + the terminator = 13 chars, but you allocated only 11.
Unsigned shorts can take up to 5 characters, right? (0 - 65535)
Seems like you'd need to allocate 5 characters for your unsigned short to cover all of the values.
Which would point to using this:
char* final = (char*) malloc(strlen(dest) + 10);
You lose one byte because you think the short variable takes 2 bytes, but as text it takes three: one for each digit character ('5', '0', '0'). You also need a '\0' terminator (+1 byte).
==> You need strlen(dest) + 8
Use 8 instead of 6 on:
char* final = (char*) malloc(strlen(dest) + 6);
and
snprintf(final, strlen(dest)+6, "%c%c%hd%c%c%s", c, c, val, c, c, dest);
Seems like the primary misunderstanding is that a "2-byte" short can't be represented on-screen as 2 1-byte characters.
First, leave enough room:
char* final = (char*) malloc(strlen(dest) + 9);
Not every value of a 1-byte character is printable. If you want to display this on screen and have it be readable, you'll have to encode the 2-byte short as 4 hex characters, for example:
// as hex, 4 characters (final is a pointer, so pass the allocated size rather than sizeof(final))
snprintf(final, strlen(dest) + 9, "%c%c%04x%c%c%s", c, c, val, c, c, dest);
If you are writing this to a file, that's OK, and you might try the following:
// print raw bytes: upper byte, then lower byte
snprintf(final, strlen(dest) + 9, "%c%c%c%c%c%c%s", c, c, (val >> 8) & 0xFF, val & 0xFF, c, c, dest);
But that won't make sense to a human looking at it, and it is sensitive to endianness. I'd strongly recommend against it.