I am writing a program that accepts a text file of hex values. I store these hex values in a vector<string> and then use stol to convert each hex string to an integer, which I store in a new vector<int>.
vector<string> flir_times;
vector<int> flir_dec;
for (int i = 0; i < flir_times.size(); i++) {
    int x = stol(flir_times[i], nullptr, 16);
    flir_dec.push_back(x);
    cout << flir_dec[i] << endl;
}
The program was originally working, but today it doesn't seem to be converting some new hex values correctly. Here is a short snippet of the hex values that need to be converted:
These are the values that the program should convert them to:
However, when I run my program it converts the hex values into large negative numbers and then crashes. Does anyone have any idea what could cause the program to convert the hex numbers incorrectly and then crash?
Your program continues to work correctly; it's just that the newly added hex values you are trying to read are representations of negative 32-bit integers. For example, the most significant byte of A4B844A2 is 10100100. It has a 1 in the most significant "sign" bit, so interpreted as a signed 32-bit integer the number is negative.
Switch to unsigned numbers, and use std::stoul to parse the input, to fix this problem:
vector<string> flir_times;
vector<unsigned> flir_dec;
for (size_t i = 0; i < flir_times.size(); i++) {
    unsigned x = stoul(flir_times[i], nullptr, 16);
    flir_dec.push_back(x);
    cout << flir_dec[i] << endl;
}
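As for the crash: on platforms where long is 32 bits (MSVC, for example), std::stol throws an unhandled std::out_of_range for values like A4B844A2 that do not fit. A minimal sketch of that failure mode, assuming a 32-bit long:

#include <iostream>
#include <stdexcept>
#include <string>

int main() {
    try {
        // 0xA4B844A2 exceeds LONG_MAX when long is 32 bits
        long v = std::stol("A4B844A2", nullptr, 16);
        std::cout << v << '\n'; // succeeds where long is 64 bits
    } catch (const std::out_of_range&) {
        std::cout << "out of range for long -- use std::stoul\n";
    }
    return 0;
}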
I just learned some simple encryption today and wrote a simple program to convert my text to 10-bit binary. I'm not sure if I'm doing it correctly, but the commented section of the code and the actual code produce two different 10-bit outputs. I am confused. Can someone explain it to me in layman's terms?
#include <iostream>
#include <string>
#include <bitset>
#include "md5.h"
using namespace std;

int main(int argc, char *argv[])
{
    string input = "";
    cout << "Please enter a string:\n>";
    getline(cin, input);
    cout << "You entered: " << input << endl;
    cout << "md5 of " << input << ": " << md5("input") << endl; // note: this hashes the literal "input", not the variable
    cout << "Binary is: ";
    // cout << bitset<10>(input[1]);
    for (int i = 0; i < 5; i++)
        cout << bitset<2>(input[i]);
    cout << endl;
    return 0;
}
tl;dr: A char is 8 bits, and the string's operator[] returns individual chars; you were accessing successive chars and taking the two least significant bits of each. The solution comes from treating a char as exactly that: 8 bits. With some clever bit manipulation, we can achieve the desired effect.
The problem
While I still have not completely understood what you were trying to do, I can point out a problem with this code. By calling
cout << bitset<10>(input[1]);
you are constructing a 10-bit bitset from the value of the second character (input[0] would be the first character).
Now, the loop does something entirely different:
for (int i=0; i<5; i++)
cout << bitset<2>(input[i]);
It uses the i-th character of the string and constructs a bitset from it.
The reference for the bitset constructor tells us that the char is converted to an unsigned long long, which is then used to initialize the bitset.
Okay, so let's see how that works with a simple input string like
std::string input = "aaaaa";
The first character of this string is 'a', which gives you the 8 bits '01100001' (see an ASCII table), and thus the 10-bit bitset constructed from it prints
0001100001
where we can see the clear zero-padding on the left (more significant) side.
On the other hand, if you go through the characters with your loop, you access each character and take only its two least significant bits.
In our case of the character 'a' = '01100001', these bits are '01', so your program outputs 01 five times.
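A minimal sketch showing both constructions side by side:

#include <bitset>
#include <iostream>

int main() {
    char a = 'a';                            // 01100001 in ASCII
    std::cout << std::bitset<10>(a) << '\n'; // 0001100001 -- zero-padded to 10 bits
    std::cout << std::bitset<2>(a) << '\n';  // 01 -- only the two least significant bits
    return 0;
}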
Now, the way to fix it is to think more carefully about which bits you are actually accessing.
A possible solution
Do you want to get the first ten bits of the character string in any case?
In that case, you'd want to write something like:
std::bitset<10>(input[0]); // the two leading bits of the bitset are padded with '0'
or
for (int i = 0; i < 5; ++i) {
    char referenced = input[i/4];
    std::cout << std::bitset<2>(referenced >> (6 - (i % 4) * 2));
}
The loop was redesigned to read through the whole string sequentially, two bits at a time.
Since a char holds 8 bits, we can read four such 2-bit groups out of a single char; that is the reason for referenced.
The bit shift in the loop starts at 6, then goes to 4, 2, and 0, and then resets to 6 for the next char, and so on.
(That way, we extract the two relevant bits out of each 8-bit char.)
This type of loop will actually read through all parts of your string and do the correct constructions.
A last remark
To construct a bitset directly from your string, you would have to work with the raw bits in memory and build the bitset from those.
You could construct 8-bit bitsets from each char and append those to each other, or create a string from each 8-bit bitset, concatenate those, and then use the final string of 1s and 0s to construct one large bitset of arbitrary size.
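A sketch of that second approach, assuming a fixed five-character input (a bitset's size must be a compile-time constant):

#include <bitset>
#include <iostream>
#include <string>

int main() {
    std::string input = "aaaaa";
    std::string bits;
    for (unsigned char c : input)
        bits += std::bitset<8>(c).to_string(); // "01100001" for each 'a'
    std::bitset<40> all(bits);                 // 5 chars * 8 bits = 40 bits
    std::cout << all << '\n';
    return 0;
}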
I hope it helped.
I'm trying to work on a simple image encryption project and I have a few questions I want to ask.
Should I store each byte of data from ifstream into a character like I did in my code?
Each byte printed is a weird symbol (which is expected), but why does adding 10 (an int) to it always result in a number when printed?
#include <cstdlib>
#include <fstream>
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<char> data; // stores each byte from image.jpg
    ifstream fileIn("image.jpg", ios::binary);
    int i = 0; // used for accessing each item in the data vector
    while (fileIn) {
        // add each character from the image file into the vector
        data.push_back(fileIn.get());
        cout << "Original: " << data[i] << endl; // print each character from image.jpg
        cout << "Result after adding: " << data[i] + 10 << endl; // this line is where I need help
        i++;
        system("pause");
    }
    fileIn.close();
    system("pause");
    return 0;
}
Output:
Original: å
Result after adding: -112
Original: Æ
Result after adding: -100
Original:
Result after adding: 12
As you can see, adding 10 always results in a number. How do I increment these values correctly so that I can change them back later?
Thank you for any help.
When you do an arithmetic operation (like addition) with a value of a type that is smaller than int (like char in your case), that value is promoted to int and the operation is done on two int values.
So the expression data[i] + 10 is equivalent to static_cast<int>(data[i]) + 10.
Read more about integral promotion and arithmetic operator conversions.
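A small sketch of the promotion in action:

#include <iostream>
#include <type_traits>

int main() {
    char c = 'a';    // value 97
    auto r = c + 10; // c is promoted to int before the addition
    static_assert(std::is_same<decltype(r), int>::value, "the sum has type int");
    std::cout << r << '\n'; // prints 107, not a character
    return 0;
}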
As for how to solve your problem, first you have to make sure that the result of the operation actually fits in a char. What if the byte you have read is 127 and you add 10? Then the result is outside the range of a signed char (which seems to be what you have).
If the result is not out of bounds, then you can just cast it:
char result = static_cast<char>(data[i] + 10);
As a small side note, if you're reading binary data you are not really reading characters, so I suggest using a fixed-width integer type like int8_t or uint8_t instead of char. On supported platforms (which is just about all of them these days) they are just aliases for signed char and unsigned char (respectively), but using the aliases is more informative for the readers of your code.
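A minimal sketch of that suggestion (the byte values here are arbitrary examples): with unsigned bytes the addition wraps modulo 256, which also makes it trivially reversible, addressing the "change it back later" part of the question:

#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    std::vector<std::uint8_t> data = {0xE5, 0xC6, 0x02}; // arbitrary example bytes
    for (std::uint8_t b : data) {
        std::uint8_t enc = static_cast<std::uint8_t>(b + 10);   // wraps modulo 256
        std::uint8_t dec = static_cast<std::uint8_t>(enc - 10); // recovers the original
        std::cout << int(b) << " -> " << int(enc) << " -> " << int(dec) << '\n';
    }
    return 0;
}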
In a simple console application I am trying to read a file containing a hex value on each line.
It works for the first few, but after 4 or 5 it starts outputting cdcdcdcd.
Any idea why this is? Is there a limit on using read in this basic manner?
The first value in the file is the number of values that follow.
#include <fstream>
#include <iostream>

int main()
{
    std::ifstream read("file.bin");
    int* data;
    try
    {
        data = new int[11398];
    }
    catch (int e) // note: new signals failure with std::bad_alloc, so this handler never fires
    {
        std::cout << "Error - dynamic array not created. Code: [" << e << "]\n";
    }
    int size = 0;
    read >> std::hex >> size;
    std::cout << std::hex << size << std::endl;
    for (int i = 0; i < size; i++)
    {
        read >> std::hex >> data[i];
        std::cout << std::hex << data[i] << std::endl;
    }
    return 0;
}
The values I get returned are:
576 (size)
1000323
2000000
1000005
cdcdcdcd
cdcdcdcd
cdcdcdcd
...
The first value that is meant to be output in cdcdcdcd's place is 80000000.
You are overflowing an int.
If you change to unsigned int, you will be able to read values all the way up to 0xFFFFFFFF.
You can check with:
std::cout << "Range of integer: "
<< std::numeric_limits<int>::max()
<< " <Value> "
<< std::numeric_limits<int>::min()
<< "\n";
std::cout << "Range of integer: "
<< std::numeric_limits<unsigned int>::max()
<< " <Value> "
<< std::numeric_limits<unsigned int>::min()
<< "\n";
Note: there are no negative hex values (hex notation is designed as a compact representation of a bit pattern).
You should really check that the read worked:
if (read >> std::hex >> data[i])
{
    // read worked
}
else
{
    // read failed
}
It sounds very much like your reads are failing.
Note that on a 32-bit int system, 0x80000000 is out of range for int. The range of valid values is probably -0x80000000 through to 0x7FFFFFFF.
It's important not to mix up values with representations. "0x80000000", when read via std::hex, means the positive integer written as 80000000 in base 16. It's neither here nor there that a particular negative integer may be stored internally in a signed int in two's complement with the same bit pattern an unsigned int has when it holds that positive value.
Consider reading into an unsigned int if you intend to use this technique. Also, it is essential that you check each read operation for success or failure: if a stream extraction fails, the stream is put into an error state, and all subsequent reads fail until you call .clear() on the stream.
NB. std::hex (like most stream manipulators, though not all - std::setw, for example, applies only to the next insertion) is "sticky": once you set it, it stays in effect until you specify std::dec to restore the default.
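A quick illustration of that stickiness:

#include <iostream>

int main() {
    std::cout << std::hex << 255 << '\n'; // ff
    std::cout << 255 << '\n';             // still ff -- hex is still in effect
    std::cout << std::dec << 255 << '\n'; // 255
    return 0;
}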
I'm working on a C++ class, and we're learning about the MD5 hashing function. I'm running into this issue however, where I do something like:
string input = "testInput";
unsigned char *toHash = new unsigned char[input.size()+1];
strcpy( (char*)toHash, input.c_str() );
unsigned char output[MD5_DIGEST_LENGTH];
MD5(toHash, input.size(), output);
cout << hex << output << endl;
But I always get some weird garbage characters instead of what I'm looking for (a long string of digits and letters). What's going on?
~ Very confused by low level C++
Don't be fooled by the "unsigned char" type of the array; all that means here is that each value in the array is an 8-bit unsigned integer. In particular, it doesn't imply that the data written to the array will be human-readable ASCII characters. Streaming the array with cout << output treats it as a NUL-terminated C string, which a raw digest is not - hence the garbage.
If you want to see the contents of the array in human-readable hex form, you could do this (instead of the cout statement):
for (int i=0; i< MD5_DIGEST_LENGTH; i++) printf("%02x ", output[i]);
printf("\n");
I need a function that returns the ASCII value of a character, including spaces, tabs, newlines, etc...
On a similar note, what is the function that converts between hexadecimal, decimal, and binary numbers?
char c;
int ascii = (int) c;
s2.data[j]=(char)count;
A char already is an integer - no need for conversion functions.
Maybe you are looking for functions that display integers as a string - using hex, binary or decimal representations?
You don't need a function to get the ASCII value - just convert to an integer by an (implicit) cast:
int x = 'A';  // x = 65
int y = '\t'; // y = 9
To convert a number to hexadecimal or decimal, you can use any of the members of the printf family:
char buffer[32]; // make sure this is big enough!
sprintf(buffer, "%d", 12345); // decimal: buffer is assigned "12345"
sprintf(buffer, "%x", 12345); // hex: buffer is assigned "3039"
There is no built-in function to convert to binary; you'll have to roll your own.
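One common workaround is std::bitset, which prints a value's bits directly (the width must be a compile-time constant):

#include <bitset>
#include <iostream>

int main() {
    int n = 12345;
    std::cout << std::bitset<16>(n) << '\n'; // prints 0011000000111001
    return 0;
}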
If you want to get the ASCII value of a character in your code, just put the character in single quotes:
char c = 'a';
You may be confusing internal representation with output. To see what value a character has:
char c = 'A';
cout << c << " has code " << int(c) << endl;
Similarly for hex values - hex is just another way of writing the same number, so again it's purely a question of output:
int n = 42;
cout << n << " in hex is " << hex << n << endl;
The "hex" in the output statement is a C++ manipulator. There are manipulators for hex and decimal (dec), but unfortunately not for binary.
As far as hex and binary go - those are just representations of integers. What you probably want is something like printf("%d", n) and printf("%x", n) - the first prints the decimal version, the second the hex version of the same number. Clarify what you are trying to do.