Why is my decrypted data formatted like this? - c++

I am currently working on a side project to learn how to use Crypto++ for encryption/decryption. For testing my project I was given the following values to help setup and validate that my project is working:
original string: "100000"
encrypted value: "f3q2PYciHlwmS0S1NFpIdA=="
key and iv: zero-filled byte arrays
key size: 24 bytes
iv size: 16 bytes
The project runs and decrypts the encrypted value okay, but instead of returning
"100000"
it returns
"1 0 0 0 0 0 "
where each space is really "\0". Here is my minimal code that I use for decryption:
#include "modes.h"
#include "aes.h"
#include "base64.h"
using namespace CryptoPP;
void main()
{
string strEncoded = "f3q2PYciHlwmS0S1NFpIdA==";
string strDecrypted;
string strDecoded;
byte abKey[24];
byte abIV[AES::BLOCKSIZE];
memset(abKey, 0, sizeof(abKey));
memset(abIV, 0, AES::BLOCKSIZE);
AES::Decryption cAESDecryption(abKey, sizeof(abKey));
CBC_Mode_ExternalCipher::Decryption cCBCDecryption(cAESDecryption, abIV);
StringSource(strEncoded, true, new Base64Decoder(new StringSink(strDecoded)));
StreamTransformationFilter cDecryptor(cCBCDecryption, new StringSink(strDecrypted));
cDecryptor.Put(reinterpret_cast<const byte*>(strDecoded.c_str()), strDecoded.size());
cDecryptor.MessageEnd();
}
I am okay with using the decrypted value as is, but what I need help understanding is why the decrypted value shows up as "1 0 0 0 0 0 " instead of "100000". By the way, this is built in VS2005 as a Windows console application with Crypto++ as a static library, and I am using Debug mode to look at the values.

Add a strHex string (HexEncoder comes from "hex.h"), and add the following lines after you decrypt the text:
StringSource ss2(strDecrypted, true, new HexEncoder(new StringSink(strHex)));
cout << strHex << endl;
You should see something similar to:
$ ./cryptopp-test.exe
310030003000300030003000
As @Maarten said, it looks like UTF-16LE without the BOM. My guess is the sample was created in .NET, and they are asking you to decrypt it in C++/Crypto++. I'm guessing .NET because it's UTF-16 and little-endian, while Java is UTF-16 and big-endian by default (IIRC).
You could also ask that they provide you with strings produced by Encoding.UTF8.GetBytes(...) on the .NET side. That would sidestep the issue, too.
So the value in strDecrypted is not really usable as a std::string of text. It's just a binary string that needs to be converted. For the conversion to UTF-8 (or another narrow character set), I believe you can use iconv. libiconv is built into glibc on GNU/Linux (IIRC), and it can be found in the lib directory on the BSDs.
If you are on Windows, then use the WideCharToMultiByte function.
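Below is a minimal sketch of that last step, assuming the decrypted bytes really are UTF-16LE; the helper name Utf16BytesToUtf8 is made up for illustration, and error handling is omitted:

#include <string>
#include <windows.h>

// Interpret the decrypted bytes as UTF-16LE and convert them to UTF-8.
std::string Utf16BytesToUtf8(const std::string& utf16Bytes)
{
    const wchar_t* src = reinterpret_cast<const wchar_t*>(utf16Bytes.data());
    const int srcLen = static_cast<int>(utf16Bytes.size() / sizeof(wchar_t));

    // First call asks for the required output size in bytes.
    int outLen = WideCharToMultiByte(CP_UTF8, 0, src, srcLen, NULL, 0, NULL, NULL);
    std::string utf8(outLen, '\0');
    WideCharToMultiByte(CP_UTF8, 0, src, srcLen, &utf8[0], outLen, NULL, NULL);
    return utf8;  // should be "100000" for the sample strDecrypted
}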

It's very probably just text encoded with the UTF-16LE or UCS-2LE character encoding, apparently without a Byte Order Mark (BOM). So to display the text you have to decode it first.

Related

Convert utf8 wstring to string on windows in C++

I am representing folder paths with boost::filesystem::path, which is a wstring on Windows, and I would like to convert it to std::string with the following method:
std::wstring_convert<std::codecvt_utf8_utf16<wchar_t>> conv1;
shared_dir = conv1.to_bytes(temp.wstring());
but unfortunately the result for the following path shows up garbled in the debugger, roughly like this:
"c:\git\myproject\bin\árvíztűrőtükörfúrógép" ->
"c:\git\myproject\bin\Ã¡rvÃ­ztÅ±rÅ‘tÃ¼kÃ¶rfÃºrÃ³gÃ©p"
What am I doing wrong?
#include <string>
#include <locale>
#include <codecvt>

int main()
{
    // wide character data
    std::wstring wstr = L"árvíztűrőtükörfúrógép";

    // wide to UTF-8
    std::wstring_convert<std::codecvt_utf8<wchar_t>> conv1;
    std::string str = conv1.to_bytes(wstr);
}
I was checking the value of the variable in visual studio debug mode.
The code is fine.
You're taking a wstring that stores UTF-16 encoded data, and creating a string that stores UTF-8 encoded data.
I was checking the value of the variable in visual studio debug mode.
Visual Studio's debugger has no idea that your string stores UTF-8. A string just contains bytes. Only you (and people reading your documentation!) know that you put UTF-8 data inside it. You could have put something else inside it.
So, in the absence of anything more sensible to do, the debugger just renders the string as ASCII*. What you're seeing is the ASCII* representation of the bytes in your string.
Nothing is wrong here.
If you were to output the string like std::cout << str, and if you were running the program in a command line window set to UTF-8, you'd get your expected result. Furthermore, if you inspect the individual bytes in your string, you'll see that they are encoded correctly and hold your desired values.
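For instance, a quick way to inspect those bytes is a small hex dump (just a sketch; dump_bytes is a made-up helper name):

#include <cstdio>
#include <string>

// Print each byte of the string in hex. For the UTF-8 encoding of
// "árvíztűrőtükörfúrógép" you should see pairs such as c3 a1 (á),
// c5 b1 (ű) and c5 91 (ő) among the plain ASCII bytes.
void dump_bytes(const std::string& s)
{
    for (unsigned char c : s)
        std::printf("%02x ", c);
    std::printf("\n");
}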
You can push the IDE to decode the string as UTF-8, though, on an as-needed basis: in the Watch window type str,s8; or, in the Command window, type ? &str[0],s8. These techniques are explored by Giovanni Dicanio in his article "What's Wrong with My UTF-8 Strings in Visual Studio?".
* It's not even really ASCII; it'll be some 8-bit encoding decided by your system, most likely the Windows-1252 code page given the platform. ASCII only defines the lower 7 bits. Historically, the various 8-bit code pages have been colloquially (if incorrectly) called "extended ASCII" in various settings. But the point is that the multi-byte nature of the data is not considered at all by the component rendering the string to your screen, let alone specifically its UTF-8-ness.

C++ output Unicode in variable

I'm trying to output a string containing Unicode characters, which is received via a curl call. Therefore, I'm looking for something similar to the u8 and L prefixes for string literals, but applicable to variables. E.g.:
const char *s = u8"\u0444";
However, what I have is a string variable containing Unicode characters, such as:
mit freundlichen Grüßen
When I want to print this string with:
cout << UnicodeString << endl;
it outputs:
mit freundlichen Gr??en
When I use wcout, it returns me:
mit freundlichen Gren
What am I doing wrong, and how can I achieve the correct output? I return the output with RapidJSON, which returns the string as:
mit freundlichen Gr��en
Important to note: the application is a CGI program running on Ubuntu, replying to browser requests.
If you are on Windows, what I would suggest is using Unicode UTF-16 at the Windows boundary.
It seems to me that on Windows with Visual C++ (at least up to VS2015) std::cout cannot output UTF-8-encoded text, but std::wcout correctly outputs UTF-16-encoded text.
This compilable code snippet correctly outputs your string containing German characters:
#include <fcntl.h>
#include <io.h>
#include <iostream>

int main()
{
    // Put stdout into UTF-16 mode so wcout can write Unicode to the console.
    _setmode(_fileno(stdout), _O_U16TEXT);

    // ü : U+00FC
    // ß : U+00DF
    const wchar_t* text = L"mit freundlichen Gr\u00FC\u00DFen";
    std::wcout << text << L'\n';
}
Note the use of a UTF-16-encoded wchar_t string.
On a more general note, I would suggest you using the UTF-8 encoding (and for example storing text in std::strings) in your cross-platform C++ portions of code, and convert to UTF-16-encoded text at the Windows boundary.
To convert between UTF-8 and UTF-16 you can use Windows APIs like MultiByteToWideChar and WideCharToMultiByte. These are C APIs that can be safely and conveniently wrapped in C++ code (more details can be found in this MSDN article, and you can find compilable C++ code here on GitHub).
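For example, here is a hedged sketch of such a wrapper for the UTF-8 to UTF-16 direction; the name Utf8ToUtf16 and the error-handling policy are just illustrative choices:

#include <stdexcept>
#include <string>
#include <windows.h>

// Convert a UTF-8 std::string to a UTF-16 std::wstring via MultiByteToWideChar.
std::wstring Utf8ToUtf16(const std::string& utf8)
{
    if (utf8.empty())
        return std::wstring();

    // First call: ask for the required length in wchar_t units.
    const int wideLen = MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                                            utf8.data(), static_cast<int>(utf8.size()),
                                            NULL, 0);
    if (wideLen == 0)
        throw std::runtime_error("MultiByteToWideChar failed (invalid UTF-8?)");

    std::wstring utf16(wideLen, L'\0');
    MultiByteToWideChar(CP_UTF8, MB_ERR_INVALID_CHARS,
                        utf8.data(), static_cast<int>(utf8.size()),
                        &utf16[0], wideLen);
    return utf16;
}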
On my system the following produces the correct output. Try it on your system. I am confident that it will produce similar results.
#include <string>
#include <iostream>
using namespace std;

int main()
{
    string s = "mit freundlichen Grüßen";
    cout << s << endl;
    return 0;
}
If it is ok, then this points to the web transfer not being 8-bit clean.
Mike.
containing unicode characters
You forgot to specify which Unicode encoding the string contains. There is the "narrow" UTF-8, which can be stored in a std::string and printed using std::cout, as well as wider variants, which can't. It is crucial to know which encoding you're dealing with. For the remainder of my answer, I'm going to assume you want to use UTF-8.
When I want to print this string with:
cout << UnicodeString << endl;
EDIT:
Important to note, the application is a CGI running on Ubuntu, replying on browser requests
The concerns here are slightly different from printing onto a terminal.
You need to set the Content-Type response header appropriately or else the client cannot know how to interpret the response. For example Content-Type: application/json; charset=utf-8.
You still need to make sure that the source string is in fact in the encoding declared by the header. See the old answer below for an overview.
The browser has to support the encoding. Most modern browsers have supported UTF-8 for a long time now.
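As a rough sketch of the first two points, a minimal CGI-style response could look like this (the JSON payload and field name are made up for the example):

#include <iostream>
#include <string>

int main()
{
    // CGI response: headers first, then a blank line, then the body.
    // charset=utf-8 tells the browser how to decode the body bytes.
    std::cout << "Content-Type: application/json; charset=utf-8\r\n\r\n";

    // The body must actually be UTF-8; here the bytes for ü (C3 BC) and
    // ß (C3 9F) are spelled out explicitly. The literal is split so that
    // the 'e' after \x9F is not swallowed by the hex escape.
    std::string body =
        "{\"closing\": \"mit freundlichen Gr\xC3\xBC\xC3\x9F" "en\"}";
    std::cout << body;
    return 0;
}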
Answer regarding printing to terminal:
Assuming that
1. UnicodeString indeed contains a UTF-8 encoded string,
2. the terminal uses the UTF-8 encoding, and
3. the font that the terminal uses has the graphemes that you use,
the above should work.
it outputs:
mit freundlichen Gr??en
Then it appears that at least one of the above assumptions doesn't hold.
You can verify whether 1. is true by inspecting the numeric value of each code unit separately and comparing it with what you would expect of UTF-8. If 1. isn't true, then you need to figure out which encoding the string actually uses, and either convert the encoding or configure the terminal to use that encoding.
The terminal typically, but not necessarily, uses the system's native encoding. The first step in figuring out what encoding your terminal / system uses is to figure out what terminal / system you are using in the first place. The details are probably in a manual.
If the terminal doesn't use UTF-8, then you need to convert the UTF-8 string within your program into the character encoding that the terminal does use - unless that encoding doesn't have the graphemes that you want to print. Unfortunately, the standard library doesn't provide arbitrary character-encoding conversion (there is some support for converting between narrow and wide Unicode, but even that support is deprecated). You can find the Unicode standard here, although I would like to point out that using an existing conversion implementation can save a lot of work.
In case the terminal's character encoding doesn't have the needed graphemes - or if you don't want to implement encoding conversion - the remaining option is to re-configure the terminal to use UTF-8. If the terminal / system can be configured to use UTF-8, there should be details in the manual.
You should be able to test whether the font itself has the required graphemes simply by typing the characters into the terminal and seeing if they show up as they should - although this test will also fail if the terminal encoding does not have them, so check that first. The manual of your terminal should explain how to change the font, should that be necessary. That said, I would expect ü and ß to exist in most fonts.

SHA256 in Crypto++ library

Let us focus on SHA256.
According to the following website,
http://www.fileformat.info/tool/hash.htm, the 'Binary hash' of 123 is 3d73c0...... and the 'String hash' of 123 is a665a4.......
I can obtain the 'String hash' using the Crypto++ library with the following code:
CryptoPP::SHA256 hash;
std::string digest;
CryptoPP::StringSource d1pk("123", true, new CryptoPP::HashFilter(hash, new CryptoPP::HexEncoder(new CryptoPP::StringSink(digest))));
std::cout << "digest : " << digest << std::endl;
How can I obtain the 'Binary hash' using the Crypto++ library?
The website you linked to is a hash tool that accepts its input either as a string or as bytes.
When you enter a string, it takes the bytes of that string and hashes them; the 'Binary hash' is no different, it simply accepts the data in another format, hexadecimal, and converts that to bytes before hashing.
This is my best explanation of what is going on, but I cannot be completely definitive without seeing their source.
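If you want Crypto++ to reproduce that behaviour, something like the following sketch should be close, assuming the site simply hex-decodes the text box before hashing (I cannot confirm how it pads odd-length input such as "123"):

#include <iostream>
#include <string>
#include "sha.h"
#include "hex.h"
#include "filters.h"

int main()
{
    using namespace CryptoPP;

    SHA256 hash;
    std::string digest;

    // Treat the input as hexadecimal, decode it to raw bytes, hash those
    // bytes, and hex-encode the resulting digest for display.
    StringSource ss("123", true,
        new HexDecoder(
            new HashFilter(hash,
                new HexEncoder(
                    new StringSink(digest)))));

    std::cout << "binary-style digest : " << digest << std::endl;
    return 0;
}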

I get "Invalid utf 8 error" when checking string, but it seems correct when i use std::cout

I am writing some code that must read UTF-8 encoded text files and send their contents to OpenGL.
I am also using a library that I downloaded from this site: http://utfcpp.sourceforge.net/
When I write the string down directly, I can show the right images in the OpenGL window:
std::string somestring = "abcçdefgğh";
// Convert string to utf32 encoding..
// I also set the locale on program startup.
But when I read the UTF-8 encoded string from a file:
The library warns me that the string does not have a valid UTF encoding.
I can't send the string that was read from the file to OpenGL; it crashes.
But I can still use std::cout for the string that I read from the file (it looks right).
I use this code to read from the file:
#include <fstream>
#include <string>

void something()
{
    std::ifstream ifs("words.xml");
    std::string readd;
    if (ifs.good()) {
        while (std::getline(ifs, readd)) {  // loop on getline rather than on !eof()
            // do something..
        }
    }
}
Now the questions are:
If the string read from the file is not correct, why does it look as expected when I check it with std::cout?
How can I get this issue solved?
Thanks in advance :)
The shell to which you write the output is probably rather robust against byte sequences it doesn't understand; it seems not all of the software you use is. It should, however, be relatively straightforward to verify whether your byte sequence is valid UTF-8, because the encoding itself is fairly simple:
each code point starts with a byte that encodes the number of bytes to be read and the first few bits of the code point:
if the high bit is 0, the code point consists of one byte, represented by the 7 lower bits
otherwise, the number of leading 1 bits represents the total number of bytes, followed by a zero bit (obviously), and the remaining bits become the high bits of the code point
since one byte is already accounted for, bytes with the high bit set and the next bit clear are continuation bytes: their lower 6 bits contribute the remaining bits of the code point
Based on these rules, there are two things which can go wrong and make the UTF-8 invalid:
a continuation byte is encountered at a point where a start byte is expected
a start byte indicated more continuation bytes than actually followed
I don't have code around which could indicate where things are going wrong, but it should be fairly straightforward to write such code.
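For what it's worth, here is a rough sketch of such a check, following the rules above; it reports the offset of the first offending byte and does not try to reject overlong encodings or surrogate code points:

#include <cstddef>
#include <string>

// Returns the index of the first byte that breaks the UTF-8 rules,
// or std::string::npos if the sequence is well-formed.
std::size_t first_invalid_utf8(const std::string& s)
{
    std::size_t i = 0;
    while (i < s.size()) {
        unsigned char b = static_cast<unsigned char>(s[i]);
        std::size_t extra;
        if      (b < 0x80)           extra = 0;  // 0xxxxxxx: single byte
        else if ((b & 0xE0) == 0xC0) extra = 1;  // 110xxxxx: one continuation byte
        else if ((b & 0xF0) == 0xE0) extra = 2;  // 1110xxxx: two continuation bytes
        else if ((b & 0xF8) == 0xF0) extra = 3;  // 11110xxx: three continuation bytes
        else return i;                           // stray continuation byte or invalid start
        if (i + extra >= s.size()) return i;     // start byte promises more bytes than remain
        for (std::size_t k = 1; k <= extra; ++k) {
            unsigned char c = static_cast<unsigned char>(s[i + k]);
            if ((c & 0xC0) != 0x80) return i + k;  // not a 10xxxxxx continuation byte
        }
        i += extra + 1;
    }
    return std::string::npos;
}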

UCS-2LE text file parsing

I have a text file which was created using some Microsoft reporting tool. The text file includes the BOM 0xFFFE at the beginning and then ASCII character output with NUL bytes between the characters (i.e. "F.i.e.l.d.1."). I can use iconv to convert this to UTF-8, using UCS-2LE as the input format and UTF-8 as the output format, and it works great.
My problem is that I want to read lines from the UCS-2LE file into strings, parse out the field values, and then write them out to an ASCII text file (i.e. Field1 Field2). I have tried both the string- and wstring-based versions of getline: they do read a line from the file, but functions like substr(start, length) treat the string as 8-bit values, so the start and length values are off.
How do I read the UCS-2LE data into a C++ string and extract the data values? I have looked at Boost and ICU, as well as done numerous Google searches, but have not found anything that works. What am I missing here? Please help!
My example code looks like this:
wifstream srcFile;
srcFile.open(argv[1], ios_base::in | ios_base::binary);
..
..
wstring srcBuf;
..
..
while (getline(srcFile, srcBuf))
{
    wstring field1;
    field1 = srcBuf.substr(12, 12);
    ...
    ...
}
So if, for example, srcBuf contains "W.e. t.h.i.n.k. i.n. g.e.n.e.r.a.l.i.t.i.e.s.", then the substr() above returns ".k. i.n. g.e" instead of "g.e.n.e.r.a.l.i.t.i.e.s.".
What I want is to read in the string and process it without having to worry about the multi-byte representation. Does anybody have an example of using Boost (or something else) to read these strings from the file and convert them to a fixed-width representation for internal use?
BTW, I am on a Mac using Eclipse and gcc. Is it possible my STL does not understand wide-character strings?
Thanks!
Having spent some good hours tackling this question, here are my conclusions:
Reading a UTF-16 (or UCS-2LE) file is manageable in C++11; see How do I write a UTF-8 encoded string to a file in Windows, in C++.
C++11 provides codecvt_utf16 in <codecvt> (a facility in the spirit of boost::locale), so one can just use that (see the bullet below for code samples, and a sketch at the end of this answer).
However, with older compilers (e.g. MSVC 2008), you can use a locale and a custom codecvt facet/"recipe", as very nicely exemplified in this answer to Writing UTF16 to file in binary mode.
Alternatively, one can also try this method of reading, though it did not work in my case: the output was missing lines, which were replaced by garbage characters.
I wasn't able to get this done with my pre-C++11 compiler and had to resort to scripting the task in Ruby and spawning a process to execute it (it is only used in tests, so I think that kind of complication is acceptable there).
Hope this spares others some time; happy to help.
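As a sketch of the C++11 route on a setup like the asker's (Mac, gcc): imbue the wide input stream with a codecvt_utf16 facet configured for little-endian data that consumes the BOM. The file name and the substr arguments below are placeholders:

#include <codecvt>
#include <fstream>
#include <iostream>
#include <locale>
#include <string>

int main()
{
    // Open in binary so the stream does not touch the raw bytes, then imbue a
    // UTF-16 little-endian facet that also consumes the leading FF FE BOM.
    // (std::codecvt_utf16 is C++11; it was later deprecated in C++17.)
    std::wifstream src("report.txt", std::ios::binary);
    src.imbue(std::locale(src.getloc(),
        new std::codecvt_utf16<wchar_t, 0x10ffff,
            static_cast<std::codecvt_mode>(std::little_endian | std::consume_header)>));

    std::wstring line;
    while (std::getline(src, line)) {
        // Each element of line is now one character, so substr() counts
        // characters rather than bytes.
        std::wcout << line.substr(0, 12) << L'\n';
    }
    return 0;
}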
substr works fine for me on Linux with g++ 4.3.3. The program
#include <string>
#include <iostream>
using namespace std;

int main()
{
    wstring s1 = L"Hello, world";
    wstring s2 = s1.substr(3, 5);
    wcout << s2 << endl;
}
prints "lo, w" as it should.
However, the file reading probably does something different from what you expect: it converts the file from the locale's encoding to wchar_t, which will cause each byte to become its own wchar_t. I don't think the standard library supports reading UTF-16 into wchar_t.