std::cin.read() fails to read stream - c++

I'm implementing a native messaging host for a browser extension. I designed my implementation around std::cin rather than C-style getchar().
The issue is that std::cin is not opened in binary mode, which matters on Windows hosts because Chrome doesn't handle Windows-style \r\n line endings well, so I have to read in binary mode.
To read in binary mode, I have to use _setmode(_fileno(stdin), _O_BINARY);
My IDE can't find a definition for _fileno, and I found that a workaround is to use the following macro:
#if !defined(_fileno)
#define _fileno(__F) ((__F)->_file)
#endif
However, I'm not confident about this macro. I believe something is wrong; I'm using the latest MinGW compiler and I'm not sure why _fileno isn't defined.
Update: it seems the declaration is hidden behind __STRICT_ANSI__, and I have no idea how to disable that.
Regardless, the program compiles fine and the browser starts it. When I send a message from the browser, the application is able to read the length of the message, but when it tries to read the content, the std::cin.read() call puts nothing into the buffer vector. The message is also not null-terminated, but I don't think that is causing the issue.
I also tried sending a dummy message to the browser without reading first, but that seems to freeze the browser.
#include <iostream>
#include <cstdio>
#include <string>
#include <vector>
#ifdef __WIN32
#include <fcntl.h>
#include <io.h>
#endif
#if !defined(_fileno)
#define _fileno(__F) ((__F)->_file)
#endif
enum class Platforms {
macOS = 1,
Windows = 2,
Linux = 3
};
Platforms platform;
#ifdef __APPLE__
constexpr Platforms BuildOS = Platforms::macOS;
#elif __linux__
constexpr Platforms BuildOS = Platforms::Linux;
#elif __WIN32
constexpr Platforms BuildOS = Platforms::Windows;
#endif
void sendMessage(std::string message) {
auto *data = message.data();
auto size = uint32_t(message.size());
std::cout.write(reinterpret_cast<char *>(&size), 4);
std::cout.write(data, size);
std::cout.flush();
}
int main() {
if constexpr(BuildOS == Platforms::Windows) {
// Chrome doesn't deal well with Windows style \r\n
_setmode(_fileno(stdin), _O_BINARY);
_setmode(_fileno(stdout), _O_BINARY);
}
while(true) {
std::uint32_t messageLength;
// First four bytes contain the message length
std::cin.read(reinterpret_cast<char*>(&messageLength), 4);
if (std::cin.eof())
{
break;
}
std::vector<char> buffer;
// Allocate ahead
buffer.reserve(std::size_t(messageLength) + 1);
std::cin.read(&buffer[0], messageLength);
std::string message(buffer.data(), buffer.size());
sendMessage("{type: 'Hello World'}");
}
}

Solution:
buffer.reserve(std::size_t(messageLength) + 1);
should be
buffer.resize(std::size_t(messageLength) + 1);
or we can presize the buffer during construction with
std::vector<char> buffer(messageLength + 1);
Problem Explanation:
buffer.reserve(std::size_t(messageLength) + 1);
reserves capacity but doesn't change the size of the vector, so
std::cin.read(&buffer[0], messageLength);
is undefined behavior (it indexes and writes into elements the vector doesn't have yet), and at
std::string message(buffer.data(), buffer.size());
buffer.size() is still 0, so message ends up empty.
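For reference, a minimal sketch of the corrected read, using the same four-byte length prefix and names as the original code:
std::uint32_t messageLength = 0;
std::cin.read(reinterpret_cast<char*>(&messageLength), 4);
if (!std::cin) {
    // EOF or read error: stop here
}
// sizing the vector (not just reserving) gives read() valid storage to write into
std::vector<char> buffer(messageLength);
std::cin.read(buffer.data(), messageLength);
std::string message(buffer.data(), buffer.size());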

Related

fopen returning NULL in gdb

I'm trying to solve a binary exploitation problem from picoCTF, but I'm having trouble with gdb.
Here is the source code of the problem (I've commented some stuff to help me).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <wchar.h>
#include <locale.h>
#define BUF_SIZE 32
#define FLAG_LEN 64
#define KEY_LEN 4
void display_flag() {
char buf[FLAG_LEN];
FILE *f = fopen("flag.txt","r");
if (f == NULL) {
printf("'flag.txt' missing in the current directory!\n");
exit(0);
}
fgets(buf,FLAG_LEN,f);
puts(buf);
fflush(stdout);
}
// loads value into key, a global variable, i.e. not on the stack
char key[KEY_LEN];
void read_canary() {
FILE *f = fopen("/problems/canary_3_257a2a2061c96a7fb8326dbbc04d0328/canary.txt","r");
if (f == NULL) {
printf("[ERROR]: Trying to Read Canary\n");
exit(0);
}
fread(key,sizeof(char),KEY_LEN,f);
fclose(f);
}
void vuln(){
char canary[KEY_LEN];
char buf[BUF_SIZE];
char user_len[BUF_SIZE];
int count;
int x = 0;
memcpy(canary,key,KEY_LEN); // copies "key" to canary, an array on the stack
printf("Please enter the length of the entry:\n> ");
while (x<BUF_SIZE) {
read(0,user_len+x,1);
if (user_len[x]=='\n') break;
x++;
}
sscanf(user_len,"%d",&count); // parses the length typed by the user into count
printf("Input> ");
read(0,buf,count); // reads count bytes to buf from stdin
// compares canary (variable on stack) to key
// if overwriting need to get the value of key and maintain it, i assume its constant
if (memcmp(canary,key,KEY_LEN)) {
printf("*** Stack Smashing Detected *** : Canary Value Corrupt!\n");
exit(-1);
}
printf("Ok... Now Where's the Flag?\n");
fflush(stdout);
}
int main(int argc, char **argv){
setvbuf(stdout, NULL, _IONBF, 0);
int i;
gid_t gid = getegid();
setresgid(gid, gid, gid);
read_canary();
vuln();
return 0;
}
When I run this normally, with ./vuln, I get normal execution. But when I open it in gdb with gdb ./vuln and then run it with run, I get the [ERROR]: Trying to Read Canary message. Is this something that is intended to make the problem challenging? I don't want the solution, I just don't know if this is intended behaviour or a bug. Thanks
I don't want the solution, I just don't know if this is intended behaviour or a bug.
I am not sure whether you'll consider it intended behavior, but it's definitely not a bug.
Your ./vuln is a set-gid program. As such, it runs as group canary_3 when run outside of GDB, but as your group when run under GDB (for obvious security reasons).
We can assume that the canary_3 group has read permission on canary.txt, but your group doesn't.
P.S. If you printed strerror(errno) (as the comments suggested), the resulting "Permission denied" would have made the failure obvious.
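To illustrate that P.S., here is a minimal sketch (not the challenge's code; only the strerror call is added to the existing error branch):
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
void read_canary_with_reason(void) {
    FILE *f = fopen("/problems/canary_3_257a2a2061c96a7fb8326dbbc04d0328/canary.txt", "r");
    if (f == NULL) {
        /* strerror(errno) names the actual cause, e.g. "Permission denied"
           when the set-gid privilege is dropped under GDB */
        printf("[ERROR]: Trying to Read Canary (%s)\n", strerror(errno));
        exit(0);
    }
    fclose(f);
}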

Best way to get exe folder path?

I found this in another forum, and it's supposed to give you the path. But I think this may not be the best way; I also think it results in a memory leak because the array is not deleted. Is this true?
Also, is this the best way? By "best way" I mean a cross-platform call (or, if that doesn't exist, a Windows one) that gives the folder path directly.
std::string ExePath()
{
using namespace std;
char buffer[MAX_PATH];
GetModuleFileName(NULL, buffer, MAX_PATH);
string::size_type pos = string(buffer).find_last_of("\\/");
if (pos == string::npos)
{
return "";
}
else
{
return string(buffer).substr(0, pos);
}
}
There is no memory leak in your code, but there are some issues with it:
it is Windows-specific,
it works with the local code page and does not support arbitrary Unicode file names.
Unfortunately, there is no standard way of accomplishing this task with just the C++ library, but here is code that will work on Windows and Linux and support Unicode paths as well. It also uses the std::filesystem library from C++17:
#include <filesystem>
#ifdef _WIN32
#include <windows.h>
#else
#include <unistd.h>
#endif
std::filesystem::path GetExeDirectory()
{
#ifdef _WIN32
// Windows specific
wchar_t szPath[MAX_PATH];
GetModuleFileNameW( NULL, szPath, MAX_PATH );
#else
// Linux specific
char szPath[PATH_MAX];
ssize_t count = readlink( "/proc/self/exe", szPath, PATH_MAX );
if( count < 0 || count >= PATH_MAX )
return {}; // some error
szPath[count] = '\0';
#endif
return std::filesystem::path{ szPath }.parent_path() / ""; // to finish the folder path with (back)slash
}
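A quick usage sketch, assuming it sits in the same file as GetExeDirectory above (the main() is only illustrative):
#include <iostream>
int main()
{
    // std::filesystem::path has an operator<< that prints the path quoted
    std::cout << GetExeDirectory() << '\n';
}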

Printing unicode character in C

I have a local-language font installed on my system (Windows 8). Through the Character Map tool in Windows, I found the Unicode code points for those characters in that particular font.
All I want is to print those characters on the command line from a C program.
For example: assume the Greek letter alpha is represented with the Unicode code "u+0074".
Taking "u+0074" as input, I would like my C program to output the alpha character.
Can anyone help me?
There are several issues. If you're running in a console window, I'd convert the text to UTF-8 and set the code page for the window to 65001. Alternatively, you can use wchar_t (which is UTF-16 on Windows), output via std::wostream, and set the code page to 1200. (According to the documentation I've found, at least. I've no experience with this, because my code has had to be portable, and on the other platforms I've worked on, wchar_t has been either some private 32-bit encoding or UTF-32.)
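A minimal sketch of the first suggestion (UTF-8 output with the console code page set to 65001); the literal below is my own example, and rendering still depends on the console font and CRT version:
#include <stdio.h>
#include <windows.h>
int main(void)
{
    // switch the console's output code page to UTF-8 (65001)
    SetConsoleOutputCP(CP_UTF8);
    // UTF-8 bytes for GREEK SMALL LETTER ALPHA (U+03B1)
    printf("\xCE\xB1\n");
    return 0;
}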
First you should set a TrueType font (e.g. Consolas) in the console's Properties. Then this code should suffice in your case:
#include <stdio.h>
#include <tchar.h>
#include <iostream>
#include <string>
#include <Windows.h>
#include <fstream>
//for _setmode()
#include <io.h>
#include <fcntl.h>
using namespace std;
int _tmain(int argc, _TCHAR* argv[])
{
TCHAR tch[2];
tch[0] = 0x03B1; // GREEK SMALL LETTER ALPHA
tch[1] = _T('\0'); // terminate so _tcslen() and %s work
// Test1 - WriteConsole
HANDLE hStdOut = GetStdHandle(STD_OUTPUT_HANDLE);
if (hStdOut == INVALID_HANDLE_VALUE) return 1;
DWORD dwBytesWritten;
WriteConsole(hStdOut, tch, (DWORD)_tcslen(tch), &dwBytesWritten, NULL);
WriteConsole(hStdOut, L"\n", 1, &dwBytesWritten, NULL);
_setmode(_fileno(stdout), _O_U16TEXT);
// Test2 - wprintf
_tprintf_s(_T("%s\n"),tch);
// Test3 - wcout
wcout << tch << endl;
wprintf(L"\x03B1\n");
if (wcout.bad())
{
_tprintf_s(_T("\nError in wcout\n"));
return 1;
}
return 0;
}
MSDN -
_setmode is typically used to modify the default translation mode of
stdin and stdout, but you can use it on any file. If you apply
_setmode to the file descriptor for a stream, call _setmode before performing any input or output operations on the stream.
Use the Unicode version of the WriteConsole function.
Also, be sure to store the source code as UTF-8 with a BOM, which is supported by both g++ and Visual C++.
Example, assuming that you want to present a Greek alpha given its Unicode code in the form "u+03B1" (the code you listed, u+0074, stands for a lowercase "t"):
#include <stdlib.h> // exit, EXIT_FAILURE, wcstol
#include <string> // std::wstring
using namespace std;
#undef UNICODE
#define UNICODE
#include <windows.h>
bool error( char const s[] )
{
::FatalAppExitA( 0, s );
exit( EXIT_FAILURE );
}
namespace stream_handle {
HANDLE const output = ::GetStdHandle( STD_OUTPUT_HANDLE );
} // namespace stream_handle
void write( wchar_t const* const s, int const n )
{
DWORD n_chars_written;
::WriteConsole(
stream_handle::output,
s,
n,
&n_chars_written,
nullptr // overlapped i/o structure
)
|| error( "WriteConsole failed" );
}
int main()
{
wchar_t const input[] = L"u+03B1";
wchar_t const ch = wcstol( input + 2, nullptr, 16 );
wstring const s = wstring() + ch + L"\r\n";
write( s.c_str(), s.length() );
}
In C there is the primitive type wchar_t, which defines a wide character. There are also corresponding wide-string functions, e.g. strcat -> wcscat. Of course it depends on the environment you are using. If you use Visual Studio, have a look here.
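A small illustration of those wide counterparts (the buffer size and strings are placeholders; the _setmode call mirrors the answer above so the console actually shows the character):
#include <wchar.h>
#include <stdio.h>
#include <io.h>      // _setmode
#include <fcntl.h>   // _O_U16TEXT
int main(void)
{
    /* put stdout into UTF-16 mode, as in the answer above */
    _setmode(_fileno(stdout), _O_U16TEXT);
    wchar_t buffer[64] = L"alpha: ";
    wcscat(buffer, L"\x03B1");    /* wide counterpart of strcat */
    wprintf(L"%ls\n", buffer);    /* %ls prints a wide string */
    return 0;
}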

How can I cin and cout some unicode text?

I'm asking for a code snippet that reads (cin) some Unicode text, concatenates another Unicode string to it, and then outputs (cout) the result.
P.S. This code will help me solve another, bigger problem with Unicode. But first, the key thing is to accomplish what I'm asking.
ADDED: BTW, I can't type any Unicode symbol on the command line when I run the executable. How should I do that?
I had a similar problem in the past; in my case imbue and sync_with_stdio did the trick. Try this:
#include <iostream>
#include <locale>
#include <string>
using namespace std;
int main() {
ios_base::sync_with_stdio(false);
wcin.imbue(locale("en_US.UTF-8"));
wcout.imbue(locale("en_US.UTF-8"));
wstring s;
wstring t(L" la Polynésie française");
wcin >> s;
wcout << s << t << endl;
return 0;
}
It depends on what kind of Unicode you mean. I assume you are just working with std::wstring, though. In that case use std::wcin and std::wcout.
For conversion between encodings you can use your OS functions, e.g. on Win32: WideCharToMultiByte and MultiByteToWideChar, or you can use a library like libiconv.
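For example, a rough sketch of UTF-8 to UTF-16 conversion with MultiByteToWideChar (error handling omitted; the helper name is mine):
#include <windows.h>
#include <string>
std::wstring Utf8ToWide(const std::string& utf8)
{
    // first call asks for the required length (including the terminator),
    // second call performs the actual conversion
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, nullptr, 0);
    std::wstring wide(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.c_str(), -1, &wide[0], len);
    wide.resize(len > 0 ? len - 1 : 0); // drop the embedded null terminator
    return wide;
}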
Here is an example that shows four different methods, of which only the third (C conio) and the fourth (native Windows API) work (but only if stdin/stdout aren't redirected). Note that you still need a font that contains the character you want to show (Lucida Console supports at least Greek and Cyrillic). Note that everything here is completely non-portable, there is just no portable way to input/output Unicode strings on the terminal.
#ifndef UNICODE
#define UNICODE
#endif
#ifndef _UNICODE
#define _UNICODE
#endif
#define STRICT
#define NOMINMAX
#define WIN32_LEAN_AND_MEAN
#include <iostream>
#include <string>
#include <cstdlib>
#include <cstdio>
#include <conio.h>
#include <windows.h>
void testIostream();
void testStdio();
void testConio();
void testWindows();
int wmain() {
testIostream();
testStdio();
testConio();
testWindows();
std::system("pause");
}
void testIostream() {
std::wstring first, second;
std::getline(std::wcin, first);
if (!std::wcin.good()) return;
std::getline(std::wcin, second);
if (!std::wcin.good()) return;
std::wcout << first << second << std::endl;
}
void testStdio() {
wchar_t buffer[0x1000];
if (!_getws_s(buffer)) return;
const std::wstring first = buffer;
if (!_getws_s(buffer)) return;
const std::wstring second = buffer;
const std::wstring result = first + second;
_putws(result.c_str());
}
void testConio() {
wchar_t buffer[0x1000];
std::size_t numRead = 0;
if (_cgetws_s(buffer, &numRead)) return;
const std::wstring first(buffer, numRead);
if (_cgetws_s(buffer, &numRead)) return;
const std::wstring second(buffer, numRead);
const std::wstring result = first + second + L'\n';
_cputws(result.c_str());
}
void testWindows() {
const HANDLE stdIn = GetStdHandle(STD_INPUT_HANDLE);
WCHAR buffer[0x1000];
DWORD numRead = 0;
if (!ReadConsoleW(stdIn, buffer, sizeof buffer / sizeof buffer[0], &numRead, NULL)) return; // length is in characters, not bytes
const std::wstring first(buffer, numRead - 2);
if (!ReadConsoleW(stdIn, buffer, sizeof buffer / sizeof buffer[0], &numRead, NULL)) return;
const std::wstring second(buffer, numRead);
const std::wstring result = first + second;
const HANDLE stdOut = GetStdHandle(STD_OUTPUT_HANDLE);
DWORD numWritten = 0;
WriteConsoleW(stdOut, result.c_str(), result.size(), &numWritten, NULL);
}
Edit 1: I've added a method based on conio.
Edit 2: I've messed around with _O_U16TEXT a bit as described in Michael Kaplan's blog, but that seemingly only had wgets interpret the (8-bit) data from ReadFile as UTF-16. I'll investigate this a bit further during the weekend.
If you have actual text (i.e., a string of logical characters), then insert to the wide streams instead. The wide streams will automatically encode your characters to match the bits expected by the locale encoding. (And if you have encoded bits instead, the streams will decode the bits, then re-encode them to match the locale.)
There is a lesser solution if you KNOW you have UTF-encoded bits (i.e., an array of bits intended to be decoded into a string of logical characters) AND you KNOW the target of the output stream is expecting that very same bit-format, then you can skip the decoding and re-encoding steps and write() the bits as-is. This only works when you know both sides use the same encoding format, which may be the case for small utilities not intended to communicate with processes in other locales.
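A minimal sketch of that "write the bytes as-is" case, assuming both the string literal and the terminal use UTF-8 (typical on Linux):
#include <iostream>
#include <string>
int main()
{
    // the bytes are already UTF-8; no decoding or re-encoding, just pass them through
    const std::string utf8 = "\xCE\xB1 la Polyn\xC3\xA9sie fran\xC3\xA7aise\n";
    std::cout.write(utf8.data(), utf8.size());
}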
It depends on the OS. If your OS understands UTF-8, you can simply send it UTF-8 sequences.