fopen returning NULL in gdb - c++

I'm trying to solve a binary exploitation problem from picoCTF, but I'm having trouble with gdb.
Here is the source code of the problem (I've commented some stuff to help me).
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <wchar.h>
#include <locale.h>

#define BUF_SIZE 32
#define FLAG_LEN 64
#define KEY_LEN 4

void display_flag() {
    char buf[FLAG_LEN];
    FILE *f = fopen("flag.txt", "r");
    if (f == NULL) {
        printf("'flag.txt' missing in the current directory!\n");
        exit(0);
    }
    fgets(buf, FLAG_LEN, f);
    puts(buf);
    fflush(stdout);
}

// loads the value into key, a global variable, i.e. not on the stack
char key[KEY_LEN];
void read_canary() {
    FILE *f = fopen("/problems/canary_3_257a2a2061c96a7fb8326dbbc04d0328/canary.txt", "r");
    if (f == NULL) {
        printf("[ERROR]: Trying to Read Canary\n");
        exit(0);
    }
    fread(key, sizeof(char), KEY_LEN, f);
    fclose(f);
}

void vuln() {
    char canary[KEY_LEN];
    char buf[BUF_SIZE];
    char user_len[BUF_SIZE];
    int count;
    int x = 0;
    memcpy(canary, key, KEY_LEN); // copies "key" to canary, an array on the stack
    printf("Please enter the length of the entry:\n> ");
    while (x < BUF_SIZE) {
        read(0, user_len + x, 1);
        if (user_len[x] == '\n') break;
        x++;
    }
    sscanf(user_len, "%d", &count); // parses the number the user typed into count
    printf("Input> ");
    read(0, buf, count); // reads count bytes from stdin into buf
    // compares canary (variable on stack) to key
    // if overwriting, need to get the value of key and keep it intact; I assume it's constant
    if (memcmp(canary, key, KEY_LEN)) {
        printf("*** Stack Smashing Detected *** : Canary Value Corrupt!\n");
        exit(-1);
    }
    printf("Ok... Now Where's the Flag?\n");
    fflush(stdout);
}

int main(int argc, char **argv) {
    setvbuf(stdout, NULL, _IONBF, 0);
    int i;
    gid_t gid = getegid();
    setresgid(gid, gid, gid);
    read_canary();
    vuln();
    return 0;
}
When I run this normally, with ./vuln, I get normal execution. But when I open it in gdb with gdb ./vuln and then run it with run, I get the [ERROR]: Trying to Read Canary message. Is this something that is intended to make the problem challenging? I don't want the solution, I just don't know whether this is intended behaviour or a bug. Thanks

I don't want the solution, I just don't know if this is intended behaviour or a bug.
I am not sure whether you'll consider it intended behavior, but it's definitely not a bug.
Your ./vuln is a set-gid program. As such, it runs as group canary_3 when run outside of GDB, but as your own group when run under GDB (for obvious security reasons).
We can assume that the canary_3 group has read permission on canary.txt, but you don't.
P.S. If you printed strerror(errno) (as the comments suggested), the resulting "Permission denied" message would have made the failure obvious.
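For example, a minimal sketch of that diagnostic applied to read_canary (the extra message text is mine):

#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void read_canary() {
    FILE *f = fopen("/problems/canary_3_257a2a2061c96a7fb8326dbbc04d0328/canary.txt", "r");
    if (f == NULL) {
        // under GDB this prints "Permission denied" instead of a bare message
        fprintf(stderr, "[ERROR]: Trying to Read Canary: %s\n", strerror(errno));
        exit(0);
    }
    // ... fread the key as before ...
    fclose(f);
}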

Related

Inconsistent encryption and decryption with OpenSSL RC4 in C++

First off, I understand that RC4 is not the safest encryption method and that it is outdated; this is just for a school project. Just thought I'd put it out there since people may ask.
I am working on using RC4 from OpenSSL to make a simple encryption and decryption program in C++. I noticed that the encryption and decryption is inconsistent. Here is what I have so far:
#include <fcntl.h>
#include <openssl/evp.h>
#include <openssl/rc4.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>
int main(int argc, char *argv[]) {
    int inputFile = open(argv[1], O_RDONLY);
    if (inputFile < 0) {
        printf("Error opening file\n");
        return 1;
    }
    unsigned char *keygen = reinterpret_cast<unsigned char*>(argv[2]);
    RC4_KEY key;
    size_t size = lseek(inputFile, 0, SEEK_END);
    lseek(inputFile, 0, SEEK_SET);
    unsigned char *fileIn = (unsigned char*) calloc(size, 1);
    if (pread(inputFile, fileIn, size, 0) == -1) {
        perror("Error opening read\n");
        return 1;
    }
    unsigned char *fileOut = (unsigned char*) calloc(size, 1);
    unsigned char *actualKey;
    EVP_BytesToKey(EVP_rc4(), EVP_sha256(), NULL, keygen, sizeof(keygen), 1, actualKey, NULL);
    RC4_set_key(&key, sizeof(actualKey), actualKey);
    RC4(&key, size, fileIn, fileOut);
    int outputFile = open(argv[3], O_WRONLY | O_TRUNC | O_CREAT, 0644);
    if (outputFile < 0) {
        perror("Error opening output file");
        return 1;
    }
    if (pwrite(outputFile, fileOut, size, 0) == -1) {
        perror("error writing file");
        return 1;
    }
    close(inputFile);
    close(outputFile);
    free(fileIn);
    free(fileOut);
    return 0;
}
The syntax for running this in Ubuntu is:
./myRC4 test.txt pass123 testEnc.txt
MOST of the time this works fine and encrypts and decrypts the file. However, occasionally I get a segmentation fault. When I do, I run the exact same command again and it encrypts or decrypts fine, at least for .txt files.
When I test on .jpg files, or any larger file, the issue seems to be more common and inconsistent. I notice that sometimes the images appear to have been decrypted (no segmentation fault) but in reality they have not, which I check by doing a diff between the original and the decrypted file.
Any ideas as to why I get these inconsistencies? Does it have to do with how I allocate memory for fileOut and fileIn?
Thank you in advance
actualKey needs to point to a buffer of appropriate size before you pass it to EVP_BytesToKey. As it is, you are passing in an uninitialised pointer, which would explain your inconsistent results.
The documentation for EVP_BytesToKey has this to say:
If data is NULL, then EVP_BytesToKey() returns the number of bytes needed to store the derived key.
So you can call EVP_BytesToKey once with the data parameter set to NULL to determine the length of actualKey, then allocate a suitable buffer and call it again with actualKey pointing to that buffer.
As others have noted, passing sizeof(keygen) to EVP_BytesToKey is also incorrect, since that is the size of a pointer rather than the length of the passphrase. You probably meant strlen(argv[2]).
Likewise, passing sizeof(actualKey) to RC4_set_key is an error for the same reason. Instead, you should pass the length returned by EVP_BytesToKey.
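Putting those three fixes together, a minimal sketch of the corrected key derivation (my own, using the same variable names as the question):

// first call with data == NULL just reports the derived key size
int keyLen = EVP_BytesToKey(EVP_rc4(), EVP_sha256(), NULL, NULL, 0, 1, NULL, NULL);
unsigned char *actualKey = (unsigned char *) malloc(keyLen);
// second call actually derives the key from the passphrase
EVP_BytesToKey(EVP_rc4(), EVP_sha256(), NULL, keygen, strlen(argv[2]), 1, actualKey, NULL);
RC4_set_key(&key, keyLen, actualKey); // the real key length, not sizeof a pointer

Remember to free(actualKey) when you are done with it.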

I cannot overflow the buffer

I have seen some buffer overflow code, but I cannot overflow it. Is there a gcc option I should compile it with, or is there something wrong with the code?
The code is:
#include <stdlib.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <err.h> // needed for errx()

int main(int argc, char **argv)
{
    volatile int modified;
    char buffer[64];
    if (argc == 1) {
        errx(1, "please specify an argument\n");
    }
    modified = 0;
    strcpy(buffer, argv[1]);
    if (modified == 0x61626364) {
        printf("you have correctly got the variable to the right value\n");
    } else {
        printf("Try again, you got 0x%08x\n", modified);
    }
}
and I am trying to run it this way:
perl -e 'print "A"x64 . "dcba"' | xargs ./main
You need to know:
1. The stack memory layout, i.e. the address difference between the variables modified and buffer. You can find it by computing the offset as (char *)&modified - (char *)buffer.
2. Your machine's endianness. I have used this Stack Overflow answer for that purpose.
The linked answer demonstrates how to run a modified version of the code that determines the correct argument, as well as the stack smashing itself. The first Demo provides you with the argument that you can feed to the second Demo.
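For illustration, here is a minimal sketch (my own, not the linked demos) that prints that offset, assuming the same variable layout as in the question; the value depends on compiler padding and ordering, and can even come out negative if the compiler places modified below buffer:

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    volatile int modified;
    char buffer[64];
    modified = 0;
    if (argc > 1) strcpy(buffer, argv[1]);
    // %td prints the ptrdiff_t distance between the two variables
    printf("offset = %td\n", (char *)&modified - (char *)buffer);
    return 0;
}

If the offset turns out to be 64, the payload from the question, perl -e 'print "A"x64 . "dcba"', writes the bytes 0x64 0x63 0x62 0x61 over modified, which a little-endian machine reads back as the int 0x61626364.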

c++ BOF vulnerability in this code, can someone please explain

What is the vulnerability in this code? Can someone please explain it to me?
#include <stdio.h>
#include <string.h>

int main(int argc, char* argv[])
{
    char buff[50];
    strcpy(buff, argv[1]);
    printf("You are string: %s", buff);
    return 0;
}
argv[1] may not exist, or it could be longer than 50 chars, and either one is a problem.
Solution: check that argc >= 2 and strlen(argv[1]) < 50.
Otherwise your program has undefined behaviour, which means it can do something strange and unexpected. On top of that, a malicious user may be able to inject their own code.
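For illustration, a minimal sketch of the program with those checks added (the usage message is my own):

#include <stdio.h>
#include <string.h>

int main(int argc, char* argv[])
{
    char buff[50];
    // reject missing or oversized input before copying
    if (argc < 2 || strlen(argv[1]) >= sizeof buff) {
        fprintf(stderr, "usage: %s <string of at most 49 chars>\n", argv[0]);
        return 1;
    }
    strcpy(buff, argv[1]);
    printf("You are string: %s", buff);
    return 0;
}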

working of fwrite in c++

I am trying to simulate race conditions in writing to a file. This is what I am doing:
1. Open a.txt in append mode in process1.
2. Write "hello world" in process1.
3. Print the ftell in process1, which is 11.
4. Put process1 to sleep.
5. Open a.txt again in append mode in process2.
6. Write "hello world" in process2 (this correctly appends to the end of the file).
7. Print the ftell in process2, which is 22 (correct).
8. Write "bye world" in process2 (this correctly appends to the end of the file).
9. process2 quits.
10. process1 resumes and prints its ftell value, which is still 11.
11. Write "bye world" from process1; I assume that since the ftell of process1 is 11, this should overwrite the file.
However, the write of process1 goes to the end of the file, and there is no contention in writing between the processes.
I am opening the file as fopen("./a.txt", "a+").
Can anyone tell me why this happens, and how I can simulate a race condition in writing to the file?
The code of process1:
#include <iostream>
#include <fstream>
#include <string>
#include <stdio.h>
#include <unistd.h> // for sleep()
#include "time.h"
using namespace std;

int main()
{
    FILE *f1 = fopen("./a.txt", "a+");
    cout << "opened file1" << endl;
    string data("hello world");
    fwrite(data.c_str(), sizeof(char), data.size(), f1);
    fflush(f1);
    cout << "file1 tell " << ftell(f1) << endl;
    cout << "wrote file1" << endl;
    sleep(3);
    string data1("bye world");
    cout << "wrote file1 end" << endl;
    cout << "file1 2nd tell " << ftell(f1) << endl;
    fwrite(data1.c_str(), sizeof(char), data1.size(), f1);
    cout << "file1 2nd tell " << ftell(f1) << endl;
    fflush(f1);
    return 0;
}
In process2, I have commented out the sleep statement.
I am using the following script to run:
./process1 &
sleep 2
./process2 &
Thanks for your time.
The writer code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCKSIZE 1000000

int main(int argc, char **argv)
{
    FILE *f = fopen("a.txt", "a+");
    char *block = malloc(BLOCKSIZE);
    if (argc < 2)
    {
        fprintf(stderr, "need argument\n");
        return 1; // bail out before touching argv[1]
    }
    memset(block, argv[1][0], BLOCKSIZE);
    for (int i = 0; i < 3000; i++)
    {
        fwrite(block, sizeof(char), BLOCKSIZE, f);
    }
    fclose(f);
}
The reader function:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define BLOCKSIZE 1000000

int main(int argc, char **argv)
{
    FILE *f = fopen("a.txt", "r");
    int c;
    int oldc = 0;
    int rl = 0; // length of the current run of identical characters
    while ((c = fgetc(f)) != EOF)
    {
        if (c != oldc)
        {
            if (rl)
            {
                printf("Got %d of %c\n", rl, oldc);
            }
            oldc = c;
            rl = 0;
        }
        rl++;
    }
    if (rl) // report the final run as well
    {
        printf("Got %d of %c\n", rl, oldc);
    }
    fclose(f);
}
I ran ./writefile A & ./writefile B, then ./readfile.
I got this:
Got 1000999424 of A
Got 999424 of B
Got 999424 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
Got 4096 of B
Got 4096 of A
Got 995328 of B
Got 995328 of A
As you can see, there are nice long runs of A and B, but they are not exactly 1000000 characters long, which is the size I wrote them in. The whole file, after a trial run with a smaller size, is just short of 7 GB.
For reference: Fedora Core 16, with my own compiled 3.7rc5 kernel, gcc 4.6.3, x86-64, and ext4 on top of lvm, AMD PhenomII quad core processor, 16GB of RAM
Writing in append mode is an atomic operation: the file position is moved to the end and the data is written in a single step. This is why it doesn't break.
Now... how to break it?
Try memory-mapping the file and writing into the memory from the two processes. I'm pretty sure this will break it.
I'm pretty sure you can't RELY on this behaviour, but it may well work reliably on some systems. Writing to the same file from two different processes is likely to cause problems sooner or later if you "try hard enough". And sod's law says that it will break exactly when your boss is checking whether the software works, when your customer takes delivery of the system you've sold, when you are finalizing the report that took ages to produce, or at some other important time.
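A minimal, untested sketch of that memory-mapping idea (my own; it assumes a.txt already exists and is at least 4096 bytes long):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("a.txt", O_RDWR);
    if (fd < 0) return 1;
    size_t len = 4096; // assume the file is at least this big
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;
    // plain stores into a shared mapping have no append-style atomicity,
    // so two racing copies of this program can interleave mid-write
    for (int i = 0; i < 100000; i++)
        memcpy(p + (i % (len - 11)), "hello world", 11);
    munmap(p, len);
    close(fd);
    return 0;
}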
The behavior you're trying to break depends on which OS you are working on, since writing to a file goes through a system call.
As for why the first file descriptor does not overwrite what the second process wrote: because you opened the file in append mode in both processes, the file position is moved to the end of the file before each write, regardless of what ftell reported.
Did you try to do the same with the standard open and write functions? That might be interesting as well.
EDIT: The C++ Reference doc explains the fopen append option here:
"append/update: Open a file for update (both for input and output) with all output operations writing data at the end of the file. Repositioning operations (fseek, fsetpos, rewind) affects the next input operations, but output operations move the position back to the end of file."
This explains the behavior you observed.
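For completeness, a minimal sketch of the open/write experiment suggested above (my own): with O_APPEND the kernel atomically repositions to the end of the file on every write, which mirrors what the quote describes for fopen's append mode; drop O_APPEND and each process writes at its own private offset, clobbering the other.

#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    // with O_APPEND, every write lands at the current end of file;
    // without it, this process's own offset is used and overwrites occur
    int fd = open("./a.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) return 1;
    write(fd, "hello world", 11);
    sleep(3);
    write(fd, "bye world", 9);
    close(fd);
    return 0;
}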

Why is calling ReadConsole in a loop corrupting the stack?

I've disabled line input with the following code:
DWORD dwConsoleMode;
GetConsoleMode(hStdIn, &dwConsoleMode);
dwConsoleMode ^= ENABLE_LINE_INPUT;
SetConsoleMode(hStdIn, dwConsoleMode);
Then I am calling ReadConsole in a loop...in a loop:
wchar_t cBuf;
while (1) {
    /* Display Options */
    do {
        ReadConsole(hStdIn, &cBuf, 1, &dwNumRead, NULL);
    } while (!iswdigit(cBuf));
    putwchar(cBuf);
    if (cBuf == L'0') break;
}
If I run the program and press 0 right away, it exits cleanly.
But if I press a bunch of keys and then press 0, when the program exits it crashes with:
Run-Time Check Failure #2 - Stack around the variable 'cBuf' was corrupted.
Why is this causing the stack to become corrupt? The code is simple, so I can't figure out what is wrong.
Little program that I can reproduce the problem with:
#include <windows.h>
#include <stdio.h>

int wmain(int argc, wchar_t *argv[])
{
    DWORD dwNumRead;
    wchar_t cBuf;
    HANDLE hStdIn = GetStdHandle(STD_INPUT_HANDLE);
    DWORD dwConsoleMode;
    GetConsoleMode(hStdIn, &dwConsoleMode);
    dwConsoleMode ^= ENABLE_LINE_INPUT;
    SetConsoleMode(hStdIn, dwConsoleMode);
    while (true)
    {
        wprintf(L"\nEnter option: ");
        do {
            ReadConsoleW(hStdIn, &cBuf, 1, &dwNumRead, NULL);
        } while (!iswdigit(cBuf));
        putwchar(cBuf);
        if (cBuf == L'0') break;
    }
    return 0;
}
You have to kind of mash your keyboard after you run it, then press 0, and it crashes with the stack corruption.
I also can't reproduce the problem every time; it takes a few tries.
I was running it under Visual Studio 2010, after creating a new empty console project and adding a file with that code.
As far as I can tell, this is a bug in Windows. Here is a slightly simpler program that demonstrates the problem:
#include <windows.h>
#include <crtdbg.h>

int wmain(int argc, wchar_t *argv[])
{
    DWORD dwNumRead;
    wchar_t cBuf[2];
    cBuf[0] = cBuf[1] = 65535;
    HANDLE hStdIn = GetStdHandle(STD_INPUT_HANDLE);
    SetConsoleMode(hStdIn, 0);
    while (true)
    {
        _ASSERT(ReadConsoleW(hStdIn, &cBuf[0], 1, &dwNumRead, NULL));
        _ASSERT(dwNumRead == 1);
        _ASSERT(cBuf[1] == 65535);
        Sleep(5000);
    }
}
The sleep makes it a bit easier to trigger the issue, which occurs whenever more than one character is waiting at the time you call ReadConsoleW.
Looking at the content of cBuf[1] at the time the relevant assertion fails, it appears that ReadConsoleW is writing one extra byte at the end of the buffer.
The workaround is straightforward: make sure your buffer has at least one extra byte. In your case, use the first character of a two-character array.
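Applied to the program from the question, a minimal sketch of that workaround (only the buffer declaration and the uses of it change):

#include <windows.h>
#include <stdio.h>
#include <wctype.h>

int wmain(int argc, wchar_t *argv[])
{
    DWORD dwNumRead;
    wchar_t cBuf[2]; // second element absorbs the stray extra byte
    HANDLE hStdIn = GetStdHandle(STD_INPUT_HANDLE);
    DWORD dwConsoleMode;
    GetConsoleMode(hStdIn, &dwConsoleMode);
    dwConsoleMode ^= ENABLE_LINE_INPUT;
    SetConsoleMode(hStdIn, dwConsoleMode);
    while (true)
    {
        wprintf(L"\nEnter option: ");
        do {
            ReadConsoleW(hStdIn, &cBuf[0], 1, &dwNumRead, NULL);
        } while (!iswdigit(cBuf[0]));
        putwchar(cBuf[0]);
        if (cBuf[0] == L'0') break;
    }
    return 0;
}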